The Large Synoptic Survey Telescope (LSST) is an 8.4 meter telescope with a 10 square degree field of view and a 3 Gigapixel imager, planned to be on-sky in 2012. It is a dedicated all-sky survey instrument with several complementary science missions. These include understanding dark energy through weak lensing and supernovae; exploring transients and variable objects; creating and maintaining a solar system map, with particular emphasis on potentially hazardous objects; and increasing the precision with which we understand the structure of the Milky Way. The instrument operates continuously at a rapid cadence, repetitively scanning the visible sky every few nights.

The data flow rates from LSST are larger than those from current surveys by roughly a factor of 1000: a few GB/night are typical today; LSST will deliver a few TB/night. From a computing hardware perspective, this factor of 1000 can be dealt with easily in 2012. The major issues in designing the LSST data management system arise from the fact that the number of people available to critically examine the data will not grow from current levels. This has a number of implications.

For example, every large imaging survey today is resigned to the fact that its image reduction pipelines fail at some significant rate. Many of these failures are dealt with by rerunning the reduction pipeline under human supervision, with carefully "tweaked" parameters to deal with the original problem. For LSST, this will no longer be feasible. The problem is compounded by the fact that the processing must of necessity occur on clusters with large numbers of CPUs and disk drives, with some components connected by long-haul networks. This inevitably results in a significant rate of hardware component failures, which can easily lead to further software failures. Both hardware and software failures must be seen as a routine fact of life rather than rare exceptions to normality.
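To make the last point concrete, the kind of unsupervised recovery implied here can be sketched as a reduction step that automatically retries with fallback parameters and quarantines hopeless inputs rather than halting. This is purely an illustrative sketch, not the actual LSST pipeline; all function names, parameters, and thresholds below are hypothetical.

```python
# Illustrative sketch (not the actual LSST pipeline): automatically retry a
# reduction step with progressively relaxed parameters, so routine failures
# are handled without human supervision. All names here are hypothetical.

def reduce_image(image, detection_threshold):
    """Stand-in for a reduction step that fails on difficult inputs."""
    if image["noise"] > detection_threshold:
        raise RuntimeError("source detection failed")
    return {"image": image["name"], "threshold": detection_threshold}

# Ordered fallback parameter sets, mimicking the hand "tweaking" a human
# operator would otherwise perform for each failed exposure.
FALLBACK_THRESHOLDS = [5.0, 10.0, 20.0]

def reduce_with_retries(image, thresholds=FALLBACK_THRESHOLDS):
    last_error = None
    for threshold in thresholds:
        try:
            return reduce_image(image, threshold)
        except RuntimeError as err:  # treat failure as routine, not fatal
            last_error = err
    # After exhausting fallbacks, flag the exposure for later inspection
    # instead of halting the whole production run.
    return {"image": image["name"], "status": "quarantined",
            "error": str(last_error)}

clean = reduce_with_retries({"name": "exp001", "noise": 3.0})
hard = reduce_with_retries({"name": "exp002", "noise": 15.0})
bad = reduce_with_retries({"name": "exp003", "noise": 99.0})
```

The design point is that every outcome, including total failure, yields a recorded result the downstream system can act on, which is what treating failures as "a routine fact of life" amounts to in practice.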