Light intensities (photons/s/µm²) in a natural scene vary over several orders of magnitude, from shady woods to direct sunlight. A major challenge facing the visual system is how to map such a large dynamic input range onto its limited output range, so that the signal is neither buried in noise in darkness nor saturated in bright light. A fly photoreceptor achieves such a large dynamic range: it can encode intensity changes from single photons to billions of photons, outperforming man-made light sensors. This performance requires powerful light adaptation, the neural implementation of which has only recently become clearer. A computational fly photoreceptor model, which mimics the real phototransduction processes, has elucidated how light adaptation arises dynamically through stochastic adaptive quantal information sampling. A Drosophila R1-R6 photoreceptor's light sensor, the rhabdomere, has 30,000 microvilli, each of which stochastically samples incoming photons. Each microvillus employs a full G-protein-coupled receptor (GPCR) signalling pathway to adaptively transduce photons into quantum bumps (QBs, or samples). QBs then sum to form the macroscopic photoreceptor response, governed by four quantal sampling factors (limitations): (1) the number of photon sampling units in the cell structure (microvilli); (2) sample size (QB waveform); (3) latency distribution (the delay between photon arrival and the emergence of a QB); and (4) refractory period distribution (the time a microvillus needs to recover after producing a QB). Here, we review how these factors jointly orchestrate light adaptation over a large dynamic range.
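The sampling scheme described above can be illustrated with a toy simulation. The sketch below is not the authors' computational model; it is a minimal, hypothetical illustration of two of the four sampling factors (a finite pool of microvilli and a refractory period), with made-up parameter values (latency, refractory time) chosen only for demonstration. Each photon lands on a random microvillus; an available microvillus produces one QB and then becomes refractory, so the summed QB count grows sub-linearly with light intensity, a simple form of light adaptation.

```python
import random

def simulate_response(photon_rate, n_microvilli=30_000, duration_ms=1000.0,
                      latency_ms=15.0, refractory_ms=100.0, seed=0):
    """Toy quantal-sampling simulation (illustrative parameters only).

    photon_rate is in photons per millisecond for the whole rhabdomere.
    Each photon hits a random microvillus; a non-refractory microvillus
    emits one quantum bump (QB) and is then unavailable for
    latency_ms + refractory_ms. Returns the total QB count, a crude
    proxy for the macroscopic response over the simulated period.
    """
    rng = random.Random(seed)
    next_free = [0.0] * n_microvilli   # time at which each microvillus recovers
    bumps = 0
    t = 0.0
    while True:
        # Poisson photon stream: exponential inter-arrival times
        t += rng.expovariate(photon_rate)
        if t >= duration_ms:
            break
        m = rng.randrange(n_microvilli)  # photon lands on a random microvillus
        if t >= next_free[m]:            # microvillus available: sample the photon
            bumps += 1
            next_free[m] = t + latency_ms + refractory_ms

    return bumps

# A 10,000-fold increase in photon rate yields far less than a
# 10,000-fold increase in QBs: the microvillus pool saturates.
dim = simulate_response(photon_rate=0.1)       # ~100 photons/s
bright = simulate_response(photon_rate=1000.0)  # ~1e6 photons/s
```

Because each microvillus can fire at most once per latency-plus-refractory window, the response to bright light is compressed toward the pool's maximum throughput, which is the essence of sampling-limited adaptation.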