When our eyes follow an object that moves in space, a number of very complex control systems operate in order to maintain a sharp image at a specific retinal location while keeping the overall retinal illumination more or less constant. In the eye movement control system, it is the distance between the retinal image (or the area of interest in this image) and the fovea that is minimized. Therefore, the object we fixate with our eyes is always imaged onto the part of the retina with the highest visual acuity. In the pupillary control system, it is the average light level that is measured and kept within comfortable limits by changing the pupil diameter. Finally, in the accommodation control system, certain image errors are evaluated (at present, we do not know exactly which ones) and reduced by changing the lens power.

In all these control systems, the retina plays a crucial role as the first stage of the error detector. However, the retina is a far more complex structure than the familiar photosensitive surfaces in physical light detectors. There are relay stations (bipolar cells, ganglion cells) in each signal channel between a photoreceptor and an optic nerve fiber, and abundant anatomical and physiological evidence exists for interactions between signal channels at various levels in the retina (horizontal cells, amacrine cells). These facts have to be kept in mind whenever the detector properties of the eye are discussed.

Another complication arises from the fact that we can never define the output of the system exactly. In a photocell it would be a single variable, for example, the cathode photocurrent. In the human eye, it is an already coded signal that is transmitted over approximately 10^6 optic nerve fibers to higher centers of the central nervous system. It is impossible at the present time to monitor simultaneously the signals transmitted over large numbers of optic nerve fibers.
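The shared structure of these loops (an error signal derived from the retinal image, fed back to an effector that reduces it) can be illustrated with a minimal sketch of the pupillary loop. The Python model below is a deliberately simplified discrete-time proportional controller; all function names, gains, limits, and units are hypothetical illustrations rather than values taken from the text.

def retinal_illumination(luminance, pupil_diameter_mm):
    # Retinal illumination grows with pupil area, i.e. with diameter squared.
    pupil_area = 3.14159 * (pupil_diameter_mm / 2.0) ** 2
    return luminance * pupil_area

def simulate_pupil_loop(luminance, target_illumination, d0=4.0, gain=0.01, steps=200):
    # The "retina" acts as the error detector: it compares the current
    # illumination with a comfortable level, and the pupil diameter is
    # adjusted to reduce that error (negative feedback), within limits.
    d = d0
    for _ in range(steps):
        error = retinal_illumination(luminance, d) - target_illumination
        d -= gain * error                 # corrective change of pupil diameter
        d = min(max(d, 2.0), 8.0)         # the pupil cannot shrink or widen indefinitely
    return d

# Brighter scenes settle at smaller steady-state pupil diameters, dimmer scenes at larger ones.
for lum in (5.0, 20.0, 80.0):
    print(lum, round(simulate_pupil_loop(lum, target_illumination=150.0), 2))

The fixation and accommodation loops follow the same pattern, with the minimized quantity being the distance of the image from the fovea and the (as yet unidentified) image error, respectively, rather than the average light level.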