Open Access
Perifoveal retinal augmented reality on contact lenses
10 November 2021
Vincent Nourrit, Yoran-Eli Pigeon, Kevin J. Heggarty, Jean-Louis M. de Bougrenet de la Tocnaye
Abstract

We present an innovative augmented reality (AR) display that projects information onto the perifovea. Our system uses a contact lens embedding switchable light sources and various diffractive optical elements (DOEs). The DOEs are located at the iris periphery, keeping the visual axis free of any disturbance while allowing AR content to be projected onto the perifovea. The use of DOEs limits the quantity of displayable information, for instance to a few warning symbols, but allows the design of more easily manufacturable elements. A proof of concept using a mock-up eye (scale 2:1) to assess the impact of laser injection and mydriasis on image reconstruction quality is presented, together with a video showing how the system operates dynamically.

1.

Introduction

Technological advances have brought information ever closer to the user (books, computers, laptops, smartphones), culminating in today's see-through augmented reality (AR) systems.1 In the last decade, advances in nanotechnology and flexible electronics have made it possible to consider a further step: the transition from AR near-eye displays to "contact lens displays" (CLDs). Several systems have been proposed to offer retinal AR (RAR).2–4 By CLD, we refer here to systems fully integrating the image generator into the contact lens. Other designs combining a contact lens with eyewear that integrates part of the image generator exist,5 but the alignment issues between the projector and the moving eye's pupil strongly limit their field of view (FOV).

Since the first single-pixel CLD from the Parviz group,6,7 a number of papers have been published addressing the many challenges that need to be overcome to produce a fully functional CLD, such as power management (e.g., battery and energy harvesting8), biocompatibility, mechanical and electrical integration, and display technology (e.g., using liquid crystal modulation9 or micro- or nano-scale light-emitting components10,11). Consider, for instance, Chen et al.'s design10: according to the authors, "the minimal numbers of pixels for the non-foveated and foveated displays are 15.38 and 3.20 megapixels," far from what is achievable today (assuming a foveated LED array, this is the number of LEDs required to yield an angular resolution of 1' over a FOV of 100 deg, essentially equating the number of LEDs to the number of photoreceptors).

However, the most puzzling and controversial aspect is that the display is usually centered on the visual axis, which can strongly impact comfort and safety (e.g., reduced transparency, scattering of natural light by the display, a faulty display becoming opaque, and unwanted superimposition of the displayed information on the direction of gaze). Moreover, studies on multitask performance and visual attention12,13 suggest that visual information is processed along two parallel channels: focal vision (for form recognition and identification) and ambient vision (for visual guidance and motor control). Displaying AR information on the fovea will thus increase the foveal cognitive load, which has been shown to have a negative impact on target detection.14

This is why we decided to exploit the para- and perifoveal areas of the retina and keep the pupil free of optical elements, although this choice limits the AR capabilities and the nature of the information displayable on the retina.

The retina can be divided into several concentric zones, with the fovea in the center (0 deg to 5 deg), surrounded by the parafoveal belt (5 deg to 8 deg) and the perifovea (8 deg to 18 deg). Visual performance (e.g., acuity and contrast sensitivity) tends to decrease away from the fovea due to lower cone and ganglion-cell densities, but the parafovea can be used to determine the gist of a scene well enough for a categorization task.15 Peripheral vision is also more sensitive to specific stimuli16 and plays an important role in object detection.17 In addition, the brain processes information coming from different regions of the visual field differently,18 and studies suggest that some perception tasks could begin in parafoveal vision in advance of foveal fixation (e.g., reading, processing of emotional visual scenes19).

Projecting warning symbols onto the peripheral retina is thus of interest, as it could trigger a faster reaction, independently of where visual attention is directed.20 In addition, simple information such as warning symbols does not need to be displayed at high resolution, in agreement with the lower resolution of the perifovea. Placing the image generator off the visual axis to project onto the perifovea thus addresses the issue of visual axis obstruction and, to some extent, manufacturability (since no high-resolution display is required). In this context, we present an innovative perifoveal retinal AR (PRAR) system on a contact lens to project simple stimuli (e.g., warning symbols) onto the peripheral retina. (By extension, we use the term perifovea here to encompass both the narrow parafovea and the perifovea.)

The PRAR design is presented in detail in the next section, followed by an experimental validation at scale 2:1.

2.

Parafoveal Retinal Augmented Reality Imaging Principle

The principle of our perifoveal retinal AR device is as follows. The contact lens incorporates a ring (mainly covering the iris) made up of several DOEs (here, four) (cf. Fig. 1). Illuminating a DOE with a laser (also embedded in the contact lens) forms an image, generated by the holographic DOE, in the retinal plane. The illumination parameters (beam size, angle of incidence, etc.) are easily adjustable during the design phase.

Fig. 1

Principle of the optical PRAR system. (a) Top view of the contact lens with an embedded DOE, the reflective layer and the associated VCSEL (only one laser and one DOE are represented but several could be embedded into the same contact lens). (b) Schematic representation of the image formation.

OE_60_11_115103_f001.png

To comply with the limited contact lens thickness, which precludes stacking several optical elements, the VCSEL is placed alongside the DOE, and a light guide, made of two reflective annular layers (one on each side of the lens), is used to guide the laser light to the DOE. This guide could also embed additional optical functions, e.g., to focus the laser beam.

The internal diameter of the DOE ring is chosen to keep the pupil free of any optical element while facilitating image projection. As a result, DOE efficiency is affected by mydriasis. When the pupil is fully open, each DOE can be fully illuminated; when the pupil is partially closed, part of the light going through the DOE is blocked. Because DOEs are usually made of periodic patterns, the hologram-generated figure is still imaged on the retina; the only difference is a reduction in intensity and, to some extent, resolution. The pupil diameter range over which good viewing is achieved will depend on several parameters, such as the laser direction or the DOE pattern, but for practical reasons it should extend down to at least 4 mm.
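The robustness of the reconstruction to pupil clipping can be illustrated with a minimal one-dimensional far-field simulation (our own sketch, not the authors' model): clipping a periodic phase grating leaves the diffracted orders at the same angles and mainly scales down their intensity.

```python
import numpy as np

# Illustrative 1-D sketch: a periodic phase grating clipped by a pupil
# still diffracts to the same far-field positions; clipping mainly
# reduces the diffracted intensity (here, intensity ~ open area squared).
N = 4096                                           # samples across the aperture plane
x = np.arange(N)
grating = np.exp(1j * np.pi * ((x // 16) % 2))     # binary 0/pi phase grating, period 32

def far_field_peak(open_fraction):
    """First-order far-field peak intensity for a partially open pupil."""
    aperture = np.zeros(N)
    aperture[: int(N * open_fraction)] = 1.0       # pupil blocks part of the DOE
    intensity = np.abs(np.fft.fft(aperture * grating)) ** 2
    intensity[0] = 0.0                             # ignore the residual zeroth order
    return intensity.max()

full = far_field_peak(1.0)
half = far_field_peak(0.5)
print(half / full)   # ~0.25: half the open area -> about a quarter of the peak intensity
```

The diffracted order stays in place because its angle is set by the grating period, not by the aperture; only the energy (and, through order broadening, some resolution) is lost, consistent with the behavior described above.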

The choice of the light source depends on several constraints: DOEs generally require a coherent source of limited divergence; the wavelength should ideally correspond to the sensitivity of the photoreceptors in the perifovea (e.g., ~498 nm for rods or 534/564 nm for M/L cones); and the light source should fit within the lens. In practice, the choice of the optimum wavelength will depend on a number of factors. Rods are more sensitive than cones, and their density increases toward the perifovea while the density of cones sharply decreases. On the other hand, longer wavelengths may be preferred for safety reasons and component availability. Due to their coherence, directionality, and reduced size compared with LEDs, vertical-cavity surface-emitting lasers (VCSELs) are of particular interest. VCSELs emitting at 680 nm exist21 but are difficult to obtain. For the present demonstration, we therefore used a semiconductor laser at 655 nm, even though it does not correspond to the photoreceptors' peak sensitivity.

Another important aspect to consider is the eccentricity at which the image is projected, since visual acuity strongly decreases with eccentricity. The visual axis makes an angle of approximately 3 deg to 5 deg horizontally and 2 deg to 3 deg vertically with the eye's optical axis (cf. Fig. 1). In our design, we chose to project the holographic image 10 deg away from the fovea to avoid perturbing central vision, i.e., 12 deg to 15 deg from the eye's optical axis. At such eccentricity, the neural resolution is significantly reduced compared with the fovea,22 so the smallest details in the holographic image should be at least 48 μm across (corresponding approximately to a visual acuity of 3 cpd), hence a relatively large retinal projection.
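The 48-μm figure can be checked with a back-of-the-envelope conversion, assuming the common emmetropic-eye approximation of roughly 288 μm of retina per degree of visual angle:

```python
# Convert a resolvable spatial frequency into a minimum retinal feature size.
# Assumption: ~288 um of retina per degree of visual angle (standard eye model).
UM_PER_DEG = 288.0          # retinal extent of 1 deg of visual angle
acuity_cpd = 3.0            # resolvable spatial frequency at ~10 deg eccentricity

half_cycle_deg = 1.0 / (2.0 * acuity_cpd)   # smallest resolvable bar, in degrees
detail_um = half_cycle_deg * UM_PER_DEG
print(detail_um)  # 48.0 um, matching the minimum feature size quoted above
```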

3.

Optical Imaging System Design

As previously stated, image formation in our PRAR system may be affected by three factors. First, the laser light must be correctly guided to the DOE and then to the targeted retinal zone. Second, since the image generator is off the visual axis, mydriasis will affect image formation. Third, the DOE forms an image in the Fourier plane, so assuming the laser light is correctly focused to form an image in the retinal plane, accommodation may degrade the image.

To validate the concept experimentally, we built a 2:1 scaled eye model in which the cornea is a 20D plano-convex lens (Thorlabs LA1131) and the crystalline lens a 10D biconvex lens (Thorlabs LB1676). The cornea-to-pupil and lens-to-retina distances are 6.6 and 32 mm, respectively. A diaphragm can be inserted in front of the crystalline lens (touching it) to assess the influence of the pupil on image formation. In place of the retina, we used a digital camera (Sony alpha-NEX 5, 5.07-μm pixel pitch). Our "eye model" is obviously very simple compared with a real eye (for instance, we project the image onto a flat sensor whereas the retina is curved), but it reproduces the main elements that matter for the proof of concept (pupil, crystalline lens, and sensor position).
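As a plausibility check (our own sketch, not part of the published set-up), a paraxial vergence trace through the two lens powers, treated as thin lenses in air, puts the focus for collimated light near the stated lens-to-retina distance; the residual defocus is consistent with the model being built from thick catalog lenses and is compensated elsewhere in the design.

```python
# Paraxial vergence trace through the 2:1 eye model.
# Assumptions: thin lenses in air; the real Thorlabs lenses are thick,
# so this is only an order-of-magnitude consistency check.
P_cornea = 20.0    # D, plano-convex "cornea" (Thorlabs LA1131)
P_lens = 10.0      # D, biconvex "crystalline lens" (Thorlabs LB1676)
d = 0.0066         # m, cornea-to-lens separation (6.6 mm, pupil touching the lens)

V = P_cornea                  # vergence of collimated light after the cornea
V = 1.0 / (1.0 / V - d)      # propagate 6.6 mm to the crystalline lens
V += P_lens                   # refraction by the crystalline lens
bfd_mm = 1000.0 / V           # lens-to-focus distance, in mm

print(round(bfd_mm, 1))  # ~30.3 mm, of the same order as the 32 mm lens-to-sensor distance
```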

Since the light source is embedded within the contact lens, the cornea can be considered not to affect light propagation. Indeed, the refractive power of the interfaces before the crystalline lens will have little impact on beam propagation, as the differences in refractive index are small and the VCSEL spot size is small (so the surfaces show little curvature at this scale). The laser light is therefore focused only by the crystalline lens. However, such an eye would not be able to form a focused retinal image of the outside world, which is why we placed the "corneal" lens before the DOE, keeping the eye emmetropic.

For the contact lens, to simulate a 750-μm-thick scleral lens at scale 2:1, we used a one-inch-diameter, 1.5-mm-thick glass disk.

The laser (APCD-650-02-C2) was placed outside the lens, and the beam was injected into it through a small injection area. The laser module incorporates a lens that compensates for the fact that the position of the retina does not correspond to the focal plane of the crystalline lens. This optical power could instead be embedded within the guide or at the level of the DOE (at the cost of increased sensitivity to mydriasis).

The light guide and diffractive elements were manufactured as follows. A 1.2-μm-thick layer of S1813 (Shipley) photoresist was deposited on the glass disk by spin-coating. Multilevel phase DOEs were written into this layer with our massively parallel direct-write photoplotter.23 The DOE pattern was calculated using iterative Fourier-transform algorithms (IFTA) to produce a warning sign in the retinal plane. The DOEs (4×5 mm) were placed on an 11-mm-diameter ring. The areas surrounding the DOEs were also fully exposed so that, on development (Microposit 303A), the DOEs were etched into the resist layer and the surrounding photoresist removed. The DOEs were designed using a three-stage IFTA-based algorithm with an off-axis reconstruction image to eliminate disturbance by any residual zeroth order. They were manufactured at a resolution of 750 nm and with a maximum etch depth of approximately 1010 nm. For the light guide, reflective gold layers were sputtered onto the top and bottom faces of the glass disk through laser-cut PET thin-film stencils. The stencils protected the pupil area and the light injection area, keeping both free of sputtered gold. The sides of the glass disk were also protected to prevent them from being covered by gold. The alignment and resolution of the gold areas could be further improved using a photolithographic mask process rather than PET stencils. The reflective gold layers were a few tens of nanometers thick. These layers are currently fragile and could be protected, or replaced by evaporated aluminum (more robust) or silver coatings. Figure 2 shows the device as manufactured for the test. In this study, the written DOE reconstructs a danger road sign (i.e., the so-called "other danger" warning sign).
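The IFTA design step can be sketched as a minimal Gerchberg-Saxton-style loop (our own illustration; the authors' three-stage algorithm with off-axis reconstruction, and the subsequent quantization to discrete etch levels, are more elaborate):

```python
import numpy as np

# Minimal iterative Fourier-transform algorithm (Gerchberg-Saxton style):
# find a phase-only DOE whose far field approximates a target pattern.
rng = np.random.default_rng(0)
N = 128
target = np.zeros((N, N))
target[40:88, 60:64] = 1.0                    # toy target pattern in the retinal plane
target /= np.sqrt((target ** 2).sum())        # normalize target energy

phase = rng.uniform(0, 2 * np.pi, (N, N))     # random initial DOE phase
for _ in range(50):
    far = np.fft.fft2(np.exp(1j * phase))           # propagate DOE -> retinal plane
    far = target * np.exp(1j * np.angle(far))       # impose the target amplitude
    near = np.fft.ifft2(far)                        # propagate back to the DOE plane
    phase = np.angle(near)                          # keep phase only (phase-only DOE)

# Diffraction efficiency: fraction of the light landing on the target pixels.
recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
efficiency = recon[target > 0].sum() / recon.sum()
print(round(efficiency, 2))   # typically well above 0.8 for a small binary target
```

In the fabricated device, the continuous phase found by such a loop is quantized to the multilevel profile written into the resist (etch depth up to ~1010 nm), a step omitted here.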

Fig. 2

View of the glass disk replacing the contact lens, with the light guide and DOE (indicated by a red arrow). A gold reflective layer on either side of the disk helps guide the laser light toward the DOE. The rear layer (facing the eye) covers the iris. The front layer is thinner to allow coupling with the laser source, so that, after propagating in the guide, the light illuminates the DOE located at the exit of the guide at the correct incidence.

OE_60_11_115103_f002.png

Finally, this element was mounted onto the scale model of the eye (cf. Fig. 3) to demonstrate the proof of concept.

Fig. 3

(a) Exploded CAD view of the experimental set-up: A: cornea, B: artificial contact lens (Fig. 2), C: diaphragm slot, D: lens holder, E: crystalline lens. (b) Experimental set-up with elements A, B, D, and E as well as the diaphragm (F) and camera (G). (c) Assembled prototype with the camera acting as the retina. A mechanical element (H) allows the position of the laser source (J) to be adjusted to change the injection angle.

OE_60_11_115103_f003.png

The optimum injection angle for the laser light was calculated using TracePro (Fig. 4) and estimated to be 12 deg. The DOE was designed to produce an image 0.42 mm across on the retina (the user would thus see an image approximately as large as three full moons).
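The "three full moons" comparison follows directly from the designed image size, again assuming ~288 μm of retina per degree of visual angle and a lunar angular diameter of ~0.5 deg:

```python
# Convert the designed retinal image size into an apparent angular size.
# Assumptions: ~288 um of retina per degree; full moon ~0.5 deg across.
UM_PER_DEG = 288.0
image_um = 420.0                     # designed retinal image size, scale 1:1 (0.42 mm)

image_deg = image_um / UM_PER_DEG    # apparent angular size of the symbol
moons = image_deg / 0.5              # expressed in full-moon diameters
print(round(image_deg, 2), round(moons, 1))  # ~1.46 deg, i.e. ~2.9 moon diameters
```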

4.

Optical Experimentation

The aim of this study was to validate the proof of concept, particularly the impact of the angle of injection, mydriasis, and lens power on the hologram reconstruction.

For the laser injection, based on the above embodiment, we obtained optimal reconstruction image quality and efficiency [Fig. 5(a)] at an angle of 12 deg, with a tolerance of 5 deg corresponding to that predicted by our modeling. If the angle of injection departs from this range, the image becomes severely degraded, with reduced intensity and increased scattering [Figs. 5(b) and 5(c)], or is simply not reconstructed at all. The image measures 0.89 mm in the retinal plane, slightly larger than the expected 0.84 mm at scale 2:1. The lines forming the triangle are approximately 220 μm wide, so the symbol could be easily perceived despite the poor acuity in the perifovea.

Fig. 4

Ray tracing through the experimental set-up. The VCSEL light (A) is injected into the lens, and then guided through multiple reflections to the DOE (B). Pupil size (C) may impede image formation.

OE_60_11_115103_f004.png

Fig. 5

Optimum imaging conditions are an injection angle of 12 deg and a pupil size of 10 mm or greater (a). If the angle of injection deviates from this value (b), (c), or if the pupil is reduced (d), the projected image is degraded. (e) Influence of a 4D defocus simulating the impact of accommodation.

OE_60_11_115103_f005.png

With respect to pupil size, and at the optimum injection angle of 12 deg, pupil diameters of 10 mm or larger (scale 2:1) had no impact on image quality [Fig. 5(a)]. At 8 mm (scale 2:1), the image started to show degradation [increased scattering, less sharp image, Fig. 5(d)], and no image could be reconstructed with the 4- and 6-mm pupils (scale 2:1). Since our objective was to demonstrate the proof of concept, we did not apply any image quality metrics.

In terms of lens power, reducing the lens power by 4D had a similar effect, degrading image quality with blur and increased scattering [Fig. 5(e)]. This means that, in the case of the accommodating eye, the image quality could be reduced without the image becoming unreadable.

In addition, we tested the correct superimposition of the holographic image on a real scene (LogMAR chart, Fig. 6). The image was obtained with a 12-mm pupil (scale 2:1) and illustrates that, even though part of the DOEs is illuminated by daylight, this does not induce any visual artefacts. The reduced image quality here is mainly due to the eye's aberrations.

Fig. 6

Superimposition of the holographic image (the warning sign) on a real scene. The white dot in the centre marks the fixation spot, illustrating that the warning sign appears at the periphery of central vision (delimited by the white ring).

OE_60_11_115103_f006.png

Fig. 7

Video recorded with the experimental set-up (Fig. 3) illustrating the use of a device in a collision avoidance scenario.

OE_60_11_115103_f007.png

The visualization (Fig. 7) presents a short video of the system operating dynamically. The scenario is a collision-avoidance situation. A car driver (whose viewpoint is represented by the camera angle) does not notice a pedestrian on a crossing. A separate system detects the potential risk of collision and sends a command to the RAR system, which activates a warning, immediately displayed on the retina and superimposed on the scene. The driver cannot miss the warning, whatever the direction of his or her gaze. When the situation no longer presents a risk, the laser switches off and the warning disappears. Such a RAR system could be used in real time. In our design, we assumed a scene observed at long range, and thus no visual accommodation. In cases where the warning is displayed while the user is observing an object at a closer distance, for instance a driver looking at a phone, the crystalline lens power would change by approximately 2D. This additional optical power would facilitate image formation by the DOE.

5.

Discussion

In this paper, we have presented, and validated at scale, a very simple PRAR concept that effectively solves the display-eye alignment issue, which is a limitation of near-eye displays (NEDs). Our system allows the superimposition, onto the para- or perifovea, of additional fixed symbols generated separately at the pupil periphery. The symbols are produced by a DOE ring, partly overlapping the pupil and illuminated by embedded laser sources. Although on our prototype we placed the DOEs around a circle, the position of the DOEs on the contact lens does not need to be rotationally symmetrical and could be adapted to the nasal or temporal part of the retina.

Compared with other solutions, the system presents several advantages. It requires no alignment with an external light source (unlike scanning-laser-based systems24), and it keeps the pupil free of any display (unlike designs such as Shtukater's11). Its main disadvantage is that the variety of AR content is limited to a number of monochromatic images (though reconfigurable DOEs are possible). However, such information can be used efficiently to briefly stimulate the perifoveal area for warning tasks. This could be of particular interest for head-up display devices, where AR information may reduce the user's attentional resources available for the detection of external events (danger and alert).

Further describing the optical system, we have presented a proof-of-concept experimental set-up at scale 2:1, taking into account the light guiding, mydriasis, and crystalline lens power. In our prototype, the PRAR system allows the correct projection (resolution, size) of AR content in the perifovea for pupils of 4 mm in diameter or larger (scale 1:1). The fact that no image could be obtained for a smaller pupil was due to the decision to place the DOE far from the optical axis (2.5 mm, scale 1:1) to avoid visual disturbance, but this could be further optimized. Even with a very large pupil diameter (8 mm, scale 1:1), the presence of the DOE did not cause any visual artefacts.

The apparent display size is set by the size of the retinal image, which can be easily adapted. In our set-up, we chose a retinal size corresponding to approximately three full moons, as a compromise between something large enough to be seen and not so large that it would extend onto the fovea. Filling the entire FOV of the user is obviously achievable, whether by increasing the size of the image or by mosaicking the images from different DOEs, but this would contradict our objective.

In terms of dynamic functioning, the simplest way to switch the laser diode on and off is to power it (or not) through induction. Similarly, for a system with several DOEs and associated laser sources, switching between the different sources could easily be achieved using different carrier waves.
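Carrier-based addressing can be sketched as follows (all frequencies and the detection scheme are hypothetical illustration values, not from the paper): each laser responds only to its own carrier, so the transmitted tone selects which DOE is displayed.

```python
import numpy as np

# Sketch of carrier-based source addressing (hypothetical parameters):
# the received RF signal is matched against one carrier per laser/DOE,
# and the strongest carrier selects which source to power.
FS = 100_000.0                                        # sample rate, Hz (assumption)
CARRIERS = [5_000.0, 9_000.0, 13_000.0, 17_000.0]     # one carrier per laser/DOE

def select_laser(signal):
    """Return the index of the carrier with the most received power."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    bins = [np.argmin(np.abs(freqs - c)) for c in CARRIERS]
    return int(np.argmax([spectrum[b] for b in bins]))

t = np.arange(2048) / FS
received = np.sin(2 * np.pi * CARRIERS[2] * t)        # transmitter keys carrier #2
print(select_laser(received))  # 2 -> switch on laser/DOE number 2
```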

In terms of manufacturability, the integration of switchable laser sources or LEDs into contact lenses has already been demonstrated, for instance by IMEC and Ghent University,25 as has the integration of diffractive elements.26 Similarly, we have encapsulated a pointing laser into a scleral contact lens,27 as well as photodiodes.28 A recurrent issue with smart contact lenses is the power supply. Here, the light sources could be remotely switched and powered by energy harvesting8 (i.e., powered by an on-board RF antenna converting RF waves into currents that trigger the on-board electronics and thereby switch the light sources, as demonstrated in Ref. 27), or made autonomous thanks to an embedded battery.29 As a next step, the optics implemented here on a glass substrate should be reduced to the correct scale and then implemented, for instance, on flexible substrates such as thermoplastic polyurethane, which can be molded to match the curvature of the contact lens.30 Another potential extension would be to benefit from recent advances in DOEs on planar waveguides, which could be used to miniaturize the near-to-eye diffractive optics31 and to record the holographic element directly in the guide itself, keeping in mind that the pupil area should remain free and fully transparent.

6.

Conclusion

A novel CLD was presented that has the advantage of leaving the visual axis free of any element that could disturb vision. It is based on the integration into the lens of one or more diffractive optical elements (placed at the iris periphery) and associated light sources, and on image projection away from the fovea. One application could be the projection of warning symbols onto the peripheral retina to trigger a faster reaction. A proof of concept has been demonstrated at scale 2:1. This approach is part of the trend toward integrating increasingly complex functions on electronic contact lenses.

Acknowledgments

The authors declare no conflicts of interest.

References

1. 

B. Kress, “Digital optical elements and technologies (EDO19): applications to AR/VR/MR,” Proc. SPIE, 11062 1106222 (2019). https://doi.org/10.1117/12.2544404 PSISDG 0277-786X Google Scholar

2. 

J. Lin et al., “Retinal projection head-mounted display,” Front. Optoelectron., 10 1 –8 (2017). https://doi.org/10.1007/s12200-016-0662-8 Google Scholar

3. 

V. Krotov, C. Martinez and O. Haeberlé, “Experimental validation of self-focusing image formation for retinal projection display,” Opt. Express, 27 20632 –20648 (2019). https://doi.org/10.1364/OE.27.020632 OPEXFF 1094-4087 Google Scholar

4. 

C. Jang et al., “Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina,” ACM Trans. Graph., 36 (6), 1 –13 (2017). https://doi.org/10.1145/3130800.3130889 ATGRDF 0730-0301 Google Scholar

5. 

Y. Wu et al., “Design of retinal-projection-based near-eye display with contact lens,” Opt. Express, 26 (9), 11553 –11567 (2018). https://doi.org/10.1364/OE.26.011553 OPEXFF 1094-4087 Google Scholar

6. 

A. R. Lingley et al., “A single-pixel wireless contact lens display,” J. Micromech. Microeng., 21 125014 (2011). https://doi.org/10.1088/0960-1317/21/12/125014 JMMIEZ 0960-1317 Google Scholar

7. 

B. A. Parviz, “For your eye only,” IEEE Spectr., 46 (9), 36 –41 (2009). https://doi.org/10.1109/MSPEC.2009.5210042 IEESAM 0018-9235 Google Scholar

8. 

J. Pandey et al., “A fully integrated RF-powered contact lens with a single element display,” IEEE Trans. Biom. Circuits Syst., 4 (6), 454 –461 (2010). https://doi.org/10.1109/TBCAS.2010.2081989 Google Scholar

9. 

J. De Smet et al., “Progress toward a liquid crystal contact lens display,” J. Soc. Inf. Disp., 21 (9), 399 –406 (2013). https://doi.org/10.1002/jsid.188 JSIDE8 0734-1768 Google Scholar

10. 

J. Chen et al., “Design of foveated contact lens display for augmented reality,” Opt. Express, 27 38204 –38219 (2019). https://doi.org/10.1364/OE.381200 OPEXFF 1094-4087 Google Scholar

11. 

A. Shtukater, “Smart contact lens with embedded display and image focusing system,” (2015). Google Scholar

12. 

C. D. Wickens, “Multiple resources and performance prediction,” Theor. Issues Ergon. Sci., 3 (2), 159 –177 (2002). https://doi.org/10.1080/14639220210123806 Google Scholar

13. 

J. K. Lenneman et al., “Differential effects of focal and ambient visual processing demands on driving performance,” in Proc. Fifth Int. Driving Symp. Human Factors Driver Assessment, Training and Vehicle Design, 306 –312 (2009). Google Scholar

14. 

M. Wittmanna et al., “Effects of display position of a visual in-vehicle task on simulated driving,” Appl. Ergon., 37 (2), 187 –199 (2006). https://doi.org/10.1016/j.apergo.2005.06.002 AERGBW 0003-6870 Google Scholar

15. 

M. Boucart et al., “Scene categorization at large visual eccentricities,” Vision Res., 86 35 –42 (2013). https://doi.org/10.1016/j.visres.2013.04.006 VISRAM 0042-6989 Google Scholar

16. 

A. Traschütz, W. Zinke and D. Wegener, “Speed change detection in foveal and peripheral vision,” Vision Res., 72 1 –13 (2012). https://doi.org/10.1016/j.visres.2012.08.019 VISRAM 0042-6989 Google Scholar

17. 

C. I. Lou et al., “Object recognition test in peripheral vision: a study on the influence of object color, pattern and shape,” Lect. Notes Comput. Sci., 7670 18 –26 (2012). https://doi.org/10.1007/978-3-642-35139-6_3 LNCSD9 0302-9743 Google Scholar

18. 

A. Kennedy, “Parafoveal processing in word recognition,” Q. J. Exp. Psychol., 53 (2), 429 –455 (2000). https://doi.org/10.1080/713755901 QJXPAR 0033-555X Google Scholar

19. 

M. G. Calvo and P. J. Lang, “Parafoveal semantic processing of emotional visual scenes,” J Exp. Psychol. Hum. Percept. Perf., 31 (3), 502 –519 (2005). https://doi.org/10.1037/0096-1523.31.3.502 Google Scholar

20. 

D. J. Bayle, M. A. Henaff and P. Krolak-Salmon, “Unconsciously perceived fear in peripheral vision alerts the limbic system: a MEG study,” PLoS One, 4 (12), e8207 (2009). https://doi.org/10.1371/journal.pone.0008207 POLNCL 1932-6203 Google Scholar

21. 

D. Yun et al., “The red light VCSEL for network communication,” Proc. SPIE, 8906 89061O (2013). https://doi.org/10.1117/12.2034256 PSISDG 0277-786X Google Scholar

22. 

C. A. Curcio et al., “Human photoreceptor topography,” J. Comp. Neurol., 292 (4), 497 –523 (1990). https://doi.org/10.1002/cne.902920402 JCNEAM 0021-9967 Google Scholar

23. 

M. Kessels et al., “Stepper based maskless microlithography using a liquid crystal display for massively parallel direct-write of binary and multilevel microstructures,” J. Micro/Nanolithogr. MEMS MOEMS, 6 (3), 033002 (2007). https://doi.org/10.1117/1.2767331 Google Scholar

24. 

C. Iyama et al., “QD laser eyewear as a visual field aid in a visual field defect model,” Sci. Rep., 9 1010 (2019). https://doi.org/10.1038/s41598-018-37744-8 Google Scholar

25. 

H. Johnston, “Soft contact lens fitted with microchip and antenna,” Phys. World, 32 (1), 5 (2018). https://doi.org/10.1088/2058-7058/32/1/7 PHWOEW 0953-8585 Google Scholar

26. 

E. J. Tremblay et al., “Switchable telescopic contact lens,” Opt. Express, 21 (13), 15980 –15986 (2013). https://doi.org/10.1364/OE.21.015980 OPEXFF 1094-4087 Google Scholar

27. 

A. Khaldi et al., “A laser emitting contact lens for eye tracking,” Sci. Rep., 10 14804 (2020). https://doi.org/10.1038/s41598-020-71233-1 SRCEC3 2045-2322 Google Scholar

28. 

L. Massin et al., “Development of a new scleral contact lens with encapsulated photodetectors for eye tracking,” Opt. Express, 28 28635 –28647 (2020). https://doi.org/10.1364/OE.399823 OPEXFF 1094-4087 Google Scholar

29. 

M. Nasreldin et al., “Flexible micro-battery for powering smart contact lens,” Sensors, 19 2062 (2019). https://doi.org/10.3390/s19092062 SNSRES 0746-9462 Google Scholar

30. 

C. Stephen, A. Musgrave and F. Fang, “Contact lens materials: a materials science perspective,” Materials, 14 261 (2019). https://doi.org/10.3390/ma12020261 MATEG9 1996-1944 Google Scholar

31. 

J. Kim et al., “Foveated AR: dynamically-foveated augmented reality display,” ACM Trans. Graphics, 40 (4), 1 –15 (2021). https://doi.org/10.1145/3450626.3459776 ATGRDF 0730-0301 Google Scholar

Biography

Kevin Heggarty studied Natural Sciences at the University of Cambridge (UK) and received his doctorate from the E.N.S.T. in Paris. He is now professor of optics/photonics at the French “Grande Ecole” IMT Atlantique where he leads the diffractive optics group. His research interests include non-display applications of spatial light modulators, the design and fabrication of diffractive micro-optical elements and their applications in optical telecommunications and optical information processing.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Vincent Nourrit, Yoran-Eli Pigeon, Kevin J. Heggarty, and Jean-Louis M. de Bougrenet de la Tocnaye "Perifoveal retinal augmented reality on contact lenses," Optical Engineering 60(11), 115103 (10 November 2021). https://doi.org/10.1117/1.OE.60.11.115103
Received: 2 July 2021; Accepted: 21 October 2021; Published: 10 November 2021
KEYWORDS: Diffractive optical elements; Augmented reality; Visualization; Contact lenses; Retina; Eye; Image quality
