Enhancing infrared color reproducibility through multispectral image processing using RGB and three infrared channels
Motoshi Sobue, Hiroshi Otake, Hironari Takehara, Makito Haruta, Hiroyuki Tashiro, Kiyotaka Sasagawa, Jun Ohta
Abstract

Various techniques for image color reproducibility under low-light conditions have been proposed, such as increasing sensitivity, combining visible (VIS) and infrared (IR) light, and colorizing monochromatic images. However, when the illuminance falls below a certain level, color images cannot be obtained without prior color information. Previously, the exclusive use of IR light without VIS illumination was proposed to achieve a pseudocolor image (basic IR color). It improved visibility compared with conventional monochrome images. However, there are cases, depending on the objects, in which basic IR color cannot reproduce the correct color. An image processing method for enhancing color reproducibility is proposed, particularly for objects that are not suited to basic IR color (enhanced IR color). Moreover, we developed an algorithm that combines the advantages of both VIS and IR colors by utilizing signals of six wavelengths: three each in the VIS and IR ranges. The proposed method includes an automatic transition to the optimal combination of VIS and IR colors as the illumination level changes, thus providing images with superior color reproducibility under various illumination levels compared with basic IR color.

1. Introduction

Various technologies and methods for improving image quality in low-light environments have been proposed. One of these approaches involves increasing the sensitivity of visible band (VIS) signals.1–4 Another approach involves combining VIS and infrared (IR) light by obtaining luminance information from the sensor response of IR light and synthesizing color information from the sensor response of VIS light so that color images can be represented even in dim environments.5–13 However, neither of these methods can obtain color information in the complete absence of VIS light. To solve this problem, the usual approach is color estimation from monochrome images. This is an image restoration technique based on prior color information obtained in one of the following ways: (1) colors specified by the user,14–16 (2) arbitrary color images used to estimate the colors of monochrome images,17–19 or (3) machine learning using sample images.20–22 The colorization technique, a method of obtaining a colorized IR image by using a color image as initial information, has also been proposed.23–25 However, all these colorization techniques require color information to be specified or many color images to be analyzed in advance.

It has been proposed to reproduce VIS color from multispectral images in the IR region.26 By relating the spectral reflection in the IR range to that in the VIS range, a pseudocolor representation was obtained from the three IR responses alone (basic IR color).27,28 However, basic IR color is an approximation that cannot achieve the high color reproducibility of VIS color.

In this study, we propose an image processing method that improves color reproducibility by effectively utilizing VIS light in addition to basic IR color in dim conditions (enhanced IR color), particularly for objects that are not suited to basic IR color. Enhanced IR color is superior to basic IR color because it is comparable in the dark and better in dim and bright illuminance conditions through its use of VIS light. It is therefore a unique method that provides favorable color reproducibility under dark, dim, and bright illuminance conditions without using prior color information. Note that in this study, IR refers to near-IR, approximately in the range of 700 to 1000 nm. Regarding the illuminance level, “bright,” “dim,” and “dark” mean “enough VIS light,” “low VIS light,” and “absence of VIS light,” respectively.

2. Basic IR Color

2.1. Image Processing for Basic IR Color

Basic IR color, proposed previously, is a technique for colorizing an IR image so that it approximates the VIS color by performing color processing based on the weak relationship between the reflection characteristics of objects in the VIS and IR ranges. Specifically, by establishing a correspondence between the three primary colors of VIS and multiple IR wavelengths, it is possible to estimate the VIS color from the signal intensities of the multiple IR wavelengths and perform color representation.29–31 Image processing methods for IR color are similar to those for VIS color, including white balance (WB), color correction (CC), and noise reduction (NR). An example of image processing for basic IR color is shown in Fig. 1. Note that image processing can be performed in various color spaces, such as RGB, YCbCr, and CIELab.

Fig. 1 Example of IR color camera and image processing flow.

WB, CC, and NR are conventional image processing methods also used for VIS. However, because the spectral reflections in the IR region are more gradual than those in the VIS region,32 the image processing for IR color requires stronger, IR-specific adjustments than that for VIS color. The IR color-specific counterparts are as follows.

2.1.1. White balance

Here, WB means the global adjustment of the color intensities to make a white object appear white under IR illumination. Because the balance between IR wavelengths varies significantly with the light source irradiating the objects and the sensitivity of the image sensor, the ratio of the IR signals from the imaging sensor can vary widely. Therefore, the WB gain must be applicable over a wider range of values (e.g., 0.125 to 8.0) than that for the VIS response. In addition, when a WB gain of <1.0 is set, specific processing is required to avoid the so-called high-luminance coloring phenomenon, which causes unwanted coloring in saturated high-luminance regions.
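To make the wide gain range and the saturation handling concrete, the following is a minimal NumPy sketch of such a WB step; the function name, the clamping bounds as coded, and the strategy of forcing saturated pixels to neutral are illustrative assumptions, not the camera's actual implementation.

```python
import numpy as np

def ir_white_balance(img, gains, sat_level=0.98):
    """Apply per-channel WB gains to a float image (H, W, 3) in [0, 1].

    IR illuminants and sensor sensitivities vary widely, so the gains
    are clamped to a wider range (0.125 to 8.0) than is typical for
    VIS WB. Pixels that were saturated before gain are forced to
    neutral to avoid the high-luminance coloring artifact that can
    appear when any gain is < 1.0.
    """
    gains = np.clip(np.asarray(gains, dtype=np.float32), 0.125, 8.0)
    out = np.clip(img * gains, 0.0, 1.0)

    # Saturated pixels carry no reliable color: replace the three
    # channels with their common maximum so white stays white.
    saturated = img.max(axis=2) >= sat_level
    out[saturated] = out[saturated].max(axis=1, keepdims=True)
    return out
```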

2.1.2. Color correction

Here, CC means a localized color adjustment that allocates IR signals to VIS colors. Regarding the characteristics of the materials used as objects, the spectral responses in the IR wavelength region are generally weaker than those in the VIS wavelength region, and the relative changes between wavelengths are more gradual. To visualize such features, a relatively strong correction effect is necessary, such as a variable range of ±4.0 for the RGB 3×3 matrix coefficients. The colors are adjusted through matrix correction toward the approximate VIS color.
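As an illustration, the sketch below applies a 3×3 CC matrix with coefficients clamped to the ±4.0 range described above; the example matrix values are hypothetical, not the coefficients used in the paper.

```python
import numpy as np

def ir_color_correction(img, ccm):
    """Apply a 3x3 color-correction matrix to an (H, W, 3) image,
    mapping the three IR channels to estimated VIS primaries.
    Coefficients are clamped to the wide +/-4.0 range needed to
    amplify the gradual spectral differences between IR channels."""
    ccm = np.clip(np.asarray(ccm, dtype=np.float32), -4.0, 4.0)
    # Per-pixel matrix product: out_c = sum_j ccm[c, j] * img[..., j]
    return np.clip(np.einsum('cj,hwj->hwc', ccm, img), 0.0, 1.0)

# Hypothetical matrix: strong diagonal gain with negative
# off-diagonal terms to increase separation between IR channels.
ccm_example = np.array([[ 1.8, -0.5, -0.3],
                        [-0.4,  1.6, -0.2],
                        [-0.2, -0.6,  1.8]])
```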

2.1.3. Noise reduction

Owing to the strong correction gains, the NR function plays a crucial role alongside the CC function. Maintaining minimum noise while optimizing image quality, including sharpness and edge expression, is essential. In this system, two types of filters, spatial NR (a Gaussian smoothing filter) and time-axis NR (frame addition), are combined, especially under low IR illumination where the noise level is high. The purpose of basic IR color is to provide color information, which is in a trade-off relationship with NR; therefore, an appropriate balance should be struck between the two.
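A minimal sketch of such a combined NR stage is shown below, assuming a Gaussian filter for the spatial NR and a running exponential average as a simple form of frame addition; the blending parameter is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def combined_nr(frame, running_avg, sigma=1.5, alpha=0.25):
    """Combine spatial NR (Gaussian smoothing) with time-axis NR
    (frame addition, here a running exponential average).

    frame, running_avg : (H, W, 3) float images.
    sigma : spatial smoothing strength (pixels).
    alpha : temporal blend; smaller values average more frames,
            lowering noise at the cost of motion lag.
    """
    # Spatial NR: smooth within each channel, not across channels.
    spatial = gaussian_filter(frame, sigma=(sigma, sigma, 0))
    # Time-axis NR: accumulate with the previous averaged frame.
    return alpha * spatial + (1.0 - alpha) * running_avg
```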

2.2. Example of VIS and Basic IR Color

Figure 2 shows an example of the VIS and basic IR color. The objects surrounded by blue frames (tomatoes, ink bottles, and paper clips) reproduce color relatively well, whereas the Macbeth chart surrounded by red frames shows low color reproducibility, thereby suggesting that the accuracy of color reproducibility varies depending on the objects.

Fig. 2 Sample images of VIS and basic IR color: (a) VIS color in bright illuminance (30 lux) and (b) basic IR color in dark illuminance (no VIS light).

3. Enhanced IR Color

In this section, we describe an imaging process to enhance IR color utilizing RGB-3NIR six-channel multispectral signals. Although basic IR color uses only IR light assuming complete darkness, in practical applications, such as surveillance, it is rare that there is no VIS light. There are many dim situations where there is a small amount of VIS light. In such cases, instead of providing basic IR color by cutting off all VIS light, we examined a method for improving color reproducibility by effectively utilizing weak VIS signals.

The camera system used in this study is illustrated in Fig. 3. The multispectral camera (NLX-PH001-C) was selected because it is designed to provide both VIS and basic IR color. The camera has a 3CMOS configuration with a spectral prism, as shown in the transmittance characteristics in Fig. 3. Each CMOS sensor detects different VIS and IR bands: the first for R and IR1 (810 nm), the second for B and IR2 (870 nm), and the third for G and IR3 (940 nm). The camera works as a VIS camera when an IR-cut filter is attached and as a basic IR color camera with a VIS-cut filter. In this experiment, two of these cameras were used because one camera cannot provide the six-channel VIS and IR multispectral signals simultaneously. One (CAM1) is equipped with an IR-cut filter to transmit only R, G, and B. The other (CAM2), without a filter, transmits R and IR1, B and IR2, and G and IR3. The angles and distance of the two cameras were adjusted using calibration software to avoid pixel shifts. There are two illuminators, one VIS and one IR. The VIS illuminator is variable for adjusting the illuminance level. The IR illuminator is constant and irradiates the target with IR1, IR2, and IR3 light.

Fig. 3 Evaluation system of the enhanced IR color camera: (a) the total system, (b) schematic channel responses for CAM1 and CAM2, (c) normalized spectral intensity of the IR illumination (Nanolux Co. Ltd., multi-IR illuminator), and (d) that of the VIS illumination (CCS Inc., XX-SC28FS55-P1).

The image processing algorithm for enhanced IR color is shown in Fig. 4. Three issues unique to enhanced IR color must be considered. The first is the difference in spectral characteristics between the VIS and IR ranges; an optimal image cannot be obtained by simply summing the VIS and IR colors, so the VIS and IR ranges must be processed separately. The second is the dynamic transition of the optimal composite ratio of the VIS and IR images with the illuminance level; at higher illuminance levels, VIS color is more reliable than IR color, so the optimal composite ratio must be calculated seamlessly and automatically within the algorithm. The third is the relatively weak spectral reflection in the IR region compared with the VIS region, which requires strong WB and CC parameters in the IR image processing, as described in Sec. 2.1. To compensate for the noise caused by these strong parameters, a spatial-quality signal such as S/N (a contrast-preferred signal) is calculated separately and added to the signals generated with emphasis on color expression, such as hue and chroma (a tonal-preferred signal), in both VIS and IR.

Fig. 4 Algorithm for enhanced IR color and the three main issues.

3.1. Solution to Issue 1: Difference in Spectrum Characteristics in VIS and IR Ranges

Because simply summing the VIS and IR colors does not produce an appropriate color, image processing is required for VIS and IR separately. Thus, three VIS and three IR signals are needed. In the camera system shown in Fig. 3, the VIS signals are acquired from CAM1, whereas the IR signals must be calculated from the difference between CAM2 and CAM1. Considering that the IR-cut filter on CAM1 does not fully transmit VIS light, the formula for extracting the IR signals is expressed as follows:

Eq. (1)

$$R_i = \frac{g_i}{g_n} R_n - \frac{g_i}{g_v t_R} R_v,$$

Eq. (2)

$$G_i = \frac{g_i}{g_n} G_n - \frac{g_i}{g_v t_G} G_v,$$

Eq. (3)

$$B_i = \frac{g_i}{g_n} B_n - \frac{g_i}{g_v t_B} B_v,$$

where $R_v$, $G_v$, and $B_v$ are the VIS signals from CAM1 after preprocessing (black correction, defect correction, and distortion correction); $R_n$, $G_n$, and $B_n$ are the VIS-plus-IR signals from CAM2 after the same preprocessing; $R_i$, $G_i$, and $B_i$ are the calculated IR signals; $g_v$ and $g_n$ are the gain values applied to the CAM1 and CAM2 sensors, respectively; $g_i$ is the gain value applied to the IR signals; and $t_R$, $t_G$, and $t_B$ are the rates at which the VIS signals are attenuated by the installation of the IR-cut filter.
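The following sketch implements Eqs. (1)–(3) directly, assuming preprocessed, pixel-aligned float images from both cameras; the function name and the clipping of negative residuals are illustrative.

```python
import numpy as np

def extract_ir(cam2_rgb, cam1_rgb, g_i, g_n, g_v, t_vis):
    """Recover pure IR signals per Eqs. (1)-(3).

    cam2_rgb : (H, W, 3) preprocessed CAM2 signals (VIS + IR).
    cam1_rgb : (H, W, 3) preprocessed CAM1 signals (VIS through
               the IR-cut filter).
    g_i, g_n, g_v : gains for the IR signal, CAM2, and CAM1.
    t_vis : per-channel VIS transmittance (t_R, t_G, t_B) of the
            IR-cut filter.
    """
    t_vis = np.asarray(t_vis, dtype=np.float32)
    # Scale CAM2 down to unity gain, then subtract the scene VIS
    # estimated from CAM1 (undoing its gain and filter attenuation).
    ir = g_i * (cam2_rgb / g_n - cam1_rgb / (g_v * t_vis))
    return np.clip(ir, 0.0, None)  # negative residuals are noise
```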

3.2. Solution to Issue 2: Optimal and Automatic VIS and IR Composition Under Various Illuminance Levels

As the lighting environment changes, the optimal composition ratio of the VIS and IR colors changes as well, so the composition must transition automatically according to the external lighting environment. Although linking the parameters to a separate illuminance sensor could be considered, the cost and complexity of such a method would be impractical. Therefore, we developed a method for setting the optimal composition ratio without an additional sensor by referring to the auto-gain value from the camera's auto exposure (AE) function.

The basic idea of the method is that (1) as the auto gain value for VIS decreases, the composition ratio of VIS decreases and that of IR increases, and (2) as the auto gain value for IR decreases, the composition ratio of IR decreases as well.

The composition is performed on the signals after WB correction, CC, and gamma conversion, as shown in the “VIS and IR composition” block in Fig. 4. The composition formula is a simple weighted sum, expressed as follows:

Eq. (4)

$$\begin{bmatrix} Y \\ Cb \\ Cr \end{bmatrix} = \begin{bmatrix} w_{vY}\,Y_v \\ w_{vC}\,Cb_v \\ w_{vC}\,Cr_v \end{bmatrix} + \begin{bmatrix} w_{iY}\,Y_i \\ w_{iC}\,Cb_i \\ w_{iC}\,Cr_i \end{bmatrix},$$

where $Y_v$, $Cb_v$, and $Cr_v$ are the color reproducibility signals after development from the VIS response; $Y_i$, $Cb_i$, and $Cr_i$ are the color representation signals after development from the IR response; $w_{vY}$ is the weight coefficient for the luminance signal $Y_v$; $w_{vC}$ is the weight coefficient for the chromaticity signals $Cb_v$ and $Cr_v$; $w_{iY}$ is the weight coefficient for the luminance signal $Y_i$; and $w_{iC}$ is the weight coefficient for the chromaticity signals $Cb_i$ and $Cr_i$.

Next, the weight coefficients for the VIS response, $w_{vY}$ and $w_{vC}$, are calculated using the gain value for the VIS response:

Eq. (5)

$$w_{vY} = w_{vC} = F_{DQv}(D_{av}),$$

where $D_{av}$ is the auto-gain value for the VIS response (dB), $D_{ai}$ is the auto-gain value for the IR response (dB), and $F_{DQv}$ is the evaluation function of VIS image quality with respect to the VIS response gain.

As the auto gain value of the VIS light decreases, the ratio of VIS light should decrease as well.

Finally, the weight coefficients $w_{iY}$ and $w_{iC}$ for the IR response are calculated using the gain values for both the VIS and IR responses:

Eq. (6)

$$w_{iY} = F_{DvSY}(D_{av}) \times F_{DQi}(D_{ai}),$$

Eq. (7)

$$w_{iC} = F_{DvSC}(D_{av}) \times F_{DQi}(D_{ai}),$$

where $F_{DvSY}$ and $F_{DvSC}$ are the evaluation functions of the complement intensity of each signal with respect to the VIS response gain, and $F_{DQi}$ is the evaluation function of IR image quality with respect to the IR response gain.

$F_{DvSY}$ and $F_{DvSC}$ represent the strength with which the IR signal is allowed to complement the signal obtained from the VIS response when the latter weakens, and they should be set to increase as the gain value for the VIS response decreases.

Figure 5(a) shows an example of the evaluation functions. Complementation by the IR signal is set to start at a slightly higher gain, which is intended to keep the VIS color as intact as possible. If the color approximation of the IR color is more accurate, complementation by the IR signal can be set to start at a lower gain.

Fig. 5 Example of evaluation functions for color composition: (a) evaluation function from the VIS response and (b) evaluation function from the IR response.

FDQi is set to reduce the weighting of the IR color when the signal from the IR light response decreases, thereby indicating that the IR color is no longer reliable, as shown in Fig. 5(b).
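The sketch below illustrates Eqs. (4)–(7), assuming piecewise-linear evaluation functions; the breakpoints and the use of a single function for both $F_{DvSY}$ and $F_{DvSC}$ are illustrative assumptions, since the paper does not disclose the actual function shapes.

```python
import numpy as np

def lerp_eval(x, xp, fp):
    """Piecewise-linear evaluation function; a stand-in for
    F_DQv, F_DvSY/F_DvSC, and F_DQi in Eqs. (5)-(7)."""
    return np.interp(x, xp, fp)

def compose_ycbcr(vis_ycc, ir_ycc, d_av, d_ai):
    """Blend developed VIS and IR YCbCr images per Eq. (4).

    Following the text: the VIS weight falls as the VIS auto-gain
    value D_av falls, the IR complement F_DvS rises as D_av falls,
    and F_DQi lowers the IR weight as the IR response weakens.
    The breakpoints (in dB) below are illustrative only.
    """
    w_v = lerp_eval(d_av, [6.0, 30.0], [0.0, 1.0])    # F_DQv, Eq. (5)
    f_dvs = lerp_eval(d_av, [6.0, 30.0], [1.0, 0.0])  # F_DvSY = F_DvSC here
    f_dqi = lerp_eval(d_ai, [6.0, 30.0], [0.0, 1.0])  # F_DQi
    w_iy = f_dvs * f_dqi                              # Eq. (6)
    w_ic = f_dvs * f_dqi                              # Eq. (7)
    w_vis = np.array([w_v, w_v, w_v])                 # w_vY = w_vC
    w_ir = np.array([w_iy, w_ic, w_ic])
    return w_vis * vis_ycc + w_ir * ir_ycc            # Eq. (4)
```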

3.3. Solution to Issue 3: Optimal Composition of Tonal- and Contrast-Preferred Signals

The final image is constructed by combining the separately generated contrast- and tonal-preferred signals obtained by the method explained in Sec. 3.2.

The advantage of the contrast-preferred signal is that it combines the full signals from the two cameras. Figure 6 shows the detailed processing configuration for combining the tonal- and contrast-preferred signals. As shown in Fig. 6, the contrast-preferred signal is used as the base signal. The difference between the tonal- and contrast-preferred signals is used as an additional signal that ensures the quality of the chromatic expression and is added at the final output. The spatial filter for the NR processing applied to this additional signal is an edge-preserving low-pass filter that reduces the high-frequency components caused by noise while retaining the edge components as much as possible. Filtering the two signals in this way is an effective process for achieving differential NR.

Fig. 6 Detailed processing configuration for tonal- and contrast-preferred signal composition.
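As a sketch of this differential NR composition, the code below uses a bilateral filter as a stand-in for the edge-preserving low-pass filter of Fig. 6; the filter choice and its parameters are assumptions, not the paper's implementation.

```python
import numpy as np
import cv2

def compose_tonal_contrast(contrast_sig, tonal_sig,
                           d=9, sigma_color=0.1, sigma_space=7):
    """Differential NR composition in the spirit of Fig. 6.

    The contrast-preferred signal is the base; the difference to the
    tonal-preferred signal is noise-reduced with an edge-preserving
    filter and then added back, restoring chromatic quality without
    re-injecting high-frequency noise.
    Inputs: (H, W, 3) float32 images scaled to [0, 1].
    """
    diff = (tonal_sig - contrast_sig).astype(np.float32)
    # Bilateral filter: suppresses noise in the additional signal
    # while largely retaining edges.
    diff_nr = cv2.bilateralFilter(diff, d, sigma_color, sigma_space)
    return contrast_sig + diff_nr
```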

The image processing for the tonal- and contrast-preferred signal composition process is shown in Fig. 7.

Fig. 7 Example of images in the tonal- and contrast-preferred composition process.

4. Experimental Evaluations

The images obtained using the enhanced IR color system in bright (30 lux), dim (0.7 lux), and dark (0.15 lux) illuminance are compared with basic IR color and VIS color in Fig. 8. Enhanced IR color provides better color reproducibility of the Macbeth chart (see the red boxes) in dim light than basic IR color while achieving color reproducibility comparable to basic IR color in the dark and to VIS color in bright illuminance.

Fig. 8 Enhanced IR color, basic IR color, and VIS color by illumination level.

Figure 9 shows the quantitative evaluation of the color reproducibility of the Macbeth chart. We selected 12 colors in the second and third rows from the top of the chart, where it was difficult to reproduce the colors using basic IR color. Because the main purpose of this study is to reproduce VIS color using multispectral signals, the chromatic variance (ΔCab) is measured as the distance from the correct color (VIS color in bright, 30 lux, illuminance) in the ab plane of the CIELab color space33:

Eq. (8)

$$\Delta C_{ab} = \{\Delta a^2 + \Delta b^2\}^{1/2},$$

where $\Delta a$ and $\Delta b$ are the differences in the $a$ and $b$ coordinates between the evaluated color (basic or enhanced IR color) and the correct color (VIS color in bright illuminance) in CIELab space.
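For reference, Eq. (8) amounts to the following computation on CIELab values (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def delta_c_ab(lab_test, lab_ref):
    """Chromatic distance of Eq. (8): Euclidean distance in the
    a*b* plane of CIELab, ignoring lightness L*.
    Inputs are (..., 3) arrays of (L*, a*, b*) values."""
    d_a = lab_test[..., 1] - lab_ref[..., 1]
    d_b = lab_test[..., 2] - lab_ref[..., 2]
    return np.sqrt(d_a ** 2 + d_b ** 2)
```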

Fig. 9 Chroma reproducibility evaluation of basic IR color and enhanced IR color for the Macbeth chart: (a) the 12 Macbeth chart colors used for the evaluation, (b) average distance of the 12 colors from the correct color (VIS color in bright illuminance) in ab-space, and (c) the distances for each color.

The shorter the distance, the more reproducible the chroma. Because basic IR color does not utilize VIS light, there is always a large discrepancy from the VIS color regardless of the illuminance. In contrast, enhanced IR color provides more reproducible colors than basic IR color, not only in bright but also in dim light.

To verify reproducibility for other objects, the same analysis was performed on the three ink bottles (see the dotted blue boxes in Fig. 8), which achieved relatively good color reproducibility using basic IR color (Fig. 10). Compared with the color reproducibility of basic IR color for the Macbeth chart presented in Fig. 9, the ink bottles show slightly better color reproducibility (66 vs. 47). Comparing basic IR color with enhanced IR color, the latter is superior on average at all illumination levels for all three colors, although there are cases where the color expression is better at darker illuminance, e.g., the green bottle under 0.7 to 5.1 lux. Thus, enhanced IR color can achieve more reproducible colors than basic IR color, particularly for objects with poor color reproducibility.

Fig. 10 Chroma reproducibility evaluation of basic IR color and enhanced IR color for the ink bottles: (a) the three ink bottle colors used for the evaluation, (b) average distance of the three colors from the correct color (VIS color in bright illuminance) in ab-space, and (c) the distances for each color.

5. Conclusion

We proposed an image processing method for improving color reproducibility compared with that of basic IR color. By utilizing a total of six channels (three VIS and three IR wavelengths), we confirmed that the color reproducibility of enhanced IR color exceeded that of basic IR color under all lighting conditions. We also proposed an algorithm that takes advantage of both VIS and IR colors to automatically transition to the optimal composition of VIS and IR colors using auto-gain values that can be easily obtained from an ordinary camera. We confirmed the superiority of enhanced IR color over basic IR color for various objects, particularly those with poor color reproducibility in basic IR color.

References

1. A. Artusi et al., "High dynamic range imaging technology [Lecture notes]," IEEE Signal Process. Mag., 34(5), 165–172 (2017). https://doi.org/10.1109/MSP.2017.2716957
2. M. Kubota et al., "Ultrahigh-sensitivity new super-HARP camera," IEEE Trans. Broadcast., 42(3), 251–258 (1996). https://doi.org/10.1109/11.536588
3. Canon Inc., "Canon develops SPAD sensor with world-highest 3.2-megapixel count, innovates with low-light imaging camera that realizes high color reproduction even in dark environments," (2021).
4. M. Furuta et al., "A high-speed, high-sensitivity digital CMOS image sensor with a global shutter and 12-bit column-parallel cyclic A/D converters," IEEE J. Solid-State Circuits, 42(4), 766–774 (2007). https://doi.org/10.1109/JSSC.2007.891655
5. S. Sekiguchi and M. Yamamoto, "Near-infrared image colorization by convolutional neural network with perceptual loss," in IEEE 9th Global Conf. Consum. Electron. (GCCE), 88–89 (2020). https://doi.org/10.1109/GCCE50665.2020.9291773
6. C. Williams et al., "Grayscale-to-color: scalable fabrication of custom multispectral filter arrays," ACS Photonics, 6(12), 3132–3141 (2019). https://doi.org/10.1021/acsphotonics.9b01196
7. N. Hagen and M. W. Kudenov, "Review of snapshot spectral imaging technologies," Opt. Eng., 52(9), 090901 (2013). https://doi.org/10.1117/1.OE.52.9.090901
8. H.-J. Kwon and S.-H. Lee, "Visible and near-infrared image acquisition and fusion for night surveillance," Chemosensors, 9(4), 75 (2021). https://doi.org/10.3390/chemosensors9040075
9. H. Su, C. Jung and L. Yu, "Multi-spectral fusion and denoising of color and near-infrared images using multi-scale wavelet analysis," Sensors, 21(11), 3610 (2021). https://doi.org/10.3390/s21113610
10. H. Yamashita, D. Sugimura and T. Hamamoto, "Enhancing low-light color images using an RGB-NIR single sensor," in Vis. Commun. and Image Process. (VCIP), 1–4 (2015). https://doi.org/10.1109/VCIP.2015.7457844
11. T. Honda, D. Sugimura and T. Hamamoto, "Multi-frame RGB/NIR imaging for low-light color image super-resolution," IEEE Trans. Comput. Imaging, 6, 248–262 (2020). https://doi.org/10.1109/TCI.2019.2948779
12. J. M. Amigo, H. Babamoradi and S. Elcoroaristizabal, "Hyperspectral image analysis. A tutorial," Anal. Chim. Acta, 896, 34–51 (2015). https://doi.org/10.1016/j.aca.2015.09.030
13. Z. Chen, X. Wang and R. Liang, "RGB-NIR multispectral camera," Opt. Express, 22(5), 4985 (2014). https://doi.org/10.1364/OE.22.004985
14. A. Levin, D. Lischinski and Y. Weiss, "Colorization using optimization," ACM Trans. Graphics, 23(3), 689–694 (2004). https://doi.org/10.1145/1015706.1015780
15. L. Yatziv and G. Sapiro, "Fast image and video colorization using chrominance blending," IEEE Trans. Image Process., 15(5), 1120–1129 (2006). https://doi.org/10.1109/TIP.2005.864231
16. J. Mairal, M. Elad and G. Sapiro, "Sparse representation for color image restoration," IEEE Trans. Image Process., 17(1), 53–69 (2008). https://doi.org/10.1109/TIP.2007.911828
17. E. Reinhard et al., "Color transfer between images," IEEE Comput. Graphics Appl., 21(4), 34–41 (2001). https://doi.org/10.1109/38.946629
18. T. Welsh, M. Ashikhmin and K. Mueller, "Transferring color to greyscale images," in Proc. 29th Annu. Conf. Comput. Graphics and Interact. Tech. (SIGGRAPH '02), 277 (2002).
19. R. K. Gupta et al., "Image colorization using similar images," in Proc. 20th ACM Int. Conf. Multimedia (MM '12), 369 (2012).
20. Z. Cheng, Q. Yang and B. Sheng, "Deep colorization," in IEEE Int. Conf. Comput. Vision (ICCV), 415–423 (2015).
21. S. Iizuka, E. Simo-Serra and H. Ishikawa, "Let there be color!," ACM Trans. Graphics, 35(4), 1–11 (2016). https://doi.org/10.1145/2897824.2925974
22. R. Zhang, P. Isola and A. A. Efros, "Colorful image colorization," Lect. Notes Comput. Sci., 9907, 649–666 (2016). https://doi.org/10.1007/978-3-319-46487-9_40
23. T. Hamam, Y. Dordek and D. Cohen, "Single-band infrared texture-based image colorization," in IEEE 27th Convention of Electr. and Electron. Eng. in Israel, 1–5 (2012). https://doi.org/10.1109/EEEI.2012.6377111
24. M. Limmer and H. P. A. Lensch, "Infrared colorization using deep convolutional neural networks," in 15th IEEE Int. Conf. Mach. Learn. and Appl. (ICMLA), 61–68 (2016). https://doi.org/10.1109/ICMLA.2016.0019
25. P. L. Suárez, A. D. Sappa and B. X. Vintimilla, "Learning to colorize infrared images," in Advances in Intelligent Systems and Computing, 164–172 (2017).
26. M. Vilaseca, J. Pujol and F. M. Martínez-Verdú, "Color visualization system for near-infrared multispectral images," J. Imaging Sci. Technol., 49(3) (2005).
27. H. Takehara et al., "Multispectral near-infrared imaging technologies for nonmydriatic fundus camera," in IEEE Biomed. Circuits and Syst. Conf. (BioCAS), 1–4 (2019). https://doi.org/10.1109/BIOCAS.2019.8918695
28. National Institute of Advanced Industrial Science and Technology (AIST), "Development of high-definition infrared color night-vision imaging technology," (2013). https://www.aist.go.jp/aist_e/list/latest_research/2013/20130201/20130201.html
29. Y. Nagamune, "Image capturing device and image capturing method," (2014).
30. Y. Nagamune, "Image capturing device and image capturing method," (2014).
31. H. Sumi et al., "Next-generation fundus camera with full color image acquisition in 0-lx visible light by 1.12-micron square pixel, 4K, 30-fps BSI CMOS image sensor with advanced NIR multi-spectral imaging system," in IEEE Symp. VLSI Technol., 163–164 (2018). https://doi.org/10.1109/VLSIT.2018.8510698
32. Y. Asano et al., "Shape from water: bispectral light absorption for depth recovery," Lect. Notes Comput. Sci., 9914, 635–649 (2016). https://doi.org/10.1007/978-3-319-46466-4_38
33. R. Berns, Billmeyer and Saltzman's Principles of Color Technology, 107–130, John Wiley & Sons, Inc. (2000).

Biography

Motoshi Sobue received his MS degree in engineering from Waseda University, Japan, in 1989 and his MA degree in economics from Duke University, USA, in 1996. He worked for the Bank of Japan, Intel, Dell, and BAT. In 2021, he joined the Graduate School of Material Science, Nara Institute of Science and Technology (NAIST), Nara, Japan, as a PhD candidate. He is also the CEO of Nanolux Co., Ltd.

Hiroshi Ohtake graduated from the Tokyo Kogakuin College of Technology, Tokyo, Japan, in 1982. In 1982, he joined the Japan Broadcasting Corporation (NHK), Tokyo, where he engaged in the development of advanced image sensors at the Science and Technology Research Laboratories. Since August 2020, he has been with Nanolux Corporation, researching multispectral image sensors. He is a fellow member of the Institute of Image Information and Television Engineers.

Hironari Takehara received his ME degree in applied chemistry from Kansai University, Osaka, Japan, in 1986 and his PhD degree in materials science from Nara Institute of Science and Technology (NAIST), Nara, Japan, in 2015. From 1986 to 2012, he was a semiconductor process engineer at Panasonic Corporation, Kyoto, Japan. In 2015 and 2019, he joined NAIST as a postdoctoral fellow and an assistant professor, respectively. His current research interests involve CMOS image sensors and bioimaging.

Makito Haruta received his MS degree in biological science and his Dr. Eng degree in material science from the Nara Institute of Science and Technology (NAIST), Nara, Japan, in 2011 and 2014, respectively. He joined NAIST in 2016 as an assistant professor. In 2019, he joined the Graduate School of Science and Technology, NAIST, as an assistant professor. His research interests include brain imaging devices for understanding brain functions related to animal behaviors.

Hiroyuki Tashiro received his ME degree from Toyohashi University of Technology in 1996. He received his PhD from Nara Institute of Science and Technology (NAIST) in 2017. In 1998, he joined Nidek Co., Ltd., working on the R&D of ophthalmic surgical systems and retinal prostheses. He has been an assistant professor at Kyushu University since 2014 and an associate professor at NAIST since 2019. His current research interests include artificial vision systems and neural interfaces.

Kiyotaka Sasagawa received his BS degree from Kyoto University in 1999 and his ME and PhD degrees in materials science from NAIST, Japan, in 2001 and 2004, respectively. He was then a researcher with the National Institute of Information and Communications Technology, Tokyo. In 2008, he joined NAIST as an assistant professor and was promoted to associate professor in 2019. His research interests involve bioimaging, biosensing, and electromagnetic field imaging.

Jun Ohta received his ME and Dr. degrees in applied physics from the University of Tokyo, Japan, in 1983 and 1992, respectively. In 1983, he joined Mitsubishi Electric Corporation, Japan. In 1998, he joined Nara Institute of Science and Technology (NAIST), Japan, and was appointed as a professor in 2004. His research interests include smart CMOS image sensors for biomedical applications. He is a Fellow of IEEE, the Japan Society of Applied Physics, and ITE.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Motoshi Sobue, Hiroshi Otake, Hironari Takehara, Makito Haruta, Hiroyuki Tashiro, Kiyotaka Sasagawa, and Jun Ohta "Enhancing infrared color reproducibility through multispectral image processing using RGB and three infrared channels," Optical Engineering 61(6), 063107 (30 June 2022). https://doi.org/10.1117/1.OE.61.6.063107
Received: 2 February 2022; Accepted: 17 June 2022; Published: 30 June 2022
KEYWORDS: Infrared imaging, Image processing, Image enhancement, Infrared radiation, Cameras, Multispectral imaging, Infrared cameras
