Pushbroom hyperspectral imaging system with selectable region of interest for medical imaging
Abstract
A spatial-scanning pushbroom hyperspectral imaging (HSI) system incorporating a video camera (VC) that is used not only for direct video imaging but also for selecting the region of interest within the VC’s full field-of-view is presented. Using a VC for these two purposes brings many benefits to a pushbroom HSI system, such as a minimized data acquisition time and a smaller data storage requirement. A detailed description of the system is given, followed by the methods and formulas used for calibration and electronic hardware interfacing; these are discussed and analyzed using a United States Air Force resolution chart, chicken breast tissue, and fluorescent targets as test samples. The proposed concepts and the developed system can find potential biomedical imaging applications and can be extended to endoscopic imaging as well.

1.

Introduction

Spectral imaging can be used to collect spectral information from a sample, and different imaging systems and configurations are available for a variety of applications and fields, including remote sensing,1,2 quality assessment of agrofood products,3,4 and biomedical and forensic imaging.5–9 Normal, multispectral, hyperspectral, and ultraspectral imaging are some of the common terms used in spectral imaging; in that order, they denote an increasing number of wavelength bands and higher precision. Two classification criteria are presented in Table 1.1,10 Both criteria use the number of wavelength bands as one of the defining parameters; however, Fresse et al.10 use precision, whereas Puschell1 uses resolution. Because the number of wavelength bands is common to both definitions, it is straightforward to classify a system as a hyperspectral (HS) or ultraspectral imager. The first definition, which is stricter, is used to define the type of spectral imaging employed in the pushbroom imager presented in this paper.

Table 1

Classification of spectral imaging.

Spectral imaging    Fresse et al.10                       Puschell1
                    Band no.          Precision (Δλ/λ)    Band no.         Resolution (nm)
Multispectral       5 to 10           0.1                 Order of 10      20 to 100
Hyperspectral       100 to 200        0.01                30 to 300        <10
Ultraspectral       1000 to 10,000    0.001               >300             <1

Hyperspectral imaging (HSI) and ultraspectral imaging give more detailed spectral signatures for identification purposes, with higher accuracy, by detecting a larger number of wavelength bands with increased spectral range, precision, and resolution.

Like other forms of spectral imaging, HSI requires a component such as a spectrograph to spectrally split or filter the incoming light from the sample. Thereafter, the light needs to be captured by a sensor. There are four main types of imager, namely spatial-scanning, spectral-scanning, spatiospectral-scanning, and snapshot imagers. Each imager has its own advantages and constraints and can be used for specific objectives or applications. All of them can be used to obtain the same type of information, known as a datacube. A datacube stores the intensity value in three dimensions (spatial-spatial-spectral). The value in each voxel [similar to a pixel in a two-dimensional (2-D) dataset] of a datacube indicates the intensity of a specific wavelength from one spatial point in a 2-D sample.11

A spatial-scanning imager usually uses a dispersive element such as a prism-grating-prism assembly in a spectrograph12 to separate the incoming spectrum so that the constituent wavelength bands can be detected by a detector array. The conventional point-scanning spectrometer records the spectrum of one spatial point in each scan to give one-dimensional (1-D) spectral information. By repeating the scan across multiple points in a 2-D area (whiskbroom imager), a datacube can be formed. Spatial scanning can be done using a 2-D stage to move the sample or using a microelectromechanical system scanner to direct the point illumination to different parts of the sample.13 With a large sample, the data acquisition time can be long because scanning needs to be repeated for each point in the sample. A faster alternative in this category is the line-scanning or pushbroom imager.14,15 In each scan, a line-scanning imager captures the individual spectrum of every point across a line of the sample. The light from this line of the sample is dispersed into different wavelengths onto the 2-D detector, so the image on the detector has one spatial and one spectral dimension. The scan is repeated by relative motion perpendicular to the detector’s line of view (LOV) of the sample. Whereas point scanning must be repeated over a 2-D grid of points, line scanning only needs to be repeated in one dimension, row by row. This reduces the acquisition time by a factor equal to the number of points along the line, which can be significant.
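As a minimal illustration of this stacking (a NumPy sketch with assumed array sizes, not the authors’ LabVIEW/MATLAB implementation), each pushbroom frame has one spatial and one spectral axis, and successive frames are stacked along the scan axis to form the datacube:

import numpy as np

n_scans, n_points, n_bands = 100, 554, 756   # illustrative sizes only

def acquire_line_frame(scan_index):
    # Placeholder for one pushbroom exposure: (points along the line) x (wavelength bands)
    return np.zeros((n_points, n_bands))

# One frame per stage position, stacked along a new scan (y) axis
datacube = np.stack([acquire_line_frame(i) for i in range(n_scans)], axis=0)
print(datacube.shape)   # (100, 554, 756): scan position x spatial x wavelength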

The development of acousto-optic tunable filters16,17 and liquid crystal tunable filters18,19 makes a spectral-scanning imager possible. These filters can be electronically tuned to selectively choose a wavelength band to be transmitted and be imaged by the detector. Each scan provides intensity data of the sample (2-D spatial-spatial) for a wavelength band and is repeated by tuning the filter so that the next wavelength band (1-D spectral) can be recorded.

In a spatiospectral-scanning imager, each scan provides intensity data of a diagonal slice of the datacube. This differs from a spatial- and spectral-scanning imager, which performs scanning in the orthogonal direction of the datacube. Relative motion between the detector and the sample is required to give a sequence of diagonal slices to build the datacube.20

A snapshot imager acquires all the information required to build a datacube in one scan. Thus, no temporal scanning or relative motion between the sample and imager is required.21

The main advantage of a pushbroom imager over a spectral-scanning or snapshot imager is that it offers a much higher spectral resolution across a broad spectral range. However, its main drawback is the need to have relative motion between the sample and the detector, which can limit its data acquisition rate.

A few spatial-scanning HS imagers have been reported in the literature for biomedical-related applications. Some of these imagers do not use a video camera (VC) in the system,14,22 whereas others incorporate a VC in the setup for direct video imaging,23,24 which has many benefits. Using a VC for direct video imaging gives a better visual representation with a color image, which can be used to verify the data after measurement. The VC and detector camera (DC) can be positioned such that both cameras capture a focused image simultaneously. Using the VC, samples of different thicknesses can be easily positioned to maintain the same working distance. The VC also allows the sample to be positioned precisely, which is especially important for a system with a small field-of-view (FOV). Unwanted and repeated scanning can be prevented, which saves time and minimizes deterioration of the sample. However, direct video imaging capability alone does not allow the user to designate exactly which area within the FOV of the VC is the region of interest (ROI).

In this context, this paper reports the instrumentation, calibration, and theoretical framework used to set up a pushbroom HS imager incorporating a VC for both direct video imaging and a user-selectable ROI. The advantages of using such a configuration include the benefits of direct video imaging mentioned earlier. The user-selectable ROI function allows only the information from within the ROI to be stored, minimizing measurement time, data size, and computational time. This precise extraction of information from only within the ROI is accomplished by mechanical and digital means: the top-to-bottom scanning of the ROI (its height) is performed by an automated scanning stage, whereas restricting the data to the spectral range of interest and to the width of the ROI is done digitally.

2.

Instrumentation of the Pushbroom HS Imager

The proposed pushbroom HS imager’s design and configuration is shown in Fig. 1. The imager consists of a three-axis motorized stage (Physik Instrumente, x- and y-axes: M-112.2DG, z-axis: M-110.1DG) to position the sample. The y-axis stage is used to move the sample between each scan. Light from the sample then passes through the forelens (Navitar 2-50145 doublet lens), which is mounted in a fine focus adapter (Navitar 2-16265). This adapter is attached to the bottom side of the quadrocular adapter (Nikon Y-QT), which houses a sliding mirror. Before scanning commences, the sliding mirror is pushed into the quadrocular adapter and directs light toward the VC (Path 1 in Fig. 1). The VC (iDS UI-1550LE-C-HQ) has 1600×1200 light-sensitive pixels, allowing direct video imaging of the sample. The software developed allows the user to choose a particular region within this FOV as the ROI. After selection of the ROI, the sliding mirror is pulled out of the quadrocular adapter so that light travels straight toward the spectrograph and the DC (Path 2 in Fig. 1), and scanning can begin. The spectrograph (Specim ImSpector V10E, 400 to 1000 nm, 2.8-nm spectral resolution) is used for the dispersion of light, and the DC (Andor EMCCD LucaEM DL-604M-OEM, 400 to 1000 nm), with 1004×1002 light-sensitive pixels, is used to record the spectral information.

Fig. 1

Schematic diagram of the proposed pushbroom hyperspectral imager.


3.

Calibration of the Pushbroom HS Imager

The calibration can be divided into three main parts (FOV, spectral, and position).

3.1.

FOV Calibration

CalFOV (mm) refers to the length of the FOV of the VC in the vertical direction. The measurement is done by first placing a sample onto the stage. The stage is then displaced to move the sample’s reference point from the top to the bottom of the FOV of the VC; this stage displacement is CalFOV. At the minimum and maximum zooms (adjusted using the fine focus adapter), CalFOV is measured to be 5.17 and 4.32 mm, respectively. The results presented in the following sections of this paper are all at maximum zoom, where CalFOV is 4.32 mm.

3.2.

Spectral Calibration

The spectrum from each sample point along the DC’s LOV is dispersed by the spectrograph, and each spectrum spreads along the y-axis of the DC. This calibration assigns each DC row index (DCY) to a specific wavelength band. Calibration is carried out by imaging a flat sample illuminated by 12 calibration wavelengths (WLCal) (470 nm and 500 to 1000 nm in 50-nm steps) from a tunable laser source (NKT Photonics SuperK Extreme EXR-15, SuperK Select 4xVIS/IR, SuperK Select-/nIR1). The second-order polynomial model used to relate each DCY to its calibration wavelength is shown in Eq. (1), where a, b, and c are constants. A second-order polynomial regression is then used to determine the values of a, b, and c, which are found to be a = 7.34536 × 10⁻⁵ nm, b = 0.725977 nm, and c = 331.871 nm. With these constants, each DCY can later be assigned a wavelength.

Eq. (1)

WLCal = a·DCY² + b·DCY + c.
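For illustration only (not the authors’ LabVIEW implementation), the regression behind Eq. (1) can be reproduced with a least-squares polynomial fit; here the DC row positions of the calibration lines are synthesized from the reported constants, whereas in practice they would be measured from the DC images:

import numpy as np

# Calibration wavelengths from the tunable laser (nm): 470 nm and 500 to 1000 nm in 50-nm steps
wl_cal = np.array([470.0] + list(np.arange(500.0, 1001.0, 50.0)))

# Reported constants of Eq. (1) (Sec. 3.2), used here only to synthesize example line positions
a_ref, b_ref, c_ref = 7.34536e-5, 0.725977, 331.871
dc_y = (-b_ref + np.sqrt(b_ref**2 - 4 * a_ref * (c_ref - wl_cal))) / (2 * a_ref)

# Second-order least-squares fit WLCal = a*DCY^2 + b*DCY + c recovers the constants
a, b, c = np.polyfit(dc_y, wl_cal, deg=2)
print(a, b, c)   # approximately 7.345e-5, 0.726, 331.87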

3.3.

Position Calibration

The VC and DC have different views of the sample. The VC has a rectangular view of the sample, whereas the DC has an LOV across the sample. The length of the DC’s LOV is also longer than the width of the VC’s FOV (Fig. 2). Thus, position calibration between these two cameras is required.

Fig. 2

Field-of-view (FOV) of video camera (VC) and line-of-view (LOV) of detector camera (DC).


3.3.1.

CalL and CalR

This calibration is done because the width across the sample viewed by the VC is narrower than that viewed by the DC. CalL and CalR refer to the DC column indices (DCX) corresponding to the extreme left and right views of the VC, respectively. The sample used is a United States Air Force (USAF) chart, placed such that the left edge of a dark square is along the extreme left view of the VC. By looking at the DC image, the dark square is easily identified. The DCX that corresponds to the left side of the dark portion is CalL. This process is shown in Fig. 3. CalL is found to be 224, which means that the leftmost view of the VC is imaged onto the 224th DC column. CalR is obtained using a similar procedure and is found to be 777.

Fig. 3

CalL calibration.


3.3.2.

CalLOV

This calibration is done to determine the VC row index (VCY) that shares the same view as the DC’s LOV. CalLOV can be found by first looking at the DC view and then slowly changing the sample’s position until a change in the DC view is observed, which happens when the sample enters the DC’s LOV. CalLOV is found to be 542; that is, the DC’s LOV across the sample corresponds to the 542nd row from the top of the VC’s light-sensitive pixel array.

4.

User Defined Parameters

These parameters give the user flexibility in using the system so that it can run faster and store only the data required for later analysis.

4.1.

Region of Interest

The user-selectable ROI determines the sample region within the VC’s FOV from which the data are collected and stored. Selection is done by simply dragging a rectangular section across the VC’s FOV. The ROI is described by four parameters: “Top,” “Bottom,” “Left,” and “Right.” “Top” and “Bottom” refer to the VCY that correspond to the top and bottom of the ROI, respectively. “Left” and “Right” refer to the VC column indices (VCX) that correspond to the extreme left and right views of the ROI, respectively. A shorter ROI (vertical direction) results in fewer scans, thus reducing data acquisition time and data size. A narrower ROI (horizontal direction) does not reduce the data acquisition time but does reduce the data size.

4.2.

Spectral Range

The spectral range of both the DC and the spectrograph is 400 to 1000 nm; therefore, the maximum spectral range of the integrated system is also 400 to 1000 nm. The user-selected spectral range is defined using WLMin (nm) and WLMax (nm), which depend on the illumination source and the spectral range of interest. Spectral information beyond this range will not be recorded. A smaller spectral range results in a smaller data size but does not affect the acquisition time.

4.3.

Stage Step Size

The pushbroom HS imager sequentially scans the ROI from top to bottom. The distance that the y-axis stage moves between subsequent scans is defined by “Step.” For example, when Step is set to 5, the y-axis stage moves by the distance covered by five rows of VC pixels. A larger Step results in a shorter acquisition time but gives a poorer y-axis spatial resolution. Thus, Step has to be adjusted to balance data acquisition time against y-axis spatial resolution.

4.4.

DC Setting

The exposure time and electron-multiplying (EM) gain of the DC can be adjusted depending on the illumination conditions. A high EM gain is used under low-intensity illumination to increase DC sensitivity; an EM gain that is too high can, however, lead to DC pixel saturation. The EM gain and exposure time have to be optimized together to keep the exposure time short while still obtaining high-quality DC images, which reduces the overall data acquisition time.

5.

Return Values/Vectors

All the steps and procedures described in Secs. 3 and 4 are used to produce four return values and two vectors (described in detail in this section). These are used to command the DC and the y-axis stage so that data are collected only as specified by the user-defined parameters.

5.1.

XMin and XMax

XMin and XMax refer to the DCX that correspond to the left and right of the ROI, respectively. Each scan records data from the DC between XMin and XMax only. VCX and DCX are akin to two different scales referring to the same object (Fig. 4). Linear interpolation is used to determine the values of XMin and XMax, as shown in Eqs. (2) and (3), respectively; the CalL and CalR obtained in Sec. 3.3.1 are used here. The DC does not recognize XMax directly; it requires the starting column index XMin and the number of columns XLength, which is calculated using Eq. (4).

Eq. (2)

XMin = rd[(Left − 1)/(1600 − 1) · (CalR − CalL) + CalL],

Eq. (3)

XMax = rd[(Right − 1)/(1600 − 1) · (CalR − CalL) + CalL],

Eq. (4)

XLength = XMax − XMin + 1,
where rd means round off to nearest integer.
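The following sketch (illustrative only; the ROI columns are hypothetical, while CalL and CalR are the values reported in Sec. 3.3.1) applies Eqs. (2)–(4):

# Map VC column indices (1 to 1600) to DC column indices using the position calibration
CAL_L, CAL_R = 224, 777       # DC columns of the VC's extreme left/right views (Sec. 3.3.1)
VC_COLUMNS = 1600

def vcx_to_dcx(vcx):
    # Linear interpolation of Eqs. (2) and (3), rounded to the nearest DC column
    return round((vcx - 1) / (VC_COLUMNS - 1) * (CAL_R - CAL_L) + CAL_L)

left, right = 400, 1200                       # hypothetical ROI columns in VC coordinates
x_min, x_max = vcx_to_dcx(left), vcx_to_dcx(right)
x_length = x_max - x_min + 1                  # Eq. (4): the DC takes a start column and a length
print(x_min, x_max, x_length)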

Fig. 4

XMin and XMax.


5.2.

WL Vector

WL is the wavelength assigned to each DCY. WL is calculated using Eq. (5). The constants a, b, and c obtained in Sec. 3.2 for spectral calibration are used here.

Eq. (5)

WL = a·DCY² + b·DCY + c.
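A minimal sketch of the WL vector computation (the recorded row range is hypothetical; a, b, and c are the constants from Sec. 3.2):

import numpy as np

a, b, c = 7.34536e-5, 0.725977, 331.871
dc_y = np.arange(226, 484)            # hypothetical recorded DC rows (YMin to YMax)
wl = a * dc_y**2 + b * dc_y + c       # Eq. (5): wavelength (nm) assigned to each recorded row
print(wl[0], wl[-1])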

5.3.

YMin and YMax

YMin and YMax refer to the DCY that correspond to WLMin and WLMax of the selected spectral range, respectively. In each scan, only data between DC row indices YMin and YMax will be recorded. The constants a, b, and c from the spectral calibration in Sec. 3.2 are used here. YMin and YMax can be determined from the positive root of a quadratic equation. Equation (6) is formed by rearranging Eq. (1) for WLMin, and its positive root gives YMin through Eq. (7). Similarly, YMax is calculated using Eq. (8). The DC does not recognize YMax directly; it requires the starting row index YMin and the number of rows YLength, which is calculated using Eq. (9).

Eq. (6)

a·DCY² + b·DCY + (c − WLMin) = 0,

Eq. (7)

YMin = rd[(−b + √(b² − 4a(c − WLMin)))/(2a)],

Eq. (8)

YMax = rd[(−b + √(b² − 4a(c − WLMax)))/(2a)],

Eq. (9)

YLength = YMax − YMin + 1.
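A corresponding sketch for Eqs. (6)–(9), using the calibration constants from Sec. 3.2 and a hypothetical user-selected spectral range:

import math

a, b, c = 7.34536e-5, 0.725977, 331.871   # spectral-calibration constants (Sec. 3.2)

def wavelength_to_dcy(wl_nm):
    # Positive root of Eq. (6), rounded as in Eqs. (7) and (8)
    return round((-b + math.sqrt(b**2 - 4 * a * (c - wl_nm))) / (2 * a))

wl_min, wl_max = 500.0, 700.0             # hypothetical user-selected spectral range (nm)
y_min, y_max = wavelength_to_dcy(wl_min), wavelength_to_dcy(wl_max)
y_length = y_max - y_min + 1              # Eq. (9): the DC takes a start row and a number of rows
print(y_min, y_max, y_length)
# Over the full 400 to 1000 nm range, the same formulas give the maximum YLength of 756 noted below.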

Because the spectral range of both the spectrograph and the DC is 400 to 1000 nm, the maximum spectral range of this system is also 400 to 1000 nm. The maximum YLength is calculated to be 756, which means that the pushbroom HSI system detects 756 wavelength bands across the 400 to 1000 nm spectral range. Using the chosen definition of spectral imaging from Fresse et al.,10 this system is classified as an HS imager. The average spectral spacing between adjacent bands is about (1000 − 400) nm/(756 − 1) ≈ 0.795 nm, but the spectrograph has a spectral resolution of 2.8 nm; therefore, the system’s overall spectral resolution is 2.8 nm.

5.4.

Stage Position Vector

This vector holds the positions that the y-axis stage needs to take throughout the entire scan so that only the ROI is scanned, from its top to its bottom, at the stage step specified by the user. The positions are counted from the home position of the y-axis stage. CalFOV and CalLOV from Secs. 3.1 and 3.3.2, respectively, are needed.

The relationship between the count and displacement of the y-axis stage (CD) was determined to be about 116,508.4 counts/mm using the y-axis stage’s specifications.

Prior to the first scan, the y-axis stage shifts the sample until the top of the ROI is in line with the DC’s LOV. This displacement in millimeters is calculated using Top, CalLOV, and CalFOV and is then converted to a displacement in counts of the y-axis stage using CD. By adding this to the current y-axis stage position in counts (YPos), the position of the y-axis stage in counts for the first scan (PosStart) is calculated using Eq. (10). Similarly, the position of the final scan (PosEnd) is calculated from Eq. (11).

The y-axis stage is closer to its home position during the first scan compared to the last scan. Thus, PosStart is smaller than PosEnd. The step in counts of the y-axis stage (StepCts) is calculated with respect to the user defined Step, CalFOV, and CD using Eq. (12).

Eq. (10)

PosStart = [(Top − CalLOV)/1200]·CalFOV·CD + YPos,

Eq. (11)

PosEnd = [(Bottom − CalLOV)/1200]·CalFOV·CD + YPos,

Eq. (12)

StepCts = (Step/1200)·CalFOV·CD.
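A sketch of Eqs. (10)–(12) and the resulting stage position vector (CalFOV, CalLOV, and CD are the reported values; Top, Bottom, Step, and YPos are hypothetical, and results are rounded to whole counts for this sketch):

import numpy as np

CAL_FOV = 4.32          # mm, vertical VC FOV at maximum zoom (Sec. 3.1)
CAL_LOV = 542           # VC row aligned with the DC's LOV (Sec. 3.3.2)
CD = 116508.4           # y-axis stage counts per mm (Sec. 5.4)
VC_ROWS = 1200

top, bottom, step = 300, 800, 5       # hypothetical ROI rows (VCY) and step in VC rows
y_pos = 500000                        # hypothetical current y-axis stage position in counts

pos_start = round((top - CAL_LOV) / VC_ROWS * CAL_FOV * CD + y_pos)     # Eq. (10)
pos_end = round((bottom - CAL_LOV) / VC_ROWS * CAL_FOV * CD + y_pos)    # Eq. (11)
step_cts = round(step / VC_ROWS * CAL_FOV * CD)                         # Eq. (12)

# One stage position per scan, from the top to the bottom of the ROI
stage_positions = np.arange(pos_start, pos_end + 1, step_cts)
print(len(stage_positions), stage_positions[0], stage_positions[-1])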

5.5.

Significance of Return Values

XMin and XMax are related to the location of the ROI in the x-direction, whereas YMin and YMax correspond to the user-defined spectral range. Together, these four values define the region on the DC pixel array from which data are recorded in each scan. Each scan produces an array of data in the spatial-spectral domain, after which the stage moves on to the next position. This process is repeated until scanning has taken place at all the positions in the stage position vector.
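Putting the return values together, the acquisition loop has roughly the following structure (a sketch with placeholder hardware calls, not the actual camera or stage API used by HyperSpec):

import numpy as np

def move_stage_to(position_counts):
    # Placeholder: command the y-axis stage to an absolute position in counts
    pass

def grab_dc_subframe(x_min, x_length, y_min, y_length):
    # Placeholder: expose the DC and read out only the region defined by the return values
    return np.zeros((y_length, x_length))    # rows = wavelength bands, columns = spatial points

def scan_roi(stage_positions, x_min, x_length, y_min, y_length):
    frames = []
    for pos in stage_positions:
        move_stage_to(pos)                   # position the sample line for this scan
        frames.append(grab_dc_subframe(x_min, x_length, y_min, y_length))
    return np.stack(frames, axis=0)          # (scan position, wavelength, spatial)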

6.

HyperSpec

A LabVIEW®-based software package, called HyperSpec, has been developed in-house. The control panel is shown in Fig. 5. It is used for the software interfacing of the VC, DC, and three-axis stage and incorporates all the points discussed in Secs. 3, 4, and 5. After calibration and entry of the user-defined parameters, scanning can begin. The return values are determined automatically, and the repeated process of stage movement followed by DC data recording runs on its own. After all the scanning has been completed, the stage returns the sample to the position it was at just before scanning started.

Fig. 5

HyperSpec control panel.


7.

Data Rearrangement and Representation

The saved files are imported and processed by an in-house MATLAB® script. The script arranges the 2-D data into a single three-dimensional datacube. Since data representation is flexible and can vary depending on the needs, parameters can be altered and customized. Many types of plots can be produced, such as spectrum plots, images at different wavelength bands, and a datacube rendering.

8.

Measurement and Results

The measurements in this section were taken at maximum zoom, where the full FOV of the VC is about 4.32×5.76 mm² and the working distance is about 21.5 cm.

8.1.

VC for Selectable ROI

A USAF resolution chart is used in this section, with a fiber-optic light source (Edmund Optics MI-150) providing illumination. The full VC FOV before measurement and the selected ROI (Group 3 of the USAF chart), indicated by a black rectangle, can be seen in Fig. 5. This section first shows the different plots that can be produced from each set of data (Figs. 6–8). It then compares the image of the selected ROI with the spatial image captured by the system at a particular wavelength to validate whether the system is working well and capturing data only from the ROI.

Fig. 6

(a) Sequence of data acquisition and (b) datacube.


Fig. 7

(a) Cut datacube and (b) wavelength stack of bands from 550 to 750 nm in 25-nm steps.


Fig. 8

Intensity mapping of nine selected spectral bands from the 756 total acquired bands (100×677 pixels).


Figure 9 is made up of an image of the ROI, with two identical images from the data at 650 nm placed beside and below it. The four dashed lines in this figure match features in the ROI to the same features in the data. It is therefore observed that the system performed scanning only across the selected ROI and that only data in the ROI are saved, validating the steps and formulas described in Secs. 3 to 7. The longer vertical dotted line also shows that the ROI and the data have the same orientation; therefore, the DC, VC, and y-axis stage are all aligned with respect to each other. The VC is thus successfully incorporated into the pushbroom HS imager for a user-selectable ROI that minimizes data acquisition time and data size.

Fig. 9

Comparison of region of interest (ROI) and intensity mapping for validation of user-selectable ROI capability.


8.2.

Determining Lateral Resolution Using USAF Chart

This section uses the same set of data as Sec. 8.1. From Fig. 10, the horizontal and vertical lines of Group 3 Element 5 can still be distinguished. Thus, the horizontal and vertical lateral resolution of the system at 650 nm, in the basic configuration without any image enhancement, is determined from Group 3 Element 5 of the USAF chart and is calculated to be about 40 μm.
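For reference, the standard USAF 1951 chart arithmetic (not stated explicitly in the text) behind this figure is: spatial frequency f = 2^(3 + (5 − 1)/6) ≈ 12.7 line pairs/mm for Group 3 Element 5, so one line width is 1/(2f) ≈ 39.4 μm, i.e., about 40 μm.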

Fig. 10

(a) ROI and (b) intensity mapping of 650 nm.


8.3.

Sample Analysis Demonstration Using Chicken Breast in Reflection Mode

Chicken breast tissue devoid of fat and skin is used, with a visible blood clot on the surface. This part of the chicken breast was chosen so that the blood clot provides contrast in the image. The sample on the glass slide and the ROI are shown in Figs. 11(a) and 11(b), respectively. The same illumination source as in Sec. 8.1 (Edmund Optics MI-150) was used for this study. Figure 12 shows the intensity mapping at four different wavelengths. The regions where 400 spectra were extracted and processed to represent the spectra of the blood clot and the chicken breast tissue are marked by the small white and black rectangles, respectively, in Figs. 11(b) and 12. Figure 13 shows the processed spectra of the chicken breast tissue and the blood clot, which are found to be easily distinguishable from each other. These results indicate that such spectral data can be used as a data library to compare and identify unknown samples in the future.

Fig. 11

(a) Chicken breast tissue on glass slide and (b) ROI.


Fig. 12

Intensity mapping of (a) 550 nm, (b) 630 nm, (c) 670 nm, and (d) 850 nm.


Fig. 13

Spectra of blood clot and chicken breast tissue.


8.4.

Fluorescence Imaging

A Rhodamine 6G fluorescent film, which has been placed on a tissue phantom (Simulab Corporation), and the ROI are shown in Figs. 14(a) and 14(b), respectively. An excitation wavelength of 500 nm (NKT Photonics SuperK Extreme EXR-15, SuperK Select 4xVIS/IR) was used together with a beam expander unit so that the expanded beam covered the entire FOV of the VC. The measurement was taken with an exposure time of 150 ms and an EM gain of 10. The entire spectral range from 400 to 1000 nm was recorded, though Fig. 16 shows only a shorter spectral range for a better representation.

Fig. 14

(a) Rhodamine 6G fluorescent film on tissue phantom and (b) ROI.


The intensity mapping of 535, 563, and 585 nm is shown in Fig. 15 to illustrate the differences in fluorescence intensity at varying wavelengths. The fluorescence spectrum is calculated from the area within the black boxes in Figs. 14(b) and 15. Figure 16 shows the processed excitation and fluorescence spectra, each normalized with respect to itself. The orange solid line shows the fluorescence spectrum which is an average of 400 spectra from the region indicated by the black rectangle in Figs. 14(b) and 15. The green dotted line is the excitation spectrum measured separately from a piece of white paper.

Fig. 15

Intensity mapping of (a) 535 nm, (b) 563 nm (peak emission), and (c) 585 nm.


Fig. 16

Normalized excitation and fluorescence spectra.


HSI of fluorescing samples is able to capture multiple fluorescence images at different wavelength bands; in this study, about 250 fluorescence images were captured between 500 and 700 nm (three of them are shown in Fig. 15). Compared to a conventional imaging setup, which uses a fluorescence filter to capture all the emission wavelengths in a single image, HSI provides much more information that can be used for more accurate disease diagnosis. This can prove useful in disease diagnosis of the colon, where the intensity and distribution of endogenous fluorophores are indicators of disease progression.25

9.

Conclusion

A pushbroom HS imager that incorporates a VC not only for direct video imaging (benefits mentioned in Sec. 1) but also for a user-selectable ROI within the full imaging FOV of the VC is proposed and demonstrated in this paper. These concepts bring several benefits, especially to a pushbroom HS imager. After selecting the ROI, scanning takes place only within the ROI. There is no unwanted scanning, thus minimizing the data acquisition time as well as the data size. A smaller data size in turn translates to a shorter computational time in data processing and analysis. A similar approach can also be applied to spectral-scanning and snapshot imagers; however, it will not result in a shorter data acquisition time for a spectral-scanning imager (the number of scans depends on the number of spectral bands, not the size of the ROI) or a snapshot imager (only one scan is required). The use of a VC for a user-selectable ROI presented in this paper helps to offset the pushbroom HS imager's reputation as a relatively slow HS imager.

In the current configuration, the VC has a full imaging FOV that is adjustable using the fine focus adapter. The minimum and maximum full imaging FOVs are about 4.32×5.76 mm² (working distance of about 21.5 cm) and 5.17×6.89 mm² (working distance of about 23.8 cm), respectively. The full FOV is also the maximum size of the ROI that can be selected by the user. The system has a maximum spectral range covering visible to near-infrared light from 400 to 1000 nm and can detect 756 spectral bands within this range. By using a DC and spectrograph suitable for imaging wavelengths above 1000 nm, it is possible to extend the spectral range further into the infrared. The horizontal and vertical lateral resolution of this system at maximum zoom, without the use of any image enhancement, is about 40 μm. Such a lateral resolution makes the system suitable for biomedical imaging on tissue.

In reflection-mode imaging, a common and relatively cheap quartz halogen white light source (Edmund Optics MI-150) was used. With respect to the maximum spectral range of interest (400 to 1000 nm), the bulb used in this light source has a low output from 400 to 500 nm and from 800 to 1000 nm. Within the same spectral regions, the DC also has a lower quantum efficiency. This can be seen in the spectral plot from the reflection mode (Fig. 13), where intensity counts below 450 nm and above 900 nm are much lower than at the central wavelengths. Without changing the DC, this issue can be resolved by using a light source with a higher intensity at the extreme ends of the maximum spectral range of interest.

The experiments with the biological and fluorescent phantom samples shown here also demonstrate that the developed pushbroom HS imager can be used in both reflection and fluorescence imaging modalities. The lateral resolution can be varied and improved using additional optical elements and optodigital schemes. The use of this system can also be extended to other applications such as cellular-scale biomedical imaging. In addition, by integrating the proposed configuration with a flexible probe scheme, the system is expected to find potential endoscopic imaging applications as well.

Acknowledgments

The authors acknowledge the financial support received through COLE-EDB and PhotoniTech-NTU RCA.

References

1. J. J. Puschell, “Hyperspectral imagers for current and future missions,” Proc. SPIE 4041, 121–132 (2000). http://dx.doi.org/10.1117/12.390476

2. J. Nieke et al., “Imaging spaceborne and airborne sensor systems in the beginning of the next century,” Proc. SPIE 3221, 581–592 (1997). http://dx.doi.org/10.1117/12.298124

3. D. Lorente et al., “Recent advances and applications of hyperspectral imaging for fruit and vegetable quality assessment,” Food Bioprocess. Tech. 5(4), 1121–1142 (2012). http://dx.doi.org/10.1007/s11947-011-0725-1

4. R. Cubeddu et al., “Nondestructive quantification of chemical and physical properties of fruits by time-resolved reflectance spectroscopy in the wavelength range 650–1000 nm,” Appl. Opt. 40(4), 538–543 (2001). http://dx.doi.org/10.1364/AO.40.000538

5. V. K. Shinoj and V. M. Murukeshan, “Hollow-core photonic crystal fiber based multifunctional optical system for trapping, position sensing, and detection of fluorescent particles,” Opt. Lett. 37(10), 1607–1609 (2012). http://dx.doi.org/10.1364/OL.37.001607

6. V. M. Murukeshan and N. U. Sujatha, “Integrated simultaneous dual-modality imaging endospeckle fluoroscope system for early colon cancer diagnosis,” Opt. Eng. 44(11), 110501 (2005). http://dx.doi.org/10.1117/1.2117487

7. H. Cen and R. Lu, “Optimization of the hyperspectral imaging-based spatially-resolved system for measuring the optical properties of biological materials,” Opt. Express 18(16), 17412–17432 (2010). http://dx.doi.org/10.1364/OE.18.017412

8. L. Seah et al., “Fluorescence optimisation and lifetime studies of fingerprints treated with magnetic powders,” Forensic Sci. Int. 152(2), 249–257 (2005). http://dx.doi.org/10.1016/j.forsciint.2004.09.121

9. L. Seah et al., “Time-resolved imaging of latent fingerprints with nanosecond resolution,” Opt. Laser Technol. 36(5), 371–376 (2004). http://dx.doi.org/10.1016/j.optlastec.2003.10.006

10. V. Fresse, D. Houzet, and C. Gravier, “GPU architecture evaluation for multispectral and hyperspectral image analysis,” in Proc. IEEE Conf. Design and Architectures for Signal and Image Processing, 121–127 (2010). https://doi.org/10.1109/DASIP.2010.5706255

11. N. Gat, “Imaging spectroscopy using tunable filters: a review,” Proc. SPIE 4056, 50–64 (2000). http://dx.doi.org/10.1117/12.381686

12. M. Kosec et al., “Characterization of a spectrograph based hyperspectral imaging system,” Opt. Express 21(10), 12085–12099 (2013). http://dx.doi.org/10.1364/OE.21.012085

13. Y. Wang et al., “MEMS scanner enabled real-time depth sensitive hyperspectral imaging of biological tissue,” Opt. Express 18(23), 24101–24108 (2010). http://dx.doi.org/10.1364/OE.18.024101

14. Z. Liu et al., “Parallel scan hyperspectral fluorescence imaging system and biomedical application for microarrays,” J. Phys.: Conf. Ser. 277, 012023 (2011). http://dx.doi.org/10.1088/1742-6596/277/1/012023

15. R. A. Schultz et al., “Hyperspectral imaging: a novel approach for microscopic analysis,” Cytometry 43(4), 239–247 (2001). http://dx.doi.org/10.1002/(ISSN)1097-0320

16. R. Leitner, T. Arnold, and M. De Biasio, “High-sensitivity hyperspectral imager for biomedical video diagnostic applications,” Proc. SPIE 7674, 76740E (2010). http://dx.doi.org/10.1117/12.849442

17. Y. Guan et al., “New-styled system based on hyperspectral imaging,” in Proc. IEEE Conf. Photonics and Optoelectronics, 1–3 (2011). https://doi.org/10.1109/SOPO.2011.5780492

18. M. E. Martin et al., “Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection,” Ann. Biomed. Eng. 34(6), 1061–1068 (2006). http://dx.doi.org/10.1007/s10439-006-9121-9

19. B. S. Sorg et al., “Hyperspectral imaging of hemoglobin saturation in tumor microvasculature and tumor hypoxia development,” J. Biomed. Opt. 10(4), 044004 (2005). http://dx.doi.org/10.1117/1.2003369

20. S. Grusche, “Basic slit spectroscope reveals three-dimensional scenes through diagonal slices of hyperspectral cubes,” Appl. Opt. 53(20), 4594–4603 (2014). http://dx.doi.org/10.1364/AO.53.004594

21. R. T. Kester et al., “Real-time hyperspectral endoscope for early cancer diagnostics,” Proc. SPIE 7555, 75550A (2010). http://dx.doi.org/10.1117/12.842726

22. Z. Liu et al., “Line-monitoring, hyperspectral fluorescence setup for simultaneous multi-analyte biosensing,” Sensors 11(11), 10038–10047 (2011). http://dx.doi.org/10.3390/s111110038

23. M. B. Sinclair et al., “Design, construction, characterization, and application of a hyperspectral microarray scanner,” Appl. Opt. 43(10), 2079–2088 (2004). http://dx.doi.org/10.1364/AO.43.002079

24. M. B. Sinclair et al., “Hyperspectral confocal microscope,” Appl. Opt. 45(24), 6283–6291 (2006). http://dx.doi.org/10.1364/AO.45.006283

25. N. Uedo et al., “Diagnosis of colonic adenomas by new autofluorescence imaging system: a pilot study,” Digest. Endosc. 19, S134–S138 (2007). http://dx.doi.org/10.1111/den.2007.19.issue-s1

Biography

Hoong-Ta Lim received his bachelor’s degree in engineering from NTU in 2012 and is currently pursuing his PhD at the Centre for Optical and Laser Engineering (COLE), School of Mechanical and Aerospace Engineering (MAE), NTU. His main research interests are in the area of multi- and hybrid-modality imaging for biomedical applications.

Vadakke Matham Murukeshan is an associate professor with the School of MAE and deputy director of COLE, NTU. His main research interests are biomedical optics, nanoscale optics, and applied optics for metrology. He has published over 250 research articles in leading journals and conference proceedings and has 6 patents and 8 innovation disclosures. He is a Fellow of the Institute of Physics and is a member of SPIE.

© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2015/$25.00 © 2015 SPIE
Hoong Ta Lim and Vadakke Matham Murukeshan "Pushbroom hyperspectral imaging system with selectable region of interest for medical imaging," Journal of Biomedical Optics 20(4), 046010 (21 April 2015). https://doi.org/10.1117/1.JBO.20.4.046010
Published: 21 April 2015