Open Access
Establishment of hybridized focus measure functions as a universal method for autofocusing
Mohammad Imran Shah, Smriti Mishra, Chittaranjan Rout
Abstract
Exact focusing is essential for any automatic image capturing system. Performances of focus measure functions (FMFs) used for autofocusing are sensitive to image contents and imaging systems. Therefore, identification of a universal FMF assumes a lot of significance. Eight FMFs were hybridized in pairs of two and implemented simultaneously on a single stack to calculate the hybrid focus measure. In total, 28 hybrid FMFs (HFMFs) and eight individual FMFs were implemented on stacks of images from three different imaging modalities. Performance of FMFs was found to be the best at 50% region sampling. Accuracy, focus error, and false maxima were calculated to evaluate the performance of each FMF. Nineteen HFMFs provided >90% accuracy. Image distortion (noise, contrast, saturation, illumination, etc.) was performed to evaluate the robustness of HFMFs. The hybrid of tenengrad variance and steerable filter-based (VGRnSFB) FMFs was identified as the most robust and accurate function, with an accuracy of ≥90% and relatively lower focus error and false maxima rate. Sharpness of the focus curve of VGRnSFB along with the eight individual FMFs was also computed to determine the efficacy of the HFMF for the optimization process. The VGRnSFB HFMF may be implemented for automated capturing of an image for any imaging system.

1.

Introduction

Automated focusing techniques have been widely implemented in various optical imaging systems, such as microscopes, industrial inspection tools, and cameras.1–4 These techniques determine the best-focused image by analyzing the content of a sequence (stack) of images of the same view field acquired at different focal positions. The focused image is defined as the image with the best average focus over the entire view field among a stack of images acquired at different focal positions from a single view field. The maximum value of the focus measure function (FMF) generally corresponds to the best-focused image.5 Studies have indicated that the performance of FMFs depends on image content, which is broadly classified into higher, medium, and lower density background images.6,7 General images have low-density backgrounds, whereas images captured from different experiments, such as those from a conventional bright-field microscope (CM), have higher density backgrounds due to the presence of artifacts, staining dye used for the bacteria, etc. Similarly, fluorescent microscope (FM) images have average (medium) density backgrounds. Most FMFs work efficiently on visible optical systems, such as commercial cameras, and have a higher accuracy rate due to the high resolution and sharp edges of visible images. However, it is difficult to obtain a high accuracy rate for infrared optical systems (near infrared, thermal, etc.) due to the poor resolution, low contrast, and blurred edges of infrared images.8 Studies have also indicated that FMFs performing significantly better on fluorescence microscopy images produced average outcomes on CM9–11 and vice versa.12

Several studies were performed to determine efficient FMFs on microscopic (CM and FM) imaging data, but most of their outcomes led to different conclusions.1,2,5,6,12 Mateos-Pérez et al.2 found that midfrequency discrete cosine transform (96.67%), Vollath's autocorrelation (VCR) (89%), and tenengrad (TGR) (89%) were the efficient FMFs for FM images. Two different studies were performed to determine efficient FMFs for Ziehl–Neelsen (ZN)-stained sputum smear CM images. Six FMFs [normalized gray-level variance (GNV), Brenner gradient (BGR), modified Laplacian, energy of Laplacian (ELP), VCR, and TGR] and three FMFs [ELP, Gaussian derivative (GDR), and variance of the log histogram] were the most commonly used FMFs on CM images in these studies.1,6 VCR and BGR were reported as the best FMFs in the first study, while ELP was the best in the second. GNV, gray-level variance (GLV), and VCR were reported as the most efficient FMFs for CM pathological images.13 Studies were also performed to determine efficient FMFs for visible2,4,14,15 and infrared optical systems.8,16,17 The ELP operator was the best FMF for visible and near-infrared images, while a fast Hessian detector-based FMF was the best in the thermal spectrum (TS).16

Incorporation of automated methods in microscopy can increase sensitivity and specificity by analyzing a large number of view fields.18 Exact focusing is crucial in any automatic microscopy system, as the performance of successive steps, such as automatic object segmentation and classification, depends on it.1 Autofocusing is also very significant in developing consumer-level, user-friendly digital cameras that can capture high-quality images with minimal user intervention.19

To overcome the inconsistent performance of FMFs, this study evaluated the performance of hybrid focus measure functions (HFMFs) across different modalities as well as under different imaging conditions (noise, saturation, etc.). The eight most common autofocus algorithms were hybridized by simultaneously implementing two FMFs on well-established datasets from three different modalities, namely CM, FM, and multispectral (MS) images, to identify efficient HFMFs for any imaging system. The MS datasets contain diverse images from the visible, near-infrared, and TS, which helps in determining a robust and global HFMF. Three different categories of FMF algorithms were incorporated in this study according to their working principles. The changes in performance of HFMFs were also analyzed after image distortion using noise addition, saturation increment, contrast reduction, and uneven illumination to evaluate the effectiveness of this approach. The performance of every HFMF was also compared with the individual FMFs for better interpretation.

2.

Material and Methods

The methodology followed to identify robust HFMFs using different experiments/parameters is shown in Fig. 1 and Table 1.

Fig. 1

Flow diagram of methodology used in identifying robust HFMFs.


Table 1

Experiments performed for performance evaluation of HFMFs.

Category | Experiment | Section(a)
Region sampling | Central parts containing 25%, 50%, and 75% of the original image were used to evaluate FMFs and HFMFs | Sections 2.4 and 3.1
Without preprocessing | Preprocessing technique was not applied | Section 3.2
Preprocessing | Poisson noise addition; saturation increment; contrast reduction; uneven illumination incorporation | Sections 2.5 and 3.3
Performance evaluation | Accuracy; focus error; false maxima | Sections 2.6 and 3.4
Convergence rate | Sharpness curve | Sections 2.7 and 3.4

(a) Methods and results are provided in subsequent sections.

2.1.

Datasets

Three different image modalities, comprising 87 stacks of images in total, were used to evaluate the performance of individual and hybrid FMFs. The three diverse data types cover ZN (CM), FM, and multispectral (MS) images. A detailed description of each imaging modality is given below.

2.1.1.

Ziehl–Neelsen sputum smear conventional microscopy

A total of 31 autofocusing stacks were extracted from the ZN Sputum Smear Microscopy Image Database.20 These stacks were prepared from 10 different ZN-stained sputum smear slides of tuberculosis patients using three different microscopes. Each stack contains 20 images captured at different focus points over the same view field [Fig. 2(a)]. The acquired images were diverse, as image contents ranged from medium to highly noisy backgrounds. Image contents also varied due to improper use of staining dye (over- and under-staining).

Fig. 2

Image modalities used to evaluate HFMFs. (a) Image acquired from ZN sputum smear CM, (b) image acquired from sputum smear fluorescent microscopy (FM), (c) image acquired in VS, (d) image acquired in near-IS, (e) image acquired in TS, (f) depiction of the 50% region sampled area, and (g) depiction of the 25% region sampled area used to evaluate FMFs.


2.1.2.

Fluorescent sputum smear microscopy

In total, 35 autofocusing datasets, prepared from smear slides of 10 patients, were randomly extracted from Ref. 2. Every stack has 20 images that were acquired at different focus points over the same view field [Fig. 2(b)].

2.1.3.

Multispectral dataset

In total, 21 autofocusing datasets in visible, near-infrared, and TS were retrieved from Ref. 16. The images acquired in visible spectrum (VS) were divided into seven sets, where each set contains a stack of 12 images [Fig. 2(c)]. Acquired objects in the VS include headphones, keyboard, keys, loudspeaker, mixer, sunglasses, and guitar. The images acquired in near-infrared spectrum (IS) were divided into seven sets where each set contains a stack of 21 images [Fig. 2(d)]. Acquired objects in IS include building, car, corridor, head, keyboard, office desk, and pens. The images acquired in TS were divided into seven sets where each set contains a stack of 27 images [Fig. 2(e)]. Acquired objects in TS include building, circuit breaker, circuit, car engine, printer, server, and tube.

2.2.

Focus Measure Functions

The eight most common FMFs were included in this study as their performance on ZN (CM) and FM images was good.1,2,21 Other FMFs, such as Laplacian-based and wavelet-based operators, failed drastically on CM images; therefore, they were not included in the current study. These eight FMFs were hybridized, and their performances were evaluated to identify the best-focused images from the ZN, FM, and MS images (Table 2). An FMF has its highest value at the best focus position, and its value decreases sequentially in both directions as focus deteriorates. Three major categories of FMFs and their HFMFs were implemented in MATLAB (Table 2).

Table 2

Category of FMFs used to form the hybrid FMF for identifying the best-focused images.

S. No. | Category | FMF | Reference
1 | Gradient-based | GDR | 22
2 | Gradient-based | TGR | 11
3 | Gradient-based | VGR | 23
4 | Statistics-based | GNV | 24
5 | Statistics-based | VCR | 10 and 11
6 | Other | Helmli and Scherer's mean (HELM) | 25
7 | Other | SFB | 3
8 | Other | Spatial frequency measure (SFM) | 26

2.2.1.

Gradient-based focus measure functions

These functions assume that a well-focused image has more high-frequency content: large intensity differences between neighboring pixels correspond to sharper edges. A higher gradient therefore represents sharper edges, and these FMFs use the gradient (first-order derivative) of the image to find the best-focused image.
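As an illustration, below is a minimal MATLAB sketch of one gradient-based measure in the TGR (Tenengrad) style, i.e., the sum of squared Sobel gradient magnitudes; the exact thresholds and normalization used in the original implementations are not reproduced here.

```matlab
function fm = tenengradFM(img)
% Gradient-based focus measure (Tenengrad-style sketch): sum of squared
% Sobel gradient magnitudes. Larger values indicate sharper edges.
    img = im2double(img);
    if size(img, 3) > 1
        img = rgb2gray(img);
    end
    s = fspecial('sobel');                 % 3x3 Sobel kernel
    gy = imfilter(img, s,  'replicate');   % gradient component in one direction
    gx = imfilter(img, s', 'replicate');   % gradient component in the orthogonal direction
    fm = sum(gx(:).^2 + gy(:).^2);         % focus measure for this image
end
```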

2.2.2.

Statistics-based focus measure functions

These FMFs use various statistical measures, such as standard deviation, variance, and autocorrelation, to identify the best-focused image. Generally, these FMFs are more consistent in the presence of high-frequency noise than derivative-based FMFs.
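For example, a hedged MATLAB sketch of two statistics-based measures used in this study, normalized gray-level variance (GNV) and Vollath's autocorrelation (VCR), following their commonly cited forms rather than the authors' exact code:

```matlab
function [gnv, vcr] = statisticsFM(img)
% Statistics-based focus measures (sketch, commonly cited forms).
    img = im2double(img);
    if size(img, 3) > 1
        img = rgb2gray(img);
    end
    mu = mean(img(:));
    % GNV: intensity variance normalized by the mean intensity
    gnv = sum((img(:) - mu).^2) / (numel(img) * mu);
    % VCR (Vollath F4 form): autocorrelation between rows shifted by 1 and 2 pixels
    vcr = sum(sum(img(1:end-1, :) .* img(2:end, :))) ...
        - sum(sum(img(1:end-2, :) .* img(3:end, :)));
end
```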

2.2.3.

Other focus measure functions

This group contains functions whose working principles do not fall into the above two categories.

2.3.

Hybridization

Eight FMFs were hybridized in pairs, yielding 28 unique combinations; together with the eight individual FMFs, 36 functions were evaluated in total. Hybridization of FMFs means that two FMFs are implemented simultaneously on a single stack to calculate the hybrid focus measure using the following equation:

Eq. (1)

HFMF = (FMF1 + FMF2) / 2,
where FMF1 and FMF2 are the two FMFs applied simultaneously. The hybrid focus measure is the average of the two FMFs.
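A minimal MATLAB sketch of Eq. (1) applied to a stack is shown below; it reuses the tenengradFM and statisticsFM sketches above, and the min-max normalization of each focus curve before averaging is our assumption (the text only states that the hybrid measure is the average of the two FMFs).

```matlab
% stack: cell array of images of the same view field at different focal positions
fm1 = cellfun(@tenengradFM, stack);                      % first FMF across the stack
fm2 = cellfun(@(im) statisticsFM(im), stack);            % second FMF (here: GNV)
norm01 = @(v) (v - min(v)) ./ (max(v) - min(v) + eps);   % assumed normalization step
hfmf = (norm01(fm1) + norm01(fm2)) / 2;                  % Eq. (1): average of the two FMFs
[~, bestIdx] = max(hfmf);                                % predicted best-focused image index
```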

2.4.

Region Sampling

Region sampling was performed to implement FMFs on the 25%, 50%, and 75% central parts of the whole image. For 25% region sampling, 25% of the pixels from the central part of the original image were retained. For a 100×100 (10,000 pixel) image, 25 pixels from each end of the rows and columns were removed to obtain an image of 50×50 dimensions (2500 pixels). The resultant image was thus sampled to 25%, as the total number of pixels was reduced to one fourth of the original image [Fig. 2(g)]. Similarly, region sampling of 50% [Fig. 2(f)] and 75% was performed. Region sampling was performed to achieve better accuracy as well as to reduce computation time.9,27
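The central crop described above can be written as the following MATLAB sketch (our own helper, not the authors' code); retaining a fraction f of the pixels corresponds to keeping sqrt(f) of each dimension around the image center.

```matlab
function cropped = regionSample(img, fraction)
% Central region sampling: keep 'fraction' of the pixels (e.g., 0.25, 0.5, 0.75)
% by cropping sqrt(fraction) of each dimension around the image center.
    [rows, cols, ~] = size(img);
    keepR = round(rows * sqrt(fraction));
    keepC = round(cols * sqrt(fraction));
    r0 = floor((rows - keepR) / 2) + 1;
    c0 = floor((cols - keepC) / 2) + 1;
    cropped = img(r0:r0+keepR-1, c0:c0+keepC-1, :);
end
```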

2.5.

Image Preprocessing

Poisson noise was added to check the robustness of HFMFs to noise. The MATLAB function "imnoise" was used to add Poisson noise generated from the image itself. A scaling factor of 1×10^10 was used to produce a significant effect of noise on the image. In general, FMFs are more sensitive to higher levels of noise.2
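A hedged sketch of this step is given below. The text does not detail exactly how the scaling factor enters the imnoise call; one plausible reading, assumed here, is to scale the intensities down before imnoise and back up afterward, which strengthens the relative Poisson noise.

```matlab
I = im2double(img);       % img: one frame of the stack
scale = 1e10;             % scaling factor reported in the text (exact usage assumed)
noisy = imnoise(I / scale, 'poisson') * scale;   % Poisson noise generated from the image itself
```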

The saturation level of an image also alters the performance of FMFs, as previously tested on normal images.28 To check the efficacy of HFMFs with respect to an increase in saturation level, ZN and FM images were converted to the hue, saturation, and value (HSV) color space. The saturation of the HSV images was then increased by 25% using MATLAB.
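A short MATLAB sketch of this step (the 25% increment applied in HSV space), assuming simple clipping of the saturation channel to its valid range:

```matlab
hsvImg = rgb2hsv(rgbImg);                       % rgbImg: a color ZN or FM frame
hsvImg(:,:,2) = min(hsvImg(:,:,2) * 1.25, 1);   % increase saturation by 25%, clip to [0, 1]
saturated = hsv2rgb(hsvImg);                    % back to RGB for focus-measure evaluation
```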

A reduced contrast level leads to smoothing of edges in images, which makes it harder to differentiate the best-focused image from defocused ones. Contrast reduction was incorporated in the preprocessing step to verify the effectiveness of HFMFs at low contrast levels. Generally, better focus measures are not perturbed by low contrast.28 Contrast was reduced for every stack by mapping the image pixel values to a narrower range.
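One way to map pixel values to a narrower range is MATLAB's imadjust, sketched below; the output range [0.25, 0.75] is an illustrative choice, not the value used by the authors.

```matlab
gray = im2double(img);                              % img: one grayscale frame of the stack
lowContrast = imadjust(gray, [0 1], [0.25 0.75]);   % compress the full input range into a narrower output range
```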

Uneven illumination was incorporated into the images using a luminance gradient to test the effectiveness of HFMFs under the low signal-to-noise ratio conditions caused by poor illumination. A grayscale luminance gradient, represented by a quadratic polynomial function, was multiplied with the original images to obtain the resultant images.
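A sketch of this step using a quadratic fall-off multiplied with the image; the polynomial coefficients below are illustrative assumptions, not the values used in the study.

```matlab
[rows, cols] = size(gray);                          % gray: a grayscale frame
[x, y] = meshgrid(linspace(-1, 1, cols), linspace(-1, 1, rows));
mask = 1 - 0.5 * (x.^2 + y.^2);                     % quadratic luminance gradient (coefficients assumed)
uneven = gray .* mask;                              % multiply with the original image
```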

2.6.

Evaluation of Focus Measures

The following three criteria were used to evaluate the performance of FMFs and HFMFs.29

Accuracy criterion: Each stack was assigned a score of 1 if it was correctly classified; 0.5 if the second-best focus was classified as the best focus and the best and second-best images differed only marginally; and 0 if the stack was misclassified. Finally, the accuracy rate in percent was calculated using the following equation:1

Eq. (2)

Accuracy (%) = (sum of all scores / total number of stacks) × 100.

A higher score represents a more accurate FMF.
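For instance (a hypothetical illustration, not a result from the study), if 29 of 31 stacks score 1, one stack scores 0.5, and one scores 0, the accuracy rate is (29.5 / 31) × 100 ≈ 95.2%.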

Focus error: This criterion determines the difference between the positions of the manually identified and the predicted best-focused images.2

Number of false maxima: This criterion was used to count the number of false maxima produced by an HFMF or FMF, i.e., the number of maxima present in the sharpness curve of the FMF or HFMF excluding the global maximum.1
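A minimal MATLAB sketch of the focus-error and false-maxima criteria over one focus curve; the variable names, the row-vector layout, and the simple neighbor comparison used to detect local maxima are our assumptions.

```matlab
% hfmf: focus-measure values across one stack (row vector); manualIdx: ground-truth frame index
[~, predictedIdx] = max(hfmf);
focusError = abs(predictedIdx - manualIdx);           % distance between predicted and manual best focus
isPeak = [false, hfmf(2:end-1) > hfmf(1:end-2) & hfmf(2:end-1) > hfmf(3:end), false];
peakIdx = find(isPeak);
falseMaxima = numel(setdiff(peakIdx, predictedIdx));  % local maxima other than the global maximum
```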

2.7.

Convergence Rate of Focus Measure Functions

Finally, the "sharpness of the focus curve" was used to identify the FMFs and HFMFs with better convergence rates. It measures the narrowness of the peak. A narrower peak of an FMF represents rapid convergence to the best focus point; hence, such an FMF would be implementable in a real system.16

3.

Results and Discussion

The ZN, FM, and MS (VS, IS, and TS) datasets are diverse in terms of image contents, and performances of FMFs were not consistent in these modalities.12,16 Therefore, this study proposed an autofocus system using HFMF and assumed that some HFMFs could be effective across diverse image modalities as well as different imaging conditions (noise, saturation, etc.). The eight most commonly used FMFs that performed better in highly noisy ZN and FM images were hybridized and implemented.

3.1.

Region Sampling and Hybridization of Focus Measure Function

Different parameters and configurations were checked prior to evaluating the performance of HFMFs. Image regions were sampled to 25%, 50%, and 75% to evaluate the accuracy of FMFs at different region sampling rates in comparison with the original images [Fig. 2(a)]. The overall accuracy of most FMFs was reduced by 1% to 11% at 25% region sampling, while it increased by 1% to 4% on 50% and 75% region sampled images (Fig. 3). HFMF analyses were performed only on 50% region sampled images, as the result was optimal and the mean computation time was minimal at this level. The improved performance of FMFs at 50% and 75% region sampling might be due to better focusing on the central part of the image than at the boundaries. Hybridization of two and three FMFs implemented on separate locations of the same view field was also evaluated, but the performance of most of these FMFs was inconsistent and poor due to the differing imaging contents. Therefore, the FMFs were superimposed on the same location of the view field image to calculate an unbiased focus measure. Combinations of three FMFs yielded poor accuracy for most of the HFMFs, while combinations of two FMFs provided a better accuracy rate. Therefore, only two FMFs were superimposed and used as the final configuration. Performances of HFMFs were evaluated on the overall datasets as well as separately on the three individual dataset types (ZN, FM, and MS) to determine the effect of different imaging modalities on HFMFs. The ZN datasets also contain microscopic images captured with a smartphone camera to evaluate the performance of HFMFs. The mean computational time taken by each FMF or HFMF was determined on an Intel® Core™ i3-3220 CPU at 3.30 GHz with 8 GB of RAM (Table 4 of the Appendix). The comparative performances of HFMFs without preprocessing and after preprocessing are provided in the following sections.

Fig. 3

Accuracy of FMFs in percent with different region sampling data, i.e., 25%, 50%, 75%, and original image.


3.2.

Without Image Preprocessing

Average performances of 36 functions (eight individual FMFs and 28 HFMFs) were computed separately on each dataset at different region sampling rates (Fig. 3). An overall accuracy of more than 90% at 50% region sampling was obtained using 19 HFMFs, which indicated that HFMFs were consistent across different imaging modalities [Fig. 4(a) and Table 3]. Focus error and false maxima rates of these 27 functions (eight individual FMFs and 19 HFMFs) were also computed to validate the analysis. Most of these HFMFs performed accurately and had lower focus error [Fig. 4(b)] and false maxima [Fig. 4(c)], whereas most of the individual FMFs provided an accuracy <90%, with higher focus error and false maxima (except GDR and TGR). The HELMnTGR, SFMnTGR, VGRnGDR, VGRnHELM, and VGRnSFB HFMFs obtained >95% accuracy and outperformed most of the individual FMFs. Measures such as accuracy, focus error, and false maxima, along with standard deviation, mean, and combined results for each dataset, are also provided (Table 3). The mean accuracy, its standard deviation, and the combined accuracies showed that VGRnHELM and VGRnSFB were the most accurate and consistent HFMFs, with minimal standard deviations of 1.78 and 1.62, respectively (Table 3).

Fig. 4

Performance of FMFs and HFMFs without preprocessing at 50% region sampling data. (a) Accuracy in percent, (b) focus error, and (c) false maxima.


Table 3

Accuracy in percent, focus error, and false maxima of FMFs and HFMFs without preprocessing at 50% region sampling.

Method | Accuracy (ZN / FM / MS / SD / Mean / Combined) | Focus error (ZN / FM / MS / SD / Mean / Combined) | False maxima (ZN / FM / MS / SD / Mean / Combined)
GDR | 91.9 / 94.3 / 100.0 / 3.39 / 95.41 / 94.8 | 0.08 / 0.06 / 0.00 / 0.03 / 0.05 / 0.05 | 0.13 / 0.11 / 0.00 / 0.06 / 0.08 / 0.09
GNV | 80.6 / 8.6 / 60.0 / 30.31 / 49.74 / 46.5 | 0.48 / 3.36 / 2.35 / 1.19 / 2.06 / 2.09 | 0.23 / 1.00 / 0.45 / 0.33 / 0.56 / 0.59
HELM | 96.8 / 65.7 / 90.0 / 13.34 / 84.16 / 82.6 | 0.03 / 0.77 / 0.10 / 0.33 / 0.30 / 0.35 | 0.06 / 0.43 / 0.15 / 0.16 / 0.21 / 0.23
SFB | 93.5 / 25.7 / 70.0 / 28.12 / 63.09 / 60.5 | 0.03 / 1.60 / 0.80 / 0.64 / 0.81 / 0.86 | 0.10 / 0.94 / 0.30 / 0.36 / 0.45 / 0.49
SFM | 88.7 / 87.1 / 90.0 / 1.17 / 88.62 / 88.4 | 0.53 / 0.27 / 1.65 / 0.60 / 0.82 / 0.69 | 0.13 / 0.20 / 0.10 / 0.04 / 0.14 / 0.15
TGR | 95.2 / 92.9 / 100.0 / 2.98 / 96.01 / 95.3 | 0.05 / 0.07 / 0.00 / 0.03 / 0.04 / 0.05 | 0.10 / 0.14 / 0.00 / 0.06 / 0.08 / 0.09
VCR | 74.2 / 92.9 / 77.5 / 8.13 / 81.52 / 82.6 | 1.35 / 0.07 / 1.53 / 0.65 / 0.98 / 0.87 | 0.29 / 0.14 / 0.25 / 0.06 / 0.23 / 0.22
VGR | 71.0 / 97.1 / 97.5 / 12.42 / 88.54 / 87.8 | 0.39 / 0.03 / 0.03 / 0.17 / 0.15 / 0.16 | 0.39 / 0.06 / 0.05 / 0.16 / 0.16 / 0.17
GDRnTGR | 91.9 / 94.3 / 100.0 / 3.39 / 95.41 / 94.8 | 0.08 / 0.06 / 0.00 / 0.03 / 0.05 / 0.05 | 0.13 / 0.11 / 0.00 / 0.06 / 0.08 / 0.09
HELMnGDR | 91.9 / 94.3 / 100.0 / 3.39 / 95.41 / 94.8 | 0.08 / 0.06 / 0.00 / 0.03 / 0.05 / 0.05 | 0.13 / 0.11 / 0.00 / 0.06 / 0.08 / 0.09
HELMnTGR | 96.8 / 92.9 / 100.0 / 2.92 / 96.54 / 95.9 | 0.03 / 0.07 / 0.00 / 0.03 / 0.03 / 0.04 | 0.06 / 0.14 / 0.00 / 0.06 / 0.07 / 0.08
SFBnGDR | 91.9 / 85.7 / 100.0 / 5.85 / 92.55 / 91.3 | 0.11 / 0.20 / 0.00 / 0.08 / 0.10 / 0.12 | 0.13 / 0.23 / 0.00 / 0.09 / 0.12 / 0.14
SFBnTGR | 93.5 / 84.3 / 100.0 / 6.45 / 92.61 / 91.3 | 0.16 / 0.24 / 0.00 / 0.10 / 0.13 / 0.16 | 0.10 / 0.23 / 0.00 / 0.09 / 0.11 / 0.13
SFMnGDR | 91.9 / 94.3 / 100.0 / 3.39 / 95.41 / 94.8 | 0.08 / 0.06 / 0.00 / 0.03 / 0.05 / 0.05 | 0.13 / 0.11 / 0.00 / 0.06 / 0.08 / 0.09
SFMnHELM | 96.8 / 87.1 / 90.0 / 4.04 / 91.31 / 91.3 | 0.03 / 0.27 / 1.65 / 0.71 / 0.65 / 0.51 | 0.06 / 0.20 / 0.10 / 0.06 / 0.12 / 0.13
SFMnTGR | 95.2 / 92.9 / 100.0 / 2.98 / 96.01 / 95.3 | 0.05 / 0.07 / 0.00 / 0.03 / 0.04 / 0.05 | 0.10 / 0.14 / 0.00 / 0.06 / 0.08 / 0.09
SFMnVCR | 91.9 / 91.4 / 90.0 / 0.82 / 91.12 / 91.3 | 0.08 / 0.23 / 1.65 / 0.71 / 0.65 / 0.51 | 0.10 / 0.14 / 0.10 / 0.02 / 0.11 / 0.12
VCRnGDR | 91.9 / 92.9 / 100.0 / 3.60 / 94.93 / 94.2 | 0.08 / 0.21 / 0.00 / 0.09 / 0.10 / 0.12 | 0.13 / 0.11 / 0.00 / 0.06 / 0.08 / 0.09
VCRnSFB | 93.5 / 88.6 / 87.5 / 2.64 / 89.87 / 90.1 | 0.16 / 0.26 / 0.23 / 0.04 / 0.21 / 0.22 | 0.10 / 0.20 / 0.15 / 0.04 / 0.15 / 0.15
VCRnTGR | 95.2 / 91.4 / 97.5 / 2.50 / 94.70 / 94.2 | 0.05 / 0.23 / 0.03 / 0.09 / 0.10 / 0.12 | 0.10 / 0.14 / 0.05 / 0.04 / 0.10 / 0.10
VGRnGDR | 91.9 / 95.7 / 100.0 / 3.29 / 95.88 / 95.3 | 0.08 / 0.04 / 0.00 / 0.03 / 0.04 / 0.05 | 0.13 / 0.09 / 0.00 / 0.05 / 0.07 / 0.08
VGRnHELM | 93.5 / 97.1 / 97.5 / 1.78 / 96.06 / 95.9 | 0.06 / 0.03 / 0.03 / 0.02 / 0.04 / 0.04 | 0.10 / 0.06 / 0.05 / 0.02 / 0.07 / 0.07
VGRnSFB | 93.5 / 95.7 / 97.5 / 1.62 / 95.59 / 95.3 | 0.16 / 0.04 / 0.03 / 0.06 / 0.08 / 0.08 | 0.10 / 0.09 / 0.05 / 0.02 / 0.08 / 0.08
VGRnSFM | 88.7 / 97.1 / 92.5 / 3.45 / 92.78 / 93.0 | 0.24 / 0.03 / 0.08 / 0.09 / 0.12 / 0.12 | 0.13 / 0.06 / 0.10 / 0.03 / 0.10 / 0.09
VGRnTGR | 91.9 / 95.7 / 97.5 / 2.32 / 95.05 / 94.8 | 0.08 / 0.04 / 0.03 / 0.02 / 0.05 / 0.05 | 0.13 / 0.09 / 0.05 / 0.03 / 0.09 / 0.09
VCRnHELM | 96.8 / 91.4 / 82.5 / 5.89 / 90.23 / 91.3 | 0.03 / 0.23 / 1.38 / 0.59 / 0.55 / 0.42 | 0.06 / 0.14 / 0.20 / 0.06 / 0.14 / 0.13
VGRnGNV | 79.0 / 95.7 / 97.5 / 8.32 / 90.75 / 90.1 | 0.50 / 0.04 / 0.03 / 0.22 / 0.19 / 0.20 | 0.26 / 0.09 / 0.05 / 0.09 / 0.13 / 0.14
Note: GDR, Gaussian derivative; GNV, normalized gray-level variance; HELM, Helmli and Scherer's mean; SFB, steerable filter-based; SFM, spatial frequency measure; TGR, tenengrad; VCR, Vollath's autocorrelation; and VGR, tenengrad variance. HFMF abbreviations are created by concatenating the original FMF abbreviations with the letter "n." ZN, Ziehl–Neelsen-stained sputum smear conventional microscope; FM, fluorescent microscope; MS, multispectral datasets; and SD, standard deviation.

3.3.

Image Preprocessing

The effectiveness of HFMFs under different imaging conditions is very important because the occurrence of noise, poor contrast, uneven illumination, etc. may affect their performance. Poisson noise addition, saturation-level increment, contrast reduction, and uneven illumination were incorporated to determine the effect of image distortion on FMF and HFMF performance.

In the first step, Poisson noise was added to all the images. Generally, a higher level of noise in an image significantly affects FMF performance.2,28 Most of the HFMFs were more robust than the individual FMFs after noise addition (Fig. 5). Although the GDR and TGR FMFs produced higher accuracy in focused image identification, they failed drastically after noise addition [Fig. 5(a)]. The GDR accuracies dropped to 25.8%, 67.1%, and 65% for the ZN, FM, and MS datasets, respectively. Similarly, the TGR accuracies dropped to 90%, 30%, and 80% for the above datasets after noise addition. The focus error [Fig. 5(b)] and false maxima [Fig. 5(c)] rates also increased for these two individual FMFs. The VGRnSFB HFMF was least affected by noise addition and outperformed all the individual FMFs in terms of accuracy, focus error, and false maxima, whereas the VGRnGNV HFMF ranked second after noise addition [Fig. 5(a)].

Fig. 5

Performance of FMFs and HFMFs after noise addition at 50% region sampling data. (a) Accuracy in percent, (b) focus error, and (c) false maxima.


In the second step, the saturation was increased by 25% in all images. Generally, the performance of all FMFs decreases as the saturation level increases.28 On the MS datasets, GDR and TGR showed poor accuracy rates of 65% and 75%, respectively. The performances of most of the HFMFs were better than those of the individual FMFs after the increased saturation level. The performance of VGRnSFB was only slightly altered, and it showed the highest accuracy rate with lower focus error and false maxima (Fig. 6).

Fig. 6

Performance of FMFs and HFMFs after 25% saturation increment at 50% region sampling data. (a) Accuracy in percent, (b) focus error, and (c) false maxima.


In the third step, the contrast of all images was reduced using the imadjust function of MATLAB. Generally, a marginal reduction of contrast has no or minimal effect on FMF performance.22,28 The performance of all the HFMFs was affected marginally by contrast reduction, and their accuracies dropped slightly (Fig. 7). VGRnSFB remained consistent at the reduced contrast level and obtained an overall accuracy of 91.3%.

Fig. 7

Accuracy of FMFs and HFMFs in percent after contrast reduction at 50% region sampling data.


Uneven illumination has been reported to have a minimal effect on the performance of FMFs in FM images; in some cases, performance even improved.2 Uneven illumination was incorporated in all the images. Most of the HFMFs showed relatively consistent performance, with only marginal changes in accuracy (Fig. 8).

Fig. 8

Accuracy of FMFs and HFMFs in percent after uneven illumination at 50% region sampling data.


Evaluation of FMFs and HFMFs under various imaging conditions (without preprocessing, noise addition, saturation increment, etc.) showed that VGRnSFB was the most robust and accurate HFMF, with an overall accuracy >90% and lower focus error and false maxima.

3.4.

Discussion

The main objective of this study was to propose the most accurate and robust HFMF applicable to all imaging modalities. The eight FMFs evaluated in this study were earlier implemented in different applications, such as CM, FM, and shape from focus.1,2,28 VCR, BGR, and ELP were reported as the best FMFs for CM.1 Mateos-Pérez et al.2 established that the midfrequency discrete cosine transform (96.67%), VCR (89%), and TGR (89%) FMFs performed better on FM images. Pertuz et al.28 found that Laplacian-based operators outperformed others when preprocessing was not applied. Zukal et al.16 proposed interest point detection-based FMFs (the Harris–Laplace detector, the fast Hessian detector, and the features from accelerated segment test detector) for MS datasets. The performance of these methods was poor on VS and IS images, and only the fast Hessian detector performed better on TS datasets. None of the previously reported FMFs were consistent across diverse imaging modalities, such as ZN, FM, and MS images. These inconsistent results emphasize the importance of robust HFMFs that may capture focused images automatically irrespective of the imaging system.

The performances of 36 functions (28 hybrid and eight individual FMFs) were evaluated on datasets covering diverse image contents with high, medium, and low density backgrounds and a lack of sharp edges in images. Initially, 19 HFMFs provided an overall accuracy rate >90%. The VGRnSFB HFMF was identified as the most robust and consistent after evaluating performance under different imaging conditions, such as noise addition, contrast reduction, saturation increment, and uneven illumination. VGRnSFB also showed consistent performance in all three modalities of the MS dataset (100%, 100%, and 92.8% accuracies without preprocessing for TS, IS, and VS, respectively). An efficient HFMF has substantial application potential, as it is easier to implement when preprocessing and other requirements are minimal. The better performance of VGRnSFB is significant, as no HFMF reported earlier was robust to ZN, FM, and MS images simultaneously.

Finally, the sharpness of the focus curve was evaluated for the eight individual FMFs (GDR, GNV, HELM, SFB, SFM, TGR, VCR, and VGR) and one HFMF (VGRnSFB) (Fig. 9). VGRnSFB, VGR, and SFB converged rapidly to the best focus position. Although the sharpness curve of VGR is better, the VGRnSFB HFMF curve is comparable to it and was found to be suitable for implementation in real systems. This HFMF also produced comparable sharpness curves on MS images (Fig. 10 of the Appendix).

Fig. 9

Sharpness curves of nine functions including the HFMF (VGRnSFB). A narrow peak represents rapid convergence of the FMF. (a) VGRnSFB (HFMF), VGR, and SFB converged rapidly to the best focus position in ZN (CM) images and (b) VGRnSFB and VGR converged rapidly to the best focus position in FM images.


Fig. 10

Sharpness curves of nine functions including the VGRnSFB HFMF on the MS datasets. A narrow peak represents rapid convergence of the FMF. (a) Sharpness curve on headphones images of the VS, (b) sharpness curve on building images of the near-IS, and (c) sharpness curve on breaker images of the TS.


Based on content, the images used for autofocusing can be categorized into three broad categories, namely low, medium, and high density background. The current study contains stacks of images from the diverse modalities of MS, FM, and ZN with low, medium, and high density backgrounds. It was evident from earlier studies that the performances of FMFs were dependent on imaging contents and varied when the focus area changed.1,2,16,21 However, in the current study, the VGRnSFB HFMF achieved better and consistent results in all five diverse imaging modalities, i.e., ZN, FM, and MS (TS, IS, and VS) (Figs. 4–8). Furthermore, various parameters such as accuracy and focus error have indicated that VGRnSFB is robust in different experimental setups and imaging conditions. However, HFMF performance could be validated on other image modalities to establish universal applicability. In the future, the effectiveness of these HFMFs on live imaging techniques may be evaluated for applications in detecting objects or microorganisms that are not static.30 The region sampling incorporated in this study increased the accuracy of FMFs as well as HFMFs. As computation time is not a major factor for the FMFs evaluated in this study, image compression by means of subsampling was not performed and may be evaluated in the future to optimize the performance of HFMFs. Nonetheless, the HFMFs outperformed the individual FMFs in all the tested imaging modalities, which are diverse and have low, medium, and high density backgrounds with different levels of noise.

4.

Conclusion

Exact autofocusing using FMFs is a crucial step in any imaging system. Studies have reported that the performance of FMFs is sensitive to image contents.28 Therefore, identification of efficient and robust FMFs is very significant for the development of autofocusing instruments in any imaging system. A comprehensive analysis of 28 HFMFs on diverse datasets, spanning a broad range of image categories, provided 19 hybrid methods with an accuracy >90%. The effectiveness of these HFMFs was tested under different imaging conditions, such as noise addition, saturation increment, contrast reduction, and uneven illumination. VGRnSFB was found to be the most robust and accurate HFMF, as it showed the best overall accuracy and its performance was largely independent of the different image distortions. This HFMF may be implemented in any imaging system to capture the best-focused image automatically.

Appendices

Appendix

The mean computation time of each FMF per stack was determined on an Intel® Core™ i3-3220 CPU at 3.30 GHz with 8 GB of RAM (Table 4). Images of 1600×1200 dimensions were subjected to 50% region sampling prior to calculation of the mean computation time.

Table 4

Mean computation time (in seconds) per stack of the eight FMFs at 50% region sampling. Original images were of 1600×1200 dimensions.

FMF(a) | Mean time (s)
GDR | 0.76
GNV | 0.17
HELM | 0.97
SFB | 3.37
SFM | 0.49
TGR | 0.41
VCR | 0.4
VGR | 0.58

(a) GDR, Gaussian derivative; GNV, normalized gray-level variance; HELM, Helmli and Scherer's mean; SFB, steerable filter-based; SFM, spatial frequency measure; TGR, tenengrad; VCR, Vollath's autocorrelation; and VGR, tenengrad variance.

The sharpness of the focus curve was also determined on the MS (visible, near-infrared, and TS) datasets (Fig. 10). A narrower peak represents a more rapid convergence rate of the FMF. VGRnSFB, along with VGR, GDR, and TGR, showed rapid convergence rates on the MS datasets.

Disclosures

The authors declare that they have no conflict of interest.

Acknowledgments

We would like to thank Jaypee University of Information Technology, Solan, for providing the doctoral fellowship. We would also like to thank Dr. Malay Sarkar and Dr. S.K. Sudarshan, IGMC, Shimla, for help in understanding sputum smear microscopy concepts.

References

1. 

O. A. Osibote et al., “Automated focusing in bright-field microscopy for tuberculosis detection,” J. Microsc., 240 (2), 155–163 (2010). http://dx.doi.org/10.1111/j.1365-2818.2010.03389.x JMICAR 0022-2720 Google Scholar

2. 

J. M. Mateos-Pérez et al., “Comparative evaluation of autofocus algorithms for a real-time system for automatic detection of Mycobacterium tuberculosis,” Cytometry Part A, 81A (3), 213 –221 (2012). http://dx.doi.org/10.1002/cyto.a.22020 1552-4922 Google Scholar

3. 

R. Minhas, A. Mohammed and Q. Wu, “An efficient algorithm for focus measure computation in constant time,” IEEE Trans. Circ. Syst. Video Technol., 22 152 –156 (2012). http://dx.doi.org/10.1109/TCSVT.2011.2133930 ITCTEM 1051-8215 Google Scholar

4. 

J. Widjaja and S. Jutamulia, “Wavelet transform based auto-focus camera systems,” in IEEE Asia-Pacific Conf. on Circuits and Systems, 49–52 (1998). Google Scholar

5. 

J. A. Kimura et al., “Evaluation of autofocus functions of conventional sputum smear microscopy for tuberculosis,” in Annual Int. Conf. of the IEEE Engineering in Medicine and Biology, 3041 –3044 (2010). http://dx.doi.org/10.1109/IEMBS.2010.5626143 Google Scholar

6. 

M. J. Russell and T. S. Douglas, “Evaluation of autofocus algorithms for tuberculosis microscopy,” in Proc. of Annual Int. Conf. of the IEEE Engineering in Medicine, 3489 –3492 (2007). http://dx.doi.org/10.1109/IEMBS.2007.4353082 Google Scholar

7. 

C. C. Gu et al., “Region sampling for robust and rapid autofocus in microscope,” Microsc. Res. Techn., 78 (5), 382 –390 (2015). http://dx.doi.org/10.1002/jemt.22484 MRTEEO 1059-910X Google Scholar

8. 

Z. F. Z. Fan et al., “Autofocus algorithm based on wavelet packet transform for infrared microscopy,” in 3rd Int. Congress on Image and Signal Processing (CISP), 2510 –2514 (2010). Google Scholar

9. 

D. Vollath, “Automatic focusing by correlative methods,” J. Microsc., 147 279 –288 (1987). http://dx.doi.org/10.1111/jmi.1987.147.issue-3 JMICAR 0022-2720 Google Scholar

10. 

D. Vollath, “The influence of the scene parameters and of noise on the behavior of automatic focusing algorithms,” J. Microsc., 151 133 –146 (1988). http://dx.doi.org/10.1111/jmi.1988.151.issue-2 JMICAR 0022-2720 Google Scholar

11. 

A. Santos et al., “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc., 188 264 –272 (1997). http://dx.doi.org/10.1046/j.1365-2818.1997.2630819.x JMICAR 0022-2720 Google Scholar

12. 

R. O. Panicker et al., “A review of automatic methods based on image processing techniques for tuberculosis detection from microscopic sputum smear images,” J. Med. Syst., 40 (1), 17 (2016). http://dx.doi.org/10.1007/s10916-015-0388-y JMSYDA 0148-5598 Google Scholar

13. 

R. Redondo et al., “Autofocus evaluation for brightfield microscopy pathology,” J. Biomed. Opt., 17 (3), 036008 (2012). http://dx.doi.org/10.1117/1.JBO.17.3.036008 JBOPFO 1083-3668 Google Scholar

14. 

Z. Gao, W. Jiang and K. F. Zhu, “Auto-focusing algorithm based on most gradient and threshold,” J. Electron. Meas. Instrum., 21 (5), 49 –54 (2007). Google Scholar

15. 

Z. M. Kang, L. Zhang and P. Xie, “Implementation of an automatic focusing algorithm based on spatial high frequency energy and entropy,” Acta Electron. Sin., 31 (4), 552 –555 (2003). TTHPAG 0372-2112 Google Scholar

16. 

M. Zukal et al., “Interest points as a focus measure in multi-spectral imaging,” Radioengineering, 22 (1), 68 –81 (2013). Google Scholar

17. 

R. Benes et al., “Multi-focus thermal image fusion,” Pattern Recogn. Lett., 34 (5), 536 –544 (2012). http://dx.doi.org/10.1016/j.patrec.2012.11.011 Google Scholar

18. 

A. Tapley et al., “Mobile digital fluorescence microscopy for diagnosis of tuberculosis,” J. Clin. Microbiol., 51 (6), 1774 –1778 (2013). http://dx.doi.org/10.1128/JCM.03432-12 JCMIDW 1070-633X Google Scholar

19. 

X. Xu et al., “Robust automatic focus algorithm for low contrast images using a new contrast measure,” Sensors, 11 (12), 8281 –8294 (2011). http://dx.doi.org/10.3390/s110908281 SNSRES 0746-9462 Google Scholar

20. 

M. I. Shah et al., “Ziehl–Neelsen sputum smear microscopy image database: a resource to facilitate automated bacilli detection for tuberculosis diagnosis,” J. Med. Imaging, 4 (2), 027503 (2017). http://dx.doi.org/10.1117/1.JMI.4.2.027503 JMEIET 0920-5497 Google Scholar

21. 

M. I. Shah et al., “Identification of robust focus measure functions for the automated capturing of focused images from Ziehl–Neelsen stained sputum smear microscopy slide,” Cytometry: Part A, (2017). http://dx.doi.org/10.1002/cyto.a.23142 Google Scholar

22. 

J. M. Geusebroek et al., “Robust autofocusing in microscopy,” Cytometry, 39 (1), 1 –9 (2000). http://dx.doi.org/10.1002/(ISSN)1097-0320 CYTODQ 0196-4763 Google Scholar

23. 

J. P. Pacheco et al., “Diatom autofocusing in brightfield microscopy: a comparative study,” in Proc. of the Int. Conf. on Pattern Recognition, 314 –317 (2000). Google Scholar

24. 

F. Groen, I. T. Young and G. Ligthart, “A comparison of different focus functions for use in autofocus algorithms,” Cytometry, 6 (2), 81–91 (1985). http://dx.doi.org/10.1002/cyto.990060202 CYTODQ 0196-4763 Google Scholar

25. 

F. Helmli and S. Scherer, “Adaptive shape from focus with an error estimation in light microscopy,” in Proc. of the Int. Symp. on Image and Signal Processing and Analysis, 188 –193 (2012). Google Scholar

26. 

W. Huang and Z. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recogn. Lett., 28 493 –500 (2007). http://dx.doi.org/10.1016/j.patrec.2006.09.005 PRLEDG 0167-8655 Google Scholar

27. 

C. C. Gu et al., “Region sampling for robust and rapid autofocus in microscope,” Microsc. Res. Techn., 78 (5), 382 –390 (2015). http://dx.doi.org/10.1002/jemt.v78.5 MRTEEO 1059-910X Google Scholar

28. 

S. Pertuz, D. Puig and M. A. Garcia, “Analysis of focus measure operators for shape-from-focus,” Pattern Recog., 46 (5), 1415 –1432 (2013). http://dx.doi.org/10.1016/j.patcog.2012.11.011 Google Scholar

29. 

S. Duthaler and B. J. Nelson, “Autofocusing algorithm selection in computer microscopy,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 70 –76 (2005). http://dx.doi.org/10.1109/IROS.2005.1545017 Google Scholar

30. 

D. J. Stephens and V. J. Allan, “Light microscopy techniques for live cell imaging,” Science, 300 (5616), 82 –86 (2003). http://dx.doi.org/10.1126/science.1082160 SCIEAS 0036-8075 Google Scholar

Biography

Mohammad Imran Shah received his MSc degree in bioinformatics from Manipal University, Karnataka. Currently, he is a PhD student in the Department of Biotechnology and Bioinformatics at Jaypee University of Information Technology, Solan, India. His present interests include image processing, medical image analysis, computer aided diagnosis, machine learning, and bioinformatics databases and tools development.

Smriti Mishra received her MSc degree in bioinformatics from Manipal University, Manipal, Karnataka, India. Currently, she is a PhD research scholar in the Department of Biotechnology and Bioinformatics at Jaypee University of Information Technology. Her present research interests include algorithm development, artificial intelligence, machine learning, and bioinformatics databases and tools development.

Chittaranjan Rout received his PhD in computational chemistry from the Department of Chemistry at the University of Delhi, India. He worked as a research associate at the School of Computational and Integrative Sciences, Jawaharlal Nehru University, New Delhi. Currently, he is working as an associate professor at Jaypee University of Information Technology, Solan. His research interests include image processing, computer-aided diagnosis, prediction of vaccine candidates and drug targets, computational drug development, and development of clinical decision support systems.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2017/$25.00
Mohammad Imran Shah, Smriti Mishra, and Chittaranjan Rout "Establishment of hybridized focus measure functions as a universal method for autofocusing," Journal of Biomedical Optics 22(12), 126004 (23 December 2017). https://doi.org/10.1117/1.JBO.22.12.126004
Received: 12 June 2017; Accepted: 29 November 2017; Published: 23 December 2017