Optofluidic time-stretch imaging has enabled high-throughput phenotyping of cells at unprecedented speed and resolution. However, it produces a large amount of raw image data, which calls for a recognition algorithm that is not only highly accurate but also fast enough to analyze the data efficiently. In this paper, we compare the performance of popular feature extraction methods and learning-based classification algorithms for time-stretch microscopy image recognition. The evaluated recognition system comprises an outlier detection step, a feature extraction step, and a classification step. Outlier detection uses DBSCAN (Density-Based Spatial Clustering of Applications with Noise) to eliminate error images. Gabor wavelets, HOG (Histogram of Oriented Gradients), LBP (Local Binary Pattern), and PCA (Principal Component Analysis) are applied and compared as feature extraction methods. Finally, given a set of extracted features, the computing time and accuracy of SVM (Support Vector Machine), LR (Logistic Regression), ResNet (Residual Neural Network), and XGBoost (Extreme Gradient Boosting) classifiers are evaluated. The tested cell image datasets are acquired by high-throughput imaging of drug-treated and untreated cells (N ≈ 21,000) with an optofluidic time-stretch microscope. Results show that PCA feature extraction combined with XGBoost classification is the fastest and achieves the highest accuracy, and that DBSCAN outlier detection improves recognition accuracy by approximately 2%. We therefore propose a recognition algorithm consisting of DBSCAN outlier detection, PCA feature extraction, and XGBoost classification as a promising solution for processing high-throughput optofluidic time-stretch microscopy image data accurately and rapidly.
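The abstract names the pipeline components but includes no code. Below is a minimal Python sketch of how such a pipeline could be assembled with scikit-learn and xgboost, assuming the cell images arrive as a NumPy array with an accompanying label vector; the DBSCAN parameters, PCA component count, and XGBoost settings are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch of the proposed pipeline: DBSCAN outlier removal,
# PCA feature extraction, XGBoost classification. All parameter
# values are illustrative, not those used in the paper.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

def remove_outliers(images, labels, eps=3.0, min_samples=10):
    """Drop error images that DBSCAN flags as noise (cluster id -1)."""
    flat = images.reshape(len(images), -1).astype(np.float32) / 255.0
    cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(flat)
    keep = cluster_ids != -1
    return images[keep], labels[keep]

def run_pipeline(images, labels, n_components=50):
    """Outlier removal -> PCA features -> XGBoost classifier."""
    images, labels = remove_outliers(images, labels)
    X = images.reshape(len(images), -1).astype(np.float32) / 255.0
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)

    # Fit PCA on the training split only, then project both splits.
    pca = PCA(n_components=n_components).fit(X_train)
    X_train_p, X_test_p = pca.transform(X_train), pca.transform(X_test)

    clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    clf.fit(X_train_p, y_train)
    return accuracy_score(y_test, clf.predict(X_test_p))
```

The same skeleton can be reused to benchmark the other feature extractors and classifiers mentioned in the abstract by swapping out the PCA and XGBoost stages.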
KEYWORDS: Ultrafast imaging, Flow cytometry, Imaging systems, Signal detection, Signal processing, Field programmable gate arrays, Data acquisition, Data storage, Signal generators, Pulsed laser operation
Ultrafast imaging flow cytometry can be realized with a time-encoded single-pixel imaging technique, offering high imaging speed (>10 million frames/s) and high throughput (>10,000 cells/s). However, the background signal from frames containing no cells occupies a large part of the acquired data and consumes considerable storage space. In this paper, an FPGA-based triggering and storage system is proposed that stores cell signals in real time while discarding the blank background. The system is easy to implement, highly accurate, and adaptable to different sampling rates. It reduces the required storage space and enables efficient data storage for ultrafast imaging flow cytometry.
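The FPGA design itself is not reproduced in the abstract. As a conceptual illustration only, the following Python model shows the triggering idea in software: frames whose signal deviates from the blank-background baseline beyond a threshold are stored, the rest are discarded. The frame length, threshold factor, and baseline/noise estimates are hypothetical stand-ins for the real-time logic described above.

```python
# Conceptual software model of background-rejecting storage: keep only
# frames whose peak deviation from the blank-background baseline exceeds
# a threshold. Parameters are illustrative, not from the paper.
import numpy as np

def store_cell_frames(stream, frame_len, k=8.0):
    """Split a 1-D digitized signal into frames and keep only the
    frames likely to contain a cell."""
    n_frames = len(stream) // frame_len
    frames = stream[:n_frames * frame_len].reshape(n_frames, frame_len)

    # Estimate the blank-background level and noise with robust statistics;
    # a real trigger would update these estimates adaptively on the FPGA.
    baseline = np.median(frames)
    noise = np.median(np.abs(frames - baseline))

    # Trigger when a frame's peak deviation exceeds k times the noise level.
    deviation = np.max(np.abs(frames - baseline), axis=1)
    return frames[deviation > k * noise]

# Example: mostly blank frames plus one frame with a synthetic "cell" pulse.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.01, size=100_000)
signal[40_000:40_200] += 0.5          # simulated cell signature
stored = store_cell_frames(signal, frame_len=1_000)
print(f"stored {len(stored)} of {len(signal) // 1_000} frames")
```

In this toy example only the single frame containing the simulated cell is retained, mirroring the storage reduction the proposed system aims for.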