Significance: Our study introduces an application of deep learning that virtually generates fluorescence images, reducing the cost and time burdens of the considerable sample-preparation effort required for chemical fixation and staining.
Aim: The objective of our work was to determine how successfully deep learning methods predict fluorescence labels when the prediction depends on a structural and/or functional relationship between input and output labels.
Approach: We present a virtual-fluorescence-staining method based on deep neural networks (VirFluoNet) to transform co-registered images of cells into subcellular compartment-specific molecular fluorescence labels in the same field of view. An algorithm based on conditional generative adversarial networks was developed and trained on microscopy datasets from breast-cancer and bone-osteosarcoma cell lines: MDA-MB-231 and U2OS, respectively. Several established performance metrics—the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM)—as well as a novel performance metric, the tolerance level, were measured and compared for the same algorithm and input data (an illustrative sketch of the established metrics follows this abstract).
Results: For the MDA-MB-231 cells, the F-actin signal was a better input than phase contrast for predicting fluorescent antibody staining of vinculin. For the U2OS cells, satisfactory performance metrics were achieved in comparison with ground truth: MAE <0.005, 0.017, and 0.012; PSNR >40, 34, and 33 dB; and SSIM >0.925, 0.926, and 0.925 for prediction of 4′,6-diamidino-2-phenylindole/Hoechst, endoplasmic reticulum, and mitochondria, respectively, from channels of nucleoli and cytoplasmic RNA, Golgi plasma membrane, and F-actin.
Conclusions: These findings contribute to the understanding of the utility and limitations of deep-learning image regression for predicting fluorescence microscopy datasets of biological cells. We infer that predicted image labels must have a structural and/or functional relationship to the input labels. Furthermore, the approach introduced here holds promise for modeling the internal spatial relationships between organelles and biomolecules within living cells, leading to detection and quantification of alterations from a standard training dataset.
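To make the established metrics referenced in this abstract concrete, the following minimal Python sketch computes MAE, PSNR, and SSIM between a predicted fluorescence channel and its ground truth. The array shapes, the [0, 1] normalization, and the use of scikit-image are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the established metrics named in the
# Approach: MAE, PSNR, and SSIM between a predicted fluorescence channel and its
# ground truth. Image sizes, normalization to [0, 1], and the use of scikit-image
# are illustrative assumptions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_prediction(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Compute MAE, PSNR (dB), and SSIM for images scaled to [0, 1]."""
    mae = np.mean(np.abs(pred - truth))
    psnr = peak_signal_noise_ratio(truth, pred, data_range=1.0)
    ssim = structural_similarity(truth, pred, data_range=1.0)
    return {"MAE": float(mae), "PSNR_dB": float(psnr), "SSIM": float(ssim)}

# Example with random placeholder images standing in for real microscopy data.
rng = np.random.default_rng(0)
truth = rng.random((256, 256))
pred = np.clip(truth + 0.01 * rng.standard_normal((256, 256)), 0.0, 1.0)
print(evaluate_prediction(pred, truth))
```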
Significance: We introduce an application of machine learning trained on optical phase features of epithelial and mesenchymal cells to grade cancer cells’ morphologies, relevant to evaluation of cancer phenotype in screening assays and clinical biopsies.
Aim: Our objective was to determine quantitative epithelial and mesenchymal qualities of breast cancer cells through an unbiased, generalizable, and linear score covering the range of observed morphologies.
Approach: Digital holographic microscopy was used to generate phase height maps of noncancerous epithelial (Gie-No3B11) and fibroblast (human gingival) cell lines, as well as MDA-MB-231 and MCF-7 breast cancer cell lines. Several machine learning algorithms were evaluated as binary classifiers of the noncancerous cells that graded the cancer cells by transfer learning.
Results: Epithelial and mesenchymal cells were classified with 96% to 100% accuracy. Breast cancer cells had scores in between the noncancer scores, indicating both epithelial and mesenchymal morphological qualities. The MCF-7 cells skewed toward epithelial scores, while MDA-MB-231 cells skewed toward mesenchymal scores. Linear support vector machines (SVMs) produced the most distinct score distributions for each cell line.
Conclusions: The proposed epithelial–mesenchymal score, derived from linear SVM learning, is a sensitive and quantitative approach for detecting epithelial and mesenchymal characteristics of unknown cells based on well-characterized cell lines. We establish a framework for rapid and accurate morphological evaluation of single cells and subtle phenotypic shifts in imaged cell populations.
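As a rough illustration of how a linear SVM trained only on noncancerous cells can yield a continuous epithelial-mesenchymal score for cancer cells, the sketch below trains a binary classifier on placeholder per-cell features and reuses its signed decision-function value as the score. The feature vectors, class sizes, and use of scikit-learn are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the authors' pipeline): train a linear SVM to
# separate noncancerous epithelial vs. mesenchymal (fibroblast) cells using
# per-cell features extracted from phase height maps, then reuse the signed
# distance to the separating hyperplane as a continuous epithelial-mesenchymal
# score for cancer cells. Feature extraction and data loading are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Placeholder per-cell feature vectors (e.g., phase height, area, aspect ratio).
epithelial_feats = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
mesenchymal_feats = rng.normal(loc=2.0, scale=1.0, size=(100, 5))
X = np.vstack([epithelial_feats, mesenchymal_feats])
y = np.array([0] * 100 + [1] * 100)  # 0 = epithelial, 1 = mesenchymal

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X, y)

# Score unlabeled (e.g., cancer) cells: negative values lean epithelial,
# positive values lean mesenchymal.
cancer_feats = rng.normal(loc=1.0, scale=1.0, size=(10, 5))
em_scores = clf.decision_function(cancer_feats)
print(em_scores)
```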
Deep convolutional neural networks (DCNNs) offer outstanding performance in many image-processing areas, such as super-resolution, deconvolution, image classification, denoising, and segmentation. Here, we develop for the first time, to our knowledge, a method to perform 3-D computational optical tomography using a 3-D DCNN. A simulated 3-D phantom dataset was first constructed and converted to a dataset of phase objects imaged on a spatial light modulator. For each phase image in the dataset, the corresponding diffracted intensity image was experimentally recorded on a CCD. We then experimentally demonstrate the ability of the developed 3-D DCNN algorithm to solve the inverse problem by reconstructing the 3-D refractive-index distributions of test phantoms from their corresponding diffraction patterns.
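A minimal 3-D convolutional regressor of the kind described here might look like the following PyTorch sketch, which maps a single-channel input volume derived from diffraction intensity data to a 3-D refractive-index volume. The layer widths, volume size, and loss are placeholders, not the network reported in the paper.

```python
# Minimal sketch (an assumption-laden illustration, not the authors' network) of a
# 3-D convolutional regressor that maps a volume built from diffraction intensity
# data to a 3-D refractive-index distribution. All shapes and layer widths are
# placeholders.
import torch
import torch.nn as nn

class Tiny3DDCNN(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),  # refractive-index volume
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example: batch of 2 single-channel volumes of 32^3 voxels.
model = Tiny3DDCNN()
diffraction_volume = torch.randn(2, 1, 32, 32, 32)
predicted_index = model(diffraction_volume)
loss = nn.MSELoss()(predicted_index, torch.randn_like(predicted_index))
loss.backward()
print(predicted_index.shape)  # torch.Size([2, 1, 32, 32, 32])
```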
This paper utilizes a synchronized Lorenz chaotic drive/response system, together with Haar filtering and appropriate thresholding, to detect a transmitted random binary message. The message is obscured with the Lorenz chaotic attractor and passed through an additive white Gaussian noise (AWGN) channel, and the original random binary data are successfully retrieved. The detection mechanism employs the Haar wavelet transform to combat the channel noise. A communication technique using chaotic parameter modulation (CPM) is simulated in MATLAB and prototyped on a reconfigurable hardware platform from Xilinx.
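For orientation, the sketch below integrates a Lorenz drive system and a response subsystem driven by the transmitted x signal, in the Pecora-Carroll style, and shows the synchronization error decaying. The parameters, step size, and coupling choice are illustrative assumptions; the paper's MATLAB model and CPM detector are not reproduced here.

```python
# Minimal sketch (illustrative assumptions, not the paper's MATLAB model) of a
# Lorenz drive/response pair in which the response receives the drive's x signal
# and synchronizes to it (Pecora-Carroll style). Parameters, step size, and the
# coupling choice are placeholders.
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.001, 50000

# Drive state (x, y, z) and response state (yr, zr); the response reuses drive x.
x, y, z = 1.0, 1.0, 1.0
yr, zr = -5.0, 10.0
err = np.zeros(steps)

for k in range(steps):
    # Drive system (standard Lorenz equations, explicit Euler step).
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # Response (y, z) subsystem driven by the transmitted x signal.
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr
    err[k] = abs(y - yr)

print("final |y - yr| synchronization error:", err[-1])  # decays toward zero
```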
KEYWORDS: Binary data, Free space optics, Signal to noise ratio, Wavelets, Field programmable gate arrays, Optical filters, Prototyping, Telecommunications, Free space optical communications, Interference (communication)
High bandwidth, fast deployment, and relatively low-cost implementation are among the important advantages of free-space optical (FSO) communications. However, atmospheric turbulence has a substantial impact on the quality of a laser beam propagating through the atmosphere. A method was presented in [1] and [2] to perform bit synchronization and detection of binary non-return-to-zero (NRZ) data from an FSO communication link. It was shown that, when the data are binary NRZ with no modulation, the Haar wavelet transform can effectively reduce scintillation noise. In this paper, we leverage and modify the work presented in [1] to provide a real-time streaming hardware prototype. The applicability of these concepts is demonstrated by implementing the prototype on state-of-the-art reconfigurable hardware, namely field-programmable gate arrays, using highly productive high-level design tools such as System Generator for DSP from Xilinx.
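As a simplified illustration of why Haar averaging helps with binary NRZ data, the sketch below applies one level of the Haar transform to an oversampled, noise-corrupted NRZ waveform, discards the detail coefficients, and thresholds the per-bit means. The bit rate, oversampling factor, and noise model are placeholders; this is not the synchronization and detection method of [1].

```python
# Minimal sketch (an illustration under assumptions, not the detector of [1]) of
# single-level Haar averaging to suppress additive noise on an oversampled binary
# NRZ waveform before thresholding. Bit rate, oversampling factor, and noise level
# are placeholders.
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=64)                        # random binary message
samples_per_bit = 16
clean = np.repeat(bits, samples_per_bit).astype(float)    # NRZ waveform
noisy = clean + 0.5 * rng.standard_normal(clean.size)     # additive Gaussian noise

# One level of the Haar transform: pairwise averages (approximation) retain the
# slowly varying NRZ levels, pairwise differences (detail) carry mostly noise.
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2.0)
smoothed = np.repeat(approx / np.sqrt(2.0), 2)            # drop detail, reconstruct

# Threshold the per-bit mean of the smoothed waveform to recover the bits.
recovered = (smoothed.reshape(-1, samples_per_bit).mean(axis=1) > 0.5).astype(int)
print("bit errors:", int(np.sum(recovered != bits)))
```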