Computational pathology, which integrates computational methods and digital imaging, has proven effective in advancing disease diagnosis and prognosis. In recent years, advances in machine learning and deep learning have greatly bolstered the power of computational pathology. However, data scarcity and data imbalance remain open issues that can adversely affect any computational method. In this paper, we introduce an efficient and effective data augmentation strategy that generates new pathology images from existing ones, enriching datasets without additional data collection or annotation costs. To evaluate the proposed method, we employed two colorectal cancer datasets and obtained improved classification results, suggesting that this simple approach holds potential for alleviating data scarcity and imbalance in computational pathology.
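The abstract does not detail the augmentation strategy itself; as a generic illustration of label-preserving augmentation for pathology patches (not the paper's specific method), eight variants per patch can be produced from rotations and mirror flips alone:

```python
import numpy as np

def augment_patch(patch):
    """Generate label-preserving variants of an H x W x C image patch
    via 90-degree rotations and horizontal flips. A generic sketch,
    not the paper's proposed augmentation strategy."""
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # mirror of each rotation
    return variants

patch = np.arange(2 * 2 * 3).reshape(2, 2, 3)
augmented = augment_patch(patch)
print(len(augmented))  # 8 variants per input patch
```

Such geometric transforms are a common baseline for enriching pathology datasets because tissue appearance has no canonical orientation.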
In digital and computational pathology, semantic segmentation can be considered the first step toward assessing tissue specimens, providing essential information for various downstream tasks. Numerous semantic segmentation methods exist, but they often face challenges when applied to whole slide images, which are high-resolution and gigapixel-sized and thus require a large amount of computation. In this study, we investigate the feasibility of an efficient semantic segmentation approach for whole slide images that processes only low-resolution pathology images, aiming for segmentation results comparable to those attainable with high-resolution images. We employ five advanced semantic segmentation models and conduct three types of experiments to quantitatively and qualitatively test the feasibility of this approach. The quantitative results demonstrate that, given low-resolution images, the semantic segmentation methods are inferior to those using high-resolution images; however, low-resolution images bring a substantial reduction in computational cost. Furthermore, the qualitative analysis shows that the results obtained from low-resolution images are comparable to those from high-resolution images, suggesting the feasibility of low-to-high semantic segmentation in computational pathology.
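The low-to-high idea can be sketched as segmenting a downsampled slide region and mapping the label map back to the original grid; the segmentation model itself is omitted here, and nearest-neighbor upsampling is only one plausible choice for the mapping step:

```python
import numpy as np

def upsample_mask(mask_lr, factor):
    """Nearest-neighbor upsampling of a low-resolution label map back to
    the original pixel grid (the 'low-to-high' step; the actual
    segmentation network is omitted in this sketch)."""
    return np.repeat(np.repeat(mask_lr, factor, axis=0), factor, axis=1)

# A toy low-resolution prediction for a 4x-downsampled slide region,
# with class labels 0, 1, 2.
mask_lr = np.array([[0, 1],
                    [2, 2]])
mask_hr = upsample_mask(mask_lr, 4)
print(mask_hr.shape)  # (8, 8)
# Pixel count -- and hence rough compute -- drops by factor**2 = 16x
# when the model runs at the low resolution instead.
```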
Digital and computational pathology tools often suffer from a lack of relevant data. Although more and more data centers release datasets publicly, high-quality ground truth annotations may not be available in a timely manner. Herein, we propose a knowledge distillation framework in which a student network leverages a teacher network already trained on a relatively larger amount of data to achieve accurate and robust performance on histopathology images. For effective and efficient knowledge distillation, we introduce a quintet margin loss that pushes the student network not only to mimic the knowledge representation of the teacher network but also to outperform the teacher network on a target domain. We systematically evaluated the proposed approach. The results show that it outperforms competing models, both with and without various types of knowledge distillation.
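The quintet margin loss is not specified in the abstract; the following is a heavily simplified, hypothetical margin-based distillation term, shown only to illustrate the general mechanism of pulling a student embedding toward a matching teacher embedding and pushing it away from a mismatched one:

```python
import numpy as np

def margin_distillation_loss(s, t_pos, t_neg, margin=1.0):
    """Hinge-style margin loss: encourage the student embedding s to lie
    closer to the teacher's same-class embedding t_pos than to a
    different-class teacher embedding t_neg, by at least `margin`.
    A generic simplification for illustration -- NOT the paper's
    quintet margin loss."""
    d_pos = np.linalg.norm(s - t_pos)
    d_neg = np.linalg.norm(s - t_neg)
    return max(0.0, d_pos - d_neg + margin)

s = np.array([0.0, 0.0])
t_pos = np.array([0.1, 0.0])   # nearby same-class anchor -> small d_pos
t_neg = np.array([3.0, 4.0])   # distant different-class anchor -> large d_neg
print(margin_distillation_loss(s, t_pos, t_neg))  # 0.0, margin satisfied
```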
Perineural invasion refers to a process in which tumor cells invade, surround, or pass through nerves; it is an indicator of tumor aggressiveness and is associated with poor prognosis. Herein, we propose an efficient and effective hybrid computational method for automated detection of perineural invasion junctions in multi-tissue digitized histology images. The proposed approach detects perineural invasion junctions in three stages. The first stage identifies candidate regions for perineural invasion. The second stage delineates perineural invasion junctions. The last stage eliminates false positive regions. The first two stages exploit an advanced deep neural network; the last stage utilizes hand-crafted features and a conventional machine learning algorithm. To evaluate the proposed approach, we employ 150 whole slide images from the PAIP2021 Challenge: Perineural Invasion in Multiple Organ Cancer and conduct five-fold cross-validation. The experimental results show that the proposed hybrid approach could facilitate automated, accurate identification of perineural invasion in histology images.
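The three-stage structure can be sketched as a chain of functions; every stage below is a trivial placeholder with made-up score fields, standing in for the paper's networks and hand-crafted-feature classifier:

```python
# Hypothetical three-stage pipeline skeleton mirroring the abstract's
# structure. The region dicts and score thresholds are invented for
# illustration, not taken from the paper.

def find_candidates(slide):
    """Stage 1: flag regions that might contain perineural invasion."""
    return [r for r in slide if r["nerve_score"] > 0.5]

def delineate(regions):
    """Stage 2: attach a junction outline to each candidate (placeholder)."""
    return [dict(r, junction=True) for r in regions]

def remove_false_positives(regions):
    """Stage 3: drop candidates rejected by a final classifier."""
    return [r for r in regions if r["tumor_score"] > 0.5]

slide = [
    {"nerve_score": 0.9, "tumor_score": 0.8},   # true PNI junction
    {"nerve_score": 0.7, "tumor_score": 0.1},   # nerve without tumor
    {"nerve_score": 0.2, "tumor_score": 0.9},   # tumor without nerve
]
detections = remove_false_positives(delineate(find_candidates(slide)))
print(len(detections))  # 1
```

Splitting detection into cheap candidate generation followed by stricter filtering is a standard way to keep whole-slide processing tractable.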
In digital pathology, nuclei segmentation remains a challenging task due to the high heterogeneity and variability of nuclei, in particular clustered and overlapping nuclei. We propose a distance ordinal regression loss for improved nuclei instance segmentation in digitized tissue specimen images. We build a convolutional neural network with two decoder branches: the first conducts nuclear pixel prediction, and the second predicts the distance to the nuclear center, which is used to identify nuclear boundaries and to separate overlapping nuclei. Adopting a distance-decreasing discretization strategy, we recast the distance prediction as an ordinal regression problem. To evaluate the proposed method, we conduct experiments on multiple independent multi-tissue histology image datasets. The experimental results demonstrate the effectiveness of the proposed model.
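Ordinal regression over discretized distances is commonly implemented with cumulative binary targets; the sketch below assumes one plausible reading of the distance-decreasing discretization (finer bins toward the nuclear boundary), and the bin edges are invented:

```python
import numpy as np

def ordinal_targets(distance, edges):
    """Discretize a normalized distance-to-nucleus-center into ordinal
    bins and return the cumulative 0/1 target vector used in ordinal
    regression: entry j is 1 iff the distance exceeds edge j."""
    return (distance > edges).astype(int)

# Bin edges with decreasing widths toward the boundary -- one plausible
# 'distance-decreasing' scheme; the paper's exact edges may differ.
edges = np.array([0.6, 0.85, 0.95])

print(ordinal_targets(1.0, edges))   # [1 1 1]  at the boundary
print(ordinal_targets(0.9, edges))   # [1 1 0]
print(ordinal_targets(0.3, edges))   # [0 0 0]  near the center
```

Because adjacent targets differ in only one entry, misclassifying by one bin is penalized less than jumping several bins, which is the point of treating distance as ordinal rather than categorical.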
Acute ischemic stroke (AIS) is not only a common cause of disability but also a leading cause of mortality worldwide. Recent studies have shown that collateral status can play a vital role in assessing AIS and determining treatment options. Herein, we propose a joint regression and ordinal learning approach for AIS, built upon 3-D deep convolutional neural networks, that facilitates automated and objective collateral imaging from dynamic susceptibility contrast-enhanced magnetic resonance perfusion (DSC-MRP). DSC-MRP images of 159 AIS subjects and 186 healthy subjects were employed to evaluate the proposed approach. The collateral status was manually assessed in the arterial, capillary, early and late venous, and delay phases and served as the ground truth. The proposed method, on average, obtained a squared correlation coefficient of 0.901, a mean absolute error of 0.063, a Tanimoto measure of 0.945, and a structural similarity index of 0.933. The quantitative results for AIS and healthy subjects were comparable. Overall, the experimental results suggest that the proposed network could aid in automating the evaluation of collateral status and enhancing the quality and yield of AIS diagnosis.
In digital pathology, deep learning approaches have been increasingly applied and shown to be effective in analyzing digitized tissue specimen images. Such approaches have generally chosen an arbitrary scale or resolution at which the images are analyzed, for reasons including computational cost and complexity. However, the tissue characteristics indicative of cancer tend to present at differing scales. Herein, we propose a framework that enables deep convolutional neural networks to perform multiscale histological analysis of tissue specimen images in an efficient and effective manner. A deep residual neural network is shared across multiple scales to extract high-level features. The high-level features from multiple scales are aggregated and transformed in a way that embeds the scale information in the network, and the transformed features are used to classify tissue images as cancer or benign. The proposed method is compared to other methodologies for combining features from different scales, namely combining the multiscale features via 1) concatenation, 2) addition, and 3) convolution. Tissue microarrays (TMAs) were employed to evaluate the proposed method and the competing methods. Three TMAs, including 225 benign and 377 cancer tissue samples, were used as the training dataset; two TMAs with 151 benign and 252 cancer tissue samples were utilized as the testing dataset. The proposed method obtained an accuracy of 0.953 and an area under the receiver operating characteristic curve (AUC) of 0.971 (95% CI: 0.955-0.987), outperforming the competing methods. This suggests that the proposed multiscale approach, via a shared neural network and a scale embedding scheme, could aid in improving digital pathology analysis and cancer pathology.
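One way to read the scale embedding scheme, sketched with random toy vectors (the feature size, scales, and embedding initialization below are all assumptions): tag each scale's backbone features with a per-scale vector before pooling, so the aggregate stays fixed-size, unlike concatenation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature vectors from a shared backbone at three scales (toy size 8).
feats = {s: rng.normal(size=8) for s in (1.0, 0.5, 0.25)}

# One learned embedding vector per scale (randomly initialized here);
# adding it tags each feature with the scale it came from.
scale_emb = {s: rng.normal(size=8) for s in feats}

aggregated = np.mean([feats[s] + scale_emb[s] for s in feats], axis=0)
print(aggregated.shape)  # (8,) -- fixed size regardless of scale count,
# whereas concatenation grows linearly with the number of scales.
```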
Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process images captured by UAVs at low altitudes and to identify infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%, outperforming a standard machine learning algorithm, which obtained 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
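The field-partitioning step can be illustrated with a minimal K-means on pixel colors; the three "colors" below are invented stand-ins for radish, bare ground, and mulching film, and the deterministic initialization is a simplification (the paper additionally uses a softmax classifier):

```python
import numpy as np

def kmeans(pixels, k, iters=10):
    """Minimal K-means on pixel color vectors -- a toy stand-in for the
    field-partitioning step. Deterministic init: evenly spaced samples."""
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest center (squared distance).
        labels = ((pixels[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Hypothetical RGB samples: radish (green-ish), bare ground (brown-ish),
# mulching film (gray-ish) -- two pixels each.
pixels = np.array([[0, 200, 0], [10, 190, 5],
                   [120, 80, 40], [125, 85, 45],
                   [200, 200, 200], [195, 198, 205]], dtype=float)
labels = kmeans(pixels, k=3)
print(labels)  # [0 0 1 1 2 2] -- paired samples share a cluster
```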
A prostate computer-aided diagnosis (CAD) system based on random forest is proposed to detect prostate cancer using a combination of spatial, intensity, and texture features extracted from three sequences: T2W, ADC, and B2000 images. The random forest training considers instance-level weighting for equal treatment of small and large cancerous lesions, as well as small and large prostate backgrounds. Two other approaches, based on an AutoContext pipeline intended to make better use of sequence-specific patterns, were also considered: one pipeline uses a random forest on individual sequences, while the other uses an image filter designed to produce probability-map-like images. These were compared to a previously published CAD approach based on a support vector machine (SVM) evaluated on the same data. The random forest, features, sampling strategy, and instance-level weighting improve prostate cancer detection performance [area under the curve (AUC) 0.93] in comparison to the SVM (AUC 0.86) on the same test data. Using a simple image filtering technique as a first-stage detector to highlight likely regions of prostate cancer aids learning stability over a learning-based first stage, owing to the visibility and ambiguity of annotations in each sequence.
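One plausible reading of instance-level weighting, sketched below under that assumption: give each pixel a weight inversely proportional to the size of the lesion (or background region) it belongs to, so every instance contributes equally to training regardless of its pixel count:

```python
import numpy as np

def instance_weights(instance_ids):
    """Per-pixel sample weights: weight = 1 / size of the instance the
    pixel belongs to, so each lesion or background region contributes a
    total weight of 1.0. A plausible reading of 'instance-level
    weighting'; the paper's exact scheme may differ."""
    ids, counts = np.unique(instance_ids, return_counts=True)
    size = dict(zip(ids.tolist(), counts.tolist()))
    return np.array([1.0 / size[i] for i in instance_ids])

# Two lesions: instance 1 has 1 pixel, instance 2 has 3 pixels.
w = instance_weights(np.array([1, 2, 2, 2]))
print(w)                    # [1.  0.333...  0.333...  0.333...]
print(w[0], w[1:].sum())    # each lesion totals 1.0
```

Such weights can be passed directly as `sample_weight` to most tree-ensemble training APIs.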
Prostate cancer (PCa) is the second most common cause of cancer-related deaths in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists, leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that can automatically detect lesions on mpMRI for readers to use as a reference. We investigated a deep convolutional neural network (DCNN) architecture to find an improved solution for PCa detection on mpMRI, adopting a network architecture from a state-of-the-art edge detector that takes an image as input and produces a probability map. Two-fold cross-validation along with receiver operating characteristic (ROC) and free-response ROC (FROC) analyses were used to determine the performance of our deep-learning-based prostate CAD (CADDL). Its efficacy was compared to an existing prostate CAD system based on hand-crafted features, evaluated on the same test set. CADDL had an 86% detection rate at a 20% false-positive rate, while the top-down learning CAD had an 80% detection rate at the same false-positive rate; this translated to 94% and 85% detection rates, respectively, at 10 false positives per patient on the FROC. A CNN-based CAD can detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate CAD, showing potential for further development.
We present a deep learning approach for detecting prostate cancers. The approach consists of two steps. In the first step, we perform tissue segmentation that identifies lumens within digitized prostate tissue specimen images. Intensity- and texture-based image features are computed at five different scales, and a multiview boosting method is adopted to cooperatively combine the image features from differing scales and to identify lumens. In the second step, we utilize convolutional neural networks (CNNs) to automatically extract high-level image features of lumens and to predict cancers. The segmented lumens are rescaled to reduce computational complexity, and data augmentation by scaling, rotating, and flipping the rescaled images is applied to avoid overfitting. We evaluate the proposed method using two tissue microarrays (TMAs): TMA1 includes 162 tissue specimens (73 benign and 89 cancer) and TMA2 comprises 185 tissue specimens (70 benign and 115 cancer). In cross-validation on TMA1, the proposed method achieved an AUC of 0.95 (CI: 0.93-0.98). Trained on TMA1 and tested on TMA2, the CNN obtained an AUC of 0.95 (CI: 0.92-0.98). This demonstrates that the proposed method can potentially improve prostate cancer pathology.
Histopathology forms the gold standard for cancer diagnosis and therapy, and generally relies on manual examination of microscopic structural morphology within tissue. Fourier-Transform Infrared (FT-IR) imaging is an emerging vibrational spectroscopic imaging technique, especially in a High-Definition (HD) format, that provides the spatial specificity of microscopy at magnifications used in diagnostic surgical pathology. While it has been shown for standard imaging that IR absorption by tissue creates a strong signal where the spectrum at each pixel is a quantitative “fingerprint” of the molecular composition of the sample, here we show that this fingerprint also enables direct digital pathology without the need for stains or dyes for HD imaging. An assessment of the potential of HD imaging to improve diagnostic pathology accuracy is presented.
Computerized histopathology image analysis enables an objective, efficient, and quantitative assessment of digitized histopathology images. Such analysis often requires accurate and efficient detection and segmentation of histological structures such as glands, cells, and nuclei. The segmentation is used to characterize tissue specimens and to determine the disease status or outcome. The segmentation of nuclei, in particular, is challenging due to overlapping or clumped nuclei. Here, we propose a nuclei seed detection method for individual and overlapping nuclei that utilizes gradient orientation (direction) information. The initial nuclei segmentation is provided by a multiview boosting approach. The angle of the gradient orientation is computed and traced along the nuclear boundaries. Taking the first derivative of the gradient orientation angle, high-concavity points (junctions) are discovered. False junctions are found and removed by adopting a greedy search scheme with a goodness-of-fit statistic in a linear least squares sense. The junctions then determine boundary segments. Partial boundary segments belonging to the same nucleus are identified and combined by examining the overlapping area between them. Using the final set of boundary segments, we generate the list of seeds in tissue images. The method achieved an overall precision of 0.89 and a recall of 0.88 in comparison to the manual segmentation.
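The idea of locating high-concavity junctions where two nuclei overlap can be illustrated on a polygonal contour; this sketch flags concave vertices via the sign of the turn at each vertex, a simplified stand-in for differentiating the gradient-orientation angle along a traced boundary:

```python
import numpy as np

def concave_vertices(poly):
    """Indices of concave vertices of a counter-clockwise polygon -- a
    simplified stand-in for the high-concavity junction points found by
    differentiating the gradient-orientation angle along a boundary."""
    n = len(poly)
    out = []
    for i in range(n):
        a, b, c = poly[i - 1], poly[i], poly[(i + 1) % n]
        # z-component of the cross product of edges (a->b) and (b->c).
        cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
        if cross < 0:          # right turn on a CCW contour -> concavity
            out.append(i)
    return out

# An L-shaped (non-convex) contour, listed counter-clockwise.
L_shape = np.array([[0, 0], [2, 0], [2, 1], [1, 1], [1, 2], [0, 2]])
print(concave_vertices(L_shape))  # [3] -- the single re-entrant corner
```

On the boundary of two overlapping nuclei, such re-entrant points mark where the combined contour should be split into per-nucleus segments.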
Digitized histopathology images have great potential for improving or facilitating current assessment tools in cancer pathology. In order to develop accurate and robust automated methods, precise segmentation of histologic objects such as epithelium, stroma, and nuclei is necessary, in the hope of extracting information not otherwise obvious to the subjective eye. Here, we propose a multiview boosting approach to segment histologic objects in prostate tissue. Tissue specimen images are first represented at different scales using a Gaussian kernel and converted into several color spaces such as HSV and La*b*. Intensity- and texture-based features are extracted from the converted images. Adopting a multiview boosting approach, we effectively learn a classifier to predict the histologic class of a pixel in a prostate tissue specimen. The method attempts to integrate the information from multiple scales (or views). 18 prostate tissue specimens from 4 patients were employed to evaluate the new method. The method was trained on 11 tissue specimens, including 75,832 epithelial and 103,453 stroma pixels, and tested on 55,319 epithelial and 74,945 stroma pixels from 7 tissue specimens. The technique showed 96.7% accuracy and, as summarized in a receiver operating characteristic (ROC) plot, achieved an area under the ROC curve (AUC) of 0.983 (95% CI: 0.983-0.984).
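The multi-scale Gaussian representation can be sketched in one dimension; each smoothed copy of the signal is one "view" for the booster to combine, and the sigma values below are illustrative only (2-D tissue images would apply the same kernel separably along each axis):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with radius ~3*sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def multiscale(signal, sigmas=(1.0, 1.5, 2.0)):
    """Represent a 1-D intensity profile at several Gaussian scales --
    the per-pixel 'views' a multiview booster would combine."""
    return [np.convolve(signal, gaussian_kernel(s), mode='same')
            for s in sigmas]

signal = np.zeros(21)
signal[10] = 1.0                    # an impulse to smooth
views = multiscale(signal)
print([int(v.argmax()) for v in views])  # [10, 10, 10] -- the peak
# stays in place while its spread grows with the scale.
```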