Terahertz (THz) medical imaging is a promising noninvasive technique for monitoring skin conditions, early detection of human skin cancer, and tracking recovery from burns and wounds. It can be used to visualize the healing process directly through clinical dressings and restorative ointments, minimizing the frequency of dressing changes. The THz imaging technique is also cost-effective compared with magnetic resonance imaging. Our aim was to develop an approach capable of providing better image resolution than commercially available THz imaging cameras.
Approach
Terahertz-to-infrared (THz-to-IR) converters can visualize human skin cancer by converting the cancer's contrast patterns, recognizable in the THz radiation range, into IR patterns detectable by a standard IR imaging camera. At the core of the suggested THz-to-IR converters are flat matrices transparent both in the THz range to be visualized and in the operating range of the IR camera. These matrices contain embedded metal nanoparticles (NPs) that, when irradiated with THz rays, convert the energy of THz photons into heat and become nanosources of IR radiation detectable by the IR camera.
Results
We consider ways of creating the simplest converter, as well as a more complex converter with wider capabilities. The first is a gelatin matrix with 8.5-nm-diameter gold NPs; the second is a polystyrene matrix with 2-nm-diameter NPs of copper–nickel MONEL® alloy 404.
Conclusions
A THz-to-IR converter paired with an IR camera is promising in that it could provide a better image of oncological pathology than commercially available THz imaging cameras do.
Our purpose is to investigate the timing resolution of edge-on silicon strip detectors for photon-counting spectral computed tomography (CT). Today, the detection time of individual x-rays is not measured, but in the future, timing information could be valuable for accurately reconstructing the interactions caused by each primary photon.
Approach
We assume a pixel size of 12 × 500 μm² and a detector with double-sided readout, with low-noise CMOS electronics for pulse processing for every pixel on each side. Because of the electrode width in relation to the wafer thickness, the induced current signals are largely dominated by charge movement close to the collecting electrodes. With double-sided readout electrodes, at least two signals are generated for each interaction. By comparing the timing of the induced current pulses, the time of the interaction can be determined and used to identify interactions that originate from the same incident photon. Using a Monte Carlo simulation of photon interactions in combination with a charge transport model, we evaluate the performance of estimating the interaction time for different interaction positions.
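As a toy illustration of the timing idea, the sketch below (our own simplification, not the paper's simulation chain) greedily groups pulse timestamps into coincidence clusters so that interactions from the same incident photon can be identified; the 1-ns window echoes the time resolution reported below.

```python
def group_coincidences(timestamps_ns, window_ns=1.0):
    """Greedily cluster pulse timestamps (ns) into coincidence groups."""
    t = sorted(timestamps_ns)
    groups, current = [], [t[0]]
    for ti in t[1:]:
        # A pulse within the window is attributed to the same incident photon.
        if ti - current[-1] <= window_ns:
            current.append(ti)
        else:
            groups.append(current)
            current = [ti]
    groups.append(current)
    return groups

# A Compton scatter depositing charge in two pixels 0.3 ns apart,
# plus an unrelated photon arriving 50 ns later:
print(group_coincidences([100.0, 100.3, 150.2]))  # [[100.0, 100.3], [150.2]]
```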
Results
Our simulations indicate that a time resolution of 1 ns can be achieved with a noise level of 0.5 keV. In a detector with no electronic noise, the corresponding time resolution is ∼0.1 ns.
Conclusions
Time resolution in edge-on silicon strip CT detectors can potentially be used to increase the signal-to-noise ratio and energy resolution by helping to identify Compton-scattered photons in the detector.
TOPICS: Aorta, Image segmentation, 3D modeling, Education and training, Data modeling, Visualization, Feature fusion, Contrast transfer function, Aneurysms, Tissues
Segmentation of vascular structures in preoperative computed tomography (CT) is a preliminary step for computer-assisted endovascular navigation. It is a challenging task when contrast medium enhancement is reduced or impossible, as in endovascular abdominal aneurysm repair for patients with severe renal impairment. In non-contrast-enhanced CTs, segmentation is currently hampered by low contrast, similar topological forms, and size imbalance. To tackle these problems, we propose a novel fully automatic approach based on a convolutional neural network.
Approach
The proposed method fuses features from different dimensions via three mechanisms: channel concatenation, dense connection, and spatial interpolation. These fusion mechanisms enhance the features in non-contrast CTs, where the boundary of the aorta is ambiguous.
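As a rough sketch of how these three mechanisms might be combined in a segmentation network (channel sizes, ordering, and the block interface are our assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionBlock(nn.Module):
    """Illustrative fusion of a low-resolution feature map into a
    high-resolution one via the three mechanisms named in the abstract:
    spatial interpolation, channel concatenation, and a dense (skip)
    connection."""
    def __init__(self, ch_high, ch_low):
        super().__init__()
        self.conv = nn.Conv2d(ch_high + ch_low, ch_high, kernel_size=3, padding=1)

    def forward(self, f_high, f_low):
        # Spatial interpolation: upsample the coarse map to match.
        f_up = F.interpolate(f_low, size=f_high.shape[-2:], mode="bilinear",
                             align_corners=False)
        # Channel concatenation of the two feature maps.
        fused = torch.cat([f_high, f_up], dim=1)
        # Dense connection: re-inject the high-resolution input.
        return self.conv(fused) + f_high

block = FusionBlock(ch_high=64, ch_low=128)
out = block(torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```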
Results
All of the networks are validated by three-fold cross-validation on our dataset of non-contrast CTs, which contains 5749 slices in total from 30 patients. Our method achieves an overall Dice score of 88.7%, which is better than the results reported in related works.
Conclusions
The analysis indicates that our method yields competitive performance by overcoming the above-mentioned problems in most general cases. Further experiments on our non-contrast CTs demonstrate the superiority of the proposed method, especially in low-contrast, similar-shaped, and extreme-sized cases.
We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. Toward this, we propose a simple yet efficient deep network architecture, called the relational reasoning network (RRN), to accurately learn the local and global relations among landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones.
Approach
The proposed RRN works in an end-to-end manner, utilizing learned relations among the landmarks based on dense-block units. Given a few landmarks as input, RRN treats landmarking as a data imputation problem in which the predicted landmarks are considered missing.
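A minimal sketch of this imputation view, assuming a plain fully connected network over masked landmark coordinates rather than the published dense-block RRN:

```python
import torch
import torch.nn as nn

class LandmarkImputer(nn.Module):
    """Hypothetical sketch of landmarking as imputation: known landmark
    coordinates (with a mask) go in, all landmark coordinates come out,
    so the missing ones are 'imputed'. Not the published RRN."""
    def __init__(self, n_landmarks, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_landmarks * 4, hidden),  # (x, y, z, known_flag) each
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_landmarks * 3),  # (x, y, z) for every landmark
        )

    def forward(self, coords, mask):
        # coords: (B, n, 3); mask: (B, n, 1) with 1 = landmark provided.
        x = torch.cat([coords * mask, mask], dim=-1).flatten(1)
        return self.net(x).view(coords.shape)

model = LandmarkImputer(n_landmarks=9)
coords = torch.randn(2, 9, 3)
mask = (torch.rand(2, 9, 1) > 0.5).float()  # hide roughly half the landmarks
pred = model(coords, mask)  # supervise against all ground-truth coordinates
```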
Results
We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With a fourfold cross-validation technique, we obtained an average root mean squared error of <2 mm per landmark. Our proposed RRN revealed unique relationships among the landmarks that help infer the informativeness of landmark points. The proposed system identifies missing landmark locations accurately even when severe pathology or deformations are present in the bones.
Conclusions
Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) could easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations among objects using deep learning.
This paper presents a deep learning (DL)-based method called TextureWGAN, designed to preserve image texture while maintaining high pixel fidelity for computed tomography (CT) inverse problems. Over-smoothing of images by postprocessing algorithms is a well-known problem in the medical imaging industry. Our method therefore aims to solve the over-smoothing problem without compromising pixel fidelity.
Approach
TextureWGAN extends the Wasserstein GAN (WGAN). The WGAN can create an image that looks genuine, which helps preserve image texture. However, an output image from the WGAN is not correlated with the corresponding ground truth image. To solve this problem, we introduce a multitask regularizer (MTR) into the WGAN framework to make the generated image highly correlated with the corresponding ground truth image, so that TextureWGAN can achieve high pixel fidelity. The MTR can combine multiple objective functions: in this work, we adopt a mean squared error (MSE) loss to maintain pixel fidelity and a perceptual loss to improve the look and feel of the resulting images. Furthermore, the regularization parameters in the MTR are trained along with the generator network weights to maximize the performance of the TextureWGAN generator.
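One common way to realize trainable regularization parameters is the log-variance (uncertainty) weighting trick; the sketch below illustrates that idea under our own assumptions and is not necessarily TextureWGAN's exact scheme.

```python
import torch
import torch.nn as nn

class MultitaskRegularizer(nn.Module):
    """Sketch of a multitask regularizer: combine several generator
    objectives with trainable weights via learned log-variances."""
    def __init__(self, n_terms=3):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_terms))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            # Each term is scaled by a learned precision; the additive log
            # term keeps the weights from collapsing to zero.
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

mtr = MultitaskRegularizer()
adv_loss = torch.tensor(0.8)    # e.g., -E[D(G(x))] from the WGAN critic
mse_loss = torch.tensor(0.05)   # pixel fidelity term
perc_loss = torch.tensor(0.2)   # perceptual (feature-space) term
total = mtr([adv_loss, mse_loss, perc_loss])
# total.backward() would update log_vars along with the generator weights.
```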
Results
The proposed method was evaluated in CT image reconstruction applications as well as in super-resolution and image-denoising applications. We conducted extensive qualitative and quantitative evaluations, using PSNR and SSIM for pixel fidelity analysis and first- and second-order statistical texture analysis for image texture. The results show that TextureWGAN preserves image texture more effectively than well-known methods such as a conventional CNN and the nonlocal means filter (NLM). In addition, we demonstrate that TextureWGAN achieves competitive pixel fidelity compared with the CNN and NLM: the CNN with MSE loss can attain high pixel fidelity, but it often damages image texture.
Conclusions
TextureWGAN can preserve image texture while maintaining pixel fidelity. The MTR not only helps stabilize the training of the TextureWGAN generator but also maximizes the generator's performance.
TOPICS: Prostate, Education and training, Magnetic resonance imaging, Image segmentation, Performance modeling, Deep learning, Magnetism, Principal component analysis, Image classification, Visual process modeling
To bypass manual data preprocessing and optimize deep learning performance, we developed and evaluated CROPro, a tool to standardize automated cropping of prostate magnetic resonance (MR) images.
Approach
CROPro enables automatic cropping of MR images regardless of patient health status, image size, prostate volume, or pixel spacing. It can crop foreground pixels from a region of interest (e.g., the prostate) with different image sizes, pixel spacings, and sampling strategies. Performance was evaluated in the context of clinically significant prostate cancer (csPCa) classification. Transfer learning was used to train five convolutional neural network (CNN) and five vision transformer (ViT) models using different combinations of cropped image size (64 × 64, 128 × 128, and 256 × 256 pixels²), pixel spacing (0.2 × 0.2, 0.3 × 0.3, 0.4 × 0.4, and 0.5 × 0.5 mm²), and sampling strategy (center, random, and stride cropping) over the prostate. T2-weighted MR images (N = 1475) from the publicly available PI-CAI challenge were used to train (N = 1033), validate (N = 221), and test (N = 221) all models.
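The following sketch re-implements the core cropping idea for illustration (the function name, interface, and padding strategy are ours, not CROPro's API): resample to the target pixel spacing, then cut a fixed-size patch around the prostate center, with optional random jitter (stride cropping omitted for brevity).

```python
import numpy as np
from scipy.ndimage import zoom

def crop_prostate(img, spacing, center_rc, out_size=128, out_spacing=0.5,
                  strategy="center", rng=None):
    """Crop a fixed-size patch at a target pixel spacing (illustrative)."""
    # Resample so that one pixel covers out_spacing x out_spacing mm.
    img = zoom(img, (spacing[0] / out_spacing, spacing[1] / out_spacing), order=1)
    r = int(center_rc[0] * spacing[0] / out_spacing)
    c = int(center_rc[1] * spacing[1] / out_spacing)
    if strategy == "random":  # jitter the window around the center
        rng = rng or np.random.default_rng()
        r += int(rng.integers(-out_size // 4, out_size // 4))
        c += int(rng.integers(-out_size // 4, out_size // 4))
    half = out_size // 2
    # Pad so the window stays in bounds (assumes the center is not negative
    # after jitter); in padded coordinates the window starts at (r, c).
    padded = np.pad(img, half, mode="constant")
    return padded[r:r + out_size, c:c + out_size]

patch = crop_prostate(np.random.rand(320, 320), spacing=(0.3, 0.3),
                      center_rc=(160, 160), out_size=128, out_spacing=0.5)
print(patch.shape)  # (128, 128)
```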
Results
Among CNNs, SqueezeNet with stride cropping (image size 128 × 128, pixel spacing 0.2 × 0.2 mm²) achieved the best classification performance (0.678 ± 0.006). Among ViTs, ViT-H/14 with random cropping (image size 64 × 64, pixel spacing 0.5 × 0.5 mm²) performed best (0.756 ± 0.009). Model performance depended on the cropped area, with the optimal size generally larger for center cropping (∼40 cm²) than for random or stride cropping (∼10 cm²).
Conclusion
We found that csPCa classification performance of CNNs and ViTs depends on the cropping settings. We demonstrated that CROPro is well suited to optimize these settings in a standardized manner, which could improve the overall performance of deep learning models.
Deep learning has demonstrated excellent performance enhancing noisy or degraded biomedical images. However, many of these models require access to a noise-free version of the images to provide supervision during training, which limits their utility. Here, we develop an algorithm (noise2Nyquist) that leverages the fact that Nyquist sampling provides guarantees about the maximum difference between adjacent slices in a volumetric image, which allows denoising to be performed without access to clean images. We aim to show that our method is more broadly applicable and more effective than other self-supervised denoising algorithms on real biomedical images, and provides comparable performance to algorithms that need clean images during training.
Approach
We first provide a theoretical analysis of noise2Nyquist and an upper bound for denoising error based on sampling rate. We then demonstrate its denoising effectiveness on a simulated example as well as on real fluorescence confocal microscopy, computed tomography, and optical coherence tomography images.
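The central trick can be summarized in a few lines: because adjacent slices of a Nyquist-sampled volume share content but carry independent noise, the neighboring slice can stand in for the clean training target, in the spirit of noise2noise. The sketch below is our paraphrase of that training signal with a stand-in network, not the authors' code.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(  # placeholder model; any image-to-image net works
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
volume = torch.randn(64, 1, 128, 128)  # noisy z-stack: (slices, C, H, W)

for z in range(volume.shape[0] - 1):
    noisy_input = volume[z:z + 1]
    neighbor_target = volume[z + 1:z + 2]  # adjacent slice as "clean" target
    loss = nn.functional.mse_loss(denoiser(noisy_input), neighbor_target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```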
Results
We find that our method has better denoising performance than existing self-supervised methods and is applicable to datasets where clean versions are not available. Our method resulted in a peak signal-to-noise ratio (PSNR) within 1 dB and a structural similarity (SSIM) index within 0.02 of supervised methods. On medical images, it outperforms existing self-supervised methods by an average of 3 dB in PSNR and 0.1 in SSIM.
Conclusion
noise2Nyquist can be used to denoise any volumetric dataset sampled at or above the Nyquist rate, making it useful for a wide variety of existing datasets.
Optical coherence tomography (OCT) is a noninvasive, high-resolution imaging modality capable of providing both cross-sectional and three-dimensional images of tissue microstructures. Owing to its low-coherence interferometry nature, however, OCT inevitably suffers from speckle, which diminishes image quality and hampers precise disease diagnosis. Despeckling mechanisms are therefore highly desirable to alleviate the influence of speckle on OCT images.
Approach
We propose a multiscale denoising generative adversarial network (MDGAN) for speckle reduction in OCT images. A cascaded multiscale module is first adopted as the MDGAN basic block to increase the network's learning capability and exploit multiscale context, and a spatial attention mechanism is then introduced to refine the denoised images. Finally, to learn the rich features of OCT images, a deep back-projection layer alternately upscales and downscales the feature maps of MDGAN.
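For illustration, a back-projection unit in this upscale/downscale spirit could look as follows (a DBPN-style sketch under our own assumptions; MDGAN's actual layer may differ):

```python
import torch
import torch.nn as nn

class BackProjection(nn.Module):
    """Rough sketch of a back-projection unit: features are upscaled,
    projected back down, and the residual between the reconstruction and
    the input is used to correct the upscaled map."""
    def __init__(self, ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(ch, ch, kernel_size=4, stride=2, padding=1)
        self.down = nn.Conv2d(ch, ch, kernel_size=4, stride=2, padding=1)
        self.up_res = nn.ConvTranspose2d(ch, ch, kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        h = self.up(x)                      # upscale features (2x)
        x_rec = self.down(h)                # project back to input scale
        return h + self.up_res(x_rec - x)   # correct with the residual

out = BackProjection(32)(torch.randn(1, 32, 64, 64))
print(out.shape)  # torch.Size([1, 32, 128, 128])
```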
Results
Experiments on two different OCT image datasets were conducted to verify the effectiveness of the proposed MDGAN scheme. Compared with state-of-the-art methods, MDGAN improves both peak signal-to-noise ratio and signal-to-noise ratio by up to 3 dB, while its structural similarity index and contrast-to-noise ratio are 1.4% and 1.3% lower, respectively, than those of the best existing methods.
Conclusions
Results demonstrate that MDGAN is effective and robust for OCT image speckle reduction and outperforms the best state-of-the-art denoising methods in different cases. It could help alleviate the influence of speckle in OCT images and improve OCT imaging-based diagnosis.
TOPICS: Image segmentation, Education and training, Data modeling, Medical imaging, Performance modeling, Machine learning, Current controlled current source, Cardiovascular magnetic resonance imaging, Ablation, Heart
Neural networks have the potential to automate medical image segmentation but require expensive labeling efforts. While methods have been proposed to reduce the labeling burden, most have not been thoroughly evaluated on large clinical datasets or clinical tasks. We propose a method to train segmentation networks with limited labeled data and focus on thorough network evaluation.
Approach
We propose a semi-supervised method that leverages data augmentation, consistency regularization, and pseudolabeling, and we use it to train four cardiac magnetic resonance (MR) segmentation networks. We evaluate the models on multi-institutional, multiscanner, multidisease cardiac MR datasets using five cardiac functional biomarkers, which are compared with an expert's measurements using Lin's concordance correlation coefficient (CCC), the within-subject coefficient of variation (CV), and the Dice coefficient.
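A single training step combining the three ingredients might look like the sketch below (the loss weighting, confidence threshold, and noise-based "strong" augmentation are placeholder choices of ours, not the paper's recipe):

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled, unlabeled, threshold=0.9):
    """One illustrative step: supervised loss on labeled data plus a
    confidence-filtered pseudolabel/consistency loss on unlabeled data.
    Assumes a segmentation model returning (B, C, H, W) logits."""
    x, y = labeled
    sup_loss = F.cross_entropy(model(x), y)  # supervised term

    u = unlabeled
    with torch.no_grad():
        probs = model(u).softmax(dim=1)      # predictions on the clean input
        conf, pseudo = probs.max(dim=1)      # per-pixel pseudolabels
    u_strong = u + 0.1 * torch.randn_like(u)  # stand-in "strong" augmentation
    # Consistency term: only confident pixels contribute.
    unsup = F.cross_entropy(model(u_strong), pseudo, reduction="none")
    unsup_loss = (unsup * (conf > threshold)).mean()
    return sup_loss + unsup_loss
```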
Results
The semi-supervised networks achieve strong agreement with the expert (Lin's CCC > 0.8), a CV similar to the expert's, and strong generalization performance. We compare the error modes of the semi-supervised networks against those of fully supervised networks. We also evaluate semi-supervised model performance as a function of the amount of labeled training data and with different types of model supervision, showing that a model trained with 100 labeled image slices can achieve a Dice coefficient within 1.10% of a network trained with 16,000+ labeled image slices.
Conclusion
We evaluate semi-supervision for medical image segmentation using heterogeneous datasets and clinical metrics. As methods for training models with little labeled data become more common, knowledge about how they perform on clinical tasks, how they fail, and how they perform with different amounts of labeled data is useful to model developers and users.
TOPICS: Kidney, Image segmentation, Magnetic resonance imaging, Cancer detection, Data modeling, Solids, Education and training, Tumor growth modeling, Tissues, 3D modeling
Accurate detection of small renal masses (SRM) is a fundamental step toward automated classification of benign versus malignant or indolent versus aggressive renal tumors. Magnetic resonance imaging (MRI) may outperform computed tomography (CT) for SRM subtype differentiation due to improved tissue characterization but is less explored than CT. The objective of this study is to autonomously detect SRM on contrast-enhanced magnetic resonance images (CE-MRI).
Approach
In this paper, we describe a novel, fully automated methodology for accurate detection and localization of SRM on CE-MRI. We first determine the kidney boundaries using a U-Net convolutional neural network and then search for SRM within the localized kidney regions using a mixture-of-experts ensemble model based on the U-Net architecture. Our dataset contained CE-MRI scans of 118 patients with different solid kidney tumor subtypes, including renal cell carcinoma, oncocytoma, and fat-poor renal angiomyolipoma. We evaluated the proposed model on the entire CE-MRI dataset using fivefold cross-validation.
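The two-stage pipeline can be paraphrased as follows (a schematic sketch with hypothetical callables, not the authors' code): segment the kidneys, crop to their bounding box, and average the expert models' tumor probabilities inside it.

```python
import numpy as np

def detect_srm(volume, kidney_unet, experts, gate_weights=None):
    """Schematic two-stage detection: localize kidneys, then run a
    mixture-of-experts ensemble inside the kidney region. `kidney_unet`
    and each model in `experts` are assumed to be callables returning
    voxelwise probabilities; names and thresholds are hypothetical."""
    kidney_mask = kidney_unet(volume) > 0.5
    # Bounding box of the segmented kidneys along every axis.
    box = tuple(slice(ax.min(), ax.max() + 1) for ax in np.where(kidney_mask))
    roi = volume[box]
    w = gate_weights or [1.0 / len(experts)] * len(experts)
    # Mixture of experts: weighted average of per-model tumor probabilities.
    tumor_prob = sum(wi * m(roi) for wi, m in zip(w, experts))
    return box, tumor_prob > 0.5
```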
Results
The developed algorithm achieved a Dice similarity coefficient of 91.20 ± 5.41% (mean ± standard deviation) for kidney segmentation from 118 volumes comprising 25,025 slices. Our proposed ensemble model for SRM detection yielded a recall of 86.2% and a precision of 83.3% on the entire CE-MRI dataset.
Conclusions
We described a deep-learning-based method for fully automated SRM detection on CE-MR images, which has not been studied previously. The results are clinically important because SRM localization is a precursor to fully automated diagnosis of SRM subtypes.
TOPICS: Education and training, Breast density, Deep learning, Data modeling, Mammography, Feature extraction, Linear regression, Performance modeling, Cancer, Image processing
Mammographic breast density is one of the strongest risk factors for breast cancer. Density assessed by radiologists using visual analogue scales has been shown to provide better risk predictions than other methods. Our purpose is to build automated models, using deep learning trained on radiologist scores, that make accurate and consistent predictions.
Approach
We used a dataset of almost 160,000 mammograms, each with two independent density scores made by expert medical practitioners. We adapted two pretrained deep networks to produce feature vectors, which were then used for both linear and nonlinear regression to make density predictions. We also simulated an "optimal method," which allowed us to compare our results with a simulated upper bound on performance.
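The pipeline of frozen deep features feeding a simple regressor can be sketched as below, with a generic ImageNet ResNet-18 standing in for the pretrained networks actually used and ridge regression as one possible linear model:

```python
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge

# Frozen ImageNet backbone exposing a 512-d feature vector per image.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def features(batch):  # batch: (N, 3, 224, 224) preprocessed mammogram crops
    with torch.no_grad():
        return backbone(batch).numpy()

# Placeholder images and visual-analogue-scale scores in [0, 100].
X_train = features(torch.randn(32, 3, 224, 224))
y_train = torch.rand(32).numpy() * 100
regressor = Ridge(alpha=1.0).fit(X_train, y_train)
density_estimate = regressor.predict(features(torch.randn(1, 3, 224, 224)))
```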
Results
Our deep learning method produced estimates with a root mean squared error (RMSE) of 8.79 ± 0.21. The model's estimates of cancer risk perform at a level similar to that of human experts, within uncertainty bounds. We compared different model variants and demonstrated the high consistency of the model predictions. Our modeled "optimal method" produced image predictions with an RMSE of between 7.98 and 8.90 for craniocaudal images.
Conclusion
We demonstrated a deep learning framework, based on a transfer learning approach, that makes density estimates from radiologists' visual scores. Our approach requires modest computational resources and has the potential to be trained with limited quantities of data.
Image-Guided Procedures, Robotic Interventions, and Modeling
An augmented reality (AR) system was developed to facilitate free-hand real-time needle guidance for transperineal prostate (TP) procedures and to overcome the limitations of a traditional guidance grid.
Approach
The HoloLens AR system superimposes annotated anatomy derived from preprocedural volumetric images onto the patient and addresses the most challenging part of free-hand TP procedures by providing real-time needle tip localization and needle depth visualization during insertion. The AR system accuracy, i.e., the image overlay accuracy (n = 56), and the needle targeting accuracy (n = 24) were evaluated within a 3D-printed phantom. Three operators each used the planned-path guidance method (n = 4) and free-hand guidance (n = 4) to place needles into targets in a gel phantom, and placement error was recorded. The feasibility of the system was further evaluated by delivering soft tissue markers into tumors of an anthropomorphic pelvic phantom via the perineum.
Results
The image overlay error was 1.29 ± 0.57 mm, and the needle targeting error was 2.13 ± 0.52 mm. Planned-path guidance showed error similar to that of free-hand guidance (4.14 ± 1.08 mm versus 4.20 ± 1.08 mm, p = 0.90). The markers were successfully implanted either into or in close proximity to the target lesion.
Conclusions
The HoloLens AR system can provide accurate needle guidance for TP interventions. AR support for free-hand lesion targeting is feasible and may provide more flexibility than grid-based methods, due to the real-time 3D and immersive experience during free-hand TP procedures.
Image Perception, Observer Performance, and Technology Assessment
The aim of our study was to compare the image quality assessments of vascular anatomy between interventional radiographers and interventional radiologists using digital subtraction angiography (DSA) runs acquired during an interventional radiology procedure.
Approach
Visual grading characteristics (VGC) analysis was used to assess image quality by comparing two groups of images: one group of procedures in which the radiation dose was optimized (group A, n = 10) and one in which dose optimization was not performed (group B, n = 10). The radiation dose parameters were optimized based on theoretical and empirical evidence to achieve dose reductions during uterine artery embolization procedures. The two observer groups comprised interventional radiologists (n = 4) and interventional radiographers (n = 4). Each observer rated the image quality of 20 DSA runs using a five-point rating scale.
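For reference, the area under the VGC curve can be estimated nonparametrically from the two groups' ratings via the Mann-Whitney U statistic; the sketch below shows one standard way to do this (the ratings are made up):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def auc_vgc(ratings_optimized, ratings_reference):
    """Nonparametric AUC_VGC estimate: the probability that a randomly
    chosen optimized-dose image is rated higher than a reference image,
    with ties counted as 0.5 (a standard VGC computation, our sketch)."""
    u, p = mannwhitneyu(ratings_optimized, ratings_reference,
                        alternative="two-sided")
    auc = u / (len(ratings_optimized) * len(ratings_reference))
    return auc, p

# Example five-point ratings for the two image groups.
group_a = np.array([3, 4, 3, 5, 4, 3, 2, 4, 3, 4])  # optimized dose
group_b = np.array([4, 3, 3, 4, 2, 4, 3, 3, 4, 3])  # standard dose
print(auc_vgc(group_a, group_b))
```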
Results
The VGC analysis produced an area under the VGC curve (AUC_VGC) of 0.55 for interventional radiographers (P = 0.61) and 0.52 for interventional radiologists (P = 0.83). Optimization of the radiation dose parameters reduced the kerma-area product by 35% (P = 0.026, d = 0.5) and the reference air kerma (K_a,r) by 43% (P = 0.042, d = 0.5) between groups A and B.
Conclusions
The VGC analysis indicated that the image quality assessments of interventional radiographers were comparable with those of interventional radiologists, and the reduction in radiation dose had no effect on either observer group's assessment of vascular anatomy image quality.
TOPICS: Digital breast tomosynthesis, Breast density, Mammography, Breast, Cancer, Education and training, Diagnostics, Breast cancer, Cancer detection, Radiology
This study aims to investigate the diagnostic performance of Australian and Shanghai-based Chinese radiologists in reading full-field digital mammograms (FFDM) and digital breast tomosynthesis (DBT) images at different levels of breast density.
Approach
Eighty-two Australian radiologists interpreted a 60-case FFDM set, and 29 of them also reported a 35-case DBT set. Sixty Shanghai radiologists read the same FFDM set, and 32 read the DBT set. The diagnostic performance of the Australian and Shanghai radiologists was assessed against truth data (cancer cases were biopsy proven) and compared overall in terms of specificity, case sensitivity, lesion sensitivity, area under the receiver operating characteristic (ROC) curve, and the jackknife free-response receiver operating characteristic (JAFROC) figure of merit, stratified by case characteristics using the Mann–Whitney U test. The Spearman rank test was used to explore the association between radiologists' performance and their work experience in mammogram interpretation.
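For readers unfamiliar with these tests, the sketch below shows how the two named analyses map onto standard SciPy calls, using made-up numbers:

```python
from scipy.stats import mannwhitneyu, spearmanr

# Mann-Whitney U: compare performance scores between the two reader groups.
australian_scores = [0.82, 0.75, 0.88, 0.79, 0.91]  # illustrative values
shanghai_scores = [0.70, 0.66, 0.74, 0.69, 0.72]
print(mannwhitneyu(australian_scores, shanghai_scores, alternative="two-sided"))

# Spearman rank correlation: relate each reader's performance to experience.
years_experience = [5, 12, 8, 20, 15]
lesion_sensitivity = [0.61, 0.74, 0.66, 0.83, 0.78]
rho, p = spearmanr(years_experience, lesion_sensitivity)
print(rho, p)
```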
Results
Australian radiologists performed significantly better than Shanghai radiologists in low breast density for case sensitivity, lesion sensitivity, ROC, and JAFROC in the FFDM set (P < 0.0001); in high breast density, the Shanghai radiologists' lesion sensitivity and JAFROC were also lower than those of the Australian radiologists (P < 0.0001). In the DBT test set, Australian radiologists detected cancer better than Shanghai radiologists in both low and high breast density. The work experience of the Australian radiologists was positively linked to their diagnostic performance, whereas this association was not statistically significant for the Shanghai radiologists.
Conclusion
There were significant variations in reading performances between Australian and Shanghai radiologists in FFDM and DBT across different levels of breast density, lesion types, and lesion sizes. An effective training initiative tailored to suit local readers is essential to enhancing the diagnostic accuracy of Shanghai radiologists.
Digital whole slide imaging allows pathologists to view slides on a computer screen instead of under a microscope. Digital viewing allows real-time monitoring of pathologists' search behavior and neurophysiological responses during the diagnostic process. One particular neurophysiological measure, pupil diameter, could provide a basis for evaluating clinical competence during training or for developing tools that support the diagnostic process. Prior research shows that pupil diameter is sensitive to cognitive load and arousal, and to switches between exploration and exploitation of a visual image. Different categories of lesions in pathology pose different levels of challenge, as indicated by diagnostic disagreement among pathologists. If pupil diameter is sensitive to the perceived difficulty of diagnosing biopsies, eye tracking could potentially be used to identify biopsies that may benefit from a second opinion.
Approach
We measured case-onset baseline-corrected (phasic) and uncorrected (tonic) pupil diameter in 90 pathologists who each viewed and diagnosed 14 digital breast biopsy cases covering the diagnostic spectrum from benign to invasive breast cancer. Pupil data were extracted from the beginning of viewing and interpretation of each individual case. After removing 122 trials (<10%) with poor eye-tracking quality, 1138 trials remained. We used multiple linear regression with robust standard error estimates to account for dependent observations within pathologists.
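The clustered-error regression described above maps onto a standard statsmodels call; the sketch below reproduces the idea on simulated data (variable names and effect sizes are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trials: 90 pathologists x 14 cases, a difficulty rating per
# trial, and a phasic dilation that weakly tracks difficulty.
rng = np.random.default_rng(0)
n_readers, n_cases = 90, 14
df = pd.DataFrame({
    "pathologist": np.repeat(np.arange(n_readers), n_cases),
    "difficulty": rng.integers(1, 7, n_readers * n_cases),
})
df["phasic_dilation"] = 0.03 * df["difficulty"] + rng.normal(0, 0.1, len(df))

# OLS with standard errors clustered by pathologist, so repeated trials
# from the same reader do not count as independent observations.
model = smf.ols("phasic_dilation ~ difficulty", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["pathologist"]})
print(model.summary())
```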
Results
We found a positive association between the magnitude of phasic dilation and subject-centered difficulty ratings, and between the magnitude of tonic dilation and untransformed difficulty ratings. When controlling for case diagnostic category, only the tonic-difficulty relationship persisted.
Conclusions
Results suggest that tonic pupil dilation may indicate overall arousal differences between pathologists as they interpret biopsy cases and could signal a need for additional training, experience, or automated decision aids. Phasic dilation is sensitive to characteristics of biopsies that tend to elicit higher difficulty ratings and could indicate a need for a second opinion.
Three-dimensional (3D) printing has had a significant impact on patient care. However, there is a lack of standardization in quality assurance (QA) to ensure printing accuracy and precision given multiple printing technologies, variability across vendors, and inter-printer reliability issues. We investigated printing accuracy on a diverse selection of 3D printers commonly used in the medical field.
Approach
A specially designed 3D printing QA phantom was periodically printed on 16 printers used in our practice, covering five distinct printing technologies and eight different vendors. Longitudinal data were acquired over six months by printing the QA phantom monthly on each printer. Qualitative assessment and quantitative measurements were obtained for each printed phantom. Accuracy and precision were assessed by comparing quantitative measurements with reference values of the phantom. Data were then compared among printer models, vendors, and printing technologies; longitudinal trends were also analyzed.
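Accuracy and precision against the phantom's reference values can be summarized per printer in a few lines; the sketch below shows one plausible formulation (our framing, since the study's exact metrics are not given here):

```python
import numpy as np

def accuracy_precision(measured_mm, reference_mm):
    """Toy summary for one printer's monthly phantom measurements:
    accuracy as the mean signed error versus the reference dimension,
    and precision as the standard deviation of that error across prints."""
    err = np.asarray(measured_mm) - reference_mm
    return err.mean(), err.std(ddof=1)

# Six monthly measurements of a nominal 20.00-mm phantom feature.
monthly = [20.11, 20.08, 19.97, 20.14, 20.05, 20.09]
acc, prec = accuracy_precision(monthly, reference_mm=20.00)
print(f"accuracy (mean error): {acc:.3f} mm, precision (SD): {prec:.3f} mm")
```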
Results
Differences in 3D printing accuracy across printers were observed. Material jetting and vat photopolymerization printers were found to be the most accurate. Printers using the same 3D printing technology but from different vendors also showed differences in accuracy, most notably between vat photopolymerization printers from two different vendors. Furthermore, differences in accuracy were found between printers from the same vendor using the same printing technology, but different models/generations.
Conclusions
These results show how factors such as printing technology, vendor, and printer model can impact 3D printing accuracy, which should be appropriately considered in practice to avoid potential medical or surgical errors.