The detection of anatomical structures in medical images plays a crucial role as a preprocessing step for various downstream tasks. It is, however, challenging because of the highly variable appearances and intensity values in medical imaging data. In addition, annotated medical imaging datasets are scarce, owing to the high cost of annotation and the specialized knowledge it requires. These limitations motivate the development of automated and accurate few-shot object detection approaches. While general-purpose deep learning models are available for detecting objects in natural images, their applicability to medical imaging data remains uncertain and needs to be validated. To address this, we carry out an unbiased evaluation of state-of-the-art few-shot object detection methods for detecting head and neck anatomy in CT images. In particular, we choose Query Adaptive Few-Shot Object Detection (QA-FewDet), Meta Faster R-CNN, and Few-Shot Object Detection with Fully Cross-Transformer (FCT), and apply each model to detect various anatomical structures using novel datasets containing only a few images, ranging from 1- to 30-shot, during the fine-tuning stage. Our experimental results, obtained under the same setting for all methods, demonstrate that few-shot object detection methods can accurately detect anatomical structures, showing promising potential for integration into the clinical workflow.
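For readers who want to see how such a k-shot fine-tuning protocol is typically set up, the sketch below builds class-balanced support sets of 1 to 30 annotated CT images per anatomical structure. The annotation records and the commented `detector.fine_tune` call are hypothetical placeholders; this is not the actual QA-FewDet, Meta Faster R-CNN, or FCT code.

```python
import random
from collections import defaultdict

# Hypothetical annotation records: one dict per annotated CT slice.
# In practice these would be loaded from the head-and-neck dataset.
annotations = [
    {"image": "ct_0001.png", "label": "mandible"},
    {"image": "ct_0002.png", "label": "brainstem"},
    # ...
]

def build_k_shot_support(annotations, k, seed=0):
    """Sample at most k annotated images per anatomical class (a k-shot support set)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for ann in annotations:
        by_class[ann["label"]].append(ann)
    support = []
    for label, items in by_class.items():
        rng.shuffle(items)
        support.extend(items[:k])
    return support

# Fine-tune each detector on progressively larger support sets (1- to 30-shot).
for k in (1, 5, 10, 30):
    support_set = build_k_shot_support(annotations, k)
    # detector.fine_tune(support_set)  # hypothetical call into a detector's API
    print(f"{k}-shot support set: {len(support_set)} images")
```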
CNNs have significantly advanced the analysis of cellular movements. Unfortunately, CNN-based networks suffer from information loss caused by the intrinsic characteristics of convolution operators, which degrades the performance of cell segmentation and tracking. Researchers have proposed consecutive CNNs to overcome these limitations, although these models are still in the preproduction stage. In this study, we present a novel approach that utilizes cumulative CNNs to segment and track cells in fluorescence videos. Our method incorporates the state-of-the-art Vision Transformer (ViT) and a Bayesian network to improve accuracy and performance. By leveraging the ViT architecture and the Bayesian network, we aim to mitigate information loss and enhance the precision of cell segmentation and tracking.
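To make the role of the Bayesian component more concrete, the following minimal sketch fuses per-pixel foreground probabilities (a stand-in for a ViT segmenter's per-frame output) across consecutive frames with a recursive Bayesian update. The random probability maps and the per-pixel independence assumption are illustrative only, not the authors' architecture.

```python
import numpy as np

def bayesian_temporal_update(prior, likelihood, eps=1e-6):
    """Fuse the previous frame's foreground posterior (prior) with the
    current frame's per-pixel foreground probability (likelihood)."""
    num = prior * likelihood
    den = num + (1.0 - prior) * (1.0 - likelihood)
    return num / np.maximum(den, eps)

# Toy stand-in for per-frame segmentation outputs (H x W foreground probabilities).
rng = np.random.default_rng(0)
frames = [np.clip(rng.normal(0.5, 0.2, (64, 64)), 0.0, 1.0) for _ in range(5)]

posterior = np.full((64, 64), 0.5)        # uninformative prior for the first frame
for prob_map in frames:
    posterior = bayesian_temporal_update(posterior, prob_map)

mask = posterior > 0.5                    # final cell mask after temporal fusion
print("foreground pixels:", int(mask.sum()))
```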
Otoscopy is an important procedure for the diagnosis of otitis media, allowing examiners to visually inspect a patient's eardrum. However, a traditional otoscope images the target under white light only, limiting the ability to assess the color differences and tympanum morphology that are distinguishing features in the diagnosis of otitis media. We present a smartphone-attachable trimodal otoscope head capable of spectral, autofluorescence, and photometric 3D stereo imaging. The device uses LEDs, optical fibers, and a smartphone camera to collect quantitative spectral signatures and qualitative morphological data that carry information about the biochemistry and 3D morphology of the sampled eardrum and middle ear. These data help examiners provide a precise diagnosis, while the ubiquitous connectivity and portability of the smartphone are beneficial in telemedicine applications. Finally, we collected normal, otitis media with effusion, and adhesive otitis media data and evaluated the device's capabilities using deep-learning classifiers.
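As a rough illustration of the final evaluation step, the sketch below sets up a generic three-class deep-learning classifier for the normal, otitis media with effusion, and adhesive otitis media categories. The ResNet-18 backbone, hyperparameters, and dummy batch are assumptions, since the abstract does not specify which classifier was used.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# The three diagnostic classes reported in the study.
CLASSES = ["normal", "otitis_media_with_effusion", "adhesive_otitis_media"]

# Generic backbone with a 3-way head; in practice ImageNet-pretrained
# weights would typically be loaded before fine-tuning on otoscope images.
model = resnet18()
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch standing in for otoscopy frames.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("training loss on dummy batch:", float(loss))
```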
We propose a novel deep learning algorithm, denoted Deep Optical Flow (DoF), capable of interpreting and predicting cell behaviors with high accuracy in time-lapse fluorescence images. DoF has dual pipeline networks that include 4D-Rank convolution operations: one classifies the behavior of individual cells while generating optical flow for the cells, whereas the other predicts the next few frames of the cells. DoF was verified on our own and public datasets for cell tracking, segmentation, and identification. The experimental results demonstrate that DoF outperformed other state-of-the-art methods in the analysis of cell behaviors. These results suggest that DoF has the potential to become a novel tool for better understanding cell behaviors.
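The optical-flow idea at the core of DoF can be illustrated with a classical dense flow estimate between two consecutive frames; the sketch below uses OpenCV's Farneback method on synthetic frames as a stand-in for the learned flow produced by DoF's 4D-Rank convolution pipeline.

```python
import numpy as np
import cv2

# Two synthetic grayscale frames standing in for consecutive fluorescence images.
rng = np.random.default_rng(0)
frame_prev = (rng.random((128, 128)) * 255).astype(np.uint8)
frame_next = np.roll(frame_prev, shift=2, axis=1)   # simulate rightward cell motion

# Classical dense optical flow (Farneback); a learned network such as DoF
# would replace this step with its own flow-prediction pipeline.
# Positional args: prev, next, flow, pyr_scale, levels, winsize,
#                  iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement (pixels):", float(magnitude.mean()))
```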