This PDF file contains the front matter associated with SPIE Proceedings Volume 11050, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
The goal of this VR system is to simulate a bowling game for use in muscular rehabilitation training. The virtual environment allows the user to pick up a bowling ball and knock down the pins, after which the score is updated and displayed; players can also alternate turns to compete with each other. The system is implemented on the Unity engine with SteamVR, and the VR Toolkit is employed for modeling and script development. Technical innovations include grab-and-release controllers with adjustable colliders, and a respawn detector triggered when the ball hits the back of the bowling alley. We present the specific tasks of muscular rehabilitation, the conceptualization of the VR techniques, and the detailed implementation of the system.
The authors have found a phenomenon in which the blinking speed of beta motion appears to be faster when it is viewed in peripheral vision. In this paper, we focus on beta motion arranged on a circle and experimentally investigate how it appears in peripheral vision. The experiment shows that the apparent speed increases as the retinal eccentricity increases, regardless of the annular ring size. In addition, it became clear that as the retinal eccentricity in the horizontal direction increases, the apparent shape of the beta motion tends to deviate from the annular ring shape.
Region-of-interest (ROI) imaging is considered an effective method to reduce the exposure dose. We propose ROI-based beam modulation acquisition to restore the information outside of the ROI. The CT system and a 3D voxelized abdominal phantom were simulated using MATLAB R2017b. A total of 360 projections were obtained and used for CT reconstruction with a filtered back projection (FBP) algorithm. Beam modulation CT images were reconstructed using 288 truncated and 72 full projections. An interpolation method and our proposed method based on a projection onto convex sets (POCS) algorithm were used to correct the truncated projections. The image quality of three ROIs was evaluated using the structural similarity index measure (SSIM). The image reconstructed from the beam modulation acquisition showed a much higher SSIM value for the external information than that obtained by the ROI-only scan. The proposed POCS-based method provided the best image quality in beam modulation acquisition. In conclusion, we have verified the possibility of restoring information external to the ROI using beam modulation acquisition.
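The abstract does not give the SSIM implementation; as a minimal sketch, a simplified single-scale global SSIM (the standard metric uses local windows and averaging, omitted here for brevity) applied to a synthetic ROI could look like:

```python
import numpy as np

def ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified single-scale, global SSIM between two same-sized ROIs."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                          # stand-in reference ROI
degraded = ref + 0.1 * rng.normal(size=ref.shape)   # stand-in truncation/noise error
print(ssim(ref, ref))        # identical ROIs score (numerically) 1.0
```

A perfectly restored region scores 1.0; any residual truncation error pulls the score below 1.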
Polychromatic X-rays in computed tomography (CT) can cause metal artifacts and beam-hardening artifacts, which are limiting factors in the detection and diagnosis of lesions. Several groups have introduced virtual monochromatic imaging (VMI) techniques using dual-source CT to reduce these artifacts. However, a dual-source system with two exposures can increase the patient dose. A photon-counting detector with a single exposure can replace a dual-source system. In this study, we investigated the feasibility of VMI in a photon-counting system. A prototype photon-counting CT system with a 64-line-pixel cadmium zinc telluride (CZT) photon-counting detector was used. The source-to-detector distance and the source-to-center-of-rotation distance were 1,400 and 1,200 mm, respectively. Energy bins were set at 23-32, 33-42, 43-52, 53-62, and 63-90 keV. For comparison, an integrating-mode image was obtained as the sum of the five energy bins, which is assumed to approximate a polychromatic X-ray acquisition. Two copper (Cu) rods were inserted into a PMMA cylinder phantom. As a result, the VMI effectively removed metal artifacts. Noise and signal-to-noise ratio (SNR) were evaluated, and the optimal VMI was found at 77 keV. Our results indicate that VMI in the prototype photon-counting system effectively eliminates metal artifacts and provides better image quality than the integrating mode over 23-90 keV.
A multi-focused plenoptic camera is a powerful device that captures a light field (LF), which can be interpreted as a set of dense multi-view images. The camera can potentially provide LFs with high spatial/view resolution and a deep depth of field. To extract multi-view images, a sophisticated rendering process is needed because of the complicated optical system of such cameras. However, there is little research on this topic, and the only available rendering software, to the best of our knowledge, does not work well for some camera configurations. We therefore propose an improved rendering method and release our rendering software. Our software extracts multi-view images from a multi-focused plenoptic camera with higher quality than the previous software and works for various camera configurations.
We have developed highly viscous phantoms that include ultrasound scatterers in polyacrylamide gels, measurable by both magnetic resonance elastography and ultrasound elastography. The purpose of this study is to evaluate whether ultrasound-based shear-wave elastography (SWE) can accurately measure elasticity in a highly viscous, embedded phantom. The reference values (RVs) of the embedded parts were measured by SWE in a homogeneous phantom with the same composition as the embedded part; the RV of the background part was measured in a deeper area of the embedded phantom. Shear-wave speeds in the embedded hard parts were equivalent to the reference values. The embedded part appeared larger on the velocity-mode image than on the B-mode image. This phantom has potential as a quality-control phantom that mimics living tissue.
The beam-hardening effect is one of the main causes of the metal artifacts that degrade CT image quality. With polychromatic X-rays, it occurs noticeably when scanning metallic materials whose energy-dependent attenuation coefficients change strongly. This violates the assumption, made in CT reconstruction, of a fixed attenuation coefficient under monochromatic X-rays, which leads to beam-hardening artifacts such as streaking and cupping. Numerous methods have been studied to reduce beam-hardening artifacts; most require optimization based on iterative reconstruction, which is time-consuming. This study aims at a method that is efficient in processing time while providing acceptable correction of beam-hardening artifacts. To this end, the attenuation-coefficient error due to beam hardening is modeled as a function of the length of the X-ray path through the metallic material, and the model is approximated by a linear combination of four basis functions of that length. The linearity is preserved in the reconstructed image, so the coefficient of each basis function can be obtained by minimizing the variance of the homogeneous metal region in the image. For evaluation, a phantom including three titanium rods was scanned by a cone-beam CT system (Ray, South Korea) and the images were reconstructed by the standard FDK algorithm. The results showed that the proposed method is superior in terms of speed while delivering acceptable beam-hardening correction compared to recent methods. The proposed model should be effective in applications where processing speed is important for beam-hardening correction.
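The coefficient fit described above can be illustrated with a toy model. The polynomial basis functions of the path length below are hypothetical stand-ins (the paper's exact basis is not specified); the key point is that minimizing the variance of a homogeneous metal region reduces to a least-squares problem on mean-centered data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                          # pixels in the homogeneous metal region
L = rng.uniform(0.0, 3.0, n)     # hypothetical metal path length per pixel
basis = np.stack([L, L**2, L**3, L**4], axis=1)   # four length-based basis functions
true_c = np.array([0.20, -0.05, 0.01, -0.001])    # invented error coefficients
# Homogeneous true value 1.0, plus beam-hardening error and a little noise:
img = 1.0 + basis @ true_c + 0.001 * rng.normal(size=n)

# Minimizing Var(img - basis @ c) is least squares on mean-centered data:
Bc = basis - basis.mean(axis=0)
yc = img - img.mean()
c, *_ = np.linalg.lstsq(Bc, yc, rcond=None)
corrected = img - basis @ c      # variance of the metal region collapses
```

After the fit, the corrected metal region is nearly uniform again, which is exactly the criterion the method optimizes.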
Microcirculation plays an important role in maintaining life, and observing it is considered important for understanding disease mechanisms and diagnosing diseases. Sidestream dark-field (SDF) imaging is one method of observing the microcirculation. However, SDF imaging has several problems, for instance artifacts caused by pressure and heat: because SDF imaging requires direct contact, the measurement point is placed under pressure, which may affect hemodynamics. We therefore constructed a non-contact setup. Furthermore, the microcirculation is known to be impaired at the early stage of sepsis. To investigate the relationship between the flow of red blood cells (RBCs) and septic shock, we used the setup to observe septic-model rats and sham rats, and estimated RBC flow by calculating blood velocity from the acquired video. The sham rats showed only a slight change in lactate value during the observation, and their blood velocity improved compared with that measured just after abdominal closure. In the septic-model rats, however, the lactate value increased and the blood velocity decreased. This finding suggests that microcirculatory alteration may be a sign of sepsis and septic-shock progression.
Many studies have shown that iterative reconstruction (IR) algorithms make it possible to lower the tube current and/or voltage in CT imaging without a major loss of image quality. However, few studies have examined the acquisition conditions under which low-dose CT images reconstructed with an IR algorithm achieve the same image quality as routine-dose images reconstructed with FBP. The aim of this study was to investigate the image quality of low-dose CT images obtained with an IR algorithm. Images were reconstructed with filtered back projection (FBP) and the iDose4 hybrid IR algorithm (Philips Healthcare, Cleveland, OH). CTDIvol for the routine and low-dose protocols was 5.2 mGy and 2 mGy, respectively. Images were quantitatively assessed through Hounsfield units (HU), the noise power spectrum (NPS), and the contrast-to-noise ratio (CNR). The results showed that the image quality of the iDose4 algorithm was better than that of the FBP algorithm. Under the same low-dose protocol, the IR algorithm provided improved imaging performance compared with the FBP algorithm, demonstrating that IR has the potential to maintain or improve image quality at a much lower radiation dose than FBP at routine dose.
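Of the three metrics, CNR is the simplest to state concretely. A minimal sketch (synthetic uniform phantom with an inserted contrast object; the HU numbers are illustrative only):

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio between an ROI and a background region."""
    roi, bg = image[roi_mask], image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(1)
img = rng.normal(0.0, 10.0, size=(128, 128))   # background with 10 HU noise
img[40:60, 40:60] += 50.0                      # inserted 50 HU contrast object

roi = np.zeros(img.shape, dtype=bool); roi[40:60, 40:60] = True
bg = np.zeros(img.shape, dtype=bool); bg[80:120, 80:120] = True
print(cnr(img, roi, bg))                       # roughly 50/10 = 5
```

Lower noise at the same contrast (as an IR algorithm provides) raises CNR directly, which is why it serves as a dose-reduction figure of merit.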
Improving the quality of low-dose CT images while preserving diagnostic features is integral to lowering radiation exposure and its potential risks. Noise reduction methods using deep neural networks have been developed and have shown impressive performance, but they still suffer from residual noise, blurring of high-frequency edges, and artifact occurrence. To increase noise-reduction performance while addressing these issues, we implemented a block-based REDCNN model and applied a patch-based Landweber-type iteration to the images produced by the REDCNN model. The model successfully smooths noise on CT images corrupted by Gaussian and Poisson noise, and outperforms noise reduction by other state-of-the-art deep neural network models. We also tested the effect of repeating the iterative reconstruction while varying the step size and the number of iterations.
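A Landweber-type iteration of the kind applied after the network can be sketched on a tiny linear system; the 2×2 operator below is a toy stand-in, not the CT system matrix:

```python
import numpy as np

def landweber(A, y, step, n_iter, x0=None):
    """Landweber iteration: x <- x + step * A^T (y - A x).
    Converges when 0 < step < 2 / sigma_max(A^T A)."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)
    return x

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # toy forward operator
x_true = np.array([1.0, -2.0])
y = A @ x_true                   # "measurements"
x = landweber(A, y, step=0.2, n_iter=500)
```

The step size and iteration count are exactly the two knobs the abstract reports sweeping.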
Radiomics is attracting research interest for characterizing the tumor phenotype as well as for predicting patient outcome. However, many radiomic features are known to be affected by a multitude of variability sources, such as CT acquisition parameters, which may lead to false discoveries if used unknowingly. To avoid such pitfalls, the appropriate selection of robust features is an essential task in radiomic studies. We investigated the variability of CT imaging features previously reported as radiomic markers in non-small cell lung cancer (NSCLC). We scanned a standardized phantom with a 64-slice multi-detector CT scanner under various scan conditions and extracted forty-seven radiomic features, including two texture features and first-order statistics. A feature variability index was measured to evaluate feature robustness with respect to the scan parameters. Only 32% of the features were little affected by the reconstruction kernel. Our study revealed a high variability of CT image features depending on technical parameters. These characteristics should be considered in the feature-extraction procedure when different protocols are used in the patient dataset: use of the same CT protocol is preferred, and otherwise kernel-normalization techniques are necessary for radiomic studies.
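The abstract does not define its variability index; the coefficient of variation across scan conditions is a common choice and is used here as a stand-in, with invented feature values (not the study's data):

```python
import numpy as np

# Hypothetical values of two features, each measured on the same phantom
# under six scan conditions (illustrative numbers only).
features = {
    "mean_HU": np.array([40.1, 39.8, 40.3, 40.0, 39.9, 40.2]),  # kernel-stable
    "entropy": np.array([4.1, 5.9, 3.6, 6.3, 5.0, 4.4]),        # kernel-sensitive
}

def variability_index(values):
    """Coefficient of variation (%) across scan conditions."""
    return 100.0 * values.std(ddof=1) / abs(values.mean())

# Keep only features whose variation across protocols stays under 5%:
robust = [name for name, v in features.items() if variability_index(v) < 5.0]
print(robust)   # ['mean_HU']
```

Screening features this way before modeling is one concrete realization of the "appropriate selection of robust features" the abstract calls for.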
In this study, we present a deep learning approach for denoising ultra-low-dose chest CT that combines low-dose simulation and a convolutional neural network (CNN). A total of 18,456 anonymized regular-dose chest CT images were used to train the CNN. The training CT images were fed into the low-dose simulation tool to generate paired sets of simulated low-dose CT and synthetic low-dose noise. A modified U-net with 4×4 kernels and five layers was trained on these paired datasets to predict the low-dose noise from a given low-dose CT image. Ten independent ultra-low-dose chest CT scans at 120 kVp and 5 mAs were used to test the denoising performance of the trained U-net. Denoised CT images were obtained by subtracting the predicted noise image from the ultra-low-dose chest CT images. We evaluated image quality by measuring the noise standard deviation of soft tissue and by visual assessment of the bronchial wall, lung fissures, and soft tissue. For comparison, image quality was assessed on FBP, VEO, and deep learning-denoised FBP images. The visual assessment scores on a 4-point scale were 1.0, 3.4, and 4.0 for the FBP, VEO, and deep learning-denoised FBP images, respectively, and the image noise of soft tissue was 101±28 HU, 20±5 HU, and 28±10 HU, respectively.
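The residual-subtraction data flow (denoised = low-dose minus predicted noise) can be illustrated with a stand-in predictor; here a crude local-mean residual plays the role of the trained U-net, purely to show the pairing and subtraction steps, not the network itself:

```python
import numpy as np

rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # stand-in regular-dose slice
noise = 0.1 * rng.normal(size=clean.shape)            # synthetic low-dose noise
low_dose = clean + noise                              # paired training input

def predict_noise(img, k=5):
    """Stand-in for the trained U-net: estimate noise as the residual
    from a k-by-k local mean (illustration only)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            smooth[i, j] = p[i:i + k, j:j + k].mean()
    return img - smooth

denoised = low_dose - predict_noise(low_dose)   # the paper's subtraction step
err_before = np.abs(low_dose - clean).mean()
err_after = np.abs(denoised - clean).mean()
```

Learning the noise rather than the clean image is a common design choice for this architecture: the residual has a simpler distribution than the anatomy itself.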
When super-resolution processing is applied to facial images, wrinkles and stains are emphasized, so super-resolution processing is not suitable on skin regions. In a previous study, we therefore proposed a method that performs facial correction on skin parts. However, we confirmed that image quality deteriorated depending on the accuracy of skin color detection. In this paper, we therefore propose a novel skin color detection method, and experimental results demonstrate that a high-quality super-resolution image is obtained for a facial image.
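The paper's new skin-color detection method is not detailed in the abstract; as a baseline of the kind such a method would improve upon, a classic RGB-rule skin heuristic (a widely used rule of thumb, not the authors' method) looks like:

```python
import numpy as np

def skin_mask_rgb(img):
    """Classic RGB-rule skin heuristic (baseline, not the proposed method):
    bright, reddish pixels with R > G > B and sufficient R-B contrast."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b) & ((r - b) > 15)

# Two 1x1 "images": a skin-like pixel and a blue pixel
skin = np.array([[[200, 140, 120]]], dtype=np.uint8)
blue = np.array([[[20, 40, 200]]], dtype=np.uint8)
print(skin_mask_rgb(skin)[0, 0], skin_mask_rgb(blue)[0, 0])  # True False
```

Misclassifications by such simple rules are precisely what causes the quality degradation the previous study observed.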
In this study, we designed a digital hardware circuit for a field-programmable gate array (FPGA) to provide an effective contrast-improvement algorithm for dichromats. The proposed method employs the Craik-O'Brien (C-O) effect, an optical illusion in which subjective contrast is created from contour information. In the proposed method, contrast modification is conducted only around the contours of objects to apply the C-O effect for dichromats. To extract the contour information of objects, a T-model filter, which requires only a one-line buffer, is introduced. The proposed method realizes the C-O effect without using dividers or multipliers, and is therefore relatively simple to implement on an FPGA. The effectiveness and validity of the proposed method were evaluated through software experiments and logic simulation.
In this study, we aim to separate ghost artifacts from limited-angle CT images by using Robust Principal Component Analysis (RPCA) and thus improve the reconstructed images. Conventionally, RPCA separates a foreground from a background that is assumed to be static or quasi-static. When applied to limited-angle CT images, the artifacts are treated as the quasi-static background and the anatomical structures as the foreground, and RPCA is performed to segment the two. Finally, different post-reconstruction denoising parameters are applied to the foreground and background to remove the artifacts effectively.
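RPCA itself can be sketched compactly as principal component pursuit solved by ADMM (a standard formulation with the penalty parameter suggested by Candès et al.; a sketch, not the paper's exact pipeline), applied here to a synthetic low-rank-plus-sparse matrix standing in for the stacked images:

```python
import numpy as np

def soft(x, t):
    """Entrywise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca(M, n_iter=500):
    """Robust PCA via ADMM: split M into low-rank L plus sparse S."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    Y = np.zeros_like(M)   # dual variable
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt        # singular value thresholding
        S = soft(M - L + Y / mu, lam / mu)        # sparse shrinkage
        Y += mu * (M - L - S)                     # dual update
    return L, S

rng = np.random.default_rng(2)
L0 = np.outer(rng.normal(size=60), rng.normal(size=60))   # quasi-static "artifact"
S0 = np.zeros((60, 60))
idx = rng.choice(3600, size=100, replace=False)
S0.flat[idx] = 3.0 * rng.choice([-1, 1], size=100)        # sparse "anatomy"
L, S = rpca(L0 + S0)
```

In the paper's framing, L would collect the quasi-static artifact pattern and S the anatomical foreground, which are then denoised with different parameters.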
In this paper, a phase retrieval method using the Hilbert transform (HT) and a low-rank method is proposed to obtain differential phase contrast (DPC) images. The method has the following advantages: 1. A single-grating system can be implemented without mechanical movement of the grating. 2. The complex computation of phase retrieval by the fast Fourier transform (FFT) method can be avoided. 3. Noise rejection can be handled by the low-rank method, owing to the fringes obtained from the various energy bins. Specifically, the low-rank method is a singular value decomposition (SVD) exploiting the rank-one property. Phase retrieval by the HT method and noise filtering by the low-rank method were performed to validate the proposed method. The proposed method provided a clear boundary between the sample area and the air area; in the high-energy image, this boundary was sharpened by the low-rank method. Moreover, the profile of the DPC image obtained by HT had a symmetrical form similar to the theoretical profile.
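The rank-one filtering step can be sketched as follows: if the fringe pattern is common to all energy bins up to a per-bin gain, stacking the bin profiles as rows gives a rank-one matrix plus noise, and truncating the SVD to rank one rejects the noise. The fringe shape and gains below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 4 * np.pi, 200)
fringe = np.sin(x)                         # hypothetical ideal fringe profile
gains = np.array([1.0, 1.2, 1.4, 1.6, 1.8])  # per-energy-bin gains (invented)
M = np.stack([g * fringe + 0.3 * rng.normal(size=x.size) for g in gains])

# Rank-one SVD truncation: keep only the dominant singular component.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
denoised = s[0] * np.outer(U[:, 0], Vt[0])

err_raw = np.abs(M[0] - fringe).mean()        # noisy bin vs ideal fringe
err_lr = np.abs(denoised[0] - fringe).mean()  # rank-one filtered vs ideal
```

Because the five bins share one underlying fringe, the rank-one component pools their information and each recovered row is cleaner than the corresponding raw bin.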
This study investigated the physiological effects of ginger aroma, based on salivary amylase activity (SAA) and heart rate under stress conditions. For this purpose, we divided the study's 50 subjects into two groups: a scented-environment group and an unscented-environment group, and conducted experiments in which each group was asked to solve continuous calculation tasks. We measured the subjects' SAA and heart rate while they were performing a calculation task for 15 min. To elucidate the influence of individual preferences for ginger aroma, we allocated the subjects in the scented-environment group to two sub-groups: a favorable-aroma (FA) group and an unfavorable-aroma (UA) group. The results suggested that ginger aroma had the effect of reducing stress, except for immediately after smelling the aroma and when approaching the end of the calculation task. Furthermore, we found that the heart rates of the FA group were consistently lower than those of the UA and unscented-environment groups. It was inferred that ginger aroma has a sedative effect for those who like the aroma.
Effective segmentation of abdominal organs on CT images is necessary not only for quantitative analysis but also for dose simulation in radiation oncology. However, manual or semi-automatic segmentation is tedious and subject to inter- and intra-observer variability. To overcome these shortcomings, a fully automatic segmentation is required. In this paper, we propose a deep learning-based fully automated method to segment multiple organs from abdominal CT images and evaluate its performance on a clinical dataset. A total of 120 cases were used for training and testing. The DSC values on the 20 test cases were 0.945±0.016, 0.836±0.084, 0.912±0.052, and 0.886±0.068 for the liver, stomach, right kidney, and left kidney, respectively.
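The DSC (Dice similarity coefficient) reported above is simple to compute from binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy example: automatic mask (36 px) vs manual mask (30 px), overlap 30 px
auto = np.zeros((10, 10), dtype=bool); auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool); manual[3:8, 2:8] = True
print(dice(auto, manual))   # 2*30/(36+30) = 0.909...
```

A DSC of 1.0 means perfect overlap; the liver value of 0.945 above therefore indicates near-complete agreement with the reference masks.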
Additive manufacturing creates the opportunity for freedom of design, as it allows parts to be manufactured where conventional methods would fail. Printing methods such as selective laser melting (SLM), electron beam melting (EBM) and others usually produce micro-porosity [1]. These micro-defects can have a major impact on the functionality and lifetime of the components. X-ray computed tomography (XCT) is an image acquisition technique that allows a complete three-dimensional capture of an object, including its internal features and structures. Typically, an XCT system captures many digital 2D radiography images as the sample is rotated; a computed tomography algorithm then post-processes the 2D images into a reconstructed 3D digital image that represents the scanned part. This technology is an established method of non-destructive evaluation (NDE) for detecting cracks and large porosity in additively manufactured components. However, micro-defects and cracks are known to be difficult to detect: the lack of distinction between defects and X-ray artefacts due to scattering and beam hardening makes it impossible for simple intensity-based image processing algorithms, such as thresholding, to reliably detect and quantify a defect, especially as the defect size approaches the imaging resolution. Here, an approach to improving micro-porosity and crack detection through the use of a random forest classifier was studied and optimized to detect defects very close to the voxel size. To achieve this, trainable segmentation with a random forest classifier was used with three pre-defined classes (Pore, Material, and Air). The random forest classifier is a general ensemble learning method that can be used for image classification: it builds a set of decision trees from randomly selected subsets of the training set and aggregates the trees' predicted class probabilities to decide the outcome. A reference artefact was designed to digitally simulate a CT scan of internal micro-holes, purposefully cut inside the material to mimic the presence of micro-porosity. Using a computer-aided design (CAD) model as input to the aRTist simulation software (from the Federal Institute for Materials Research and Testing, BAM), simulated XCT data were obtained to perform supervised training. Subsequently, the performance of this approach was verified against results from a commercial software package.
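The three-class trainable segmentation can be sketched with scikit-learn on per-voxel features; the feature choices (grey value, local standard deviation) and class statistics below are invented stand-ins for the simulated XCT training data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n = 300
# Hypothetical per-voxel feature vectors [grey value, local std] per class:
air  = np.column_stack([rng.normal(0.05, 0.02, n), rng.normal(0.01, 0.005, n)])
mat  = np.column_stack([rng.normal(0.80, 0.05, n), rng.normal(0.02, 0.005, n)])
pore = np.column_stack([rng.normal(0.45, 0.08, n), rng.normal(0.10, 0.02, n)])
X = np.vstack([air, mat, pore])
y = np.repeat([0, 1, 2], n)   # 0 = Air, 1 = Material, 2 = Pore

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

In practice the features would come from filter banks over the voxel neighborhood, and the trained forest would be applied voxel-by-voxel to the reconstructed volume.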
We propose an automatic segmentation of the meniscus from knee MR images using multi-atlas segmentation and patch-based edge classification. To prevent registration being dominated by large tissues, the meniscus is targeted using segmented bone and articular cartilage information. To segment the meniscus robustly despite its large shape variations, and to remove leakage into the collateral ligaments, the meniscus is segmented using shape- and intensity-based locally-weighted voting (LWV) and patch-based edge classification. Experimental results show that the Dice similarity coefficient of the proposed method, evaluated against two manual outlining results, is over 80% on average and is improved compared with multi-atlas-based LWV.
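The voting step of LWV can be sketched per voxel: each registered atlas contributes its label with a weight based on intensity similarity to the target voxel (an intensity-only Gaussian weight here; the paper also uses shape terms, omitted in this sketch):

```python
import numpy as np

def locally_weighted_vote(atlas_labels, atlas_intensities, target_intensity, sigma=10.0):
    """Pick the label with the largest intensity-similarity-weighted vote."""
    ints = np.asarray(atlas_intensities, dtype=float)
    w = np.exp(-((ints - target_intensity) ** 2) / (2.0 * sigma ** 2))
    votes = {}
    for lab, wi in zip(atlas_labels, w):
        votes[lab] = votes.get(lab, 0.0) + wi
    return max(votes, key=votes.get)

# Two atlases with intensities close to the target voxel outvote a dissimilar one:
print(locally_weighted_vote([1, 1, 0], [100.0, 102.0, 160.0], 101.0))   # 1
```

Because weights, not raw counts, decide the vote, a single well-matched atlas can override several poorly matched ones, which is what makes LWV robust to registration errors.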
We propose an automatic segmentation of the orbital bone in 3D maxillofacial CT images with a double-bone-segmentation network (DBS-Net) for reconstruction of the orbital bone. Because thin bone has low intensity and an intensity similar to the surrounding tissues, it tends to be under-segmented. To improve the segmentation of thin bone, we divide the bone into cortical and thin bone and apply a single-bone-segmentation network to each. Experimental results show that our DBS-Net improves the segmentation of the orbital bone, especially for the thin bone of the orbital medial wall and the orbital floor.
The malignancy rate of a ground-glass nodule (GGN) differs according to the presence and size of a solid component. It is therefore important to differentiate part-solid GGNs with variably sized solid components from pure GGNs. In this paper, we propose a method of classifying GGNs according to the presence and size of the solid component using multiple 2.5-dimensional deep CNNs. First, to consider not only intensity but also texture and shape information, we propose enhanced input images obtained by image augmentation and background removal. Second, we propose GGN-Net, which classifies GGNs in chest CT images into three classes using multiple input images. Finally, we comparatively evaluate the classification performance for different types of input image. In experiments, the accuracy of the proposed method using multiple input images was the highest at 82.76%, which was 10.35%, 13.79%, and 6.90% higher than that obtained using a single input image (intensity-based, texture-enhanced, and shape-enhanced, respectively).
Histological subtypes of non-small cell lung cancer (NSCLC), i.e., adenocarcinoma (ADN) and squamous cell carcinoma (SCC), identified from a single biopsy occasionally differ from those identified in the actual surgical resection. To increase classification accuracy, we aim to develop an automated approach for classifying the histological subtypes of NSCLC using Gaussian, linear, and polynomial support vector machines (SVMs) with radiomic features. Classification models based on Gaussian, linear, and polynomial SVMs constructed with radiomic features achieved areas under the curve of 0.7542, 0.7522, and 0.7531, respectively. Histological subtypes of NSCLC could thus be classified into ADN and SCC using a Gaussian SVM with radiomic features.
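The three kernels compared above can be written out explicitly. The following is a minimal NumPy sketch; the hyperparameters (`gamma`, `coef0`, `degree`) are illustrative defaults, not the values tuned in the study.

```python
import numpy as np

def linear_kernel(x, y):
    """Linear kernel: plain inner product of two feature vectors."""
    return np.dot(x, y)

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    """Polynomial kernel: shifted inner product raised to a power."""
    return (np.dot(x, y) + coef0) ** degree

def gaussian_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: similarity decays with squared distance."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])
print(linear_kernel(x, y))      # 1.0
print(polynomial_kernel(x, y))  # (1 + 1)^3 = 8.0
print(gaussian_kernel(x, y))    # exp(-0.5) ≈ 0.607
```

In practice these would be passed to an SVM solver (e.g. via a `kernel` parameter) rather than evaluated by hand; the point here is only the functional form each model assumes.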
The segmentation of medical images for medical anatomy plays an important role in various applications, so the study of medical image processing is important and necessary. Because of the presence of noise and the complexity of anatomical structure, existing methods have various shortcomings and their performance is not ideal. In this study, we propose a new method based on a back-propagation (BP) neural network and the AdaBoost algorithm. The BP neural network we created has a 1-7-1 structure, and we trained it with the gravitational search algorithm (here, segmented images obtained with the classic fuzzy c-means algorithm serve as the ideal output data). On this basis, we established and trained 10 BP neural networks (weak classifiers) using 10 different groups of data. Subsequently, we adopted the AdaBoost algorithm to obtain the weight of each network, making up a new BP-AdaBoost system for image segmentation. In the experiment, we used one group of datasets: brain MRI. A comparison with conventional segmentation methods, through both subjective observation and objective evaluation indexes, reveals that the proposed method achieves better results on brain image segmentation.
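The AdaBoost weight assigned to each weak classifier follows the standard formula, sketched below; this shows how the weighting works in general, not the authors' exact training code.

```python
import numpy as np

def adaboost_weight(error):
    """Standard AdaBoost weight for a weak classifier:
    alpha = 0.5 * ln((1 - error) / error), where `error` is the
    classifier's weighted error rate on the training set."""
    eps = np.clip(error, 1e-10, 1 - 1e-10)  # avoid log(0) / division by zero
    return 0.5 * np.log((1 - eps) / eps)

# A weak classifier that is right 70% of the time gets a positive weight;
# one at chance level (50%) contributes nothing to the ensemble.
print(adaboost_weight(0.3))  # ≈ 0.4236
print(adaboost_weight(0.5))  # 0.0
```

The final ensemble prediction is then a weighted vote of the ten networks, with larger weights given to networks with lower weighted error.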
In this study, we performed a deep learning analysis for the automatic segmentation of the vessel and lumen in intravascular ultrasound (IVUS) images. Extracting vascular boundaries from IVUS images is essential for the quantitative analysis of cardiovascular diseases. We applied a fully convolutional network (FCN) based semantic segmentation technique with transfer learning. To exploit the continuity of the IVUS sequence, we filled the RGB channels with the central image and its neighboring images at a given displacement, and trained a separate FCN model for each displacement. In our experiments, we obtained a Dice similarity coefficient (DSC) of 0.97 ± 0.03 for the vessel and 0.91 ± 0.09 for the lumen. Given its robustness and accuracy, this method is highly promising for use in clinical practice.
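The Dice similarity coefficient used for evaluation here (and in several of the following papers) has a simple closed form. A minimal NumPy implementation, with illustrative mask names:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Example: a 2-pixel prediction against a 4-pixel ground truth, overlapping in 2
a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 1, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (2+4) ≈ 0.667
```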
In total hip arthroplasty, analysis of postoperative images is important for evaluating the surgical outcome. Since CT is the most prevalent modality in orthopedic surgery, we aimed at the analysis of CT images. The challenge in this work is the metal artifact in postoperative CT caused by the metallic implant, which reduces segmentation accuracy, especially in the vicinity of the implant. Our goal was to develop an automated segmentation method for the muscles in postoperative CT images. In this paper, we propose a method that combines Normalized Metal Artifact Reduction (NMAR), one of the state-of-the-art metal artifact reduction methods, with CNN-based segmentation using the U-Net architecture. We conducted experiments using simulated and real images of the lower extremity to evaluate the segmentation accuracy for 19 muscles contaminated with metal artifact. The training dataset consists of 20 CTs that were manually traced by an expert surgeon. In the simulation study, the proposed method improved the average symmetric surface distance (ASD) from 1.85 ± 1.63 mm to 1.24 ± 0.67 mm (mean ± std). The real-image study, using two CTs with ground truth for the gluteus maximus, medius, and minimus muscles, showed a reduction of ASD from 1.67 ± 0.40 mm to 1.52 ± 0.47 mm. Our future work includes an end-to-end convolutional neural network for metal artifact reduction and musculoskeletal segmentation, and establishing a ground-truth dataset by performing non-rigid registration between postoperative and preoperative CTs of the same patient.
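The ASD metric reported above averages nearest-neighbour surface distances in both directions. A brute-force NumPy sketch (real pipelines would use a KD-tree or distance transform for speed; the point sets here are illustrative):

```python
import numpy as np

def asd(surface_a, surface_b):
    """Average symmetric surface distance between two point sets of
    shape (N, 3) and (M, 3): the mean of every point's distance to its
    nearest neighbour on the other surface, pooled over both directions."""
    d = np.linalg.norm(surface_a[:, None, :] - surface_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest B point for each A point
    b_to_a = d.min(axis=0)  # nearest A point for each B point
    return (a_to_b.sum() + b_to_a.sum()) / (len(a_to_b) + len(b_to_a))

# Two parallel segments 1 mm apart give an ASD of exactly 1.0
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
print(asd(a, b))  # 1.0
```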
In this study, a new computer-aided system was proposed to automatically reconstruct the spine model. Bi-planar EOS X-ray imaging was adopted as the scanning technology; it simultaneously captures bi-planar X-ray images by slot-scanning the whole body at ultra-low radiation doses. High-quality, high-contrast anteroposterior (AP) and lateral (LAT) X-ray images are acquired during the scan, and these two radiographs enable a precise three-dimensional reconstruction of the vertebrae, pelvis, and other parts of the skeletal system. To overcome the time-consuming nature of spine reconstruction with the EOS system, a generative adversarial network (GAN), consisting of a generator and a discriminator and trained with an unsupervised learning approach, was applied to reconstruct the entire spine model. GAN models have already been adopted for transforming 2D images into 3D scenes. Our approach therefore represents a potential alternative for EOS reconstruction while maintaining clinically acceptable diagnostic accuracy.
In this study, we report the speed of sound (SoS) of sliced rat organs measured with multi-frequency ultrasound (80 and 250 MHz), analyzed from radiofrequency (RF) echo signals acquired by our self-made scanning acoustic microscopy (SAM) system. The frequency dependence of the SoS was evaluated with an analysis method involving filtering that accounts for the spatial resolution at each frequency.
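As a simplified illustration of the underlying measurement (not the authors' filtering-based analysis), a pulse-echo SoS estimate follows from the round-trip time of flight through a slice of known thickness:

```python
def speed_of_sound(thickness_m, round_trip_time_s):
    """Pulse-echo speed of sound: the pulse traverses the slice twice,
    so c = 2 * d / t for thickness d and round-trip time t."""
    return 2.0 * thickness_m / round_trip_time_s

# A 10 µm slice with a 13 ns round trip gives roughly the SoS of soft tissue
print(speed_of_sound(10e-6, 13e-9))  # ≈ 1538 m/s
```

Actual SAM analysis additionally compares the sample echo against a reference echo through the coupling medium, which this sketch omits.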
The detection and evaluation of the shape of the liver from abdominal computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning, such as radiation therapy. However, liver segmentation still presents many challenges, such as ambiguous boundaries, heterogeneous appearance, and highly varied shapes. To address these difficulties, we developed an automatic liver segmentation model based on a 3D U-Net. First, some preprocessing steps were applied to improve the performance of our protocol, and an approximate liver map was generated by calculating the gradient of the CT images. Areas with a high probability of being liver were selected as the training set to ensure data balance. Then, a deep learning U-Net was trained on the processed data. Finally, post-processing methods, including k-means clustering and morphology algorithms, were applied. Our protocol achieved a high structural similarity index (SSIM), Dice score coefficient, and peak signal-to-noise ratio (PSNR), demonstrating the potential clinical applicability of the proposed approach.
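The gradient-based "approximate liver map" step can be sketched as a per-pixel gradient magnitude: homogeneous liver parenchyma shows low gradient, while boundaries show high gradient. This is an illustrative approximation, not the authors' exact recipe.

```python
import numpy as np

def gradient_magnitude(ct_slice):
    """Per-pixel gradient magnitude of a 2D CT slice; low-gradient areas
    are candidates for homogeneous parenchyma such as the liver."""
    gy, gx = np.gradient(ct_slice.astype(float))  # finite differences
    return np.sqrt(gx ** 2 + gy ** 2)

# A flat region has zero gradient; a vertical intensity edge shows up
# as a band of high gradient magnitude.
img = np.zeros((4, 4))
img[:, 2:] = 100.0
print(gradient_magnitude(img))
```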
The burden on doctors performing chest X-ray (CXR) examinations has increased as the number of X-ray images grows. Furthermore, since diagnosis is based on their experience and subjective judgment, misdiagnoses may occur. We therefore investigated computer-aided diagnosis (CAD). In this study, we detected pulmonary nodules using R-CNN (Region with Convolutional Neural Network) [1], a deep learning method. First, we created a CNN (convolutional neural network) that classifies image patches into nodule and non-nodule opacities. Next, we detected object candidate regions from the chest X-ray images with Selective Search [2] and applied the CNN to the candidate regions to classify them and estimate the precise position of the object. Thus, we propose a method to detect pulmonary nodules from chest X-ray images.
Skeletal muscle exists throughout the whole body and can be observed in many cross sections in various tomographic images. Skeletal muscle atrophy arises from aging and disease, and the abnormality is difficult to distinguish visually. In addition, although skeletal muscle analysis requires a technique for accurate site-specific measurement, this has only been realized in limited regions. We previously realized automatic site-specific recognition of skeletal muscle from whole-body CT images using model-based methods, and three-dimensional texture analysis revealed imaging features with statistically significant differences between amyotrophic lateral sclerosis (ALS) and other muscular diseases accompanied by atrophy. In recent years, deep learning techniques have also been used in the field of computer-aided diagnosis. Therefore, in this initial study, we performed automatic classification of amyotrophic diseases using deep learning on the upper-extremity and lower-limb regions. The classification accuracy was highest in the right forearm, with a maximum of 0.960 (0.903 on average). In the future, methods for differentiating more kinds of muscular atrophy and the clinical application of ALS detection by analyzing muscle regions must be considered.
Gross tumor volume (GTV) regions of lung tumors should be determined with repeatability and reproducibility on planning computed tomography (CT) in radiation treatment planning, to reduce intra- and inter-observer variations of the GTV regions. We have therefore attempted to develop an automated segmentation framework for GTV regions on planning CT images using dense V-Net deep learning (DenseVDL). The Dice similarity coefficient (DSC) was used to evaluate the GTV regions extracted by the DenseVDL network. The proposed framework achieved an average 2D-DSC of 0.73 and 3D-DSC of 0.76 over sixteen cases, and may be useful for assisting in radiation treatment planning for lung cancer.
Lung cancer is a leading cause of death worldwide, and about 85% of lung cancers are non-small cell lung cancer (NSCLC). The staging of lymph nodes in NSCLC patients is extremely important because each stage requires a different treatment. 18F-2-fluoro-2-deoxy-d-glucose (FDG) positron emission tomography / computed tomography (PET/CT) is the gold standard for lymph node metastasis staging in NSCLC, but the accuracy of discriminating lymph node stages on FDG-PET/CT still needs improvement. In addition to traditional FDG-PET/CT image parameters such as the standardized uptake value (SUV), many other parameters are available from FDG-PET/CT images, for example, the lymphatic drainage pathway. Texture analysis, which captures subtle differences, can also help define lymph node staging. To improve the accuracy of lymph node metastasis diagnosis in NSCLC patients on FDG-PET/CT, this research developed a computer-aided diagnosis (CAD) system, which achieved 88.056% accuracy.
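The SUV mentioned above is a normalized uptake measure with a standard body-weight definition, sketched here with illustrative units (the paper does not specify its normalization variant):

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight standardized uptake value:
    tissue activity concentration / (injected dose / body weight).
    Assumes tissue density ≈ 1 g/mL so kBq/mL ≈ kBq/g."""
    # Convert: MBq -> kBq (×1000), kg -> g (×1000); the factors cancel.
    dose_per_gram = injected_dose_mbq * 1000.0 / (body_weight_kg * 1000.0)
    return activity_kbq_per_ml / dose_per_gram

# 5 kBq/mL uptake after a 200 MBq injection in a 70 kg patient
print(suv(5.0, 200.0, 70.0))  # 5 / (200/70) = 1.75
```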
In recent years, many medical image analysis methods based on deep learning have been proposed for applications such as organ segmentation and cancer detection. Segmentation of the lung region from chest X-ray (CXR) images is also an important task for computer-aided diagnosis (CAD). However, although many deep learning methods have been proposed for this purpose, the regions where the lung and the heart overlap have been excluded from the extraction target despite their importance for disease detection. The aim of this paper is to extract whole lung regions from CXR images using a U-Net-based method; as is widely known, the U-Net shows high performance across various applications. In our experiment, we achieved an average Dice coefficient of 0.91.
In this paper, a new computer-aided diagnosis system is proposed to automatically diagnose liver cirrhosis from four-phase CT images, comprising the non-contrast, arterial, delay, and portal venous phases. It is developed to discriminate cirrhosis into mild or severe levels through an automatic liver segmentation method and a classification method based on machine learning. First, the gradient-inverse map of the CT images is calculated to derive relative-smoothness features in local areas. We then compare the centroid and area of each binary labeled group across slices to automatically extract the volume of interest (VOI) of the liver. In the classification step, first-order features and texture features are calculated to describe the intensity representation of the liver parenchyma, along with parameters that quantify the intensity distribution in the VOI. We also quantify the shape of the VOI to derive structural features. Finally, trained support vector machine (SVM) and neural network (NN) classifiers are applied to classify the subjects into clinical stages of liver cirrhosis.
In the dopamine nerves of the nigrostriatal body in the brain, 123I-FP-CIT binds to the dopamine transporter (DAT), whose distribution can be visualized on a single photon-emission computed tomography (SPECT) image. The Tossici-Bolt method is generally used to analyze SPECT images; however, since it uses a fixed region of interest, it is susceptible to the influence of non-accumulating regions. Magnetic resonance (MR) images are effective for recognizing the shape of the striatal region. Here we used MR images generated by deep learning from the low-dose CT images taken with SPECT/CT devices. The purpose of this study was to perform a highly repeatable quantitative analysis using the striatal region extracted from automatically generated MR images. First, an MR image was generated from a CT image by pix2pix. A striatal region was then extracted from the generated MR image by PSPNet [3], and a quantitative analysis using the specific binding ratio was performed on this region. For the experiments, 60 clinical cases of SPECT/CT and MR images were used, and the specific binding ratios calculated by this method and the Tossici-Bolt method were compared. Better results than with the Tossici-Bolt method were obtained in 12 cases. Therefore, generating MR images from low-dose CT images and segmenting them by deep learning may contribute to quantitative DAT imaging analysis with high reproducibility.
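The specific binding ratio has a standard definition that the region-based analysis plugs into: mean counts in the striatal region relative to a non-specific reference region. A minimal sketch (the count values are invented for illustration; in the study the striatal mask would come from the MR-derived segmentation):

```python
import numpy as np

def specific_binding_ratio(striatal_counts, reference_counts):
    """SBR = (mean striatal counts - mean reference counts)
             / mean reference counts."""
    cs = np.mean(striatal_counts)
    cr = np.mean(reference_counts)
    return (cs - cr) / cr

# Striatal voxels averaging 32 counts over a background of 8 give SBR = 3
print(specific_binding_ratio([30.0, 34.0], [8.0, 8.0]))  # 3.0
```

A tighter, anatomy-driven striatal mask changes the mean striatal counts and hence the SBR, which is why the segmentation quality matters for reproducibility.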
The goal of this VR system is to simulate a puzzle/challenge game for adaptation in cognitive rehabilitation training. Development of the VR system is inspired by a first-person puzzle-platform game in which the player must navigate through and complete a series of puzzle rooms, each more difficult than the last. The unique features of this work include the use of “portals” and a “portal gun”. The portal gun allows the player to shoot two separate portals onto walls, so that anything can be teleported from one portal to the other. Implemented on the Unity engine and SteamVR, the VR Toolkit is employed for modeling and script development. Technical innovations are made in modeling an animated, self-collision-detectable spider; upon being hit by a weapon (bullet or blade), it uses a special dissolve shader to give the effect of gradually disappearing from the game. We present the specific tasks of cognitive rehabilitation, the conceptualization of the VR techniques, and the detailed implementation of the system.
Current screening mammography results in a high recall rate, and distinguishing between BI-RADS 3 and BI-RADS 4 is a challenge for radiologists. To support radiologists' diagnoses, recent research on CAD systems has shown that deep learning methods can significantly improve lesion detection, segmentation, and classification. However, there is not enough evidence that deep learning models can reduce the high recall rate, because few studies report performance on BI-RADS 3 and BI-RADS 4 cases. Moreover, few studies extend current models to combine the CC and MLO views in a single prediction. We therefore propose convolutional neural networks to classify breast cancer. Our model can predict images at four input sizes, and we extended it to consider the CC and MLO views in a single prediction. To validate our models, we split the data by patient rather than by image. Our training set comprised 4255 images, and the test set contained 355 images proven by biopsy and callback. Human experts yielded an overall accuracy of 65.3%, while our model achieved a better accuracy of 79.6%. On BI-RADS 3 and 4 cases, human experts achieved an accuracy of 54.1%, whereas our model maintained a high accuracy of 75.7%. When we combined the CC and MLO views in a single prediction, we achieved an AUC of 0.86.
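One straightforward way to combine the two views into a single prediction is late fusion of per-view probabilities; whether this matches the authors' exact fusion scheme is an assumption, so treat the following as a sketch of the general idea only.

```python
import numpy as np

def fuse_views(prob_cc, prob_mlo):
    """Late fusion: average the per-view malignancy probabilities of the
    same breast into one score (equal weighting is an assumption)."""
    return 0.5 * (np.asarray(prob_cc, dtype=float)
                  + np.asarray(prob_mlo, dtype=float))

# Two breasts: CC says [0.9, 0.2], MLO says [0.7, 0.4]
print(fuse_views([0.9, 0.2], [0.7, 0.4]))  # [0.8 0.3]
```

Alternatives include taking the maximum across views or feeding both views into a shared network; averaging is simply the most common baseline.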
Coronary artery disease (CAD), a common disease, now affects the quality of patients' daily lives. Qualitative analysis of its causes requires more detailed information about the inner vessel tissue (healthy or not). In recent years, intravascular OCT (IVOCT) has begun to be used on patients to guide appropriate treatment. Lesion tissue analysis of thousands of IVOCT images per patient is time-consuming and inefficient, especially when done manually. Traditional machine learning methods are often applied to features extracted from the image data with special feature-engineering techniques, but deeper abstract features remain difficult to extract this way. Deep learning has recently achieved great success in image detection and classification tasks and is now commonly applied to many computer vision problems. In this paper, we propose a method based on a convolutional neural network (CNN), modeled on VGG-Net, for classifying categories of vessel lesion tissue. We preprocess the IVOCT images with catheter and guide-wire removal and obtain the lumen boundary. Analyzing the intensity of vessel tissue with light attenuation, we crop fixed-size rectangular regions along the circumferential direction to obtain patches as the input samples of the CNN. Three input types — an LBP-based single channel, RGB channels, and a merged channel containing both LBP and RGB — are fed into the model, and their prediction results are compared.
Hepatocellular carcinoma (HCC) is a worldwide tumor, but its prognosis can be improved by early diagnosis. In contrast-enhanced CT, a modality commonly used for HCC diagnosis, HCC lesions exhibit dynamic enhancement patterns. To incorporate the multi-phase dynamic characteristics of HCC into an automatic lesion detection system, multi-phase CT images were aligned using an image registration scheme, and the registered arterial, portal venous, and delayed phase images were merged into one RGB image. A 2D deep convolutional neural network (DCNN) detection model was trained and tested on a total of 251 CT datasets. The proposed DCNN model with dynamic multi-phase information showed a sensitivity of 93.88% at 2.98 false positives (FPs) per patient on 52 test CT datasets. This result is better than the best performance among the three single-phase settings (a sensitivity of 73.47% at 3.15 FPs/patient), indicating that including the dynamic information of multi-phase CT images is more effective for HCC detection.
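The phase-merging step — packing three registered phases into the R, G, and B channels of one image — can be sketched as follows. The per-phase min-max normalisation and the channel order are assumptions for illustration; the paper only states that the registered phases are merged.

```python
import numpy as np

def merge_phases(arterial, portal, delayed):
    """Stack three registered, same-shape phase images into the R, G,
    and B channels of one image so a standard 2D CNN can consume them."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)  # scale to [0, 1]
    return np.stack([norm(arterial), norm(portal), norm(delayed)], axis=-1)

phases = [np.random.rand(64, 64) for _ in range(3)]
rgb = merge_phases(*phases)
print(rgb.shape)  # (64, 64, 3)
```

The CNN then sees the enhancement dynamics as inter-channel differences, which is what makes the multi-phase input more informative than any single phase.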
Research on computer-aided diagnosis (CAD), which discriminates the presence or absence of diseases by machine learning and supports doctors' diagnoses, has been actively conducted. However, training machine learning models requires many annotated training examples, and since the annotations are made manually by radiologists, annotating hundreds to thousands of images is very hard work. This study proposes classifiers using a convolutional neural network (CNN) with transfer learning for efficient opacity classification of diffuse lung diseases, and the effects of transfer learning are analyzed under various conditions. In detail, classifiers under nine different transfer learning conditions and without transfer learning are compared to identify the best conditions.
Mammary gland density is used as one measure for managing breast cancer risk and can be divided into four categories. Mammography is used for population-based breast cancer screening in Japan; however, masses and calcifications may be hidden in the shadow of the mammary gland on the mammogram when patients in the heterogeneously dense or extremely dense categories are scanned. It is therefore necessary to recommend an examination suited to each mammary gland density category; for example, a doctor may recommend ultrasonography in addition to mammography for patients with dense breasts. However, mammary gland density is distinguished visually using subjective judgment. Against this background, we have worked on automatic classification of mammary gland density using deep learning, and here we investigated the effect of image resolution on the classification results. The resolution was varied from 1/100 (474 × 354) to 1/3600 (79 × 59) using 1106 cases at a resolution of 4740 × 3540 pixels obtained with Fuji Computed Radiography (FCR) by Fujifilm Co., Ltd. As a result, the accuracy of automatic classification of mammary gland density exceeded 90% down to a resolution of 1/400 (237 × 177), and was still 89% at the lowest resolution of 1/3600 (79 × 59).
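The resolution reduction can be sketched as simple block-average downsampling: a factor of 10 per axis turns 4740 × 3540 into 474 × 354 (the "1/100" area ratio). The paper does not state which interpolation it used, so this is one plausible choice.

```python
import numpy as np

def downsample(img, factor):
    """Block-average downsampling: each factor×factor block of pixels
    is replaced by its mean, reducing each axis by `factor`."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
print(downsample(img, 2))  # [[2.5 4.5] [10.5 12.5]]
```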
We have investigated an approach for predicting parotid gland tumor (PGT) malignancy on preoperative magnetic resonance (MR) images. The PGT regions were segmented on the MR images of 42 patients, and a total of 972 radiomic features were extracted from the tumor regions in T1- and T2-weighted MR images. Five of the 972 features were selected as a radiomic biomarker using the least absolute shrinkage and selection operator (LASSO). Malignancy of PGTs (high grade versus intermediate and low grades) was predicted using random forest (RF) and k-nearest neighbors (k-NN) classifiers with the radiomic biomarker. The proposed approach was evaluated using accuracy and the mean area under the receiver operating characteristic curve (AUC) in a leave-one-out cross validation test. The accuracy and AUC of the malignancy prediction were 73.8% and 0.88 for the RF and 88.1% and 0.95 for the k-NN, respectively. Our results suggest that the radiomics-based k-NN approach using preoperative MR images could be feasible for predicting the malignancy of PGTs.
In this paper, we propose a method to classify metastatic bone tumors using treatment-planning computed tomography images. The proposed method utilizes pre-trained deep convolutional neural network (DCNN) models as feature extractors and classifies metastatic bone tumors using the obtained features. The performance of several state-of-the-art DCNN-based features was compared and evaluated in our experiment.
With the development of 3D printing technology, anybody can print a weapon on a home 3D printer. In this paper, we present a weapon-detection algorithm for safe 3D printing that uses convolutional neural networks (CNNs) to prevent weapons from being printed in the 3D printing industry. The proposed algorithm trains an improved CNN on the D2 shape distributions of 3D weapon models. The D2 shape distribution of a 3D weapon model is calculated from geometric features and points sampled on the surface of the 3D triangle mesh to construct a D2 vector, which is then used to train the improved CNN. The training and testing results show that the proposed algorithm is more accurate than previous methods.
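The D2 descriptor underlying this method can be sketched as follows: sample random point pairs uniformly on the mesh surface and histogram their pairwise distances. The unit-cube mesh, sample count, and bin count below are illustrative, not the paper's settings.

```python
# Minimal D2 shape distribution sketch: area-weighted uniform sampling of
# point pairs on a triangle mesh, then a distance histogram (the "D2 vector"
# that the paper feeds to a CNN). The mesh here is a unit cube.
import numpy as np

rng = np.random.default_rng(2)

def sample_on_triangles(verts, tris, n):
    """Uniformly sample n points on a triangle mesh, area-weighted."""
    a, b, c = (verts[tris[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(tris), size=n, p=areas / areas.sum())
    u, v = rng.random((2, n))
    flip = u + v > 1                      # fold samples back into the triangle
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    return a[idx] + u[:, None] * (b[idx] - a[idx]) + v[:, None] * (c[idx] - a[idx])

# unit cube: 8 vertices, 12 triangles (two per face)
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
T = np.array([[0, 1, 3], [0, 3, 2], [4, 5, 7], [4, 7, 6], [0, 1, 5], [0, 5, 4],
              [2, 3, 7], [2, 7, 6], [0, 2, 6], [0, 6, 4], [1, 3, 7], [1, 7, 5]])

p, q = sample_on_triangles(V, T, 4096), sample_on_triangles(V, T, 4096)
d2, _ = np.histogram(np.linalg.norm(p - q, axis=1), bins=64, density=True)
print("D2 vector length:", d2.size)
```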
We propose a new method for mapping onto a brain volume model that includes inner organs with complicated shapes, such as the lateral ventricles. The proposed method is based on a volumetric Self-organizing Deformable Model (vSDM), which makes it possible to control the mapping positions of inner organs while preserving geometrical features before and after the mapping. This control sometimes causes self-intersection of the volume model. vSDM resolves self-intersections by moving vertices of the volume model; however, when an inner organ has a complicated shape, vertex movement cannot always correct the self-intersection. To solve this problem, we extend vSDM by introducing a new process that edits the mesh structure of the volume model. Moreover, by applying the proposed method to six brain volume models, we construct a volumetric Statistical Shape Model (SSM) that represents the shape variations not only of the brain surface but also of the brain's inner organs. Experimental results confirmed that the volumetric SSM has acceptable performance compared with general surface SSMs generated from organ surface models.
Research on computer-aided diagnosis (CAD) for medical images using machine learning has been actively conducted. However, machine learning, and deep learning in particular, requires a large amount of annotated training data. Deep learning often requires thousands of training samples, and it is laborious for radiologists to assign normal and abnormal labels to so many images. In this research, aiming at efficient opacity annotation of diffuse lung diseases, unsupervised and semi-supervised opacity annotation algorithms are introduced. Unsupervised learning clusters opacities based on image features without using any opacity labels, and semi-supervised learning efficiently exploits the small amount of annotated training data for training classifiers. Performance is evaluated by classifying six kinds of opacities of diffuse lung diseases: consolidation, ground-glass opacity, honeycombing, emphysema, nodular, and normal; the evaluation demonstrates the effectiveness of the methods.
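The semi-supervised idea can be sketched on synthetic data, with scikit-learn's label spreading as a stand-in for the paper's algorithm: only a small fraction of patches carry annotations, and labels propagate to the rest through feature-space similarity. The six classes mirror the six opacity types; the features and class structure are synthetic.

```python
# Semi-supervised annotation sketch: 5% of "patches" are labeled, the rest
# get labels by propagation. Data are synthetic blobs, one per opacity type.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_blobs(n_samples=600, centers=6, cluster_std=1.0, random_state=3)
y = np.full(600, -1)                              # -1 marks "no annotation"
labeled = np.random.default_rng(3).choice(600, size=30, replace=False)
y[labeled] = y_true[labeled]                      # only 30 patches annotated

model = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y)
acc = (model.transduction_ == y_true).mean()
print(f"accuracy on all patches: {acc:.2f}")
```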
Micro-CT is a nondestructive scanning device capable of capturing three-dimensional structures at the micrometer level. With the spread of this device in medical fields, it is expected to bring further understanding of human anatomy through analysis of three-dimensional micro-structures in volumes of in vivo specimens captured by micro-CT. In micro-structure analysis of the lung, methods for extracting surface structures, including the interlobular septa and the visceral pleura, have not been commonly studied. In this paper, we introduce a method to extract sheet structures such as the interlobular septa and the visceral pleura from micro-CT volumes. The proposed method consists of two steps: a Hessian-analysis-based method for sheet structure extraction, and a Radial Structure Tensor combined with roundness evaluation for hollow-tube structure extraction. We applied the proposed method to complex phantom data and a medical lung micro-CT volume, and the experiments confirmed the extraction of the interlobular septa from the medical volume.
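The Hessian-based sheet extraction step can be sketched as follows: for a bright plate-like structure, one Hessian eigenvalue is strongly negative while the other two are near zero. The smoothing scale and the simple sheetness score below are illustrative assumptions, not the paper's exact filter.

```python
# Hessian sheet-detector sketch on a synthetic volume with a bright slab
# normal to z. Second-order Gaussian derivatives give the Hessian; the
# eigenvalue pattern (lam_min << 0, others ~ 0) signals a sheet.
import numpy as np
from scipy.ndimage import gaussian_filter

vol = np.zeros((32, 32, 32))
vol[:, :, 15:17] = 1.0                            # bright sheet at z ~ 15-16
vol = gaussian_filter(vol, sigma=1.5)

H = np.empty(vol.shape + (3, 3))
for i in range(3):
    for j in range(3):
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1                             # d^2 / dx_i dx_j
        H[..., i, j] = gaussian_filter(vol, sigma=1.5, order=order)

lam = np.linalg.eigvalsh(H)                       # ascending eigenvalues
sheetness = np.maximum(-lam[..., 0], 0) - np.abs(lam[..., 1]) - np.abs(lam[..., 2])
z_peak = np.unravel_index(sheetness.argmax(), vol.shape)[2]
print("peak sheetness at z =", z_peak)
```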
In this paper, we report on the construction of a pancreatic tumor model that represents the relationship between tumor growth and micro-anatomical structures. The former, tumor growth, is described by a temporal series of whole-body MRI images, and the latter, the micro-structures of the tumor, by a spatial series of microscopic images of thin sections sliced from the extracted pancreatic tumor. For the model construction, we developed new non-rigid registration methods for (1) accurate description of tumor growth, (2) reconstruction of 3D microscopic images, and (3) registration between an MRI image and the corresponding microscopic images. In addition, we constructed a neural network that can generate a set of fake microscopic image patches of a pancreatic tumor corresponding to each voxel inside the tumor region in an MRI image. The outlines of the methods are introduced and examples of experimental results are presented.
This paper presents spatiotemporal statistical models of organ surfaces during human embryonic development, in which size, shape, and topology of organs are dynamically changed. The modeling scheme comprised two steps: (1) each temporal stage of an embryo was statistically modeled, and (2) models between neighboring temporal stages were interpolated. This paper includes optimization of interpolation techniques and a novel method for modeling nested shapes, such as brain and ventricular surfaces. The effectiveness of our method was demonstrated in the context of statistical modeling of a human embryo from the Kyoto Collection.
Histopathological imaging and Magnetic Resonance (MR) imaging are two equally important yet very distinct modalities of medical imaging. The high resolution of the former and the non-invasiveness of the latter provide complementary information for medical diagnosis and research. Because of their largely different resolutions, registration between 3D images of these two modalities is challenging. The objective of this paper is to create a multimodal 3D model of a pancreatic cancer tumor by registering a reconstructed 3D pathological image and an MR image from a KPC mouse. The tumor portions were manually segmented, and the 3D pathological image was reconstructed using landmark-based non-linear registration. The process starts by registering the outlines of the images with the LDDMM non-linear registration method to match the binary labels of the tumor regions. Next, a non-linear B-spline deformation based on mutual information maximization registers the internal structures of the images. Experimental results show that the overall shape of the tumor and its internal necrotic portion could be correctly registered, although the quality of the manual segmentations affects the accuracy of the registration.
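The mutual-information metric driving the B-spline stage can be computed from a joint intensity histogram. A minimal sketch follows; a real pipeline would optimize B-spline coefficients against this quantity with a registration toolkit rather than evaluate it in isolation.

```python
# Mutual information (MI) between two images from a joint histogram.
# MI is high when intensities co-occur consistently (aligned) and drops
# toward zero for misaligned, statistically independent intensity pairs.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two equally-shaped images, in nats."""
    pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(4)
img = rng.random((64, 64))
warped = np.roll(img, 5, axis=0)                  # misaligned copy

print("MI aligned   :", round(mutual_information(img, img), 3))
print("MI misaligned:", round(mutual_information(img, warped), 3))
```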
Swallowing is achieved by a sequence of actions performed by cervical structures. Although many patients worldwide suffer from dysphagia, the mechanism and kinematics of swallowing have not been sufficiently elucidated. This study aims to segment intervertebral disks (IDs), which are representative cervical structures, in videofluorographic (VF) images by use of a convolutional neural network (CNN). The proposed method consists of three steps: extraction of cervical masks, CNN-based segmentation of candidate ID regions, and elimination of false positives. The segmentation method was applied to actual VF images of eleven participants containing fifty-one non-occluded IDs, of which forty-three were segmented successfully.
We propose automatic feature generation by a deep convolutional autoencoder (deep CAE) that requires no lesion data. The main idea of the proposed method is based on anomaly detection: the deep CAE is trained on normal volume patches only. The trained deep CAE computes low-dimensional features and a reconstruction error from a 2.5-dimensional (2.5D) volume patch. The proposed method was evaluated experimentally on 150 chest CT cases. By combining previous features with the deep CAE-based features, an improved classification performance was obtained: AUC = 0.989 and ANODE = 0.339.
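The anomaly-detection principle, training a reconstructor on normal patches only and flagging high reconstruction error, can be sketched with PCA standing in for the deep CAE; the data, code size, and threshold below are illustrative.

```python
# Anomaly detection via reconstruction error: fit a low-dimensional
# reconstructor (PCA here, a deep CAE in the paper) on normal patches only,
# then flag patches whose error exceeds a normals-only threshold.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
latent = rng.normal(size=(500, 8))                # normals live on an 8-D manifold
normal = latent @ rng.normal(size=(8, 64)) + 0.05 * rng.normal(size=(500, 64))
lesion = 3.0 * rng.normal(size=(20, 64))          # off-manifold "lesion" patches

pca = PCA(n_components=8).fit(normal)             # low-dimensional code, as in a CAE

def recon_error(x):
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

thr = np.percentile(recon_error(normal), 99)      # threshold from normals only
flagged = int((recon_error(lesion) > thr).sum())
print(f"lesions flagged: {flagged} / 20")
```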
In this study, we present a 3D-printing-based realistic anthropomorphic dental phantom and its imaging evaluation. A real skull phantom was scanned with MDCT at high resolution, and then image segmentation and 3D modeling were carried out. The created phantom was scanned with an MDCT and a dental CT scanner for image-quality evaluation of metal artifacts. Our study demonstrated the feasibility of 3D-printing realistic anthropomorphic phantoms that can be used in various dental imaging studies.
It is substantially difficult for radiologists to measure intracranial aneurysm sizes because of overlapping structures and/or unusual locations, especially for aneurysms smaller than 7 mm. Therefore, we have developed an automated approach for estimation of unruptured intracranial aneurysm sizes in MRA images. The errors of estimated aneurysm sizes in the longest, middle and shortest diameters were 2.53%, 10.79% and 12.62%, respectively.
Early detection of hypertension is important because hypertension leads to stroke and cardiovascular diseases. Hypertensive changes in the retina are diagnosed by measuring the arteriovenous ratio near the optic disc. Classification of arteries and veins is therefore necessary for the ratio measurement; previous studies classified them using pixel-based features such as pixel values, texture features, and shape features. To simplify the classification process, a convolutional neural network (CNN) was applied in this study, with 30 retinal images used for training and 10 for testing. For evaluation, the CNN was first tested using manually extracted vessel centerlines; in a fourfold cross-validation with 40 retinal images, the mean classification rate for arteries and veins was 98%. The CNN was then tested using centerlines extracted automatically by a CNN-based method, to evaluate the fully automatic pipeline; it classified 90% of blood vessels into arteries and veins within the arteriovenous-ratio measurement zone. This result may serve as an important processing step for abnormality detection.
Because most capsule-endoscopic images contain normal mucous membranes, physicians spend most of their reading time observing normal areas. Their reading time could therefore be significantly reduced if only the frames in which a lesion is suspected needed to be read intensively. This study aims to develop a deep convolutional neural-network model capable of automatically detecting lesions in capsule-endoscopic images of the small bowel. The proposed model consists of two deep neural networks in parallel, which take in images in the RGB and CIELab color spaces, respectively. The model is based on a transfer-learned GoogLeNet architecture. Our algorithm showed promising results in classifying endoscopic images containing lesions (98.56% accuracy). If the proposed algorithm is used to screen abnormal images, it is expected to reduce physicians' reading time and improve their reading accuracy.
Detection of atrial fibrillation (AF) is a critical healthcare issue: although AF is the commonest sustained arrhythmia, it carries an increased risk of serious brain infarction due to cerebral embolism. To improve the reliability of AF detection in long-term monitoring of heartbeat signals, we developed machine-learning systems for detecting AF using the Allostatic State Mapping by Ambulatory ECG Repository (ALLSTAR) database of 24-h ambulatory electrocardiograms. Lorenz plot images were generated from consecutive segments of 600 R-R intervals, and the image pattern characteristic of AF was discriminated from those of non-AF segments, including sinus rhythm, frequent atrial ectopic beats, and atrial flutter. Lorenz plot images consisting of 10,035 known AF and 10,107 non-AF samples were provided to a convolutional neural network (CNN). The performance of AF detection was evaluated on an independent set of 50 samples of 24-h ECG including paroxysmal AF episodes. As a result, a CNN that detected the Lorenz plot of AF with 100% sensitivity and 100% specificity was obtained through deep learning. The developed CNN system accurately classified all 24-h ECG data including paroxysmal AF episodes. Lorenz plot imaging of R-R interval dynamics is useful for effectively discriminating AF from non-AF by artificial intelligence.
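The Lorenz-plot construction, plotting each R-R interval against the next and rasterizing the scatter into an image, can be sketched as follows; the interval values and bin settings are synthetic and illustrative. Sinus rhythm clusters tightly on the diagonal, whereas the irregularly irregular AF intervals scatter widely.

```python
# Lorenz-plot image sketch: 2-D histogram of (RR[n], RR[n+1]) pairs,
# binarized into an image like those fed to the paper's CNN.
import numpy as np

def lorenz_image(rr, bins=64, range_ms=(200, 2000)):
    """Binary Lorenz-plot image from a sequence of R-R intervals (ms)."""
    img, _, _ = np.histogram2d(rr[:-1], rr[1:], bins=bins,
                               range=[range_ms, range_ms])
    return (img > 0).astype(np.uint8)

rng = np.random.default_rng(6)
sinus = 800 + 20 * rng.normal(size=600)       # regular rhythm, ~800 ms
af = rng.uniform(400, 1200, size=600)         # irregularly irregular AF

print("occupied bins, sinus:", int(lorenz_image(sinus).sum()))
print("occupied bins, AF   :", int(lorenz_image(af).sum()))
```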