CT imaging has been widely used in recent dental research. High-quality CT images can be obtained by using a high radiation dose, which unfortunately harms the human body. Reducing the radiation dose removes this negative effect but introduces noise into the CT images. In this paper, a dental CT image denoising method based on a generative adversarial network (GAN) is proposed. A total of 6144 pairs, each containing a noisy dental CT image and the corresponding original dental CT image, were used to train the GAN model. Testing was carried out on original images excluded from training. Compared with the images without denoising, the test results show significant improvement: for example, PSNR increased by 21.52% and SSIM by 52.95%. Assessment by professional visual inspection also showed marked improvement, demonstrating the effectiveness of the proposed method. Moreover, the method requires only a small amount of training data; it is foreseeable that with more training data, it will perform even better at noise reduction in dental CT images.
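The PSNR gain quoted above follows the standard definition over the mean squared error; a minimal pure-Python sketch for 8-bit images (represented here, for simplicity, as flat lists of 0-255 intensities) is:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat sequences of pixel intensities."""
    if len(reference) != len(test):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

SSIM is computed analogously but over local windows of means, variances, and covariances; in practice a library implementation such as scikit-image's `structural_similarity` is normally used rather than hand-rolled code.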
Compressed sensing (CS) computed tomography has proven important for several clinical applications, such as sparse-view computed tomography (CT), digital tomosynthesis, and interior tomography. Traditional compressed sensing focuses on the design of handcrafted prior regularizers, which are usually image-dependent and time-consuming. Inspired by recently proposed deep learning-based CT reconstruction models, we extend the state-of-the-art LEARN model to a dual-domain version, dubbed LEARN++. Unlike existing iteration-unrolling methods, which involve projection data only in the data-consistency layer, the proposed LEARN++ model integrates two parallel, interactive subnetworks that perform image restoration and sinogram inpainting on the image and projection domains simultaneously, fully exploiting the latent relations between projection data and reconstructed images. The experimental results demonstrate that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared with several state-of-the-art methods in terms of both artifact reduction and detail preservation.
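Unrolled reconstruction of this kind alternates a data-consistency step against the measured sinogram with an image-prior step. The following toy sketch uses a hypothetical 2×2 system matrix and replaces the learned CNN prior with a fixed soft-threshold stand-in (the sinogram-inpainting branch of LEARN++ is omitted); it illustrates the structure only, not the paper's actual network:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def data_consistency_step(x, A, y, step=0.1):
    """One gradient step toward agreement with the measured sinogram y:
    x <- x - step * A^T (A x - y)."""
    residual = [p - m for p, m in zip(matvec(A, x), y)]
    grad = matvec(transpose(A), residual)
    return [xi - step * gi for xi, gi in zip(x, grad)]

def soft_threshold(x, tau=0.01):
    """Stand-in image prior (in LEARN-style models this is a learned CNN)."""
    return [max(abs(v) - tau, 0.0) * (1 if v >= 0 else -1) for v in x]

def unrolled_reconstruction(A, y, n_iters=50):
    """Alternate data consistency and the prior for a fixed iteration count,
    mirroring the unrolled-iteration structure of LEARN-type models."""
    x = [0.0] * len(A[0])
    for _ in range(n_iters):
        x = data_consistency_step(x, A, y)
        x = soft_threshold(x)
    return x
```

In a learned unrolled model, `step`, `tau`, and the prior itself would be trainable and differ per iteration.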
Fasteners are important components of the railway system: they fix the tracks to the sleepers and reduce the likelihood of derailment. Currently, the most widely used approaches for automatic detection of defective fasteners are vision-based, but they are not robust or efficient enough for real-world application. To solve this problem, this paper applies deep convolutional networks to automatic fastener defect detection and proposes a two-stage framework composed of a CenterNet-based fastener localization module and a VGG-based defect classification module. In addition, we introduce an attention mechanism named CBAM into the localization network and an adaptive weighted softmax loss into the classification network's training procedure to raise the accuracy of both modules. Experimental results show that both techniques clearly improve the performance of the fastener defect detection system. The proposed localization network offers a better accuracy-speed trade-off, with 99.94% AP at 63 FPS on the test set. In addition, the proposed defect classification network achieves the best accuracy (up to 98.10%) on the test set and can classify up to five categories of defects.
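A weighted softmax loss scales each sample's cross-entropy by a per-class weight so that rarer defect classes contribute more to training. The abstract does not specify the paper's adaptive weighting scheme; the inverse-frequency rule below is an illustrative assumption, and the whole sketch is a minimal pure-Python illustration rather than the authors' implementation:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for numerical stability
    s = sum(exps)
    return [e / s for e in exps]

def weighted_softmax_loss(logits, target, class_weights):
    """Cross-entropy scaled by the target class's weight."""
    probs = softmax(logits)
    return -class_weights[target] * math.log(probs[target])

def weights_from_counts(counts):
    """One simple adaptive scheme (an assumption, not necessarily the
    paper's): inverse class frequency, normalized to mean 1."""
    total = sum(counts)
    raw = [total / c for c in counts]
    s = sum(raw)
    return [len(counts) * w / s for w in raw]
```

With balanced classes the weights reduce to 1 and the loss is ordinary softmax cross-entropy; frameworks expose the same idea, e.g. the `weight` argument of PyTorch's `nn.CrossEntropyLoss`.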
To ensure the safety of rail transit, detecting flaws on the rail surface is vitally important. Replacing current manual inspection with an automatic approach makes the work more efficient and safer. In this paper, we propose a novel two-stage pipeline for rail-surface defect detection that localizes the rails and then slides a deep convolutional neural network (DCNN) over the rail surface. Specifically, in the first stage, an anchor-free detector locates the tracks in the original images and produces cropped images focused on the rail region. In the second stage, a trained DCNN slides over the cropped images to detect defects, yielding the types and approximate locations of the defects on the rail surface. Experimental results show that the proposed method is robust and achieves practical defect detection precision.
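The second stage amounts to classifying every window of a cropped rail image. A minimal pure-Python sketch of the sliding-window loop, with the trained DCNN abstracted as a `classify` callable (all names here are illustrative, not the paper's):

```python
def sliding_windows(width, height, win, stride):
    """Yield top-left corners of square windows covering the image."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield x, y

def detect_defects(image, win, stride, classify):
    """Slide a classifier over the image; `classify` stands in for the
    trained DCNN and returns a defect label or None for each patch."""
    detections = []
    h, w = len(image), len(image[0])
    for x, y in sliding_windows(w, h, win, stride):
        patch = [row[x:x + win] for row in image[y:y + win]]
        label = classify(patch)
        if label is not None:
            detections.append((x, y, label))
    return detections
```

The stride trades detection granularity against the number of network evaluations; cropping to the rail region first (stage one) keeps that number practical.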
Given the potential risk of X-ray radiation to the patient, low-dose CT has attracted considerable interest in the medical imaging field. Currently, the mainstream low-dose CT methods include vendor-specific sinogram-domain filtration and iterative reconstruction algorithms, but these require access to raw data whose formats are not transparent to most users. Because of the difficulty of modeling statistical characteristics in the image domain, existing methods that directly process reconstructed images cannot eliminate image noise well while preserving structural details. Inspired by deep learning, we combine an autoencoder, a deconvolution network, and shortcut connections into a residual encoder-decoder convolutional neural network (RED-CNN) for low-dose CT imaging. After patch-based training, the proposed RED-CNN achieves performance competitive with state-of-the-art methods; in particular, it has been favorably evaluated in terms of noise suppression and structural preservation.
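Patch-based training of this kind samples aligned (noisy, clean) patch pairs from full images rather than training on whole slices. A minimal pure-Python sketch of the sampling step, with images represented as lists of rows (names illustrative, not from the paper's code):

```python
import random

def paired_patches(noisy, clean, patch_size, n_patches, seed=0):
    """Sample spatially aligned (noisy, clean) patch pairs from two
    equally sized images, for patch-based denoiser training."""
    rng = random.Random(seed)
    h, w = len(clean), len(clean[0])
    pairs = []
    for _ in range(n_patches):
        y = rng.randrange(h - patch_size + 1)
        x = rng.randrange(w - patch_size + 1)
        noisy_patch = [row[x:x + patch_size] for row in noisy[y:y + patch_size]]
        clean_patch = [row[x:x + patch_size] for row in clean[y:y + patch_size]]
        pairs.append((noisy_patch, clean_patch))
    return pairs
```

Training on many small patches multiplies the effective number of training samples and lets the network learn local noise statistics, which suits the shortcut-connection design: the network only has to predict the residual between a noisy patch and its clean counterpart.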