Purpose: X-ray scatter causes considerable degradation of cone-beam computed tomography (CBCT) image quality. Deep learning-based methods have been shown to be effective for scatter estimation. Modern CBCT systems can scan a wide range of field-of-measurement (FOM) sizes, and variations in FOM size cause a major shift in the scatter-to-primary ratio. However, the scatter estimation performance of deep learning networks has not been extensively evaluated under varying FOMs. We therefore train state-of-the-art scatter estimation neural networks on varying FOMs and develop a method that uses FOM size information to improve performance.

Approach: We used the FOM size as an additional input feature by converting it into two channels and concatenating them to the encoder of each network. We compared this approach for a U-Net, Spline-Net, and DSE-Net by training each with and without the FOM information. We used a Monte Carlo-simulated dataset to train the networks on 18 FOM sizes and tested them on 30 unseen FOM sizes. We also evaluated the models on water phantoms and real clinical CBCT scans.

Results: The simulation study shows that our method reduced the average mean absolute percentage error of scatter estimation in the 2D projection domain by 38% for U-Net, 40% for Spline-Net, and 33% for DSE-Net. The root-mean-square error on the 3D reconstructed volumes improved by 43% for U-Net, 30% for Spline-Net, and 23% for DSE-Net. Our method also improved contrast and image quality on real data, including the water phantom and clinical scans.

Conclusion: Providing additional information about FOM size improves the robustness of neural networks for scatter estimation. The approach is not limited to FOM size; further variables such as tube voltage, scanning geometry, and patient size can be added to improve the robustness of a single network.
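The FOM-conditioning step described in the Approach can be sketched as follows. This is a minimal NumPy sketch, assuming the two channels encode the FOM width and height as constant planes normalized by a maximum FOM size; the function name, channel layout, and normalization constant are illustrative assumptions, as the abstract does not specify them:

```python
import numpy as np

def add_fom_channels(projection, fom_w_mm, fom_h_mm, max_fom_mm=500.0):
    """Append two constant channels encoding the FOM size to a projection.

    projection : (H, W) array, a single CBCT projection.
    fom_w_mm, fom_h_mm : FOM width/height (hypothetical mm units).
    max_fom_mm : assumed normalization constant.
    Returns a (3, H, W) channels-first array for the network encoder.
    """
    h, w = projection.shape
    # Each FOM dimension becomes one spatially constant feature plane.
    w_chan = np.full((h, w), fom_w_mm / max_fom_mm, dtype=projection.dtype)
    h_chan = np.full((h, w), fom_h_mm / max_fom_mm, dtype=projection.dtype)
    return np.stack([projection, w_chan, h_chan], axis=0)

# Example: a 256x256 projection acquired with a hypothetical 250x200 mm FOM.
x = add_fom_channels(np.random.rand(256, 256).astype(np.float32), 250.0, 200.0)
print(x.shape)  # (3, 256, 256)
```

Because the extra channels are constant over the image plane, every convolutional layer in the encoder can condition its features on the FOM size without any change to the network architecture beyond the input channel count.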
Cone-beam computed tomography (CBCT) has become a vital imaging technique across medical fields, but scatter artifacts remain a major limitation of CBCT scanning. The challenge is exacerbated by the use of large flat-panel 2D detectors: the scatter-to-primary ratio increases significantly with the size of the field of view (FOV) being scanned. Several deep learning methods, particularly U-Net architectures, have shown promising capabilities in estimating scatter directly from CBCT projections. However, the influence of varying FOV sizes on these models remains unexplored. A single neural network that can estimate scatter for projections of varying FOV would be of significant value for real clinical applications. This study trains and evaluates a U-Net on a simulated dataset with varying FOV sizes. We further propose a new method (Aux-Net) that provides auxiliary information, such as the FOV size, to the U-Net encoder. We validate our method on 30 different FOV sizes and compare it with the baseline U-Net. Our study demonstrates that providing auxiliary information to the network enhances the generalization capability of the U-Net. These findings suggest that the approach outperforms the baseline U-Net, offering a significant step towards practical application in clinical settings where CBCT systems scan a wide range of FOVs.