The advent of Cone Beam Computed Tomography (CBCT) in the late 20th century marked a significant leap forward, offering dental imaging a more accessible 3D alternative to conventional multi-slice CT. Despite these advances, CBCT often fails to capture the entire dentition in a single image, so panoramic radiography is frequently needed in addition. This combined approach, while comprehensive, increases patient radiation exposure and scan time, posing significant drawbacks for patient comfort and clinical efficiency. Synthetic panoramic radiography, in which a 2D panoramic image is derived from a 3D computed tomography scan, has emerged as a potential solution. However, it has a critical limitation: the synthesized images are of lower resolution than conventional panoramic X-rays, which can lead to diagnostic errors. To address this issue, our research employs an unsupervised learning approach. Specifically, we use a Cycle Generative Adversarial Network (CycleGAN) to perform super-resolution on the synthetic images. This eliminates the need for paired low- and high-resolution images, which are scarce and difficult to collect, thereby overcoming a significant hurdle of supervised learning approaches in the medical field. Our method demonstrates a significant improvement over conventional techniques, yielding sharper line profiles and higher signal-to-noise and contrast-to-noise ratios, both against the original synthetic images and against versions processed with Gaussian filters. The results demonstrate that our generative model improves the resolution and quality of dental images.
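To illustrate the unpaired-training idea behind CycleGAN, the sketch below shows the cycle-consistency term that makes paired low/high-resolution images unnecessary. This is a minimal, hypothetical illustration, not the paper's implementation: the generator names `G` (low-to-high resolution) and `F` (high-to-low) follow common CycleGAN convention, and simple invertible affine maps stand in for the actual convolutional networks.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators (hypothetical names):
# G maps low-resolution (LR) images toward the HR domain,
# F maps HR images back toward the LR domain.
# In the real method these would be trained convolutional networks.
def G(x):
    return 2.0 * x + 1.0      # LR -> HR (illustrative affine map)

def F(y):
    return (y - 1.0) / 2.0    # HR -> LR (approximate inverse of G)

def cycle_consistency_loss(x_lr, y_hr):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should
    recover y, even though x and y are unpaired batches."""
    forward = np.abs(F(G(x_lr)) - x_lr).mean()
    backward = np.abs(G(F(y_hr)) - y_hr).mean()
    return forward + backward

x = np.random.rand(4, 64, 64)  # unpaired batch of synthetic LR patches
y = np.random.rand(4, 64, 64)  # unpaired batch of HR patches
loss = cycle_consistency_loss(x, y)
print(loss)  # near zero here, since F was chosen as the inverse of G
```

In full CycleGAN training this cycle term is added to two adversarial losses (one discriminator per domain), which is what pushes `G`'s outputs toward the high-resolution image distribution without any pixel-aligned supervision.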
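The evaluation metrics mentioned above can be made concrete with region-of-interest (ROI) statistics. The snippet below uses one common convention for signal-to-noise ratio (mean over standard deviation within an ROI) and contrast-to-noise ratio (absolute mean difference between two ROIs over background noise); the exact ROI definitions here, and the tissue names, are illustrative assumptions rather than the paper's protocol.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean intensity / intensity std."""
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two ROIs, normalised by the
    standard deviation of a background (noise-only) region."""
    return abs(roi_a.mean() - roi_b.mean()) / background.std()

# Synthetic ROIs with plausible grey levels (illustrative values only).
rng = np.random.default_rng(0)
enamel = rng.normal(200.0, 5.0, size=(32, 32))  # bright structure
dentin = rng.normal(120.0, 5.0, size=(32, 32))  # darker structure
air = rng.normal(10.0, 5.0, size=(32, 32))      # background region

print(f"SNR(enamel) = {snr(enamel):.1f}")
print(f"CNR(enamel vs dentin) = {cnr(enamel, dentin, air):.1f}")
```

A super-resolved image with less noise (smaller ROI standard deviations) and better-preserved intensity differences scores higher on both metrics, which is the sense in which the proposed method outperforms the original synthetic and Gaussian-filtered images.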