Biomedical photoacoustic tomography, which can provide high-resolution 3D soft tissue images based on optical absorption, has advanced to the stage at which translation from the laboratory to clinical settings is becoming possible. The need for rapid image formation and the practical restrictions on data acquisition that arise from the constraints of a clinical workflow are presenting new image reconstruction challenges. There are many classical approaches to image reconstruction, but ameliorating the effects of incomplete or imperfect data through the incorporation of accurate priors is challenging and leads to slow algorithms. Recently, the application of deep learning (DL), or deep neural networks, to this problem has received a great deal of attention. We review the literature on learned image reconstruction, summarize the current trends, and explain how these approaches fit within, and to some extent have arisen from, a framework that encompasses classical reconstruction methods. In particular, we show how these techniques can be understood from a Bayesian perspective, providing useful insights. We also provide a concise tutorial demonstration of three prototypical approaches to learned image reconstruction. The code and data sets for these demonstrations are available to researchers. It is anticipated that it is in in vivo applications—where data may be sparse, fast imaging critical, and priors difficult to construct by hand—that DL will have the most impact. With this in mind, we conclude with some indications of possible future research directions.
KEYWORDS: 3D modeling, Tissues, 3D image processing, Tissue optics, Image segmentation, Data modeling, In vivo imaging, Network architectures, Absorption, Photoacoustic imaging
Significance: Two-dimensional (2-D) fully convolutional neural networks have been shown to be capable of producing maps of sO2 from 2-D simulated images of simple tissue models. However, their potential to produce accurate estimates in vivo is uncertain, as they are limited by the 2-D nature of the training data when the problem is inherently three-dimensional (3-D), and they have not been tested with realistic images.
Aim: To demonstrate the capability of deep neural networks to process whole 3-D images and output 3-D maps of vascular sO2 from realistic tissue models/images.
Approach: Two separate fully convolutional neural networks were trained to produce 3-D maps of vascular blood oxygen saturation and vessel positions from multiwavelength simulated images of tissue models.
Results: The mean of the absolute difference between the true mean vessel sO2 and the network output for 40 examples was 4.4% and the standard deviation was 4.5%.
Conclusions: 3-D fully convolutional networks were shown to be capable of producing accurate sO2 maps using the full extent of spatial information contained within 3-D images generated under conditions mimicking real imaging scenarios. We demonstrate that networks can cope with some of the confounding effects present in real images, such as limited-view artifacts, and have the potential to produce accurate estimates in vivo.
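The core operation that lets such a network use the full 3-D spatial context is the 3-D convolution applied repeatedly, with learned kernels, at every layer. The following is a minimal numpy sketch of that single operation, not the authors' network: the naive `conv3d` function, the volume size, and the averaging kernel are all illustrative assumptions, and a real fully convolutional network would stack many such layers with learned weights and multiwavelength input channels.

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive 'valid' 3-D convolution: the basic operation a fully
    convolutional network repeats (with learned kernels) at each layer."""
    kz, ky, kx = kernel.shape
    z, y, x = volume.shape
    out = np.zeros((z - kz + 1, y - ky + 1, x - kx + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + kz, j:j + ky, k:k + kx] * kernel)
    return out

# A single 8x8x8 volume and a uniform averaging kernel stand in for a
# multiwavelength image stack and a learned filter; the output is itself
# a (slightly smaller) 3-D map, as in a fully convolutional architecture.
vol = np.random.default_rng(1).random((8, 8, 8))
out = conv3d(vol, np.full((3, 3, 3), 1 / 27))
print(out.shape)  # each 3x3x3 neighborhood contributes one output voxel
```

Because every layer is convolutional, the network's output is a spatial map of the same dimensionality as its input, which is what allows voxel-wise sO2 estimates over whole volumes.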
KEYWORDS: Optical flow, Computed tomography, Reconstruction algorithms, Bone, Data modeling, 3D applications, In vivo imaging, Magnetic resonance imaging
The foot and ankle is a complex structure consisting of 28 bones and 30 joints that changes from being fully mobile when the foot is positioned on the floor to a rigid close-packed position during propulsion, such as when running or jumping. An understanding of this complex structure has largely been derived from cadaveric studies. In vivo studies have largely relied on skin surface markers and multi-camera systems that are unable to differentiate small motions between the bones of the foot. MRI- and CT-based studies have struggled to interpret functional weight-bearing motion, as imaging is largely static and non-load-bearing. Arthritic diseases of the foot and ankle are treated either by fusion of the joints to remove motion, or by joint replacement to retain motion. Until a better understanding of the biomechanics of these joints can be achieved, the choice between these treatment options remains difficult to make on an objective basis.
There are occasions, perhaps due to hardware constraints, or to speed up data acquisition, when it is helpful to be able to reconstruct a photoacoustic image from an under-sampled or incomplete data set. Here, we will show how Deep Learning can be used to improve image reconstruction in such cases. Deep Learning is a type of machine learning in which a multi-layered neural network is trained from a set of examples to perform a task. Convolutional Neural Networks (CNNs), a type of deep neural network in which one or more layers perform convolutions, have seen spectacular success in recent years in tasks as diverse as image classification, language processing, and game playing. In this work, a series of CNNs were trained to perform the steps of an iterative, gradient-based image reconstruction algorithm from under-sampled data. This has two advantages: first, the iterative reconstruction is accelerated by learning more efficient updates for each iterate; second, the CNNs effectively learn a prior from the training data set, meaning that it is not necessary to make potentially unrealistic regularising assumptions about, for instance, image sparsity or smoothness. In addition, we show an example in which the CNNs learn to remove artifacts that arise when a slow but accurate acoustic model is replaced by a fast but approximate model. Reconstructions from simulated as well as in vivo data will be shown.
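The iterative scheme described above can be sketched in a few lines. This is a toy numpy illustration under stated assumptions, not the authors' method: the forward operator is a random wide matrix standing in for the acoustic model, and `learned_update` is a plain damped gradient step standing in for the trained CNN that, in the learned scheme, would produce each iterate's update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy under-sampled linear forward model y = A x. A is a random wide
# matrix (more unknowns than measurements), a stand-in for the acoustic
# operator in an under-sampled acquisition.
n, m = 64, 24
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[10:20] = 1.0          # simple piecewise-constant "image"
y = A @ x_true               # under-sampled data

def learned_update(x, grad, step=0.1):
    """Placeholder for the trained CNN G_theta. Here it is just a damped
    gradient step; in the learned scheme each update would be produced by
    a network trained on example reconstructions, implicitly encoding a
    prior learned from the training set."""
    return x - step * grad

x = np.zeros(n)
for _ in range(100):
    grad = A.T @ (A @ x - y)          # gradient of the data-fit term
    x = learned_update(x, grad)       # learned (here: plain) update

print(np.linalg.norm(A @ x - y))      # data residual shrinks over iterations
```

Replacing the fixed gradient step with a trained network is what yields both advantages mentioned above: fewer iterations are needed, and the regulariser is learned from data rather than imposed by hand.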