We present a full-waveform inversion (FWI) of an in-vivo data set acquired with a transmission-reflection optoacoustic ultrasound imaging platform, covering a cross-sectional slice through a mouse. FWI is a high-resolution reconstruction method that provides quantitative images of tissue properties such as the speed of sound. As an iterative data-fitting procedure, FWI relies on the ability to accurately predict the physics of wave propagation in heterogeneous media in order to account for the non-linear relationship between the ultrasonic wavefield and the tissue properties. A key prerequisite for accurate numerical prediction of the ultrasonic field is precise knowledge of the source characteristics. For realistic problems, however, the source-time function is generally unknown, which necessitates an auxiliary inversion that recovers the time series for each transducer. This study presents an updated sound-speed reconstruction of a cross-section through a mouse using source wavelets that are inverted individually for each transducer. These source wavelets were estimated from a set of observed data by applying a source-wavelet correction filter, which is equivalent to a water-level deconvolution. Compared to previous results, the spatial resolution of anatomical features such as the vertebral column is increased whilst artefacts are suppressed.
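The abstract does not spell out the deconvolution step; the following is a minimal sketch of a water-level deconvolution, assuming per-transducer observed and predicted (synthetic) traces and a water level specified relative to the peak power of the predicted spectrum. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def water_level_deconvolution(observed, predicted, water_level=0.01):
    """Estimate a source wavelet by deconvolving a predicted (synthetic)
    trace from an observed trace in the frequency domain. The denominator
    power spectrum is clipped from below (the "water level") to stabilise
    the division where the predicted trace carries little energy."""
    n = len(observed)
    obs_f = np.fft.rfft(observed, n)
    pre_f = np.fft.rfft(predicted, n)

    power = np.abs(pre_f) ** 2
    floor = water_level * power.max()      # water level relative to peak power
    stabilised = np.maximum(power, floor)  # never divide by less than the floor

    wavelet_f = obs_f * np.conj(pre_f) / stabilised
    return np.fft.irfft(wavelet_f, n)

# Illustrative per-transducer use (d_obs, d_syn are hypothetical arrays of traces):
# wavelets = [water_level_deconvolution(d_obs[i], d_syn[i]) for i in range(len(d_obs))]
```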
Validating processing algorithms for photoacoustic images is complex due to the gap between simulated and experimental data. To address this challenge, we present a multi-device dataset of well-characterised phantoms and investigate the simulation gap using a supervised calibration of the forward model. We use 15 phantoms to calibrate the forward model and systematically compare simulated and experimental data for the remaining 15 phantoms. Our results highlight the importance of the device geometry, impulse response, and noise for accurate simulation. By reducing the simulation gap and providing an open dataset, our work will contribute to advancing data-driven photoacoustic image processing techniques.
Optoacoustic tomography is typically implemented with bulky solid-state lasers delivering per-pulse energies in the millijoule range. Light-emitting diodes (LEDs) represent a cost-effective and portable alternative for signal excitation, further offering excellent pulse-to-pulse stability. Herein, we describe a full-view LED-based optoacoustic tomography (FLOAT) system for deep-tissue in vivo imaging. A custom-made electronic unit driving a stacked array of LEDs attains stable light pulses with a total per-pulse energy of 0.48 mJ and a 100 ns pulse width. The LED array was arranged in a circular configuration and integrated into a full-ring ultrasound array, enabling full-view tomographic imaging in cross-sectional (2D) geometry. As a proof of concept, we scanned the medial phalanx of the index finger without extrinsic administration of a contrast agent. We anticipate that this compact, affordable, and versatile illumination technology will facilitate the dissemination of optoacoustic technology in resource-limited settings.
Optoacoustic tomography (OAT) has made major advances towards clinical diagnostics in recent years. One major obstacle inhibiting the establishment of this non-invasive, non-ionizing technique as a routine diagnostic tool is the unfamiliarity of clinicians with OAT images. Several works have already been dedicated to combining optoacoustic and ultrasound imaging (OPUS). However, thus far, mostly dual-mode 1D arrays have been employed. Not only are the resulting 2D OAT images subject to out-of-plane artefacts, but since the transducer specifications are typically optimized for OA imaging, the US image quality has tended to be comparatively poor. Here, we present a concave spherical detector with dedicated OAT and US transducers, whose optimized design boasts excellent image resolution for both modalities. Real-time OPUS acquisitions were performed on healthy human subjects in several regions, including the neck and forearm. 3D OAT volumes were supplemented with 2D US cross-sections, enabling the complementary identification of key anatomical structures. The contextual structural information offered by US allows for further exploitation of the rich OA molecular contrast. This showcase demonstration is an important step towards establishing OAT as a clinical point-of-care device.
Non-alcoholic fatty liver disease (NAFLD) starts with the accumulation of lipids in liver tissue before progressing to liver cirrhosis and hepatocellular carcinoma. Transmission-reflection optoacoustic ultrasound (TROPUS) can simultaneously interrogate biological tissues with three ultrasound-based imaging modalities exploiting different contrast mechanisms. We propose TROPUS imaging for the assessment of NAFLD in vivo and ex vivo. Multispectral optoacoustic tomography resolves the oxy- and deoxy-hemoglobin, lipid, and melanin content of the tissues. Reflection ultrasound computed tomography facilitates segmentation of the liver by providing anatomical information. Transmission ultrasound computed tomography quantifies changes in the speed of sound due to lipid accumulation.
High-intensity focused ultrasound (HIFU) capitalizes on both heating and cavitation effects for the treatment of several conditions. Optoacoustic (OA) imaging has previously been shown to provide high sensitivity to temperature changes and coagulation in HIFU-ablated tissues. In this work, we demonstrate the feasibility of real-time monitoring of heating and cavitation with a hybrid optoacoustic-ultrasound (OPUS) imaging system based on a multi-segment transducer array. The OPUS results in experiments with liver tissues ex vivo and a mouse post mortem were validated with thermal camera measurements and with cryo-sections of the mouse. The suggested approach thus holds promise for clinical translation.
Multi-spectral optoacoustic tomography (MSOT) combines the rich contrast of optical imaging with the high resolution of ultrasound, and has become an attractive biomedical research tool over the last decade. Aligning MSOT images with the anatomical maps provided by magnetic resonance imaging (MRI) can potentially enhance the interpretation of the optoacoustic signal, which mainly reflects molecular and functional information. Developing an automated algorithm for MSOT-MRI image registration is therefore crucial. Existing MSOT-MRI registration algorithms have mostly relied on manual segmentation, which depends on user experience. Herein, we developed a fully automated algorithm for MSOT-MRI registration based on deep learning (DL). The workflow consists of DL-based segmentation followed by image transformation. We have experimentally demonstrated the accuracy and computational efficiency of the method, paving the way towards high-throughput MSOT data analysis in the near future.
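The abstract leaves the transformation model unspecified; as one illustration of the segmentation-then-transformation idea, a rigid (rotation plus translation) alignment can be estimated from the centroids and principal axes of the two segmentation masks. The helper functions below (mask_pose, rigid_transform) are hypothetical sketches and stand in for whatever transform estimation the paper actually uses.

```python
import numpy as np

def mask_pose(mask):
    """Centroid and principal-axis angle of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]          # dominant orientation
    return centroid, np.arctan2(axis[1], axis[0])  # note: 180-degree ambiguity

def rigid_transform(moving_mask, fixed_mask):
    """Rotation R and translation t mapping the moving mask onto the fixed one."""
    c_m, a_m = mask_pose(moving_mask)
    c_f, a_f = mask_pose(fixed_mask)
    theta = a_f - a_m
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = c_f - R @ c_m
    return R, t
```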
Optoacoustic images are often afflicted with distortions and artifacts stemming from system limitations, such as limited-view tomographic data. We developed a convolutional neural network (CNN) approach for optoacoustic image quality enhancement that combines training on both time-resolved signals and tomographic reconstructions. Reference human finger data for training the CNN were recorded using a full-ring array system with optimal tomographic coverage. The reconstructions were further refined with a dedicated algorithm that minimizes acoustic reflection artifacts induced by acoustically mismatched structures, such as bones. The combined methodology is shown to outperform other CNN-based methods that operate solely on image-domain data.
Ultrasound (US) and optoacoustic (OA) imaging provide complementary information for quantitative analysis of the tumor microenvironment. Herein, we demonstrate the unique capabilities of transmission-reflection optoacoustic ultrasound (TROPUS) for characterizing breast cancer in tumor-bearing mice. For this, four mice bearing orthotopic tumors of different sizes were scanned with a full-ring ultrasound transducer array to simultaneously render pulse-echo US images, speed-of-sound (SoS) maps, and OA images. Tumor size, vascular density, and elastic parameters were further quantified from the images. Our results pave the way toward clinical translation of hybrid TROPUS imaging for tumor detection and characterization.
Multispectral optoacoustic tomography (MSOT) offers the unique capability to map the distribution of spectrally distinctive endogenous and exogenous substances in heterogeneous biological tissues by exciting the sample at multiple wavelengths and detecting the optoacoustically induced ultrasound waves. This powerful functional and molecular imaging capability can greatly benefit from hybridization with pulse-echo ultrasound (US), which provides additional information on tissue anatomy and blood flow. However, speed-of-sound variations and acoustic mismatches in the imaged object generally lead to errors in the coregistration of the compounded images and to a loss of spatial resolution in both imaging modalities. The spatially and wavelength-dependent light fluence attenuation further limits the quantitative capabilities of MSOT. Proper segmentation of different regions and assignment of the corresponding acoustic and optical properties then becomes essential for maximizing the performance of hybrid optoacoustic and ultrasound (OPUS) imaging. In particular, accurate segmentation of the sample boundary can significantly improve the rendered images. Herein, we propose an automatic segmentation method based on a convolutional neural network (CNN) for segmenting the mouse boundary in a pre-clinical OPUS system. The experimental performance of the method, characterized with the Dice coefficient between the network output and the ground-truth (manually segmented) images, is shown to be superior to that of a state-of-the-art active contour segmentation method on a series of two-dimensional (cross-sectional) OPUS images of the mouse brain, liver, and kidney regions.
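For reference, the Dice coefficient used to score the segmentation is the standard overlap measure 2|A∩B| / (|A| + |B|); a minimal NumPy sketch follows (function and variable names are ours, not the paper's).

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks,
    ranging from 0 (no overlap) to 1 (identical masks)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```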