Real-time monitoring of functional tissue parameters, such as local blood oxygenation, based on optical imaging could provide groundbreaking advances in the diagnosis and interventional therapy of various diseases. Although photoacoustic (PA) imaging is a modality with great potential to measure optical absorption deep inside tissue, quantification of the measurements remains a major challenge. We introduce the first machine learning-based approach to quantitative PA imaging (qPAI), which relies on learning the fluence in a voxel to deduce the corresponding optical absorption. The method encodes relevant information of the measured signal and the characteristics of the imaging system in voxel-based feature vectors, which allow the generation of thousands of training samples from a single simulated PA image. Comprehensive in silico experiments suggest that context-encoding qPAI enables highly accurate and robust quantification of the local fluence, and thereby of the optical absorption, from PA images.
1. Introduction

Photoacoustic (PA) imaging is an imaging concept with high potential for real-time monitoring of functional tissue parameters, such as blood oxygenation, deep inside tissue. It measures the acoustic waves arising from the stress-confined thermal response to optical absorption in tissue.1 More specifically, a PA signal at a location $v$ is a pressure response $p_0(v)$ to the locally absorbed energy, which, in turn, is the product of the absorption coefficient $\mu_a(v)$, the Grueneisen coefficient $\Gamma(v)$, and the light fluence $\phi(v)$:

$p_0(v) = \Gamma(v)\,\mu_a(v)\,\phi(v).$  (1)

Given that the local light fluence not only depends on the imaging setup but is also highly dependent on the optical properties of the surrounding tissue, quantification of optical absorption based on the measured PA signal is a major challenge.2,3

So far, the field of quantitative PA imaging (qPAI) has focused on model-based iterative optimization approaches to infer optical tissue parameters from measured signals (cf. e.g., Refs. 3–12). Although these methods are well suited for tomographic devices with high image quality (cf. e.g., Refs. 13–15) as used in small animal imaging, translational PA research with clinical ultrasound transducers or similar handheld devices (cf. e.g., Refs. 1 and 16–22) focuses on qualitative image analysis.

As an initial step toward clinical qPAI, we introduce a machine learning-based approach to quantifying PA measurements. The approach features high robustness to noise while being computationally efficient. In contrast to all other approaches proposed to date, our method relies on learning the light fluence on a voxel level to deduce the corresponding optical absorption. Our core contribution is the development of a voxel-based context image (CI) that encodes relevant information of the measured signal voxel together with characteristics of the imaging system in a single feature vector. This enables us to tackle the challenge of fluence estimation as a machine learning problem that we can solve in a fast and robust manner. Comprehensive in silico experiments indicate high accuracy, speed, and robustness of the proposed context encoding (CE)-qPAI approach. This is demonstrated for the estimation of (1) fluence and optical absorption from PA images and (2) blood oxygen saturation as an example of functional imaging using multispectral PA images.
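For illustration, the following is a minimal sketch (in Python, the language used for parts of our implementation) of how an estimated fluence map can be used to invert Eq. (1); the constant Grueneisen coefficient and all names are illustrative assumptions rather than our actual implementation:

```python
import numpy as np

def recover_absorption(p0, phi_est, gamma=1.0):
    """Invert Eq. (1), p0 = Gamma * mu_a * phi, for the absorption mu_a.

    p0      -- measured initial pressure (PA signal) per voxel, a.u.
    phi_est -- estimated light fluence per voxel
    gamma   -- Grueneisen coefficient (assumed constant here)
    """
    eps = 1e-12  # guard against division by zero in fluence-free voxels
    return p0 / (gamma * np.maximum(phi_est, eps))

# toy 2 x 2 slice: a signal and an assumed fluence estimate
p0 = np.array([[0.10, 0.50], [0.40, 2.00]])
phi_est = np.array([[1.00, 0.80], [0.60, 0.50]])
mu_a = recover_absorption(p0, phi_est)  # optical absorption per voxel
```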
2. Materials and Methods

A common challenge when applying machine learning methods to biomedical imaging problems is the lack of labeled training data. In the context of PAI, a major issue is the strong dependence of the signal on the surrounding tissue. This renders separation of voxels from their context (as in surface optical imaging23) impossible or highly inaccurate. Simulation of a sufficient number of training volumes covering a large range of tissue parameter variations, on the other hand, is computationally not feasible given the generally long runtime of Monte Carlo methods, which are currently the gold standard for the simulation of light transport in tissue.11 Inspired by an approach to shape matching in which the shape context is encoded in a so-called spin image specifically for each node in a mesh,24 we encode the voxel-specific context in so-called CIs. This allows us to train machine learning algorithms on a voxel level rather than an image level, and we thus require orders of magnitude fewer simulated training volumes.

CIs encode relevant information of the measured signal as well as characteristics of the imaging system, which are represented by so-called voxel-specific fluence contribution maps (FCMs). The CIs serve as feature vectors for a machine learning algorithm that is trained to estimate the fluence in a voxel. The entire quantification method is shown in Fig. 1, which serves as an overview; details are given in the following sections.

2.1. Fluence Contribution Map

An important prerequisite for computing the CI of a voxel $v$ is the computation of the corresponding FCM, referred to as $\mathrm{FCM}_v$. $\mathrm{FCM}_v(v')$ represents a measure for the likelihood that a photon arriving in voxel $v$ has passed voxel $v'$. In other words, an FCM reflects the impact of a PA signal in $v'$ on the drop in fluence in voxel $v$. An illustration of an FCM corresponding to a typical handheld PA setup is shown in Fig. 2. The $\mathrm{FCM}_v$ depends on how the PA excitation light pulse propagates through homogeneous tissue to arrive in $v$ for a chosen hardware setup. The FCMs per imaging plane are generated once for each new hardware setup and each voxel in the imaging plane. In this first implementation of the CE-qPAI concept, FCMs are simulated with the same resolution as the input data, assuming a constant background absorption coefficient and a constant reduced scattering coefficient.25 The number of photons is varied to achieve a consistent photon count in the target voxel. The FCMs are generated with the widely used Monte Carlo simulation tool mcxyz.26 We integrated mcxyz into the open-source Medical Imaging Interaction Toolkit (MITK)27 as mitkMcxyz and modified it to work in a multithreaded environment. Sample FCMs for three different voxels are shown in Fig. 2, which also shows the generation of CIs for these three example voxels.

2.2. Context Image

The CI for a voxel $v$ in a PA volume is essentially a two-dimensional (2-D) histogram composed of (1) the measured PA signal $S(v')$ in the tissue surrounding $v$ and (2) the corresponding fluence contributions $\mathrm{FCM}_v(v')$. More specifically, it is constructed from the tuples $(S(v'), \mathrm{FCM}_v(v'))$ for all voxels $v'$ whose fluence contribution exceeds a fixed threshold. This constraint is set to exclude voxels with a negligible contribution to the fluence in $v$. The tuples are arranged by magnitude of signal and fluence contribution into a 2-D histogram and thereby encode the relevant context information in a compact form. In our prototype implementation of the CE-qPAI concept, the fluence contribution and signal axes of the histogram are discretized into 12 bins each and scaled logarithmically to better represent the predominantly low signal and fluence contribution components. The ranges of the axes are fixed; signals and fluence contributions larger than the upper boundary are included in the highest bin, whereas smaller signals and fluence contributions are discarded. Figure 2 shows the generation of CIs from FCMs and PA signals. Labeled CIs are used for training a regressor that can later estimate fluence, which, in turn, is used to reconstruct absorption [Eq. (1)].
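To make this construction concrete, the sketch below assembles a CI from a signal volume and an FCM given as numpy arrays. Only the bin count of 12 follows the prototype described above; the axis ranges, the contribution threshold, and all names are illustrative placeholders:

```python
import numpy as np

def context_image(signal, fcm, n_bins=12, fcm_thresh=1e-6,
                  sig_max=1.0, fcm_max=1.0):
    """Assemble a context image (CI) for one target voxel.

    signal -- PA signal volume (a.u.), same shape as fcm
    fcm    -- fluence contribution map of the target voxel
    The CI is a 2-D histogram over tuples (S(v'), FCM_v(v')) for all
    voxels v' whose fluence contribution exceeds fcm_thresh. Bin edges
    are spaced logarithmically to resolve the predominantly low signal
    and fluence contribution components; values above the upper
    boundary are clipped into the highest bin, values below the lowest
    edge are dropped by histogram2d.
    """
    mask = fcm > fcm_thresh  # exclude negligible contributors
    sig_edges = np.logspace(np.log10(fcm_thresh), np.log10(sig_max), n_bins + 1)
    fcm_edges = np.logspace(np.log10(fcm_thresh), np.log10(fcm_max), n_bins + 1)
    ci, _, _ = np.histogram2d(np.minimum(signal[mask], sig_max),
                              np.minimum(fcm[mask], fcm_max),
                              bins=(sig_edges, fcm_edges))
    return ci.ravel()  # the flattened CI serves as the feature vector

# toy example with random stand-ins for signal and FCM
rng = np.random.RandomState(0)
sig = rng.rand(32, 32)
fcm = rng.rand(32, 32) * np.exp(-rng.rand(32, 32) * 5)
features = context_image(sig, fcm)
```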
2.3. Machine Learning-Based Regression for Fluence Estimation

During the training phase, a regressor is presented with tuples of CIs and corresponding ground truth fluence values for each voxel in a set of PAI volumes. To estimate the optical absorption in a voxel of a previously unseen image, the voxel-specific CI is generated and used to infer the fluence with the trained algorithm. In our prototype implementation of the CE-qPAI method, we use a random forest regressor.

A random forest regressor is an ensemble of decision trees in which the weighted vote of the individual trees is used as the estimate.28 To train the random forest, all labeled CIs of the respective training set need to be evaluated at once. With voxel-based CIs, thousands of training samples can be extracted from a single slice of a simulated PA training volume. Ground truth training data generation is performed using a dedicated software plugin integrated into MITK, with the fluence simulated by mitkMcxyz. It should be noted that the simulated images consist mainly of background voxels rather than of the vessel structures that are our regions of interest (ROI). This leads to an imbalance in the training set. To avoid poor estimation for underrepresented classes,29 we undersample background voxels in the training process to ensure a 1:1 ROI/background sample ratio. CIs are used as feature vectors and labeled with the optical property to be estimated (e.g., fluence or oxygenation). The parameters of the random forest are set to the defaults of sklearn 0.18 using Python 2.7, except for the tree count; the parameters were chosen based on a grid search on a separate dataset not used in the experiments of this work.
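A minimal sketch of this training setup, using sklearn's RandomForestRegressor on synthetic stand-in data, is given below; the tree count, random seed, and data dimensions are illustrative assumptions rather than our tuned values:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(42)

def undersample_background(X, y, is_roi):
    """Balance samples to a 1:1 ROI/background ratio by randomly
    undersampling the overrepresented background voxels."""
    roi_idx = np.flatnonzero(is_roi)
    bg_idx = np.flatnonzero(~is_roi)
    bg_keep = rng.choice(bg_idx, size=roi_idx.size, replace=False)
    keep = np.concatenate([roi_idx, bg_keep])
    return X[keep], y[keep]

# synthetic stand-ins: 1000 CI feature vectors (12 x 12 bins = 144
# features each) with fluence labels and a vessel/background mask
X = rng.rand(1000, 144)
y = rng.rand(1000)
is_roi = rng.rand(1000) < 0.1  # ~10% vessel voxels, mimicking the imbalance

X_bal, y_bal = undersample_background(X, y, is_roi)
forest = RandomForestRegressor(n_estimators=100, n_jobs=-1)  # tree count is a placeholder
forest.fit(X_bal, y_bal)
phi_est = forest.predict(X)  # per-voxel fluence estimates
```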
2.4. Hardware Setup

We assume a typical linear probe hardware setup,30 in which the ultrasound detector array and the light source move together, such that the illumination geometry is the same for each recorded image. This is also the case for other typical tomographic devices.31,32 All simulations were performed on high-end CPUs (Intel i7-5960X).

3. Experiments and Results

In the following validation experiments, we quantify the fluence up to an imaging depth of 28 mm in unseen test images for each dataset. With our implementation and setup, each image comprises 3008 training samples, which results in an average simulation time of about 50 ms per training sample. This allows us to generate enough training samples in a feasible amount of time to train a regressor that enables fluence estimation in a previously unseen image in near real time; the measured computational time for quantifying fluence in a single image slice is below one second (cf. Sec. 4). In the following, we present the experimental design and results of the validation of CE-qPAI. We first validate the estimation of absorption from PAI volumes acquired at a fixed wavelength and then the estimation of blood oxygenation from multispectral PAI volumes.

3.1. Monospectral Absorption Estimation

3.1.1. Experiment

To assess the performance of CE-qPAI on PA images of blood vessels, we designed six experimental datasets (DS) of varying complexity, as listed in Table 1. With the exception of the most complex dataset, in which all parameters are varied, each of the six experimental DS is composed of 150 training items, 25 validation items, and 25 test items, where each item comprises a three-dimensional (3-D) simulated PA image with 0.6-mm isotropic spacing as well as a corresponding (ground truth) fluence map.

Table 1: The design parameters of the DS. All ranges denote sampling from uniform distributions within the given bounds.

As labels of the generated CIs, we used a fluence correction factor defined with respect to a reference fluence simulated under a homogeneous background tissue assumption. We used five equidistant slices out of each volume, resulting in the generation of a total of 2,256,000, 376,000, and 376,000 CIs per dataset for training, parameter optimization, and testing, respectively. To account for the high complexity of the dataset in which all parameters are varied, we increased the number of training volumes for that set from 150 to 400.

The baseline dataset represents simulations of a transcutaneously scanned, simplified model of a blood vessel with a constant radius (3 mm) as well as constant absorption coefficients (for vessel and background) and a constant reduced scattering coefficient. To approximate partial volume effects, the absorption coefficients in the ground truth images were Gaussian blurred with a sigma of 0.6 mm. Single slices were simulated with a fixed photon count per slice (with a separate fixed count for the respective test and validation sets) and then compounded into a fully scanned volume. Different shapes and poses of the vessel were generated by a random walk in which each step perturbs the vessel direction by a random vector: the step magnitude is a free parameter that is constant within each vessel and varies between vessels within a uniform distribution, whereas the direction perturbation is drawn for each of its components in each step from a uniform distribution. To investigate how variations in geometry and optical properties impact the performance of our method, we designed further experimental DS in which the number of vessels, the radii of the vessels, the optical absorption coefficients within the vessels, the absorption coefficient of the background, as well as all of the above are varied (cf. Table 1). We tested the robustness of CE-qPAI to this range of scenarios without retuning the CI or random forest parameters.

Although most studies assess the performance of a method on the entire image (cf. e.g., Refs. 6, 33, and 34), it must be pointed out that the accuracy of signal quantification is often most relevant in a defined region of interest, such as in vessels or other regions that provide a meaningful PA signal. These are typically also the regions where quantification is particularly challenging, because the strongest signals originate from boundaries with discontinuous tissue properties. To address this important aspect, we validated our method not only on the entire image but also in the ROI, which we define for our DS as those voxels that represent a vessel and at the same time have a contrast-to-noise ratio (CNR) larger than 2, so that only significant signal is included in the ROI. Following Welvaert and Rosseel,35 we define the CNR in a voxel $v$ as

$\mathrm{CNR}(v) = \dfrac{S(v) - \bar{S}_{\mathrm{bg}}}{\sigma_{\mathrm{bg}}},$

where $\bar{S}_{\mathrm{bg}}$ and $\sigma_{\mathrm{bg}}$ are the average and standard deviation of the background signal over a simulated image slice with a constant background absorption coefficient and no other structures. Using such an image without application of a noise model, we determined the simulation-intrinsic background noise (in a.u.).

To investigate the robustness of CE-qPAI to noise, we added the following noise models to each dataset. The noise models consist of an additive Gaussian noise term applied to the signal volumes, followed by a multiplicative white Gaussian noise term, similar to the noise assumptions used in prior work.6,33 We examined three noise levels, with multiplicative components of up to 20% (cf. Sec. 3.1.2), to compare against the simulation-intrinsic noise case. The additive and multiplicative noise components follow an estimation of the noise components of a custom PA system.30
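The sketch below illustrates this noise model and the CNR-based ROI definition; the noise magnitudes and all array names are illustrative assumptions, not the levels estimated for our system:

```python
import numpy as np

rng = np.random.RandomState(0)

def add_noise(signal, sigma_add=0.05, sigma_mult=0.10):
    """Noise model sketch: additive Gaussian noise on the signal
    volume followed by multiplicative white Gaussian noise. The
    sigma values are placeholders for the system-specific estimates."""
    noisy = signal + rng.normal(0.0, sigma_add, signal.shape)
    return noisy * rng.normal(1.0, sigma_mult, signal.shape)

def roi_mask(signal, vessel_mask, mean_bg, std_bg, cnr_min=2.0):
    """ROI definition: vessel voxels whose contrast-to-noise ratio
    CNR(v) = (S(v) - mean_bg) / std_bg exceeds cnr_min, with mean_bg
    and std_bg taken from a pure-background reference slice."""
    cnr = (signal - mean_bg) / std_bg
    return vessel_mask & (cnr > cnr_min)

# toy slice: a square "vessel" in a zero background
signal = np.zeros((16, 16))
signal[6:10, 6:10] = 1.0
vessels = signal > 0
noisy = add_noise(signal)
roi = roi_mask(noisy, vessels, mean_bg=0.0, std_bg=0.05)
```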
For each experimental dataset introduced in Table 1 and each noise set, we applied the following validation procedure separately. Following common research practice, we used the training data subset for training the random forest and the validation data subset to ensure the convergence of the training process as well as to set suitable parameters for the random forest and the ROI, whereas we evaluated the test data subset only to report the final results (as described in Ref. 36). As an error metric, we report the relative fluence estimation error rather than an absorption estimation error, to separate the error made in estimating the fluence with CE-qPAI from errors introduced through simulation-intrinsic or added noise on the signal, which affect the quantification regardless of the fluence estimation.
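As a reference for the statistics reported below (cf. Sec. 3.1.2, where medians with interquartile ranges are used), the following sketch computes the relative fluence estimation error for a set of voxels; the exact normalization is an assumption:

```python
import numpy as np

def relative_fluence_error(phi_est, phi_true):
    """Median and interquartile range (IQR) of the relative fluence
    estimation error, i.e., the absolute deviation of the estimate
    normalized by the ground truth fluence."""
    err = np.abs(phi_est - phi_true) / phi_true
    q25, q50, q75 = np.percentile(err, [25, 50, 75])
    return q50, (q25, q75)

# toy example
phi_true = np.array([1.00, 0.50, 0.25, 0.10])
phi_est = np.array([1.02, 0.48, 0.26, 0.13])
median_err, iqr = relative_fluence_error(phi_est, phi_true)
```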
3.1.2. Results

Figures 3(a)–3(c) show representative examples of the 125 previously unseen simulated test images from the baseline dataset together with their corresponding fluence estimation results. The optical absorption is reconstructed using the fluence estimate. A histogram illustrating the absorption estimation accuracy in the ROI voxels of the baseline dataset is shown in Fig. 3(d) and compared with a static fluence correction approach. Table 2 summarizes the descriptive statistics of the relative fluence estimation errors for the experiments on absorption estimation using single-wavelength PA images. The relative fluence estimation error does not follow a normal distribution due to large outliers, especially in complex DS, which is why we report medians with interquartile ranges (IQR) for all DS. Even for the most complex dataset, with variations of multiple parameters, CE-qPAI yields a median overall relative fluence estimation error below 4%. Errors are higher in the ROI, especially in DS with high variations of absorption.

Table 2: Descriptive statistics of the fluence estimation results: the median and IQR of the relative fluence estimation error for the six validation DS used in the single-wavelength experiments. The median error and IQR are provided (1) for all voxels in the respective test set and (2) for the voxels in the ROI only.

Previously proposed qPAI approaches exhibit substantial drops in estimation performance when dealing with noisy data (cf. e.g., Ref. 37). To remedy this, methods have been proposed to incorporate more accurate noise representations into model-based reconstruction algorithms.33,38 When validating the robustness of CE-qPAI to noise, we found that it yields high accuracy even under unrealistically high noise levels of up to 20% (cf. Fig. 4). Regardless of the noise level applied, the highest median errors occur in the ROIs of DS that are characterized by high absorption and inhomogeneous tissue properties.

3.2. Multispectral Blood Oxygenation Estimation

The concept of CE can be used to estimate not only fluence and absorption but also derived functional parameters such as blood oxygenation. To this end, the absorption estimated in a voxel at multiple wavelengths can be used to resolve oxygenation via linear spectral unmixing. Alternatively, a regressor can be trained directly on CIs labeled with ground truth oxygenation.

3.2.1. Experiment

To investigate the performance of CE-qPAI for blood oxygenation (sO2) estimation, we designed an additional multispectral simulated dataset using the wavelengths 750, 800, and 850 nm. It consists of 240 multispectral training volumes and 11 multispectral test volumes, each featuring homogeneous oxygenation and one vessel with a radius of 2.3 to 4 mm, modeled after a carotid artery.39 A fixed number of photons was used to simulate each image slice at each wavelength. Oxygenation values for the training images were drawn randomly from a uniform distribution. For testing, we simulated 11 multispectral volumes at the three wavelengths and 11 blood oxygenation levels. The optical absorption was adjusted by wavelength and oxygenation, as described by Jacques.25 The hemoglobin concentration was assumed to be that of whole blood.25 The blood volume fraction was set to 0.5% in the background tissue and to 100% in the blood vessels. The reduced scattering coefficient was again set to the same constant value as before. We estimated the oxygenation using three methods: (1) linear spectral unmixing40 of the raw PA signal as a baseline, (2) linear spectral unmixing of the CE-qPAI fluence-corrected signal, and (3) direct estimation of oxygenation from multispectral CIs labeled with ground truth oxygenation, referred to as functional CE-qPAI (fCE-qPAI); a sketch of the unmixing step is given below.
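The following is a minimal sketch of linear spectral unmixing at the three wavelengths used here. The extinction coefficients are approximate literature values included for illustration only and are an assumption; tabulated spectra (e.g., Ref. 25) should be consulted for actual use:

```python
import numpy as np

# Approximate molar extinction coefficients of HbO2 and Hb (cm^-1/M)
# at 750, 800, and 850 nm; illustrative values only.
E = np.array([[518.0, 1405.0],   # 750 nm: [HbO2, Hb]
              [816.0, 762.0],    # 800 nm
              [1058.0, 691.0]])  # 850 nm

def unmix_so2(mu_a):
    """Linear spectral unmixing: solve mu_a = E @ [c_HbO2, c_Hb] in the
    least-squares sense and return sO2 = c_HbO2 / (c_HbO2 + c_Hb)."""
    c, _, _, _ = np.linalg.lstsq(E, mu_a, rcond=None)
    c = np.clip(c, 0.0, None)  # crude non-negativity guard
    return c[0] / max(c.sum(), 1e-12)

# mu_a: fluence-corrected absorption estimates at the three wavelengths
so2 = unmix_so2(np.array([0.75, 0.80, 0.95]))
```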
3.2.2. Results

Estimation of the local blood oxygen saturation (sO2) is one of the main qPAI applications and is only possible with multispectral measurements. The presented approaches were therefore validated together with the baseline method on the multispectral dataset. As shown in Fig. 5(a), the estimation results for both CE-based methods are in very close agreement with the ground truth. In fact, the median absolute oxygenation estimation error was 3.1% with an IQR of (1.1%, 6.4%) for CE-qPAI and 0.8% with an IQR of (0.3%, 1.8%) for the fCE-qPAI adaptation. Furthermore, our methodology outperforms the baseline approach based on linear spectral unmixing of the raw signal (as also compared to in Ref. 15). As an example, Fig. 5(b) shows that linear spectral unmixing of the uncorrected signal in the ROI fails deep inside the ROI, where the fluence varies strongly between wavelengths. To compensate for this effect when comparing the baseline to our method, we validated all methods only on the maximum intensity projection (MIP) along the depth axis (as also used in Ref. 41) in Fig. 5(a).

4. Discussion

This paper addresses one of the most important challenges related to PA imaging, namely the quantification of optical absorption based on the measured signal. In contrast to all other qPAI approaches proposed to date (cf. e.g., Refs. 3–12), our method relies on learning the light fluence in a voxel to deduce the corresponding optical absorption. Comprehensive in silico experiments presented in this manuscript show the high potential of this approach for estimating optical absorption as well as derived functional properties, such as oxygenation, even in the presence of high noise.

Although machine learning methods have recently been applied to PAI-related problems (cf. e.g., Refs. 42–44), these have mainly focused on image reconstruction rather than signal quantification. We attribute this to the fact that in vivo training data generation for machine learning-based qPAI is not at all straightforward given the lack of reference methods for estimating optical absorption in depth. Despite recent developments related to hybrid diffusion approximation and Monte Carlo methods,45 fast generation of in silico training data also remains an unsolved challenge. Note in this context that commonly applied methods of data augmentation (i.e., methods that may be used to automatically enlarge training data sets, as discussed in Ref. 46) cannot be applied to PA images due to the interdependence of fluence and signal. With our contribution, we have addressed this challenge by introducing the concept of CIs, which allows us to generate one training case from each voxel rather than from each image.

As an important contribution with high potential impact, we adapted CE-qPAI to estimate functional tissue properties from multiwavelength data. Both variants, linear spectral unmixing of the fluence-corrected signal as well as direct estimation of oxygenation from multiwavelength CIs, yielded accurate results that outperformed a baseline approach based on linear spectral unmixing of the raw PA signal. It should be noted that linear spectral unmixing of the signal for sO2 estimation is usually performed on a wider range of wavelengths to increase accuracy. However, even such an increase in the number of wavelengths cannot fully account for nonlinear fluence effects.3 Combined with the separately established robustness to noise, multiwavelength applications of CE-qPAI are therefore very promising.
In our first prototype implementation of CE-qPAI, we used random forest regressors with standard parameters. It should be noted, however, that fluence estimation from the proposed CI can, in principle, be performed with any other machine learning method in a straightforward manner. Initial experiments suggest that even better performance can be achieved with convolutional neural networks.47

By relating the measured signals in the neighborhood of a voxel $v$ to the corresponding fluence contributions, we relate the absorbed energy in each neighboring voxel $v'$ to the fluence contribution of $v'$ to $v$. In this context, it has to be noted that the fluence contribution is only an approximation of the true likelihood that a photon arriving in $v$ has previously passed $v'$, because the FCM is generated independently of the scene under observation, assuming constant background absorption and scattering. Nevertheless, due to the generally low variance of scattering in tissue, it serves as a reliable input for the proposed machine learning-based quantification.

A limitation of our study can be seen in the fact that we performed the validation in silico. To apply CE-qPAI in vivo, further research will have to be conducted in two main areas. First, we are working on accurately solving the acoustic inverse problem for specific scanners.48 This method will be integrated into the quantification algorithm to enable the quantification of images acquired with common PAI probes such as clinical linear transducers. Second, training data have to be generated as close to reality as possible, considering, for example, imaging artifacts. In contrast to prior work (cf. e.g., Refs. 6, 7, 33, 34, and 49), our initial validation handles the whole range of near-infrared absorption in whole blood at physiological hemoglobin concentrations and demonstrates high robustness to noise. The impact of variations in scattering still needs investigation, although these should be small in the near infrared.

The long-term goal of our work is the transfer of CE-qPAI to clinical data. In this context, the runtime of the algorithm will play an important role. Although our current implementation can estimate absorption on single slices within a second, this might not be sufficient for interventional clinical estimation of whole tissue volumes and at higher resolutions. An efficient GPU implementation of the time-intensive CI generation should enable real-time quantification.

In summary, CE-qPAI is the first machine learning-based approach to the quantification of PA signals. The results of this work suggest that quantitative real-time functional PA imaging deep inside tissue is feasible.

Code and Data Availability

The code for the method as well as the experiments was written in C++ and Python 2.7 and is partially open source, available at https://phabricator.mitk.org/source/mitk.git. Additional code and all raw and processed data generated in this work are available from the corresponding authors upon reasonable request.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

The authors would like to acknowledge support from the European Union through the ERC starting grant COMBIOSCOPY under the New Horizon Framework Programme (grant agreement ERC-2015-StG-637960). We would like to thank the ITCF of the DKFZ for the provision of their computing cluster and C. Feldmann for her support with the figure design.

References
1. L. V. Wang and J. Yao, "A practical guide to photoacoustic tomography in the life sciences," Nat. Methods 13(8), 627–638 (2016). https://doi.org/10.1038/nmeth.3925
2. L. V. Wang and S. Hu, "Photoacoustic tomography: in vivo imaging from organelles to organs," Science 335(6075), 1458–1462 (2012). https://doi.org/10.1126/science.1216210
3. B. T. Cox, J. G. Laufer and P. C. Beard, "The challenges for quantitative photoacoustic imaging," Proc. SPIE 7177, 717713 (2009). https://doi.org/10.1117/12.806788
4. N. Iftimia and H. Jiang, "Quantitative optical image reconstruction of turbid media by use of direct-current measurements," Appl. Opt. 39(28), 5256–5261 (2000). https://doi.org/10.1364/AO.39.005256
5. B. T. Cox et al., "Quantitative photoacoustic imaging: fitting a model of light transport to the initial pressure distribution," Proc. SPIE 5697, 49–55 (2005). https://doi.org/10.1117/12.597190
6. B. T. Cox et al., "Two-dimensional quantitative photoacoustic image reconstruction of absorption distributions in scattering media by use of a simple iterative method," Appl. Opt. 45(8), 1866–1875 (2006). https://doi.org/10.1364/AO.45.001866
7. Z. Yuan and H. Jiang, "Quantitative photoacoustic tomography: recovery of optical absorption coefficient maps of heterogeneous media," Appl. Phys. Lett. 88(23), 231101 (2006). https://doi.org/10.1063/1.2209883
8. J. Laufer et al., "Quantitative spatially resolved measurement of tissue chromophore concentrations using photoacoustic spectroscopy: application to the measurement of blood oxygenation and haemoglobin concentration," Phys. Med. Biol. 52(1), 141–168 (2007). https://doi.org/10.1088/0031-9155/52/1/010
9. E. Malone, B. Cox and S. Arridge, "Multispectral reconstruction methods for quantitative photoacoustic tomography," Proc. SPIE 9708, 970827 (2016). https://doi.org/10.1117/12.2212440
10. M. Haltmeier, L. Neumann and S. Rabanser, "Single-stage reconstruction algorithm for quantitative photoacoustic tomography," Inverse Probl. 31(6), 065005 (2015). https://doi.org/10.1088/0266-5611/31/6/065005
11. B. Cox et al., "Quantitative spectroscopic photoacoustic imaging: a review," J. Biomed. Opt. 17(6), 061202 (2012). https://doi.org/10.1117/1.JBO.17.6.061202
12. B. Banerjee et al., "Quantitative photoacoustic tomography from boundary pressure measurements: noniterative recovery of optical absorption coefficient from the reconstructed absorbed energy map," J. Opt. Soc. Am. A 25(9), 2347–2356 (2008). https://doi.org/10.1364/JOSAA.25.002347
13. L. V. Wang, "Multiscale photoacoustic microscopy and computed tomography," Nat. Photonics 3(9), 503–509 (2009). https://doi.org/10.1038/nphoton.2009.157
14. J. Xia and L. V. Wang, "Small-animal whole-body photoacoustic tomography: a review," IEEE Trans. Biomed. Eng. 61(5), 1380–1389 (2014). https://doi.org/10.1109/TBME.2013.2283507
15. S. Tzoumas et al., "Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues," Nat. Commun. 7, 12121 (2016). https://doi.org/10.1038/ncomms12121
16. J. J. Niederhauser et al., "Combined ultrasound and optoacoustic system for real-time high-contrast vascular imaging in vivo," IEEE Trans. Med. Imaging 24(4), 436–440 (2005). https://doi.org/10.1109/TMI.2004.843199
17. S. Zackrisson, S. M. W. Y. van de Ven and S. S. Gambhir, "Light in and sound out: emerging translational strategies for photoacoustic imaging," Cancer Res. 74(4), 979–1004 (2014). https://doi.org/10.1158/0008-5472.CAN-13-2387
18. P. K. Upputuri and M. Pramanik, "Recent advances toward preclinical and clinical translation of photoacoustic tomography: a review," J. Biomed. Opt. 22(4), 041006 (2017). https://doi.org/10.1117/1.JBO.22.4.041006
19. J. Gamelin et al., "Curved array photoacoustic tomographic system for small animal imaging," J. Biomed. Opt. 13(2), 024007 (2008). https://doi.org/10.1117/1.2907157
20. K. H. Song et al., "Noninvasive photoacoustic identification of sentinel lymph nodes containing methylene blue in vivo in a rat model," J. Biomed. Opt. 13(5), 054033 (2008). https://doi.org/10.1117/1.2976427
21. C. Kim et al., "Handheld array-based photoacoustic probe for guiding needle biopsy of sentinel lymph nodes," J. Biomed. Opt. 15(4), 046010 (2010). https://doi.org/10.1117/1.3469829
22. A. Garcia-Uribe et al., "Dual-modality photoacoustic and ultrasound imaging system for noninvasive sentinel lymph node detection in patients with breast cancer," Sci. Rep. 5, 15748 (2015). https://doi.org/10.1038/srep15748
23. S. J. Wirkert et al., "Robust near real-time estimation of physiological parameters from megapixel multispectral images with inverse Monte Carlo and random forest regression," Int. J. Comput. Assist. Radiol. Surg. 11(6), 909–917 (2016). https://doi.org/10.1007/s11548-016-1376-5
24. A. E. Johnson and M. Hebert, "Using spin images for efficient object recognition in cluttered 3D scenes," IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 433–449 (1999). https://doi.org/10.1109/34.765655
25. S. L. Jacques, "Optical properties of biological tissues: a review," Phys. Med. Biol. 58(11), R37–R61 (2013). https://doi.org/10.1088/0031-9155/58/11/R37
26. S. L. Jacques, "Coupling 3D Monte Carlo light transport in optically heterogeneous tissues to photoacoustic signal generation," Photoacoustics 2(4), 137–142 (2014). https://doi.org/10.1016/j.pacs.2014.09.001
27. I. Wolf et al., "The medical imaging interaction toolkit," Med. Image Anal. 9(6), 594–604 (2005). https://doi.org/10.1016/j.media.2005.04.005
28. L. Breiman, "Random forests," Mach. Learn. 45(1), 5–32 (2001). https://doi.org/10.1023/A:1010933404324
29. A. Estabrooks, T. Jo and N. Japkowicz, "A multiple resampling method for learning from imbalanced data sets," Comput. Intell. 20(1), 18–36 (2004). https://doi.org/10.1111/coin.2004.20.issue-1
30. T. Kirchner et al., "Freehand photoacoustic tomography for 3D angiography using local gradient information," Proc. SPIE 9708, 97083G (2016). https://doi.org/10.1117/12.2209368
31. V. Neuschmelting et al., "Performance of a multispectral optoacoustic tomography (MSOT) system equipped with 2D vs. 3D handheld probes for potential clinical translation," Photoacoustics 4(1), 1–10 (2016). https://doi.org/10.1016/j.pacs.2015.12.001
32. A. Needles et al., "Development and initial application of a fully integrated photoacoustic micro-ultrasound system," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 60(5), 888–897 (2013). https://doi.org/10.1109/TUFFC.2013.2646
33. T. Tarvainen et al., "Bayesian image reconstruction in quantitative photoacoustic tomography," IEEE Trans. Med. Imaging 32(12), 2287–2298 (2013). https://doi.org/10.1109/TMI.2013.2280281
34. R. J. Zemp, "Quantitative photoacoustic tomography with multiple optical sources," Appl. Opt. 49(18), 3566–3572 (2010). https://doi.org/10.1364/AO.49.003566
35. M. Welvaert and Y. Rosseel, "On the definition of signal-to-noise ratio and contrast-to-noise ratio for fMRI data," PLoS One 8(11), e77089 (2013). https://doi.org/10.1371/journal.pone.0077089
36. B. D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, Cambridge (2007).
37. E. Beretta et al., "A variational method for quantitative photoacoustic tomography with piecewise constant coefficients," in Variational Methods, pp. 202–224, Walter de Gruyter (2016).
38. T. Tarvainen et al., "Image reconstruction with noise and error modelling in quantitative photoacoustic tomography," Proc. SPIE 9708, 97083Q (2016). https://doi.org/10.1117/12.2209477
39. J. Krejza et al., "Carotid artery diameter in men and women and the relation to body and neck size," Stroke 37(4), 1103–1105 (2006). https://doi.org/10.1161/01.STR.0000206440.48756.f7
40. N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Process. Mag. 19(1), 44–57 (2002). https://doi.org/10.1109/79.974727
41. X. L. Deán-Ben, E. Bay and D. Razansky, "Functional optoacoustic imaging of moving objects using microsecond-delay acquisition of multispectral three-dimensional tomographic data," Sci. Rep. 4, 5878 (2014). https://doi.org/10.1038/srep05878
42. A. Reiter and M. A. L. Bell, "A machine learning approach to identifying point source locations in photoacoustic data," Proc. SPIE 10064, 100643J (2017). https://doi.org/10.1117/12.2255098
43. A. Hauptmann et al., "Model based learning for accelerated, limited-view 3D photoacoustic tomography" (2017).
44. S. Antholzer, M. Haltmeier and J. Schwab, "Deep learning for photoacoustic tomography from sparse data" (2017).
45. C. Zhu and Q. Liu, "Hybrid method for fast Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with tumor-like heterogeneities," J. Biomed. Opt. 17(1), 010501 (2012). https://doi.org/10.1117/1.JBO.17.1.010501
46. A. Dosovitskiy et al., "Discriminative unsupervised feature learning with convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 766–774 (2014).
47. K. He et al., "Deep residual learning for image recognition," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2016). https://doi.org/10.1109/CVPR.2016.90
48. D. Waibel et al., "Reconstruction of initial pressure from limited view photoacoustic images using deep learning," Proc. SPIE 10494, 104942S (2018). https://doi.org/10.1117/12.2288353
49. W. Naetar and O. Scherzer, "Quantitative photoacoustic tomography with piecewise constant material parameters," SIAM J. Imaging Sci. 7(3), 1755–1774 (2014). https://doi.org/10.1137/140959705
Biographies

Thomas Kirchner received his MSc degree in physics from the University of Heidelberg in 2015. He is currently working on his PhD at the Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), where he does research in computational biophotonics, focusing on real-time multispectral photoacoustics and signal quantification.

Janek Gröhl received his MSc degree in medical informatics from the University of Heidelberg and Heilbronn University of Applied Sciences in 2016. He is currently working on his PhD at the Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), and does research in software engineering and computational biophotonics, focusing on signal quantification in photoacoustic imaging.

Lena Maier-Hein received her PhD from the Karlsruhe Institute of Technology in 2009 and conducted her postdoctoral research in the Division of Medical and Biological Informatics, German Cancer Research Center (DKFZ), and in the Hamlyn Centre for Robotic Surgery, Imperial College London. She leads the Division of Computer Assisted Medical Interventions (CAMI) at the DKFZ. Currently, she is working on multimodal image processing, surgical data science, and computational biophotonics.