In the domain of small-animal brain imaging, including rats, ultrasound (US) imaging is an appealing tool because it offers a high frame rate, easy access, and no ionizing radiation. However, the rat skull introduces artifacts that degrade brain image quality in terms of contrast and resolution, so minimizing skull-induced artifacts is a significant challenge in US imaging. Unfortunately, the literature on rat skull-induced artifacts is limited, and studies exploring how to reduce them are particularly scarce. Because it is difficult to experimentally image the same rat brain with and without a skull, numerical simulation is a reasonable approach to studying skull-induced artifacts. In this work, we investigated skull-induced artifacts by simulating a grid of point targets inside the skull cavity and quantifying the resulting artifact pattern. Given a large amount of paired training data, deep learning (DL) models can automatically capture such artifact patterns and have effectively reduced image artifacts in multiple modalities. This work explored the feasibility of DL-based methods for reducing skull-induced artifacts in US imaging. Simulated data were used to train a U-Net-derived, image-to-image regression network: US channel data containing artifact signals served as the network inputs, and channel data with reduced artifact signals served as the regression targets. Results suggest the proposed method can reduce skull-induced artifacts and enhance target signals in B-mode images.
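The abstract above does not specify the network depth, loss function, or training schedule, so the following is only a minimal illustrative sketch of a U-Net-style image-to-image regression model trained on paired simulated channel data (with-artifact input, reduced-artifact target). Layer sizes, the L1 loss, and all variable names are assumptions for demonstration, not the authors' implementation.

```python
# Minimal U-Net-style regression sketch (PyTorch); sizes and loss are assumed.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Two-level encoder-decoder with skip connections (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)  # regress artifact-reduced channel data

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# One training step on paired simulated data (tensors are random stand-ins
# for channel data with skull artifacts and their artifact-reduced targets).
model = SmallUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

with_artifact = torch.randn(4, 1, 128, 128)
reduced_artifact = torch.randn(4, 1, 128, 128)
loss = loss_fn(model(with_artifact), reduced_artifact)
loss.backward()
optimizer.step()
```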
This paper presents the design, fabrication, and experimental validation of a photoacoustic (PA) imaging probe for robotic surgery. PA is an emerging imaging modality that combines the penetration depth of ultrasound (US) imaging with high optical contrast. When equipped with a PA probe, a surgical robot can provide intraoperative guidance to the operating physician, alerting them to the presence of vital substrate anatomy (e.g., nerves or blood vessels) that is invisible to the naked eye. Our probe is designed to work with the da Vinci surgical system to produce three-dimensional PA images: the robot performs Remote Center-of-Motion (RCM) scanning across a region of interest, and successive PA tomographic images are acquired and interpolated into a three-dimensional PA volume. To evaluate the accuracy of robot-actuated 3D tomographic scanning for PA guidance, we conducted an experimental study imaging a multi-layer wire phantom; the computed Target Registration Error (TRE) between the acquired PA image and the phantom was 1.5567 ± 1.3605 mm. An ex vivo study further demonstrated the ability of the proposed laparoscopic device to detect vasculature in 3D. These results indicate the potential of our PA system to be incorporated into clinical robotic surgery for functional anatomical guidance.
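For readers unfamiliar with the TRE metric reported above, the sketch below shows one common way such a value is computed: rigidly aligning target points localized in the reconstructed PA volume to the known phantom geometry (Kabsch/SVD alignment) and reporting residual distances as mean ± standard deviation. The point sets, noise level, and registration pipeline here are placeholders, not the study's actual data or method.

```python
# Illustrative Target Registration Error (TRE) evaluation with synthetic points.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # enforce a proper rotation (no reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def target_registration_error(src, dst):
    """Per-target Euclidean error after rigid alignment, in the units of src/dst."""
    R, t = rigid_register(src, dst)
    aligned = src @ R.T + t
    return np.linalg.norm(aligned - dst, axis=1)

# Placeholder point sets: wire locations segmented from the 3D PA volume
# (pa_points) and the corresponding known phantom geometry (phantom_points).
rng = np.random.default_rng(0)
phantom_points = rng.uniform(0, 20, size=(12, 3))               # mm
pa_points = phantom_points + rng.normal(0, 0.5, size=(12, 3))   # simulated noise
tre = target_registration_error(pa_points, phantom_points)
print(f"TRE = {tre.mean():.4f} ± {tre.std():.4f} mm")
```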