Here we present the design, fabrication, and evaluation of a 20-gauge vertically inserted razor-edge cannula (VIREC) robotic device guided by optical coherence tomography (OCT) for pneumatic dissection. The fiber sensor was glued inside the needle at a fixed offset of ~500 µm. During the experiment, the robotic needle driver precisely moved the VIREC based on the surgeon's input, which was carefully monitored by the M-mode OCT system. Once the needle was inserted to the desired depth, air was injected by the surgeon to separate the stroma from Descemet’s membrane (DM). In the in vivo study (N = 8), the “big bubble” was effectively generated in six of the eight eyes tested, and DM was perforated in two eyes. These results demonstrate the reliability and effectiveness of VIREC for “big bubble” DALK.
Real-time fringe projection profilometry (FPP) is developed as a 3D vision system to plan and guide autonomous robotic intestinal suturing. Conventional FPP requires sinusoidal patterns with multiple frequencies and phase shifts to generate tissue point clouds, resulting in a slow frame rate. Therefore, although FPP can reconstruct dense and accurate tissue point clouds, it is often too slow for dynamic measurements. To address this problem, we propose a deep learning-based single-shot FPP algorithm, which reconstructs tissue point clouds from a single sinusoidal pattern using a Swin-Unet. With this approach, we achieved an FPP imaging frame rate of 50 Hz while maintaining high point cloud measurement accuracy. The system was trained and evaluated on both synthesized and experimental datasets. An overall relative error of 1% to 3% was achieved.
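The multi-shot baseline that the single-shot network replaces can be made concrete. The sketch below (not from the paper; function name is my own) implements standard N-step phase-shifting, which recovers the wrapped fringe phase from N sinusoidal patterns, each requiring a separate camera exposure, which is exactly the acquisition cost the single-shot approach avoids:

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase-shifting: recover the wrapped phase from N fringe
    images I_n = A + B*cos(phi + 2*pi*n/N), all pixels at once."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return -np.arctan2(num, den)  # wrapped to (-pi, pi]
```

With multiple fringe frequencies, the wrapped phases are then unwrapped and triangulated into a point cloud; the single-shot network learns to produce the same output from one pattern.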
KEYWORDS: Signal attenuation, Optical coherence tomography, Backscatter, Tissues, Signal intensity, Monte Carlo methods, Biological samples, Scattering, Optical properties, Attenuation
Significance: Extracting optical properties of tissue [e.g., the attenuation coefficient (μ) and the backscattering fraction] from optical coherence tomography (OCT) images is a valuable tool for parametric imaging and related diagnostic applications. Previous attenuation estimation models depend on the assumption of a uniform backscattering fraction (R) within layers or whole samples, which does not accurately represent real-world conditions.
Aim: Our aim is to develop a robust and accurate model that calculates depth-wise values of attenuation and backscattering fractions simultaneously from OCT signals. Furthermore, we aim to develop an attenuation compensation model for OCT images that utilizes the optical properties we obtained to improve the visual representation of tissues.
Approach: Using the stationary iteration method under suitable constraint conditions, we derived approximated solutions of μ and R on a single-scattering model. During the iteration, the estimated value of μ can be rectified by introducing the large variations of R, whereas the small ones were automatically ignored. Based on the calculated structure information, the OCT intensity with attenuation compensation was deduced and compared with the original OCT profiles.
Results: Preliminary validation was performed in OCT A-line simulation and Monte Carlo modeling, and the subsequent experiment was conducted on multi-layer silicone-dye-TiO2 phantoms and ex vivo cow eyes. Our method achieved robust and precise estimation of μ and R for both simulated and experimental data. Moreover, the corresponding OCT images with attenuation compensation provided improved resolution over the entire imaging range.
Conclusions: Our proposed method was able to correct the estimation bias induced by variations of R and provided accurate depth-resolved measurements of both μ and R simultaneously.
The method does not require prior knowledge of the morphological information of tissue and better represents real-life tissues. Thus, it has the potential to aid OCT imaging-based disease diagnosis of complex, multi-layer biological tissue.
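For context, the conventional depth-resolved estimator that the iterative method improves upon can be written in a few lines. The sketch below (illustrative, not the paper's algorithm; the function name is my own) is the widely used single-scattering estimate in which μ at each pixel is derived from the signal divided by the integral of the signal below it, valid only when the backscattering fraction R is uniform, the very assumption the proposed method relaxes:

```python
import numpy as np

def depth_resolved_attenuation(a_line, pixel_size):
    """Baseline depth-resolved estimate: mu[i] ~ I[i] / (2*dz*sum_{j>i} I[j]).
    Assumes a uniform backscattering fraction R and full signal decay
    within the imaging range."""
    # cumulative sum from the bottom up, excluding the current pixel
    tails = np.cumsum(a_line[::-1])[::-1] - a_line
    with np.errstate(divide="ignore", invalid="ignore"):
        mu = a_line / (2.0 * pixel_size * tails)
    return mu
```

When R varies with depth (e.g., across tissue layers), this estimator biases μ at the layer boundaries, which is the error the iterative joint estimation of μ and R corrects.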
We report the design and evaluation of an optical coherence tomography (OCT) sensor-integrated 27-gauge vertically inserted razor-edge cannula (VIREC) for pneumatic dissection of Descemet’s membrane (DM) from the stromal layer. The VIREC was inserted vertically at the apex of the cornea to the desired depth near DM. The study was performed using ex vivo bovine corneas (N = 5) and rabbit corneas (N = 5). A clean pneumodissection of the stromal layer was successfully performed using VIREC without any stromal blanching on bovine eyes. The “big bubble” was generated in all five tests without perforation. Only micro-bubbles were observed in rabbit eyes. The results show that VIREC can be an effective surgical option for “big bubble” DALK.
Purpose: Intraoperative evaluation of bowel perfusion is currently dependent upon subjective assessment. Thus, quantitative and objective methods of assessing bowel viability in intestinal anastomosis are scarce. To address this clinical need, a conditional adversarial network is used to analyze the data from laser speckle contrast imaging (LSCI) paired with a visible-light camera to identify abnormal tissue perfusion regions.
Approach: Our vision platform was based on a dual-modality bench-top imaging system with red-green-blue (RGB) and dye-free LSCI channels. Swine model studies were conducted to collect data on bowel mesenteric vascular structures with normal/abnormal microvascular perfusion to construct the control or experimental group. Subsequently, a deep-learning model based on a conditional generative adversarial network (cGAN) was utilized to perform dual-modality image alignment and learn the distribution of normal datasets for training. Thereafter, abnormal datasets were fed into the predictive model for testing. Ischemic bowel regions could be detected by monitoring the erroneous reconstruction from the latent space. The main advantage is that the method is unsupervised and does not require subjective manual annotations. Compared with the conventional qualitative LSCI technique, it provides well-defined segmentation results for different levels of ischemia.
Results: We demonstrated that our model could accurately segment ischemic intestine images, with a Dice coefficient and accuracy of 90.77% and 93.06%, respectively, in 2560 RGB/LSCI image pairs. The ground truth was labeled by multiple and independent estimations, combining the surgeons’ annotations with fastest gradient descent in suspicious areas of vascular images. The total processing time was 0.05 s for an image size of 256 × 256.
Conclusions: The proposed cGAN can provide pixel-wise and dye-free quantitative analysis of intestinal perfusion, which is an ideal supplement to the traditional LSCI technique.
It has the potential to help surgeons increase the accuracy of intraoperative diagnosis and improve clinical outcomes in mesenteric ischemia and other gastrointestinal surgeries.
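The anomaly-detection principle, score each pixel by how badly a model trained only on normal perfusion reconstructs it, together with the Dice metric reported above, can be sketched in a few lines. This is illustrative only (function names are my own, and the reconstruction would come from the trained cGAN, which is not reproduced here):

```python
import numpy as np

def ischemia_mask(image, reconstruction, threshold):
    """Flag pixels the normal-trained model cannot reproduce: large
    reconstruction error marks abnormal (ischemic) perfusion."""
    error = np.abs(image.astype(float) - reconstruction.astype(float))
    return error > threshold

def dice(pred, truth):
    """Dice coefficient between a predicted and a ground-truth binary mask."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```

Because only normal data is needed for training, the threshold on reconstruction error replaces manual annotation of ischemic regions.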
Fringe projection profilometry (FPP) is being developed as a 3D vision system to assist robotic surgery and autonomous suturing. Conventionally, fluorescence markers are placed on a target tissue to indicate suturing landmarks, which not only increases system complexity but also raises safety concerns. To address these problems, we propose a numerical landmark detection algorithm based on deep learning. A landmark heatmap is regressed using an adapted U-Net from the four-channel data generated by the FPP. A Markov random field leveraging the structure prior is developed to search for the correct set of landmarks in the heatmap. The accuracy of the proposed method is verified through ex vivo porcine intestine landmark detection experiments.
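Before the Markov random field can select a structurally consistent set of landmarks, candidate peaks must be extracted from the regressed heatmap. A minimal sketch of that candidate-extraction step (my own illustration, not the paper's code; greedy non-maximum suppression is one common choice) might look like:

```python
import numpy as np

def top_k_peaks(heatmap, k, min_dist):
    """Pick the k strongest heatmap peaks at least min_dist pixels apart;
    these candidates would then be scored by the structure-prior MRF."""
    h = heatmap.astype(float).copy()
    peaks = []
    for _ in range(k):
        idx = np.unravel_index(np.argmax(h), h.shape)
        if h[idx] <= 0:
            break
        peaks.append(idx)
        r, c = idx
        rr, cc = np.ogrid[:h.shape[0], :h.shape[1]]
        # suppress the neighborhood so the next peak is a distinct landmark
        h[(rr - r) ** 2 + (cc - c) ** 2 <= min_dist ** 2] = 0
    return peaks
```

The MRF then resolves ambiguity among candidates by preferring sets whose pairwise geometry matches the expected suturing-landmark layout.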
Optical coherence tomography (OCT) has evolved into a powerful imaging technique that allows high-resolution visualization of biological tissues. However, most in vivo OCT systems for real-time volumetric (3D) imaging suffer from image distortion due to motion artifacts induced by involuntary and physiological movements of the living tissue, such as the eye, which is constantly in motion. While several methods have been proposed to account for and remove motion artifacts during OCT imaging of the retina, fewer works have focused on motion-compensated OCT-based measurements of the cornea. Here, we propose an OCT system for volumetric imaging of the cornea, capable of compensating both axial and lateral motion with micron-scale accuracy and millisecond-scale time consumption based on higher-order regression. System performance was evaluated during volumetric imaging of corneal phantom and bovine (ex vivo) samples that were positioned in the palm of a hand to simulate involuntary 3D motion. An overall motion-artifact error of less than 4.61 μm and a processing time of about 3.40 ms per B-scan were achieved.
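One way to read "higher-order regression" for axial compensation is to fit a low-order polynomial to a tracked surface-depth trace across scans and subtract the fitted trend as bulk motion. The sketch below is my own interpretation, not the paper's implementation, and assumes the polynomial trend is dominated by motion rather than true surface shape:

```python
import numpy as np

def axial_motion_correction(surface_depths, order=3):
    """Fit a higher-order polynomial to the detected surface depth across
    scans and subtract it, treating the smooth trend as bulk axial motion."""
    t = np.arange(len(surface_depths), dtype=float)
    coeffs = np.polyfit(t, surface_depths, order)
    motion = np.polyval(coeffs, t)
    return surface_depths - motion, motion
```

In practice the residual after subtraction would be used to realign each B-scan along the axial direction.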
Optical coherence tomography (OCT) with a robust depth-resolved attenuation compensation method for a wide range of imaging applications is proposed and demonstrated. We derive a model for deducing the attenuation coefficients and the signal compensation value from the depth-dependent backscattering profiles, to mitigate under- and overestimation in tissue imaging. We validated the method using numerical simulation and phantoms, where we achieved stable and robust compensation results over the entire depth of the samples. A comparison between our proposed model and other attenuation characterization models is also presented.
We developed a fully automated abdominal tissue classification algorithm for swept-source OCT imaging using a hybrid multilayer perceptron (MLP) and convolutional neural network (CNN) classifier. For the MLP, we incorporated an extensive set of features, and a subset was chosen to improve network efficiency. For the CNN, we designed a three-channel model combining the intensity information with depth-dependent optical properties of tissues. A rule-based decision fusion approach was applied to find the more convincing prediction between these two branches. Our model was trained using ex vivo porcine samples (~200 B-mode images, ~200,000 A-line signals) and evaluated on a hold-out dataset. Compared to other algorithms, our classifier achieves the highest accuracy of 0.9114 and precision of 0.9106. These promising results show its feasibility for real-time abdominal tissue sensing during OCT-guided robotic-assisted laparoscopic surgery.
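A rule-based decision fusion of two classifiers can take many forms; one simple, plausible rule (my own sketch, not the paper's exact rule; the margin parameter is an assumption) is to trust whichever branch is more confident, and average when the two are close:

```python
import numpy as np

def fuse_predictions(mlp_probs, cnn_probs, margin=0.1):
    """Per-sample fusion: pick the branch with the higher top-class
    confidence; average the distributions when confidences are close."""
    fused = []
    for p_mlp, p_cnn in zip(mlp_probs, cnn_probs):
        if abs(p_mlp.max() - p_cnn.max()) > margin:
            winner = p_mlp if p_mlp.max() > p_cnn.max() else p_cnn
        else:
            winner = (p_mlp + p_cnn) / 2.0
        fused.append(int(np.argmax(winner)))
    return fused
```

This kind of rule lets the A-line feature branch and the image branch cover each other's failure modes without training a third network.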
KEYWORDS: Signal attenuation, Optical coherence tomography, Tissues, Calibration, Monte Carlo methods, Image segmentation, Visualization, Speckle, Signal to noise ratio, Point spread functions
Optical coherence tomography (OCT) with a robust depth-resolved attenuation compensation method for a wide range of imaging applications is proposed and demonstrated. The proposed OCT attenuation compensation algorithm introduces an optimized axial point spread function (PSF) to modify existing depth-resolved methods and mitigate under- and overestimation in biological tissues, providing uniform resolution over the entire imaging range. The preliminary study is implemented using A-mode numerical simulation, where this method achieved stable and robust compensation results over the entire depth of the samples. The experimental results using phantoms and corneal imaging agree with the simulation results, evaluated using signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) metrics.
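The SNR and CNR metrics used for evaluation have several common formulations; the sketch below shows one standard pair of definitions (an assumption on my part, since the abstracts do not spell out the exact formulas):

```python
import numpy as np

def snr_db(signal_region, noise_region):
    """SNR in dB: mean signal power over background noise variance."""
    return 10 * np.log10(np.mean(signal_region ** 2) / np.var(noise_region))

def cnr(region_a, region_b):
    """Contrast-to-noise ratio between two tissue regions: mean
    difference normalized by the pooled standard deviation."""
    return abs(region_a.mean() - region_b.mean()) / np.sqrt(
        (region_a.var() + region_b.var()) / 2.0)
```

Higher CNR at depth after compensation indicates that deep-layer contrast lost to attenuation has been restored without amplifying noise.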