The evaluation and testing of computer-assisted surgery systems is an important part of the development process. Since human and animal trials are difficult to perform and raise serious ethical concerns, artificial organs and phantoms have become a key component for testing clinical systems. For soft-tissue phantoms such as the liver, it is important to match the biomechanical properties of the organ as closely as possible. Organ phantoms are often cast from silicone in casting molds. Silicone is relatively cheap, and the method does not rely on expensive equipment. One major disadvantage of silicone phantoms, however, is their high rigidity. To this end, we propose a new method for producing silicone phantoms with a softer and mechanically more accurate structure. Since the rigidity of the silicone itself cannot be changed, we developed a new and simple method to weaken the structure of the silicone phantom. The key component is the repurposing of water-soluble support material from FDM 3D printing. We designed casting molds with an internal grid structure that reduces the rigidity of the cast. The molds are printed on an FDM (Fused Deposition Modeling) printer entirely from water-soluble PVA (polyvinyl alcohol). After the silicone has hardened, the mold, including the internal structure, is dissolved in water, leaving the silicone phantom pervaded by a grid of cavities. Our experiments show that the rigidity of the model can be reduced by up to 70% of its original value, controlled simply by the size of the internal grid structure.
Providing the surgeon with the right assistance at the right time during minimally-invasive surgery requires computer-assisted surgery systems to perceive and understand the current surgical scene. This can be achieved by analyzing the endoscopic image stream. However, endoscopic images often contain artifacts, such as specular highlights, which can hinder further processing steps, e.g., stereo reconstruction, image segmentation, and visual instrument tracking. Hence, correcting them is a necessary preprocessing step. In this paper, we propose a machine learning approach for automatic specular highlight removal from a single endoscopic image. We train a residual convolutional neural network (CNN) to localize and remove specular highlights in endoscopic images using weakly labeled data. The labels merely indicate whether an image does or does not contain a specular highlight. To train the CNN, we employ a generative adversarial network (GAN), which introduces an adversary to judge the performance of the CNN during training. We extend this approach by (1) adding a self-regularization loss to reduce image modification in non-specular areas and by (2) including a further network to automatically generate paired training data from which the CNN can learn. A comparative evaluation shows that our approach outperforms model-based methods for specular highlight removal in endoscopic images.
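The self-regularization loss mentioned above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the L1 formulation, the `specular_mask` input, and the weighting factor `lam` are assumptions introduced here for clarity.

```python
import numpy as np

def self_regularization_loss(inp, out, specular_mask):
    """Penalize modifications outside specular regions.

    inp, out: float images of the same shape, values in [0, 1].
    specular_mask: 1.0 where a pixel is specular, 0.0 elsewhere.
    """
    non_specular = 1.0 - specular_mask
    # Mean absolute change, restricted to non-specular pixels,
    # so the network is discouraged from altering clean image areas.
    return float(np.mean(np.abs((out - inp) * non_specular)))

def generator_loss(adversarial_loss, reg_loss, lam=0.5):
    # Total generator objective: fool the adversary while keeping
    # non-specular areas unchanged (lam is an illustrative weight).
    return adversarial_loss + lam * reg_loss
```

If the output differs from the input only inside the specular mask, the regularization term is zero; changes elsewhere increase the total loss in proportion to their magnitude.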