Colorectal cancer is the fourth leading cause of cancer deaths worldwide, and the standard for detection and prevention is the identification and removal of premalignant lesions through optical colonoscopy. More than 60% of colorectal cancer cases are attributed to missed polyps. Current procedures for automated polyp detection are limited by the amount of data available for training, by the underrepresentation of non-polypoid lesions and of lesions that are inherently difficult to label, and by the lack of information about the topography of the lumen surface. It has been shown that information related to the depth and topography of the lumen surface can boost subjective lesion detection. In this work, we add predicted depth information as an additional mode of data when training deep networks for polyp detection, segmentation, and classification. We use conditional GANs to predict depth from monocular endoscopy images and fuse these predicted depth maps with RGB white-light images in feature space. Our empirical analysis demonstrates state-of-the-art results for RGB-D polyp segmentation, with 98% accuracy on four different publicly available datasets. Moreover, we demonstrate an 87.24% accuracy on lesion classification. We also show that our networks can domain-adapt to a variety of data from different sources.