Poster + Paper
27 May 2022
Teaching robots to see clearly: optimizing cross-modal domain adaptation models through sequential classification and user evaluation
Jayant Singh, Lalit Bauskar, Kara Capps, Ori Yonay, Alicia Yong, Seth Polsley, Samantha Ray, Suya You, Tracy Hammond
Conference Poster
Abstract
As society becomes increasingly reliant on autonomous vehicles, it is necessary for these vehicles to be able to navigate new environments. Environmental data is expensive to label, especially because it comes from many different sensors, and it can be difficult to interpret how the underlying models work. Therefore, an adequate machine learning model for multi-modal, unsupervised domain adaptation (UDA) that is both accurate and explainable is needed. We aim to improve xMUDA, a state-of-the-art multi-modal UDA model, by incorporating a multi-step binary classification algorithm that allows us to prioritize certain data labels; alongside human evaluation, we report the mIoU and accuracy of the final output.
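The paper's method is not reproduced on this page, but as a rough illustration of the idea the abstract describes, the sketch below shows one way a sequential (multi-step) binary classification pass over priority-ordered labels could yield per-point segmentation predictions, together with a standard mIoU computation. The label names, threshold, and function names are hypothetical and not taken from the paper; in the actual system the per-label scores would come from the model's 2D/3D prediction heads.

```python
import numpy as np

# Hypothetical priority-ordered labels; names and ordering are illustrative only.
PRIORITY_LABELS = ["vehicle", "person", "road", "vegetation", "background"]

def sequential_binary_segmentation(binary_scores, threshold=0.5):
    """Assign each point the first label (in priority order) whose binary
    classifier fires; points with no positive classifier fall back to the
    lowest-priority label.

    binary_scores: array of shape (num_labels, num_points) holding per-label
    probabilities from independent binary classifiers.
    """
    num_labels, num_points = binary_scores.shape
    # Default to the lowest-priority label (e.g., background).
    predictions = np.full(num_points, num_labels - 1, dtype=np.int64)
    assigned = np.zeros(num_points, dtype=bool)
    for label_idx in range(num_labels):
        fires = (binary_scores[label_idx] >= threshold) & ~assigned
        predictions[fires] = label_idx
        assigned |= fires
    return predictions

def mean_iou(predictions, targets, num_labels):
    """Mean intersection-over-union over labels that appear in either
    the predictions or the ground truth."""
    ious = []
    for c in range(num_labels):
        inter = np.sum((predictions == c) & (targets == c))
        union = np.sum((predictions == c) | (targets == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```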
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jayant Singh, Lalit Bauskar, Kara Capps, Ori Yonay, Alicia Yong, Seth Polsley, Samantha Ray, Suya You, and Tracy Hammond "Teaching robots to see clearly: optimizing cross-modal domain adaptation models through sequential classification and user evaluation", Proc. SPIE 12100, Multimodal Image Exploitation and Learning 2022, 121000J (27 May 2022); https://doi.org/10.1117/12.2618853
KEYWORDS
Data modeling, RGB color model, Image segmentation, Sensors, Binary data, Robots, Visual process modeling
