Presentation + Paper
Fast multi-modal reuse: co-occurrence pre-trained deep learning models
14 May 2019
Abstract
This paper studies data fusion in traditional, spatial, and aerial video stream applications, addressing the processing of data from multiple sources using co-occurrence information and a common semantic metric. Using co-occurrence information to infer semantic relations between measurements avoids the need for external information such as labels. Many current Vector Space Models (VSM) do not preserve co-occurrence information, which leads to a less useful similarity metric. We propose a proximity matrix embedding, as part of the learned metric embedding, whose entries reflect the co-occurrence frequencies observed across the input sets. First, we compute an implicit spatial sensor proximity matrix using Jaccard similarity over an array of sensor measurements and compare it with state-of-the-art kernel PCA learned from a feature-space proximity representation, which corresponds to a k-radius ball of nearest neighbors. Finally, we extend the class co-occurrence boosting of our unsupervised model using pre-trained multi-modal reuse.
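As a rough illustration of the pipeline the abstract describes, the sketch below builds a co-occurrence proximity matrix from hypothetical binary sensor measurement sets via Jaccard similarity and then embeds it with kernel PCA using a precomputed kernel. The data shapes, the helper jaccard_proximity, and all parameters are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed, not the paper's code): Jaccard proximity matrix
    # over binary sensor measurement sets, embedded with kernel PCA.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    # Hypothetical binary measurements: rows = sensors, columns = observed events.
    measurements = rng.integers(0, 2, size=(8, 50)).astype(bool)

    def jaccard_proximity(X):
        """Pairwise Jaccard similarity between binary measurement sets."""
        n = X.shape[0]
        P = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                inter = np.logical_and(X[i], X[j]).sum()
                union = np.logical_or(X[i], X[j]).sum()
                P[i, j] = inter / union if union else 1.0
        return P

    P = jaccard_proximity(measurements)  # proximity (co-occurrence) matrix

    # Treat the proximity matrix as a precomputed kernel, analogous to the
    # kernel PCA comparison mentioned in the abstract.
    kpca = KernelPCA(n_components=2, kernel="precomputed")
    embedding = kpca.fit_transform(P)
    print(embedding.shape)  # (8, 2)

The Jaccard similarity is positive semi-definite over sets, so passing the proximity matrix directly as a precomputed kernel is a reasonable stand-in for the feature-space proximity representation described above.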
Conference Presentation
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Vasanth Iyer, Alexander Aved, Todd B. Howlett, Jeffrey T. Carlo, Asif Mehmood, Niki Pissinou, and S. S. Iyengar "Fast multi-modal reuse: co-occurrence pre-trained deep learning models", Proc. SPIE 10996, Real-Time Image Processing and Deep Learning 2019, 109960A (14 May 2019); https://doi.org/10.1117/12.2519546
KEYWORDS: Sensors, Data modeling, Principal component analysis, Computer programming, Feature extraction, Image sensors, Neural networks