We present an approach to automatic visible light/infrared (VL/IR) image registration that leverages multiple visible light apertures for fast computation on resource-constrained systems. VL/IR registration is computationally challenging due to the different modalities of image generation. Although feature-based algorithms for direct registration exist, these methods proved too complex to reliably perform registration on low-cost, embedded processors in real time. We instead employed a second VL camera to dynamically estimate 2D translations aligning the brightest (warmest) objects in the IR video stream with their counterparts in the first VL video stream. Regions of interest (ROIs) are first selected based on the brightest areas in the IR image, as our application is primarily concerned with detecting objects warmer than background. The same broad region (e.g., the lower-left quadrant of the frame) is then selected in the VL1 and VL2 images. The translation that best registers the VL2 ROI to the VL1 ROI is estimated through template matching. Because all apertures in our camera system are fixed and coplanar relative to one another, the translation that best aligns the IR ROI to the VL1 ROI can be estimated from the translation from the VL2 ROI to the VL1 ROI. This approach provides dynamic registration of 1080p video at upwards of 10 Hz on an ODROID-XU4 single-board computer, while also allowing the processor time to render the IR-augmented video stream at 20 Hz. Imagery collected using Deep Analytics’ IR Boom Camera will be presented to demonstrate the approach.
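The core operation described above, estimating a 2D translation by template matching one ROI against another, can be illustrated with a minimal sketch. The brute-force sum-of-squared-differences search below is purely illustrative (the paper does not specify the matching criterion, and a production system would more likely use an optimized routine such as OpenCV's `cv2.matchTemplate`); the image, ROI coordinates, and function names are hypothetical.

```python
import numpy as np

def match_template(image, template):
    """Return (dy, dx) of the best match of `template` inside `image`,
    scored by sum of squared differences (brute-force illustration)."""
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = None, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = np.sum((image[y:y + h, x:x + w] - template) ** 2)
            if best_score is None or ssd < best_score:
                best_score, best_pos = ssd, (y, x)
    return best_pos

# Toy demonstration: lift a patch (standing in for the VL2 ROI) from a
# synthetic VL1 frame at a known offset, then recover that offset.
rng = np.random.default_rng(0)
vl1 = rng.random((40, 40))
vl2_roi = vl1[12:20, 25:33].copy()
dy, dx = match_template(vl1, vl2_roi)
print(dy, dx)  # → 12 25
```

Because the apertures are fixed and coplanar, the recovered VL2-to-VL1 translation can be mapped to an IR-to-VL1 translation by a fixed, pre-calibrated relationship rather than by matching across modalities.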
Developing automated threat detection algorithms for imaging equipment used by explosive ordnance disposal (EOD) and public safety personnel has the potential to improve mission efficiency and safety by automatically drawing a user’s attention to potential threats. To demonstrate the value of automated threat detection algorithms to the EOD community, Deep Analytics LLC (DA) developed an object detection algorithm that runs in real time on resource-constrained devices. The object detection algorithm identifies 10 common classes of improvised explosive device (IED) components in live video and alerts a user when an IED component is detected. In this paper we discuss the development of the IED component dataset, the training and evaluation of the object detection algorithm, and the deployment of the algorithm on resource-constrained hardware.