This work focuses on a bimodal vision system previously demonstrated as a relevant sensing candidate for detecting and tracking fast objects: it combines the unique features of event-based sensors, i.e. high temporal resolution, reduced bandwidth needs, low energy consumption, and passive detection capabilities, with the high spatial resolution of an RGB camera. In this study, we propose a model based on the principle of attentional vision for real-time detection and tracking of UAVs, taking into account computing and on-board resource constraints. A laboratory demonstrator has been built to evaluate the operational limits in terms of computation time and system performance (including target detection) versus target speed. Our first indoor and outdoor tests revealed the interest and potential of our system for quickly detecting objects flying at hundreds of kilometers per hour.
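The abstract above does not detail the tracking stage. As a minimal illustration of the kind of lightweight filter that fits the on-board compute constraints it mentions, the sketch below implements a standard constant-velocity alpha-beta tracker; the gains and time step are assumed values for illustration, not parameters from the paper.

```python
# Hedged sketch (not the authors' method): a constant-velocity alpha-beta
# tracker that smooths noisy position detections of a fast-moving target
# and predicts its next position at low computational cost.
def alpha_beta_track(measurements, dt=0.001, alpha=0.85, beta=0.005):
    """measurements: sequence of observed positions along one axis.
    Returns the list of filtered position estimates."""
    x, v = measurements[0], 0.0   # initial position and velocity state
    out = []
    for z in measurements:
        x_pred = x + v * dt       # predict with constant velocity
        r = z - x_pred            # innovation (measurement residual)
        x = x_pred + alpha * r    # correct position
        v = v + (beta / dt) * r   # correct velocity
        out.append(x)
    return out
```

The same filter would be run independently on each image axis of the detected region of interest.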
Compared to frame-based visual streams, event-driven visual streams offer very low bandwidth needs and high temporal resolution, making them an interesting choice for embedded object recognition. Such visual systems are expected to outperform standard cameras but have not yet been studied in the context of homing guidance for projectiles, which imposes drastic navigation constraints. This work starts from a first interaction model between a standard camera and an event camera, validated in the context of unattended ground sensors and situational-awareness applications from a static position. In this paper we propose to extend this first interaction model with higher-level activity analysis and object recognition from a moving position. The proposed event-based terminal guidance system is studied first through a target laser-designation scenario and optical-flow computation to validate guidance parameters. Real-time embedded processing techniques are evaluated, preparing the design of a future demonstrator of a very fast navigation system. The first results were obtained on embedded Linux architectures with multi-threaded feature extraction. This paper presents and discusses these first results.
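The abstract mentions optical-flow computation to validate guidance parameters without specifying the algorithm. As a self-contained sketch of the underlying idea, the toy estimator below (an assumed stand-in, not the paper's method) recovers the dominant translation between two small grayscale frames by minimizing the sum of absolute differences over candidate shifts.

```python
# Hedged sketch: brute-force block-matching flow. For each candidate
# shift (dx, dy), compare f1 shifted against f0 over their overlap and
# keep the shift with the smallest sum of absolute differences (SAD).
def dominant_flow(f0, f1, max_shift=3):
    """f0, f1: 2-D lists (rows of pixel intensities) of equal size.
    Returns (dx, dy) such that f1[y + dy][x + dx] best matches f0[y][x]."""
    h, w = len(f0), len(f0[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = 0
            # restrict to the region where the shifted index stays in bounds
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(f1[y + dy][x + dx] - f0[y][x])
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best
```

A real-time embedded implementation would instead use a sparse or event-driven flow method, but the matching principle is the same.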
A new and challenging vision system has recently gained prominence and proven its capabilities compared to traditional imagers: the paradigm of event-based vision. Instead of capturing the whole sensor area at a fixed frame rate as a frame-based camera does, spike sensors, or event cameras, report the location and sign of brightness changes in the image. Although the currently available spatial resolutions of these event cameras are quite low (640×480 pixels), their real interest lies in their very high temporal resolution (in the range of microseconds) and very high dynamic range (up to 140 dB). Thanks to the event-driven approach, their power consumption and processing requirements are quite low compared to conventional cameras. This latter characteristic is of particular interest for embedded applications, especially situational awareness. The main goal of this project is to detect and track activity zones from the spike event stream and to notify the standard imager where the activity takes place. In this way, automated situational awareness is enabled by analyzing the sparse information of event-based vision and waking up the standard camera at the right moments and at the right positions, i.e. the detected regions of interest. We demonstrate the capacity of this bimodal vision approach to take advantage of both cameras: the spatial resolution of the standard camera and the temporal resolution of the event camera. An opto-mechanical demonstrator has been designed to integrate both cameras in a compact visual system with embedded software processing, opening the perspective of autonomous remote sensing. Several field experiments demonstrate the performance and interest of such an autonomous vision system. The emphasis is placed on the ability to detect and track fast-moving objects, such as fast drones. Results and performance are evaluated and discussed on these realistic scenarios.
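The activity-zone detection described above can be illustrated with a minimal sketch: count events per spatial cell over a short time window and flag cells whose count exceeds a threshold as regions of interest that would wake the standard camera. The cell size and threshold below are assumed values for illustration, not parameters from the paper.

```python
# Hedged sketch of event-stream activity detection: events are tuples
# (t, x, y, polarity). Counts are accumulated per coarse spatial cell
# within a time window; busy cells are reported as regions of interest.
from collections import defaultdict

def active_cells(events, t0, t1, cell=16, threshold=5):
    """Return the set of (cx, cy) cells whose event count in [t0, t1)
    reaches the threshold. cell is the cell side length in pixels."""
    counts = defaultdict(int)
    for t, x, y, _pol in events:
        if t0 <= t < t1:
            counts[(x // cell, y // cell)] += 1
    return {c for c, n in counts.items() if n >= threshold}
```

Each returned cell would be mapped to pixel coordinates in the standard camera's frame to define the wake-up region of interest.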