KEYWORDS: Radar, 3D image processing, Visualization, Antennas, Extremely high frequency, Image processing, 3D modeling, 3D metrology, Cameras, 3D visualizations
This article describes a novel approach to the real-time visualization of 3D imagery obtained from a 3D millimeter wave scanning radar. The MMW radar system employs a spinning antenna to generate a fan-shaped scanning pattern of the entire scene. The beams formed this way provide all-weather 3D distance measurements (range/azimuth display) of objects as they appear on the ground. The beam width of the antenna and its side lobes are optimized to produce the best possible resolution even at distances of up to 15 km. To create a full 3D data set, the fan pattern is tilted up and down with the help of a controlled stepper motor. For our experiments we collected data at 0.1-degree increments, using both bistatic and monostatic antenna arrangements. The data collected formed a stack of range-azimuth images in the shape of a cone. This information is displayed using our high-end 3D visualization engine, capable of displaying high-resolution volumetric models at 30 frames per second. The resulting 3D scenes can then be viewed from any angle and subsequently processed to integrate, fuse, or match them against real-life sensor imagery or 3D model data stored in a synthetic database.
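As a rough illustration of how such a cone-shaped stack of range-azimuth slices could be converted to Cartesian points for a volumetric display, consider the spherical-to-Cartesian sketch below. The function and array names are our own, and the geometry is the standard conversion, not the system's actual pipeline:

```python
import numpy as np

def cone_stack_to_points(ranges, azimuths_deg, elevations_deg):
    """Convert a stack of range-azimuth slices (one slice per tilt
    step of the fan beam) into Cartesian 3D points.

    ranges: (n_el, n_az) array of measured ranges in meters
    azimuths_deg: (n_az,) beam azimuth angles
    elevations_deg: (n_el,) tilt angles of the fan pattern
    """
    az = np.deg2rad(azimuths_deg)[None, :]    # shape (1, n_az)
    el = np.deg2rad(elevations_deg)[:, None]  # shape (n_el, 1)
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    return np.stack([x, y, z], axis=-1)       # (n_el, n_az, 3)

# hypothetical scan: tilt swept over 2 degrees in 0.1-degree steps
elevations = np.linspace(-1.0, 1.0, 21)
azimuths = np.linspace(0.0, 359.0, 360)
ranges = np.full((elevations.size, azimuths.size), 5000.0)  # 5 km
pts = cone_stack_to_points(ranges, azimuths, elevations)
```

Each slice of `pts` then lies on a cone whose half-angle is set by the tilt, matching the cone-shaped stack described above.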
KEYWORDS: Radar, 3D image processing, 3D modeling, Image sensors, Data modeling, Image processing, Visualization, 3D metrology, Extremely high frequency, Image fusion
This paper describes a novel approach to the real-time visualization of 3D imagery obtained from a 3D millimeter wave radar. The radar system uses two scanning beams to provide all-weather 3D distance measurements of objects appearing on the ground. This information is displayed using our high-end 3D visualization engine, capable of delivering models of up to 100,000 polygons at 30 frames per second. The resulting 3D models can then be viewed from any angle and subsequently processed to integrate and match them against 3D model data stored in a synthetic database. The resulting Synthetic Radar Vision System will provide a truly novel way to obtain all-weather 3D images. The paper will focus on the real-time imaging and display aspects of our solution, and will discuss technical details of the radar design itself. Engineering challenges will be outlined in the context of a practical application.
KEYWORDS: 3D modeling, Human-computer interaction, Human-machine interfaces, Process modeling, Visualization, Solid modeling, Telecommunications, Head, Data communications, 3D image processing
We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as a closed-loop dialog taking place between the participants. This dialog exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta-communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcast through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.
This paper describes a biologically motivated visual architecture for automatic target acquisition and tracking. The model, which is based on principal characteristics of the Human Visual System (HVS), was incorporated into a prototype ATR testbed that performs multi-resolution target signature extraction at the sensor level. The extracted target features are then integrated into a consistent representation of the scene using a parallel attention model of the HVS. The described ATR solution integrates a number of innovations in target segmentation, camouflage elimination, 3D-invariant target identification and intelligent tracking into a concise framework. The architecture is transparent to sensor technology. Simulation and experimental results are presented.
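The multi-resolution signature extraction step can be illustrated with a standard Gaussian pyramid; this is a generic sketch of the technique, not the testbed's actual implementation:

```python
import numpy as np

def gaussian_pyramid(img, levels=3):
    """Build a Gaussian pyramid: blur with a small binomial kernel,
    then downsample by 2, so features can be extracted at several
    resolutions of the same sensor image."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    pyr = [img]
    for _ in range(levels - 1):
        cur = pyr[-1]
        # separable blur: convolve columns, then rows
        blurred = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), 0, cur)
        blurred = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), 1, blurred)
        pyr.append(blurred[::2, ::2])  # downsample by 2 in each axis
    return pyr
```

A detector can then look for target signatures independently at each level, coarse levels guiding attention for the finer ones.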
We describe a general approach for the representation and recognition of 3D objects, as it applies to Automatic Target Recognition (ATR) tasks. The method is based on locally adaptive target segmentation, biologically motivated image processing and a novel view selection mechanism that develops 'visual filters' responsive to specific target classes to encode the complete viewing sphere with a small number of prototypical examples. The optimal set of visual filters is found via a cross-validation-like data reduction algorithm used to train banks of back-propagation (BP) neural networks. Experimental results on synthetic and real-world imagery demonstrate the feasibility of our approach.
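The cross-validation-like data reduction can be caricatured as a greedy prototype-selection loop. The sketch below substitutes a simple nearest-prototype classifier for the banks of BP networks, so the function names and the selection criterion are purely illustrative:

```python
import numpy as np

def select_prototypes(views, labels, n_protos):
    """Greedily pick the views that, used as prototypes of a
    nearest-prototype classifier, best explain the full view set --
    a toy stand-in for cross-validation-driven data reduction.

    views: (n, d) feature vectors of candidate views
    labels: (n,) target-class labels
    """
    chosen = []
    for _ in range(n_protos):
        best_acc, best_idx = -1.0, None
        for i in range(len(views)):
            if i in chosen:
                continue
            trial = chosen + [i]
            # classify every view by its nearest selected prototype
            d = np.linalg.norm(views[:, None] - views[trial][None], axis=-1)
            pred = labels[np.array(trial)][np.argmin(d, axis=1)]
            acc = np.mean(pred == labels)
            if acc > best_acc:
                best_acc, best_idx = acc, i
        chosen.append(best_idx)
    return chosen
```

The loop keeps adding the view that most improves classification over the whole viewing sphere, so a few well-placed prototypes can stand in for many raw views.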
We describe a novel approach for fully automated face recognition and show its feasibility on a large database of facial images (FERET). Our approach, based on a hybrid architecture consisting of an ensemble of connectionist networks -- radial basis functions (RBF) -- and inductive decision trees (DT), combines the merits of 'discrete and abstractive' features with those of 'holistic template matching.' Training for face detection takes place over both positive and negative examples. The benefits of our architecture include (1) robust detection of facial landmarks using decision trees, and (2) robust face recognition using consensus methods over ensembles of RBF networks. Experiments carried out using k-fold cross validation on a large database consisting of 748 images corresponding to 374 subjects, among them 11 duplicates, yield on average an 87% correct match rate, and, from the ROC curves, 99% correct verification is achieved at a 2% reject rate.
Access control and authentication techniques were developed within the framework of face recognition. The corresponding face recognition tasks considered herein include: (1) surveilling a gallery of images for the presence of specific probes, and (2) content-based image retrieval (CBIR) subject to correct ID ('match'), displaying specific facial landmarks such as wearing glasses. We describe a novel approach for fully automated face recognition and show its feasibility on a large database of facial images (FERET). Our approach, based on a hybrid architecture consisting of an ensemble of connectionist networks -- radial basis functions (RBF) -- and inductive decision trees (DT), combines the merits of 'discrete and abstractive' features with those of 'holistic template matching.' Training for face detection takes place over both positive and negative examples. The benefits of our architecture include (1) detection of faces using decision trees, and (2) robust face recognition using consensus methods over ensembles of RBF networks. Experimental results, proving the feasibility of our approach, yield (1) 96% accuracy, using cross validation, for surveillance on a database consisting of 904 images corresponding to 350 subjects, and (2) 93% accuracy, using cross validation, for CBIR subject to correct ID match tasks on a database of 200 images.
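The consensus step over the RBF ensemble might look like the averaged-score rule below, with a rejection threshold that yields the verification/reject trade-off reported above. This is a hypothetical sketch; the abstracts do not specify the actual consensus method:

```python
import numpy as np

def consensus_match(ensemble_scores, threshold=0.5):
    """Toy consensus over an ensemble of classifiers (e.g. RBF nets):
    average each network's per-class scores and accept the top class
    only if its mean score clears a rejection threshold.

    ensemble_scores: (n_networks, n_classes) array of class scores
    Returns the winning class index, or None if rejected.
    """
    mean_scores = np.mean(ensemble_scores, axis=0)  # (n_classes,)
    best = int(np.argmax(mean_scores))
    if mean_scores[best] < threshold:
        return None  # no confident consensus: reject the probe
    return best

# three hypothetical networks scoring three identities
scores = np.array([
    [0.1, 0.7, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.5, 0.2],
])
consensus_match(scores)        # class 1 (mean score 0.6 >= 0.5)
consensus_match(scores, 0.9)   # None (rejected at a stricter threshold)
```

Raising the threshold trades a higher reject rate for higher verification accuracy, which is how operating points like "99% verification at 2% reject" can be selected from an ROC curve.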