For security-related applications, MMW radar and radiometer systems in remote sensing or stand-off configurations are well-established techniques. Development stages range from experimental to commercial systems on the civil and military markets. Typical examples are systems for personnel screening at airports to detect objects concealed under clothing, enhanced vision or landing aids for helicopters, and vehicle-based systems for detecting suspicious objects or IEDs along roads. Due to the physical principles of active (radar) and passive (radiometer) MMW measurement techniques, the appearance of individual objects, and thus of the complete scenario, differs considerably between radar and radiometer images. A suitable combination of both measurement techniques could therefore yield enhanced object information. However, some technical requirements must be taken into account: the imaging geometry of both sensors should be nearly identical, the geometrical resolution and the wavelength should be similar, and ideally the imaging should be carried out simultaneously. Theoretical and experimental investigations on a suitable combination of MMW radar and radiometer information have therefore been conducted. First experiments in 2016 were performed with an imaging linescanner based on a cylindrical imaging geometry [1]. It combines a horizontal line scan in azimuth with a linear motion in the vertical direction for the second image dimension. The main drawback of this system is the limited number of pixels in the vertical dimension at a given distance. Nevertheless, the near-range imaging results were promising. The combination of radar and radiometer sensors was therefore mounted on the DLR wide-field-of-view linescanner ABOSCA, which is based on a spherical imaging geometry [2]. A comparison of both imaging systems is discussed. The investigations concentrate on rather basic scenarios with canonical targets such as flat plates, spheres, corner reflectors and cylinders. First experimental measurement results with the ABOSCA linescanner are shown.
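The requirement of similar geometrical resolution for both sensors can be made concrete with a back-of-the-envelope sketch (illustrative Python with assumed W-band frequencies, aperture size and stand-off distance; these are not the actual parameters of the scanners described above):

```python
# Illustrative check (hypothetical system parameters): for a combined
# radar/radiometer scanner, the spatial footprints of both channels should
# match. The angular resolution of an aperture is roughly lambda / D (rad).
C = 299_792_458.0  # speed of light, m/s

def footprint(freq_hz: float, aperture_m: float, range_m: float) -> float:
    """Approximate footprint diameter on the target at a given range."""
    wavelength = C / freq_hz
    beamwidth_rad = wavelength / aperture_m   # diffraction-limited estimate
    return beamwidth_rad * range_m

# Hypothetical W-band sensors sharing one 30 cm aperture at 20 m stand-off:
radar_fp = footprint(94e9, 0.30, 20.0)        # active channel
radiometer_fp = footprint(89e9, 0.30, 20.0)   # passive channel
print(f"radar footprint      ~{radar_fp * 100:.1f} cm")
print(f"radiometer footprint ~{radiometer_fp * 100:.1f} cm")
```

With nearly equal wavelengths and a shared aperture, the two footprints differ by only about a centimeter at this distance, which is why similar wavelengths and identical geometry make pixel-wise fusion feasible.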
Change detection using very high resolution SAR images is an important source of information for reconnaissance applications. Modern SAR sensors are capable of acquiring many images in short periods of time, which creates the need for a reliable automatic change detection method. In this paper, we describe a new automatic change detection approach that combines very high resolution SAR images with prior knowledge about the imaged scene. Here, the prior knowledge comes from vector maps, which can be obtained from a Geographic Information System (GIS). These vector maps allow us to determine which regions are of interest for change detection, and what kind of changes or objects can be expected there. The algorithm described in this paper is applied to a time series of high resolution TerraSAR-X images of a port with military shipyards, and used to automatically detect ship activity and extract information about the detected ships. In this case, the vector maps were obtained from a GIS containing map data from OpenStreetMap.
KEYWORDS: Radar, Data fusion, 3D image processing, Synthetic aperture radar, Visualization, Information fusion, Data acquisition, Imaging systems, Remote sensing, 3D modeling, Data modeling, Buildings, Ray tracing, Sensors
Due to challenges in the visual interpretation of radar signatures and in the subsequent information extraction, a fusion with other data sources can be beneficial. The most accurate basis for a fusion of any kind of remote sensing data is the mapping of the acquired 2D image space onto the true 3D geometry of the scenery. In the case of radar images this is a challenging task, because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. This paper describes a method that accurately maps the detailed 3D information of a scene to the slant-range based coordinate system of imaging radars. Through this mapping, all the geometrical parts contributing to one resolution cell can be determined in 3D space. The proposed method is highly efficient, because computationally expensive operations can be performed directly on graphics card hardware. The described approach builds an excellent basis for sophisticated methods to extract data from multiple complementary sensors, such as radar and optical images, especially because true 3D information of whole cities will be available in the near future. The performance of the developed methods is demonstrated with high resolution radar data acquired by the space borne SAR sensor TerraSAR-X.
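The core of the mapping idea can be illustrated with a toy example (a minimal NumPy sketch with hypothetical geometry, not the paper's GPU-based implementation): points of the 3D scene are binned by their measured range, and layover occurs whenever several surface parts share one range cell.

```python
import numpy as np

# Minimal sketch (hypothetical geometry, not the paper's GPU method): bin 3D
# scene points into slant-range cells of a side-looking radar. Points at
# different heights can fall into the same range bin -- that is the layover
# ambiguity the 3D mapping resolves.
def range_bins(points, sensor, bin_size):
    """Map each 3D point to a slant-range bin index relative to the sensor."""
    ranges = np.linalg.norm(points - sensor, axis=1)
    return np.floor(ranges / bin_size).astype(int)

sensor = np.array([0.0, 0.0, 500.0])   # hypothetical sensor position
ground = np.array([300.0, 0.0, 0.0])   # point on flat ground
roof   = np.array([383.0, 0.0, 60.0])  # building roof farther out, but elevated
bins = range_bins(np.stack([ground, roof]), sensor, bin_size=1.0)
print(bins)  # equal bin indices => both surfaces contribute to one cell
```

Here the elevated roof and the ground point land in the same 1 m range cell, so only a true 3D mapping can tell which surfaces contributed to that resolution cell.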
Specific imaging effects, caused mainly by the range measurement principle of a radar device, its much lower frequency range compared to the optical spectrum, the slanted imaging geometry and certainly the limited spatial resolution, decisively complicate the interpretation of radar signatures. Especially the coherent image formation, which causes unwanted speckle noise, aggravates the problem of visually recognizing target objects. Fully automatic approaches with acceptable false alarm rates are therefore an even harder challenge.
At the Microwaves and Radar Institute of the German Aerospace Center (DLR), the development of a robust overall processing workflow for automatic target recognition (ATR) from high resolution synthetic aperture radar (SAR) image data is in progress. The heart of the general approach is to use time series exploitation for the initial detection step and simulation-based signature matching for the subsequent recognition. This paper shows the overall ATR chain as a proof of concept for the special case of airplane recognition on image data from the space borne SAR sensor TerraSAR-X.
In general, the interpretation of signatures from synthetic aperture radar (SAR) data is a challenging task, even for the expert image analyst. For the most part, this is caused by radar-specific imaging effects, e.g. layover, multi-path propagation or speckle noise. Specifically for applications in maritime security, ship signatures exhibit additional defocusing effects due to the ship's movement, even when anchored. Focusing on object recognition, the detection of target signatures succeeds with a good chance of success, but the identification is often impossible. To assist image analysts in their recognition tasks, a SAR simulation tool has been developed recently. It is very simple to operate: available 3D model data of ships is simulated, and the resulting simulated signatures are compared with their real counterparts from SAR images. This is a very robust way to identify larger vessels in current one-meter resolution space borne SAR data. Nevertheless, for smaller vessels this can still be very challenging, because the resolution is too coarse. Recently, TerraSAR-X introduced a new staring spotlight imaging mode that enhances the cross-range resolution significantly and therefore also improves the chance of identifying smaller vessels. This paper demonstrates the capabilities of the developed simulation tool in assisted target recognition, specifically on ship signatures. The improvement in recognition performance is studied by comparing results for TerraSAR-X sliding spotlight mode and staring spotlight mode data.
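The comparison of simulated and real signatures can be sketched as follows (a hedged illustration: the abstract does not specify the tool's actual similarity metric, so normalized cross-correlation on synthetic stand-in data is assumed here):

```python
import numpy as np

# Hedged sketch of the matching idea (the actual tool's metric is not given
# in the abstract): score a simulated ship signature against a real SAR chip
# with normalized cross-correlation; the candidate 3D model with the highest
# score is the most likely identity.
def ncc(a, b):
    """Normalized cross-correlation of two equally sized image chips."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

rng = np.random.default_rng(0)
real_chip = rng.random((64, 64))                   # stand-in for a SAR chip
good_sim = real_chip + 0.1 * rng.random((64, 64))  # close simulated signature
bad_sim = rng.random((64, 64))                     # unrelated candidate model
print(ncc(real_chip, good_sim) > ncc(real_chip, bad_sim))  # → True
```

Ranking candidate models by such a score mirrors the assisted-recognition workflow: the analyst only has to inspect the best-matching simulated signatures rather than the whole model catalogue.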
Current space borne synthetic aperture radar (SAR) systems are able to provide users with high resolution image data of around one meter. Focusing on systems operating in the X-band, this value is not the end of possible improvements in resolution: there still lies great potential in an increase of the bandwidth of the radar signal itself and in a significant enlargement of the synthetic aperture. From the technical point of view this is certainly a challenge, but it could be realized for future space borne SAR missions already with current state-of-the-art hardware. As a proof, TerraSAR-X introduces a new staring spotlight image product that significantly improves the azimuth resolution to around a quarter of a meter. The technical realization of the very high resolution SAR system is not the only obstacle to overcome. Especially the increase of Doppler bandwidth along the synthetic aperture requires special treatment and considerations in system design from a signal processing point of view. Challenges like orbital accuracy, tropospheric effects, approximations in SAR processing methods and depth-of-focus issues have to be addressed. In this paper, most of these challenges are studied separately by performing parametric simulations for single point targets and also for the complex signature of an airplane. SAR parameter sets comparable to those of the high resolution sliding spotlight mode and the new staring spotlight mode of TerraSAR-X are used for the simulations.
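The link between the enlarged synthetic aperture and the quarter-meter azimuth resolution follows from the standard spotlight relation δ_az ≈ λ / (2·Δθ), with Δθ the observed aspect-angle span. A short calculation illustrates this (the aspect angles below are illustrative values chosen to reproduce the quoted resolutions; only the TerraSAR-X centre frequency of 9.65 GHz is taken as given):

```python
import math

# Back-of-the-envelope check: azimuth resolution of a spotlight SAR is
# approximately lambda / (2 * delta_theta). The aspect-angle spans used
# here are illustrative, not official TerraSAR-X mode specifications.
C = 299_792_458.0  # speed of light, m/s

def azimuth_resolution(freq_hz, aspect_angle_deg):
    """Approximate azimuth resolution in meters for a given aspect span."""
    wavelength = C / freq_hz
    return wavelength / (2.0 * math.radians(aspect_angle_deg))

F_TSX = 9.65e9  # TerraSAR-X centre frequency (X-band)
print(f"narrow aspect span (~0.75 deg): {azimuth_resolution(F_TSX, 0.75):.2f} m")
print(f"wide aspect span   (~3.7 deg):  {azimuth_resolution(F_TSX, 3.7):.2f} m")
```

The roughly five-fold wider aspect span yields the jump from meter-class to quarter-meter azimuth resolution, and with it the five-fold larger Doppler bandwidth whose processing challenges the paper studies.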
The fusion of image data from different sensor types is an important processing step for many remote sensing applications to maximize information retrieval from a given area of interest. The basic process to fuse image data is to select a common coordinate system and resample the data into this new image space. Usually, this is done by orthorectifying all the different image spaces, i.e. transforming each image's projection plane to a geographic coordinate system. Unfortunately, resampling the slant-range based image space of a space borne synthetic aperture radar (SAR) to such a coordinate system strongly distorts its content and therefore reduces the amount of extractable information: the understanding of the complex signatures, which are already hard to interpret in the original data, gets even worse. To preserve maximum information extraction, this paper shows an approach to transform optical images into the radar image space. This is accomplished by taking an optical image along with a digital elevation model and projecting it onto the same slant-range image plane as the one from the radar image acquisition. The whole process is shown in detail for practical examples.
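The projection of a DEM cell into the slant-range plane can be sketched in a strongly simplified zero-Doppler geometry (hypothetical numbers and a flat, straight-line orbit; the paper's method handles the full acquisition geometry):

```python
import numpy as np

# Simplified sketch of the projection step (flat orbit, zero-Doppler geometry
# assumed): each DEM cell (x = along-track, y = ground range, h = height) is
# mapped to (azimuth, slant-range) so its optical pixel value can be
# resampled into the radar image plane.
def to_slant_range(x, y, h, sensor_height):
    """Azimuth = along-track coordinate; range = distance sensor -> surface."""
    slant = np.sqrt(y ** 2 + (sensor_height - h) ** 2)
    return x, slant

# A ridge in the DEM moves toward the sensor in slant range (foreshortening):
az_flat, r_flat = to_slant_range(0.0, 1000.0, 0.0, 5000.0)
az_peak, r_peak = to_slant_range(0.0, 1000.0, 300.0, 5000.0)
print(r_peak < r_flat)  # elevated terrain appears at shorter range → True
```

Because the optical data are resampled by this range equation rather than orthorectified, foreshortening and layover land in the same places in both images, which is exactly what keeps the radar signatures interpretable.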
In contrast to remote sensing with optical sensors, synthetic aperture radar (SAR) satellites require a slant imaging geometry for image acquisition. Because of this geometry, and because SAR systems operate their sensors actively, the resulting shadowing effects can have a crucial influence on the information content of the image product. Additionally, information retrieval is aggravated by layover effects, where, e.g., signatures of target objects are superimposed with clutter. Especially for security applications, predicting the expected information content and calculating layover and shadow regions during mission planning could greatly improve the image product.
This paper presents a toolset to optimize the imaging geometry parameters for the image acquisition of a SAR sensor, which uses simulation techniques to find layover and shadow regions in a given target scene. The described methods are verified by applying them to TerraSAR-X system parameters and image data.
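A toy one-dimensional version illustrates the geometric criterion behind the layover prediction (the toolset itself operates on full 3D scenes; the 35-degree incidence angle and the terrain profile below are made up for the example):

```python
import numpy as np

# Toy 1D layover test for a side-looking radar: a terrain facet is in
# layover when its slope toward the sensor exceeds the local incidence
# angle, i.e. the facet top is hit by the wavefront before its base.
def layover_mask(heights, spacing, incidence_deg):
    """True for each facet (between adjacent posts) that lies in layover."""
    slope_deg = np.degrees(np.arctan(np.diff(heights) / spacing))
    return slope_deg > incidence_deg

profile = np.array([0.0, 0.0, 40.0, 40.0, 0.0])  # a 40 m block, 10 m posting
print(layover_mask(profile, 10.0, incidence_deg=35.0))
```

Only the steep sensor-facing wall of the block is flagged; a full 3D version additionally traces the line of sight behind such facets to mark the shadow region, which is the second output the mission-planning toolset needs.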
Imagery data acquired by recently launched space borne SAR systems demonstrate very good spatial resolution (e.g. one meter with TerraSAR-X). The design of such complex systems makes it compulsory to perform SAR end-to-end simulations to optimize image quality (e.g. spatial and radiometric resolution, ambiguity suppression, dynamic range, etc.). The most complex, critical and challenging modules to design are those for SAR raw data generation and SAR image generation, because the limits of computability and memory requirements are reached very quickly. Moreover, the analysis of SAR images is a demanding task because of their sensor-specific effects. Therefore, a simulation tool is under development to analyze realistic target features and make the scattering processes transparent to the user.
With the method presented in this paper, SAR images of complex scattering bodies can be generated very efficiently. This is done by directly localizing scattering centers and identifying their persistence along the synthetic aperture; the usual raw data generation and processing steps are thus omitted. The resulting images show very good similarity to reality, because scattering centers due to multipath propagation effects are also handled. Furthermore, this toolkit makes it possible to visualize the scattering centers and their evolution by mapping them onto the 3D structure of the scattering body. This makes the whole scattering process transparent, which greatly improves the understanding of the image effects. The paper presents this new approach for the application of inverse SAR (ISAR) and first simulation results.
Accurate simulation tools for the design of space borne synthetic aperture radar (SAR) systems are compulsory for analyzing a system's capabilities, because ground-based experimental tests are in most cases impossible or very costly. Through a simulation process it is possible to analyze the image quality parameters for a given system configuration, or to evaluate the effects in SAR images when this configuration is changed.
A new fast SAR image simulator (SARIS) is currently under development on the basis of an existing toolset, the SAR end-to-end simulator (SETES). This image simulator produces SAR images by using the point spread function (PSF) of a focused point target response, in contrast to SETES's very expensive raw data generation module. In SARIS, the SAR image is produced through a convolution of the PSF with the so-called reflectivity map of the scene.
In this paper, first simulation results obtained with a prototype of SARIS are given to show effects such as motion errors and low peak-to-side-lobe ratios.
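The convolution principle behind SARIS can be sketched in a few lines of NumPy (a minimal illustration with an assumed separable sinc-shaped PSF, not the simulator's actual point-target response):

```python
import numpy as np

def make_psf(size=33, cells_per_lobe=4.0):
    """Separable sinc point spread function (illustrative, not SARIS's PSF)."""
    t = (np.arange(size) - size // 2) / cells_per_lobe
    s = np.sinc(t)
    return np.outer(s, s)   # separable range/azimuth response

def convolve_same(img, ker):
    """2D 'same' convolution via zero-padded FFTs (NumPy only)."""
    sh = (img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, sh) * np.fft.rfft2(ker, sh), sh)
    r0, c0 = ker.shape[0] // 2, ker.shape[1] // 2
    return full[r0:r0 + img.shape[0], c0:c0 + img.shape[1]]

# SAR image = |PSF (*) reflectivity map|, here for a single point target:
reflectivity = np.zeros((128, 128))
reflectivity[64, 64] = 1.0
image = np.abs(convolve_same(reflectivity, make_psf()))
print(image.shape, round(float(image.max()), 2))
```

Replacing raw data generation with this convolution is what makes the simulator fast; effects such as low peak-to-side-lobe ratios can then be modeled by modifying the PSF itself instead of re-running a full end-to-end simulation.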