Hobbyist electronics have greatly improved in both quality and capability over the past several years. It is now possible to solve computationally challenging problems with equipment costing less than a few hundred US dollars. Both of the datalogger concepts presented in this work leverage this improvement by using off-the-shelf technology to replace what could previously only be done with expensive custom hardware. The processors used in these concepts, the Teensy 4.0 Audio Adapter and the CTAG BEAST, were originally designed for musicians who require the ability to manipulate multiple channels of audio simultaneously. This capability, however, also enables the construction of dataloggers capable of recording perfectly synchronized multi-channel audio, a requirement for passive phased sonar arrays. Each datalogger carries the additional benefit of low power consumption, permitting the array to be deployed for several hours before requiring recovery. Future versions of the dataloggers are expected to have mission durations comparable to existing commercial systems. This paper follows the development process of both of these concepts and compares their performance. The first concept, the Teensy, consists of two Teensy 4.0 control boards, each sandwiched between two Teensy Audio Adapters. This assembly is capable of recording up to eight channels of audio in near-perfect sync. The second concept, the BEAST, consists of a BeagleBone Black single-board computer augmented with a CTAG BEAST cape. This system is capable of recording eight channels of perfectly synchronized audio. Both of these systems were tested in the field with a four-element co-prime sonar array. The data are analyzed and the results of the comparison are presented in this work. Finally, some operational recommendations and possibilities for improvement are also discussed.
Acquiring field data for machine learning-based image enhancement techniques can be challenging. These techniques often use a supervised learning architecture that requires pairs of degraded and target images that would be unfeasible to acquire in the field. One alternative approach is to employ simulation models that can accurately capture the unique characteristics of a degraded visual environment (DVE). The fidelity of these models determines the effectiveness of the fully trained image enhancement algorithm. This paper explores the benefits and drawbacks of utilizing simulation software to properly portray underwater LiDAR capture in DVEs. This is accomplished by employing 3DSMAX Studios to generate 3D renderings of underwater targets. The Image Systems Engineering Toolbox for Cameras (ISETCAM) is then used to synthetically generate a training dataset. Subjective and objective metrics are devised to measure the effectiveness of these approaches in training a GAN-based underwater LiDAR image enhancement algorithm.
Although single-frame machine learning image restoration techniques have been shown to be effective, the proposed multi-frame approach takes advantage of both spatial and temporal information to resolve high-resolution and high-dynamic-range images. The proposed algorithm is an extension of the previously proposed algorithm DeblurGAN-C and aims to further improve the capabilities of image restoration in degraded visual environments. The main contributions of the proposed techniques include: 1) Development of an effective framework to generate a multi-frame training dataset typical of degraded visual environments; 2) Adopting a multi-frame image restoration framework that generates a single restored image as the output; 3) Conducting substantial experiments on the generated multi-frame training dataset and demonstrating the effectiveness of the proposed multi-frame image enhancement algorithm.
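The abstract does not specify how the multiple input frames are presented to the restoration network; a common convention for many-to-one restoration is to stack the degraded frames along the channel axis so the generator consumes one tensor and emits a single restored image. The sketch below illustrates only that input-shaping step; the function name and frame count are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stack_frames(frames):
    """Stack N degraded frames, each of shape (H, W, C), along the
    channel axis to form a single (H, W, N*C) multi-frame input.

    A many-to-one generator would consume this tensor and produce
    one restored (H, W, C) image (hypothetical convention; the
    paper's exact architecture is not described in the abstract).
    """
    return np.concatenate(frames, axis=-1)

# Example: five degraded RGB frames of a 64x64 scene.
frames = [np.random.rand(64, 64, 3) for _ in range(5)]
multi_frame_input = stack_frames(frames)  # shape (64, 64, 15)
```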
While machine learning-based image restoration techniques have been the focus in recent years, these algorithms are not adequate to address the effects of a degraded visual environment. An algorithm that successfully mitigates these issues is proposed. The algorithm is built upon the state-of-the-art DeblurGAN algorithm but overcomes several of its deficiencies. The key contributions of the proposed techniques include: 1) Development of an effective framework to generate training datasets typical of a degraded visual environment; 2) Adopting a correntropy-based loss function to integrate with the original VGG16-based perceptual loss function and an L1 loss function; 3) Conducting substantial experiments on images from the artificial training datasets and demonstrating the effectiveness of the proposed algorithm.