An unsolved problem in the Radio Frequency (RF) domain is transmission collisions in Amplitude Modulated (AM) radio, such as the radios used for air traffic control. In a high-traffic environment like aviation, operators often unknowingly transmit at the same time, so other radios receive both transmissions layered together. This renders both transmissions difficult, if not impossible, to understand, leading to frustration at best and, at worst, to critical transmissions being lost entirely. Machine learning (ML) can successfully separate multiple overlapping speakers in audio, and we extend this idea to the RF domain. Training a performant ML model for this scenario requires ample quantities not only of the signals containing the overlapping transmissions, but also of each original, individual signal. Such data is easier to collect for audio, where many open-source datasets for these tasks are readily available, but no comparable datasets exist for AM radio separation. Collecting adequate volumes of sufficiently diverse data would be time-consuming and expensive. To solve this problem, we turn to data generation. Using our custom data generation pipeline combined with a Deep Neural Network (DNN), we demonstrate a 98.9% increase in signal separation efficacy on AM radio compared to using audio alone. AM radio collision mitigation has broad implications, especially in congested scenarios with a high likelihood of colliding transmitters, such as aviation communications. Successful separation of such signals enables mitigation logic, leading to a smoother and safer user experience.
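One way to picture the kind of training pair such a data generation pipeline could produce is the following sketch: two audio clips are AM-modulated onto slightly offset carriers and summed, yielding a "collided" mixture alongside the clean source signals that serve as separation targets. This is not the authors' pipeline; all names and parameters (sample rate, carrier frequency, modulation index, carrier offset) are illustrative assumptions.

```python
import numpy as np

FS = 48_000          # sample rate in Hz (assumed)
CARRIER_HZ = 10_000  # illustrative carrier frequency
MOD_INDEX = 0.8      # assumed modulation depth

def am_modulate(audio: np.ndarray, carrier_hz: float, fs: int = FS,
                mod_index: float = MOD_INDEX) -> np.ndarray:
    """Textbook AM: (1 + m*x(t)) * cos(2*pi*fc*t), with x normalized to [-1, 1]."""
    x = audio / (np.max(np.abs(audio)) + 1e-12)
    t = np.arange(len(x)) / fs
    return (1.0 + mod_index * x) * np.cos(2 * np.pi * carrier_hz * t)

def make_collision(voice_a: np.ndarray, voice_b: np.ndarray,
                   offset_hz: float = 50.0):
    """Return (mixture, clean_a, clean_b): two AM signals on slightly
    offset carriers, summed to emulate an on-air transmission collision."""
    n = min(len(voice_a), len(voice_b))
    a = am_modulate(voice_a[:n], CARRIER_HZ)
    b = am_modulate(voice_b[:n], CARRIER_HZ + offset_hz)
    return a + b, a, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for two recorded voice clips (one second each).
    va, vb = rng.standard_normal(FS), rng.standard_normal(FS)
    mix, a, b = make_collision(va, vb)
    print(mix.shape)  # (48000,)
```

A real pipeline would draw the clips from speech corpora and vary carrier offset, relative power, and timing to get the diversity the abstract calls for; the point here is only that the mixture and its ground-truth components are generated together.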