Auditory implicit learning in machines versus humans
22 April 2020
Abstract
We investigated the difference in performance between humans and machines on an implicit learning task in the auditory domain. Implicit learning is the process of ingesting information, such as patterns of everyday life, without being actively aware of doing so and without formal instruction. In pattern and anomaly detection, it is desirable to learn the patterns of everyday life in order to detect irregularities. We also considered how affect, or emotion-like aspects, interacts with this process. In our experiments, we created a synthetic pattern for both positive and negative sounds using a Markov grammar, which we then asked a machine-learning algorithm or humans to process. Results indicated that learning the generated pattern is a trivial task for even a simple RNN. On a similar but more complex task, humans performed significantly better with sounds inducing positive affect than with negative sounds. Possible explanations for these outcomes are discussed, along with other potential methods for comparing human and machine implicit learning performance.
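The abstract does not specify the grammar's states or transition probabilities, but the general approach of sampling token sequences from a first-order Markov grammar can be sketched as follows; the three-state grammar and its weights here are purely illustrative assumptions, not the paper's actual stimuli.

```python
import random

# Hypothetical first-order Markov grammar: states and transition
# probabilities are invented for illustration only.
TRANSITIONS = {
    "A": {"B": 0.7, "C": 0.3},
    "B": {"A": 0.2, "C": 0.8},
    "C": {"A": 0.5, "B": 0.5},
}

def generate_sequence(length, start="A", seed=None):
    """Sample a token sequence of the given length from the grammar,
    choosing each next token according to the current token's
    transition probabilities."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        tokens, weights = zip(*TRANSITIONS[seq[-1]].items())
        seq.append(rng.choices(tokens, weights=weights)[0])
    return seq
```

Sequences drawn this way have learnable statistical regularities, which is what makes them suitable both as implicit-learning stimuli for humans (each token mapped to a sound) and as a sequence-prediction task for an RNN.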
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Lei Qian, Kathleen G. Larson, Edmund Zelnio, Rik Warren, Bradley Bush, Lukas Garcia, Trisha Kulkarni, and Susan Latiff "Auditory implicit learning in machines versus humans", Proc. SPIE 11423, Signal Processing, Sensor/Information Fusion, and Target Recognition XXIX, 114230Q (22 April 2020); https://doi.org/10.1117/12.2559478
KEYWORDS: Machine learning, Analytical research, Feature extraction, Algorithm development