To support spatial audio research, we aim to take recordings from complex acoustic environments with moving sources and microphones; however, we observe a lack of research tools that can accomplish this. Past approaches recorded people engaging in various tasks, producing rich data that unfortunately lacks repeatability. We propose using robots to recreate dynamic scenes without the inherent variability of human motion. To be useful, this Mechatronic Acoustic Research System (MARS) must be remotely accessible, offer concise representations of dynamic scenes, support a variety of robot and audio devices, and synchronize robot motion. In this talk, we show how we solved these challenges. Remote experimentation is facilitated by our virtual interface, which uses a simple GUI to describe robot motion and audio playback/recording. A digital-twin physical simulation is used for visualization and validation of motion paths. We propose using the Robot Operating System for multi-robot coordination so that networked robots can be incorporated with little overhead. We use MARS to run experiments in which a cable-driven parallel robot moves a loudspeaker along a 3D path while being recorded by distributed Matrix Voice microphone arrays. We evaluate the measured audio to show the repeatability of the system, justifying its use in research.
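The abstract's repeatability evaluation compares audio captured across repeated robot trajectories. One common way to quantify agreement between two such recordings is the peak of their normalized cross-correlation; the sketch below illustrates this idea. It is a minimal, hypothetical example, not the authors' actual analysis pipeline, and all names in it are illustrative.

```python
# Hypothetical sketch: score trial-to-trial repeatability of two recordings
# via the peak of their normalized cross-correlation. A score near 1.0 means
# the trials are nearly identical up to a small time offset.
import numpy as np

def repeatability(rec_a: np.ndarray, rec_b: np.ndarray) -> float:
    """Peak normalized cross-correlation between two recorded trials."""
    a = rec_a - rec_a.mean()
    a = a / (np.linalg.norm(a) + 1e-12)   # unit-energy normalization
    b = rec_b - rec_b.mean()
    b = b / (np.linalg.norm(b) + 1e-12)
    # 'full' mode tolerates small synchronization offsets between trials
    return float(np.max(np.correlate(a, b, mode="full")))

# Two perfectly repeated trials of a 440 Hz tone should score ~1.0.
t = np.linspace(0.0, 1.0, 8000)
trial1 = np.sin(2 * np.pi * 440 * t)
trial2 = np.sin(2 * np.pi * 440 * t)  # identical motion, identical audio
score = repeatability(trial1, trial2)
```

In practice one would apply such a metric per microphone channel across many repeated runs; a consistently high score supports the repeatability claim made in the abstract.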
Meeting abstract. No PDF available.
October 01 2022
Mechatronic acoustic research system for generating real large-scale dynamic datasets
Austin Lu
Univ. of Illinois Urbana-Champaign, 503 E Clark St., Champaign, IL 61820, austinl8@illinois.edu
Arya Nallanthighall
Univ. of Illinois Urbana-Champaign, Champaign, IL
Andrew C. Singer
Univ. of Illinois Urbana-Champaign, Urbana, IL
J. Acoust. Soc. Am. 152, A51 (2022)
Citation
Austin Lu, Ethaniel Moore, Arya Nallanthighall, Mankeerat S. Sidhu, Kanad Sarkar, Manan Mittal, Ryan M. Corey, Paris Smaragdis, Andrew C. Singer; Mechatronic acoustic research system for generating real large-scale dynamic datasets. J. Acoust. Soc. Am. 1 October 2022; 152 (4_Supplement): A51. https://doi.org/10.1121/10.0015511