Recently, multi-modal presentation systems have gained much interest as a means of studying big data with interactive user groups. One of the challenges for these systems is to provide a venue for both personalized and shared information. In particular, sound fields containing parallel audio streams can distract users from extracting the information they need. The way spatial information is processed in the brain allows humans to take in complicated visuals and focus on either details or the whole. Temporal information, however, which is often better presented through audio, is processed differently, making dense sound environments difficult to segregate. In Rensselaer's CRAIVE-Lab, sounds are presented spatially over an array of 134 loudspeakers to address individual participants who are analyzing data together. In this talk, we present and discuss different methods of improving participants' ability to focus on their designated audio streams using co-modulated visual cues. In this scheme, the virtual-reality space is combined with see-through, augmented-reality glasses to optimize the boundaries between personalized and global information. [Work supported by NSF #1229391 and the Cognitive and Immersive Systems Laboratory (CISL).]
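The following is a minimal sketch (not the authors' implementation) of the co-modulation idea described above: one sonified stream's amplitude envelope drives the brightness of its associated visual cue, giving a participant a visual anchor for the stream they are meant to follow. The sample rate, frame size, smoothing constant, and envelope-to-brightness mapping are illustrative assumptions; in the CRAIVE-Lab the cue would be rendered on the display surface or in the AR glasses rather than printed.

# Sketch: co-modulating a visual cue with one audio stream's amplitude envelope.
# All parameter values below are illustrative assumptions.
import numpy as np

FS = 48_000     # audio sample rate (Hz), assumed
FRAME = 1_024   # analysis hop per visual update (~21 ms at 48 kHz), assumed

def envelope(frame: np.ndarray, prev: float, alpha: float = 0.2) -> float:
    """One-pole smoothed RMS envelope of an audio frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    return alpha * rms + (1.0 - alpha) * prev

def brightness_from_envelope(env: float, floor: float = 0.15) -> float:
    """Map the envelope to a visual brightness in [floor, 1]."""
    return floor + (1.0 - floor) * min(env * 10.0, 1.0)

# Example: two parallel sonified streams. The participant hears the mixture,
# but only the "target" stream co-modulates the brightness of its visual cue.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
target = 0.5 * np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
distractor = 0.3 * rng.standard_normal(FS)
mix = target + distractor  # in the real system this mixture would be spatialized over the loudspeaker array

env = 0.0
for start in range(0, FS - FRAME, FRAME):
    env = envelope(target[start:start + FRAME], env)
    cue_brightness = brightness_from_envelope(env)
    # The real display would update the participant's visual/AR cue here;
    # this sketch just prints a few sampled values.
    if start % (FRAME * 12) == 0:
        print(f"t={start / FS:5.2f}s  brightness={cue_brightness:.2f}")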
Meeting abstract
May 01 2017
Using visual cues to perceptually extract sonified data in collaborative, immersive big-data display systems
Wendy Lee; Samuel Chabot; Jonas Braasch
School of Architecture, Rensselaer Polytechnic Inst., 110 8th St., Troy, NY 12180, leew14@rpi.edu
J. Acoust. Soc. Am. 141, 3896 (2017)
Citation
Wendy Lee, Samuel Chabot, Jonas Braasch; Using visual cues to perceptually extract sonified data in collaborative, immersive big-data display systems. J. Acoust. Soc. Am. 1 May 2017; 141 (5_Supplement): 3896. https://doi.org/10.1121/1.4988751