User interaction with virtual acoustic environments (VAEs) has attracted increasing research interest. So far, however, little attention has been paid to interaction by means of self-generated sound, such as the user's own voice, even though this would open up new possibilities for natural interaction. Moreover, there is evidence that adequate reproduction of self-generated sound can enhance presence. This paper presents a system that enables such interaction with the VAE. To this end, the so-called reactive VAE captures the self-generated sound, feeds it back into the virtual room, and returns the acoustic response to the user's actions in real time. Its distinctive features are that it accounts for the dynamic directivity of the sound source and that it works with arbitrary sources. The paper describes the implementation of the system, a technical evaluation confirming its basic functionality, and two implementation examples. In addition, it discusses the general technical constraints of the system architecture. The potential applications of the reactive VAE are diverse, ranging from a virtual practice room for musicians to a research tool. In future work, the system will serve as a basis for experiments investigating the influence of self-generated sound on human perceptual processes.
