Virtual acoustics aims to immerse the user in a virtual acoustic environment (VAE). However, most VAE systems do not feed self-generated sound back into the virtual room, even though there is evidence that adequate reproduction of self-generated sound affects the user's perception and may even enhance immersion. Consequently, sonic interaction between the user and the virtual room is very limited in most current systems. This work presents a VAE system that captures and reproduces self-generated sound in real time, complementing the VAE with a reactive component that provides the acoustic response to the user's actions. In contrast to the few reactive VAEs introduced so far, the system presented here accounts for the time-varying directivity of the user or sound source and works with arbitrary sources in general. The study includes a first technical evaluation of the system as well as an example application of a virtual concert hall. The reactive VAE can serve as a virtual practice room for musicians, or as a tool for psychoacoustic experiments investigating the influence of self-generated sound on human perceptual processes.