This paper describes a methodology for incorporating physically accurate sound composition into forensic visualizations. The use of sound in forensic visualization provides the viewer with a more realistic and comprehensive understanding of the actual accident events. Forensic visualization represents complex events, such as car crashes, through animation, making the accident easier to understand. Without sound, however, a visual representation of an accident lacks important information. For instance, sound adds a spatial dimension to animations, defining space through reflection and reverberation. Sound also conveys important details such as the duration and severity of an accident and the sounds potentially heard by a witness. Sound further allows the viewer to experience events in the accident that are occluded from view. Currently, there is no methodology for compositing sound in an animation so that it follows acoustical principles and reflects the specifics of an accident. Acoustical principles define how sound attenuates, reflects, dampens, blends, and changes in pitch and sound level. The unique circumstances of an accident define which sounds are present and the timing and sequencing of those sounds. This paper provides a methodology for creating sound that both follows acoustical principles and reflects the unique circumstances of an accident.
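Two of the acoustical principles named above, attenuation with distance and pitch change for a moving source, can be illustrated with standard textbook relationships. The sketch below is not the paper's methodology; it is a minimal illustration assuming spherical (inverse-square) spreading for a point source and the classical Doppler formula, with an arbitrary example source level and witness distance chosen for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed)

def attenuated_spl(spl_ref_db: float, r_ref: float, r: float) -> float:
    """Sound pressure level at distance r for a point source,
    assuming spherical spreading: SPL drops by 20*log10(r / r_ref) dB."""
    return spl_ref_db - 20.0 * math.log10(r / r_ref)

def doppler_frequency(f_source: float, v_source: float, v_listener: float = 0.0) -> float:
    """Perceived frequency for a source moving toward the listener:
    f' = f * (c + v_listener) / (c - v_source)."""
    return f_source * (SPEED_OF_SOUND + v_listener) / (SPEED_OF_SOUND - v_source)

# Hypothetical example: a 100 dB SPL sound measured at 1 m, heard by a
# stationary witness 50 m away from a car approaching at 25 m/s (~90 km/h).
print(round(attenuated_spl(100.0, 1.0, 50.0), 1))   # ≈ 66.0 dB at the witness
print(round(doppler_frequency(1000.0, 25.0), 1))    # ≈ 1078.6 Hz perceived pitch
```

Relationships such as these govern how recorded or synthesized sounds would be leveled, filtered, and pitch-shifted when composited into an accident animation; the paper's full methodology is available in the PDF.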
