
Virtual Reality: Auditory

Virtual reality (VR) systems provide users with a computer-mediated sensory world, a virtual world within which they can act and interact. Popular fiction has portrayed the VR experience as generating an illusion so realistic that users can forget they are experiencing a computer-generated world. In practice, however, such strong illusions do not typically result unless users are presented with a coordinated interactive display of information via multiple sensory modalities. Therefore, VR systems often include auditory displays as well as visual displays so that users can hear as well as see what occurs in the virtual environments generated for them through computer simulations. Of course, haptic display technology can also be an extremely important component of a VR system because it supports the user experience of touching and feeling virtual objects; but here again, the illusion of reality is best supported if an appropriate sound is heard, for example, whenever the user's hand makes contact with a virtual object. This entry covers spatial auditory displays, perception and action in auditory virtual reality, and research in auditory virtual reality.

Consider the potential advantage users might enjoy if they are able to receive coordinated multimodal stimulation in a telerobotic application, such as remote operation of a vehicle through unknown terrain. Driving a vehicle remotely will almost certainly be facilitated if users are able to listen to sound arriving from all directions around them while focusing their gaze on the environment to be navigated via a relatively constrained visual field. For example, imagine how difficult it would be to drive a car on ice and drifting snow without being able to hear or feel the change in the traction of the tires. Such auditory and haptic feedback could well guide user behavior, allowing successful navigation when vision alone is not enough.

Of course, there are many VR systems serving applications that do not include auditory displays, such as VR-based scientific visualization systems. These are systems that operate upon sets of data that exist in three or more dimensions and can represent the datasets as solid objects that users can explore visually, without any sonic component. On the other hand, VR-based entertainment systems as a rule will include audio, and many use sophisticated audio signal processing to create realistic spatial impressions of virtual sound sources positioned in the virtual environments in which users find themselves. For example, immersive VR-based games use spatial auditory display technology to provide users with an enhanced awareness of their situation. Indeed, just as in everyday life, auditory events occurring outside of the field of view often lead users to direct their gaze to bring the source of the auditory event into focal vision.

Spatial Auditory Displays

Spatial auditory display technology is used to produce, at the ears of VR-system users, sound signals that closely match the signals that would reach their ears if the sound sources were present in an actual acoustic environment similar to the one the virtual environment simulates. The common approach to such virtual acoustic simulation is headphone reproduction of sound signals created through binaural synthesis, although multichannel loudspeaker systems can also create convincing illusions of spatially immersive virtual acoustic environments. Binaural synthesis uses measured or simulated head-related transfer functions (HRTFs) that capture how impinging sounds are transformed by the acoustics of a listener's head and external ears. These HRTFs impose acoustic cues on sound sources that enable listeners to localize those sources in three-dimensional (3-D) space. Although source movement to the left and right of the listener can be cued simply by changing the relative level of the sound at the two ears, HRTFs can enable distinctions between sources moving forward and backward or upward and downward in space. These HRTFs can be measured for each individual by placing small microphones inside their ears, and the results of these measurements can be stored for subsequent use in binaural sound processing for that individual. Such binaural synthesis has been commercially available via real-time digital signal processing since the early 1980s, but there remains some controversy surrounding whether the best HRTF-based spatial auditory display results require the use of HRTFs measured for the individual user. Generic transfer functions that offer a generalized solution to the problem of sound spatialization may be adequate for most users in many VR applications, and contemporary binaural synthesis solutions often provide a means for customizing the transfer functions according to the anatomical size of the user.
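The core signal-processing step of binaural synthesis described above can be illustrated with a minimal sketch: a mono source signal is convolved with the left-ear and right-ear head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs) for the desired source direction. The HRIRs below are tiny synthetic placeholders, not measured data; a real system would load HRIRs measured with in-ear microphones, or drawn from a generic database, and typically perform the convolution in real time.

```python
# Sketch of binaural synthesis: convolve a mono source with per-ear HRIRs
# to produce a stereo (left/right) headphone signal. The HRIRs here are
# hypothetical placeholders that crudely mimic a source to the listener's
# left via interaural time and level differences.
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear HRIRs; returns a (2, N) array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Toy HRIRs (same length so the stereo channels align): the right-ear
# response is delayed by 2 samples and attenuated relative to the left.
hrir_l = np.array([1.0, 0.3, 0.1, 0.0])
hrir_r = np.array([0.0, 0.0, 0.6, 0.2])

source = np.random.default_rng(0).standard_normal(1024)  # mono test signal
stereo = binaural_render(source, hrir_l, hrir_r)
print(stereo.shape)  # (2, 1024 + 4 - 1): full convolution length per ear
```

Because the left-ear response here carries more energy and arrives earlier, the rendered signal exhibits the interaural level and time differences that cue a leftward source; HRIR pairs additionally encode the spectral shaping by the pinnae that disambiguates front from back and up from down.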

...
