Extensions:Uni-Verse/General overview of the audio system

Introduction

This document presents a distributed, interactive, audiovisual virtual reality system. In addition to a visual rendering tool, the system contains a complete set of tools for producing interactive virtual audio environments from dynamic 3D geometries. The aim of the system is to produce realistic audio-visual environments in real time, in which dynamic changes to the scene are possible. This is achieved with multiple applications sharing the same 3D information and a chain of novel sound rendering tools. All components shown in Figure 1 are briefly explained in this document; more information about each component can be found on other pages of this wiki.

Figure 1. General overview of the Uni-Verse audio system

Uni-Verse tools for graphics

The backbone for sharing 3D data is Verse, as emphasized in Figure 1. Changes made by any application are seen by all the other applications, and dynamic changes to the 3D geometry are immediately heard in the auralized sound when the acoustic simulation and the sound renderer are running. A visual rendering engine such as Quel Solaar or the OSG renderer lets users see the modifications made by the content creation tools; it can be thought of as a window onto the virtual world hosted by the Verse server. Changes to textures, modifications to the geometry, and added objects are visible to anyone using this tool. Running the visual renderer on a separate computer can also ease the load on the computer running the content creation tool. Different applications can be used to move the listener, but the visual rendering engine is the obvious choice because graphics are seen and sound is heard from the first-person viewpoint. The visual renderer can update the listener's position and orientation on the Verse server at high refresh rates, keeping the visual and aural feedback continuous; a minimal sketch of such an update loop is given below.
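
The sketch below illustrates the idea of the per-frame listener update. The real Verse client API is a C library; the `server.send` call, the `ListenerState` record, and the data layout here are purely illustrative assumptions, not the actual protocol.

```python
import time
from dataclasses import dataclass


@dataclass
class ListenerState:
    """Position and orientation of the listener in the shared 3D scene."""
    position: tuple      # (x, y, z) in scene coordinates
    orientation: tuple   # quaternion (w, x, y, z)


def push_listener_update(server, state: ListenerState) -> None:
    """Stand-in for the actual Verse update call; a real client would
    write these values into the listener's transform node on the server."""
    server.send(("listener", state.position, state.orientation))


def render_loop(server, get_camera_state, target_hz: float = 60.0) -> None:
    """Each frame, mirror the camera transform to the shared server so the
    sound renderer hears the scene from the first-person viewpoint."""
    frame_time = 1.0 / target_hz
    while True:
        push_listener_update(server, get_camera_state())
        time.sleep(frame_time)
```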

Content creation tools are used to modify the 3D geometry. Plugins for the 3D modeling applications Blender and Autodesk 3ds Max let multiple users collaborate by building and modifying the same geometry stored on the Verse server. This geometry is shared with the acoustic simulation tool, which reacts immediately to changes in the geometry. In addition, these applications can create and move multiple sound sources. When a sound source is placed in the geometry, the URL of a sound file or stream is defined; the URL is attached to the sound source information and forwarded through Verse to the other applications. Content creation tools are also used to define the positions of the sound sources and to move them inside the model; a sketch of the kind of record a plugin might attach to a source follows below. More information about the plugin for 3ds Max is available here and for Blender here.
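
As a rough illustration, a sound source shared through Verse carries at least a position and the URL of its audio. The field names, the example URL, and the record layout below are assumptions for illustration, not the plugin's actual data model.

```python
from dataclasses import dataclass


@dataclass
class SoundSource:
    """Illustrative record of what a content creation tool attaches to a
    sound source before sharing it through Verse (names are assumptions)."""
    name: str
    position: tuple   # (x, y, z) inside the model
    url: str          # sound file or stream to play at this position


# A plugin might create a source like this and forward it via the server;
# the URL travels with the source so the sound renderer knows what to fetch.
fountain = SoundSource(
    name="fountain",
    position=(2.0, 0.0, -3.5),
    url="http://example.com/audio/fountain.ogg",  # hypothetical URL
)
```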

Geometry reduction

The geometric data used by the graphics tools is usually too complex to be used directly in the acoustic simulation. A geometry reduction module, which uses a novel simplification technique, is therefore attached to the system. It reads the data from the Verse server, reduces its geometric complexity, and forwards the result to the acoustic simulation module. The process runs at interactive rates, which allows the use of dynamic geometries. A simple stand-in for such a reduction is sketched below.
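
The module's actual simplification technique is not detailed here. As a loose illustration of trading geometric detail for simulation speed, the sketch below implements plain vertex clustering; this algorithm choice is our assumption, not the module's real method.

```python
import math
from collections import defaultdict


def cluster_simplify(vertices, triangles, cell_size):
    """Reduce mesh complexity by snapping vertices to a regular grid and
    merging everything that falls into the same cell (vertex clustering)."""
    cell_of = {}                                        # vertex index -> cell key
    centroid = defaultdict(lambda: [0.0, 0.0, 0.0, 0])  # running sums per cell
    for i, (x, y, z) in enumerate(vertices):
        key = (math.floor(x / cell_size),
               math.floor(y / cell_size),
               math.floor(z / cell_size))
        cell_of[i] = key
        c = centroid[key]
        c[0] += x; c[1] += y; c[2] += z; c[3] += 1

    # One representative vertex (the centroid) per occupied cell.
    new_index = {key: j for j, key in enumerate(centroid)}
    new_vertices = [(c[0] / c[3], c[1] / c[3], c[2] / c[3])
                    for c in centroid.values()]

    # Keep only triangles whose corners land in three distinct cells;
    # degenerate triangles collapse away, shrinking the mesh.
    new_triangles = []
    for a, b, c in triangles:
        ia, ib, ic = (new_index[cell_of[a]],
                      new_index[cell_of[b]],
                      new_index[cell_of[c]])
        if len({ia, ib, ic}) == 3:
            new_triangles.append((ia, ib, ic))
    return new_vertices, new_triangles
```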

Acoustic simulation

The role of the room acoustic simulation module (UVAS) is to determine how the sound emitted by the sound sources propagates through the model to the listener. The module receives the model geometry from a Verse server and uses the beam-tracing method to find the relevant early reflections of the active sound sources. The result is a set of reflection trees, one for each active sound source, describing the reflections and their relations. These trees are sent to the sound renderer, which is responsible for computing the audible result. The sketch below illustrates the underlying image-source construction.
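
Beam tracing itself involves visibility computations that are out of scope here, but the reflection trees it produces follow the classic image-source idea: each reflecting surface mirrors the source, and mirroring an image again gives a higher-order reflection. The sketch below builds such a tree without any visibility pruning; the type names and the lack of pruning are simplifying assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Plane:
    """A reflecting surface: points p on the plane satisfy dot(n, p) == d."""
    n: tuple   # unit normal (nx, ny, nz)
    d: float   # plane offset


def mirror(point, plane):
    """Reflect a point across a plane (the image-source construction)."""
    px, py, pz = point
    nx, ny, nz = plane.n
    dist = px * nx + py * ny + pz * nz - plane.d
    return (px - 2 * dist * nx, py - 2 * dist * ny, pz - 2 * dist * nz)


@dataclass
class ReflectionNode:
    """One node of a reflection tree: an (image) source position plus the
    child nodes obtained by reflecting it in further surfaces."""
    position: tuple
    surface: Optional[Plane] = None          # surface that created this image
    children: list = field(default_factory=list)


def build_reflection_tree(source, planes, depth):
    """Expand all reflection sequences up to a given order. A real beam
    tracer would also prune image sources that cannot reach the listener."""
    root = ReflectionNode(position=source)

    def expand(node, remaining):
        if remaining == 0:
            return
        for plane in planes:
            child = ReflectionNode(mirror(node.position, plane), plane)
            node.children.append(child)
            expand(child, remaining - 1)

    expand(root, depth)
    return root
```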

Sound renderer

The sound renderer is connected to the room acoustic simulation module UVAS, from which it receives commands for updating the aural response. The instructions for rendering the audio environment are sent via the Uni-Verse Sound Rendering Protocol (UVSRP). The sound renderer is the back end of the chain of acoustic simulation modules and reproduces the sound for the listener over loudspeakers or headphones. It auralizes the direct sound and the early reflection paths calculated by UVAS and adds artificial reverberation. The data transmitted to the sound renderer is stored in reflection trees that consist of listener, source, and image source information, including, e.g., position, orientation, visibility, the URL of the sound file, and child-parent relationships. The sketch below shows the core of such an auralization step.
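
As a rough picture of what auralizing the paths means, the sketch below mixes delayed, distance-attenuated copies of a dry signal, one per source or image source. The function names, the simple 1/r gain law, and the omission of per-reflection filtering, binaural/loudspeaker panning, and the late reverb stage are all simplifying assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def distance(a, b):
    return float(np.linalg.norm(np.subtract(a, b)))


def render_paths(dry, sample_rate, listener, image_sources):
    """Mix the direct sound and early reflections: each entry in
    image_sources (the true source plus its image sources from the
    reflection tree) becomes a delayed, attenuated copy of the dry signal.
    Artificial late reverberation would be added on top of this output."""
    max_delay = max(distance(listener, s) for s in image_sources) / SPEED_OF_SOUND
    out = np.zeros(len(dry) + int(max_delay * sample_rate) + 1)
    for src in image_sources:
        d = distance(listener, src)
        delay = int(d / SPEED_OF_SOUND * sample_rate)   # propagation delay
        gain = 1.0 / max(d, 1.0)                        # 1/r distance law
        out[delay:delay + len(dry)] += gain * dry
    return out
```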