Wwise Spatial Audio Implementation Workflow in Scars Above

What is this article about?
What is Spatial Audio API?
Spatial Audio API Workflow
    Rooms and Portals
        Asset organization and naming conventions
            Naming Conventions
            Wwise objects organization
            Unreal Editor assets organization 
        Planning
            Defining zones
        Initial setup
            Wwise setup
            Unreal Editor setup
        Implementation
            Phase 1: Unreal Editor side
            Phase 2: Wwise side
            Phase 3: Unreal Editor side
        Testing / Troubleshooting
            Wwise RoomVerb
            Rooms and Portals
    Ak Geometry
        What is Ak Geometry API?
        Diffraction and transmission loss
        When do we use Ak Geometry API?
        Implementation
        Testing / Troubleshooting
            Unreal Editor side
            Wwise side
    Optimization
        Optimization of AkGeometry by associating with a Room
        Turning off Diffraction on fully convex volumes
        Turning off floor surfaces
        Remove obsolete portals
        Have simpler geometry
        Per platform Spatial Audio settings
        World Composition issues
    Our custom solutions
        Custom Listener
        VisualizeAkGeometry
        SplineAudioWalls
        Weapon Tail Switching
Final thoughts

What is this article about?

In this article, I will delve into the intricacies of Spatial Audio and how it applies to Scars Above, our soon-to-be-released third-person action-adventure game. Scars Above features many different areas and spaces to traverse during gameplay, each with physical and acoustic characteristics that should shape the soundscape so that the immersion and realism conveyed to the player are heightened and never broken. The Spatial Audio API, provided by Audiokinetic as part of the Wwise integration, has proven more than capable of delivering such an experience to the player, if implemented correctly. I will focus on the technical side of that implementation, aimed at a Sound Designer, with the goal of presenting a clear and concise guide on getting the most out of Spatial Audio in a game, followed by advice on troubleshooting and optimization and the work we at Mad Head Games have done to further improve the Spatial Audio implementation workflow.

I won't be going into detail about setting up the project or what goes on behind the scenes of the Wwise Authoring tool and the Unreal Editor. Moderate proficiency in Wwise and Unreal Engine is expected from a Sound Designer dedicated to Spatial Audio implementation. Having said that, anything that might be missing from this document can be found in the online documentation, so I believe that even beginners can find the information provided here useful.

This article is written with Scars Above in mind, so some details might not apply to all projects. Scars Above is built on our custom engine based on Unreal Engine version 4.27.2 and will ship with the Wwise Unreal Engine integration version 2021.1.9.7847.2311. We did not use Wwise Reflect, and we packaged our sounds using the legacy SoundBank system (no Event-Based Packaging). For the sake of simplicity, and since it was our main source of reverberation processing, I will be using the Wwise RoomVerb plug-in in all examples.

What is Spatial Audio API?

The Spatial Audio API is the part of the Wwise API that contains a set of tools for emulating the acoustic environment of a game. Using these tools, it is possible to faithfully recreate many acoustic properties of sound, such as reflection, propagation, diffraction and other acoustic phenomena, with the purpose of simulating a realistic acoustic environment as a supplement to the physical environment of a game level. The purpose of the simulation is to enhance the feeling of realism in the game, to help players orient themselves in space and to provide important information about actions taking place around the player.

A Sound Designer designated for Spatial Audio implementation should be thoroughly familiar with its workflow. Ideally, their job of implementing Spatial Audio would start after the level's Final Blockout, once all the meshes, blocking volumes and other geometry are placed and a full playthrough of the level is possible. Implementation of Spatial Audio can be done in parallel with other Level Design phases that come after the blockout (like dressing) and shouldn't be a dependency for any other department. It's important to note, though, that the Spatial Audio implementation is highly dependent on any changes to the level layout, so it is necessary to keep track of the work being done on the level by our colleagues from the Level Design team, should they further iterate on the level.

It is beneficial for a Sound Designer designated for Spatial Audio implementation to possess a basic knowledge of 3D modeling and/or practical experience in using the Brush Editing tool in Unreal Engine, since we will mostly rely on using the Brush Editing tool to mold the geometry used by Spatial Audio.

Spatial Audio API Workflow

There are two separate functionalities of Spatial Audio API which can be utilized in parallel in order to achieve a desired level of acoustic realism in the game: Rooms and Portals and AkGeometry API. Their usage will be described in detail in the following parts of this document.

Rooms and Portals

From the Audiokinetic Wwise documentation:

"The rooms and portals network can be thought of as a high level abstraction (or as a low level-of-detail version) of the surrounding geometry. With care, the combination of rooms and portals with level geometry can result in an acoustic simulation that is both detailed and efficient."

A Sound Designer's job when working with Rooms and Portals is to manually populate the Sound sublevel with Room and Portal actors in such a way that they form a coherent network of volumes which, in a manner of speaking, fills the entire "negative space" of the level. I like to call it "negative space" because it encompasses all the empty areas from which a sound can be transmitted or received, such as exterior and interior spaces that are usually filled with air, in contrast to the hard, tangible, physical, "positive" space consisting of solid material, in which no transmitters or receivers of sound should exist.

Rooms are represented in our game by the AkSpatialAudioVolume actor class. These actors can cover one or more areas of the level, and their properties allow us to define several features, such as reflections off their sides and edges, reverberation via the appropriate Wwise Aux asset, assignment of Room Tones and so on. An AkSpatialAudioVolume actor has three main components, Late Reverb, Room and Surface Reflectors, all of them attached to the BrushComponent and each covering a separate Spatial Audio functionality.

Portals, on the other hand, are invisible AkAcousticPortal volume actors that are used to cover all kinds of openings of a Room, including ones connecting two Rooms together. They have one PortalComponent attached to the BrushComponent.

All these volumes are placed manually on the level and cover said "negative space" in such a way that their boundaries do not overlap with meshes and other volumes in an undesirable and unplanned way.

Asset organization and naming conventions

There are five types of objects and assets which are a part of the Spatial Audio workflow, two of them on the Wwise side and three on the Unreal Editor side. On the Wwise side we have the Wwise RoomVerb presets (as a part of the ShareSets Hierarchy) and the Auxiliary Busses (as a part of the Master-Mixer Hierarchy), while on the Unreal Editor side we have the Audiokinetic Auxiliary Busses (as .uassets), the AkSpatialAudioVolumes and the AkAcousticPortals (as actor class instances on a level).

Wwise assets:

  • Wwise reverb (RoomVerb, Convolution Reverb) presets;
  • Auxiliary Busses.

Unreal Editor assets:

  • Auxiliary Busses;
  • AkSpatialAudioVolumes;
  • AkAcousticPortals.

Note: As stated at the beginning of the document, we will only be using the Wwise RoomVerb plug-in as an example for reverberation simulation, but any other kind of time-based effect can be assigned to a bus.

Naming conventions

We've agreed that all types of Spatial Audio assets should contain 'AX' as a prefix in their name, in line with the rest of our naming conventions (SX for sound effects, MX for music tracks, etc.). AkSpatialAudioVolumes which only have Late Reverb enabled have 'LR' appended to 'AX', while AkSpatialAudioVolumes which are also used as Rooms have 'Room' instead of 'LR' in their name. Wwise RoomVerb presets and Auxiliary Bus assets are named the same, and their descriptive names are formulated based on the Spatial Audio Layout document for that level (explained below in the 'Planning' section). Unreal Editor assets must be named the same way as their counterparts in Wwise.
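
For illustration, hypothetical names following this scheme might look like this (these are made up for the example, not taken from the shipped project):

  • AX_LR_Swamp_Clearing - an AkSpatialAudioVolume with only Late Reverb enabled;
  • AX_Room_Cave_CrystalChamber - an AkSpatialAudioVolume used as a Room, with a Wwise RoomVerb preset and an Auxiliary Bus bearing the same name;
  • AX_Portal_Cave_CrystalChamber_01 - an AkAcousticPortal covering one of that Room's openings (portal naming is described further below).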

Wwise objects organization

Wwise RoomVerb presets are a part of the ShareSets hierarchy which can be found in the ShareSets tab of the Project Explorer in the Wwise Authoring tool. I decided to organize presets by putting them in a separate folder for each level, in the following manner:

Picture 1: Reverb ShareSets hierarchy

Basically, each level zone should have its own preset, in a folder pertaining to the level. More on defining level zones later.

The other type of Wwise objects that we will be using are Auxiliary Busses. Auxiliary Busses are a part of the Master-Mixer hierarchy, which can be found in the Audio tab of the Project Explorer, organized in a similar manner to ShareSets:

Picture 2: Master-Mixer Auxiliary Busses hierarchy in the Wwise authoring tool

Each Auxiliary Bus should be created as a child of an Audio Bus referring to the level. Those Auxiliary Busses should have the same name as their ShareSet preset counterpart, since we will be assigning each ShareSet preset to its Auxiliary Bus later.

  • Using generic Auxiliary Busses on different zones

In Scars Above, we applied a second approach to organizing and implementing Auxiliary Busses. Besides creating a dedicated Aux Bus for each zone (as explained above), we also created two collections of generic Auxiliary Busses that can be used in situations where there is no need to create a preset for each zone. We divided those into Exterior and Interior types of busses and gave them descriptive names.

Picture 3: Generic Auxiliary Busses hierarchy

The main advantage of this approach is the ability to quickly assign busses to several similar zones on the level. On the other hand, since the same Aux Busses are reused across many zones, we cannot tweak a preset for one zone without affecting all the others. In Scars Above, we struck a good compromise: dedicated presets for the more diverse topography, and generic presets for areas that are similar and less distinct.

Unreal Editor assets organization

As I mentioned, we will be using three types of Unreal assets: Audiokinetic Auxiliary Bus .uassets, AkSpatialAudioVolume and AkAcousticPortal actors.

We will be creating Audiokinetic Auxiliary Busses by dragging and dropping them from the WAAPI Picker into the Content Browser.

AkSpatialAudioVolume and AkAcousticPortal are actor classes that are a part of the Rooms and Portals API. We will be placing them directly on our sublevel.

  • Inside the Content Browser

If we open the WAAPI Picker and find the AuxBus folder, we should see the same Master-Mixer hierarchy that we previously created in the authoring tool. Now we just drag the Audiokinetic Auxiliary Bus assets from the WAAPI Picker into the appropriate folder where we will be keeping our Spatial Audio assets for that level. If you are still using the legacy SoundBank functionality, each Audiokinetic Auxiliary Bus asset should be opened and the appropriate SoundBank should be assigned.

Picture 4: Master-Mixer Auxiliary Busses hierarchy in the WAAPI Picker


  • Inside the World Outliner

All instances of our Spatial Audio actors on the level (listed in the World Outliner) should be grouped into a dedicated Spatial Audio folder inside the Audio folder, by moving them to that folder after being placed on the level. I suggest having a separate folder just for AkAcousticPortals.

Picture 5: Spatial Audio actor instances in the World Outliner

Planning

Creating a network of volumes can seem like a daunting task, especially if the level is large, with many irregular areas and crossing paths. While those spaces can add to the appeal, they also make Spatial Audio implementation a lot harder, so proper planning is the key.

I suggest dividing the level into segments, large ones and then smaller ones, in a process I call 'zoning'. The goal at the end of the zoning phase is to have a finished document containing the Spatial Audio Layout for the level. This document should contain a floor plan with defined zones and positions of Rooms and Portals, along with the name of the zones and their priorities. To properly sketch a layout, we would need to consult all the existing documentation related to the level design. It would also help to have a talk with the level designer or the game designer about the concept of the level itself, its visual aesthetics and atmosphere that the level should convey. When it comes to the more complicated levels, talking to the level designer should reveal if there are any rooms/nooks/passages that are not easily noticeable. If the level design team has already made their own version of zoning, it's desirable to reuse their zone naming for our assets.

Defining zones

Zones represent parts of the level which are acoustically and visually distinct enough that they may be treated as separate areas when it comes to asset organization and naming. Zones can be General, Specific and Rooms, depending on the complexity of the level, each subsequent type being a subset of the previous. Zones may or may not be represented by a dedicated AkSpatialAudioVolume, which on the other hand may or may not be defined as a Room. Depending on which type of a zone the AkSpatialAudioVolume represents, a priority number is assigned, with more general zones having lower priority numbers than the more specific ones.

Picture 6: An example of a level divided into General zones

Levels of priority based on the zone type:

  1. Zones (General) - Priority level: 1 (or higher)
  2. Zones (Specific) - Priority level: 2 (or higher)
  3. Rooms - Priority level: 3 (or higher)

Note: Zones of the same type can have different priorities in situations where two or more zones overlap. In such cases, we need to decide which zone takes priority over the other. If the overlapping zones have the same priority, which zone ends up active is unpredictable.
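
For example, if a General swamp zone at priority 1 contains a Specific clearing at priority 2, and a small cave Room at priority 3 sits inside the clearing, the cave's volume wins wherever all three overlap (the zone names here are made up for illustration).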

Specific zones are parts of General zones that are distinct enough to warrant a dedicated AkSpatialAudioVolume instance placed inside the AkSpatialAudioVolume of their parent General zone, with a higher priority than it.

Picture 7: An example of a General zone divided into Specific zones

Rooms represent interiors, as well as exterior spaces that are enclosed for the most part and have openings that usually need to be covered by portals. They are a subset of General or Specific zones and have higher priorities than their parent zone. In the layout document they are represented by a screenshot of the room which should depict its size and acoustic characteristics, alongside its name and priority number.

Picture 8: An example of what a Room might look like

Initial setup

Before starting the implementation of Spatial Audio features in our game, there are things to consider when it comes to setting up our project's initial settings as well as defining parameters of our Wwise objects and Unreal Engine classes.

Wwise setup

  • Enable Diffraction and Transmission

In order for a sound object to be processed by Spatial Audio's obstruction and occlusion, it needs to be included in the calculations for diffraction, which is done by enabling the 'Diffraction and Transmission' option in the Positioning tab.

Picture 9: The 'Diffraction and Transmission' option in the Positioning tab

This is applied to all objects that are a part of the Actor-Mixer Hierarchy as well as Auxiliary Busses that are to be assigned to the AkSpatialAudioVolumes as 3D busses.

  • Enable Game-Defined Auxiliary Sends

In order for a sound to be sent to the Auxiliary Bus of a Room, i.e. its Wwise RoomVerb effect, Game-Defined Auxiliary Sends must be enabled. This is done by enabling the 'Use game-defined aux sends' option in the General Settings tab of any Actor-Mixer Hierarchy object we wish to send, as well as of any Auxiliary Bus whose reverb we want sent to other Rooms' reverbs.

Picture 10: Enabled 'Use game-defined aux sends' of an actor-mixer

Objects that do not have this option checked will not be processed by the Wwise RoomVerb effect of any volume in the game.

  • Global Obstruction/Occlusion curves

Having Spatial Audio in our game means that we will be doing all sorts of calculations at runtime, deciding on the amount of processing that is applied to each Game Object that is being sent to the Spatial Audio engine. Two main parameters are sent by the game engine to Wwise: Diffraction and Transmission Loss. Diffraction is tied to the amount of Obstruction of the sound while Transmission Loss is tied to the amount of Occlusion of the sound.

How those parameters will influence the sound is determined by the Volume, LPF and HPF curves for both the Obstruction and Occlusion. Those curves can be defined globally or per sound as RTPCs.

Global Obstruction and Occlusion curves can be set in the Wwise Project Settings under the Obstruction/Occlusion tab.

Picture 11: Global Obstruction/Occlusion curves defined in the Project Settings

Having these curves set up will automatically apply the processing to all sounds that are being sent for Diffraction and Transmission calculations.

Note: I would suggest using global Occlusion curves instead of Transmission Loss RTPCs for occlusion calculation. The reason is that the Transmission Loss calculation via RTPC curves is not done for each sound path separately, but instead provides one value per game object, meaning that the processing applied to the direct path will also affect any potential diffraction paths originating from the emitter. Using global Obstruction and Occlusion curves allows each ray to be processed independently and provides greater accuracy when modeling both transmission and diffraction.

Unreal Editor setup

  • Occlusion Refresh Rate

The alternative to Spatial Audio's obstruction and occlusion processing is Unreal Engine's built-in occlusion calculation, which is enabled by default and can be used in parallel with Spatial Audio. Compared to Spatial Audio and how it's integrated into the Wwise project, Unreal Engine's occlusion is rather crude, so if we decide to switch to Spatial Audio diffraction, it's best to turn Unreal Engine's occlusion off.

Unreal Engine's occlusion is applied to each instance of AkComponent in the game, in the form of the AkComponent's 'Occlusion Refresh Interval' parameter in the Occlusion category. The default value is 0.2 seconds, meaning that every fifth of a second a new Occlusion value is calculated for that AkComponent. Setting the value to 0 disables Unreal Engine's occlusion completely.

Picture 12: Setting the AkComponent's 'Occlusion Refresh Interval' to zero

Since this needs to be done on each AkComponent in the game, it's best to ask a colleague from the programming team to change the default value of the parameter to 0.0f inside the constructor in AkComponent.cpp, so you won't have to worry about it in the future.
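
As a rough sketch of what that change might look like (the property name matches the 'Occlusion Refresh Interval' field in the 2021.1 integration; the rest of the constructor is omitted):

    // AkComponent.cpp -- inside the UAkComponent constructor
    UAkComponent::UAkComponent(const FObjectInitializer& ObjectInitializer)
        : Super(ObjectInitializer)
    {
        // ... existing initialization ...

        // Disable Unreal's built-in occlusion tracing by default;
        // Spatial Audio diffraction and transmission handle it instead.
        OcclusionRefreshInterval = 0.0f;
    }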

  • Using Collision presets

Collision presets are a useful way to make sure that proper collision is set on all of our Spatial Audio actors. Having dedicated presets for Spatial Audio means that collision will always be set up properly from the start, and that other developers won't have to worry about whether changing their collision settings would affect our volumes.

In Scars Above, we defined two collision presets, SpatialAudioVolume and SpatialAudioVolumeBlock. Since our AkSpatialAudioVolumes do not need to block anything, only to register any Game Objects that overlap them, all of the trace types in the SpatialAudioVolume collision preset are set to Ignore except the one we use for Weapon Tail switching and debugging, called AudioTrace, which is set to Overlap.

In the SpatialAudioVolumeBlock collision preset, as the name implies, AudioTrace is set to Block instead of Overlap, while the rest are set to Ignore. We will be using this preset on all volumes and other objects that need to be sent to AkGeometry, in order to prevent the listener from getting inside the geometry. This is explained in more detail at the end of the document, as part of our 'Custom Listener' integration.

Picture 13: SpatialAudioVolume and SpatialAudioVolumeBlock collision presets

I would strongly suggest avoiding Custom collisions on actors, and especially on their instances on the map. Custom collisions make it much harder to track changes later in development, and any potential changes to our collision presets will not apply to those actors.
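
For reference, collision presets are stored in the project's DefaultEngine.ini. Below is an abridged sketch of what ours might look like, assuming AudioTrace occupies the ECC_GameTraceChannel1 slot; the editor also writes an explicit Ignore response for every other channel, omitted here for brevity:

    [/Script/Engine.CollisionProfile]
    ; Register the custom AudioTrace trace channel
    +DefaultChannelResponses=(Channel=ECC_GameTraceChannel1,DefaultResponse=ECR_Ignore,bTraceType=True,bStaticObject=False,Name="AudioTrace")
    ; Overlap-only preset for regular AkSpatialAudioVolumes
    +Profiles=(Name="SpatialAudioVolume",CollisionEnabled=QueryOnly,ObjectTypeName="WorldStatic",CustomResponses=((Channel="AudioTrace",Response=ECR_Overlap)),HelpMessage="Ignores everything except AudioTrace, which it overlaps.")
    ; Blocking preset for volumes and geometry that must keep the listener out
    +Profiles=(Name="SpatialAudioVolumeBlock",CollisionEnabled=QueryOnly,ObjectTypeName="WorldStatic",CustomResponses=((Channel="AudioTrace",Response=ECR_Block)),HelpMessage="Ignores everything except AudioTrace, which it blocks.")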

Implementation

Phase 1: Unreal Editor side

The goal of this phase is to have complete coverage of the level with AkSpatialAudioVolumes, in such a way that they create the basis for the network of Rooms and Portals. Whether those volumes are treated as Rooms or not is the Sound Designer's decision, informed by the layout document. Generally, an AkSpatialAudioVolume used for exteriors to simulate Late Reverb is not considered a Room and does not need to strictly follow the contours of the area, while an AkSpatialAudioVolume that is a Room is more often used to cover interiors and should closely follow the curvature of the space it occupies.

Picture 14: A network of AkSpatialAudioVolumes that are Late Reverb only

Picture 15: A network of AkSpatialAudioVolumes featuring only Rooms


  • Placing AkSpatialAudioVolume actors on the level

Before we begin populating the level with actor instances, we need to make sure that we are in the appropriate sublevel. To select it, we need to open the Window drop-down menu and select the Levels tab. Inside it, we should find our sound sublevel and double click it, after which its name should be displayed in the lower right corner of the Viewport, confirming the selection.

Picture 16: Properly selected Sound Sublevel

Placement of an AkSpatialAudioVolume actor is done by choosing that actor type from the Place Actors panel and dragging and dropping it onto the level.

Picture 17: Place Actors panel with a selected AkSpatialAudioVolume actor

When adding an instance of the actor to the level, we should try to place the actor so that its pivot point is close to the center of the area. The pivot point is important as it defines the location of the actor with its X, Y, and Z values, which are used for calculating the position of the volume in the Profiler view and in debugging tools such as Visualize Rooms and Portals. The pivot point is also important if we want to post an event from the volume itself as a room tone (more on Room Tone functionality later).

After placing the actor on the level, the instance will show up selected in the World Outliner and its Details tab should be opened. In it, we can choose between several settings:

  1. Will this instance have distinct surfaces (by Enabling Surface Reflectors)?
  2. Will this instance send to a Late Reverb Aux Bus (by Enabling Late Reverb)?
  3. Will this instance behave like a Room (by Enabling Room)?

Picture 18: AkSpatialAudioVolume settings

The 'Enable Surface Reflectors' option is related to using the surfaces of the volume as geometry, along with defining their acoustic properties with acoustic textures and so on. The 'Late Reverb' option is tied to our decision on whether we want this volume to generate reverberation or not. We can assume that all AkSpatialAudioVolumes that describe specific spaces from the layout document should also simulate their acoustic properties, so it's logical to have this turned on. The 'Enable Room' option is related to the aforementioned decision on whether we should treat this area as a Room or not.

In addition to these settings in the Details tab, we need to set up the collision preset of the BrushComponent by clicking on the 'Collision Presets' drop-down menu and selecting the 'SpatialAudioVolume' preset that we previously created.

Picture 19: Properly set up AkSpatialAudioVolume Collision Preset

After that we need to set the Priority number for the Late Reverb and the Room components based on our Spatial Audio Layout document.

Picture 20: Properly set up AkSpatialAudioVolume Late Reverb and Room Priorities

Now we need to add an actor tag to the volume; this tag is used to set the Interior/Exterior state in Wwise, which drives the Weapon Tail switching (more on this in the 'Our custom solutions' section, and see the sketch below). Based on the properties of the volume (whether it's an exterior or an interior, and its size), we can choose between four volume types:

  • Exterior;
  • Interior_Large;
  • Interior_Medium;
  • Interior_Small.

Picture 21: Interior/Exterior tag added to the AkSpatialAudioVolume
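
To give an idea of how such a tag can be consumed, here is a minimal hypothetical sketch: when the listener enters a volume, we look for one of the four tags and set a corresponding Wwise State. The state group name 'RoomSize' and the function itself are illustrative, not our shipped code:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include "GameFramework/Actor.h"

    void OnListenerEnteredVolume(const AActor* Volume)
    {
        static const FName KnownTags[] = {
            TEXT("Exterior"), TEXT("Interior_Large"),
            TEXT("Interior_Medium"), TEXT("Interior_Small") };

        for (const FName& Tag : KnownTags)
        {
            if (Volume->Tags.Contains(Tag))
            {
                // Drive the Interior/Exterior state used by Weapon Tail switching
                AK::SoundEngine::SetState("RoomSize", TCHAR_TO_ANSI(*Tag.ToString()));
                break;
            }
        }
    }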

The last step is to rename the instance in the World Outliner following the naming convention ('AX' as the prefix, followed by 'LR' or 'Room' depending on the Room setting, with the rest of the name taken from the layout document), after which we should move the instance in the World Outliner to the Audio / SpatialAudio / <Level> folder (see Picture 5).

  • Shaping the volume using the Brush Editing tool

During this phase, we will be modifying the volume so that it follows the contours of the space it occupies. If the actor is not a Room, it's sufficient to trace the lines of the General or the Specific zone defined in the Spatial Audio Layout document. In case it is a Room, we should invest more effort into shaping the volume in such a way that it closely follows the edges of the area it is in, resulting in a volume that is a good approximation of the area which it fully encompasses.

This is most easily accomplished by using the Unreal Editor's Brush Editing tool. Unfortunately, diving into its functionalities is beyond the scope of this document. As I mentioned above, a Sound Designer dedicated to Spatial Audio implementation should become familiar enough with the tool. There are plenty of great tutorials online that should help you get ready to move vertices and extrude faces in no time. Needless to say, practice is key, and while this might seem too challenging in the beginning, it's most certainly worth the effort.

Here are a few tips to help you along the way:

1. A shortcut to the Brush Editing tool is Shift+4 on UE version 4.x and Shift+7 on UE version 5.x;
2. There's an 'Enable Edit Surfaces' button in the Details tab of an AkSpatialAudioVolume that hides all actors except that volume and enters the Brush Editing tool mode. Clicking on 'Disable Edit Surfaces' exits the tool and unhides all previously hidden actors;

Picture 22: Edit Surfaces buttons of an AkSpatialAudioVolume

3. When shaping a Room volume, I found it easiest to select both the lower and the upper vertex of a vertical edge and move them simultaneously, since this creates fewer triangles by keeping the vertices coplanar;

GIF 1: Moving two vertices simultaneously

4. Selecting both vertices can be easily done in the Orthographic Top or Bottom view of the Viewport;

GIF 2: Selecting vertical vertices from the Top Orthographic view

5. I try to cover one of the main openings of the room first and then go from there. Having the edges fixed at the opening means I don't have to worry about whether the portal is placed correctly, since I have a dedicated surface just for that opening;

6. Extruding is the easiest way to add new surfaces and will provide us with more vertices for further shaping, but pay attention to the surface you want to extrude and don't overdo it, or the volume might become too complex and difficult to modify;

7. When finished, see if you can find any vertices that might be obsolete. Do not delete them; instead, find the nearest vertex that you want to keep and weld the redundant one to it.

GIF 3: Welding the obsolete vertices

A shaped and finalized asset should sufficiently cover the area it is in, following its edges closely while never crossing over into adjacent spaces, and never overlapping with other Rooms. Overlapping Rooms can cause issues in areas that are not covered by a portal. Special care should be taken to cover any areas where emitters or listeners may exist, even if it means adjusting the volume so that it doesn't perfectly fit the walls around it; otherwise we might experience problems in sound reproduction, such as sudden drops in volume or unwanted filtering.

The problems occur in cases where an emitter or a listener gets on the other side of the Room or into another overlapping Room through surfaces that are not covered by a portal. Since volumes are invisible, we might not be aware that there's a surface between the listener and the emitter. Having the listener and the emitter in different Rooms, even though they might be in their lines of sight, necessitates the calculation of new direct and diffracted paths around the edges between them. If there's no portal, those paths might be too obstructed or too long for the emitting sound to be heard by the listener.

The good people at Audiokinetic are aware of how tedious this kind of manual labor can be, and they are doing their best to provide us with tools to speed up the process. While the 'Fit to Geometry' option on AkSpatialAudioVolumes and AkAcousticPortals might seem like a godsend when working with simple geometry like buildings with cube-like rooms and corridors, it just does not cope well with the more irregular and convoluted areas, such as caves with openings, nooks and winding tunnels, that are prominent in our game. The resulting volumes would almost certainly need further manual work, so I can't recommend it for our game.

GIF 4: An example of a well placed AkSpatialAudioVolume


  • Placing AkAcousticPortal actors on the level

As with AkSpatialAudioVolumes, placement of an AkAcousticPortal actor is done by choosing that actor type from the Place Actors panel and dragging and dropping it onto the level.

After we create an instance of the actor, it will show up selected in the World Outliner and its Details tab should be opened. In it we should define the Portal's Initial State to be either Open or Closed (default is Open), as well as select the appropriate SpatialAudioVolume Collision Preset.

The next step is to rename the instance in the World Outliner following the naming convention ('AX_Portal' for the prefix and the rest of the name taken from the layout document). After which we should move the instance in the World Outliner to the Audio / SpatialAudio / <Level> / Portals folder (see Picture 5).

Now we need to change the position of the actor to the position of the opening that we want to be represented by this portal. We will shape it using the Translate, Rotate and Scale tools so that it completely covers the opening. The portal's Y axis should be perpendicular to the opening, meaning that it should follow the transition from one space to the other. We should also pay attention to the orientation of the X axis, which defines the front side of the portal. The actor is displayed in the Viewport as a yellow ribbon to help us with orientation.

Picture 23: An example of a well placed AkAcousticPortal

By scaling the portal along its X axis we can define its depth. That depth represents the area within which the transition between the two acoustic spaces occurs and can be understood as a spatial counterpart to a crossfade: the deeper the portal, the more gradual the transition.

AkAcousticPortals have a color indicator suggesting whether you have placed the actor correctly or not. Every portal must be connected to at least one volume, and must have a front Room that is distinct from its back Room. If no volumes are associated with a portal, or if it's not properly placed between volumes, the portal will be colored red.

GIF 5: AkAcousticPortal placement indicator

Note: An AkAcousticPortal can work properly only if placed in such a way that it covers exactly one edge per adjacent AkSpatialAudioVolume. If placed incorrectly, the sounds could be cut off if the listener is positioned at the point where the face of the volume intersects the Portal. This necessitates special care when modeling AkSpatialAudioVolumes, since we need to be able to modify the volume in such a way that only one of its faces covers the entire opening.

When shaping two adjacent Rooms to create a network, I found that it is best if those Rooms are connected at the opening in such a way that we are left with no empty space between them. This ensures that wherever the transition happens, it happens through a portal and not through the surfaces of a Room not covered by one.

GIF 6: A network of adjacent Rooms with no space between them


Phase 2: Wwise side

Now we will turn to the Wwise Authoring tool where we will be creating Auxiliary Busses for each of the AkSpatialAudioVolumes and their dedicated Wwise RoomVerb ShareSet presets, as well as define their parameters.

  • Creating Aux Busses

If we followed the steps in the 'Wwise objects organization' section, we should have all Auxiliary Busses properly named and in their correct place inside the Master-Mixer hierarchy (see Picture 2). In the Positioning tab of each Auxiliary Bus we will find settings regarding Positioning, Attenuation and Diffraction. We need to turn on 'Listener Relative Routing' and set '3D Spatialization' to 'Position + Orientation'.

Since Wwise version 2019.2, it is not necessary to assign an attenuation curve to the Auxiliary Busses. The distance is calculated from the position of the emitter to the position of the listener, and that distance is used by the attenuation already applied to the emitter. Applying an Attenuation curve to the Auxiliary Bus would further process the reverbs emanating through the portals, so only add Attenuation presets if you feel that the original attenuation of the emitter is not applying enough processing.

We should Enable Diffraction if we want diffraction and transmission loss to affect the reverb of the volume propagating through the portals.

Picture 24: Properly set-up Positioning tab of an Auxiliary Bus

For every reverb effect that we want to be processed by other volumes' RoomVerb effects, it's necessary to tick the 'Use game-defined aux sends' box in the General tab of the Auxiliary Bus.

Picture 25: Enabled 'Use game-defined aux sends' of an Auxiliary Bus


  • Creating and assigning Wwise RoomVerb ShareSet presets

A Wwise RoomVerb preset is created in the ShareSets tab in a previously defined manner. Its parameters are defined in relation to the acoustic properties of the volume and aesthetic preferences of a Sound Designer (see Picture 1).

In the Effects tab of the Auxiliary Bus, we should assign an instance of a Wwise RoomVerb plug-in and select the appropriate ShareSet preset.

Picture 26: Properly assigned Wwise RoomVerb preset


Phase 3: Unreal Editor side

  • Creating and assigning Auxiliary Busses

In this phase we need to create the Auxiliary Bus assets inside the Unreal Editor by dragging and dropping each Auxiliary Bus from the WAAPI Picker into the appropriate folder within the Content Browser and adding it to a SoundBank (if applicable), after which we will be able to assign the Auxiliary Bus to its AkSpatialAudioVolume.

To assign a bus, we need to select the volume and in its Details tab find the 'Late Reverb' category, and add the Auxiliary Bus to the Aux Bus property either by dragging and dropping or by searching for it via its name.

Picture 27: Properly assigned Auxiliary Bus

  • Room tones

I've already mentioned the importance of properly placing the AkSpatialAudioVolume and its location on the level. The location and rotation of the volume are used if we decide to post an AkEvent on the volume itself, using a feature called Room Tones. Since the Room tone takes the location and rotation of the volume, the front face of the volume will be the front-facing direction of the sound. If we select 'Position + Orientation' in the Positioning tab of the sound, the sound's stereo image will rotate with the room while we are in it, making the sound seem fixed in space and therefore more realistic.

The main difference between a regular AkAmbientSound and a Room tone is that the distance between the Room tone and the listener is always evaluated as zero whenever the listener is inside the volume. When the listener exits the volume, the emitter is placed at the nearest portal and the distance is calculated from the portal to the listener, so bear that in mind when designing the attenuation curves for the Room tone.

Testing / Troubleshooting

Wwise RoomVerb

Since we can't hear parameter changes while editing the preset in the Wwise Authoring tool, those parameters need to be tested during a play session. We need to start the session, connect the Wwise Profiler to it and change the values of the effect while playing the game. It's easiest to excite the reverb and hear the changes if you have a positioned sound with a distinct transient that can be moved between volumes and triggered when needed. On Scars Above, I often used Kate's melee attack whoosh sound, since the Player Character is controllable and always available.

Rooms and Portals

Troubleshooting Rooms and Portals mostly consists of finding parts of the level that are not adequately covered by an AkSpatialAudioVolume. Without custom tools, that search is done manually by taking control of the listener (usually attached to the camera's spring arm) and moving it along the edges of the zones where our volume's faces and the surrounding walls should meet. Since the volumes are invisible during gameplay, we need to rely on our ears to find any issues with sounds. Those issues manifest as immediate drops in volume or sudden changes in processing, caused by the Diffraction and Transmission Loss values jumping high when we exit the volume through a surface rather than through a portal. When we locate a problematic area, we stop the play session, modify the volume so that the area is covered properly and test again.

A useful indicator that can help us identify areas where the volume is not covering enough space is a volume edge visible in the Viewport while editing. The edges are represented by yellow lines. If you see any lines sticking out of the geometry, it may be possible for the listener to get out of the volume through the surfaces connected by that line, into an area not covered by a volume, which will most likely cause issues with sound.

Picture 28: An edge of an AkSpatialAudioVolume protruding through the surrounding geometry

If your team is using any other way to block the camera (and therefore the listener), such as invisible blockers, you can try lining up the volume with those actors, but my experience tells me that fitting the volume to Static Meshes that are visible both in the game and in the editor is a pretty safe way to make sure that the listener can't get through the volume's walls. Of course, your mileage may vary, so again, test extensively.

  • Visualize Rooms and Portals

One handy tool that comes with the Wwise Unreal integration is Visualize Rooms and Portals. If enabled in Project Settings, it allows us to see the instances of AkAcousticPortal actors and their connections to adjacent Rooms as we traverse the level. While it cannot display the boundaries of AkSpatialAudioVolumes, the information this tool provides is still very beneficial, as we can see whether our portals and volume pivots are placed correctly.

Picture 29: Enabled Visualize Rooms and Portals option during gameplay

Visualize Rooms and Portals is a part of the Wwise Integration Settings, and turning it on changes DefaultGame.ini, which will end up committed to the repository. Make sure to turn it off before committing, unless you want the whole team to see the portals as well (in which case, expect a call from QA shortly).
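
For reference, the change in question is a single flag in DefaultGame.ini, which makes it easy to spot in a diff before committing (the property name below matches the 2021.1 integration; double-check it in your version):

    [/Script/AkAudio.AkSettings]
    ; Debug visualization of Rooms and Portals; keep this out of the repository
    VisualizeRoomsAndPortals=True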

If your programming team has some spare time, I would highly recommend creating a custom debugging tool for Spatial Audio. For this purpose, with the help of our programmers, I devised and implemented a custom solution for Spatial Audio visualization called VisualizeAkGeometry, which is described along with our other custom tools at the end of this document.

Ak Geometry

What is Ak Geometry API?

Ak Geometry API represents a set of Wwise tools that enable us to simulate the effect of sound diffraction around a solid obstacle, as well as simulate reflections from the surfaces of that obstacle, working in conjunction with the Rooms and Portals API.

Diffraction and transmission loss

Diffraction is a phenomenon in which a sound wave, on its path from the emitter to the listener, encounters an obstacle, gets obstructed by it, bends around the edges of that obstacle at a certain angle and arrives at the listener with lower volume and a limited frequency spectrum. If the obstacle is made of a material that does not permit sound to pass through, the diffracted path takes priority over the direct path as the primary source of sound, placing the (virtual) emitter at the edges of the obstacle.

Transmission loss happens when the obstacle between the emitter and the listener is large enough that the sound cannot bend around its edges, leaving us with just the direct path that goes through the obstacle. Depending on the properties of the obstacle's material, the sound will be partially or completely occluded, which also manifests in lower volume and limited frequency spectrum.

Diffraction and Transmission Loss, as part of the Ak Geometry API, serve to define the parameters of occlusion and obstruction within Wwise, completely replacing the Unreal Engine occlusion system. Using these parameters, we can shape Wwise objects' Volume, HPF and LPF curves, giving us the ability to accurately simulate the acoustic phenomena of obstruction and occlusion.

When do we use Ak Geometry API?

Let's say we are in one of our newly created Rooms and there's a stone pillar in the middle of it, made up of one Static Mesh, and we want sound to bend around and reflect off of it. We can add an AkGeometry component to it, which will take the collision of the mesh and translate it into geometry readable by Spatial Audio. This geometry serves as an approximation of the level geometry inside the Wwise engine, used for calculating direct and diffracted paths and for setting the values of the Diffraction and Transmission Loss parameters.

An even better use case is areas where we currently have no diffraction going on, like exteriors covered by volumes that are only used for Late Reverb simulation. In this case, we would manually add the AkGeometry component to all Static Meshes that we want included in Spatial Audio calculations. In levels that are not populated with complex meshes, this is a fast and easy solution. On meshes with a large number of triangles, however, adding an AkGeometry component might prove too expensive. Here we can put our Brush Editing skills to work and use AkSpatialAudioVolumes to cover the obstacle, creating geometry of much lower complexity.

Combining Rooms and Portals, Ak Geometry components and volumes, we can successfully cover all areas and obstacles that need to affect our sounds by either obstructing or occluding them.

Implementation

We can implement Ak Geometry in two main ways: by adding an AkGeometry component to an existing Static Mesh or by shaping an AkSpatialAudioVolume actor to cover one or more Static Meshes (or any other kind of obstacle).

Adding an AkGeometry component to a Static Mesh

Adding an AkGeometry component to an existing Static Mesh consists of several steps. First we need to select the desired actor in the Viewport (it will get highlighted in the World Outliner), then click the +Add Component button and find the AkGeometry component.

Picture 30: Adding an AkGeometry component to a Static Mesh

We need to select the component, choose Simple Collision for the 'Mesh Type' parameter within the Geometry category, and turn on 'Enable Diffraction' (if not already enabled).

Picture 31: Details tab of an AkGeometry component

In some cases, this solution can end up being too expensive, since we have little control over the complexity of the geometry that will be generated by the component. Even though we are using Simple rather than Complex collisions, they can still prove to be overly complex or even not work properly, so use this with caution. If a Static Mesh does not have a Simple Collision, we need to create one or contact our Tech Art team for help.
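
The same setup can also be done from code, for example when many meshes need the component at once. The sketch below follows the 2021.1 integration's property and enum names, which may differ in other versions:

    #include "AkGeometryComponent.h"
    #include "GameFramework/Actor.h"

    // Illustrative helper: attach an AkGeometry component to a Static Mesh
    // actor, using its Simple Collision and enabling diffraction.
    void AddAudioGeometry(AActor* MeshActor)
    {
        UAkGeometryComponent* Geometry = NewObject<UAkGeometryComponent>(MeshActor);
        Geometry->MeshType = AkMeshType::CollisionMesh; // 'Simple Collision' in the Details tab
        Geometry->bEnableDiffraction = true;
        Geometry->AttachToComponent(MeshActor->GetRootComponent(),
            FAttachmentTransformRules::KeepRelativeTransform);
        Geometry->RegisterComponent(); // sends the geometry to Spatial Audio
    }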

Using AkSpatialAudioVolumes as AkGeometry

In case we don't want to add an AkGeometry component to a mesh, usually for optimization reasons, we can use an AkSpatialAudioVolume instead. For that purpose, we need to add an actor instance of the volume to the level and shape it so that it adequately covers the mesh. In the Details tab of the volume we need to turn on 'Enable Surface Reflectors' and turn off both 'Enable Late Reverb' and 'Enable Room'. We also need to make sure that 'Enable Diffraction' is turned on, as well as assign the SpatialAudioVolumeBlock collision preset to it.

Picture 32: Properly placed and set-up AkSpatialAudioVolume around a Static Mesh

The SpatialAudioVolumeBlock collision preset is used specifically for this purpose, as it makes the volume block the AudioTrace channel to which our Custom Listener is tied. This enables us to create simpler AkGeometry volumes in less time, without the fear of the listener getting inside their boundaries. More on our Custom Listener in the 'Our custom solutions' section of this document.

  • Turning on Diffraction on existing Rooms

If we wish for our existing Rooms to behave as objects around which the sound gets diffracted, we need to turn on 'Enable Diffraction' in the Geometry Settings category of the Details tab.

Picture 33: Enabled Diffraction on an AkSpatialAudioVolume

This is important to enable on volumes that are not fully convex, where we want to have diffraction around the edges where the volume is bending inwards.

Picture 34: A concave AkSpatialAudioVolume with selected edges that will diffract sound if enabled

Picture 35: One possible path between the emitter (e) and the listener (l)


Custom solutions

In Scars Above, we had areas in the exterior with lots of walls, stairs and other large objects that needed to diffract the sound. It was not practical to assign AkGeometry components to the meshes, since there were too many, or to shape AkSpatialAudioVolumes around them using the Brush Editing tool, since that was too tedious and time-consuming. In such cases, I'd suggest talking to your programmers about creating a custom solution for fast and reliable Ak Geometry creation. I had much fun devising and iterating on a tool we called SplineAudioWall, which I talk about in the 'Our custom solutions' section of the document.

Testing / Troubleshooting

As with Rooms and Portals, it's mandatory to test the geometry we created for any issues with sound not diffracting properly around the geometry, or with emitters and the listener getting inside the geometry when they should not. There are several ways to test diffraction, on both the Unreal Editor and the Wwise side, all of them giving us valuable information that can help us pinpoint the problematic geometry, providing us with the opportunity to fix the issues and tweak the parameters until we get the desired result.

Unreal Editor side

  • Draw Diffraction

To test diffracted paths of an emitter during a gameplay session, we have the option to turn on Draw Diffraction in the Details tab of an AkComponent, under the Spatial Audio / Debug Draw category.

Picture 36: Turned on Draw Diffraction debug draw of an AkComponent

While testing during gameplay, if the sound is playing, green lines are drawn from the emitter containing the AkComponent around the edges of the obstacle that is part of the AkGeometry. Small purple spheres are displayed at the points on the edges where the sound gets diffracted. This can help us decide whether the boundaries of the geometry are properly set up and whether the geometry covers the obstacle sufficiently.

Picture 37: An example of diffracted paths being drawn during gameplay

Even though this is a developer-only feature and the lines will not be shown in the build, it is good practice to turn off the Draw Diffraction option on the AkComponent before committing changes to the repository.

  • Visualize Rooms and Portals

Visualize Rooms and Portals tool can also help us test the diffraction paths around the edges of our AkAcousticPortal actors. In combination with Draw Diffraction mentioned above, we should see the diffracted paths being drawn from the emitter around a portal to the listener.

Wwise side

Testing on the Wwise side includes the usage of two different Profiler layouts of the Authoring tool to get the information needed for troubleshooting direct and diffracted paths, as well as Diffraction and Transmission Loss parameters.

  • Game Object Profiler

Using the Game Object Profiler, we can draw the positions of the listener and all emitters that are currently playing sounds, the geometry sent to Spatial Audio, and their mutual impact on sound propagation and diffraction. We need to start the game session and connect to it from the Authoring tool. After we open the Game Object Profiler layout (the shortcut is F12), all of the objects mentioned above should be displayed on a plane of reference. I won't delve into the details of using the Game Object Profiler; suffice it to say that this is a highly useful tool that can help us visually gauge our Spatial Audio implementation, including geometry, paths and values for Diffraction and Transmission Loss, among other things.

Picture 38: An example of a profiling session inside the Game Object Profiler

  • Voice Profiler

The Voice Profiler allows us to track the changes of the Diffraction and Transmission Loss parameters in a little more detail. For each Game Object that is currently playing sounds, the Voice Inspector shows the whole path of the sound along with any processing applied to it. This includes the Diffraction and Transmission Loss parameters, displayed as children of the game object playing the sound we want to profile.

Picture 39: Diffraction and Transmission Loss parameters seen in the Voice Profiler

Optimization

Spatial Audio can be expensive. Depending on the number of objects that are included in the calculations, the complexity of the geometry and the level of fidelity we demand from Spatial Audio, the calculations that are being performed on every tick can prove to be too taxing on the audio thread. As usual, resources are scarce and a prudent Sound Designer would take steps to optimize their Spatial Audio implementation and lessen the load that the audio team is putting on the CPU.

Here I have listed some examples of how to tackle optimization, which I hope will prove useful to any Sound Designer wanting to have the best-functioning Spatial Audio in their game.

Optimization of AkGeometry by associating with a Room

If an actor with an AkGeometry component is completely covered by one of our Rooms, it is desirable to associate that component with the Room, so that sound propagation around the actor is calculated only when the listener is in the same Room as the obstacle. This is done by selecting the actor's AkGeometry component and assigning the appropriate Room instance to the 'Associated Room' property in the Optimization category.

Picture 40: Associating a Room to an AkGeometry component of a Static Mesh

Note: For this to be possible, both the Room and the actor with the AkGeometry component must be on the same sublevel.

It is important to mention that if geometry has been associated with a Room, that Room won't consider any other global geometry on the level (geometry that is not associated with any Room), so keep that in mind.
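
For completeness, here is a hypothetical snippet of the same association done from code; the property name is inferred from the 'Associated Room' field, and PillarActor and CaveRoomVolume are placeholders:

    UAkGeometryComponent* Geometry =
        PillarActor->FindComponentByClass<UAkGeometryComponent>();
    if (Geometry != nullptr)
    {
        // Calculate propagation around the pillar only while the listener
        // shares this Room with it
        Geometry->AssociatedRoom = CaveRoomVolume;
    }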

Turning off Diffraction on fully convex volumes

I have mentioned above that we need to turn on 'Enable Diffraction' on all AkSpatialAudioVolumes that we want to be included as geometry. Since all portals diffract the sounds around their edges by default, it is safe to have this option turned off on all Rooms that are fully convex, i.e. volumes that do not have any edges that we want the sound to diffract from.

It's easiest to recognize these volumes if we ask the question: do all portals of a room have direct lines of sight between them? If the answer is yes, it's probably safe to exclude the volume from geometry calculations.

Turning off floor surfaces

Let's say that we have a Room volume that we want to include in Spatial Audio geometry. Most of the time our Rooms are made up of walls, a ceiling and a floor. In such spaces, we rarely have the need to calculate diffraction from the floor i.e. bottom surfaces, so it's safe to exclude those surfaces from AkGeometry calculations. This reduces the number of AkGeometry triangles and eliminates the need to calculate diffraction paths from below the volume.

This is done by going into the Brush Editing mode, selecting the floor surfaces of the volume and turning them off by unticking the box next to the 'Enable Surface' property in the Geometry Surfaces category.

Picture 41: Disabling floor surfaces of a Room volume

Remove obsolete portals

After we have populated our levels with Rooms and Portals and made sure that everything works fine, we should ask ourselves whether some of the portals we added would ever actually pass sound through during gameplay. Level designers might have put some openings in the walls or ceilings (or even floors) for aesthetic reasons or to let some light in. Having a dedicated portal for each of those openings gives us the assurance that our geometry faithfully mimics the one in the game, and that if there happen to be any sounds on the other side, they will be heard.

On the other hand, for some portals the situation where a sound actually emits from the opposite side might never come. Removing those portals can greatly improve our Spatial Audio performance, since for each portal, paths need to be calculated in relation to the emitters, the listener and all the other portals attached to the Room.

Have simpler geometry

This might be obvious, but simpler geometry means fewer calculations and a smoother-running CPU. When thinking about simplifying your geometry, take into consideration all the practical cases in the game and the level of detail that you want to achieve in particular situations. The volume that you so meticulously molded around those curved pillars might give you a higher level of detail, but if that kind of fidelity can't actually be perceived by the player, there is no need to have it in the game, and you should ask yourself whether a simpler shape might give you the same result.

In such cases, you can either create a new volume or reduce the number of triangles of the existing one by simply choosing the vertices that you want to keep and welding those that seem obsolete.

Per platform Spatial Audio settings

If you find that the audio thread is still struggling to get through all the Spatial Audio work being thrown at it, you might want to check the Spatial Audio Initialization Settings of your platform(s) to see if any improvements can be made on the Wwise Integration side. These settings are a part of the Wwise Project Settings category, and there are many parameters that can lessen the strain on the CPU by sacrificing some fidelity. I won't go over them separately; most are self-explanatory, and the rest are explained by the tooltips or in the documentation.

Below are the settings that we found satisfactory for the PS4 and PS5 platforms, respectively.

image (42)

Picture 42: Scars Above Spatial Audio Settings for the PS4 platform

image (43)

Picture 43: Scars Above Spatial Audio Settings for the PS5 platform

World Composition issues

If our level is part of World Composition and is streamed alongside other levels, I'd suggest placing all Spatial Audio actors in a persistent sublevel created specifically to host them, for example SUB_Sound_SpatialAudio. The reason is that, in Wwise version 2021.1.9 and Unreal Engine version 4.27.2, Spatial Audio actors are not easily streamed, meaning that there might be slight hiccups in sound and frame drops when simultaneously loading and unloading a large number of Rooms, Portals and other Ak geometry at runtime. Having them all in a persistent sublevel circumvents this issue while remaining rather light on resources (if set up properly, having a lot of Spatial Audio geometry loaded at the same time should not pose any issues; on Scars Above, we had more than a thousand actors in one persistent level and the game ran just fine).

Our custom solutions

The tools provided by Audiokinetic in the Wwise Spatial Audio API cover most of our needs when it comes to building and testing the acoustic space of a game. However, since every project is specific, I felt the need to upgrade our Spatial Audio pipeline by creating several custom solutions with the help of our programming team. These solutions made our workflows faster and more efficient, and I highly recommend getting involved with the toolsets and people around you to unlock the possibilities that only custom solutions can provide.

Custom Listener

We inherited the listener functionality from the Wwise Unreal integration, but extended it to make it more flexible. Our Custom Listener sits on a dedicated spring arm, which is in turn tied to the camera but can be moved independently from it. Its position can be set to any point between the camera and the Player Character (I found that placing it around 2/3 of the way from the Player Character to the camera feels the most natural for our game). During cutscenes, when the camera moves away, we can choose to either move the listener along with it or keep it at the Player Character's position.

When it comes to its relation to Spatial Audio, we've made it so that the listener can never enter geometry that has the SpatialAudioVolumeBlock collision preset on it. Since the camera and the listener are on separate spring arms, any Ak Geometry that is not blocking the camera (such as our invisible volumes) will still block the listener. This acts as a fail-safe, ensuring that the listener never gets occluded by geometry. It also allows us to build less complex geometry around obstacles.
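A condensed sketch of the idea follows; the names are illustrative rather than the actual Scars Above code, and ECC_GameTraceChannel2 stands in for the project's SpatialAudioVolumeBlock channel.

```cpp
// Sketch: place the listener's AkComponent part-way between the Player
// Character and the camera, and let a sphere sweep against blocking Ak
// Geometry pull it forward before it can enter a volume (the fail-safe
// that keeps the listener from ever getting occluded).
#include "AkComponent.h"
#include "Engine/World.h"
#include "CollisionShape.h"

void UpdateListenerPosition(UAkComponent* Listener, const FVector& CharacterLoc,
                            const FVector& CameraLoc, float Alpha = 2.f / 3.f)
{
    // Desired spot: a fraction of the way from the Player Character to the camera.
    FVector Desired = FMath::Lerp(CharacterLoc, CameraLoc, Alpha);

    // Sweep from the character toward the desired spot; if we hit blocking
    // geometry, stop the listener at the impact point.
    FHitResult Hit;
    UWorld* World = Listener->GetWorld();
    const bool bHit = World->SweepSingleByChannel(
        Hit, CharacterLoc, Desired, FQuat::Identity,
        ECC_GameTraceChannel2, // project-specific SpatialAudioVolumeBlock channel (assumption)
        FCollisionShape::MakeSphere(10.f));

    if (bHit)
    {
        Desired = Hit.Location;
    }

    Listener->SetWorldLocation(Desired);
}
```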

  • ShowDebug AudioListener command

ShowDebug AudioListener is a console command made for debugging. It displays the current position of our Custom Listener component, updated at runtime, as a blue sphere. This helps us see where the listener sits on its spring arm, and tells us when the listener leaves the bounds of a volume or gets obstructed in some undesirable way. It also prints the listener's current transform data.

Listenernew2

GIF 7: Debugging the listener with the ShowDebug AudioListener console command
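For anyone curious how such a command can be wired up, the engine's standard AHUD::OnShowDebugInfo delegate is one way to do it. The sketch below is heavily simplified compared to our actual tool, and the camera location stands in for the Custom Listener position so that it stays self-contained.

```cpp
// Sketch: hooking a custom 'ShowDebug AudioListener' page into the engine's
// AHUD::OnShowDebugInfo delegate (a standard UE4 extension point).
#include "GameFramework/HUD.h"
#include "GameFramework/PlayerController.h"
#include "Camera/PlayerCameraManager.h"
#include "DisplayDebugHelpers.h"
#include "DrawDebugHelpers.h"

static const FName NAME_AudioListener(TEXT("AudioListener"));

static void DrawAudioListenerDebug(AHUD* HUD, UCanvas* Canvas,
                                   const FDebugDisplayInfo& DisplayInfo,
                                   float& YL, float& YPos)
{
    // Only draw when 'ShowDebug AudioListener' is active.
    if (!DisplayInfo.IsDisplayOn(NAME_AudioListener))
    {
        return;
    }

    const APlayerController* PC = HUD->GetOwningPlayerController();
    if (!PC || !PC->PlayerCameraManager)
    {
        return;
    }

    // Stand-in: the real tool reads the Custom Listener component's transform.
    const FVector ListenerLoc = PC->PlayerCameraManager->GetCameraLocation();

    // Blue sphere at the listener position, refreshed every frame.
    DrawDebugSphere(HUD->GetWorld(), ListenerLoc, 15.f, 16, FColor::Blue);
}

// Registered once, e.g. during game module startup:
// AHUD::OnShowDebugInfo.AddStatic(&DrawAudioListenerDebug);
```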

VisualizeAkGeometry

Having Visualize Rooms and Portals at our disposal is great, but wouldn't it be even better if we could visualize all of AkGeometry during gameplay and not just portals and their connections to the volumes?

This is something that we tried to achieve with our debugging tool called VisualizeAkGeometry. When enabled, it allows us to see the volumes and their relationship to other geometry on the level, making it much easier to find problematic spots and debug the listener and the emitter paths while actually playing the game.

We've implemented several different views. In addition to toggling the tool on or off, we can choose to display the geometry as semi-transparent meshes, as a wireframe, or both, and we can select between two different depths of view. The first draws all Ak Geometry in front of all other level actors, revealing the full shape of the volumes, while the second hides the geometry behind other level geometry, letting us spot any protruding edges and faces we might have missed during a Brush Editing session. The second view also shows the volume faces at the openings where the portals should be, enabling us to pinpoint the locations where the listener or the emitter crosses from one area to the other.

VisualizeAkGeometry4

GIF 8: Different VisualizeAkGeometry display modes
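To give a feel for the general wiring of such a tool, here is a very reduced sketch: a console variable plus per-frame debug boxes around every AkSpatialAudioVolume's bounds. The cvar name is my own; our actual tool renders the real brush and procedural mesh geometry rather than bounding boxes.

```cpp
// Very reduced sketch of a VisualizeAkGeometry-style toggle.
#include "HAL/IConsoleManager.h"
#include "EngineUtils.h"
#include "DrawDebugHelpers.h"
#include "AkSpatialAudioVolume.h"

static TAutoConsoleVariable<int32> CVarVisualizeAkGeometry(
    TEXT("au.VisualizeAkGeometry"), 0,
    TEXT("1 = draw debug boxes around all AkSpatialAudioVolumes."));

// Call from any per-frame debug hook (e.g. a subsystem's Tick).
void DrawAkGeometryDebug(UWorld* World)
{
    if (CVarVisualizeAkGeometry.GetValueOnGameThread() == 0)
    {
        return;
    }

    for (TActorIterator<AAkSpatialAudioVolume> It(World); It; ++It)
    {
        const FBox Bounds = It->GetComponentsBoundingBox();
        // Light blue, matching the color convention of our tool.
        DrawDebugBox(World, Bounds.GetCenter(), Bounds.GetExtent(), FColor::Cyan);
    }
}
```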

SplineAudioWalls

In order to facilitate a faster and more convenient creation of Ak Geometry around obstacles, we implemented an actor class that would give us the flexibility and speed of adding and moving the points of a spline actor, while also enabling us to quickly generate and modify simple meshes for each of these points.

This allowed us to create Ak Geometry without relying on the Brush Editing tool, which proved rather cumbersome to use, especially when we needed to mold volumes around existing level geometry. The SplineAudioWall approach is much more intuitive, and even though its main purpose is to simulate walls, it can encompass all kinds of shapes, making it a preferable workflow to the Brush Editing tool for Ak Geometry.

When we place the actor on the level, we are presented with a short spline and a simple purple box that serves as the basis for our geometry. We can freely add new points to the spline, and new boxes (procedural meshes) are instantly created. For each individual point, we can move and rotate the procedural mesh, as well as scale it in all directions.

When we are finished with the actor, we end up with one complex shape made up of several simple boxes, depending on the number of spline points. This procedural mesh is then baked into a Static Mesh, which in turn gets an AkGeometryComponent attached to it, sending the geometry to the Wwise Spatial Audio engine. The baked mesh has the SpatialAudioVolumeBlock collision preset assigned by default, in order to stop the listener from getting inside its boundaries.
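A simplified sketch of the core idea follows; it is not the shipped code, just an actor whose construction script spawns one box mesh per spline point. The per-point rotation and scale controls, the procedural meshes and the bake step with the AkGeometryComponent are all omitted.

```cpp
// Simplified SplineAudioWall sketch: one box mesh per spline point, rebuilt
// whenever the actor is edited. The engine's basic cube stands in for the
// purple procedural boxes of the real tool.
#include "GameFramework/Actor.h"
#include "Components/SplineComponent.h"
#include "Components/StaticMeshComponent.h"
#include "UObject/ConstructorHelpers.h"
#include "SplineAudioWallSketch.generated.h"

UCLASS()
class ASplineAudioWallSketch : public AActor
{
    GENERATED_BODY()

public:
    ASplineAudioWallSketch()
    {
        Spline = CreateDefaultSubobject<USplineComponent>(TEXT("Spline"));
        RootComponent = Spline;

        static ConstructorHelpers::FObjectFinder<UStaticMesh> Cube(
            TEXT("/Engine/BasicShapes/Cube.Cube"));
        BoxMesh = Cube.Object;
    }

    virtual void OnConstruction(const FTransform& Transform) override
    {
        // Throw away the previous boxes and rebuild one per spline point.
        for (UStaticMeshComponent* Box : Boxes)
        {
            if (Box) { Box->DestroyComponent(); }
        }
        Boxes.Reset();

        for (int32 i = 0; i < Spline->GetNumberOfSplinePoints(); ++i)
        {
            UStaticMeshComponent* Box = NewObject<UStaticMeshComponent>(this);
            Box->SetStaticMesh(BoxMesh);
            Box->SetupAttachment(Spline);
            Box->RegisterComponent();
            Box->SetWorldLocation(
                Spline->GetLocationAtSplinePoint(i, ESplineCoordinateSpace::World));
            Boxes.Add(Box);
        }
    }

private:
    UPROPERTY() USplineComponent* Spline;
    UPROPERTY() UStaticMesh* BoxMesh;
    UPROPERTY(Transient) TArray<UStaticMeshComponent*> Boxes;
};
```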

One neat thing we added to the actor is the ability to switch off all floor surfaces for optimization purposes. Since these actors usually sit on the level's floor, the option is enabled by default.

SplineAudioWall2

GIF 9: Modifying an instance of a SplineAudioWall actor

These actors can also be displayed by the VisualizeAkGeometry debugging tool. The AkSpatialAudioVolumes are colored in light blue, while SplineAudioWalls are colored in purple (see GIF 8).

Bear in mind that each new spline point adds triangles to the geometry being sent to Spatial Audio, so make sure to delete any unnecessary points.

Future improvements to this solution would be to add a Transmission Loss parameter and an Acoustic Texture per surface. Since we didn't use Wwise Reflect on Scars Above, this was not a priority.

Weapon Tail Switching

In Scars Above, we've decided to use the existing network of Rooms to determine the type of space that the Player Character is in based on its Interior/Exterior type and size.

We accomplished this by creating a custom Player Character component that traces the volumes at regular intervals using the AudioTrace channel and returns the actor tag of the volume with the highest priority. Based on the returned actor tag, we set the appropriate Interior/Exterior State in Wwise. Those States are used for several things, including Weapon Tail switching. We decided that four types of Weapon Tail sounds were sufficient: Exterior, Interior_Large, Interior_Medium and Interior_Small. An actor tag has to be added manually to each instance of the AkSpatialAudioVolume.

image (44)

Picture 44: General Settings tab of the V.E.R.A. weapon's Tail switch container
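For illustration, here is a condensed sketch of the tracing side of this system. The names, the priority ordering (smallest interior wins) and the State group are my own stand-ins, ECC_GameTraceChannel1 stands in for the project's AudioTrace channel, and FAkAudioDevice::SetState with string arguments is the integration call I'd reach for here — verify it against your integration version.

```cpp
// Sketch: at a fixed interval, overlap the volumes around the listener,
// pick the highest-priority actor tag and push it to Wwise as a State.
#include "AkAudioDevice.h"
#include "Engine/World.h"
#include "WorldCollision.h"
#include "GameFramework/Actor.h"

static const FName TailTags[] = {
    TEXT("Interior_Small"), TEXT("Interior_Medium"),
    TEXT("Interior_Large"), TEXT("Exterior") }; // priority order (assumption)

void UpdateInteriorState(UWorld* World, const FVector& ListenerLoc)
{
    TArray<FOverlapResult> Overlaps;
    World->OverlapMultiByChannel(Overlaps, ListenerLoc, FQuat::Identity,
        ECC_GameTraceChannel1, // project AudioTrace channel (assumption)
        FCollisionShape::MakeSphere(1.f));

    FName BestTag = TEXT("Exterior"); // default when no tagged volume overlaps
    int32 BestPriority = UE_ARRAY_COUNT(TailTags) - 1;

    for (const FOverlapResult& Overlap : Overlaps)
    {
        const AActor* Actor = Overlap.GetActor();
        if (!Actor) { continue; }

        // Keep the highest-priority tag found among the overlapped volumes.
        for (int32 Priority = 0; Priority < BestPriority; ++Priority)
        {
            if (Actor->Tags.Contains(TailTags[Priority]))
            {
                BestTag = TailTags[Priority];
                BestPriority = Priority;
            }
        }
    }

    if (FAkAudioDevice* AkDevice = FAkAudioDevice::Get())
    {
        AkDevice->SetState(TEXT("Acoustic_Space"), *BestTag.ToString()); // group name hypothetical
    }
}
```

In the actual game, a component on the Player Character runs this on a timer rather than every frame, which keeps the cost negligible.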

Final thoughts

Thank you for reading this document. Hopefully, the advice I laid out here will make some of your struggles with Spatial Audio a little more manageable. As the Spatial Audio API is constantly being improved by the people at Audiokinetic, some details of this document will become less relevant in time, but the general principles of having good planning and organization, careful implementation and thorough troubleshooting and optimization will still apply to a large degree. It is my intention to closely follow the development of Spatial Audio in the future, so hopefully this document will also get updated with new findings and better practices.

My biggest respect goes to Dimitrije, Selena, Teodora and Marko, my fellow sound designers at Mad Head Games, and Nikola from EA Dice for the support and the feedback. A huge thank you to Nina, Stevan, Nemanja and Dušan from the Mad Head Games programming team. A big thank you to Julie, Adrien, Guillaume, Nathan, Louis-Xavier, Thalie, Masha and Maximilien from Audiokinetic for all the advice and the opportunity to publish my thoughts in the form of this blog.

Milan Antić

Lead/Technical Sound Designer

Mad Head Games


Milan Antić is a Lead/Technical Sound Designer working at Mad Head Games, based in Belgrade, Serbia. He has rich experience in designing sounds and creating audio systems for both casual and big-budget games, with a focus on AA/AAA projects developed in Unreal Engine using Wwise.



