
Overwatch - Game Audio Using Wwise (part 2/2)

Game Audio / Sound Design / Wwise Tips & Tools

Continued...  

During Audiokinetic's Wwise Tour, Project Audio Director Scott Lawlor and Senior Software Engineer 2 Tomas Neumann presented their work. Their considerations, successes, and lessons learned are demonstrated in a series of very educational Wwise Tour videos from the event. To give you a better idea of what their target goals actually meant: what the Overwatch audio team managed to achieve has practically made it possible to play this game with the monitors turned off!

If you didn't get the chance to see the first set of videos, we recommend that you check out Overwatch - Game Audio Using Wwise (part 1/2).

 

Pinpoint Accuracy 

While the previous video, A Clear Mix, focused on showcasing how they helped the player identify "what" and "who" their threats were, here the challenge evolves: integrating into the equation a way for the player to easily locate "where" their threat was coming from.

To achieve this, the following concepts were used: 

  • Obstruction and occlusion
  • Distance and space
  • 3D audio

The Blizzard Entertainment audio team also made use of raycast and path diversion techniques to drive the occlusion factors that were applied per game object using RTPC curves for volume, LPF, HPF, Aux Sends, and so on.  

Many sets of RTPC curves are shown in Wwise in this part of the presentation, followed by in-game examples of the system in action.

For distance and space, they made use of:

  • Layered sounds
  • Indoor vs outdoor variations for certain sounds
  • Distance filtering
  • Focus and spread
  • Reverb and quad delay

And, finally, they end this portion of their presentation by talking about how they added support for Dolby Atmos for Headphones late in production, again to elevate gameplay and help players compete! 

  

Gameplay Information

Conveying gameplay information via the game's audio was a very important objective in this project. 

The first step was to create unique sounds related to each character. The goal was to help the player identify characters through their representative sounds. A great clip is shown in this part of the presentation which demonstrates how characters can truly be recognized by their footsteps alone. 

But the core essence of this portion of the presentation exposes how they used DataFlow (their game engine system broadcasting all sorts of information, such as health, progression, time left, and how high a character is in the map) to attach game information to properties in Wwise using RTPCs, States, and sound selection, or to drive State changes for music transitions and more.

They also demonstrate how gameplay cues were all driven by RTPCs and would automatically adapt any time a game designer modified their timings. Having the audio designed in Wwise be completely adaptive to game information saved their sound team from having to recreate baked audio every time the gameplay was tuned by the game designers.

In the same spirit, they showcase a series of Blend Containers in Wwise with many layers and describe the benefits of keeping them in Wwise for easy remixing and fine tuning, while remaining connected with the game.

 

Pavlovian Response

The main idea of this section can be summarized as follows: even though it went against the sound team's instincts, it was worthwhile to actually minimize the number of sounds and dialogue variations in order to create a Pavlovian response during gameplay. A Pavlovian response helps players intuitively memorize and, therefore, quickly identify sounds, which in essence helps them optimize their reaction time.

 

Extending Wwise

Tomas presents what they developed internally to extend or build on top of the Wwise technology:

  • Source Control Plug-in
  • Quad Delay Plug-in
  • Dolby Atmos
  • Code changes (with help from the Audiokinetic team):
    - Increase ParameterID size from 64 back to 64K
    - Log non-cached WEM files
    - Synchronize platform sound inclusion
    - Add profiling callbacks, non-blocking GetRTPC call
    - Develop memtracker using Alloc callbacks

 

The presentation was concluded with an amazing set of clips comparing the exact same gameplay sequence before and after their dynamic mixing system. These clips are simply fantastic, as they show exactly how much more comfortable and enjoyable gameplay becomes once you hear only what's really important at any given gameplay moment.

Audiokinetic wishes to thank Blizzard Entertainment, everyone on the Overwatch team, and particularly Tomas and Scott for assembling and presenting this fascinating educational presentation and sharing it with the Wwise and interactive audio community. Game audio at its finest, this series of videos is a must-watch. We hope you enjoyed them and that you will want to share them with your fellow game audio peers!

 


 

 

Audiokinetic


Audiokinetic sets new standards in audio production for interactive media and games. The company’s middleware solutions, including the award-winning Wwise® and SoundSeed®, empower sound designers and audio programmers with a cost effective, comprehensive authoring tool and audio engine for creating innovative interactive experiences. Audiokinetic is headquartered in Montréal, QC, Canada, has subsidiaries in Tokyo, Japan, and Shanghai, China, as well as Product Experts in Europe.

 @audiokinetic


More articles

Fun with Feedback

Introduced with Wwise 2017.1, 3D busses and auxiliary sends from busses make it possible to use the...

17.10.2017 - By Nathan Harris

Part 1: The spatial acoustics of NieR:Automata, and how we used Wwise to support various forms of gameplay

NieR:Automata is an action role-playing game (RPG) taking place on a wasteland that is earth, after...

4.12.2018 - By PlatinumGames

Reverb Needs Spatialization Too: A Guide to Rooms and Portals in Wwise Spatial Audio

Recently, there has been a lot of hype surrounding spatial audio, but the truth is, “spatial audio”...

26.3.2019 - By Nathan Harris

The Story Behind the Mastering Suite: In-Game Audio Mastering

The Mastering Suite is the result of a series of collaborations amongst creatives and engineers in...

16.7.2020 - By Danjeli Schembri

WAAPI is for Everyone | Part 2: wwise.core

Hello. I’m Thomas Wang (also known as Xi Ye).In part 1, I used mind maps to summarize WAAPI...

27.11.2020 - By Thomas Wang (汪洋)

Event-Based Packaging Process Overview

What is Event-Based Packaging? Not long ago, the UE4 Integration of Wwise 2019.2 launched a new...

21.1.2021 - By Fan Runpeng
