
Practical Implementation: Leveraging Wwise to Replicate a Real Radio in Saints Row (2022)

Game Audio

When initially designing systems in pre-production for Saints Row (2022), the audio team decided to let the real world dictate how a number of our systems would operate. This came down largely to the structure of the team at large: we have practical developers applying real-world skills in our proprietary engine. Our physics programmer used to make flight simulators, our vehicle designer was an engineer, and so on, so they would build their systems around what they knew. We decided to leverage this; all those systems already worked in a practical way, so let's just stick with what works.

For example, our vehicle engine is set up just like a real vehicle; we get an actual RPM value, we have gear ratios, throttle, suspension, and even tire slip to consider, not just ignition/shutdown with a velocity RTPC (though we did have those available too). We even have an RTPC for whether a boat’s propeller is submerged or not.
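As a rough illustration of what that state looks like on the audio side, here's a minimal sketch of a per-frame RTPC push; the RTPC names and ranges are my own placeholders, not the actual game data:

```python
def vehicle_rtpcs(rpm, throttle, gear_ratio, tire_slip, prop_submerged=None):
    """Collect the RTPC name/value pairs a vehicle audio update might push
    each frame. Names and ranges are placeholders, not the shipped setup."""
    rtpcs = {
        "Engine_RPM": rpm,
        "Throttle": throttle,      # 0.0 - 1.0
        "Gear_Ratio": gear_ratio,
        "Tire_Slip": tire_slip,    # 0.0 - 1.0
    }
    if prop_submerged is not None:  # boats only
        rtpcs["Prop_Submerged"] = 1.0 if prop_submerged else 0.0
    return rtpcs
```

The point is that the audio system consumes the same simulation values the vehicle code already produces, rather than a synthesized velocity curve.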

On top of that, vehicle collisions are registered on each vehicle depending on the type of collision. If I'm driving full-on and slam into an NPC vehicle perpendicular to my vehicle, my car plays a head-on collision, and the NPC plays a T-bone sound. With the limited number of assets we used, we were able to get the permutation count on general vehicle collision sounds into the tens of millions before even counting the random vehicle parts that could fall off or spray fluid as a sweetener layer.
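To make the per-vehicle classification concrete, here's a hypothetical sketch that picks a collision sound category from where the impact lands relative to each vehicle's facing; the angle thresholds and category names are assumptions for illustration:

```python
def collision_sound(impact_angle_deg):
    """Pick a collision category from the impact direction relative to a
    vehicle's facing (0 degrees = dead ahead). Thresholds are illustrative."""
    # Normalize to [0, 180]: how far off the nose the impact landed.
    offset = abs((impact_angle_deg + 180) % 360 - 180)
    if offset < 45:
        return "head_on"   # hit with the front of this vehicle
    if offset > 135:
        return "rear_end"  # hit from behind
    return "t_bone"        # hit broadside
```

In the scenario above, the player's car registers the impact dead ahead (head-on) while the NPC registers it broadside (T-bone), so each vehicle plays a different sound from the same collision event.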

We applied the same theory to prop impacts; if a large hollow wooden barrel hits a metal pole, it sounds exactly as described. In addition, one of my favorite features (which I’m sure nobody consciously noticed) is that bullet impacts and explosions travel at the speed of sound using the Initial Delay feature.
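The Initial Delay math itself is just distance over the speed of sound; here's a sketch of the value you'd feed that property (whether via a distance-driven RTPC or a per-event calculation is an implementation detail, and the hookup here is illustrative):

```python
SPEED_OF_SOUND_MS = 343.0  # metres per second in air at roughly 20 degrees C

def impact_audio_delay(distance_m):
    """Seconds to delay a bullet impact or explosion sound so it arrives
    at the listener at the speed of sound."""
    return distance_m / SPEED_OF_SOUND_MS
```

An explosion 686 metres away, for example, arrives about 2 seconds after the visual, which is the subtle realism most players feel rather than notice.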

img1

This theory was held across the entire project, with every audio system considering any real-life practical implementation that we could think of before attempting to get fancy. That’s when things got spicy—it was time to design the radio system.

Saints Row Radio—A Brief History

Designing the radio system was a perfect storm of experience and timing. Before joining Volition, I cut my teeth in the cutthroat rat race of medium-market radio, starting in promotions before designing commercials and doing some engineering work, and finally joining the airstaff. I knew how radio worked from tip to toe, and designing the Saints Row radio was the first project for which I had full ownership of audio systems from the ground level.

Previous SR titles had a complex network of background clocks, timers, and seek tables, with hundreds of table files all talking to each other every frame. Under the hood, it was a spiderweb that would fall apart if the spider sneezed. On the player’s side, there was some cross-pollination between individual car radios playing the same station, some syncing bugs, etc., but it was radio enough to be called radio.

I thought we could do better, considered the world-first approach, cracked my knuckles, and called our audio programmer, Mike Naumov, into my office.

The idea was simple: Radio is radio. There are transmitters in the world and receivers roaming around. All we had to do was connect the two and set up playlist logic that the radio elements could follow.

The Transmitter

Before we could get anyone tuned in, we had to get something playing. We placed an object under the origin of the game world and played a Random Container of old-school country APM tracks just to get started. This would eventually evolve into the Tumbleweed station. 

img2

Making sure we would stay in sync using virtual voice settings was my primary task. In the meantime, Mike got to work on the radio dial functionality, and we added another object with another container so we could start testing switching between stations.
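Conceptually, the sync problem reduces to every receiver deriving the same playhead from a shared broadcast clock; Wwise's virtual voice option to play from elapsed time gives you that behavior when a voice returns from virtual. A toy model, assuming a single looping track for simplicity:

```python
def station_playhead(now_s, broadcast_start_s, track_length_s):
    """Where in the current track the station 'is', independent of any
    listener. Every receiver computing this lands on the same position."""
    return (now_s - broadcast_start_s) % track_length_s
```

Two cars tuning in at the same moment get the same answer, which is what keeps passing vehicles in sync without any per-receiver bookkeeping.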

The Receiver

Now, we just had to figure out how to actually hear the radio when you turn it on. Initially, we tried simply setting an RTPC to turn a station 2D when selected while keeping all the others 3D using the Speaker Panning/3D Spatialization Mix feature. But we were concerned NPC cars would also tune into the same station and double up the music. The solution ended up being as simple as assigning a PC/NPC RTPC to the player-owned receiver to do basically the same thing.

We now had a functional radio component that could be placed on all vehicles, distinguish the player character’s radio from NPC radios, and functionally change between stations. 

Through a proprietary multi-position emitter tool (similar to AK’s Set Multiple Positions node in Unreal), we were able to dynamically attach emitting positions to each radio component without creating new game objects, allowing multiple vehicles to be tuned into the same station while moving, without messing with our voice or object count.
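The gist of the multi-position approach can be sketched like this; the vehicle records here are stand-ins for engine data, not our actual component layout:

```python
def station_emitter_positions(vehicles, station_id):
    """Gather one emitting position per vehicle tuned to a station, so a
    single game object can broadcast from many points at once (in the
    spirit of AK's Set Multiple Positions)."""
    return [v["position"] for v in vehicles
            if v["radio_on"] and v["station"] == station_id]
```

Feeding the resulting list to one emitter each frame is what lets many moving vehicles share one station without multiplying voices or game objects.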

img3

img4

The Stations

Now that we had a way to transmit and a way to listen, it was time to start building the stations. From my experience using Scott Studios software back in the day to set up and run real-world radio playlists, I figured we'd just recreate that functionality in the guts of our own radio system.

Here’s a Googled screenshot of Scott Studios SS32 radio automation software interface, circa 2006. 

img5v3

(Source: https://www.devabroadcast.com/dld.php?r=759)

This effectively lets the station programmer predefine a sequence, including commercial breaks, news breaks, DJ talkover, weather, station IDs, all broken down by individual elements, letting the airstaff run the automated playlist or hijack it for timing or requests. 

To achieve this, we created a modular system built from the same kind of "elements," which could be slotted into a predefined order by the designer so we wouldn't get too many commercials in a row, a song with an outro followed immediately by an intro to the next song, a station ID in between two commercials, and so on. This modularity also opened the door for a custom player-created playlist as a bonus.
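A stripped-down sketch of that kind of adjacency rule; the element names and the forbidden-pairs table are invented for illustration, and the real system handled more cases (like an ID wedged between two commercials, which needs lookahead beyond just the previous element):

```python
import random

# Which element types may not immediately follow a given type.
# These rules are illustrative, not the shipped table data.
FORBIDDEN_AFTER = {
    "commercial": {"commercial"},   # no back-to-back ads
    "song_outro": {"song_intro"},   # breathing room between songs
}

def next_element(previous, candidates, rng=random):
    """Pick the next element, skipping anything the adjacency rules forbid.
    Falls back to the full candidate list if the rules filter everything."""
    allowed = [c for c in candidates
               if c not in FORBIDDEN_AFTER.get(previous, set())]
    return rng.choice(allowed or candidates)
```

The designer-facing table in our editor expressed the same idea declaratively; the scheduler just had to respect it when filling each slot.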

As you can see in this screenshot from our table editor, newscasts are sprinkled in between the non-verbal elements. The system plays a newscast in the next available slot if and only if one was unlocked by completing a piece of gameplay, because each newscast discusses the player's actions. It helped tremendously that Mike's time was shared with the progression team, making that functionality a breeze to set up.
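The slot-filling logic for those unlocked newscasts is easy to picture as a small queue check; the names here are hypothetical:

```python
def fill_slot(slot_type, pending_newscasts, default_element):
    """If a gameplay-unlocked newscast is queued and the slot is non-verbal,
    play it; otherwise fall back to the slot's normal element."""
    if slot_type == "non_verbal" and pending_newscasts:
        return pending_newscasts.pop(0)
    return default_element
```

Until a newscast is unlocked the queue stays empty and the station plays its normal rotation, so the player only ever hears news about things they actually did.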

img6

The Songs

We had planned the entire system around several Interactive Music features, and this is where we really started to leverage Wwise. As each element was modular, all we needed was a play Event for each one. These are all paired off in a separate table class, so the code made all the selections; each song is simply a Switch Container with each of the flavors nested underneath as a sequence:

img7

img8

When a flavor is selected, a Switch is set for that station’s object and the play Event is posted for that song’s Switch Container. We could then set the Exit cues to time out the transitions, allowing each DJ to hit the post perfectly (i.e., stop talking on a specific beat, a primary goal/flex for all on-air personalities):

img9

That Exit cue would line up to fire off the song so the DJ would always hit the post, indicated by the playhead below:

img10

Similar attention was paid to the outro; when the music starts to wind down, we fire off an Exit cue and let the DJ start to talk. We also used sidechain compression on the DJ voice bus to allow them to punch through the music if it was a little fuller than the jockey.

img11

In addition to the Exit cue, we had to be able to tell the radio system the element was finished, not just the individual track. To do this, we also placed a custom cue with a specific label that the code could listen for via an AK callback; when triggered, it tells the system to select and play the next element. This significantly streamlined the engine-side processing and allowed a very natural crossfade between elements, just as a real radio DJ can manually force the next element to play or predetermine a crossfade duration. All elements had the same custom cue to exit, so the code only had to listen for a single callback each time it cycled.
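The callback-driven advance boils down to something like this; a hypothetical sketch, with the cue label and element names invented (the real code listened for an AK marker callback on the custom cue):

```python
class StationPlaylist:
    """Advances to the next element whenever the shared exit cue fires."""

    def __init__(self, elements):
        self.elements = list(elements)
        self.index = 0

    @property
    def current(self):
        return self.elements[self.index % len(self.elements)]

    def on_custom_cue(self, label):
        # Every element carries the same custom cue, so one listener
        # covers the whole station.
        if label == "element_exit":
            self.index += 1
```

Because the cue fires before the element's audio actually ends, the next element can start under a crossfade rather than waiting for silence.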

In Conclusion…

When all this comes together, it feels like the most realistic radio experience I've had to date. Two cars passing while listening to the same station are completely in sync; you can hijack a car playing a song you like, and the radio immediately strips its LPF and volume offsets while continuing at the exact same place in the song. You can toggle through all the stations and land back on the initial song as if it kept broadcasting over the airwaves without considering your individual actions…because it actually did.

Without the perfect collision of Wwise's capabilities, my exact position on the project at the time, my experience working in this exact field in the real world, and especially Mike's incredible work and willingness to rewrite a paradigm, we'd probably still be running clocks and seek-playing songs at incredible cost to our CPU budget. Instead, we squashed all the bugs from the old system and made radio as modular as possible for potential fan modding in the future. It was overall an extremely pleasant and satisfying experience that we're very proud of.

Brendon Ellis

Senior Technical Audio Designer

Volition


Brendon Ellis started testing games in 2007, seeking shelter from the cutthroat rat race of the medium-market radio industry. He began as a line tester, moved into audio testing, then bug fixing, then sound design, before eventually becoming Volition's senior technical audio designer.

Discord: poor-old-goat#8203

MobyGames

 @The_Pie_Maker
