
Behind the Sound of 'NO MAN'S SKY': A Q&A with Paul Weir on Procedural Audio

Audio Programming / Game Audio / Sound Design

This interview was originally published on A Sound Effect 

What goes into creating the sound for a game as vast as No Man’s Sky? Find out in Anne-Sophie Mongeau’s in-depth talk with the game's Audio Director, Paul Weir.

No Man’s Sky was an ambitious project that presented considerable audio challenges, due both to its procedurally generated universe and to its style and art. How were those challenges reflected in the audio design and implementation?

Paul Weir (PW): From the beginning, I aimed to keep the ambiences as natural as possible, using lots of original recordings of weather effects and nature sounds. It was a sensible decision to use Wwise and drive the ambiences using the state and switch systems. The advantage of this approach is that you can relatively easily construct an expandable infrastructure into which you can add layers of sound design that respond to the game state. 

With a game like No Man’s Sky you need to pass as much information as practical from the game to the audio systems in order to understand the environment and state of play. For example, what planet biome you’re on, what the weather is doing, where you are relative to trees, water or buildings, whether you’re close to a cave or in a cave, underwater, in a vehicle, engaged in combat and so on.

A simple example of how this information can be brought together without additional programmer support is the introduction of interior storm ambience. We have a control value (an RTPC in Wwise terminology) for ‘storminess’ and know whether the player is indoors or out. It was a simple job then to add different audio, such as shakes and creaks, when indoors and a storm is raging, without having to rely on a programmer to add this.
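
To make that idea concrete, here is a minimal sketch of that kind of hookup using the Wwise SDK's standard game syncs. This is illustrative only, not the actual No Man's Sky code, and the RTPC, state group and state names are invented for the example:

```cpp
// Illustrative sketch only: drive a 'storminess' RTPC and an indoors/outdoors
// state so the Wwise project can bring in interior-only storm layers
// (shakes, creaks) without extra programmer support.
// The names "Storminess", "Player_Location", "Indoors" and "Outdoors" are
// invented for this example.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void UpdateStormAmbience(AkGameObjectID listenerObject, float storminess, bool playerIsIndoors)
{
    // The weather simulation feeds a control value into Wwise every update.
    AK::SoundEngine::SetRTPCValue("Storminess", storminess, listenerObject);

    // Tell Wwise whether the player is sheltered; the ambience hierarchy can
    // then crossfade or gate the interior storm layers on its own.
    AK::SoundEngine::SetState("Player_Location",
                              playerIsIndoors ? "Indoors" : "Outdoors");
}
```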

It helps that nearly all of our audio is streamed, so I have few restrictions on the quantity of audio I can incorporate.

I wouldn’t usually use electronic sounds as much as recorded acoustic material, but given the sci-fi nature of the game, a lot of the obviously sci-fi features do use synth sounds, although often combined with real-world mechanical sounds. There’s a certain pride I take in recording unassuming everyday objects and using them for key sounds. For example, in the most recent update, where we added vehicles, the buggy is my own unglamorous car, recorded using contact microphones; the hovercraft is a combination of a desktop fan and an air conditioning unit; and the large vehicle sounds come from programmer Dave’s Range Rover: I just dropped a microphone into the engine, then we went for a spin around Guildford.

Apart from my usual rule of every sound being original, which I appreciate is in itself pretty dogmatic, I have no set approach as to where the sounds come from. It’s whatever works. 

Can you define in a few words the difference between generative and procedural for the readers?

PW: There is no recognised definition for either term, so it’s not possible to definitively describe the difference. For me, generative means a randomised process with some rules of logic to control the range of values; it does not need to be interactive. Procedural is different in that it involves real-time synthesis that is live and interactive, controlled by data coming back from the game systems. This differentiation works reasonably well for audio, but graphics programmers will no doubt have their own definitions.

How much of the game’s audio is procedurally generated and how would you compare these new innovative techniques to the more common sound design approaches? 

PW: Very little of the audio is procedurally created, only the creature vocals and background fauna. At the moment it’s too expensive and risky to widely use this approach, although there are several tools in development that may help with this. Procedural audio is just one more option amongst more traditional approaches and the best approach as always is to use whatever combination best works for a particular project.

Can you tell us about the generative music system (Pulse) – the goals, what it allows you to do, and its strengths compared to other implementation tools?

PW: Pulse, at its heart, is really just a glorified random file player with the ability to control sets of sounds based on gameplay mechanics. We have a concept of an instrument, which is an arbitrary collection of sounds, usually variations of a single type of sound. This is placed within a ‘canvas’ and given certain amounts of playback logic, such as how often the sound can play, plus its pitch, pan and volume information. When these instruments play depends on the logic for each soundscape type, of which there are four general variations: planet, space, wanted and map. So, for example, when in space, instruments in the ‘higher interest’ area play as you face a planet in your ship or when you’re warping. In the map, the music changes depending on whether you’re moving and on your direction of travel.

We currently have 24 sets of soundscapes, so that’s 60 basic soundscapes, plus special cases like the map, space stations, photo mode, etc.

Pulse also makes the implementation of soundscapes relatively simple. Once you drag the wavs into the tool, it creates all the Wwise XML data itself and injects it into the project, so you never have to touch anything to do with the soundscapes manually from within Wwise.
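
Pulse itself is proprietary, but the core idea Paul describes – an 'instrument' as a set of sound variations with simple playback logic, placed in a 'canvas' and triggered by soundscape logic – can be sketched in a few lines. This is a toy illustration under assumed names and rules, not Pulse's actual design:

```cpp
// Toy sketch of an 'instrument in a canvas': each instrument is a set of wav
// variations plus playback logic (how often it may play, pitch/volume ranges).
// The active soundscape decides how eager each canvas is to trigger.
// Illustrative only; Pulse's real data model and rules are not public.
#include <cstddef>
#include <random>
#include <string>
#include <vector>

struct Instrument {
    std::vector<std::string> variations;       // usually variations of a single sound
    float minIntervalSec = 10.0f;              // how often the sound is allowed to play
    float minPitch = -2.0f, maxPitch = 2.0f;   // semitone offset range
    float minVolume = -6.0f, maxVolume = 0.0f; // dB range
    float lastPlayedSec = -1.0e9f;
};

struct Canvas {
    std::vector<Instrument> instruments;
    std::mt19937 rng{std::random_device{}()};

    // Called regularly by the active soundscape (planet, space, wanted, map...).
    // 'interest' is a 0..1 value from gameplay, e.g. higher when facing a planet.
    void Update(float nowSec, float interest)
    {
        std::uniform_real_distribution<float> chance(0.0f, 1.0f);
        for (Instrument& inst : instruments) {
            if (inst.variations.empty() || nowSec - inst.lastPlayedSec < inst.minIntervalSec)
                continue;
            if (chance(rng) > interest)
                continue;

            std::uniform_int_distribution<std::size_t> pick(0, inst.variations.size() - 1);
            std::uniform_real_distribution<float> pitch(inst.minPitch, inst.maxPitch);
            std::uniform_real_distribution<float> volume(inst.minVolume, inst.maxVolume);

            // A real system would hand the chosen file, pitch and volume to the
            // audio engine here; the sketch only records that a trigger happened.
            const std::string& file = inst.variations[pick(rng)];
            (void)file; (void)pitch(rng); (void)volume(rng);
            inst.lastPlayedSec = nowSec;
        }
    }
};
```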

In NMS, how are music and sound effects interacting together? What was your approach to mixing the two, and do you have any recommendations on how to mix music and SFX dynamically?

PW: I always mix as I go, and the mix process wasn’t as difficult as you might expect. As a PS4 title, we’re mixed to the EBU R128 loudness standard.

Whilst there’s a lot of randomisation in the game, I always know the upper and lower limits of any sound and so over time you reach a reasonably satisfactory equilibrium in the mix. It helps a lot that we don’t have any dialogue. You also have to accept that you’re never going to have a perfect mix with this type of title, so just embrace the chaos.

I do have to be careful with the music though. 65daysofstatic like creating sounds with very resonant frequencies, so sometimes I use EQ to keep these from standing out too much. Similarly, I’ll take out sounds that are too noise-based, as they might sound like a sound effect. On the whole though, 90% of what the 65’ers make goes straight into the game.

What’s your opinion on sourcing audio from libraries vs. creating original content?

PW: On larger projects I am most irritating in insisting that all of the audio is original and not a single sound is sourced from a library, if at all possible. It does depend largely on the game and practicalities but I’ve been able to do this on No Man’s Sky so far. On smaller projects or where time is of the essence, then obviously it makes sense to dip into libraries. Over the years I’ve amassed a large personal collection of sounds that I’m constantly adding to. 

Can you tell us about the tools you used for NMS’s procedural/synthesised audio, and what other software was involved in its creation?

PW: Early in development we used Flowstone to prototype the VocAlien synthesis component. Flowstone has the advantage of being able to export a VST, so Sandy White, the programmer behind VocAlien, wrote a simple VST bridge to host plugins in Wwise. For release, though, it obviously needs to be C++ and to cross-compile to PS4 and Windows. VocAlien is not just a synthesiser; it’s several components, including a MIDI control surface and a MIDI read/write module.

From a more technical point of view, how was audio optimisation handled? Did using procedural audio improve CPU/memory usage?

PW: VocAlien is very efficient and on average our CPU usage is low. However due to the nature of the game, where we can’t predict the range of creatures or sound emitting objects on a planet, the voice allocation can jump around substantially. We have to use a lot of voice limiting based on distance to constantly prioritise the sounds closest to the player. 
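
As a rough illustration of that distance-based prioritisation – not the shipping implementation, which relies on Wwise's own playback limits and virtual voices – a sketch might look like this; the emitter structure and the audible limit are invented for the example:

```cpp
// Hedged sketch of distance-based voice limiting: each frame, grant real voices
// to the emitters closest to the player and virtualise (or drop) the rest.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Emitter {
    unsigned id = 0;
    float    distanceToPlayer = 0.0f;
    bool     audible = false;   // real voice vs. virtualised
};

void PrioritiseClosestVoices(std::vector<Emitter>& emitters, std::size_t maxAudible)
{
    // Sort so the sounds closest to the player come first...
    std::sort(emitters.begin(), emitters.end(),
              [](const Emitter& a, const Emitter& b) {
                  return a.distanceToPlayer < b.distanceToPlayer;
              });

    // ...then keep the closest ones audible and virtualise everything else.
    for (std::size_t i = 0; i < emitters.size(); ++i)
        emitters[i].audible = (i < maxAudible);
}
```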

What would you think is the best use of procedural audio? Would it be more adequate for some types of projects or sounds than others?

PW: Procedural audio, according to my suggested definition above, only makes sense if it solves a problem for you that would otherwise be difficult to resolve using conventional sound design. 

It’s still a poor way to create realistic sounds. I’m not generally in favour of using it to create wind or rain effects, for example. As a sound designer I find this a very functional approach to sound, ignoring the emotive qualities that natural sound can have. Wind can be cold, gentle, spooky, reassuring. There are complex qualities that we instantly react to in natural sounds; it’s a lot harder to achieve this with synthetic sound.

Finally, NMS’s audio is of such a greatly varied nature and represents a massive achievement overall. Do you have a few favourite sounds in the game?

PW: Thank you, I’ll very gratefully take any compliments. Although it started off as something quite incidental, I like how we’ve managed to insert so many different flavours of rain into the game. I thanked Sean recently for letting me make SimRain; the game itself is incidental.

What gives me pleasure is knowing the everyday items that make it into the game, such as an electric water pump, a vending machine, or a garage motor. I’ve included some examples of the raw sounds that were used as source material below.

Big thanks to Paul Weir for this interview and his insights on procedural audio!

Anne-Sophie Mongeau

Sound Designer

Eidos-Montréal

Anne-Sophie is currently working as a Sound Designer at Eidos-Montréal. Previously, she was a Game Audio Engineer at DIGIT Game Studios in Dublin. She has been working in games since 2012, both designing and integrating sound for independent and AAA titles. Her background in music studies, notably at the University of Edinburgh, has led her to become involved in projects of many different natures, in both linear and interactive media, such as short films, documentaries, and interactive audiovisual installations. Driven by a passion for knowledge sharing, Anne-Sophie regularly hosts practical workshops and masterclasses for university students on how to use the tools surrounding game audio.

 @annesoaudio
