Behind the sounds of « 11-11 : Memories Retold »


« 11-11 : Memories Retold » is a narrative game set during WW1, following the story of two characters on the Western Front: Harry, a young Canadian who enlisted as a war photographer, and Kurt, a German father who goes to the front in search of his son, who has gone missing. The player plays both characters alternately. This is not a shooting game; it is about the story of the two characters and the people affected by the war, on both sides. The game alternates between exploration and narration sequences, in a third-person perspective.

Pic1_Banner

The game was developed by DigixArt in Montpellier (France) and Aardman Animations in Bristol, and published by Bandai Namco for PC / PS4 / Xbox One (released on November 9th, 2018). The visual style of the game is unique, with a painterly effect rendering the game in real time as a living painting, composed of thousands of strokes.

Pic2_InGame

Elijah Wood lent his voice to Harry, and Sebastian Koch to Kurt. The music was composed by Olivier Deriviere. Olivier Ranquet delivered valuable assets and Antoine Chabroux was our sound intern. I was primarily in charge of the technical side of the audio (Wwise project management and integration in Unity, custom audio tools, and some SFX and foley design).

Wwise was an obvious choice to manage the audio for the game for the following reasons:

  • multi-platform game : conversion and streaming settings were easily manageable per platform, as were profiling, optimization and debugging
  • it has built-in functions to manage the logic involved in switching between two characters in two different locations (which is a big feature of the game)
  • we were a small dev team and I was working remotely : being able to work independently was important, and Wwise also reduced the workload for the programmers

Early work

This was the first game of this scale for DigixArt and Aardman, with a freshly formed team. There was no audio pipeline in place for Wwise, and the visual scripting tool used in Unity didn't have built-in nodes for Wwise. I was brought in early in the production, which gave us time for a proper pre-production period. The goal was to have a solid audio pipeline in place before getting into the busy production period ahead.

« 11-11 : Memories Retold » is a story-driven experience, so having clear and well-crafted narration is essential. Therefore, one of my first missions was to develop a text-to-speech system between Wwise and Unity, so that level designers could pace and design the game around the voices (voice-over or diegetic dialogue).

I came across this WAAPI sample by Bernard Rodrigue, which generates a robot-voice audio file corresponding to the Notes of a Sound Voice object. After experimenting with it locally, I explained it and pointed out the useful functions to a programmer so that he could create a larger-scale system working as described below:

  • the writers were working in a spreadsheet, which they updated daily
  • a custom « audio spreadsheet » was created from it, importing the dialogue number and the dialogue line (cf. image below)
  • this tab-delimited document was then imported into Wwise, creating a Sound Voice object named after the dialogue number, with the dialogue line as its Notes. An additional column was added to create a Play_ event in the correct Work Unit
  • WAAPI then generated the robot voice by reading the Notes of each newly created Sound Voice, and the corresponding SoundBanks were generated

This was all automated in a batch file, executed automatically every morning. This system allowed the level designers and me to always work with up-to-date dialogue lines. This was absolutely crucial for pacing the game before the voice recording sessions (of course, some adjustments had to be made once the real voices were in the game, but this system was undoubtedly a big time saver).

 Pic3_TabDelimitedImport

« Sample of the automatically created audio spreadsheet imported in Wwise »
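For a rough idea of what such a tab-delimited import file contains, here is an illustrative layout (the column headers, IDs and lines below are made up for the example, not taken from the actual project spreadsheet):

```
DialogueID	Notes (dialogue line)	Event
VO_Kurt_0010	Example dialogue line for Kurt.	Play_VO_Kurt_0010
VO_Harry_0010	Example dialogue line for Harry.	Play_VO_Harry_0010
```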

After playing prototypes and studying the game design documentation, I came up with a list of data that I would need to get from the game engine to Wwise.

Here is the list, which already covered most of my needs:

  • Character's Switch : for everything specific to each character
  • Character's States : for everything global (ambiences / music / ...)
  • Character's Stance Switch (is the character crouching / running / climbing / …) : useful for all the locomotion foley
  • Footsteps Material Switch and material detection system
  • Character's Speed RTPC
  • AkListener's position RTPCs (X, Y and Z)
  • A dozen States corresponding to different features of the game: NPC Dialogue On/Off, Cinematic On/Off, Voiceover On/Off, Puzzle view On/Off, Menu On/Off, GameView (PuzzleView / ExplorationView / CameraView / CardGameView / ...)
  • and multiple core Events : Level_Start_ and Level_End_ events ; Menu and Pause events ; ...

A programmer helped me plug in this data and developed a generic C# script attached to every character (main characters and NPCs). All gameplay elements (puzzles, Harry's camera, …) also had a generic script attached, feeding Wwise with the current state of the puzzle / game.
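To give an idea of what such a generic script does, here is a minimal sketch of a per-character audio feeder. It only assumes the standard Wwise Unity integration calls; the class, the Switch / State / RTPC names and the speed computation are illustrative, not the actual project code:

```csharp
using UnityEngine;

// Minimal sketch of a generic per-character audio feeder (illustrative names).
public class CharacterAudio : MonoBehaviour
{
    public string characterSwitch = "Kurt";   // "Kurt" or "Harry", set per prefab
    public bool isPlayerCharacter = true;

    Vector3 lastPosition;

    void Start()
    {
        // Everything specific to each character goes through a Switch...
        AkSoundEngine.SetSwitch("Character", characterSwitch, gameObject);

        // ...while global systems (ambiences, music, ...) listen to a State.
        if (isPlayerCharacter)
            AkSoundEngine.SetState("Character", characterSwitch);

        lastPosition = transform.position;
    }

    void Update()
    {
        // Character speed RTPC, used for locomotion foley among other things.
        if (Time.deltaTime > 0f)
        {
            float speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
            AkSoundEngine.SetRTPCValue("RTPC_Character_Speed", speed, gameObject);
        }
        lastPosition = transform.position;
    }

    // Called by the locomotion system when the stance changes.
    public void SetStance(string stance)             // "Crouch", "Run", "Climb", ...
    {
        AkSoundEngine.SetSwitch("Stance", stance, gameObject);
    }

    // Called by the material detection system before a footstep is posted.
    public void SetFootstepMaterial(string material) // "Mud", "Wood", "Snow", ...
    {
        AkSoundEngine.SetSwitch("Footstep_Material", material, gameObject);
    }
}
```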

Every time a level designer added an NPC or a puzzle, all the logic systems built in Wwise were applied to those new elements automatically. If I needed to modify or add something, modifying the parent prefab would update all of its children across the levels.

The point is, being involved early gave me the time to build generic but adaptive systems that saved me a lot of integration time during production, facilitated the debugging process, and helped build the interactive mix along the way.

Integration and interactive systems

For all diegetic voices, I had two Attenuation ShareSets. One was realistic and simulated real-life attenuation. The other had a very long slope, so the voice was never completely attenuated/filtered and always remained intelligible. Dragging and dropping a Sound Voice into the Actor-Mixer with one or the other Attenuation ShareSet was an easy way to keep the most important dialogue audible, no matter what the player is doing.

Voice-overs can happen at any time: during exploration, cinematics, puzzles, … All sorts of other audio events can also occur during those VO sequences, potentially creating intelligibility issues. We had a custom node to call voice lines. It allowed us to set whether a line was narrative (VO) or diegetic, to play multiple Wwise events successively, and to trigger different outputs between events or once all events were finished.

Pic8_CustomVoiceNode

«Custom Node to play dialogues or VO »

When set to « Narrative », it triggers different settings in Wwise affecting the mix (a simplified code sketch follows the list below):

  • sets the VoiceOver State to On at the beginning of the first VO event and switches it back to Off when the last event of the node is finished. This State ducks everything except the VO (at the bus level), coupled with high/low-pass filtering. Having this custom node allowed level designers to work granularly with VO events (adding, removing or swapping VO lines) while keeping a generic and consistent narrative mix, no matter how many or which VO lines are played. I preferred using States over auto-ducking because I also needed to apply these mix settings when no audio was playing in the VO bus (when applying a long delay between two VO lines, for example)
  • sets the RTPC_Voice_Over_Effect to 1 : this RTPC goes from 0 (when no VO is playing) to 1 (whenever a VO is playing). This RTPC lowers the threshold of the compressors on the music, ambience and SFX buses, preventing louder sounds from overlapping with the VO
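On the Unity side, the Wwise part of that « Narrative » behaviour essentially amounts to wrapping the chained VO events in a State change and an RTPC change. Here is a minimal sketch of such a wrapper; the State group, the RTPC name and the chaining through end-of-event callbacks follow the description above, but the class and method names are assumptions and this is not the actual node implementation:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of a narrative voice-over wrapper.
public class NarrativeVoicePlayer : MonoBehaviour
{
    bool eventFinished;

    public void PlayNarrative(List<string> voEvents)
    {
        StartCoroutine(PlayNarrativeRoutine(voEvents));
    }

    IEnumerator PlayNarrativeRoutine(List<string> voEvents)
    {
        // Narrative mix settings: duck everything except the VO bus...
        AkSoundEngine.SetState("VoiceOver", "On");
        // ...and lower the compressor thresholds on the music / ambience / SFX buses.
        AkSoundEngine.SetRTPCValue("RTPC_Voice_Over_Effect", 1f);

        // Play the VO events one after the other.
        foreach (string voEvent in voEvents)
        {
            eventFinished = false;
            AkSoundEngine.PostEvent(voEvent, gameObject,
                (uint)AkCallbackType.AK_EndOfEvent, OnEventEnd, null);
            yield return new WaitUntil(() => eventFinished);
        }

        // The last event of the node is finished: restore the normal mix.
        AkSoundEngine.SetState("VoiceOver", "Off");
        AkSoundEngine.SetRTPCValue("RTPC_Voice_Over_Effect", 0f);
    }

    void OnEventEnd(object cookie, AkCallbackType type, AkCallbackInfo info)
    {
        if (type == AkCallbackType.AK_EndOfEvent)
            eventFinished = true;
    }
}
```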

 

The VoiceOver State lowers the Music bus by 3 dB, which is clearly not enough when a full orchestra is playing loud. In order to keep the VO intelligible without lowering the music volume too much (and potentially hearing it pump in and out), an EQ filtered the most prominent frequencies of the VO out of the music. I assigned an RTPC to the meter of the VO bus; this RTPC drove the gain of two frequency bands. This really helped the VO cut through the loud orchestra without ducking the music too much.

Pic4_EQ_SideChain
« EQ on the Music bus, with the VO meter ducking two frequencies (with a large Q) »

At some point in the game, the player controls a cat. Putting Wwise events on the animation timeline did not work well for the footsteps, as the animation speed changed depending on the player's input and the « rhythm » of the footsteps wasn't consistent.

To get a more convincing result, I used a Random Container in Continuous/Loop mode, with the Trigger rate value driven by the cat's speed (as were the volume and pitch). Playback events were managed in code: the Play_ event was posted when the cat's speed became greater than 0, and the Stop_ event was called once the cat's speed reached 0.
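A minimal sketch of that playback logic could look like the following (the RTPC and event names are illustrative; the Trigger rate, volume and pitch mappings live on the Wwise side):

```csharp
using UnityEngine;

// Illustrative sketch of the code-driven cat footstep loop.
public class CatFootsteps : MonoBehaviour
{
    Vector3 lastPosition;
    bool isMoving;

    void Update()
    {
        float speed = 0f;
        if (Time.deltaTime > 0f)
            speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = transform.position;

        // The speed RTPC drives the Trigger rate, volume and pitch of the
        // Continuous/Loop Random Container in Wwise.
        AkSoundEngine.SetRTPCValue("RTPC_Cat_Speed", speed, gameObject);

        if (!isMoving && speed > 0f)
        {
            AkSoundEngine.PostEvent("Play_Cat_Footsteps", gameObject);
            isMoving = true;
        }
        else if (isMoving && speed <= 0f)
        {
            AkSoundEngine.PostEvent("Stop_Cat_Footsteps", gameObject);
            isMoving = false;
        }
    }
}
```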

 

Music

Composer Olivier Deriviere was involved from the very beginning of the project. He composed beautiful music that is a major actor in the game and conveys a lot of emotion. It guides and follows the narration, emphasizing Kurt's and Harry's feelings. It was performed by the Philharmonia Orchestra and recorded at Abbey Road Studios.

 Pic7_AbbeyRoad

« Recording session at Abbey Road Studios »

Because of its structure, the game didn't need complex interactive music systems. The implementation work consisted mostly of carefully timing the music with the narration and objective completion. This was achieved as simply as possible: one main Switch Container (played at game start) and Set State actions in various events (Level_Start, VO events) or in the visual scripting tool.
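In Unity terms, each of those Set State actions boils down to a single call; here is an illustrative sketch (the event and State names are hypothetical, and in practice most of this was triggered from Wwise events or the visual scripting tool rather than from code):

```csharp
using UnityEngine;

// Illustrative only: one music Switch Container started at game start,
// then State changes at narrative beats.
public class MusicStarter : MonoBehaviour
{
    void Start()
    {
        AkSoundEngine.PostEvent("Play_Music", gameObject);
    }

    public static void SetMusicSection(string section)
    {
        AkSoundEngine.SetState("Music_Section", section);
    }
}
```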

«Olivier talks about the creative and recording process»

The full soundtrack is available here.

Mix

Of the many options Wwise offers for interactive mixing, HDR was not considered because of the genre of the game (it is quite scripted and the number of concurrent events is controlled). I used a mix of custom side-chains, States and auto-ducking. The choice of State ducking over auto-ducking depended on whether the ducking needed to persist even when no audio was going through the ducking bus. Each ducking applies a volume offset between -2 and -6 dB, which is not a lot on its own, but the mix results from the accumulation of those values depending on the situation.

Pic5_MixSystem

«Simplified scheme of the mix system » 

Using States instead of auto-ducking also allowed me to enhance or soften the ducking on specific Sounds. For example, if I wanted a specific Sound SFX to be ducked less, I added the VoiceOver State to it and compensated for the volume loss and filtering applied at the bus level. The same approach works to enhance the ducking on specific sounds.

I also used the built-in Azimuth parameter to drive an RTPC, ducking specific emitters that move behind the AkListener's field of view. This helped clear up the mix and keep the player's focus. (The Azimuth parameter is also quite handy when linked to a Pitch curve, to simulate a Doppler effect on planes flying over, for example.)

 Pic6_Azimuth

« Azimuth RTPC filter and volume curves »

Conclusion

« 11-11 : Memories Retold » is a narrative, story-driven game where music and VO play the leading role in the audio landscape. With this in mind, Wwise allowed me to build generic and adaptive systems to facilitate the integration, convey the story, and bring emotion to the player through audio.

 


Yoann Morvan

Technical Sound Designer


I am a French sound designer based in Montreal (at the time of writing). After a few years working in the cinema industry in Paris, my passion for video games brought me to London, where I graduated from the University of Westminster with a specialization in interactive audio. I have since been collaborating with multiple studios on projects ranging from AAA to indie.

 @YoannMorvan1
