
Aporia: Beyond the Valley - Making Sounds for a Ghost

Game Audio / Sound Design

What does a ghostly, dark, mysterious, and dangerous entity sound like? And how does it affect its surrounding sounds? Naturally, it sounds like an old coffee machine! It also sucks out all other sounds from its vicinity, leaving only a cold, dead atmosphere. But then, as it starts chasing you, stressful, ever-rising musical phrases start playing. You run as fast as you can but can't outrun it. You don't have time to turn your head, but you can sense that it's getting closer as the air gets colder, freezing, while the musical phrases intensify! In an instant, you can't move and all sounds disappear. You gasp for air and the ice-cold atmosphere fills your lungs. Then, there's only darkness.

[Image: Header.png]

My name is Troels and I’m part of a team developing a game called Aporia: Beyond the Valley. Throughout the production of this game, I’ve acted as sound designer, composer, and the Wwise integration/optimization person (in one title: The Sound Guy).

Story

To give you an understanding of what this game is about: it's a first-person exploration adventure/puzzle game, set in an ancient valley, that tells its story completely without text or dialogue. The game is developed in CryEngine, which is very useful especially for creating large, beautiful environments. Ruins and remnants of the past are scattered all around the game world, and as the player explores the forgotten lands, a dark entity watches and patrols the forest. This entity has the shape of a ghost and could be considered the enemy of the game. The ghost is a central part of the game and closely related to the narrative. I'd like to mention that while the game does have some scary elements, it wasn't designed or imagined as a horror game.

[Image: Swamps.jpg]

The game has tons of audio and several instances of rather complex audio behaviour; therefore, Wwise was an obvious choice of audio integration software due to its versatility and its ability to mix, monitor, and optimize performance via remote connection during gameplay. Additionally, for me as sound designer and composer, one of the greatest benefits of using Wwise is the massive amount of control I gain without being dependent on the audio programmers. Wwise has all the functionality I could wish for, and programmers can very quickly integrate the Events I create. Once an Event is integrated, it's also easy to update the audio it triggers later in the process.

 

In this blog post I’ll talk about some of the sound design related to the previously mentioned ghost, as well as the integration of the sounds on a technical level.

[Image: Ghost events.png]

This is a list of the Events connected to the ghost. (Although the ghost has an official name in the lore, on the team we tend to call it KNUD, a traditional Scandinavian name. Please don't ask why.)

To give you a better understanding of how the Events work in an in-game setting, we made this behaviour tree that shows how the ghost’s AI is set up.

[Image: KNUD patrol graph.png]

What we want the ghost/enemy to feel like

So, via sound, we wanted to make the ghost appear scary, with a tinge of sadness. Early in the process, our game director Sebastian Bevense requested a clicking sound, which led my thoughts to the characteristic ghost throat sound from The Grudge. My goal was to establish the same feeling without evoking too many associations. As the list above reveals, besides the clicking, the ghost has many other sounds and Events connected to it; I'll go a bit more in depth with those Events and their purpose below. I think it's always challenging to design sounds for abstract elements, as we usually don't have any exact expectation of what they would sound like. So, where do you start when you are given the task of designing sound for a big, dark, floating ghost? In situations like this, I usually start experimenting with all sorts of sounds from recordings and synthesizers and try to imagine which elements of the ghost would emit sound.

[Image: forest1.1.jpg]

The only easily graspable element the ghost has is some cloth that flaps as it moves. So: flapping cloth and a clicking sound. Those are actually decent guidelines!

How we achieved it

The moment Sebastian requested a clicking sound for the ghost, I thought of a sample of an old coffee machine I had heard one week earlier. I don’t recall exactly where I found this sample, but I think this sound is very recognizable.

Original coffee machine sample

We passed the sound through a chain of processing units and tried blending in different types of sounds that could work well with it. As sound designers, we always like to layer sounds, sometimes even too much, and we end up ruining the effect because too many overlapping elements make everything muddy (maybe it's just me). When I export sound effects for import into Wwise, I like to export the layers separately; for example, I split them into low, mid, and high content (depending on the type of sound). Then, when I import them into Wwise, I tie the Events to Blend Containers that contain the different layers.

[Image: shadows1.1.jpg]

This allows me to mix and randomize individual layers rather than having only one sound to tweak. My experience is that this approach gives much better results, as you can more easily tweak/mix the sound to fit exactly how you want the player to experience a certain situation. To me, this way of mixing is also much more effective than making sounds solely based on a video recording of a given event/animation or interaction. Of course, when I start designing a sound, I do it based on a video recording; but, from experience I know that it usually feels different playing a game compared to watching it. Therefore, the layer export approach makes sense to me.
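As a toy illustration of this layered approach (pure Python, not the Wwise API; all layer contents and gain ranges below are made up), mixing separately exported low/mid/high layers with independently randomized gains might look like this:

```python
import random

def db_to_linear(db):
    """Convert decibels to a linear amplitude factor."""
    return 10 ** (db / 20.0)

def mix_layers(layers, gain_ranges_db, rng=random.Random(0)):
    """Sum sample buffers, each scaled by a gain drawn from its own dB range.

    layers         -- list of equal-length sample lists (low, mid, high, ...)
    gain_ranges_db -- list of (min_db, max_db) per layer
    """
    gains = [db_to_linear(rng.uniform(lo, hi)) for lo, hi in gain_ranges_db]
    return [sum(g * layer[i] for g, layer in zip(gains, layers))
            for i in range(len(layers[0]))]

# Three toy "layers" of four samples each.
low, mid, high = [1.0] * 4, [0.5] * 4, [0.25] * 4
mixed = mix_layers([low, mid, high], [(-6, 0), (-6, 0), (-12, 0)])
```

Because each layer keeps its own gain range, every trigger of the container comes out slightly different, which is the point of mixing layers rather than a single pre-rendered file.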

Naturally, with this approach you end up with a much higher voice count in Wwise. To compensate for this and in order to optimize performance, at the end of development, you have to manually “bake” all of the sounds by recording a couple of takes of the parent Blend Containers, cutting them into one-shots, and then placing them in Random Containers so that you end up triggering only one sound instead of several. This can be a tiring process and I often wonder if there is a more effective way of doing this.
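The baking step described above can be sketched like this (an illustrative model, not a Wwise feature; in practice you would record real takes of the Blend Container rather than summing buffers in code):

```python
import random

def bake_variations(layers, n_takes, rng=random.Random(42)):
    """Pre-mix n_takes one-shots offline, each with fresh random layer gains."""
    takes = []
    for _ in range(n_takes):
        gains = [rng.uniform(0.5, 1.0) for _ in layers]
        takes.append([sum(g * layer[i] for g, layer in zip(gains, layers))
                      for i in range(len(layers[0]))])
    return takes

# Two toy layers of two samples each, baked into four variations.
one_shots = bake_variations([[1.0, 0.8], [0.3, 0.2]], n_takes=4)

# At runtime, a Random Container would pick one baked take --
# a single voice instead of one voice per layer.
chosen = random.Random(1).choice(one_shots)
```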

Processed sound of the coffee machine.

Ghost Events and abilities

Below, I’ll elaborate on various Events connected to the ghost. I’ll focus on the most important sounds rather than going into depth with everything.

  • Emission

Sound-wise, one of the first things we focused on related to the ghost was its emissive sound.

As the ghost is of critical importance, we want to bring it to the player’s focus. A great way to do that is obviously to make it emit a 3D sound, 360 degrees around it.

Early in the integration process we established an RTPC defining the distance to the ghost. This allowed us to easily control the volume of the various layers; for example, we wanted the clicking to be audible earlier than the sub layer. Because all layers react differently to the RTPC, the experience of the ghost closing in also feels less static than a single-layer loop.

Besides controlling the layers, we also used the RTPC to control the gain of one band of the Wwise multiband distortion, making the sound more present and destroyed up close. Using an RTPC also allowed us to control parameters at the bus level, smoothly decreasing the ambience and footstep volumes as the ghost moves closer to grab you!

One of my favorite features in Wwise is the ability to send signal output values to RTPCs using the Wwise Meter Effect. Exactly as in music production, the ability to route signals across busses and use them to control parameters (external sidechain) is a very powerful way of dynamically controlling your mix, and an essential tool for achieving a cleaner mix in non-linear audio playback.
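As a rough model of the distance RTPC (the curve shapes, distances, and names below are hypothetical, not taken from the actual project), each layer or ducked bus can be thought of as a piecewise-linear volume curve over distance:

```python
def lerp_curve(points, x):
    """Piecewise-linear interpolation through (x, y) points, clamped at the ends."""
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return points[-1][1]

# Volume (0..1) as a function of distance in meters (invented values).
CLICK_CURVE = [(0, 1.0), (40, 0.0)]    # clicking audible from far away
SUB_CURVE = [(0, 1.0), (10, 0.0)]      # sub layer only up close
AMBIENCE_DUCK = [(0, 0.2), (30, 1.0)]  # ambience ducks as the ghost closes in

distance = 5.0
click_vol = lerp_curve(CLICK_CURVE, distance)
sub_vol = lerp_curve(SUB_CURVE, distance)
ambience_vol = lerp_curve(AMBIENCE_DUCK, distance)
```

At five meters the clicking is still near full volume, the sub layer has already dropped to half, and the ambience is well on its way down, which is the kind of staggered response a single distance RTPC makes easy to author.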

After several iterations, we finally ended up with something we were happy about.

One thing I find difficult when making sound for abstract entities like this is to make everything sound consistent, like it's coming from the same object in the game. I layer sound upon sound until everything becomes muddy and nothing stands out. Hopefully, someday I'll learn that less is more! 95% of the time, I think the best result comes from keeping things simple, minimalistic, and effective.

This is a recording of the ghost’s emission going from far away to close-up.

  • Occlusion

One problem that we encountered during testing was that players would sometimes hear the emission of the ghost at full volume even when separated from it by a wall. This suggested they were in great danger even though the ghost's AI had no chance of detecting them. We wanted players to understand that the ghost was near without playing the intense elements of the layers. To solve this, we split the emissive sounds into two Blend Containers and made only one of them react to occlusion. Note in the recordings how the low end remains present, so the ghost still sounds threatening and the players still feel they are in imminent danger.

Recording of the emission not reacting to occlusion (1 meter away).

Recording of the container reacting to occlusion (1 meter away).

[Image: Occlusion setup.png]
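A minimal sketch of the split-container idea (pure Python with invented constants, not Wwise's actual occlusion processing): an occlusion value attenuates and low-passes one container, while the second container ignores occlusion so its warning always reads through walls.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=48000):
    """Simple one-pole low-pass: lower cutoff -> duller, more muffled sound."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def occlude(samples, occlusion):
    """Attenuate and filter the occlusion-aware container (0 = clear, 1 = walled off)."""
    occlusion = max(0.0, min(1.0, occlusion))
    cutoff = 20000.0 * (1.0 - occlusion) + 200.0 * occlusion  # darker behind walls
    gain = 1.0 - 0.8 * occlusion                              # quieter, never silent
    return [gain * s for s in one_pole_lowpass(samples, cutoff)]

out_clear = occlude([1.0] * 8, 0.0)  # in the open: bright and loud
out_occ = occlude([1.0] * 8, 0.9)    # behind a wall: muffled and quiet
```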

The spotting sequence

When the player stares at the ghost, the player draws its attention and gradually the player's position is revealed. Eventually, the ghost becomes enraged, charges its energy, and teleports to the player. From there, it initiates a furious chase accompanied by ever-rising, stressful musical phrases that intensify the closer it gets. Below, I'll break down the various Events related to the spotting sequence. All signals coming from the ghost run into one of two busses: the ghost bus or the critical SFX bus. The other busses (music, non-critical SFX, ambience, footsteps, etc.) react to these two busses by ducking their volume whenever they receive an important signal. This way, we get a cleaner mix while keeping important sounds in the foreground.
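The bus ducking can be sketched as a simple sidechain gain curve (an illustrative model with made-up threshold and depth values, not the actual Wwise ducking settings): the louder the ghost/critical busses get, the more the other busses pull down.

```python
def duck_gain(sidechain_level, threshold=0.2, max_duck=0.7):
    """Map a 0..1 sidechain level (e.g. from a bus meter) to a gain
    applied to the ducked busses (music, ambience, footsteps, ...)."""
    if sidechain_level <= threshold:
        return 1.0  # ghost quiet: no ducking
    over = min(1.0, (sidechain_level - threshold) / (1.0 - threshold))
    return 1.0 - max_duck * over

quiet = duck_gain(0.1)  # ghost silent -> other busses at full volume
loud = duck_gain(1.0)   # ghost at full level -> heavy ducking
```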

  • Locking on

As mentioned above, when the player starts looking at the ghost, it will gradually become aware of them. To inform the player, we introduce a riser that conveys how close you are to being spotted. If the player looks away during the riser, the riser stops. This introduced a problem, as the sound felt like it was being interrupted too abruptly; even stopping it with a fade-out felt wrong. We solved this by sending the container's signal to an aux reverb bus with a long tail. This way, when the container was stopped by looking away from the ghost, the sound would quickly fade out but hang for a few seconds on a reverb tail, creating a nicer, more natural fade-out. We actually made more or less all of the sounds associated with the ghost send to this specific aux bus, for the sake of consistency between the various ghost sounds.
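The stop-with-tail trick can be modeled as two envelopes running in parallel (a sketch with invented fade and decay times, not the project's actual settings): a fast fade on the dry signal plus a slow exponential decay on the reverb send.

```python
import math

def stop_envelope(t, fade_time=0.1, tail_time=3.0):
    """Combined level t seconds after the stop: fast dry fade + slow reverb decay."""
    dry = max(0.0, 1.0 - t / fade_time)            # quick linear fade-out
    tail = 0.5 * math.exp(-t / (tail_time / 3.0))  # reverb tail keeps ringing
    return dry + tail

just_stopped = stop_envelope(0.0)  # dry still audible, plus the tail
later = stop_envelope(2.0)         # only the reverb tail remains
```

The dry signal is gone within a tenth of a second, but the tail carries the sound out over a few seconds, which is why the interruption no longer feels abrupt.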

  • Locked on > Freeze ground > Teleport > Arrive close to player 

Woe to the one who stares at the ghost for too long. When it picks up your presence, it will charge its energy and prepare to engage. When it zeroes in on you, the ground freezes around you and you hear the sound of creaking ice. After a few seconds, it starts teleporting (moving fast) towards your location. To convey how close it is during the teleportation phase, it emits a howling 3D sound during flight. When the ghost reaches its destination, it appears very close to the player and initiates a chase. At that point, to stress the player, we chose to play a loud impact sound followed by a looping, Shepard tone-like musical chase loop. The chase loop consists of 3-4 layers of increasing intensity, placed in a Blend Container where the layers crossfade from one to the other based on the previously mentioned distance RTPC.

Locking on, locked on, and freeze ground.

Teleport sound up close + Arrival close to player

Chase loop going from far to up-close > Catch the player
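The crossfading of the chase loop's intensity layers can be sketched like this (a simplified triangular-overlap model with a normalized closeness value; the real Wwise crossfade curves are authored by hand and may differ):

```python
def layer_weights(closeness, n_layers=4):
    """Per-layer gains for a 0..1 closeness value (1 = ghost on your heels);
    adjacent layers crossfade linearly, so at most two play at once."""
    closeness = max(0.0, min(1.0, closeness))
    pos = closeness * (n_layers - 1)  # position along the layer axis
    return [max(0.0, 1.0 - abs(pos - i)) for i in range(n_layers)]

far = layer_weights(0.0)    # only the calmest layer plays
mid = layer_weights(0.5)    # an even blend of the two middle layers
close = layer_weights(1.0)  # only the most intense layer plays
```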

This concludes this blog post! I hope you gained something from reading, and that you might be able to use it as input for when you have to design sound for an abstract dark entity!

As a bonus, here’s a piece of music from the game. Not scary.

 

Game’s website: aporiathegame.com

Developer’s website: investigatenorth.dk

Troels’s website: ubertones.com

Troels Nygaard

Technical Sound Designer and Composer

Investigate North


Troels Nygaard is a technical sound designer and composer with a BA in Sound Design (2014) and an MSc in game design with a focus on audio integration and user experience (2016). Additionally, Troels is a Wwise Certified Instructor and a music producer.

nordictones.com
