
Behind the Sounds of Another Sight

Game Audio / Interactive Music

London, at the end of the Victorian era. The British Empire is at the apex of its power. The Royal Navy controls most of the key maritime routes of the world and enjoys unchallenged sea power. The whole of Europe takes advantage of this so-called “Pax Britannica” and enters a period of unprecedented advancement in scientific research and technological application, opening the phase of the Second Industrial Revolution. The same period coincides with a flourishing of the arts, especially in France with the Belle Époque, where new cultural paradigms and styles come to life, and avant-garde movements such as Monet’s Impressionism gain full maturity and recognition.

Another Sight is a story-driven adventure game that takes place during this exciting period. It’s the first game developed by Lunar Great Wall Studios, a recently founded studio based in Milan, Italy, mainly composed of veterans from the Italian video game industry.


Picture1

Another Sight brings to life two leading characters: Kit, a refreshingly bold teenager, who loses her sight after the under-construction London Underground tunnel she was exploring collapses, and Hodge, a mysterious red-furred cat she meets in darkness. Kit and Hodge explore a surreal fantasy world, both together and apart, inspired by Neil Gaiman’s urban fantasy, Neverwhere, encountering great artistic minds such as Claude Monet, Jules Verne, and other cultural icons.

When I was first presented with the concept behind Another Sight, I was really excited. I very much love the period in which the game is set, and the mix of historical events and characters with magical and sci-fi elements was very appealing to me. I also liked the idea of gameplay that alternates between Kit and Hodge to advance through the adventure, and the use of an “Impressionist filter” that gives the game a distinctive and beautiful look.

But I forgot to mention the thing I liked the most, a small detail that changes everything: when Kit wakes up she’s totally blind, and she perceives the surrounding environment through some kind of mysterious echolocation!

Defining the goals of Another Sight’s soundscape

The fact that I was instantly passionate about the project didn’t make it simpler, given the level of quality Lunar Great Wall Studios wanted to achieve.
To kick off the project, I first wrote down a series of requirements, from both an artistic and a technical point of view, to better understand the possible problems we could face and to define a precise overall plan:

  • Another Sight is first of all a story of growth, a modern fable that wants to create empathy for our protagonists and deliver a “sense of wonder” for the mysterious world discovered by Kit and Hodge.
  • In order to achieve this, a foundation of conventional descriptive sound design, brilliantly acted dialogue, and emotional music was required.
  • Kit is mysteriously blind and uses echolocation to see the world that surrounds her.
  • Echolocation is an ability - typical of some animals - that consists of perceiving the surrounding environment by sensing echoes: sound waves reflected by nearby objects. Some bats, for example, emit an ultrasonic signal from their larynx in a range between 11 kHz and more than 200 kHz (the kind of signal depends on the species of bat), and then process the returning echoes thanks to their purposely shaped auditory system, allowing them to have a clear “vision” of their environment and to hunt in full darkness.
  • Echolocation may also be used by properly trained humans. Normally sighted individuals use vision to gather information about the objects surrounding them, so the information carried by the echoes of sounds produced (or reflected) by those objects is usually redundant, and a psycho-acoustic phenomenon called the Precedence Effect (or Haas Effect) comes into play.
  • The Precedence Effect basically states that humans localize a sound source from the direction of the first arriving sound (the wavefront), despite the presence of reflections coming from different directions with a small lag relative to that first wavefront.
  • The perceived position of the sound source varies depending on the delay between subsequent wavefronts, the kind of signal representing the sound source itself, and the intensity of the reflections.
  • It has been shown that properly trained individuals can use natural environmental echoes, or produce mouth clicks to force echoes, to sense details about their environment. In some blind echolocation experts, this even leads to activation in the primary visual cortex, allowing them to see - in a sense - sounds and reverberation.
  • These considerations led to our concept of Kit’s view: everything is totally dark except for small bursts of light that correspond to nearby sound sources. And even if Kit can’t directly produce clicks to trigger echoes, the cat’s meows resonate with the environment, increasing Kit’s perception of the world around her.
  • The shift in perception between Hodge’s and Kit’s views is huge, and I wanted to underline the feeling of entering another kind of dimension when playing as the young girl. Sources of inspiration for me were games such as Limbo or Inside, with their estranging soundscapes, or the Upside Down from Stranger Things.
  • Kit's view of the world had to be carefully balanced for two specific reasons: on one hand, the gameplay need for clearly distinguishable objects and sound sources; on the other, an immersive ambient sound to deliver the magic of almost being in another world.
  • Additionally, the Impressionist filter applied to Kit’s view drove me to mimic the small brushstrokes that characterize this kind of painting by taking a granular approach when designing ambient sounds.
  • From an implementation point of view, I needed a powerful tool in order to meet the strict deadlines and lighten the coders’ workload:

• Another Sight was developed using UE4, and a built-in engine integration was a must-have to speed up the development process.

• I needed deep control over audio data, to generate sounds and lights coherently depending on the audio sources, exploiting audio callbacks, markers, and real-time signal processing.

• Interactive objects such as doors, levers, switches, and generators all needed complex behavior and could be simulated with physics. I needed to develop their audio behavior mostly autonomously, in order to save the programmers’ time for harder tasks and achieve the results I wanted, so a tool that could handle several kinds of events and offer powerful game-object profiling was highly desirable.

• The game had to be debugged, profiled, and optimized for Xbox One, PS4, Switch, and PC in a short time and localized in several languages. Easy multi-platform support was a must.
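As a side note, the Precedence Effect delay regimes mentioned in the requirements above can be sketched as a toy classifier. The thresholds below are approximate textbook values and depend heavily on the signal; this is purely illustrative, not code from the game:

```cpp
#include <string>

// Toy classifier for the delay regimes of the Precedence Effect.
// Thresholds are approximate and signal-dependent (clicks fuse over
// much shorter delays than speech or music).
std::string PerceivedEcho(double delayMs)
{
    if (delayMs < 1.0)
        return "summing";     // wavefronts fuse; perceived direction blends
    if (delayMs < 30.0)
        return "precedence";  // one source, localized at the first wavefront
    return "echo";            // the reflection is heard as a distinct sound
}
```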

                             

All these considerations suggested Wwise as the primary choice for Another Sight's audio. I was in charge of all asset creation (with the exception of music) and of implementation through middleware and Blueprint scripting. I also gave our audio programmer Diego Rodriguez some pointers on where to look in the big Wwise SDK and on what we needed, while he did the hard work of implementing AK functions in C++ in our modified UE4 engine and handled the integration of audio with the core logic of the game.

Visualizing Sounds

Perhaps the most interesting part of developing Another Sight's audio was getting data from the audio in real time, in order to manage sounds dynamically depending on their envelope and to generate lights coherently when playing as Kit with her peculiar “echolocation vision”.

In order to achieve this, we first needed to expose a hidden “getRTPC” function in the AkDevice for Unreal and to register/unregister to AkCallbacks.

 

Picture2
 
Picture3
Picture4
 
 

We then decided to wrap Wwise into our own audio classes to better fit our needs. For example:

  • We created an audio manager that gives access to audio functionalities at the right time (for example, allowing us to play a sound only once the related actors are initialized and the SoundBanks are properly loaded, or to set RTPCs only when reliable data is available).
  • The audio manager also takes care of triggering ambient sounds and manages audio states. It works in sync with the other game managers.
  • We also decided to have a dynamic listener whose position could be interpolated between the standard Unreal/Wwise listener (at the camera position) and the controlled pawn.
  • We developed some custom anim notifies that carry additional information, for example the level of effort, in order to drive a switch containing Kit’s voice effects.
  • Once these basic elements were set up, with the fundamental intervention of the audio programmer, I was almost autonomous in blueprinting everything I needed to manage complex audio behavior, using custom components I created for interactive objects and audio-driven graphic objects.
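The dynamic listener mentioned above boils down to a simple linear interpolation between two positions. A minimal sketch, where the type and function names are illustrative stand-ins rather than the project's actual code:

```cpp
// Sketch of the dynamic listener idea: interpolate the listener position
// between the camera (the default Unreal/Wwise listener position) and the
// controlled pawn. Vec3 and ListenerPosition are illustrative names.
struct Vec3 { float x, y, z; };

Vec3 ListenerPosition(const Vec3& camera, const Vec3& pawn, float alpha)
{
    // alpha = 0 -> listener at the camera, alpha = 1 -> listener at the pawn
    return { camera.x + alpha * (pawn.x - camera.x),
             camera.y + alpha * (pawn.y - camera.y),
             camera.z + alpha * (pawn.z - camera.z) };
}
```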

For example, this function is used to produce, at tick time, a couple of values derived from the RTPC values generated by Wwise:

Picture5

RTPC values were generated using the Wwise Meter effect on aux buses that were then set to 0 volume, so that they would only output RTPC values without affecting the mix. In some cases we also added filtering before metering, in order to get frequency-dependent RTPCs.

 

Picture6

This gave us the possibility to modulate (in real time) several parameters of our sound-bubble lights depending on audio. But we also went further, and used game parameters (in game or with call-in-editor functions) to modulate - for example - the parametric EQ cutoff frequencies, in order to dynamically change the filtering of the signal and consequently the generated output RTPC.
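The core of this kind of audio-driven light modulation can be sketched as follows: a meter level in dB is mapped to a normalized light intensity, with simple one-pole smoothing so the light does not flicker on every transient. The dB range and smoothing constant here are illustrative assumptions, not the game's actual values:

```cpp
#include <algorithm>

// Map a meter level in dB (e.g. from a Wwise Meter-driven RTPC) to a
// normalized light intensity in [0, 1]. The [-48, 0] dB range is an
// assumption for the example.
float MeterDbToIntensity(float meterDb, float minDb = -48.0f, float maxDb = 0.0f)
{
    float t = (meterDb - minDb) / (maxDb - minDb);
    return std::clamp(t, 0.0f, 1.0f);
}

// One-pole smoothing toward the target intensity, applied once per tick.
float SmoothIntensity(float previous, float target, float smoothing = 0.2f)
{
    return previous + smoothing * (target - previous);
}
```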

The video below shows a simple test on a sample mesh:


 

As for simple sound play/stop synchronization with lights, when it was not possible to do otherwise, we bound to Wwise callbacks to know when a sound had finished.
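The pattern is essentially an end-of-event callback registered per playing ID: when the engine reports that a sound has ended, the bound function fires and the matching light can be switched off. A minimal stand-in, where the class and its types are simplified placeholders rather than the actual AK SDK API:

```cpp
#include <functional>
#include <map>
#include <utility>

// Simplified stand-in for an end-of-event callback system: a callback
// registered when a sound is posted fires when that sound ends.
class FakeSoundEngine
{
public:
    using EndCallback = std::function<void()>;

    // "Post" a sound with an end-of-event callback; returns a playing ID.
    int Post(EndCallback onEnd)
    {
        int id = nextId++;
        callbacks[id] = std::move(onEnd);
        return id;
    }

    // Called when the engine reports that the event has ended.
    void NotifyEnd(int playingId)
    {
        auto it = callbacks.find(playingId);
        if (it != callbacks.end()) { it->second(); callbacks.erase(it); }
    }

private:
    int nextId = 1;
    std::map<int, EndCallback> callbacks;
};
```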


Picture7

 

States and Other Modulations

I relied heavily on States together with RTPCs to manage audio across the different game situations and, depending on the gameplay conditions, to provide a different soundscape when playing as Kit or as Hodge, or in other cases.

For example, here is a snapshot of our environment modulation from the beginning of the project (only a couple of values were modified afterwards; it is shown just to illustrate the possible variations available):

 

Picture8

I also found Wwise extremely powerful for simulating low-RPM “rhythmic” machines.

 

Picture9

 

For example, I realized the steamboat engine at the end of the first level by using several layers: at the beginning I used a classic approach, pitch shifting some loops depending on the engine speed; then I added another layer modulated by varying the “trigger rate” of the samples as a function of that same speed, thus properly simulating the “cycles” of a steam engine.

Trigger-rate modulation obviously works only in certain cases, but I found it very useful for mechanical movements such as clicking mechanisms, rolling chains, and so on.
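The math behind such a trigger-rate layer is straightforward: each revolution of the engine produces a fixed number of exhaust “chuffs”, so the interval between sample triggers follows directly from the RPM. A small sketch, where the chuffs-per-revolution value is an assumption for the example, not a figure from the game:

```cpp
// Interval in seconds between sample triggers for a steam-engine-like
// machine running at a given RPM. chuffsPerRev is an illustrative
// assumption (real steam engines vary with cylinder configuration).
double TriggerIntervalSeconds(double rpm, double chuffsPerRev = 2.0)
{
    if (rpm <= 0.0 || chuffsPerRev <= 0.0)
        return 0.0;                              // engine stopped: no triggers
    double chuffsPerSecond = (rpm / 60.0) * chuffsPerRev;
    return 1.0 / chuffsPerSecond;
}
```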

Picture10

 

Music

While I designed pads and atmospheres by taking a granular, musique concrète approach, using sound effects as instruments, for the music Lunar collaborated with Jingle Bell Productions and Another Sight’s soundtrack composer Stefano Cisotto, who took an orchestral approach in order to achieve a traditional sense of “fable” and narration. The “Claudio Abbado Orchestra” of Salerno was recorded, and additional instrumental recordings were made during the development of the game.

The choice of a more traditional soundtrack was needed (especially during our 2D impressionist-style cutscenes) to stay coherent with the visual style. Stefano took reference from some of the most prominent composers of the time, such as Claude Debussy and Erik Satie, and created a dreamy, slightly exotic main theme that was later arranged in several ways.

We worked together on developing the soundscape of the game and, besides the orchestral components, he managed to add some electronic sounds as the game gained more sci-fi influences and needed the injection of more modern sounds. We also developed dynamic music using the standard Wwise music tools but, to be honest, that was only a small part of the work, since the pieces composed by Stefano had enough power to underline the narrative moments with their own structure.

 

Optimization

Last but not least, we made intensive use of Wwise’s conversion and profiling tools to keep performance under control on consoles, especially regarding memory requirements, voice management, and game-object profiling.

Wwise’s conversion tools allow for fast previewing of the quality of audio assets, making it easy to find a balance between memory optimization and audio quality; they were particularly useful when developing for Xbox One and Switch. Proper bank refactoring and conversion allowed us, in a short time, to reduce memory usage by a factor of 10 without significantly affecting CPU load or audio quality.

 

Luca Piccina

Audio Director and Senior Sound Designer

Lunar Great Wall Studios


Luca Piccina is currently Audio Director and Senior Sound Designer at Lunar Great Wall Studios. After gaining a Bachelor’s in Computer Science and a Master’s in Sound Design and Engineering at Politecnico di Milano, he attended specialization courses in Sound Design and has worked for almost 9 years in game audio, especially on racing games. Fond of both the artistic and the technical sides of audio research, he loves game audio 360°!

lgwstudios.com/

 @LGW_studios
