Behind the scenes: Sounds of the "Passengers: Awakening" VR Experience with Technicolor Sound Designer Viktor Phoenix

Sound Design / VR Experience

  • Viktor Phoenix: Sound Supervisor and Senior Technical Sound Designer for The Sound Lab at Technicolor
  • Project: 'Passengers: Awakening'
  • Role: ADR Supervision and Technical Sound Design for all narrative elements
  • Challenges: Small team (+/- 10 devs) with the goal of creating a AAA-quality interactive VR experience in a short time frame
  • Audio Requirements: Real-time spatialization of narrative elements on multiple platforms and in three languages


I was brought in to work with the MPC VR team on ‘Passengers: Awakening’, the companion VR experience to the Sony Pictures movie starring Chris Pratt and Jennifer Lawrence. Based on my experience with Wwise over the years, going back almost ten years to my time at Pandemic Studios, I knew that we needed to use it on this project to achieve our goals in the time frame that we had.

I worked closely with the developers at MPC to drive dialogue with logic built in Blueprints, the visual scripting in Unreal Engine. The combination of Blueprints in UE and Wwise Events allowed me to do more without using up programmer time to create hooks.
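To illustrate the pattern (in Python rather than Blueprints, and with hypothetical event names and a stand-in `post_event` function, none of which are the project's actual names), the logic amounts to resolving a dialogue line to a Wwise Event name and posting it on the speaking character's game object:

```python
# Illustrative sketch of the Blueprint-to-Wwise dialogue flow.
# post_event stands in for the AkComponent "Post Event" node;
# the event-naming convention here is an assumption.

posted_events = []

def post_event(event_name: str, game_object: str) -> None:
    """Stand-in for posting a Wwise Event on a game object."""
    posted_events.append((event_name, game_object))

def play_dialogue_line(line_id: str, speaker: str) -> str:
    """Resolve a dialogue line ID to a Wwise Event name and post it."""
    event_name = f"Play_DX_{line_id}"  # hypothetical naming convention
    post_event(event_name, speaker)
    return event_name

play_dialogue_line("Aurora_Intro_01", "Aurora")
```

Because Wwise resolves the Event name to the actual playback behavior, a sound designer can restructure containers, attenuations, and Switches in the authoring tool without anyone touching the Blueprint or C++ side.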

The built-in audio engine in Unreal can handle a lot of tasks, but there are several aspects to working with Wwise that I prefer. First, Uassets are binary files and aren't text-editable or easily merged. I find having the option of editing files in a text editor critical. Wwise files are built on XML and live outside of the UE project, so they're easily edited and I can merge files in Perforce. I don't often have to do that, but it's a lifesaver when I need it.

Second, I knew that I would need to rely on Wwise to handle some of the heavy lifting for tasks that UE doesn't currently handle automatically and that I would normally ask of a programmer - the most important of which was managing platform-specific integrations.

Binaural Renderers

‘Passengers: Awakening’ was developed for Oculus Rift, VIVE, and PSVR. There is a version of Oculus’s binaural renderer in Unreal Engine 4, so if you’re releasing only on PC you’ll be able to spatialize sounds directly in Unreal. However, since we were releasing on PSVR and the Oculus spatializer isn’t currently compatible with PS4, I needed a solution that would allow me to use the Oculus spatializer on PC and Sony’s 3D audio tools on PSVR for all of my content without implementing it twice or requiring a programmer. 

The Oculus Audio SDK includes their OSP plug-in for Wwise and installation was a breeze; the same for Sony’s tools. I set it up once near the beginning of the project and never had to update Wwise again. The Oculus spatializer lives in the project Binaries directory, though we decided to build those locally (meaning every developer rebuilt the binary files on their machine). So, we had to create an exception for the DLLs in Perforce. Other than that, easy peasy.
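One way to express that kind of exception is in a P4IGNORE file, which supports `!` to carve entries back out of an ignored directory. The sketch below is hypothetical; the actual paths and rules depended on the project's layout and Perforce setup.

```text
# Hypothetical P4IGNORE sketch: keep locally built binaries out of
# the depot, but carve out an exception for the prebuilt
# spatializer plug-in DLLs. Paths are illustrative only.
Binaries/
!Binaries/Win64/OculusSpatializerWwise.dll
```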

Wwise Integration

I'm pretty experienced with scripting, but I'm not a programmer. When I initially integrated Wwise into Unreal Engine manually, I hit a stumbling block getting the Oculus spatializer registered, but I was lucky that Giuseppe Mattiolo, an engineer from Technicolor's Research & Innovation team, was on site at MPC and able to help me out. Other than that, I was able to integrate Wwise on my own, something I didn’t think possible a few years ago. As the project progressed, I was even able to migrate to new versions of the SDK as they were released with the Wwise Launcher. I love having that ability - one more thing that can free up programmer time for other tasks.

MPAA Security Restrictions Created Challenges

Both the Technicolor Sound facility and MPC follow the best practices outlined by the Motion Picture Association of America's Content Security Program, designed to protect MPAA members' content from piracy. For those of you who aren’t familiar with the guidelines, one of the best practices is that production networks and any computer that processes or stores digital content cannot directly access the Internet. There are guidelines for transferring files in and out of a facility, as well as for installing software. This security is important, but it's a challenge if you’re used to quickly upgrading software and sending assets, since it requires someone from another team to install software or transfer files between facilities. The process creates accountability, and offline installation via the Wwise Launcher helped, but the bottlenecks created by the guidelines meant spending more time coordinating than developing.


Pre-Rendered Reverbs

The decision to pre-render Effects or to render them at runtime comes down to balancing quality, memory, and performance. Each project is different and I decided to print stereo reverb layers for dialogue assets.

The line count for ‘Passengers: Awakening’ was fairly low compared to other projects that I’ve worked on (under 500 lines), the unique areas were limited to six, and the experience is fairly linear, with most of the dialogue limited to one or two locations. For lines that played in multiple areas, I set Switches in Blueprints to trigger the correct layer in Wwise based on your location. In the end, I had under 1,700 assets (including reverb layers), so with Vorbis and ATRAC compression, memory wasn’t much of an issue. But this is a graphically rich experience, and VR requires very high frame rates, so we couldn’t risk adding any additional CPU usage. So we decided to pre-render the reverbs.
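The Switch logic itself is trivial to model: the Blueprint side only needs to map the player's current area to a Wwise Switch value, and Wwise then plays the matching pre-rendered reverb layer alongside the dry line. The area and Switch names below are hypothetical stand-ins, not the project's actual names.

```python
# Illustrative model of location-driven reverb-layer selection.
# Area and Switch names are hypothetical.

AREA_TO_SWITCH = {
    "hibernation_bay": "Reverb_HibernationBay",
    "grand_concourse": "Reverb_Concourse",
    "observatory":     "Reverb_Observatory",
}

def reverb_switch_for(area: str) -> str:
    """Pick the Switch value (and thus the pre-rendered reverb layer)
    for the player's current area; fall back to a dry default for
    unmapped spaces."""
    return AREA_TO_SWITCH.get(area, "Reverb_None")
```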

I was able to craft rich-sounding reverbs in my studio, and I’m happy with the way they sound. Still, if I were to do it all over, I would prefer to spatialize early reflections using game geometry and print only the late reflections, or use a high-quality reverb for the tails. The Oculus spatializer adds CPU cost when using early reflections, and it goes up proportionally as the room gets bigger. But Audiokinetic announced a new plug-in at GDC 2017, called Wwise Reflect, that looks to use CPU efficiently; I’d definitely like to give that a try.

 

Presence

An important aspect of creating presence in VR is having sounds respond in real time to users’ movements. Pre-rendering any sound negatively affects presence; there’s just no way around it. Pre-rendering locks a sound's perspective, and when you do that, a user no longer feels like they have agency over their place in the environment. For certain things like 360 videos, it’s fine to use an existing approach like ambisonics or quad-binaural; but don’t let anyone tell you that it’s not going to affect presence in a fully interactive experience - it will. You can mitigate that by doing things like rotating a sound field around a user, which is what the new Wwise ambisonic convolution reverb does, but to create presence you simply have to render changes in a sound’s perspective relative to the user and the environment at runtime.
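Sound-field rotation itself is cheap: for first-order ambisonics it is a small matrix applied per sample. A minimal sketch, assuming standard B-format channels (W omni, X front, Y left, Z up) and a counter-clockwise yaw angle in radians:

```python
import math

def rotate_foa_yaw(w, x, y, z, theta):
    """Rotate one first-order ambisonic (B-format) sample by yaw angle
    theta about the up axis. W and Z are invariant under yaw; X and Y
    rotate as a 2-D vector."""
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    return w, xr, yr, z
```

Rotating the field by the negative of the user's head yaw keeps sources world-stable as the listener turns, which is the one perspective change a pre-rendered ambisonic mix can still honor.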

The pre-rendered reverbs also meant that I needed to print assets for every language the project was localized into. Again, with a small line count, it wasn’t a colossal undertaking; but, if we had the CPU cycles to render and spatialize reverbs at runtime, I would do that in a heartbeat.

 

Localization

Speaking of localization, managing languages in Wwise made implementing the localized assets a breeze. Getting them to stream properly in Unreal took some effort on the part of Timothy Darcy, one of the developers at MPC VR; but, getting the audio assets in and the SoundBanks rebuilt, with all existing AK Events pointing to the correct line, was a snap.

 

Stylin’ Profiling

Ok - I had to get just one fun headline in. Seriously though, anyone who has worked on interactive projects knows how important profiling is. Profiling info in Wwise was invaluable on this project for everything from good old QA and bug fixing to ruling out audio as the cause of a frame-rate slowdown at one point in the project. Being able to say exactly how much CPU was being used, and when, helped us narrow down the cause and unblock the team.

 

Links:

http://www.technicolor.com/en/solutions-services/entertainment-services/sound-post-production/sound-lab-technicolor

http://www.moving-picture.com/film/film-vr/vr/

http://www.technicolor.com/en/solutions-services/entertainment-services/creative-houses/technicolor-los-angeles/sound

http://www.technicolor.com/en/innovation/research-innovation

https://developer.oculus.com/downloads/package/oculus-audio-sdk-plugins/

http://www.mpaa.org/content-security-program/

Viktor Phoenix

Sound Supervisor and Senior Technical Sound Designer

The Sound Lab at Technicolor


Viktor Phoenix is Sound Supervisor and Senior Technical Sound Designer for The Sound Lab at Technicolor in Los Angeles and brings expertise in systemic sound design and 3D audio to game, VR, and 360 video projects at the studio. He has 15 years of experience designing, implementing, and mixing interactive sound on projects such as VRSE 'Click Effect', Three One Zero 'ADR1FT', Cloudhead Games 'The Gallery', Kite & Lightning 'Insurgent VR', Turtle Rock Studios 'Evolve', Pandemic Studios 'Mercenaries', and the Passengers VR Experience.

 @viktorphoenix
