Behind the scenes: Sounds of the "Passengers: Awakening" VR Experience with Technicolor Sound Designer Viktor Phoenix

Sound Design / VR Experience

  • Viktor Phoenix: Sound Supervisor and Senior Technical Sound Designer for The Sound Lab at Technicolor
  • Project: Passengers: Awakening
  • Role: ADR Supervision and Technical Sound Design for all narrative elements
  • Challenges: Small team (+/- 10 devs) with the goal of creating a AAA-quality interactive VR experience in a short time frame
  • Audio Requirements: Real-time spatialization of narrative elements on multiple platforms and in three languages


I was brought in to work with the MPC VR team on ‘Passengers: Awakening’, the companion VR experience to the Sony Pictures movie starring Chris Pratt and Jennifer Lawrence. Based on my experience with Wwise over the years (my first time using it was almost ten years ago, at Pandemic Studios), I knew that we needed to use it on this project to achieve our goals in the time frame that we had.

I worked closely with the developers at MPC to drive dialogue with logic built in Blueprints, the visual scripting system in Unreal Engine. The combination of Blueprints in UE and Wwise Events allowed me to do more without using up programmer time to create hooks.
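For illustration, here is a rough C++ equivalent of what those Blueprint nodes do through the Wwise Unreal integration. The Event name and the function are hypothetical; the actual project drove this entirely from Blueprints.

```cpp
// Minimal sketch of posting a narrative Wwise Event from game code.
// "Play_DLG_Intro_01" is a made-up Event name; in practice the Wwise
// Unreal integration exposes the same call as a Blueprint node.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void PlayNarrativeLine(AkGameObjectID SpeakerObjectId)
{
    // The game object must already be registered with the sound engine
    // (the Unreal integration registers one per AkComponent).
    AK::SoundEngine::PostEvent("Play_DLG_Intro_01", SpeakerObjectId);
}
```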

The built-in audio engine in Unreal can handle a lot of tasks, but there are several aspects of working with Wwise that I prefer. First, .uasset files are binary, so they aren't text-editable or easily merged. I find having the option of editing files in a text editor critical. Wwise project files are XML-based and live outside of the UE project, so they're easily edited and I can merge them in Perforce. I don't often have to do that, but it's a lifesaver when I need it.

Second, I knew that I would need to rely on Wwise to handle some of the heavy lifting for tasks that UE doesn't currently handle automatically and that I would normally ask of a programmer - the most important of which was managing platform-specific integrations.

Binaural Renderers

‘Passengers: Awakening’ was developed for Oculus Rift, VIVE, and PSVR. There is a version of Oculus’s binaural renderer in Unreal Engine 4, so if you’re releasing only on PC you’ll be able to spatialize sounds directly in Unreal. However, since we were releasing on PSVR and the Oculus spatializer isn’t currently compatible with PS4, I needed a solution that would allow me to use the Oculus spatializer on PC and Sony’s 3D audio tools on PSVR for all of my content without implementing it twice or requiring a programmer. 

The Oculus Audio SDK includes an OSP plug-in for Wwise, and installation was a breeze; the same goes for Sony’s tools. I set it up once near the beginning of the project and never had to update Wwise again. The Oculus spatializer lives in the project’s Binaries directory, but we decided to build those locally (meaning every developer rebuilt the binary files on their machine), so we had to create an exception for the DLLs in Perforce. Other than that, easy peasy.
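One common way to set up that kind of exception is an ignore file referenced by the P4IGNORE environment variable. The entries below are only a sketch; the real paths depend on the project layout.

```
# Hypothetical .p4ignore entries so locally rebuilt spatializer DLLs
# are not added to the depot. Adjust paths to the actual project layout.
Binaries/Win64/*.dll
Plugins/Wwise/Binaries/
```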

Wwise Integration

I'm pretty experienced with scripting, but I'm not a programmer. When I initially integrated Wwise into Unreal Engine manually, I hit a stumbling block getting the Oculus spatializer registered, but I was lucky that Giuseppe Mattiolo, an engineer from Technicolor's Research & Innovation team, was on site at MPC and able to help me out. Other than that, I was able to integrate Wwise on my own, something I didn’t think possible a few years ago. As the project progressed, I was even able to migrate to new versions of the SDK as they were released with the Wwise Launcher. I love having that ability - one more thing that can free up programmer time for other tasks.

MPAA Security Restrictions Created Challenges

Both the Technicolor Sound facility and MPC follow the Best Practices outlined by the Motion Picture Association of America Content Security Program, designed to protect MPAA members' content from piracy. For those of you who aren’t familiar with the guidelines, one of the best practices is that production networks and any computer that processes or stores digital content cannot directly access the Internet. There are also guidelines for transferring files in and out of a facility, as well as for installing software. This security is important, but it's a challenge if you’re used to quickly upgrading software and sending assets, since it requires someone from another team to install software or transfer files between facilities. The process creates accountability, and offline installation via the Wwise Launcher helped, but the bottlenecks created by the guidelines meant spending more time coordinating than developing.


Pre-Rendered Reverbs

The decision to pre-render Effects or to render them at runtime comes down to balancing quality, memory, and performance. Each project is different; on this one, I decided to print stereo reverb layers for the dialogue assets.

The line count for ‘Passengers: Awakening’ was fairly low compared to other projects that I’ve worked on (under 500 lines), the unique areas were limited to six, and the experience is fairly linear, with most of the dialogue limited to one or two locations. For lines that played in multiple areas, I set Switches in Blueprint to trigger the correct layer in Wwise based on your location. In the end, I had under 1,700 assets (including reverb layers), so with Vorbis and ATRAC compression, memory wasn’t much of an issue. But this is a graphically rich experience, and VR requires very high frame rates, so we couldn’t risk adding any additional CPU usage. So we decided to pre-render the reverbs.
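As a rough illustration of that Switch-driven setup (the actual logic lived in Blueprint, and the group, state, and Event names here are hypothetical):

```cpp
// Sketch: select the pre-rendered reverb layer for the current area via a
// Switch, then post the dialogue Event. All names are made up for illustration.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void PlayLineForArea(AkGameObjectID SpeakerObjectId, const char* AreaName)
{
    // The Switch Container in Wwise picks the asset printed with the
    // matching reverb layer for this area.
    AK::SoundEngine::SetSwitch("Location", AreaName, SpeakerObjectId);

    // Play the line itself.
    AK::SoundEngine::PostEvent("Play_DLG_Line_012", SpeakerObjectId);
}
```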

I was able to craft rich-sounding reverbs in my studio, and I’m happy with the way they sound. Still, if I were to do it all over, I would prefer to spatialize early reflections using game geometry and print only the late reflections, or use a high-quality reverb for the tails. The Oculus spatializer adds CPU cost when using early reflections, and that cost goes up proportionately as the room gets bigger. But Audiokinetic announced a new plug-in at GDC 2017, called Wwise Reflect, that looks to use CPU efficiently; I’d definitely like to give that a try.

 

Presence

An important aspect of creating presence in VR is having sounds respond in real time to users’ movements. Pre-rendering any sound negatively affects presence. There’s just no way around it. Pre-rendering locks a sound’s perspective, and when you do that a user no longer feels like they have agency over their place in the environment. For certain things like 360 videos, it’s fine to use an existing approach like ambisonics or quad-binaural; but don’t let anyone tell you that it won’t affect presence in a fully interactive experience - it will. You can mitigate it by doing things like rotating a sound field around the user, which is what the new Wwise ambisonic convolution reverb does, but to create presence you simply have to render changes to a sound’s perspective relative to the user and the environment at runtime.
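Concretely, that runtime perspective update comes down to feeding the sound engine fresh emitter and listener transforms every frame. Here is a minimal sketch at the SDK level, with all values assumed; the Unreal integration normally handles this for you through AkComponents.

```cpp
// Sketch: update an emitter's position and orientation each frame so the
// spatializer can re-render its perspective relative to the listener.
// The game object ID and coordinates are placeholders.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

void UpdateEmitterTransform(AkGameObjectID EmitterId,
                            float X, float Y, float Z,
                            float FrontX, float FrontY, float FrontZ,
                            float TopX, float TopY, float TopZ)
{
    AkSoundPosition SoundPos;
    SoundPos.SetPosition(X, Y, Z);
    SoundPos.SetOrientation(FrontX, FrontY, FrontZ, TopX, TopY, TopZ);
    AK::SoundEngine::SetPosition(EmitterId, SoundPos);
}
```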

The pre-rendered reverbs also meant that I needed to print assets for every language the project was localized into. Again, with a small line count, it wasn’t a colossal undertaking; but, if we had the CPU cycles to render and spatialize reverbs at runtime, I would do that in a heartbeat.

 

Localization

Speaking of localization, managing languages in Wwise made implementing the localized assets a breeze. Getting them to stream properly in Unreal took some effort on the part of Timothy Darcy, one of the developers at MPC VR; but, getting the audio assets in and the SoundBanks rebuilt, with all existing AK Events pointing to the correct line, was a snap.
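Under the hood, language selection in the Wwise SDK is a single call. This is only a sketch of that layer: the language names must match the ones defined in the Wwise project, and re-loading the localized SoundBanks afterwards is omitted.

```cpp
// Sketch: switch the spoken language so voice assets resolve to the
// per-language folders of the generated SoundBanks. Names are examples.
#include <AK/SoundEngine/Common/AkStreamMgrModule.h>

void SetSpokenLanguage(bool bUseFrench)
{
    AK::StreamMgr::SetCurrentLanguage(bUseFrench ? AKTEXT("French(France)")
                                                 : AKTEXT("English(US)"));
    // Localized banks need to be re-loaded after changing the language.
}
```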

 

Stylin’ Profiling

Ok - I had to get just one fun headline in. Seriously though, anyone who has worked on interactive projects knows how important profiling is. The profiling info in Wwise was invaluable on this project, for everything from good old QA and bug fixing to ruling out audio as the cause of a frame-rate slowdown at one point in the project. Being able to say exactly how much CPU was being used, and when, helped us narrow down the cause and unblock the team.
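For context, attaching the Wwise authoring tool's Profiler to a running build just requires the communication module to be initialized in non-shipping builds. A minimal sketch follows; the Unreal integration normally sets this up for you.

```cpp
// Sketch: enable the Wwise communication layer in development builds so the
// authoring tool's Profiler can connect and capture CPU, voice, and
// streaming data from the running game.
#ifndef AK_OPTIMIZED
#include <AK/Comm/AkCommunication.h>

bool InitWwiseProfilerConnection()
{
    AkCommSettings CommSettings;
    AK::Comm::GetDefaultInitSettings(CommSettings);
    return AK::Comm::Init(CommSettings) == AK_Success;
}
#endif
```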

 

Links:

http://www.technicolor.com/en/solutions-services/entertainment-services/sound-post-production/sound-lab-technicolor

http://www.moving-picture.com/film/film-vr/vr/

http://www.technicolor.com/en/solutions-services/entertainment-services/creative-houses/technicolor-los-angeles/sound

http://www.technicolor.com/en/innovation/research-innovation

https://developer.oculus.com/downloads/package/oculus-audio-sdk-plugins/

http://www.mpaa.org/content-security-program/

Viktor Phoenix

Sound Supervisor and Senior Technical Sound Designer

The Sound Lab at Technicolor

Viktor Phoenix is Supervising Sound Designer and Senior Technical Sound Designer for The Sound Lab at Technicolor in Los Angeles, and brings expertise in systemic sound design and 3D audio to game, VR, and 360 video projects at the studio. He has 15 years of experience designing, implementing, and mixing interactive sound on projects such as VRSE's 'Click Effect', Three One Zero's 'ADR1FT', Cloudhead Games' 'The Gallery', Kite & Lightning's 'Insurgent VR', Turtle Rock Studios' 'Evolve', Pandemic Studios' 'Mercenaries', and the Passengers VR Experience.

 @viktorphoenix
