Real-Time Synthesis for Sound Creation in Peggle Blast!

Game Audio / Sound Design

As a primer for their GDC 2015 talk, Peggle Blast! Big Concepts, Small Project, PopCap Team Audio has written a short blog series focusing on different aspects of the audio production, covering real-time synthesis, audio scripting, MIDI scoring, and more! 

Peggle Blast! Peg-Hits and the Music System - RJ Mattingly

Scoring Peggle Blast! New Dog, Old Tricks - Guy Whitmore

Real-Time Synthesis for Sound Creation in Peggle Blast! - Jaclyn Shumate

Recently, on Peggle Blast, PopCap's newest mobile addition to the Peggle franchise, we were challenged by a directive from the team to keep all of our audio content under 5 MB. Having little to no memory is an audio designer's worst nightmare, and we were concerned that we wouldn't be able to come close to the quality bar we had achieved on our last Peggle project, Peggle 2. For comparison, Peggle 2 shipped on Xbox One with 783 MB of audio.

To work around the lack of available memory, we used a combination of MIDI and real-time synthesis for audio creation. We built sounds using tone generators and digital signal processing (DSP) effects within the audio engine (Wwise), while matching those sounds directly to visuals, in real time, in the game engine (Unity). We didn't work in the traditional way of creating content, where you take game capture and then make sounds to picture within your Digital Audio Workstation (DAW). This new workflow allowed for an immersive development experience, a tighter feedback loop, and a more dynamic product overall.


Inspiration came from classic games like Super Mario Bros., The Legend of Zelda, and Pac-Man. These games achieved iconic audio despite severe limitations; in fact, those constraints contributed greatly to their signature styles and may even have boosted creativity. On Peggle Blast, the constraints shaped our daily goal of creating sounds without using any memory. Memory-hogging .wav files were reserved for a few iconic Peggle sounds and for specially curated .wav building blocks that we could manipulate in varied ways and layer with our generated sounds. In the end, we shipped Peggle Blast with only 1.3 MB of sound effects and 3.5 MB of music, yet it contained hundreds of sound effects and over 30 minutes of interactive music.

The Workflow

Let’s say I wanted to make a sound for a splashy visual effect that bursts across the screen when the player achieves a high score, and I wanted it to sound like a sweeping bleep bloop with magic sparkle dust added in. The first step I’d take would be to make an approximation of the sound within Wwise. In this case, to create the bleep bloop, it might be a square wave pitched up over 0.5 seconds, created using Wwise’s tone generator with various delays added. Then, to get some sparkle dust, I could add in a swooshy sound created in SoundSeed Air, with tremolo and more delay. By taking generated sounds and experimenting with playback behaviors, DSP, and layering, a unique sound is very quickly born.
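
For readers who want to hear the idea outside of Wwise, here is a rough offline sketch in Python of that kind of sound: a square wave sweeping upward over 0.5 seconds, a simple feedback delay, and a tremolo'd noise layer standing in for the SoundSeed Air swoosh. The frequencies, delay time, and envelope values are illustrative guesses, not the settings we used in the game.

```python
# Offline approximation of a "bleep bloop with sparkle dust" built from a
# tone generator plus DSP; all parameter values here are assumptions.
import numpy as np
import wave

SR = 44100                       # sample rate in Hz
DUR = 0.5                        # sweep length in seconds
t = np.arange(int(SR * DUR)) / SR

# Square wave sweeping from 440 Hz up to 1760 Hz over 0.5 s.
f0, f1 = 440.0, 1760.0
freq = f0 + (f1 - f0) * (t / DUR)
phase = 2 * np.pi * np.cumsum(freq) / SR
bleep = np.sign(np.sin(phase)) * 0.3

# Simple feedback delay (120 ms, 40% feedback) to give the bleep a tail.
delay_samples = int(0.120 * SR)
out = np.concatenate([bleep, np.zeros(delay_samples * 6)])
for i in range(delay_samples, len(out)):
    out[i] += 0.4 * out[i - delay_samples]

# "Sparkle dust": a noise swoosh with a decaying envelope and 8 Hz tremolo.
tn = np.arange(len(out)) / SR
noise = np.random.uniform(-1, 1, len(out))
swoosh = noise * np.exp(-3.0 * tn) * (0.5 + 0.5 * np.sin(2 * np.pi * 8 * tn)) * 0.15

mix = np.clip(out + swoosh, -1.0, 1.0)

# Write a 16-bit mono WAV so the result can be auditioned.
with wave.open("bleep_bloop.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes((mix * 32767).astype(np.int16).tobytes())
```

In the actual workflow, of course, none of this lived in offline code: the equivalent sweep, delay, and tremolo were settings inside Wwise that could be tweaked while the game was running.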

Once I had a basic framework for a sound that I was happy with, I would connect Wwise to Unity via the Wwise profiler, view the visual effect in real time in the game, and continue to tune the sound until it fit the gameplay sequence I was creating it for. It was real-time sound creation with real-time sound generation!

We spent a lot of time exploring different methods to expand our palette of sounds with our limited toolset. There were a few of us working on the project at various times, and we borrowed liberally from each other’s tricks. Someone would come up with a new way to get a different sound, and that technique would then proliferate into other sounds throughout the project. Additionally, effects like escalating pitch over time on the same sound were a big win for creating sound variability and excitement without requiring a new asset.
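
As a minimal sketch of that escalating-pitch trick, the function below reuses one asset but pitches each successive playback up by a couple of semitones, capped an octave above the original. The step size and cap are illustrative assumptions, not the values used in Peggle Blast; inside Wwise this behavior would normally live in a pitch setting or RTPC rather than in code.

```python
# Hypothetical escalating-pitch mapping: same asset, higher pitch per hit.

def escalating_pitch_ratio(hit_index: int,
                           semitones_per_hit: float = 2.0,
                           max_semitones: float = 12.0) -> float:
    """Return the playback-rate multiplier for the Nth consecutive hit."""
    semitones = min(hit_index * semitones_per_hit, max_semitones)
    return 2.0 ** (semitones / 12.0)

# Example: the first few hits of a streak.
for hit in range(5):
    print(hit, round(escalating_pitch_ratio(hit), 3))
```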


The Results

There were a number of interesting outcomes of working this way, beyond the memory savings. Artistically, creating sounds from such basic and pure elements was a fun and refreshing exercise in sound design; I was forced entirely away from real-world assets and literal sound design. Technically speaking, not needing a DAW to create sounds made for a faster design process. No time was spent capturing gameplay, importing it into the DAW, exporting a final asset, and repeating that process whenever edits were needed. The DAW was no longer an extra layer between the sound designer and the game, and it felt really good. We were more closely integrated with the game than I had ever been able to work in the past.

With real-time sound generation, the sky was the limit in terms of interactivity. All sounds could be stretched, pulled, pitched, or delayed in whatever way best matched gameplay, and it was truly dynamic. The audio content was a living, malleable part of the gameplay. With RJ Mattingly down the hall handling all of our scripting, it was quick work to add Real-Time Parameter Controls (RTPCs) or whatever else we wanted to use to push the interactivity. Another enormous benefit was that when art or design changed an asset, it was fast and easy to iterate on the sound and make it match the new timing. You could just jump into the audio engine and change a few numbers that controlled the length of a sound, all while viewing the asset in real time in the game to make sure it fit the visuals (can you imagine being able to do that on a large console project? It would be awesome).
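
Conceptually, an RTPC is just a curve that maps a clamped game-side value onto an audio property. The tiny sketch below mimics that mapping in Python; the parameter name, ranges, and target properties are hypothetical, and in the actual project the curves lived inside Wwise and were driven from Unity by our audio scripting.

```python
# Hypothetical RTPC-style mapping: clamp a game parameter, then map it
# linearly onto an audio property range.

def rtpc_map(value: float, in_min: float, in_max: float,
             out_min: float, out_max: float) -> float:
    """Clamp a game parameter and linearly map it onto an audio property."""
    x = min(max(value, in_min), in_max)
    return out_min + (x - in_min) / (in_max - in_min) * (out_max - out_min)

# e.g. an assumed "ball speed" parameter driving a tremolo rate and a
# pitch offset at the same time.
ball_speed = 7.5
tremolo_hz = rtpc_map(ball_speed, 0.0, 10.0, 2.0, 12.0)
pitch_cents = rtpc_map(ball_speed, 0.0, 10.0, 0.0, 700.0)
print(round(tremolo_hz, 2), round(pitch_cents, 1))
```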

Working on this game became an inspiration for what can be accomplished with truly generative, dynamic audio. A tight feedback loop, from sound creation all the way through gameplay, made for a great-feeling workflow. The limited toolset forced us to home in on how to maximize the effectiveness of audio content without increasing its size. Peggle Blast was a good reminder that creativity can flourish under constraints, and I can’t wait to keep exploring real-time sound generation techniques on future games, both large and small.

This blog was originally published on Audio Gang.

Jaclyn Shumate

Interactive Audio Director

Jaclyn Shumate is an award-winning game audio specialist who works across multiple platforms to create and implement AAA-quality audio for casual games.

 @shujaxaudio
