Men in Black: Galactic Getaway, based on the popular movie franchise and developed for Dave & Buster’s by VRstudios, is an exclusive VR simulator experience at the nationwide entertainment operator.
The wacky adventure takes players, who are “accidental” MIB agents, from the alleys of New York to the streets of London. Their job is to chase down and apprehend a crew of alien criminals who have stolen deadly jars of glowing green goo and insect parts before they can escape the planet. The Men in Black: Galactic Getaway experience has different endings, randomized player characters, seven different randomly assigned weapons, and dialogue that responds in real time to the player’s performance.
Hexany Audio handled all of the SFX, VO, and technical audio implementation. They also created the score and incorporated Danny Elfman’s famous themes into the experience.
What preparation did you do before starting to work on the music and audio for Men in Black: Galactic Getaway?
Richard: When we start a new project, we usually start by pulling our internal team together. Though my official title at our studio is “Audio Director,” that means a lot of different things from project to project. For this one, I handled voice over direction, and our lead sound designer, Jason Walsh, crafted the bulk of the sounds. Our Technical Sound Designer Max Harchik implemented nearly all of the content, and our composer Matthew Carl Earl wrote the musical score for the experience. Our producer Brandan Dotson kept the show running, and some of our other sound designers contributed additional SFX work as well, of course. Long story short, lots of awesome people were involved to help bring this one to life on the audio side!
Jason: Men in Black is a series that has its own flavor of quirky Sci-fi styling, which also applies to the series’ music and sound. Per direction from the client, we needed to stay faithful to the films so watching the movies was an important (and enjoyable) first step.
What was your overarching vision for the music and audio?
Jason: Early in development we spent time working out all the gun sounds. Since Galactic Getaway is an arcade shooter, the gunshots are the most prevalent set of sounds the player is going to hear. Achieving the Men in Black aesthetic early in the process of making content meant we had a good reference and method for designing the rest of the sound.
Men in Black: Galactic Getaway combines classic elements of Men in Black with some all-new story elements, weapons and characters. How did you approach driving home iconic sounds while creating new audio and themes? What kind of audio and music references will the players easily be able to pick out from the franchise?
Jason: Having easily accessible reference material and a clear direction on the game development from Chanel Summers, VP of Creative Development at VRstudios, helped us home in on the sweet spot for original sound design that still felt like Men in Black. I think some very memorable things from the movies are all the chrome sci-fi guns (especially the Noisy Cricket), the transforming vehicle, and the Neuralyzer device. We wanted to stay as true to the movies as we could with sound for that kind of pre-existing content.
Two alien guides lead players through the experience. How does the audio participate in guiding the player through this experience and directing their attention?
Jason: The aliens that take you through the experience are Marv and Knute. Their job is to provide tasks to the players throughout each phase of the mission. The character dialogue is the main driver of the story and objectives, as well as the comedy that gives Galactic Getaway its Men in Black identity. Something that I think is particularly cool about the gameplay is that a lot of the talking Marv and Knute do is in response to player actions. Players are free to shoot at whatever they want, as much as they want, and destroy basically everything along the path in the game. Depending on how players perform, the guides give feedback that can help them achieve a better score.
Were there any technical implementation considerations or ideas you had from the outset which you knew would help you towards achieving your creative vision? What Wwise features, if any, were especially helpful in allowing you to realise your creative vision?
Jason: The game runs in Unreal’s level sequencer and is roughly a five-minute experience with some branching story elements. This means that we have to account for alterations to pacing and gameplay in the level sequences. We made sure that our sound design work was broken out into small assets so that we could adjust things as the sequence continued to be polished. For example, an explosion would have sounds for impact, LFE, explosion tail, material type, debris size, and length of debris clutter. This allowed us to design sound in Wwise and make even further changes to variations in Unreal with the level sequencer.
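The layered approach Jason describes can be thought of as a simple data model: each explosion event draws one variation from each layer pool. This is only an illustrative sketch (the layer and asset names are invented, and in production these pools would be Wwise random containers, not Python lists):

```python
import random

# Hypothetical layer pools for a single explosion event.
EXPLOSION_LAYERS = {
    "impact":   ["impact_01", "impact_02", "impact_03"],
    "lfe":      ["lfe_01", "lfe_02"],
    "tail":     ["tail_short", "tail_long"],
    "material": ["metal", "concrete", "glass"],
    "debris":   ["debris_small", "debris_large"],
}

def build_explosion(rng=random):
    """Pick one variation per layer, so each playback recombines
    small assets into a fresh-sounding whole."""
    return {layer: rng.choice(pool) for layer, pool in EXPLOSION_LAYERS.items()}
```

Because each layer is an independent small asset, any single layer (say, the debris length) can be swapped or retimed in the sequencer without retouching the rest of the explosion.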
Max: We knew the game would feature hundreds of on-screen enemies, destructible environments, and four different players constantly shooting weapons. With all of that comes a pretty hefty voice count, so we knew that was something we’d need to pay close attention to if we wanted to keep the experience running smoothly. With Wwise’s voice limiting tools, we were able to keep our CPU usage as low as possible by killing unneeded voices or in specific cases, such as the enemy alien insects’ wing sounds, sending them to virtual voices.
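Wwise handles playback limits and virtual voices natively through its project settings, so no game code like this is required; purely as an illustration of the policy Max describes, here is a sketch (with made-up voice data) of capping physical voices by priority and sending the overflow to a virtual list:

```python
def limit_voices(voices, max_physical):
    """voices: list of (name, priority) pairs, higher priority = more important.
    Returns (physical, virtual): the most important voices keep rendering;
    the rest are virtualized (position tracked, but no DSP cost)."""
    ranked = sorted(voices, key=lambda v: v[1], reverse=True)
    return ranked[:max_physical], ranked[max_physical:]
```

Under this policy, low-priority loops like the insect wing sounds are the first to go virtual when the voice count spikes, and they can resume seamlessly when headroom returns.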
How did you prioritize your audio for this experience? Can you tell us a little bit about any systems you may have designed for it? And, how did you leverage Wwise?
Max: We made extensive use of Wwise’s flexible bussing structure to group sounds and music into different categories such as dialogue, ambience, weapons, destructibles, etc. This gave us fast and easy control over which elements should be most present at any given moment and allowed us to immediately get new sounds fitting nicely into the mix as they came online. We also used Wwise meters to set up logic for ducking certain groups so that important sounds and key dialogue could cut through.
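The meter-driven ducking Max mentions amounts to sidechain compression: when the metered level of the dialogue bus exceeds a threshold, the ambience or weapons bus is attenuated so the line cuts through. A minimal sketch of that gain computation (the threshold and maximum duck amount are invented values, not the project's settings):

```python
def duck_gain_db(sidechain_level_db, threshold_db=-20.0, max_duck_db=-9.0):
    """Return the attenuation (in dB) to apply to the ducked bus.
    Below threshold: no ducking. Above: 1 dB of ducking per dB over
    the threshold, clamped at max_duck_db."""
    if sidechain_level_db <= threshold_db:
        return 0.0
    return max(max_duck_db, -(sidechain_level_db - threshold_db))
```

In practice a real ducker also smooths this gain over attack and release times so the ambience fades rather than snaps, which Wwise's meter/ducking settings expose directly.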
The player moves through a number of different environments throughout the experience, starting in the open streets of New York, heading down into subway tunnels, and eventually to the top of the London Gherkin. We took advantage of Wwise’s real-time reverbs to help ground sounds in the space the player is currently in and provide some change in the soundscape as they move through the experience.
Can you describe how you approached creating a cohesive spatial mix for this experience?
Max: With so much going on at all times we had to pay a lot of attention to the mix to make sure that important things such as dialogue could always be heard. Luckily for us the experience has a set route, so we were able to design the sounds and music around specific moments and be confident they would play out the same way for all players. We used various states triggered throughout the experience to adjust the balance between sonic elements and direct the player’s attention towards important story beats.
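In Wwise, states can apply per-bus volume offsets, which is one way the re-balancing Max describes can work. The sketch below mimics that idea; the state names and dB offsets are hypothetical, not taken from the project:

```python
# Hypothetical per-state bus volume offsets in dB, mimicking how a
# Wwise state change re-balances the mix at a story beat.
MIX_STATES = {
    "combat":     {"weapons": 0.0,  "ambience": -6.0, "dialogue": 0.0},
    "story_beat": {"weapons": -4.0, "ambience": -9.0, "dialogue": 2.0},
}

def apply_state(base_volumes_db, state):
    """Return the bus volumes after the given state's offsets are applied."""
    offsets = MIX_STATES[state]
    return {bus: vol + offsets.get(bus, 0.0)
            for bus, vol in base_volumes_db.items()}
```

Because the route is fixed, each state transition can be authored against a known moment, so the same re-balance lands identically for every group of four players.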
Were there any expectations you may have had about a creative or technical direction you were wanting to take that did not end up working out just as you had imagined it, and why?
Jason: We did have some finer detail sounds that ultimately didn’t make the cut. As the game came together with four players’ worth of sound, music, and dialogue, those details simply didn’t come through as benefiting the gameplay.
What are you most proud of when it comes to this project?
Jason: I was really happy with how fluid the process between creative and technical sound design came together on this project.
What was the most fun part of working on this project?
Jason: There’s something really satisfying about putting a bunch of deconstructed sound elements together and hearing it play back in one satisfying moment in Wwise. I was especially happy with how punchy the guns and explosions came out.
Hexany Audio has had a long relationship producing the music and sound for VRstudios’ projects. Can you describe your work process and perhaps how it may have evolved from previous LBE projects?
Brandan: We really enjoy working with VRstudios and have developed a great workflow with Chanel Summers that we’re continually improving with each experience. The collaboration begins during pre-production. Because most of these LBEs have been based on iconic IPs, we typically like to re-familiarize ourselves with those franchises first as we’re conceptualizing the design work. When taking on a project based on the legacy of many artists’ work, it is important that we’re not only creating something new and fresh, but also paying homage to the history. Once the script has been locked in and the initial experience has been blocked out, we can begin fleshing out a base layer of sound and music. After that there is daily communication between our team and VRstudios as we advance to each milestone in production. During this time, we are polishing assets and prioritizing work as art, animation, VO, and VFX come online. Eventually we take a final design/music/mix pass once final animations and VFX have been committed, and then of course... A few test runs on the Dave & Buster’s simulator :)
As far as how our work process has evolved, I think it’s just become more and more streamlined. We know what the scope and expectations of our work are, and we have a really fantastic collaboration that always pushes everyone to deliver some top-notch immersive experiences. Wwise is absolutely a huge part of that efficiency, as it allows us the flexibility we need to work in parallel with a constantly evolving project.
Richard Ludlow Audio Director, Owner
Jason Walsh Sr. Composer, Sound Designer
Brandan Dotson Director of Operations, Producer
Max Harchik Technical Sound Designer