
Community Q&A

Welcome to Audiokinetic’s community-driven Q&A forum. This is the place where Wwise and Strata users help each other out. For direct help from our team, please use the Support Tickets page. To report a bug, use the Bug Report option in the Audiokinetic Launcher. (Note that Bug Reports submitted to the Q&A forum will be rejected. Using our dedicated Bug Report system ensures your report is seen by the right people and has the best chance of being fixed.)

To get the best answers quickly, follow these tips when posting a question:

  • Be Specific: What are you trying to achieve, or what specific issue are you running into?
  • Include Key Details: Include details like your Wwise and game engine versions, operating system, etc.
  • Explain What You've Tried: Let others know what troubleshooting steps you've already taken.
  • Focus on the Facts: Describe the technical facts of your issue. Focusing on the problem helps others find a solution quickly.

0 votes

Hello!

We're currently considering switching to Wwise in our Unity project, but I have a few concerns that I can't find answers to.

  1. Converting our dialogue system to Wwise events would be cumbersome: I'd rather not have to adapt our established AssetBundle-based dialogue system to work with Wwise events instead of AudioClips (our game has ~15,000 lines of dialogue). Is there some sort of AkSoundEngine.PlaySound(clip) equivalent, or does absolutely everything have to go through SoundBanks and events? Is there an easy way to convert every single dialogue clip to an event, or will our sound designers need to go through an extra, very tedious step every time dialogue is added or changed? (A rough sketch of the per-line change I'm trying to avoid follows this list.)
  2. Lip sync: Currently we are planning for different tiers of lip sync for different parts of the game:
    1. Hand-animated: Shouldn't be a problem, as the animators can animate to the sound itself.
    2. Custom phoneme metadata: Designers or animators will have a way to arrange phonemes on a timeline. Also shouldn't be a problem, but is there a way to acquire a waveform for a specific Wwise sound to make this easier?
    3. Automatic: Simple mouth movements based on the waveform of the audio clip. We absolutely need to grab the waveform for this, at least so we can bake the mouth movements (a sketch of what I mean by baking follows this list).
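
To make point 1 concrete, the per-call-site change I'm trying to avoid looks roughly like this. It's only a sketch: DialoguePlayer, the line IDs, and the "Play_VO_" naming are our own made-up conventions, and AkSoundEngine.PostEvent is simply the one playback path I'm aware of in the Unity integration.

    using UnityEngine;

    // Hypothetical wrapper around our existing dialogue call sites, only to
    // show the shape of the change; none of these names come from Wwise
    // except AkSoundEngine.PostEvent itself.
    public class DialoguePlayer : MonoBehaviour
    {
        public AudioSource speakerSource;

        // Today: a line resolves to an AudioClip loaded from an AssetBundle.
        public void PlayLine(AudioClip clip)
        {
            speakerSource.PlayOneShot(clip);
        }

        // With Wwise: each line would need a matching Event in a loaded
        // SoundBank, posted by name (or cached ID) instead of passing a clip.
        public uint PlayLine(string lineId)
        {
            // The returned playing ID could later be used for callbacks/stops.
            return AkSoundEngine.PostEvent("Play_VO_" + lineId, gameObject);
        }
    }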

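And for the automatic lip sync tier, this is roughly what I mean by "baking", using a plain Unity AudioClip purely for illustration; the open question for us is how to get equivalent sample data for a sound that lives in a Wwise SoundBank.

    using UnityEngine;

    public static class LipSyncBaker
    {
        // Bakes a coarse mouth-open curve (0..1) from an AudioClip by taking
        // the RMS amplitude of fixed-size windows. Requires the clip to be
        // loaded decompressed; AudioClip.GetData does not work on streamed clips.
        public static float[] BakeMouthCurve(AudioClip clip, int windowsPerSecond = 30)
        {
            var samples = new float[clip.samples * clip.channels];
            clip.GetData(samples, 0);

            int windowSize = Mathf.Max(1, clip.frequency * clip.channels / windowsPerSecond);
            int windowCount = Mathf.CeilToInt(samples.Length / (float)windowSize);
            var curve = new float[windowCount];

            for (int w = 0; w < windowCount; w++)
            {
                int start = w * windowSize;
                int end = Mathf.Min(start + windowSize, samples.Length);
                double sum = 0;
                for (int i = start; i < end; i++)
                    sum += samples[i] * samples[i];

                float rms = Mathf.Sqrt((float)(sum / (end - start)));
                curve[w] = Mathf.Clamp01(rms * 4f); // rough scaling into the 0..1 range
            }
            return curve;
        }
    }
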
Thanks,

Bryant

in General Discussion by Bryant C. (150 points)

1 Answer

0 votes
Hi Bryant,

Did you find a solution to the first point about playing arbitrary clips? I have the same challenge and would like to play either an audio clip or, ideally, a specific audio object from a bank without needing an event for each line of dialogue.

Simon
by Simon A. (140 points)
...