Audiokinetic's Community Q&A is the forum where users can ask and answer questions within the Wwise and Strata communities. If you would like to get an answer from Audiokinetic's Technical support team, make sure you use the Support Tickets page.


Hi everyone,

I'm working on a music-driven game where we’d like to analyze audio and trigger visuals based on frequency and amplitude—similar to a spectrogram.

Initially, we thought it would be possible to route audio from Wwise into a UE Submix, and then use that in a Niagara system with the spectrogram feature. However, it seems that routing audio directly from Wwise into a UE Submix isn’t supported. Could anyone confirm this?

As a workaround, we’re considering splitting the main music track into a few frequency bands—so we'd have three separate tracks, each covering a different frequency range. We’d then apply a meter to each one and read the RTPC values from UE. This way, we could at least get a rough frequency analysis using three bands.
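To make the three-band idea concrete: the Wwise Meter typically reports a level in dB, so on the UE side each band's RTPC value would need mapping to a 0..1 range before feeding a Niagara parameter. A minimal sketch, assuming each Meter drives a Game Parameter in dB over a -48..0 range (the range and function names here are assumptions, not part of the Wwise API):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Hypothetical helper: normalize a meter level in dB into 0..1 for visuals.
// The -48..0 dB range is an assumption; in practice it should match the
// output range configured on the Wwise Meter effect.
float NormalizeMeterDb(float LevelDb, float MinDb = -48.0f, float MaxDb = 0.0f)
{
    const float T = (LevelDb - MinDb) / (MaxDb - MinDb);
    return std::clamp(T, 0.0f, 1.0f);
}

// Combine the three band meters (low / mid / high) into per-band intensities
// that a Niagara system could consume as user parameters.
std::array<float, 3> BandIntensities(float LowDb, float MidDb, float HighDb)
{
    return { NormalizeMeterDb(LowDb),
             NormalizeMeterDb(MidDb),
             NormalizeMeterDb(HighDb) };
}
```

The actual dB values would come from querying the three Game Parameters each tick through the Wwise Unreal integration; the mapping above is just the glue between the meter readings and the visuals.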

What do you think? Is there a better approach for this? Any comments or recommendations are greatly appreciated.

Thank you!

in General Discussion by Lg G. (130 points)

1 Answer

 
Best answer

Hi Lg G.,

I can confirm that it is not possible to route Wwise audio output to a UE Submix. As documented in the Combining Unreal and Wwise Audio with AudioLink page, Unreal audio can be routed into Wwise, but not the other way around.

I think your approach of splitting your music by frequency makes sense. More commonly, we see people keep a single track and use the Parametric EQ to isolate each sub-band, followed by a Wwise Meter on each band, rather than separating the frequencies at the source as you are considering; the result is similar.
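One practical note on reading those meter values from the game: raw per-frame meter readings tend to flicker, so some attack/release smoothing on the UE side usually makes the visuals much more stable. A small sketch of one-pole smoothing (the rate constants are placeholder assumptions to tune per project):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: frame-rate-independent one-pole smoothing of a meter
// value, with a faster rate when the level rises (attack) than when it
// falls (release). Rates are in 1/seconds and are assumptions to tune.
float SmoothMeter(float Previous, float Target, float DeltaSeconds,
                  float AttackRate = 20.0f, float ReleaseRate = 5.0f)
{
    const float Rate  = (Target > Previous) ? AttackRate : ReleaseRate;
    const float Alpha = 1.0f - std::exp(-Rate * DeltaSeconds);
    return Previous + (Target - Previous) * Alpha;
}
```

Called once per tick with the latest RTPC reading as Target, this converges toward the meter value without the frame-to-frame jumps reaching the Niagara parameter.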

Keep an eye out for the Wwise 2025.1 beta, which will be available in a few months. Great improvements on this topic are coming your way!

by Guillaume R. (Audiokinetic) (8.3k points)
selected by Alessandro Famà