
Mastering Separate 'Adaptive Audio' Tracks


Hi

I've recently taken on a project that involves exploring a small area using a virtual reality headset. One of the key features is the player's ability to 'create' their own soundtrack via proximity to certain objects, i.e. parts of the music are allocated to certain objects within the game, with volume/panning determined by proximity.
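To illustrate what I mean (just a rough sketch with made-up names, not actual project code), something like this per object is the behaviour I'm after:

```csharp
using UnityEngine;

// Rough sketch (hypothetical component/field names): each scene object carries
// one looping music stem, and its volume/pan follow the listener's distance.
public class ProximityStem : MonoBehaviour
{
    public AudioSource stem;          // looping music stem attached to this object
    public Transform listener;        // usually the VR camera / AudioListener
    public float maxDistance = 15f;   // beyond this the stem is silent

    void Update()
    {
        float dist = Vector3.Distance(listener.position, transform.position);
        // Simple linear falloff; Unity's own 3D rolloff curves could replace this.
        stem.volume = Mathf.Clamp01(1f - dist / maxDistance);

        // Crude left/right pan; Unity's spatialBlend / 3D sound settings would
        // normally handle panning for you.
        Vector3 toObject = (transform.position - listener.position).normalized;
        stem.panStereo = Mathf.Clamp(Vector3.Dot(listener.right, toObject), -1f, 1f);
    }
}
```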

I have all the music written, but I'm struggling to work out the puzzle of getting the final master to gel together properly. I haven't sent this off to the programmer yet. Ideally I'd like to program the music myself, but I've only just started learning audio middleware and am not in a position to take on the task and meet the deadline. The engine used is Unity 5.

The issue is that I want the music to flow smoothly without wild differences in audio levels. My current solution is to bounce all the 'separate' music tracks individually but master them together to make sure they don't peak at any point. But this necessitates a lower overall level for each track; add to that the change in level based on proximity, and it could be a big problem.

Has anyone else come across a similar problem?

Cheers,

Matt

Matt Dear - Sound Design | Music | Dialog | Audio Implementation

www.mattdear.com


Interactive audio (especially music!) within VR came up in several talks at GDC this year. Even the top-notch pros are struggling with what the exact answer will be, as this is very new ground we're breaking. One talk in particular, given by Oculus crew members, discussed how in some situations omnipresent music worked great in VR, while in other situations it completely destroyed the experience, and in still others a proximity-based music implementation didn't work. So I can't really help you here, as I've not done any mixing/implementation with VR audio.

If you were doing traditional 3D/2D interactive audio, then I could help for sure. Sorry!

Nathan Madsen
Nate (AT) MadsenStudios (DOT) Com
Composer-Sound Designer
Madsen Studios
Austin, TX

Hey Matt,

This sounds fascinating; you can certainly try to prototype this in Wwise.

Experiment with 3D attenuation graphs.

Try using a music bus and running some DSP over it (Wwise supports things like compressors, side-chain compression/ducking, EQ, etc.). There are also some third-party vendors of fairly good DSP out there.

It really depends on what you're trying to do with the objects: how many other tracks play, whether they fade out to silence or just duck a bit to focus on the other object, and whether you can fade over to the fully mixed produced track within a certain radius. There are lots of different things to try. Some will need engineering support and a way for you to tune the results, but most you can prototype in the audio authoring middleware.
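As a rough illustration (assuming the Wwise Unity integration, with hypothetical event/RTPC names that would need to be authored in your Wwise project), driving a per-object stem could look something like this; note that Wwise's built-in 3D attenuation curves can cover the distance part without any code at all:

```csharp
using UnityEngine;

// Sketch only: assumes the Wwise Unity integration is installed and that
// "Play_ObjectStem" (event) and "ObjectProximity" (RTPC) exist in the Wwise project.
public class WwiseProximityStem : MonoBehaviour
{
    public string playEvent = "Play_ObjectStem";      // hypothetical event name
    public string proximityRtpc = "ObjectProximity";  // hypothetical RTPC name
    public Transform listener;
    public float maxDistance = 15f;

    void Start()
    {
        // Start this object's stem; Wwise attenuation handles 3D positioning.
        AkSoundEngine.PostEvent(playEvent, gameObject);
    }

    void Update()
    {
        // Optional: drive a custom RTPC (0-100) from proximity for ducking,
        // filtering, or anything else authored against it in Wwise.
        float dist = Vector3.Distance(listener.position, transform.position);
        float proximity = Mathf.Clamp01(1f - dist / maxDistance) * 100f;
        AkSoundEngine.SetRTPCValue(proximityRtpc, proximity, gameObject);
    }
}
```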

Game Audio Professional
www.GroovyAudio.com

Thanks for the help, guys. I'm speaking with the programmer later today, so I'll probably come back to this thread once we've determined exactly what all our problems are!

The only game I can think of as a reference for this kind of thing is Proteus, in terms of the 'interactive soundtrack': the music changes with the player's position, field of view, and interaction with the environment. That game was not VR, though.

In a traditional 2D/3D game, am I right in thinking that all the audio is mastered to around 0dB and then re-mixed within the engine/audio middleware, with some sort of limiter to prevent the combined adaptive music and SFX from peaking? I can see this being an art in itself!

Cheers.

Matt Dear - Sound Design | Music | Dialog | Audio Implementation

www.mattdear.com


In a traditional 2D/3D game, am I right in thinking that all the audio is mastered to around 0dB and then re-mixed within the engine/audio middleware, with some sort of limiter to prevent the combined adaptive music and SFX from peaking? I can see this being an art in itself!

-0.1dB peak, but there is no industry standard for the volume levels. I usually leave myself a good amount of headroom so things don't get over-compressed when you get into the middleware. If you mix too hot, combine several tracks, and then wait for a limiter to save you, you'll find you're not peaking but there will be a lot of distortion.
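As a back-of-the-envelope illustration of why the headroom matters (not a rule, just the worst-case math): N stems that each peak at the same level can sum coherently and raise the combined peak by up to 20*log10(N) dB.

```csharp
using System;

// Worst-case headroom estimate for N stems peaking at the same level.
// Uncorrelated material usually sums closer to 10*log10(N) dB, but the
// coherent worst case is what the limiter ends up fighting.
class HeadroomSketch
{
    static void Main()
    {
        int stems = 4;
        double stemPeakDb = -0.1;                               // per-stem peak, dBFS
        double worstCaseGainDb = 20.0 * Math.Log10(stems);      // ~ +12 dB for 4 stems
        double combinedPeakDb = stemPeakDb + worstCaseGainDb;   // ~ +11.9 dBFS before limiting
        double perStemTargetDb = -worstCaseGainDb;              // per-stem peak that stays under 0 dBFS

        Console.WriteLine("Worst-case combined peak: {0:F1} dBFS", combinedPeakDb);
        Console.WriteLine("Per-stem peak needed to avoid clipping: {0:F1} dBFS", perStemTargetDb);
    }
}
```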

Also, I agree with GroovyOne that Wwise is the way to go. I've never done a VR mix, but Wwise is so versatile you can do just about anything with it.

Doesn't Unity 5 have audio effects like a compressor and limiter? You would then let the engine do the mixing in real time. Maybe the coder could implement a system that decreases the overall volume of the music bus depending on how many sources are playing?
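A rough sketch of that idea (the exposed mixer parameter name "MusicVol" is hypothetical and would need to be set up on the music group in the Unity AudioMixer first):

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Pulls the music bus down as more proximity-driven stems become audible.
public class MusicBusScaler : MonoBehaviour
{
    public AudioMixer mixer;
    public AudioSource[] musicStems;   // all proximity-driven stems

    void Update()
    {
        int active = 0;
        foreach (var src in musicStems)
            if (src.isPlaying && src.volume > 0.01f) active++;

        // Roughly -3 dB per doubling of active stems (a rule of thumb, not a standard).
        float attenuationDb = active > 1 ? -3f * Mathf.Log(active, 2f) : 0f;
        mixer.SetFloat("MusicVol", attenuationDb);
    }
}
```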

Also have a look at Fabric for Unity 5. It comes with a lot of good features to help implement your audio well, and it's native to Unity.

You have access to mix states, mix buses, and DSP, as well as any other DSP that Unity provides.

Unity's native audio showcases some of the DSP functions in Unity 5; the side-chain compression is something you may need to use (it's also a feature in Wwise).

http://blogs.unity3d.com/2014/07/24/mixing-sweet-beats-in-unity-5-0/

The Fabric audio engine for Unity provides a lot more than the basic Unity audio, e.g. tempo-based music switching, though I don't yet know how complex its music system is.

http://www.tazman-audio.co.uk/

Both Fabric and Wwise can be obtained for free to experiment with before choosing one to license.

Dynamic mixing for games is definitely an art in itself, and a lot of fun to make work!

It really depends on how your music is technically adaptive. Does the mix just change because one stem gets added (i.e. there's a basic mix and another stem is layered in), or does the music need to reach a transition point and change? That will drive which middleware or solution you choose as well. I have not explored Fabric's dynamic music system at all, but I have used Wwise's and FMOD Designer's a lot. From the talks and demos I've had with the FMOD guys, and from the previous Designer system, both FMOD and Wwise have outstanding dynamic music systems and authoring tools where you can prototype the behaviours.

Both also work with Unity via an external plugin. It's great to have so many choices these days!
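For the simpler "layer a stem in" case above, a rough sketch of how it could be done in plain Unity (hypothetical component, not from any of the tools mentioned; it assumes all stems share the same length and tempo):

```csharp
using UnityEngine;

// Keeps stems sample-accurate via PlayScheduled; a stem can start muted and be
// faded in later while staying in sync with the base mix.
public class StemLayering : MonoBehaviour
{
    public AudioSource baseMix;
    public AudioSource extraStem;   // starts silent, faded in on demand

    void Start()
    {
        double startTime = AudioSettings.dspTime + 0.1; // small scheduling margin
        baseMix.PlayScheduled(startTime);
        extraStem.volume = 0f;
        extraStem.PlayScheduled(startTime);             // runs in sync, just muted
    }

    public void FadeInExtraStem(float target = 1f, float seconds = 2f)
    {
        StartCoroutine(Fade(extraStem, target, seconds));
    }

    System.Collections.IEnumerator Fade(AudioSource src, float target, float seconds)
    {
        float start = src.volume;
        for (float t = 0f; t < seconds; t += Time.deltaTime)
        {
            src.volume = Mathf.Lerp(start, target, t / seconds);
            yield return null;
        }
        src.volume = target;
    }
}
```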

As far as standards go, a lot of developers for console/TV are adopting the ITU-R BS.1770 loudness standard. Wwise actually supports metering with it to help achieve the right mix. For handheld/tablet etc. there is no real standard yet, since we're dealing with different speaker types and sizes.

Game Audio Professional
www.GroovyAudio.com

