Questions on Audio Design

Started by
4 comments, last by Kylotan 5 years ago

Hello everyone,

 

I am currently attending college with a specialization in game programming and development. With the end of my courses coming near, it's about time I reached out to a legitimate game developer forum for knowledge and advice on a variety of topics. The topic I would like to discuss, which is also the one that has grabbed my attention recently, is audio design in games. As we all know, audio is an important aspect of the overall gaming experience. The questions I have are as follows:

 

  • The PlayStation 5 is planning on having "3D Audio" built in without external hardware. Assuming this works similarly to something like Dolby Atmos, but more accessible, what will this mean for audio design in the future? Many game dev teams are already tasked with developing for multiple platforms; how will this new technology affect multi-platform development? Will this only be utilized for PlayStation 5 exclusives?
  • Footstep audio in player-versus-player experiences: how much is too much? Many games with PvP experiences place an emphasis on the user character's footstep audio over other characters' footstep audio. In my experience, this makes finding other players to fight (in the context of a confined space with many areas to hide) an exercise of "stop, listen, and go" to locate them. Theoretically, would increasing enemy footstep audio lead to quicker engagements on the battlefield? Though over-tuning this might be harmful for gamers who are especially sensitive to sound.
  • Voiced in-game elements for the sake of accessibility: worth the cost? I have two brothers with cerebral palsy who play games often, as it's one of the only things they can engage with. Their hearing is great, though they have an extremely hard time reading in-game text that isn't voiced (e.g. in-game items and quest descriptions). Would having something similar to Microsoft Sam read out these elements be worth implementing? Text-to-speech licenses exist for as low as $99 a month, which is lower than I expected.

 

Thanks for reading!

  1. "3D Audio" built-in - arguably this doesn't mean much at all. Developers typically access audio functionality via 3rd party systems and those systems will adapt to take in whatever functionality Sony are adding. If this functionality has to work with existing stereo systems then it will just be a set of filters applied to audio sources based on the position of the source and the listener. Most likely there isn't going to be much here that isn't already possible in other hardware or software.
  2. Footstep audio  - I don't think most games these days expect you to locate enemies via sound. In the games that do, such as stealth games, this is balanced accordingly.
  3. Voiced in-game elements for the sake of accessibility: worth the cost? - I think it's hard to put a price on these things. It's quite likely that the number of additional sales you get from this extra level of accessibility isn't going to make up for the cost, but I'd argue that developers have a responsibility here, perhaps one that governments should be enforcing. Text-to-speech will certainly cut costs, but I'm not sure how well it would operate with some of the more dynamic UI systems.
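To make point 1 concrete: here is a minimal sketch of the kind of position-based filtering described there, combining inverse-distance attenuation with constant-power stereo panning derived from the source/listener geometry. All function names, conventions, and constants are illustrative; real spatializers (HRTF-based "3D audio" included) do far more than this.

```python
import math

def stereo_gains(source_xy, listener_xy, listener_facing_rad, ref_dist=1.0):
    """Toy positional-audio filter: returns (left, right) gains for a
    mono source, from distance attenuation plus constant-power panning.
    Illustrative only; real spatializers add HRTFs, occlusion, reverb."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    # Inverse-distance attenuation, clamped at the reference distance.
    gain = ref_dist / max(dist, ref_dist)
    # Angle of the source relative to where the listener is facing.
    angle = math.atan2(dy, dx) - listener_facing_rad
    # Constant-power pan law over -pi/2 (full left) .. +pi/2 (full right).
    pan = max(-math.pi / 2, min(math.pi / 2, angle))
    left = gain * math.cos((pan + math.pi / 2) / 2)
    right = gain * math.sin((pan + math.pi / 2) / 2)
    return left, right
```

A source directly ahead lands equally in both channels at about 0.707 gain each (so total power stays constant as it pans), and a source ten units away comes through at a tenth of the level of one at the reference distance.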

I had a few follow-up questions on audio design.

  • When it comes to mixing, how do you prevent the mix from being too busy? Essentially, the "wall of sound" effect. What tips would you have for keeping a mix focused? This is especially present in action games.
  • Do sound designers use the in-game engine they work with as their primary digital audio workstation (DAW)? Supposedly, it's commonplace to use a third-party DAW to print linear tracks that simply play under the in-game scene.
  • Would using heavy percussion in the backing soundtrack compete with the percussive nature of gun sounds and explosions in most action games?

About the "wall of sound" effect.

I'm not an audio designer, as will certainly show below, but here's my experience with the issue, in case it helps. I'm working on a space game with a detailed engine model. As I was assigning drones to the various components (modulating pitch and volume from related simulation variables), the whole thing quickly started to sound like an indiscriminate soup. Here are the tricks I used to try to make all sounds recognizable:

  • Limit the number of sources. This is certainly obvious, sorry, but it forced me to focus on the sounds that were the most important for conveying the state of the system.
  • Time multiplexing: try not to play sounds simultaneously whenever possible. In a related approach, I also used high-pass filters on the volume (envelope) to amplify transitions and attenuate continuous inputs.
  • Frequency multiplexing: try to use sounds with frequency content that does not overlap. For instance, many samples from commercial libraries sound very nice partly thanks to rich bass content; with too many of them, clipping can occur very quickly, and it might be useful to high-pass them first. Regarding harmonic content, try to use sounds with distinct timbres.
  • Rhythm-based multiplexing: sounds that exhibit a distinct rhythm stand out. For instance I modulated the envelope of some drones with cascaded low-frequency oscillators, giving them a specific signature when they could be heard every few seconds or so.

Incidentally, the last three points relate to the "TDM", "FDM" and "CDMA" signal-processing techniques used in radio transmissions to maximize the throughput of information over a channel (my original trade ^^).
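The envelope high-pass from the second bullet can be sketched as a one-pole filter applied to a per-frame volume signal; the filter form and the alpha value here are just illustrative.

```python
def highpass_envelope(volumes, alpha=0.9):
    """One-pole high-pass over a per-frame volume (envelope) signal:
    sudden changes pass through loudly while steady levels decay toward
    silence, emphasizing transitions over continuous drones.
    alpha in (0, 1): closer to 1 lets transients ring longer."""
    out = []
    prev_in = prev_out = 0.0
    for v in volumes:
        y = alpha * (prev_out + v - prev_in)
        out.append(y)
        prev_in, prev_out = v, y
    return out
```

Feed it a step from 0 to a constant level and you get a spike at the transition that decays by a factor of alpha each frame, which is exactly the "amplify transitions, attenuate continuous inputs" behavior described above.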

On 7/25/2019 at 4:13 AM, NicholasCorbin said:

I had a few follow-up questions on audio design.

  • When it comes to mixing, how do you prevent the mix from being too busy? Essentially, the "wall of sound" effect. What tips would you have for keeping a mix focused? This is especially present in action games.
  • Do sound designers use the in-game engine they work with as their primary digital audio workstation (DAW)? Supposedly, it's commonplace to use a third-party DAW to print linear tracks that simply play under the in-game scene.
  • Would using heavy percussion in the backing soundtrack compete with the percussive nature of gun sounds and explosions in most action games?

Mixes being 'busy' - I can't really offer any tips here. It depends on the game, the intent of the designers, etc. It's possible to give some elements priority over others so that important aspects are heard while less important aspects are attenuated, just like the background music might drop during a documentary when the narrator is speaking.
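That kind of priority-based attenuation is usually called "ducking". A minimal sketch of the idea, where all names and the -9 dB duck amount are arbitrary:

```python
def apply_ducking(sources, duck_db=-9.0):
    """Toy priority ducking: while any higher-priority source is active
    (e.g. a narrator), lower-priority sources (e.g. music) are attenuated
    by duck_db decibels. 'sources' is a list of
    (name, priority, active, base_gain) tuples; returns {name: gain}."""
    top_active = max((p for _, p, active, _ in sources if active),
                     default=None)
    duck_mul = 10 ** (duck_db / 20.0)  # dB -> linear gain multiplier
    gains = {}
    for name, priority, active, base_gain in sources:
        gain = base_gain
        if top_active is not None and priority < top_active:
            gain *= duck_mul  # lower-priority content drops under the lead
        gains[name] = gain
    return gains
```

With a narrator active at priority 2 and music at priority 1, the music drops by 9 dB (to roughly a third of its linear gain) while the narrator stays at full level; once the narrator goes inactive, the music returns to its base gain.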

DAWs - no, a sound designer would have their own DAW of choice for doing the heavy lifting - not just for 'linear' tracks or even just for music but for most sound effects too. However, when it comes to testing how things work in-game, you need to start using the in-game tools.

Competing sounds - certainly. But what to do about it is a creative decision.

This topic is closed to new replies.
