Wednesday 15 October 2008

Worksheet 2 Explained

I like the way this year is set up with regard to peer review. It really helps you refine an idea when you get so much feedback. On the other hand, giving your own feedback to others certainly opens the mind, letting you think "outside the box".

So what was worksheet 2 about? In worksheet 2 I had to formulate a research question for my project and detail how I would go about answering it. Now, surprisingly (or perhaps unsurprisingly, depending on your age and outlook), adaptive music in computer games has been around a long time. Looking back at what people have produced, not just in the industry but also in previous years of CGT, I have decided it would be wise to approach this subject indirectly: still keeping the theme of adaptive music, but not having that as the only focus. Without further delay, here is my second initial question (the first was rubbish :P)

"What aspects present in traditional audio engineering applications can be tailored for use in tools specifically designed for developing adaptive music in computer games?"

... say what?

Let's run through it.

"What aspects present in traditional audio engineering applications ... ". Skipping the aspects portion for a minute, let us look at the "traditional audio engineering applications" bit. What is a "traditional" audio engineering application? Back in the day, audio engineers would use giant mixing desks, complicated systems of wiring, controllers etc., but with modern processing power the average Joe can simulate this with a digital audio workstation. These workstations perform all the hardware-related tasks of audio engineering, but in software. According to Wikipedia, this concept was popularised in 1987 by a package created by Digidesign that later went on to become the industry standard, Pro Tools.

What aspects of applications such as Pro Tools, Sound Forge, ACID Pro etc. am I talking about? There are lots of features in these programs that make them successful (too many to look into, list or even think about), so which ones would benefit the development of adaptive music? It would be silly to try and recreate something like Logic Audio, considering that it has been around for years, gone through many iterations and is incredibly stable. These applications are generally used for music production and, because of their long-running development, are very effective at it. This leads me to believe that computer game development tools tend not to be a replacement for these programs but more of an extension, trying to make the content-to-engine pipeline a lot smoother. People still use Max and Maya, even though they are not solely geared towards games. So, to ascertain which aspects of audio engineering applications would benefit the development of adaptive music, a look into current game audio tools is needed.

There are a fair few tools out there for game audio design, although from what I've seen very few actually "create" the music from scratch (extension of existing applications, remember). One that I hope to single out for my project is a tool mentioned in the previous post, Wwise. Wwise is a fantastic design tool: once the programmer has integrated the framework into the engine, the audio content creator has full control over every part of the in-game audio. Wwise is a complete audio design tool, but the majority of it will be disregarded as I am only interested in the adaptive music section. Here you can load in wavs, structure them, assign their playback to game triggers, control their volume etc. — all really cool stuff. Aside from assigning playback to game triggers, this functionality is very familiar from programs such as ACID Pro, Pro Tools and Sound Forge. The crux here is: what can tools like Wwise learn from these well-developed applications to make producing adaptive music more accessible/efficient?

I cannot say exactly at this moment in time what would or wouldn't benefit tools like Wwise, but from personal experience here is what I feel is a drawback. Currently, any sample brought into Wwise must be the same length as any other sample that it is to loop with, to ensure sample-to-sample accuracy. Reason 3 certainly does not have this problem. As long as the samples are in the same time signature and bpm, and their length is a whole multiple of a bar length, then you're good to go. Surely this should be able to work in Wwise. ACID Pro bypasses the bpm constraint by dynamically time-stretching the sample to fit, and Pro Tools can even bypass the time signature by dynamically finding the beats and moving them! Surely these aspects could be put to good use in adaptive audio development, as they would remove a fair number of constraints on the composer. These are just personal examples that I feel are important, but obviously more research will need to be completed before a more educated example can be given. There might be a very good reason for the sample-to-sample accuracy :D
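The bar-length rule above is just arithmetic, so here is a minimal Python sketch of it. To be clear, this is not Wwise, Reason or ACID Pro code — the function names (samples_per_bar, is_loopable, stretch_ratio) are hypothetical helpers I've made up to illustrate the idea: a clip loops cleanly with another if it spans a whole number of bars at the shared tempo, and a bpm mismatch can be resolved by a time-stretch ratio.

```python
def samples_per_bar(bpm: float, beats_per_bar: int, sample_rate: int) -> float:
    """Length of one bar in samples: beats x seconds-per-beat x sample rate."""
    return beats_per_bar * (60.0 / bpm) * sample_rate

def is_loopable(length_in_samples: int, bpm: float,
                beats_per_bar: int = 4, sample_rate: int = 44100) -> bool:
    """True if the clip's length is (very nearly) a whole number of bars."""
    bars = length_in_samples / samples_per_bar(bpm, beats_per_bar, sample_rate)
    return abs(bars - round(bars)) < 1e-6

def stretch_ratio(source_bpm: float, target_bpm: float) -> float:
    """Playback-length ratio to fit a clip recorded at source_bpm into a
    session at target_bpm (the ACID Pro-style time-stretch idea)."""
    return source_bpm / target_bpm

# At 120 bpm in 4/4 at 44.1 kHz, one bar is 2 s = 88200 samples,
# so a clip of exactly two bars (176400 samples) loops cleanly:
print(is_loopable(176400, 120.0))  # True
print(is_loopable(176000, 120.0))  # False
# A 100 bpm clip must be shrunk to ~0.833x to sit in a 120 bpm session:
print(stretch_ratio(100.0, 120.0))
```

If something like this check ran at import time, Wwise could accept differently-sized samples and still guarantee they line up on bar boundaries — which is all the equal-length constraint seems to be protecting.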
