Friday 20 March 2009

Not Strictly Asynchronous

In order to keep the overhead of using FMOD Ex low, the developers consciously decided not to make it thread safe. While this is nice if you want to squeeze the most out of your audio code, the design of the API has left me with a slight issue relating to my click track implementation.

Callbacks are easy to set up in FMOD, but they do not work as you might expect. In order to trigger a channel callback you must first call the FMOD system method update(). In fact, to update any of the 'asynchronous' features of FMOD you must call system::update(), and because of the non-thread-safe nature of the API this method has to be called in the same thread as all the other FMOD commands.

For a simple while loop:

while (true)
{
    FmodSystem::Update();
}

Everything runs fine, and to a certain extent (depending on how time sensitive you are) the click track method would be enough to keep everything in time without the need for sample accurate sound stitching. However, once an element of delay is introduced - Sleep(30) - timing discrepancies begin to occur when triggering samples. With a delay of Sleep(100) the timing is really poor; with a delay of Sleep(1000) it's a mess. This is fairly obvious when you think about it: the callbacks can only be triggered every 1000 milliseconds or so, and when a beat occurs every 0.5 seconds in a 4/4, 120 bpm sequence, you can see where the delay is coming from.

I am going to try handling system::update() in its own thread to see if this keeps the callback timings consistent, but I will need to be careful not to access sensitive FMOD methods from this new thread.

Monday 16 March 2009

Old Ideas Return

When breaking up a piece of music into various layers that play independently a major concern is how to keep these layers in time with each other. Wwise achieves this by making sure that each layer is of the same sample length. This is also how I got round the issue during Dare To Be Digital. The whole point of this project, however, is to allow for layers to be of varying lengths. So how can the synchronisation of these layers be achieved?

The first method I looked into simply involved counting the beats. If you know the tempo and time signature of a sequence, you can work out how long, in milliseconds, a single beat should last. A timer can then be used to count the milliseconds between beats.

While in theory this is a "sound" idea (sorry, but I had to get that pun in eventually), issues can occur when synchronising the start of the timer with the start of the sequence. A separate thread created by the FMOD API handles playback of the sounds, meaning that the timer is calculated outwith the sound playback. Given the time-critical nature of audio, and particularly music, this method is inevitably too unpredictable to use confidently, as the timer will always be separate from the source.

After realising the pitfalls of using a timer, I returned to my original idea of using an audio sample as a "click track". The click track was based on a sample containing a single bar of audio with markers placed on the beats. When the playhead passed one of the markers during playback, a callback would be triggered indicating that we had hit a beat. With this method the timing is kept in the same thread as the playback of the audio, so timing discrepancies are reduced.

This method was used in the Dare To Be Digital game, Origamee, to synchronise the audio cues with the beats of the music. However, it didn't work properly: it suffered from the delay involved in looping static sounds, so the cues would drift out of time. Luckily, as the tempo of the game music was fast, the timing issues with the cues were hard to spot, and because the music layers were all the same length the music itself wouldn't go out of time. Now that I can implement sample accurate stitching, the click track can be resurrected, as stitching overcomes the loop delay issue.