Bob Clark wrote:
Is there an official place to put audio device SODs? Unless there's a more
appropriate place, I will link to the list archive of this message when I need
to refer to it.

we sometimes put them into the www directory under devdocs.

    helix-client/www/2006/devdocs

for example. For something like this, I think the list archive
is OK.

more below....

Information

 * SOD: CoreAudio -- modern sound API for OS X
 * author: [EMAIL PROTECTED]

Problem Statement

The current audio device implementation for OS X uses the
deprecated Sound Manager APIs. Among other drawbacks, the
Sound Manager only allows two channels, so multichannel
audio is not possible. Furthermore, although one can rarely
tell what Apple will do, it's dangerous to rely solely
on a deprecated set of APIs.

Solution Overview

The CoreAudio APIs are Apple's approved and supported sound
APIs for OS X. Among the strengths of CoreAudio: it supplies
APIs to detect when a multichannel (e.g. Dolby 5.1) output
device is connected, and it adds the ability to play
multichannel audio.

Solution Details

 * Rely on the system's current default output audio device.
   (Instead of adding internal preferences.) This is
   standard Mac practice, but one could concoct scenarios
   where it would be convenient to specify an audio output
   device that's not the default output audio device.
 * The CoreAudio APIs differ from other audio device APIs.
   With other audio device APIs, we can simply push audio
   data at the device and it plays it. CoreAudio instead
   pulls data via a callback function, asking for more audio
   data when it needs it. Helix DNA expects to treat the
   audio device as something to push data at, so to
   accommodate Helix's "expectation" I need to set up a ring
   buffer: Helix pushes data into the buffer, and when
   CoreAudio's callback asks for data, I have it there to
   supply to the OS.
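
The push-to-pull bridge described above can be sketched as a byte ring
buffer. This is an illustrative sketch, not the actual Helix code: the
names (RingBuffer, Write, Read) are made up here, and a real
implementation would need a lock or a lock-free single-producer/
single-consumer design, since the CoreAudio callback runs on its own
thread.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Illustrative ring buffer bridging Helix's push model and CoreAudio's
// pull (callback) model. Names and structure are hypothetical.
class RingBuffer {
public:
    explicit RingBuffer(size_t capacity)
        : buf_(capacity), head_(0), tail_(0), size_(0) {}

    // Push side: Helix writes decoded audio here.
    // Returns the number of bytes actually accepted.
    size_t Write(const unsigned char* data, size_t len) {
        size_t n = 0;
        while (n < len && size_ < buf_.size()) {
            buf_[tail_] = data[n++];
            tail_ = (tail_ + 1) % buf_.size();
            ++size_;
        }
        return n;
    }

    // Pull side: the CoreAudio callback asks for exactly `len` bytes.
    // On underflow we pad with silence (zeros) rather than starving
    // the device.
    void Read(unsigned char* out, size_t len) {
        size_t n = 0;
        while (n < len && size_ > 0) {
            out[n++] = buf_[head_];
            head_ = (head_ + 1) % buf_.size();
            --size_;
        }
        std::memset(out + n, 0, len - n);  // silence on underflow
    }

    size_t BytesBuffered() const { return size_; }

private:
    std::vector<unsigned char> buf_;
    size_t head_, tail_, size_;
};
```

In the real device, Read() would be invoked from CoreAudio's I/O
thread, which is why the questions below about threading and timing
matter.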

What thread do these requests come in on?

Does OS X mandate how long you have to honor the request for
more data before it might underflow? Is it a synchronous request
for data or asynchronous?

Do we have a minimum or maximum amount of data we can send
it?

--greg.


 * Add code to dynamically check the default audio output
   device's characteristics (number of channels, etc.)
   instead of assuming, as the Sound Manager implementation
   does, that I can shove two-channel data at it.
 * Helix pushes data in UINT16 (two-byte integer) format,
   while CoreAudio expects data in Float32 (four-byte float)
   format. This conversion is not noticeably CPU-intensive,
   but it will be worth watching for performance problems.
 * Of note: the audio device implementation seems to require
   that the implementation's Resume() function call
   OnTimeSync().
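
The integer-to-float conversion mentioned above can be sketched as
below. This assumes the two-byte samples are signed (the usual case
for PCM audio, though the message says UINT16); the exact integer
format in Helix may differ, and the function name here is illustrative.

```cpp
#include <cstddef>
#include <cstdint>

// Convert interleaved 16-bit signed integer samples to 32-bit floats
// in [-1.0, 1.0), the canonical CoreAudio sample format. A sketch,
// not the actual Helix conversion code.
void ConvertToFloat32(const int16_t* in, float* out, size_t nSamples) {
    for (size_t i = 0; i < nSamples; ++i) {
        out[i] = static_cast<float>(in[i]) / 32768.0f;
    }
}
```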




_______________________________________________
Audio-dev mailing list
[email protected]
http://lists.helixcommunity.org/mailman/listinfo/audio-dev

