On Apr 13, 2015, at 8:34 AM, Steven Brawer 
<[email protected]> wrote:
> 1. What is the C/Obj C function (or functions) needed to capture signals from 
> the Logic software tracks into which the plugin is inserted? (These would 
> replace/expand MIDI signals with Logic signals.)
Logic uses the CoreAudio API to host AudioUnits. There is no such thing as 
"Logic signals." Logic uses the standard CoreAudio mechanism to send MIDI and 
equivalent "signals" to an AudioUnit. You need to learn how to write an 
AudioUnit MusicDevice. This kind of AudioUnit generates audio, but typically 
does not have audio input. All of the examples and the entire framework needed 
are written in C++, so you must use C++ to host your code unless you are a 
guru - practically no plugins exist that are not written with this C++ 
framework. Personally, C++ is my least favorite language, and Objective-C is by 
far my favorite. Even so, I find that it's not too difficult to work within 
the frameworks provided in the examples. The challenge is that MusicDevice 
plugins, which is what your synth needs to be, are the least documented. You 
have a lot to learn.
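
To give a feel for how little is special about the "signals" involved: the 
MIDI events Logic sends are just note numbers and velocities, and your synth 
turns note numbers into frequencies. A minimal, self-contained sketch (these 
names are my own for illustration - the real CoreAudio SDK delivers MIDI 
through virtual member functions on its C++ base classes):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical helper: convert a MIDI note number to a frequency in Hz.
// Equal temperament, with A4 (MIDI note 69) tuned to 440 Hz.
static double MidiNoteToHz(uint8_t note)
{
    return 440.0 * std::pow(2.0, (note - 69) / 12.0);
}

// Hypothetical note-on handler, standing in for the SDK's virtual method.
struct Voice { double freqHz; float velocity; };

static Voice NoteOn(uint8_t note, uint8_t velocity)
{
    Voice v;
    v.freqHz = MidiNoteToHz(note);
    v.velocity = velocity / 127.0f;  // MIDI velocity is 0..127
    return v;
}
```

Everything else - routing, mixing, track assignment - is the host's job.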
> 2. What is the C/Obj C function (or functions) to send the output directly to 
> the instrument track?
There is no function. An AudioUnit is implemented as a C++ subclass of a 
generic class that handles basic CoreAudio operations. Your synth needs to be 
a subclass of the MusicDevice class, and then you will be called when audio 
data is needed. In other words, you do not call a function; your member 
function is called by the system (the AU host, which in this case is Logic).
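
The "pull" model is worth internalizing before you read the SDK. Here is a 
self-contained sketch of the idea - these are not the real SDK classes, just 
an illustration of who calls whom (the class and function names are invented):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Stand-in for the SDK's generic AudioUnit base class.
class RenderableUnit {
public:
    virtual ~RenderableUnit() {}
    virtual void Render(float* out, int frames) = 0;  // called BY the host
};

// Stand-in for your MusicDevice subclass: a fixed 440 Hz sine.
class MySynth : public RenderableUnit {
public:
    void Render(float* out, int frames) override {
        const double kTwoPi = 6.283185307179586;
        for (int i = 0; i < frames; ++i) {
            out[i] = (float)std::sin(phase_);
            phase_ += kTwoPi * 440.0 / 44100.0;
        }
    }
private:
    double phase_ = 0.0;
};

// Toy "host" loop: Logic does the equivalent, choosing when and how much.
static std::vector<float> HostPull(RenderableUnit& unit, int total, int block)
{
    std::vector<float> out(total);
    for (int i = 0; i < total; i += block)
        unit.Render(out.data() + i, std::min(block, total - i));
    return out;
}
```

Your code supplies the Render() override; the host owns the timing loop.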
> 3. What is the C/Obj C function (or functions) required for automation?
Automation is another standard feature of CoreAudio and AudioUnits. If you 
provide Parameters, then Logic will be able to automate them. If you want 
fine-grained resolution for parameters that change multiple times per audio 
frame buffer, then you'll need to make sure that your Render() and/or 
Process() functions can operate on variable-size arrays of audio, because you 
will be called once for each section of audio between automated changes.
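
To make the slicing concrete, here is a hypothetical sketch of what the host 
does with sample-accurate automation: it splits one hardware buffer at each 
parameter change and renders each variable-size slice separately (all names 
here are invented for illustration; the trivial "synth" just writes the gain 
value so the effect is visible):

```cpp
#include <algorithm>
#include <vector>

struct ParamChange { int frame; float gain; };  // hypothetical event type

// Trivial stand-in for your Render(): must accept ANY frame count.
static void RenderSlice(float* out, int frames, float gain)
{
    for (int i = 0; i < frames; ++i) out[i] = gain;
}

// Host-side view: split the buffer at each automated change.
static std::vector<float> RenderWithAutomation(
    int totalFrames, float gain, const std::vector<ParamChange>& events)
{
    std::vector<float> out(totalFrames);
    int pos = 0;
    for (const ParamChange& ev : events) {
        if (ev.frame > pos)
            RenderSlice(out.data() + pos, ev.frame - pos, gain);  // before change
        pos = std::max(pos, ev.frame);
        gain = ev.gain;                                           // apply change
    }
    if (pos < totalFrames)
        RenderSlice(out.data() + pos, totalFrames - pos, gain);   // final slice
    return out;
}
```

This is why your render code cannot assume a fixed buffer size.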
> 4. How do I create the app bundle and where do I put it?
There are examples. This is documented.
> 5. I have been informed that Audio Queue cannot be used in an au plugin. Is 
> this true? Is there an alternative that operates in essentially the same 
> manner? 
You do not need an Audio Queue. Your synth will become a MusicDevice AudioUnit 
plugin, and it will become part of something like an Audio Queue within Logic 
(or any AU Host). All you need to do is synthesize the waveform while honoring 
changes to parameters, and deliver the audio samples when requested. Your AU 
will be given timing information so that you can create the correct waves at 
the correct time. There should be no features of an AU synth that even require 
an Audio Queue - Logic handles all of that for any plugin.
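
One way to see why no queue is needed: since the host hands you timing 
information with each render request, you can compute the waveform directly 
from the sample time, and any sequence of render calls produces exactly the 
same audio as one big call. A hedged sketch with invented names:

```cpp
#include <cmath>

static const double kSampleRate = 44100.0;
static const double kTwoPi = 6.283185307179586;

// Hypothetical: compute one sample purely from the host-supplied sample time.
static float SampleAt(long long sampleTime, double freqHz)
{
    double phase = kTwoPi * freqHz * (double)sampleTime / kSampleRate;
    return (float)std::sin(phase);
}

// Hypothetical render: fill a buffer starting at a given sample time.
static void Render(float* out, long long startSampleTime, int frames,
                   double freqHz)
{
    for (int i = 0; i < frames; ++i)
        out[i] = SampleAt(startSampleTime + i, freqHz);
}
```

Because the output depends only on the timestamp, there is nothing to buffer 
or queue on your side - the host can ask for audio in any chunk sizes it likes.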

Do not think of your task as porting all of your existing code to run inside 
Logic. Instead, only the very core of your synth needs to be converted to an 
AU. Logic will take care of the rest. The challenge is to learn the full 
CoreAudio method for writing AudioUnits so that you will know which parts of 
your existing code to toss, and which parts to restructure for their new home.

Brian Willoughby
Sound Consulting

 _______________________________________________
Coreaudio-api mailing list      ([email protected])