FluidSynth currently only supports 16-bit audio sample data, such as in the
SF2 format.
There are several functions which are used for synthesizing the output, in
particular:
fluid_synth_nwrite_float
fluid_synth_process
fluid_synth_write_float
fluid_synth_write_s16
The last one in particular outputs signed 16-bit samples.
The interpolation routines place the processed data into dsp_buf[] of
type fluid_real_t.
I'm assuming you eventually convert this to int16 for rendering.
Can you point me to the section of the code that does the conversion?
Are you doing anything else, such as a moving-average filter?
Thanks,
B
Regarding the previous post...
It seems this is done in the function fluid_rvoice_buffers_mix() in the
file fluid_rvoice.c.
The output, buf[], is also of type fluid_real_t.
So I'm assuming the rendering engine can accept floating point values.
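My mental model of that mixing step is an accumulate-with-gain loop like the sketch below. The names and the gain handling are my guesses, not the real fluid_rvoice_buffers_mix() code.

```c
#include <stddef.h>

typedef float fluid_real_t; /* FluidSynth can use float or double
                               here depending on build config */

/* Accumulate one voice's processed samples (dsp_buf) into a mix
 * buffer (buf) with a gain -- a simplified sketch of what I assume
 * fluid_rvoice_buffers_mix() does, not the actual code. */
static void mix_voice(fluid_real_t *buf, const fluid_real_t *dsp_buf,
                      fluid_real_t gain, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] += gain * dsp_buf[i]; /* += accumulates, so other
                                        voices aren't overwritten */
}
```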
I'm still trying to see how the linear interpolation is applied.
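For my own understanding, the generic two-point linear interpolation over a fractional read position is the formula below; I believe FluidSynth's own routine uses a precomputed coefficient table, but it should compute the same value.

```c
#include <stddef.h>

typedef float fluid_real_t;

/* Read the source at a fractional position: idx is the integer part
 * of the phase, frac the fractional part in [0, 1).  Generic
 * two-point formula -- FluidSynth's routine (coefficient-table
 * based, as far as I can tell) differs in form, not in result. */
static fluid_real_t linear_interp(const fluid_real_t *src,
                                  size_t idx, fluid_real_t frac)
{
    return (1.0f - frac) * src[idx] + frac * src[idx + 1];
}
```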