Hello David,
Quoting David Henningsson:
j...@resonance.org wrote:
It seems like you're thinking that we pre-render one fluidsynth buffer
(64 samples) ahead, and add that to the latency. That's a simpler
solution than the one I had in mind: I was thinking that we should
pre-render several buffers.
j...@resonance.org wrote:
> Quoting David Henningsson:
>>> I think ideas like these are good. Having each voice be processed and
>>> then mixed would only require one buffer (64 bytes) per voice and
>>> would not require much extra CPU. This could also facilitate moving
>>> to multi-threading.