Thanks for your thoughts, Marcus.

> > 2. A meaningful ordering?
> > To overcome this problem I'm suggesting to implement an ordering
> > function which makes sure that NoteOn events are always last (within
> > the same tick count).

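To make that a bit more concrete, here is a minimal sketch of the kind of
comparison I have in mind (the event struct and type names are made up for
illustration, this is not fluidsynth's actual event type): order by tick
first, and within the same tick let everything that is not a NoteOn go
first.

    #include <cstdint>

    // Illustrative only -- not fluidsynth's fluid_event_t.
    enum class EvType { NoteOff, ProgChange, Control, NoteOn };

    struct SeqEvent
    {
        uint32_t tick;
        EvType   type;
    };

    // "Dispatch before" relation: earlier tick first; at the same tick,
    // any non-NoteOn event goes before a NoteOn. Events that compare
    // equal keep their original insertion order when a stable sort is
    // used.
    inline bool dispatchBefore(const SeqEvent &a, const SeqEvent &b)
    {
        if (a.tick != b.tick)
            return a.tick < b.tick;
        return a.type != EvType::NoteOn && b.type == EvType::NoteOn;
    }

With that rule the two examples further below come out the way I would
expect: the ProgChange is applied before the NoteOn, and the NoteOff
releases the old note before the new NoteOn is started.
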
> As you say: almost all MIDI players fall back to the event order, so why
> should Fluidsynth be different?

I'm looking at it with my software engineer glasses on: whenever something
has undefined behaviour, one can think about how to tweak and optimize
things.

> Why is "Note-On last" the better order? How do you know that that
> special (and seemingly Fluidsynth specific) ordering is what the author
> of the MIDI file wanted?

When something happens at the same tick, I don't claim to know anything.
My proposed solution is simply the best guess I can come up with in such a
situation. Two examples:

a) A NoteOn and a ProgChange happen in that order at the same tick. Would
it make sense to turn on the note using the old instrument, although we
are just about to receive a new program at the same tick? I don't think
so. The NoteOn, although first in the stream, should use the new program.

b) A NoteOn at tick 0, then after e.g. 100 ticks: another NoteOn and a
NoteOff. All three note events happen in the given order on the same
channel and trigger the same key. How does fluidsynth currently interpret
them? The first note is triggered and sounds for a while, until the second
NoteOn arrives. That second NoteOn kills overlapping notes, i.e. it kills
the first note. But oops: just after that second note has been turned on,
it is turned off right away again, because we have a NoteOff at the same
tick, which was apparently meant to turn off the first note.

For case b) I have MIDI files that do such things. Of course you are
right, Marcus, technically the MIDI converter program is to blame. I
cannot really change the converter though; I would alternatively need to
handle those cases in my program. I was just thinking of a possibly
acceptable way for fluidsynth to handle such "corner cases".

> > 3. Simpler implementation?
> What trips me up though: if your program was really stuck in that loop
> for seconds, I'm wondering how many events you were actually adding to
> the sequencer... Millions?

I'm talking about tens of thousands of events. Here are some measurements:

virtual void MidiWrapper::open(): Added 23976 events to the seq, took 12 ms
void FluidsynthWrapper::Render(float*, frame_t): Rendering took 897 ms

virtual void MidiWrapper::open(): Added 28395 events to the seq, took 17 ms
void FluidsynthWrapper::Render(float*, frame_t): Rendering took 1441 ms

virtual void MidiWrapper::open(): Added 84961 events to the seq, took 48 ms
void FluidsynthWrapper::Render(float*, frame_t): Rendering took 19709 ms

The problem is the very first rendering call (in my case:
fluid_synth_process), when the sample timers are called back until they
ultimately reach the while loop I mentioned. You can see there seems to be
an exponential increase. At the moment I have only tested 4 different MIDI
files; the first 3 have a play duration of 4-6 minutes, the last one has
12 minutes of playback. Of course this heavily depends on the timing order
of the events, so creating a generalized benchmark is not so
straightforward.

Same test, but with std::priority_queue:

virtual void MidiWrapper::open(): Added 23976 events to the seq, took 9 ms
void FluidsynthWrapper::Render(float*, frame_t): Rendering took 4 ms

virtual void MidiWrapper::open(): Added 28395 events to the seq, took 16 ms
Warning: void FluidsynthWrapper::Render(float*, frame_t): Rendering took 3 ms

virtual void MidiWrapper::open(): Added 84961 events to the seq, took 39 ms
Warning: void FluidsynthWrapper::Render(float*, frame_t): Rendering took 4 ms

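Just to illustrate what that test does (the names here are made up, this
is not the actual wrapper code): the pending events go into a
std::priority_queue keyed on their tick, so the next due event is always
at top() and the while loop only pops what is actually due, instead of
scanning through the queue.

    #include <cstdint>
    #include <queue>
    #include <vector>

    struct SeqEvent
    {
        uint32_t tick;
        // channel, key, velocity, type, ... omitted here
    };

    // std::priority_queue puts the "largest" element on top, so the
    // comparison is inverted: the event with the smallest tick ends up
    // at top().
    struct LaterTick
    {
        bool operator()(const SeqEvent &a, const SeqEvent &b) const
        {
            return a.tick > b.tick;
        }
    };

    using EventQueue =
        std::priority_queue<SeqEvent, std::vector<SeqEvent>, LaterTick>;

    // Called from the sample timer: pop everything that is due.
    // Insertion and removal are both O(log N).
    void dispatchDue(EventQueue &q, uint32_t currentTick)
    {
        while (!q.empty() && q.top().tick <= currentTick)
        {
            SeqEvent ev = q.top();   // would be sent to the synth here
            q.pop();
            (void)ev;
        }
    }

One caveat: std::priority_queue does not preserve insertion order for
events with the same tick, so a tie-break (an insertion counter, or the
NoteOn-last rule sketched above) is still needed if the ordering at equal
ticks is supposed to be well defined.
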
> > 5. The system timer
> isn't there a global timing event that MIDI devices can send to each
> other to synchronise playback position and playback speed?

You seem to refer to "System Real Time Messages". But I don't know how
this could be integrated into fluidsynth.

> > 6. Time Scale
> Similar comment as above and again, just thinking out loud: wouldn't it
> be a great feature to allow slower or faster MIDI playback?

Indeed, supporting tempo changes would be a great and necessary feature
for the seq, if we ever want to use it in the midi player. Ok, so better
improve the time scaling rather than remove it.

> adding another implementation in C++ is not cleanup IMO... it adds more
> technological debt because we now have to keep two different
> implementations in sync.

Having two implementations would only be temporary: first add an
"experimental" C++ implementation, then deprecate the current C
implementation while using the C++ one by default, and finally remove the
deprecated one.

> But I do wonder: what can C++ do that C can't? And especially: why can
> C++ do it faster?

Simply that I'm more familiar with how to do those things in C++ than with
glib. And I'm not sure what Carlo, you, and the other embedded system guys
would say if we fortify the glib dependency. I'm not sure how they think
about a dependency on the C++ std lib either...

Tom

_______________________________________________
fluid-dev mailing list
fluid-dev@nongnu.org
https://lists.nongnu.org/mailman/listinfo/fluid-dev