Hi,

I’m generating a visual feed that is reacting to audio. Both audio and video 
are being generated in real time, each in its own thread.

While the audio generation is always steady, video generation is less
precise and its frame timing can fluctuate.

While the muxing example provides a good starting point for muxing audio and
video with ffmpeg, it doesn’t cover synchronisation aspects, since there the
video and the audio are generated in the same loop, on demand, with no
possibility of fluctuation.
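
For context, my understanding is that the example’s interleaving decision
boils down to comparing the next timestamps of the two streams with
av_compare_ts; a minimal sketch (the function and variable names here are
mine, not from the example):

    /* Write whichever stream has the earlier next timestamp, as the
     * muxing example does. Time bases are those of the respective
     * encoders, e.g. {1, 25} for 25 fps video, {1, 44100} for audio. */
    #include <libavutil/mathematics.h>
    #include <stdint.h>

    static int video_is_next(int64_t next_video_pts, AVRational video_tb,
                             int64_t next_audio_pts, AVRational audio_tb)
    {
        /* <= 0: video's next timestamp is not later than audio's */
        return av_compare_ts(next_video_pts, video_tb,
                             next_audio_pts, audio_tb) <= 0;
    }

That works in the example precisely because both generators can be paused
until the muxer asks for more; in my case the real-time threads can’t wait
on each other.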

I’m trying to achieve perfect synchronisation between audio and video, but I
feel I’m navigating uncharted territory here.

Some problems I have identified but don’t yet have a proper solution for:

- the audio buffer can start receiving samples sooner than the video buffer, 
or vice versa. Ideally the first video frame should line up with the first 
audio samples (see the first sketch after this list). 
- I’m already rescaling the video pts and dts to absorb the fluctuations in 
video generation, but the audio timestamps are always a fixed increment and 
cannot fluctuate (see the second sketch after this list)
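
On the first point, the best I’ve come up with so far (just a sketch, not
battle-tested, and all the names are mine) is to take a single epoch before
either generator thread starts and to stamp everything relative to it, so
neither stream can effectively start earlier than the other:

    /* Shared time origin, taken once before either thread starts. */
    #include <libavutil/mathematics.h>
    #include <libavutil/time.h>
    #include <stdint.h>

    static int64_t epoch_us;  /* microseconds, common origin for A and V */

    static void take_epoch(void)
    {
        epoch_us = av_gettime_relative();  /* call before starting threads */
    }

    /* If the audio thread began filling its buffer before the epoch,
     * drop the samples that predate it, so that audio sample 0 and the
     * first video frame both correspond to t = 0. */
    static int64_t samples_to_drop(int64_t audio_start_us, int sample_rate)
    {
        int64_t head_us = epoch_us - audio_start_us;
        return head_us > 0 ? av_rescale(head_us, sample_rate, 1000000) : 0;
    }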
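
On the second point, a sketch of what I mean (assuming av_gettime_relative
as the clock source; adapt to your own): audio acts as the master clock, its
pts being simply the running sample count in a {1, sample_rate} time base,
while each video frame is stamped from the wall clock relative to the shared
epoch, so generation jitter shifts the video timestamps instead of breaking
sync:

    #include <libavutil/mathematics.h>
    #include <libavutil/time.h>
    #include <stdint.h>

    /* Audio thread: pts is the running sample count; with the stream
     * time base set to {1, sample_rate} it never needs rescaling. */
    static int64_t audio_samples_sent;

    static int64_t next_audio_pts(int frame_nb_samples)
    {
        int64_t pts = audio_samples_sent;
        audio_samples_sent += frame_nb_samples;
        return pts;
    }

    /* Video thread: derive pts from elapsed wall time since the shared
     * epoch_us (see the previous sketch), rescaled into the video
     * stream's time base. */
    static int64_t video_pts_now(int64_t epoch_us, AVRational video_tb)
    {
        int64_t elapsed_us = av_gettime_relative() - epoch_us;
        return av_rescale_q(elapsed_us, (AVRational){1, 1000000}, video_tb);
    }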

Are there examples that show how to achieve this? What are the best practices? 

Thank you!

Best regards,

Nuno