> In DummySink::afterGettingFrame I just fwrite the buffer to a file, nothing more.
Alternatively, you could have just used the existing “FileSink” class.
Didn't want to cross-post, but here is some debug info from afterGettingFrame
- the delay in microseconds between each call of this function:
fDurationInMicroseconds 23219
Time diff 25142
fDurationInMicroseconds 23219
Time diff 44613
fDurationInMicroseconds 23219
Time diff 25751
fDurationInMicroseconds 2321
Thanks Ross,
I did what you suggested with ADTSFileSource, but I have a problem. After
your change it starts to work, but not as well as it should.
For example, the code below ran for 30 seconds but produced only 20
seconds of AAC audio.
If I try to stream the same file using RTPSink and the RTSP server -
st
Marcin,
Reading your email once again, I realized that my first response wasn't a
proper answer to your question - because you are not transmitting the audio
data (over RTP), but are instead recording it into a Transport Stream file.
Because of this, you *do*, indeed, need to call "scheduleDelayedTask()".
> How can I read this file at native speed (one packet every 21.3 ms) using
> ADTSFileSource?
> Calling scheduleDelayedTask every 21.3 ms to fetch each new packet seemed
> like a bad option - that approach took a lot of CPU.
Marcin,
I’m not totally sure I understand what you’re trying to do
Hello Ross,
I'm trying to use your library to do something different. I have an RTSP
input source with only a video track (40 ms PTS difference, 25 fps, H.264).
I need to add an audio track with AAC (from a local file) and mux them
together into a TS.
The problem is that as I receive a video NAL unit every 40 ms at *almost