Hi everybody,

I am trying to create fragmented mp4 video files (H.264 encoded) from an RTSP source (an IP camera, or ffserver streaming an mp4 that in turn comes from an IP camera). I use custom fragmentation: every 2 seconds I call av_write_frame() with a NULL packet to cut a fragment, while the actual video frames are written with av_interleaved_write_frame(). Packets corresponding to SPS and PPS NAL units (which usually precede an IDR) are also muxed, but for them I leave the pts and dts unspecified (i.e. AV_NOPTS_VALUE).

The problem is that the presentation times assigned to the AVPacket instances before calling av_interleaved_write_frame() (in a 90 kHz timebase) sometimes do not correspond to those read back from the same file via av_read_frame() once the file has been closed, as if they had been rearranged somehow. Empirically, this does not seem to occur if fragmentation is disabled.
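For clarity, here is a simplified sketch of the write path I described. The helper name write_packet(), the is_parameter_set flag and the last_cut_pts bookkeeping are just illustrative, not my actual code; it assumes the output AVFormatContext was opened with the "frag_custom" movflag.

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Sketch only.  The muxer is assumed to have been opened with the
 * "frag_custom" movflag, e.g.:
 *
 *     AVDictionary *opts = NULL;
 *     av_dict_set(&opts, "movflags", "frag_custom", 0);
 *     avformat_write_header(oc, &opts);
 */
static int write_packet(AVFormatContext *oc, AVPacket *pkt,
                        int is_parameter_set, int64_t *last_cut_pts)
{
    const int64_t frag_interval = 2 * 90000;     /* 2 s at 90 kHz */

    if (is_parameter_set) {
        /* SPS/PPS packets are muxed without timestamps */
        pkt->pts = pkt->dts = AV_NOPTS_VALUE;
    } else if (pkt->pts != AV_NOPTS_VALUE &&
               pkt->pts - *last_cut_pts >= frag_interval) {
        /* Cut a fragment: with frag_custom, a NULL packet tells the
         * mov muxer to flush everything written so far into a new
         * moof/mdat pair. */
        int ret = av_write_frame(oc, NULL);
        if (ret < 0)
            return ret;
        *last_cut_pts = pkt->pts;
    }

    /* Regular video packets (and the SPS/PPS ones) go through the
     * interleaving queue. */
    return av_interleaved_write_frame(oc, pkt);
}

Afterwards I simply re-open the finished file and compare the pts values returned by av_read_frame() against the ones I assigned while writing; that is where I see the mismatch when fragmentation is enabled.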
Any suggestion/explanation?

Thanks in advance,
Massimo Perrone
