Sounds like he wants multiple decoders running simultaneously on different threads, and you're saying that a single decoder can use multiple threads. Not quite the same thing.

In my case I use multiple encoders simultaneously in different threads, and it works fine; I'd expect decoders to work fine too. One thing I do is register a custom locking function via av_lockmgr_register(), just in case some bit of internal library code wants to serialize threads through a critical section.

Regarding the packet question, I found the following commentary in avformat.h (from ffmpeg 2.0.1):

* If AVPacket.buf is set on the returned packet, then the packet is
* allocated dynamically and the user may keep it indefinitely.
* Otherwise, if AVPacket.buf is NULL, the packet data is backed by a
* static storage somewhere inside the demuxer and the packet is only valid
* until the next av_read_frame() call or closing the file. If the caller
* requires a longer lifetime, av_dup_packet() will make an av_malloc()ed copy
* of it.
* In both cases, the packet must be freed with av_free_packet() when it is no
* longer needed.

I think you can make this work. As far as I've seen, the code has been written with concurrency in mind.

Andy

On 12/2/2013 4:46 PM, Bruce Wheaton wrote:
On Dec 1, 2013, at 2:43 AM, Adi Shavit <[email protected]> wrote:

Does anyone have any insights or some references I should follow regarding this 
issue?

Adi, are you aware that ffmpeg can already employ multi-threaded decoding?
If you set the desired number of threads via thread_count in your
AVCodecContext before opening the codec, it will do exactly what you propose.

In effect, the first few decode calls will return immediately, then your frames 
will start to come out, having been delayed by the number of threads you 
requested.

Bruce




On Tue, Nov 26, 2013 at 9:15 PM, Adi Shavit <[email protected]> wrote:
Hi,

   I am consuming a multi-program transport stream with several video streams 
and decoding them simultaneously. This works well.

I am currently doing it all on a single thread.
Each AVPacket received by av_read_frame() is checked for the relevant 
stream_index and passed to a corresponding decoder.
Hence, I have one AVCodecContext per decoded elementary stream. Each such 
AVCodecContext handles one elementary stream, calling avcodec_decode_video2() 
etc.

The current single-threaded design means that the next packet isn't decoded
until the previous one has been.
I'd like to move to a multi-threaded design where each AVCodecContext resides
in a separate thread with its own concurrent SPSC packet queue, and the
master thread calls av_read_frame() and inserts each coded packet into the
relevant queue (Actor Model / Erlang style).
Note that each elementary stream is always decoded by the same single thread.

Before I refactor my code to do this, I'd like to know if there is anything on 
the avlib side preventing me from implementing this approach.
An AVPacket holds pointers to internal and external data. Is any such data
shared between elementary streams?
What should I beware of?
Please advise,
Thanks,
Adi



_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user



