Perette, as far as I can tell there are two different APIs relating to codecs:
old API:

  AVCodecContext - obtained from stream->codec (this may be the source of
  confusion); it holds the per-stream fields like width, height, stream
  type etc., and is the pointer passed into the functions dealing with
  the codec.

  AVCodec - obtained from avcodec_find_decoder(codec_id); it describes
  the decoder/encoder implementation itself.

new API:

  AVCodecParameters - obtained from stream->codecpar; it takes over the
  stream-level fields that used to live in stream->codec.

  AVCodecContext - allocated by the caller, then filled in from codecpar.

I could be wrong, I have not experimented with the new API yet, just
looking at the code. A rough sketch of the new setup path is below the
quoted message.

http://lives-video.com
https://www.openhub.net/accounts/salsaman


On Mon, Aug 22, 2016 at 10:04 AM, Perette Barella <[email protected]> wrote:
> Salsaman,
>
> I reviewed the doc/examples and my existing, working code some more,
> and I’m going to step back and ask some architectural questions to
> validate changing assumptions based on your assertions.
>
> * AVFormat provides the I/O and multiplexing for media.
> * AVStream is an abstraction for the separate audio/video/subtitle
>   components of the media. It is associated with an AVFormatContext.
> * AVCodec provides the encoders/decoders for a particular AV type.
>
> My assumption has been that, since AVStream->codec exists and is
> filled in by avformat_new_stream and other functions, *there was an
> association between an AVStream and its AVCodec*, so that all these
> different components worked together.
>
> You’re implying that no such association exists, and that it’s
> entirely my code that pushes packets through the Codec and then moves
> the results onto the Stream. I find this a little surprising, although
> I find nothing in my code or in examples/muxing.c to contradict it.
> And it would explain why it takes so much code to do anything with lav
> as opposed to gstreamer or other libraries.
>
> Am I going in the right direction now?
>
> And with that in mind, the purpose of codecpar is to give the stream a
> way to *provide* parameters for a codec that I’m supposed to create,
> and the codec field was there in the past only to provide them as
> well, not as an indication of an association between the codec and the
> stream (because no such association exists).
>
> > Just use avctx from the code above. I don’t see what the problem is.
>
> I think one of lav’s problems is the lack of a good architectural
> diagram to explain how it’s *supposed* to work. Yes, I can read the
> code and the examples, and the Doxygen is a big help… but some sense
> of the intent and design behind the code would help significantly.
> It’s like the difference between diagnosing an electrical problem with
> vs. without a blueprint/schematic: you can do without, but having the
> picture makes the work easier, faster and more accurate.
>
> Perette
>
> _______________________________________________
> Libav-user mailing list
> [email protected]
> http://ffmpeg.org/mailman/listinfo/libav-user
>
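For the new API, this is roughly the setup path I mean. It is untested on
my side and pieced together only from reading the headers; the file name
in argv[1] and the video-only assumption are just for illustration:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(int argc, char **argv)
    {
        AVFormatContext *fmt = NULL;

        if (argc < 2)
            return 1;

        av_register_all();  /* still required before any lavf call in
                             * current releases */

        /* demux side: open the container and let lavf probe the streams */
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(fmt, NULL) < 0)
            return 1;

        int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        if (vidx < 0)
            return 1;
        AVStream *st = fmt->streams[vidx];

        /* new API: the stream only carries AVCodecParameters (st->codecpar);
         * the caller looks up the AVCodec, allocates its own AVCodecContext
         * and copies the parameters into it before opening */
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *avctx = avcodec_alloc_context3(dec);
        if (!dec || !avctx)
            return 1;
        if (avcodec_parameters_to_context(avctx, st->codecpar) < 0)
            return 1;
        if (avcodec_open2(avctx, dec, NULL) < 0)
            return 1;

        /* old API: avctx was simply st->codec, already filled in by lavf,
         * so the allocate/copy steps above did not exist */

        avcodec_free_context(&avctx);
        avformat_close_input(&fmt);
        return 0;
    }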
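And on the association question quoted above: as far as I can see it
really is the application that moves packets from the encoder to the
muxer, sets the stream index and rescales the timestamps. A minimal
sketch using the newer avcodec_send_frame()/avcodec_receive_packet()
calls; oc, avctx, st and frame are placeholder names for an output
AVFormatContext, an opened encoder context, the stream created with
avformat_new_stream() and a raw frame, and error handling is trimmed:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static int encode_and_mux(AVFormatContext *oc, AVCodecContext *avctx,
                              AVStream *st, AVFrame *frame)
    {
        int ret = avcodec_send_frame(avctx, frame); /* frame == NULL flushes */
        if (ret < 0)
            return ret;

        AVPacket *pkt = av_packet_alloc();
        if (!pkt)
            return AVERROR(ENOMEM);

        while ((ret = avcodec_receive_packet(avctx, pkt)) >= 0) {
            /* nothing ties the packet to the stream automatically: the
             * caller rescales the timestamps and sets stream_index itself
             * before handing the packet to the muxer */
            av_packet_rescale_ts(pkt, avctx->time_base, st->time_base);
            pkt->stream_index = st->index;
            av_interleaved_write_frame(oc, pkt);  /* muxer unrefs the packet */
        }
        av_packet_free(&pkt);

        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }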
