I think you need one because, even if you are not encoding anything when creating the stream, the decoder still needs to know the details of the audio encoding.
If the output format requires these details in the header, then you need to tell the encoder to put its details in the header as well (a rough, untested sketch of what I mean is at the bottom of this mail, below the quoted thread).

Salsaman.
http://lives-video.com
https://www.openhub.net/accounts/salsaman

On Wed, Mar 14, 2018 at 5:31 PM, Michael IV <[email protected]> wrote:

> Hi Anton. I see what you mean, but I actually use that internal context,
> and it works perfectly.
> For video I am getting h264 NALs from somewhere, so I don't perform
> encoding at all.
> For audio, I am opening existing audio files/streams with data in the
> format I need, so I don't transcode, but just pass the data as is into
> the muxer.
> So as you see, I don't really need a codec context for encoding.
> But if the API forces me to create one just for the sake of aligning with
> some rules, I am not sure that's a good thing.
>
>
> On Wed, Mar 14, 2018 at 10:26 PM Anton Shekhovtsov <[email protected]>
> wrote:
>
>> Yes, you have to create one with avcodec_alloc_context3.
>> The fact that a structure of the same type is present in the input
>> stream description does not mean you are supposed to use it. I don't
>> remember where I learned it, maybe earlier on this list.
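For what it's worth, here is a rough, untested sketch of the audio side as I understand your setup: no encoder is opened, an AVCodecContext is only used to collect the stream details and copy them into the output stream's codecpar. The codec id, sample rate and channel layout are placeholders for whatever your source actually contains, error checks are left out, and I am using the pre-4.x channels/channel_layout fields, so adjust for your FFmpeg version:

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>

/* Open one output file with a single audio stream whose data is
 * already encoded.  The AVCodecContext never touches an encoder;
 * it just carries the stream description into st->codecpar. */
static int open_output(const char *filename)
{
    AVFormatContext *oc = NULL;
    AVStream *st;
    AVCodecContext *ctx;

    /* call av_register_all() first if your FFmpeg is older than 4.0 */
    avformat_alloc_output_context2(&oc, NULL, NULL, filename);
    st = avformat_new_stream(oc, NULL);

    ctx = avcodec_alloc_context3(NULL);
    ctx->codec_type     = AVMEDIA_TYPE_AUDIO;
    ctx->codec_id       = AV_CODEC_ID_AAC;          /* placeholder */
    ctx->sample_rate    = 48000;                    /* placeholder */
    ctx->channels       = 2;                        /* placeholder */
    ctx->channel_layout = AV_CH_LAYOUT_STEREO;      /* placeholder */
    ctx->time_base      = (AVRational){1, ctx->sample_rate};

    /* The "details in the header" part: formats such as MP4 want the
     * codec configuration (AudioSpecificConfig, SPS/PPS for h264, ...)
     * as global extradata rather than in-band. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    /* ctx->extradata / ctx->extradata_size would be filled here from
     * wherever you got the bitstream configuration. */

    avcodec_parameters_from_context(st->codecpar, ctx);
    st->time_base = ctx->time_base;
    avcodec_free_context(&ctx);

    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
    return avformat_write_header(oc, NULL);
}

After that you just wrap your existing NALs / audio data in AVPackets, set pts, dts and stream_index, feed them to av_interleaved_write_frame(), and finish with av_write_trailer().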
