On 2/19/12 1:32 AM, Nicolas George wrote:
> On decadi, 30 Pluviôse, year CCXX, Deron wrote:
>> We are trying to work with all kinds of odd-ball videos, and I have
>> to accommodate either odd-ball audio streams like != 48kHz, or two
>> audio streams with mono samples instead of a single stereo stream,
>> wrong sample size, or something that is too loud. Something like
>> "-af resample=48000:0:2,channels=2,volume=-12,format=s16le" in
>> mplayer would be used. What I'm not finding in ffplay is how to
>> implement something similar.
>>
>> The other minor case is when dealing with SD or 1080i video that
>> needs to be scaled and/or clipped. It appears that ffplay includes
>> video filter support. Correct? Am I just missing the support for
>> the audio filtergraphs?
> The FFmpeg system includes audio filtering, and a lot of filters that can do
> what you require. The catch is that audio filtering is not implemented in
> the command line tools: ffmpeg and ffplay. But it is still usable in a
> roundabout way, using the lavfi pseudo-device:
>
>         ffmpeg -f lavfi -i 'amovie=file.ogg, ...'
>
> instead of:
>
>         ffmpeg -i file.ogg -af ...


So I presume, then, that avformat_open_input() (or something near it) does the interpreting in the background and amends the stream list etc., so that the packet info is basically the output of the filtergraph? How would I implement that in some other app? By passing "lavfi" to avformat_open_input() as the input format?


>> I suppose the real question is, and no one but myself might be able
>> to answer it but here goes, what would be the best place to base
>> this on? It seems that modifying ffplay would be best?
> I suggest adding it in libavdevice.


Well, I never knew that the devices included support for _output_; I thought it was input only.

Looking at the existing devices, I only see four output devices (alsa, sdl, oss, and sndio). None of those work like the decklink (well, SDL can be the keeper of time, as demonstrated in ffplay, but the libavdevice simply draws every time it gets an image). Since timing was the biggest mplayer problem, I am not at all confident that this can be made to work.

I'm willing to try, if you can confirm that this is not an issue. As I understand it, the device will need to create two queues (one for video, one for audio), with timing, and then simply block (not return) when a high-water mark is reached in the queues, to stop ffmpeg from further decoding until the decklink catches up. Are there any examples of this in action as a libavdevice? Or of more complex (output) libavdevices?
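For what it's worth, the block-at-high-water behaviour I'm describing is just a bounded producer/consumer queue. A self-contained sketch with plain pthreads (no DeckLink specifics; all names here are illustrative, not from any existing libavdevice):

```c
#include <pthread.h>

/* Bounded FIFO: pq_put() blocks once the high-water mark is reached,
 * which would in turn stall the demux/decode loop upstream of the
 * device's write_packet(). */
#define HIGH_WATER 16

typedef struct PacketQueue {
    void *pkt[HIGH_WATER];
    int head, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
} PacketQueue;

static void pq_init(PacketQueue *q)
{
    q->head = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

/* Producer side (the muxer's write path): blocks while full. */
static void pq_put(PacketQueue *q, void *pkt)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == HIGH_WATER)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->pkt[(q->head + q->count) % HIGH_WATER] = pkt;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side (the card's output callback): blocks while empty. */
static void *pq_get(PacketQueue *q)
{
    void *pkt;
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    pkt = q->pkt[q->head];
    q->head = (q->head + 1) % HIGH_WATER;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return pkt;
}
```

The device would hold one of these per stream (video and audio), and the card's callback would drain them at its own rate, so the blocking put is what throttles ffmpeg's decoding.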

> Ideally, MPlayer and ffmpeg should be made to be able to use libavdevice for
> audio and video output.

Are you saying that ffmpeg does not currently support libavdevice for audio/video output? Or that the ideal way is to add a decklink libavdevice so that mplayer/ffmpeg support decklink "automatically"?

> Regards,



The other concern is that, the way the decklink library is written, the decklink libavdevice will need to be C++. Does that pose a problem for the maintainers? The main incentive for me to do it as a libavdevice would be to get it included in the project (obviously as a conditional build), but if there is a flat refusal to allow it because it is C++, I'd like to know that now.

Thanks,

Deron

_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user
