I'm using QVideoFrame QVideoFilterRunnable::run(QVideoFrame *input, const 
QVideoSurfaceFormat &surfaceFormat, RunFlags /*flags*/).
However, the video coming from an MP4 file arrives with 
surfaceFormat.scanLineDirection() == BottomToTop.
When I do analysis, I do it on a QImage. I have enough information to call 
mirrored() accordingly and generate an image that matches the frame. However, 
if I alter that frame and want to pass it down the filter chain, I have to do 
my alterations, re-flip the image, then wrap it back into a QVideoFrame. This 
wastes a lot of time.

I feel a few things should be possible:
1. I should be able to access QVideoFrame pixel data as if it were an image 
(QImage::pixel()), given some pixelFormat limitations, i.e. ARGB rather than 
classic (YUV) video formats.
2. I should be able to modify surfaceFormat, call 
setScanLineDirection(TopToBottom), and feed my TopToBottom image into the 
QVideoFrame, so no more unflipping is required. Currently, if I unflip and 
feed that frame, BottomToTop sticks (surfaceFormat is const!).
3. Both QImage and QVideoFrame should be able to use an implicitly shared 
pixel data implementation, to make zero-copy possible.
4. QPainter should be able to paint on QVideoFrames (again, given some 
pixelFormat limitations).

Generally, when I'm this far off in the weeds, I'm doing something wrong. So is 
there a better way to do this? My frame processing already takes 55 ms per 
frame (18 fps), so adding unneeded flipping and back-flipping, plus whatever 
the other pipeline steps cost in milliseconds, further reduces the framerate.


Any suggestions?
_______________________________________________
Interest mailing list
Interest@qt-project.org
https://lists.qt-project.org/listinfo/interest