By the way, RGB is a "display device optimized" format, while the YUV video 
formats were designed for broadcast and analog amplifiers (TVs).

You also might want to look into not bits(), but scanLine().

Thiago also contributed this code. It requires SSSE3:

    __m128i control = _mm_set_epi32(0xff030303, 0xff020202, 
        0xff010101, 0xff000000);
    for (int x = 0; x < frame.width(); x += 4) {
        // load 4 Y bytes
        uint yyyy = *(uint *)&bits[y*frame.width() + x];

        // transfer to SSE register
        __m128i data = _mm_cvtsi32_si128(yyyy);

        // spread each Y byte into the B, G and R lanes; shuffle indices
        // with the high bit set (0xff) produce zero, so OR in opaque alpha
        __m128i rgba = _mm_shuffle_epi8(data, control);
        rgba = _mm_or_si128(rgba, _mm_set1_epi32(0xff000000));

        // store 4 RGB32 pixels
        _mm_storeu_si128((__m128i*)&scanLine[x], rgba);
    }
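For reference, a scalar equivalent of what that shuffle does (each grey Y byte becomes one opaque 0xffYYYYYY pixel) might look like this. This is just a minimal sketch to show the transform, not Thiago's code; the helper name is made up:

```cpp
#include <cstdint>
#include <vector>

// Expand one row of 8-bit greyscale (Y) samples into 32-bit ARGB pixels,
// the same transform the SSSE3 shuffle performs four pixels at a time.
std::vector<uint32_t> greyRowToArgb(const uint8_t *y, int width)
{
    std::vector<uint32_t> out(width);
    for (int x = 0; x < width; ++x) {
        uint32_t v = y[x];
        out[x] = 0xff000000u | (v << 16) | (v << 8) | v; // A = 0xff, R = G = B = Y
    }
    return out;
}
```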



________________________________
 From: Jason H <scorp...@yahoo.com>
To: Rayner Pupo <rpgo...@uci.cu>; "interest@qt-project.org" 
<interest@qt-project.org> 
Sent: Friday, March 14, 2014 11:47 AM
Subject: Re: [Interest] QVideoFrame and YUV question
 


Haha, I went through this 6 weeks ago, but I only needed the Y channel.

The frame is not laid out like RGB, for legacy reasons. Black-and-white TV 
presented a black-and-white frame (Y); when color TV came along, it was added 
in a backwards-compatible way, with the color data between the Y frames. So what 
you actually have are 3 planes per frame: the width x height B&W Y channel, then 
a subsampled Cb and Cr. Cb and Cr are subsampled by a factor of two in each 
dimension, meaning you have:
[Y (width*height)] [Cb (width/2 * height/2)] [Cr (width/2 * height/2)]

so your y_ is right, 
but u_ is at (width*height) + ((y/2) * (width/2)) + x/2
and v_ is at (width*height) + ((width*height)/4) + ((y/2) * (width/2)) + x/2


Yes, this means that each U and V value is shared by 4 pixels, but the Y channel 
is pixel for pixel. 
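The plane offsets described above can be sketched in plain C++ like this. The helper name is made up, and it assumes a tightly packed YUV420P buffer with no row padding (real frames may pad each line to bytesPerLine()):

```cpp
#include <cstddef>

// Byte offsets of the Y, U (Cb) and V (Cr) samples for pixel (x, y)
// in a tightly packed YUV420P buffer of the given dimensions.
struct Yuv420Index {
    size_t y, u, v;
};

Yuv420Index yuv420Index(int x, int y, int width, int height)
{
    size_t ySize = size_t(width) * height;               // full-resolution Y plane
    size_t cSize = ySize / 4;                            // each chroma plane is (w/2)*(h/2)
    size_t chroma = size_t(y / 2) * (width / 2) + x / 2; // shared by each 2x2 pixel block
    return { size_t(y) * width + x,                      // Y: pixel for pixel
             ySize + chroma,                             // Cb plane follows Y
             ySize + cSize + chroma };                   // Cr plane follows Cb
}
```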

________________________________
 From: Rayner Pupo <rpgo...@uci.cu>
To: interest@qt-project.org 
Sent: Friday, March 14, 2014 11:29 AM
Subject: [Interest] QVideoFrame and YUV question
 

Hi, I'm trying to create a QImage from a QVideoFrame, but my video is in 
YUV420P format, so converting it isn't easy for me at all. Using a snippet from 
the histogram class in the player example provided by Qt, I was able to read 
each pixel of the frame, but my question is: how can I decompose the Y, Cb and 
Cr values from a single uchar?
This is how I'm iterating over the video frame bits:
if (videoFrame.pixelFormat() == QVideoFrame::Format_YUV420P) {
        QImage nImage(videoFrame.width(), videoFrame.height(),
                      QImage::Format_RGB32);
        const int width = videoFrame.width();
        const int height = videoFrame.height();
        const int stride = videoFrame.bytesPerLine();
        const uchar *bits = videoFrame.bits();
        const uchar *uPlane = bits + stride * height;               // Cb plane follows Y
        const uchar *vPlane = uPlane + (stride / 2) * (height / 2); // Cr plane follows Cb
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                double y_ = bits[y * stride + x];
                double u_ = uPlane[(y / 2) * (stride / 2) + x / 2] - 128.0;
                double v_ = vPlane[(y / 2) * (stride / 2) + x / 2] - 128.0;

                double r = y_ + 1.402 * v_;
                double g = y_ - 0.344 * u_ - 0.714 * v_;
                double b = y_ + 1.772 * u_;

                nImage.setPixel(x, y, qRgb(qBound(0, int(r), 255),
                                           qBound(0, int(g), 255),
                                           qBound(0, int(b), 255)));
            }
        }
        return nImage;
}
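The per-pixel r/g/b formulas above can be written as a standalone, Qt-free helper for testing. This is a sketch with a made-up name, using the same BT.601 full-range coefficients as the snippet, plus the clamping to 0..255 that setPixel otherwise silently needs:

```cpp
#include <algorithm>
#include <cstdint>

// Convert one full-range YCbCr sample (BT.601) to a packed 0xffRRGGBB pixel.
uint32_t ycbcrToRgb32(int y, int cb, int cr)
{
    int u = cb - 128, v = cr - 128; // chroma is stored with a +128 bias
    auto clamp255 = [](double c) { return std::min(255, std::max(0, int(c))); };
    int r = clamp255(y + 1.402 * v);
    int g = clamp255(y - 0.344 * u - 0.714 * v);
    int b = clamp255(y + 1.772 * u);
    return 0xff000000u | (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}
```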

_______________________________________________
Interest mailing list
Interest@qt-project.org
http://lists.qt-project.org/mailman/listinfo/interest