Hi Ross,

Let me explain how I transfer data from our camera to the LIVE555 library.

I followed the second approach from the instructions on the FAQ page, 
http://www.live555.com/liveMedia/faq.html#liveInput 
namely, to write my own "Gm813xSource" as a subclass of "FramedSource", and my 
own "GM813XServerMediaSubsession" as a subclass of 
"OnDemandServerMediaSubsession".

Gm813xSource.cpp is adapted from DeviceSource.cpp. I have copied the key 
part of the code below.

void Gm813xSource::doGetNextFrame()
{
    // Note: gmPollFrame() can block for up to 2000 ms inside the event loop.
    if (gmPollFrame()) {
        deliverFrame();
    }
}

Boolean Gm813xSource::gmPollFrame(void)
{
    int ret = gm_poll(&poll_fds, 1, 2000);
    if(GM_TIMEOUT == ret) {
        envir() << "gm_poll timeout\n";
        return false;
    }
    if(GM_SUCCESS == ret) {
        return true;
    }
    envir() << "gm_poll error, ret=" << ret << "\n";
    return false;
}

void Gm813xSource::deliverFrame(void)
{
    int ret;
    gm_enc_multi_bitstream_t bs;

    if (!isCurrentlyAwaitingData())
        return; // we're not ready for the data yet

    memset(&bs, 0, sizeof(bs));

    bs.bindfd = main_bindfd; // poll_fds[i].bindfd;
    bs.bs.bs_buf = frameBuf;  // set buffer pointer
    bs.bs.bs_buf_len = FRAME_BUF_SIZE;  // set buffer length
    bs.bs.mv_buf = 0;  // do not receive MV data
    bs.bs.mv_buf_len = 0;  // do not receive MV data

    if (bytesInBuf > 0) { // send leftover data
        if (bytesInBuf > fMaxSize) {
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = bytesInBuf - fMaxSize;
        } else {
            fFrameSize = bytesInBuf;
            fNumTruncatedBytes = 0;
        }
        memmove(fTo, dataPtr, fFrameSize);
        dataPtr += fFrameSize;   // advance past the bytes just copied
        bytesInBuf -= fFrameSize;
        FramedSource::afterGetting(this);
    } else { // get a new frame and send
        if ((ret = gm_recv_multi_bitstreams(&bs, 1)) < 0) {
            printf("Error: gm_recv_multi_bitstreams returned %d\n", ret);
        } else {
            if ((bs.retval < 0) && bs.bindfd) {
                printf("Error receiving bitstream, ret=%d\n", bs.retval);
            } else if (bs.retval == GM_SUCCESS) {
                u_int8_t* newFrameDataStart = (u_int8_t*)bs.bs.bs_buf;
                unsigned newFrameSize = bs.bs.bs_len;
                bytesInBuf = newFrameSize;
                dataPtr = bs.bs.bs_buf;

                // Deliver the data here:
                if (newFrameSize > fMaxSize) {
                    fFrameSize = fMaxSize;
                    fNumTruncatedBytes = newFrameSize - fMaxSize;
                } else {
                    fFrameSize = newFrameSize;
                    fNumTruncatedBytes = 0;
                }

                bytesInBuf -= fFrameSize;
                dataPtr += fFrameSize;

                gettimeofday(&fPresentationTime, NULL); 
                memmove(fTo, newFrameDataStart, fFrameSize);

                // After delivering the data, inform the reader that it is now available:
                FramedSource::afterGetting(this);
            }
        }
    }
}

As you can see, I use "frameBuf" to hold the encoded bytes from the encoder, 
move the data into fTo as soon as possible, and call the afterGetting callback 
after each copy. I allocated 512 KB for "frameBuf", which is big enough to hold 
the largest I-frame. If the sink-side object's afterGetting callback then 
delivers the frame data to the network immediately, the latency introduced by 
frameBuf should be only one frame, i.e. 33 ms at 30 fps (of course there are 
also the capture buffer and the buffers inside the hardware encoder, which are 
not counted here, but they shouldn't add much; I'm currently checking this with 
help from the chip FAE).

Another point of confusion arose when I observed fMaxSize. Each time I copy a 
frame and call afterGetting, this variable goes down by the encoded frame size, 
but it doesn't return to the maximum value (which seems to be 150 KB, no matter 
how I try to override it in createNewRTPSink) on the next call. Only when a new 
frame size exceeds fMaxSize does it reset. My question is: does this mean that 
the data copied to fTo is not consumed right away, and that another frame 
buffer is introduced here?

It's a long post; thank you for your patience in reading this far.



Xin Liu
VsceneVideo Co. Ltd.
Mobile:+86 186 1245 1524
Email:x...@vscenevideo.com
From: Ross Finlayson
Date: 2017-01-11 20:04
To: LIVE555 Streaming Media - development & use
Subject: Re: [Live-devel] how to make latency as low as possible
Our server software - by itself - contributes no significant latency.  I 
suspect that most of your latency comes from the interface between your camera 
and our server.  (You didn’t say how you are feeding your camera’s output to 
our server; but that’s where I would look first.)
 
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
 
 
_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
