OK, I’ve now installed a new version (2015.04.01) of the “LIVE555 Streaming
Media” code that adds a virtual function
virtual Boolean nalUnitEndsAccessUnit(u_int8_t nal_unit_type);
to “H264or5VideoStreamDiscreteFramer”. The default implementation sets this
simply based on whether the NAL unit is a VCL NAL unit (i.e., a coded slice).
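A minimal sketch of such a subclass, using hypothetical names
(“MySlicedH264Framer”, “encoderSaysLastSliceOfPicture()”) and noting that the
base-class constructor arguments can differ between LIVE555 versions:

#include "H264VideoStreamDiscreteFramer.hh"

class MySlicedH264Framer: public H264VideoStreamDiscreteFramer {
public:
  static MySlicedH264Framer* createNew(UsageEnvironment& env, FramedSource* inputSource) {
    return new MySlicedH264Framer(env, inputSource);
  }

protected:
  MySlicedH264Framer(UsageEnvironment& env, FramedSource* inputSource)
    : H264VideoStreamDiscreteFramer(env, inputSource) {}

  // Override the virtual function added in 2015.04.01: return True only when
  // this NAL unit really ends the access unit (i.e., the current picture),
  // rather than at the end of every slice.
  virtual Boolean nalUnitEndsAccessUnit(u_int8_t nal_unit_type) {
    return encoderSaysLastSliceOfPicture();
  }

private:
  // Hypothetical hook: replace with however your encoder signals
  // "this was the final slice of the picture".
  Boolean encoderSaysLastSliceOfPicture() const { return True; }
};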
> Fortunately, however, there is a way for you to overcome this (in a subclass
> of “H264or5VideoStreamDiscreteFramer”): Simply define your own ‘after
> getting’ function that first calls
> “H264or5VideoStreamDiscreteFramer::afterGettingFrame1()”, and then sets
> “pictureEndMarker()” to the ‘correct’ value.
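A rough sketch of that ‘after getting’ workaround, using illustrative class
and helper names, and assuming the usual FramedSource::getNextFrame()
callback signature used throughout LIVE555:

#include "H264VideoStreamDiscreteFramer.hh"

class MyFramer: public H264VideoStreamDiscreteFramer {
public:
  static MyFramer* createNew(UsageEnvironment& env, FramedSource* inputSource) {
    return new MyFramer(env, inputSource);
  }

protected:
  MyFramer(UsageEnvironment& env, FramedSource* inputSource)
    : H264VideoStreamDiscreteFramer(env, inputSource) {}

  // Register our own 'after getting' callback instead of the parent's:
  virtual void doGetNextFrame() {
    fInputSource->getNextFrame(fTo, fMaxSize,
                               myAfterGettingFrame, this,
                               FramedSource::handleClosure, this);
  }

private:
  static void myAfterGettingFrame(void* clientData, unsigned frameSize,
                                  unsigned numTruncatedBytes,
                                  struct timeval presentationTime,
                                  unsigned durationInMicroseconds) {
    MyFramer* framer = (MyFramer*)clientData;

    // First let the parent class do its normal parsing and delivery:
    framer->H264or5VideoStreamDiscreteFramer::afterGettingFrame1(
        frameSize, numTruncatedBytes, presentationTime, durationInMicroseconds);

    // Then overwrite the 'picture end' flag (which drives the RTP 'M' bit)
    // with what we actually know about this NAL unit:
    framer->pictureEndMarker() = framer->thisNALEndsThePicture();
  }

  // Hypothetical hook: replace with however your encoder signals
  // "this was the final slice of the picture".
  Boolean thisNALEndsThePicture() const { return True; }
};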
Yes, you’ve stumbled upon a problem with the way that RTP payload formats for
video codecs are defined: They mandate that the RTP ‘M’ bit be set for the last
RTP packet of a ‘picture’, but often it’s non-trivial for a streaming
application (like ours) to figure out exactly when this should be done.
Hi Ross,
I use LIVE555 to stream video from an H.264 encoder over RTSP/RTP, and I
have recently updated my system to support sending multiple slices per
frame. Internally, I use my own DeviceSource-based class to feed single
NALs to an instance of H264VideoStreamDiscreteFramer, which then feeds
each NAL unit on to the RTP sink.
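The chain being described can be wired up roughly as follows; “nalSource”
stands in for the poster’s DeviceSource-based class, and the sink is assumed
to be a standard RTPSink such as H264VideoRTPSink:

#include "liveMedia.hh"

void buildChain(UsageEnvironment& env, FramedSource* nalSource, RTPSink* videoSink) {
  // Wrap the single-NAL-unit source in a discrete framer; the framer parses
  // each NAL unit and maintains pictureEndMarker() for the downstream sink.
  H264VideoStreamDiscreteFramer* framer =
      H264VideoStreamDiscreteFramer::createNew(env, nalSource);

  // Start streaming: the RTP sink pulls NAL units from the framer.
  videoSink->startPlaying(*framer, NULL, NULL);
}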
A pipe is just that: there is not normally any framing involved, so it is just
a stream of bytes, with multiple “frames” running straight into one another.
You can either add a frame header that tells you the size of the data to read
(and any other data like presentation time, etc.), which your source can then
read and parse.
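A minimal sketch of that frame-header idea; the header layout below is an
assumption for illustration, not anything LIVE555 defines:

#include <cstddef>
#include <cstdint>
#include <unistd.h>
#include <sys/time.h>

struct FrameHeader {
  uint32_t frameSize;      // number of payload bytes that follow the header
  struct timeval presTime; // presentation time, captured on the encoder side
};

// Read exactly 'count' bytes, coping with the short reads a pipe can return.
static bool readAll(int fd, void* buf, size_t count) {
  uint8_t* p = (uint8_t*)buf;
  while (count > 0) {
    ssize_t n = read(fd, p, count);
    if (n <= 0) return false;
    p += n; count -= (size_t)n;
  }
  return true;
}

// Reader side (e.g. inside a DeviceSource-style class): read the header
// first, then exactly frameSize bytes, so successive frames can no longer
// run into one another.
bool readFrame(int pipeFd, uint8_t* buf, uint32_t bufSize,
               uint32_t& frameSize, struct timeval& pts) {
  FrameHeader hdr;
  if (!readAll(pipeFd, &hdr, sizeof hdr)) return false;
  if (hdr.frameSize > bufSize) return false;  // caller's buffer is too small
  if (!readAll(pipeFd, buf, hdr.frameSize)) return false;
  frameSize = hdr.frameSize;
  pts = hdr.presTime;
  return true;
}

The writer side is symmetric: write the header, then the frame bytes. Because
both ends of a pipe live on the same machine, writing the struct raw (same
endianness and padding on both sides) is acceptable here.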