AFAIK, there’s no built-in mechanism for adaptive streaming.

There are some things you can do, but they may not be flexible enough.  For 
instance, if you have a low-res H.264 and a high-res H.264 version of a video, 
the server code could switch which file it is reading from, seek to the same 
point in the media and start outputting those samples.  You will need to make 
sure you send out the SPS and PPS NALs that belong to the new stream first, 
though.  Things get more complicated (or impossible) with codecs that don’t 
support this kind of mid-stream switch, and audio adds its own problems.
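
To make that concrete, here is a rough sketch of what the switch could look 
like on the server side.  None of the class or function names below are part 
of live555; they are placeholders standing in for your own file reader, NAL 
parser and packet sink:

// Sketch only: these interfaces are made up for illustration and are not
// part of the live555 API.
#include <vector>

typedef std::vector<unsigned char> NalUnit;

struct H264Source {            // wraps one encoded file (low- or high-res)
  virtual void seekTo(double presentationTime) = 0;
  virtual NalUnit sps() const = 0;   // sequence parameter set
  virtual NalUnit pps() const = 0;   // picture parameter set
  virtual NalUnit nextNal() = 0;     // next NAL unit, in decode order
  virtual ~H264Source() {}
};

struct Sink {                  // wherever the packets go (e.g. an RTP sink)
  virtual void send(const NalUnit& nal) = 0;
  virtual ~Sink() {}
};

class BitrateSwitcher {
public:
  BitrateSwitcher(H264Source* initial, Sink* sink)
    : fCurrent(initial), fSink(sink) {}

  // Switch to another version of the same content at the same time point.
  void switchTo(H264Source* newSource, double presentationTime) {
    newSource->seekTo(presentationTime);
    // Send the new stream's SPS and PPS before any slice data, so the
    // decoder can pick up the new resolution/profile.
    fSink->send(newSource->sps());
    fSink->send(newSource->pps());
    fCurrent = newSource;
    // Ideally the next NAL delivered is an IDR frame, so the decoder gets
    // a clean refresh point in the new stream.
  }

  void deliverNext() { fSink->send(fCurrent->nextNal()); }

private:
  H264Source* fCurrent;
  Sink*       fSink;
};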

Again, AFAIK, your best bet may be a higher-level application strategy: 
detect packet loss and immediately switch to a lower-bitrate URL in the 
background until enough of your client play-through buffer is ready.  This 
won’t be seamless, but I don’t think real-time protocols and adaptive 
streaming mesh well.  You may be better off using something that is made for 
this, like HLS, DASH or Smooth Streaming.  Those methods of streaming are 
tuned to download chunks of media (e.g. 2 seconds’ worth), and if the client 
cannot download them faster than real time, it starts downloading the 
lower-bitrate chunks instead.  The way the chunks are written is specifically 
designed to be seamless (especially in the context of audio).
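
As a rough illustration of that application-level strategy, here is a sketch 
of a client-side controller.  The thresholds are arbitrary, and where the 
loss fraction and buffer depth come from (e.g. RTCP reports and your playout 
buffer) is up to your application:

// Hypothetical client-side "downshift" logic; nothing here is a live555 API.
#include <string>
#include <vector>

struct StreamChoice {
  std::string url;    // one version of the content, e.g. a lower-bitrate URL
  unsigned    kbps;   // nominal bitrate of this version
};

class BitrateController {
public:
  // choices must be sorted from lowest to highest bitrate.
  explicit BitrateController(std::vector<StreamChoice> choices)
    : fChoices(choices), fIndex(0) {}   // start at the lowest bitrate

  // Call periodically with current measurements.  Returns the URL that
  // should be playing; when it changes, open the new stream in the
  // background and swap once its buffer has filled.
  const std::string& update(double lossFraction, double bufferedSeconds) {
    if (lossFraction > 0.02 || bufferedSeconds < 2.0) {
      if (fIndex > 0) --fIndex;                       // drop a step
    } else if (lossFraction < 0.005 && bufferedSeconds > 8.0) {
      if (fIndex + 1 < fChoices.size()) ++fIndex;     // try a step up
    }
    return fChoices[fIndex].url;
  }

private:
  std::vector<StreamChoice> fChoices;
  size_t fIndex;
};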

-Jer