Hi~ ^^
Thanks for your answer~ ^^
testOnDemandRTSPServer can stream multiple media files at the same time;
so how can I use/modify testOnDemandRTSPServer or the live555MediaServer
to stream MPEG-4 video & PCM (WAV) audio at the same time to a single
client (just one connection), i.e., how can I multiplex & stream the
separate video and audio files over one RTSP session?
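(A minimal sketch of one way to do this, modeled on testOnDemandRTSPServer:
add both a video subsession and an audio subsession to the same
"ServerMediaSession", so a single RTSP client receives both streams in one
session. The port number and the input file names "test.m4e" / "test.wav"
are placeholders, and error checking is omitted:)

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) { *env << "Failed to create RTSP server\n"; return 1; }

  // One ServerMediaSession, two subsessions: the client issues a single
  // DESCRIBE and then receives both the video track and the audio track.
  ServerMediaSession* sms = ServerMediaSession::createNew(*env, "avTest",
      "avTest", "MPEG-4 video + WAV audio");
  Boolean reuseFirstSource = False;
  sms->addSubsession(MPEG4VideoFileServerMediaSubsession
      ::createNew(*env, "test.m4e", reuseFirstSource));
  sms->addSubsession(WAVAudioFileServerMediaSubsession
      ::createNew(*env, "test.wav", reuseFirstSource, False /*no u-law conversion*/));
  rtspServer->addServerMediaSession(sms);

  *env << "Play this stream using the URL: rtsp://<server-IP>:8554/avTest\n";
  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}

(Note that MPEG4VideoFileServerMediaSubsession expects a raw MPEG-4 video
elementary stream file, not an .mp4 or .avi container file.)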
>But I cannot tell whether there were multiple frames lost or just one.
>If I could know how many RTP packets each frame is composed of, at the
>live library level, that would be enough to find out how many frames
>were lost.
As you noted, the "RTPSource" abstraction delivers complete 'frames'.
(However, the term 'frame' …
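(As a hedged aside: LIVE555 does not report loss per frame, but a receiving
"RTPSource" does keep packet-level reception statistics. A minimal sketch of
reading them, assuming "rtpSource" points to the subsession's RTPSource on
the client side:)

#include "liveMedia.hh"
#include <stdio.h>

// Print how many RTP packets appear to have been lost, per sender (SSRC).
void printLossEstimate(RTPSource* rtpSource) {
  RTPReceptionStatsDB::Iterator iter(rtpSource->receptionStatsDB());
  RTPReceptionStats* stats;
  while ((stats = iter.next()) != NULL) {
    unsigned expected = stats->totNumPacketsExpected();
    unsigned received = stats->totNumPacketsReceived();
    unsigned lost = expected > received ? expected - received : 0;
    fprintf(stderr, "SSRC 0x%08x: expected %u packets, received %u, ~%u lost\n",
            (unsigned)stats->SSRC(), expected, received, lost);
  }
}

(This tells you only how many packets were lost, not which frames they
belonged to.)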
Is it possible to determine the frame rate of an H.264 encoded asset (encoded
as an Annex B bitstream (NAL units preceded by 0x00 0x00 0x00 0x01)) from
the data stream, so that I can accurately set fDurationInMicroseconds in the
framer?
Also, from the H.264 spec it looks like the only way to determine …
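(For what it's worth, a minimal sketch of the usual approach: if the SPS
contains VUI timing information, the nominal frame rate for frame-coded,
fixed-frame-rate content is time_scale / (2 * num_units_in_tick). The VUI
timing fields are optional, so they may simply not be present; the function
below assumes you have already parsed them out of the SPS:)

// Derive a value for fDurationInMicroseconds from the SPS VUI timing fields.
// Returns 0 if the bitstream carries no usable timing information.
unsigned durationFromVuiTiming(unsigned num_units_in_tick, unsigned time_scale) {
  if (num_units_in_tick == 0 || time_scale == 0) return 0;
  double fps = (double)time_scale / (2.0 * num_units_in_tick); // frame pictures
  return (unsigned)(1000000.0 / fps + 0.5);
}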
Hello *,
I have a question regarding the interface between the live
library and mplayer. As expected, larger frames are split into multiple
RTP packets. These packets are put back together by the live library and
reassembled into one larger buffer that contains the whole frame. How
could I have a …
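(For reference, a minimal sketch of the standard way a receiver pulls those
reassembled frames out of the live library, assuming "source" is the
subsession's read source on the client side; the buffer size and the names
are placeholders:)

#include "liveMedia.hh"

#define FRAME_BUF_SIZE 200000              // assumed maximum frame size
static unsigned char frameBuf[FRAME_BUF_SIZE];

static void afterGettingFrame(void* clientData, unsigned frameSize,
                              unsigned numTruncatedBytes,
                              struct timeval presentationTime,
                              unsigned /*durationInMicroseconds*/) {
  FramedSource* source = (FramedSource*)clientData;
  // "frameBuf" now holds one complete, reassembled frame of "frameSize"
  // bytes (numTruncatedBytes > 0 means the buffer was too small).
  // ... hand the frame to the decoder here ...
  source->getNextFrame(frameBuf, FRAME_BUF_SIZE,
                       afterGettingFrame, source, NULL, NULL); // ask for the next one
}

void startReadingFrames(FramedSource* source) {
  source->getNextFrame(frameBuf, FRAME_BUF_SIZE,
                       afterGettingFrame, source, NULL, NULL);
}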
You didn't properly read the "openRTSP" documentation (in particular,
the meaning of the "-o" option).
Thanks Ross! I apologize for my questions: it was a typical case
in which "RTFM" would have been the appropriate answer! ;)
I.
Therefore, if you are feeding input from a "MPEG1or2VideoRTPSource"
into a decoder, and your decoder is not smart enough to decode one
slice at a time, then you must aggregate the input data into complete
video frames before feeding them to your decoder.
Can this be done with live555 stuff?
J
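(Not from the list, just a hedged illustration of what "aggregate the input
data into complete video frames" can look like for MPEG-1/2 data: keep
appending the chunks the RTP source hands you, and emit a frame whenever a
chunk begins a new picture, sequence, or GOP header. The class below is a
hypothetical helper, not part of LIVE555:)

#include <vector>
#include <stdint.h>
#include <stddef.h>

// Accumulate MPEG-1/2 slices into whole coded pictures.  Slices carry start
// codes 0x01..0xAF; any other start code (picture, sequence, or GOP header)
// begins a new access unit, so the data accumulated so far is one frame.
class PictureAggregator {
public:
  PictureAggregator() : fSeenPicture(false) {}

  // Returns true (and fills "frame") when a complete picture has been gathered.
  bool addChunk(const uint8_t* data, size_t len, std::vector<uint8_t>& frame) {
    bool hasStartCode = len >= 4 && data[0] == 0 && data[1] == 0 && data[2] == 1;
    bool isSliceStart = hasStartCode && data[3] >= 0x01 && data[3] <= 0xAF;
    bool completed = false;
    if (hasStartCode && !isSliceStart && fSeenPicture) {
      frame.swap(fBuffer);            // hand back the finished picture
      fBuffer.clear();
      fSeenPicture = false;
      completed = true;
    }
    fBuffer.insert(fBuffer.end(), data, data + len);
    if (hasStartCode && data[3] == 0x00) fSeenPicture = true; // picture_start_code
    return completed;
  }

private:
  std::vector<uint8_t> fBuffer;
  bool fSeenPicture;
};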