Hi Ross,
Thank you so much for your response!
I already updated to the latest version of Live555, but now I have another 
problem. I use this chain to make the transport stream: InputVideo (live 
H.264) -> H264VideoStreamFramer -> MPEG2TransportStreamFromESSource. 
With that chain I get a "Frame truncated" error -> would you tell me where I 
can increase the buffer?
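
For reference, the chain described above might be set up roughly like this (a 
minimal sketch against the Live555 API; "inputVideoSource" stands in for 
whatever FramedSource delivers the live H.264 elementary stream, and the usual 
environment setup is assumed):

```cpp
// Sketch only: assumes "inputVideoSource" is the FramedSource that
// delivers the live H.264 elementary stream from the camera.
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

TaskScheduler* scheduler = BasicTaskScheduler::createNew();
UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

// Parse the live H.264 elementary stream:
H264VideoStreamFramer* framer =
    H264VideoStreamFramer::createNew(*env, inputVideoSource);

// Mux the parsed video into a Transport Stream
// (mpegVersion 5 denotes H.264 in this API):
MPEG2TransportStreamFromESSource* tsSource =
    MPEG2TransportStreamFromESSource::createNew(*env);
tsSource->addNewVideoSource(framer, 5 /*mpegVersion: H.264*/);
```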
The second problem occurs after I use VLC to request the data and play it 
back. The first VLC client finds the SPS and PPS and plays back OK, but a 
second VLC client cannot play the stream, because it does not receive the 
SPS and PPS and keeps waiting for those parameters (packetizer_h264 
warning: waiting for SPS/PPS).

----------Original Message---------- 
From: Pham Van Phu <phuocp...@ssu.ac.kr>
To: Ross Finlayson <finlay...@live555.com>, LIVE555 Streaming Media - 
development & use <live-de...@ns.live555.com>
Date: 2014-01-16 14:20:14
Subject: Re: [Live-devel] Problem_with_mpegts_muxer

FYI, the version of Live555 used on my IP camera is the version (1986-2010).

----------Original Message---------- 
From: Ross Finlayson <finlay...@live555.com>
To: LIVE555 Streaming Media - development & use <live-de...@ns.live555.com>
Date: 2014-01-15 00:08:42
Subject: Re: [Live-devel] Problem with mpegts muxer


I'm working on an embedded device, an IP network camera. I am using 
MPEG2TransportStreamFromESSource to mux H.264 and AAC into a transport 
stream. The muxing succeeds, but at the client side the bottom-right corner 
always has some broken part, as shown in the red square in the image at the 
link below:
http://s1208.photobucket.com/user/phuocpham09t/media/mpegts.png.html

 
Do you have any ideas why?



No, unfortunately I don't.  However, why are you multiplexing your video and 
audio into a Transport Stream, and then transmitting the Transport Stream?  It 
is *much* more efficient (and robust) to stream the H.264 video and AAC audio 
separately, i.e., as separate RTP streams - without dealing with Transport 
Streams at all.


The way to do this is to create two different "ServerMediaSubsession"s (each 
one a subclass of "OnDemandServerMediaSubsession", assuming that your server is 
streaming unicast), and add each one to your server's "ServerMediaSession" 
(using two calls to "ServerMediaSession::addSubsession()").
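
As a rough sketch (the subsession class names here are hypothetical 
placeholders for the subclasses you would write, and the usual RTSP server 
setup is assumed):

```cpp
// Sketch: "MyH264VideoSubsession" and "MyAACAudioSubsession" are
// hypothetical OnDemandServerMediaSubsession subclasses.
ServerMediaSession* sms =
    ServerMediaSession::createNew(*env, "camera", "camera",
                                  "Live H.264/AAC stream");
sms->addSubsession(
    MyH264VideoSubsession::createNew(*env, True /*reuseFirstSource*/));
sms->addSubsession(
    MyAACAudioSubsession::createNew(*env, True /*reuseFirstSource*/));
rtspServer->addServerMediaSession(sms);
```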


One of your "ServerMediaSubsession" subclasses (for video) would create (in its 
"createNewRTPSink()" virtual function implementation) a "H264VideoRTPSink", fed 
from a "H264VideoStreamDiscreteFramer".  Your second "ServerMediaSubsession" 
subclass (for audio) would create (in its "createNewRTPSink()" virtual function 
implementation) a "MPEG4GenericRTPSink".  (See 
"ADTSAudioFileServerMediaSubsession.cpp" for an example of how to create a 
"MPEG4GenericRTPSink" for streaming AAC audio.)



Ross Finlayson
Live Networks, Inc.
http://www.live555.com/ 

_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
