I have an OnDemand RTSP server based around Live555 that streams out live
H.264; I implemented a subsession and framer. On the client side, I have a
receiver application based loosely around openRTSP, but with my own custom
H264 MediaSink class.
My problem is fPresentationTime; the values I hand off to Live555 on
the server side are not the values I get on the client side.
On Tue, Jul 28, 2009 at 8:03 AM, Steve Jiekak wrote:
> Also, how do I determine the right values for sprop_parameter_sets_str?
You'll get these values (SPS and PPS) from your H.264 encoder. You base-64
encode them, and then delimit them with a comma. So something like:
sprop_parameter_sets=<base64(SPS)>,<base64(PPS)>
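For concreteness, here is a sketch of how that string might be built using
Live555's own "base64Encode()" helper (declared in "Base64.hh"); the function
name, and the assumption that the SPS/PPS buffers have their 0x00000001 start
codes already stripped, are mine:

  #include "Base64.hh"   // Live555's base64Encode()
  #include <string>

  // Build the sprop-parameter-sets value from raw SPS and PPS NAL units.
  std::string makeSpropParameterSets(unsigned char const* sps, unsigned spsSize,
                                     unsigned char const* pps, unsigned ppsSize) {
    char* spsBase64 = base64Encode((char const*)sps, spsSize);
    char* ppsBase64 = base64Encode((char const*)pps, ppsSize);
    std::string result = std::string(spsBase64) + "," + ppsBase64;
    delete[] spsBase64; delete[] ppsBase64; // base64Encode() returns new[]'d strings
    return result;
  }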
I would never make any assumptions about a key frame every 12 frames. I
don't know exactly what these atoms are, but I regularly use FFmpeg and
other encoders to create H.264 streams with very different key frame
intervals.
Matt S.
Ross Finlayson wrote:
OK, I've now released a new version (20…
Hello Ross,
My query is:
1. I have to transfer some metadata in XML format inside RTP packets from an
RTP server to a client.
For implementing the client-side functionality I have used LIVE555. As of now
I am receiving JPEG image data using a class derived from MediaSink (a sketch
of that pattern follows below).
2. I looked inside LIVE555…
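For reference, a minimal sketch of the usual receiving-sink pattern (compare
the sinks used by openRTSP); every name here other than the Live555 classes is
illustrative, and a real sink would hand the buffer to your JPEG or metadata
handling code instead of logging:

  #include "MediaSink.hh"
  #include <cstdio>

  class BufferSink: public MediaSink {
  public:
    static BufferSink* createNew(UsageEnvironment& env, unsigned bufferSize) {
      return new BufferSink(env, bufferSize);
    }

  protected:
    BufferSink(UsageEnvironment& env, unsigned bufferSize)
      : MediaSink(env), fBufferSize(bufferSize) {
      fBuffer = new unsigned char[bufferSize];
    }
    virtual ~BufferSink() { delete[] fBuffer; }

  private:
    // The library calls this to (re)start delivery into our buffer:
    virtual Boolean continuePlaying() {
      if (fSource == NULL) return False;
      fSource->getNextFrame(fBuffer, fBufferSize,
                            afterGettingFrame, this,
                            onSourceClosure, this);
      return True;
    }

    static void afterGettingFrame(void* clientData, unsigned frameSize,
                                  unsigned /*numTruncatedBytes*/,
                                  struct timeval presentationTime,
                                  unsigned /*durationInMicroseconds*/) {
      BufferSink* sink = (BufferSink*)clientData;
      fprintf(stderr, "Received %u bytes (pts %ld.%06ld)\n", frameSize,
              (long)presentationTime.tv_sec, (long)presentationTime.tv_usec);
      sink->continuePlaying(); // ask the source for the next frame
    }

    unsigned char* fBuffer;
    unsigned fBufferSize;
  };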
Hi,
I have to create an H264VideoStreamFramer subclass for my H.264 RTSP server.
This subclass will be the one that provides frames (or in this case, NAL
units) to the H264VideoRTPSink class.
As I understand it, H264VideoStreamFramer is only a filter that makes me
implement the method:
virtual Boolean …
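A sketch of what such a subclass can look like, assuming the pre-2011
H264VideoStreamFramer interface whose pure virtual method is
"currentNALUnitEndsAccessUnit()"; the class name, the forwarding
doGetNextFrame(), and the one-NAL-unit-per-picture assumption are all mine:

  #include "H264VideoStreamFramer.hh"

  class MyH264Framer: public H264VideoStreamFramer {
  public:
    static MyH264Framer* createNew(UsageEnvironment& env, FramedSource* inputSource) {
      return new MyH264Framer(env, inputSource);
    }

  protected:
    MyH264Framer(UsageEnvironment& env, FramedSource* inputSource)
      : H264VideoStreamFramer(env, inputSource) {}

  private:
    // H264VideoRTPSink asks this after each NAL unit, to set the RTP 'M' bit:
    virtual Boolean currentNALUnitEndsAccessUnit() {
      return True; // assumes each delivered NAL unit completes a picture
    }

    virtual void doGetNextFrame() {
      // Forward the request upstream; assumes the input source delivers one
      // NAL unit (without its 0x00000001 start code) per completion:
      fInputSource->getNextFrame(fTo, fMaxSize, afterGettingFrame, this,
                                 FramedSource::handleClosure, this);
    }

    static void afterGettingFrame(void* clientData, unsigned frameSize,
                                  unsigned numTruncatedBytes,
                                  struct timeval presentationTime,
                                  unsigned durationInMicroseconds) {
      MyH264Framer* framer = (MyH264Framer*)clientData;
      framer->fFrameSize = frameSize;
      framer->fNumTruncatedBytes = numTruncatedBytes;
      framer->fPresentationTime = presentationTime;
      framer->fDurationInMicroseconds = durationInMicroseconds;
      FramedSource::afterGetting(framer); // hand the NAL unit to the sink
    }
  };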
Hi Ross,
Does LIVE555 support AAC and G.711 audio formats?
Regards
Deepti
> Does LIVE555 support AAC
Yes, using "MPEG4GenericRTPSink" (for sending) and
"MPEG4GenericRTPSource" (for receiving). Note how
"testOnDemandRTSPServer" streams ".aac" files, for example.
> and G.711 audio formats?
Yes, but currently only u-law, not a-law. Note how
"testOnDemandRTSPServer" streams ".wav" files.
The thing is that I cannot see exactly what my doGetNextFrame() function has
to do.
Look at the template code outlined in "DeviceSource.cpp", and also
the many examples of "doGetNextFrame()" throughout the code.
--
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
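Along those lines, here is a compilable toy version of the "DeviceSource.cpp"
pattern; it delivers a canned buffer immediately, where a real device source
would return from doGetNextFrame() and call deliverFrame() later, when its
'new data' event fires. All names apart from the Live555 classes are
illustrative:

  #include "FramedSource.hh"
  #include <cstring>
  #include <sys/time.h>

  class ToyDeviceSource: public FramedSource {
  public:
    static ToyDeviceSource* createNew(UsageEnvironment& env) {
      return new ToyDeviceSource(env);
    }

  protected:
    ToyDeviceSource(UsageEnvironment& env): FramedSource(env) {}

  private:
    virtual void doGetNextFrame() {
      // Real template: if no data is ready yet, just return; a device
      // callback later calls deliverFrame(). Here data is always ready:
      deliverFrame();
    }

    void deliverFrame() {
      if (!isCurrentlyAwaitingData()) return; // the sink isn't ready yet

      static unsigned char const fakeFrame[] = { 0x65, 0x88, 0x84, 0x00 };
      unsigned frameSize = sizeof fakeFrame;

      if (frameSize > fMaxSize) {            // never overrun the sink's buffer
        fNumTruncatedBytes = frameSize - fMaxSize;
        frameSize = fMaxSize;
      } else {
        fNumTruncatedBytes = 0;
      }
      memcpy(fTo, fakeFrame, frameSize);
      fFrameSize = frameSize;
      gettimeofday(&fPresentationTime, NULL); // wall-clock aligned (see below)

      FramedSource::afterGetting(this);       // tell downstream a frame is ready
    }
  };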
3. My query is: how can I receive that new type of metadata payload? Below is
the method you sent me:
You would need to define and implement two new classes:
1/ A new subclass of "MultiFramedRTPSink", for sending RTP packets in
the new payload format.
2/ A new subclass of "MultiFramedRTPSource", for receiving RTP packets in
the new payload format.
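A sketch of the first of these, for an XML-metadata payload; the class name,
the 90000 Hz timestamp frequency, and the "XML-METADATA" payload format name
are illustrative choices, not an established payload format:

  #include "MultiFramedRTPSink.hh"

  class XMLMetadataRTPSink: public MultiFramedRTPSink {
  public:
    static XMLMetadataRTPSink* createNew(UsageEnvironment& env, Groupsock* rtpGS,
                                         unsigned char rtpPayloadFormat) {
      return new XMLMetadataRTPSink(env, rtpGS, rtpPayloadFormat);
    }

  protected:
    XMLMetadataRTPSink(UsageEnvironment& env, Groupsock* rtpGS,
                       unsigned char rtpPayloadFormat)
      : MultiFramedRTPSink(env, rtpGS, rtpPayloadFormat,
                           90000, "XML-METADATA") {}

    // Treat each XML document as one 'frame', and don't pack a second
    // document into a packet that another one started:
    virtual Boolean frameCanAppearAfterPacketStart(unsigned char const* /*frameStart*/,
                                                   unsigned /*numBytesInFrame*/) const {
      return False;
    }
  };

The receiving side would be the mirror image: a "MultiFramedRTPSource"
subclass whose processSpecialHeader() accepts packets in the same format.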
My problem is fPresentationTime; the values I hand off to Live555 on
the server side are not the values I get on the client side.
The presentation times that you generate at the server end *must* be
aligned with 'wall clock' time - i.e., the times that you would
generate by calling "gettimeofday()".
Thank you for the clarification. One question about 'wall clock' time:
>> In the server, my samples come to me timestamped with a REFERENCE_TIME,
>> which is an __int64 value with 100 nanosecond units. So to convert from this
>> to the timeval, I do the following:
>>
>> // Our samples are 100-nanosecond units …
Yes, you're not aligning these presentation times with 'wall clock'
time - i.e., the time that you would get by calling "gettimeofday()".
So, the sample times I get from my encoder start at 0 and increase
(i.e. a relative timestamp); when you say 'wall clock' time, does
this mean I should be offsetting them by a "gettimeofday()"-derived value?
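For what it's worth, a sketch of the kind of conversion being discussed:
capture a wall-clock origin once, then offset each relative 100ns sample time
by it. The function and variable names are mine:

  #include <sys/time.h>

  static struct timeval gStreamStartTime; // wall-clock time of the first sample
  static bool gHaveStartTime = false;

  struct timeval referenceTimeToPresentationTime(long long rt /* 100ns units */) {
    if (!gHaveStartTime) {
      gettimeofday(&gStreamStartTime, NULL);
      gHaveStartTime = true;
    }
    struct timeval pt;
    long long usec = rt / 10;               // 100ns units -> microseconds
    pt.tv_sec  = gStreamStartTime.tv_sec  + (long)(usec / 1000000);
    pt.tv_usec = gStreamStartTime.tv_usec + (long)(usec % 1000000);
    if (pt.tv_usec >= 1000000) { pt.tv_sec++; pt.tv_usec -= 1000000; }
    return pt;
  }

The alignment matters because RTCP uses these wall-clock presentation times
(as NTP timestamps in Sender Reports) to synchronize streams at the receiver.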