On Mon, Jan 30, 2012 at 9:14 PM, Ross Finlayson wrote:
> After going through your two excellent FAQs and the Elphel source, am I
> correct in saying that the quickest way to build an MJPEG streamer (from
> JPEG files, in a prototype) would be to modify ElphelJPEGDeviceSource.cpp only?
>
>
> Actually, I don't recommend that.
>
> I can use my own header, such as 0xAABBCCDD, to mark the timestamp entry
> points; did I understand you properly?
>
>
> You don't need to do that, if your headers are all the same size and
> contain the frame size. Instead, the frame size value will tell you how
> many bytes you need to read to get the complete frame.
> So, your idea is to create my own FileSink with my own file format,
> recording each timestamp and frame size, and then create my own
> ByteStreamFileSource and serve the data packets as they were sunk, but
> filtering out the timestamps and frame sizes.
Yes.
When we are writing a QuickTime file from an IP address stream, the
recording turns out fine. The issue we are having is that when viewing the
file in QuickTime after the recording, we get compression artifacts when
scrubbing through the timeline; additionally, the audio sync is off after
scrubbing.
Great, thank you very much!
So, your idea is to create my own FileSink with my own file format,
recording each timestamp and frame size, and then create my own
ByteStreamFileSource and serve the data packets as they were sunk, but
filtering out the timestamps and frame sizes.
I found it in MP3StreamState.cpp.
Thanks again.
From: Marlon Reid
Sent: 31 January 2012 13:12
To: LIVE555 Streaming Media - development & use
Subject: RE: [Live-devel] Mp3 Tags
Thanks for the reply.
I agree that stripping off the tags is the correct behaviour. I was
just wondering how you accomplish this. Where in live555 does it strip
out the tags?
Regards.
> It seems to me that Live555 removes ID3 tags from MP3 somewhere before or
> during streaming. Is this so?
Yes, because there's no standard way defined - in the RTP protocol - for
streaming the 'ID3' tag information along with the MP3 audio data. Our
software deals only with the MP3 audio frames.
Unfortunately, none of the output 'multimedia' file formats that we currently
support - .mov/.mp4, or .avi - are very good at supporting the recording of
accurate timestamp information along with each frame, so they are not very well
suited for the purpose of recording incoming streams for later playback.
Hi,
It seems to me that Live555 removes ID3 tags from MP3 somewhere before
or during streaming. Is this so? If so, where is this done? I checked
MP3FileSource and MPEG1or2AudioRTPSink but cannot see anything that
seems to relate to ID3 tags.
Thank you.
Hello,
I am currently developing some stream-recorder software based on the LIVE555
libraries. I am recording MPEG-4, H.264 and MJPEG and streaming it later using
XXXFileServerMediaSubsession. The main problem here is that, as the files
contain the raw recorded stream, I don't have any timing information such as
the frame rate.
Hello everyone
Thanks for everybody's suggestions.
It looks like I have made it start to do something instead of "no frame,
failed to decode", but I keep getting something like this. Is it because
I am feeding the wrong NAL units to the decoder?
My NAL structure is
0x1 SPS 0x1 PPS
[h264 @ 0x700f400] slice type to
One thing that you're definitely doing wrong is feeding your incoming H.264 NAL
units into a "H264VideoStreamFramer". This is wrong: because you are reading
discrete NAL units - i.e., complete NAL units, one at a time - from your
source, you *must* feed them into a "H264VideoStreamDiscreteFramer".
It is not working; it's crashing. The gdb stack trace is given below.
The on-demand RTSP server will initialize the source and sink when it
receives a DESCRIBE from the client. Now, when it starts to form the SDP
description, it starts looking for the bitstream; if no data has been
received at the input, it will crash.
>
> >
> > How do I convert the SPropRecord* data to an NSData and send it as
> > extradata to the decoder?
>
> I don't know what a "NSData" is (it's apparently something outside our
> libraries), but I hope it should be obvious from the implementation of the
> function in "liveMedia/H264VideoRTPSource.cpp" how it works.
>
>
"NS
> In H264VideoFileServerMediaSubsession::createNewStreamSource, can
> H264VideoRTPSource be used instead of ByteStreamFileSource so that I can
> get live input?
I think that will work.
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
James,
You can parse the sprop parameters in order to get meaningful stream
info (stream width, height, etc.) if you need to.
You can get details from here:
http://stackoverflow.com/questions/6394874/fetching-the-dimensions-of-a-h264video-stream
Best Wishes
Novalis
2012/1/31 Ross Finlayson :
For PIPEs to work, the H264VideoRTPSource and the RTSP server should be in
different processes.
It will not work if they are in a single process, as a PIPE read or write
would be a blocking call.
Is there an alternative way to do the same without using Linux PIPEs?