Awesome!
I'm creating an RTSP server in a separate thread. My encoder (libx264) produces
arrays of x264 NAL units. When the encoder processes the first frame it produces
an array of 4 NAL units. I then pass the units one at a time through my
DeviceSource and call signalNewFrameData each time. But it seems this separate thread...
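For what it's worth, the mechanism live555 provides for exactly this situation (an encoder thread handing frames to the single library thread) is "TaskScheduler::triggerEvent()", which is documented as safe to call from another thread; as I understand the "DeviceSource" template, "signalNewFrameData()" is a thin wrapper around it. Below is a self-contained sketch of that idea with the live555 types replaced by standard C++ (all names here are mine, not library API): the encoder thread does nothing but set a flag, and the queued NAL units are delivered from the scheduler's own loop.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Hypothetical stand-in for live555's event-trigger mechanism.
std::atomic<bool> frameReady{false};
std::vector<int> pendingNals; // written by the encoder BEFORE setting the flag
std::vector<int> delivered;   // touched only by the scheduler thread

void encoderThread() {
    pendingNals = {1, 2, 3, 4};                         // 4 NAL units from the first frame
    frameReady.store(true, std::memory_order_release);  // the "triggerEvent" call
}

void schedulerLoop() {
    // Stand-in for doEventLoop(): wait for the trigger, then deliver the
    // queued NAL units one at a time (the "deliverFrame" step).
    while (!frameReady.load(std::memory_order_acquire)) { /* poll other tasks */ }
    for (int nal : pendingNals) delivered.push_back(nal);
}

std::vector<int> runDemo() {
    delivered.clear(); pendingNals.clear(); frameReady = false;
    std::thread enc(encoderThread);
    schedulerLoop();
    enc.join();
    return delivered;
}
```

The key point is that the worker thread never calls into the scheduler's data structures itself; the release/acquire pair on the flag is what makes the handoff of the NAL array safe.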
How can I increase the H264VideoRTPSink buffer size?
Best Wishes,
Novalis
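Not an authoritative answer, but the setting usually pointed to for this is the static "OutPacketBuffer::maxSize", which "MultiFramedRTPSink"-derived sinks such as "H264VideoRTPSink" allocate their packing buffer from. It has to be raised before the sink is created; the value below is only illustrative.

```cpp
#include "liveMedia.hh"  // live555 header declaring OutPacketBuffer (not compiled here)

void configureBuffers() {
    // Must be larger than the biggest frame/NAL unit you will ever send;
    // 300000 is just an illustrative value, not a recommended one.
    OutPacketBuffer::maxSize = 300000;
}
```

Call something like this early in your server's setup code, before any RTPSink objects exist.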
___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
> Yes there is a comment about that in the code. Is it difficult to convert to
> bytestream? Or is there something quick I can do to tell the caller that I
> don't have data yet?
I'll be updating the "WAVAudioFileSource" code very shortly to do asynchronous
file reads (except on Windows).
Yes, there is a comment about that in the code. Is it difficult to convert to
bytestream? Or is there something quick I can do to tell the caller that I
don't have data yet?
Ross Finlayson wrote:
> But the wav source class just uses fread instead of the bytestream so my
> getframe blocks on my audio fifo.
Because you've made your own custom modifications to the "testRTSPClient" code,
it's hard for me to tell what might be going wrong. So you need to first start
with the original, unmodified "testRTSPClient" code.
I suggest going back to that code, and verifying - in the implementation of the
(s
Hi,
I'm fairly new to using LIVE555, so apologies if my question is a
stupid one, I've read through the FAQ and looked for an answer to this
through the live-devel mailing list but I've so far come up with
nothing.
I've subclassed MediaSink for use with an iPhone app I'm developing,
I've b...
> But the wav source class just uses fread instead of the bytestream so my
> getframe blocks on my audio fifo.
OK, I didn't realize that you were using the "WAVAudioFileSource" class. Yes,
you're right - that still uses a blocking "fread()", rather than asynchronous
reading. I'll need to fix this.
But the wav source class just uses fread instead of the bytestream so my
getframe blocks on my audio fifo. Doesn't that also block video since there is
only one thread? What do I do if I have no audio data available?
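One standard way to "tell the caller there is no data yet", independent of live555: put the fifo descriptor in non-blocking mode and treat EAGAIN as an empty delivery rather than blocking the single event-loop thread (in a FramedSource you would then schedule a retry instead of completing the read). A minimal POSIX sketch, with the return convention being my own:

```cpp
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

// Non-blocking read from a fifo/pipe fd (opened or fcntl'd with O_NONBLOCK).
// Returns: >0 = bytes read, 0 = no data available yet, -1 = EOF or real error.
long readNonBlocking(int fd, unsigned char* buf, unsigned long cap) {
    long n = read(fd, buf, cap);
    if (n > 0) return n;                       // got data
    if (n == 0) return -1;                     // writer closed the fifo (EOF)
    return (errno == EAGAIN || errno == EWOULDBLOCK) ? 0 : -1;
}
```

With this convention, a return of 0 is the "don't have data yet" signal: deliver nothing and arrange to be called again later, so the audio fifo can never stall the video.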
I will check presentation time as well.
Ross Finlayson wrote:
> The problem is combining audio+video.
Hi Ross,
I found the problem and managed to fix it even though I am not entirely
sure of the reason for it.
I have my own custom AudioRTPSink for PCM data. Removing the
MultiFramedRTPSink::doSpecialFrameHandling call from the
doSpecialFrameHandling function in my derived AudioRTPSink solved the problem.
On 03.02.2012 08:14, Marlon Reid wrote:
The bottom half of the right
channel contains noise. If you take a look at the image hosted here :
http://www.freeimagehosting.net/q1cgw you will see what I mean.
That's a great example where looking closer at the data will give a hint
on what might be wrong.
I have installed a new version (2012.02.03) of the "LIVE555 Streaming Media"
code that makes a small change to the behavior of "RTSPClient"s. Now, after
receiving the response to each RTSP "SETUP" command, the code will send a
couple of short, 'dummy' UDP packets towards the server. If the client...
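The reasoning behind those dummy packets can be shown with plain sockets: the first datagram sent from the client's RTP port is what creates the address mapping that the server (or an intervening NAT) can reply to. A loopback-only toy, no live555 involved, where the "server" learns the client's return address from the dummy packet:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Sends one 4-byte dummy datagram from a "client" socket to a "server"
// socket on the loopback address; returns how many bytes the server saw.
int sendDummy() {
    int srv = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in sa{}; sa.sin_family = AF_INET;
    sa.sin_port = 0;                                 // let the OS pick a free port
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    if (bind(srv, (sockaddr*)&sa, sizeof sa) != 0) { close(srv); return -1; }
    socklen_t sl = sizeof sa;
    getsockname(srv, (sockaddr*)&sa, &sl);           // learn the chosen port

    int cli = socket(AF_INET, SOCK_DGRAM, 0);
    const char dummy[4] = {0, 0, 0, 0};              // content doesn't matter
    sendto(cli, dummy, sizeof dummy, 0, (sockaddr*)&sa, sizeof sa);

    char buf[16]; sockaddr_in from{}; socklen_t fl = sizeof from;
    long n = recvfrom(srv, buf, sizeof buf, 0, (sockaddr*)&from, &fl);
    // 'from' now holds the client's address:port -- the same mapping a NAT
    // in the path would have created for return RTP traffic.
    close(srv); close(cli);
    return (int)n;
}
```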
> The problem is combining audio+video. I think I did the audio incorrectly.
> I think it's doing a blocking read on my audio fifo and messing up the video
> since the whole shebang is single threaded. I think what I need is the
> ByteStream class which does async, that way it tells the scheduler...
> I am experiencing a problem that has me stumped. My application uses Live555
> to stream PCM data over a network. The problem is that the data received on
> the client side is corrupt. The bottom half of the right channel contains
> noise. If you take a look at the image hosted here :
> http://www.freeimagehosting.net/q1cgw you will see what I mean.
I have been struggling with how to stream live audio+video in an embedded
device. What I have is a process which feeds an H.264 ES into a video fifo, say
/tmp/video.fifo, and a process which feeds ulaw into an audio fifo, say
/tmp/audio.fifo. If I use the H264*.cpp/hh classes and point them at...
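The single-threaded way out of the two-fifo problem is to wait on both descriptors at once and read only whichever one is ready, which is essentially what live555's TaskScheduler does internally (in library terms you would register each fd with "setBackgroundHandling()" rather than call select() yourself). A bare POSIX sketch of the idea, with the helper name being my own:

```cpp
#include <sys/select.h>
#include <unistd.h>

// Waits up to timeoutMs for EITHER fifo to become readable.
// Returns the ready fd (preferring video if both are ready), or -1 on timeout.
// Because neither fd is ever read blindly, a starved audio fifo can never
// stall the video stream, even though everything runs in one thread.
int readyFd(int audioFd, int videoFd, int timeoutMs) {
    fd_set rset; FD_ZERO(&rset);
    FD_SET(audioFd, &rset); FD_SET(videoFd, &rset);
    int maxfd = audioFd > videoFd ? audioFd : videoFd;
    timeval tv{timeoutMs / 1000, (timeoutMs % 1000) * 1000};
    if (select(maxfd + 1, &rset, nullptr, nullptr, &tv) <= 0) return -1;
    return FD_ISSET(videoFd, &rset) ? videoFd : audioFd;
}
```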