Ross,
Thank you for the clarification. I do see your point about aborting the client
side right away (you were correct about this being a client app); however,
considering other possible causes (temporary packet loss? some other momentary
hiccup on the LAN?), I prefer to keep session closure (e.
> Okay, let me try being more specific:
>
> 1) The application streams RTSP H264 video.
By “streams”, do you mean transmits, or receives?
If you mean “transmits”, then what is generating the H.264 video? Is it coming
from a file, or from a device (that the OS treats as a file)?
If you mean
As I noted in my earlier response, we usually have to continue to allow an
“RTSPClientSession” to outlive an “RTSPClientConnection” - but that doesn’t make
sense if the session is using RTP/RTCP-over-TCP streaming. In that case, when
the “RTSPClientConnection” dies, we need to also close any “R
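(A hedged illustration of that rule, using made-up "Session"/"Connection" types
as stand-ins for the real RTSPServer classes - this is not the actual liveMedia
code, just the shape of the decision being described:)

    // Hypothetical, simplified sketch; "Session" and "Connection" are
    // illustrative stand-ins, not the real RTSPServer classes.
    #include <list>

    struct Connection { };

    struct Session {
      Connection* conn;   // the connection that created this session
      bool overTCP;       // true if RTP/RTCP is interleaved over that TCP connection
    };

    void onConnectionClosed(Connection* deadConn, std::list<Session*>& sessions) {
      for (std::list<Session*>::iterator it = sessions.begin(); it != sessions.end(); ) {
        Session* s = *it;
        if (s->conn == deadConn && s->overTCP) {
          // RTP-over-TCP: the data path died with the connection, so the
          // session cannot usefully outlive it - tear it down now.
          delete s;
          it = sessions.erase(it);
        } else {
          // RTP-over-UDP: let the session linger; the client may send further
          // commands on a new connection, or the liveness timeout reclaims it.
          ++it;
        }
      }
    }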
Okay, let me try being more specific:
1) The application streams RTSP H264 video.
2) The source is a stock source created by the stack while initializing the
H264 RTP stream for the RTSP session; we are not subclassing it at all. A very
rudimentary “do-nothing” sink is subclassed from MediaSink. The o
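(A "do-nothing" sink of that kind, if it follows the DummySink pattern from
live555's testRTSPClient example, would look roughly like this sketch - the
class name and buffer size are illustrative, not the poster's actual code:)

    // Sketch of a minimal "do-nothing" sink, modeled on testRTSPClient's DummySink.
    #include "liveMedia.hh"

    class NullSink: public MediaSink {
    public:
      static NullSink* createNew(UsageEnvironment& env) { return new NullSink(env); }

    private:
      NullSink(UsageEnvironment& env) : MediaSink(env) {}

      // Called by startPlaying() (and by ourselves) to request the next frame:
      virtual Boolean continuePlaying() {
        if (fSource == NULL) return False;
        fSource->getNextFrame(fReceiveBuffer, sizeof fReceiveBuffer,
                              afterGettingFrame, this,
                              onSourceClosure, this);
        return True;
      }

      // Called by the source each time a complete NAL unit has been delivered:
      static void afterGettingFrame(void* clientData, unsigned /*frameSize*/,
                                    unsigned /*numTruncatedBytes*/,
                                    struct timeval /*presentationTime*/,
                                    unsigned /*durationInMicroseconds*/) {
        // Discard the data, then immediately ask for the next frame:
        ((NullSink*)clientData)->continuePlaying();
      }

      u_int8_t fReceiveBuffer[100000];
    };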
> Our software uses liveMedia as an RTSP client to connect to network cameras
> to receive H.264 frames.
> Our sink is being passed data from (presumably) the FramedSource class in
> response to data arriving from the camera/RTSP/H.264 video source and being
> depacketised.
OK, it sounds like y
> We have been seeing some errors where, if a timeout occurs in getNextFrame,
> the next getNextFrame call results in an abort() call (due to
> fIsCurrentlyAwaitingData being set).
What object is this? Is this a subclass of “FramedSource” that you have
written yourself - e.g., to deliver data
Our software uses liveMedia as an RTSP client to connect to network cameras to
receive H.264 frames.
Our sink is being passed data from (presumably) the FramedSource class in
response to data arriving from the camera/RTSP/H.264 video source and being
depacketised.
We are also getting similar is
Hi,
We have been seeing some errors where, if a timeout occurs in getNextFrame, the
next getNextFrame call results in an abort() call (due to
fIsCurrentlyAwaitingData being set).
What is the proper behavior in this case:
1) Don’t call getNextFrame again; this source is tainted (don’t belie
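(Whatever the right recovery policy turns out to be, the abort() itself can be
avoided by checking isCurrentlyAwaitingData() before issuing another read. A
sketch, in which only isCurrentlyAwaitingData() and getNextFrame() are stock
FramedSource API and everything else is illustrative:)

    #include "liveMedia.hh"

    // Sketch only: guard against issuing a second read while one is pending.
    void requestAnotherFrame(FramedSource* src,
                             unsigned char* buffer, unsigned bufferSize,
                             FramedSource::afterGettingFunc* afterGetting,
                             void* clientData) {
      if (src == NULL) return;

      if (src->isCurrentlyAwaitingData()) {
        // A previous getNextFrame() has not yet completed (e.g. an
        // application-level timeout fired while the source was still waiting
        // for data).  Calling getNextFrame() again now trips the
        // fIsCurrentlyAwaitingData check and aborts, so either let the
        // pending read complete or tear the source down instead.
        return;
      }

      src->getNextFrame(buffer, bufferSize, afterGetting, clientData, NULL, NULL);
    }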
> Simply put, a single call to afterGettingFrame in my file sink is receiving a
> single large “frame” that actually consists of an SPS, PPS, and the I frame
> in one single byte stream with no prefix or separator between them.
But where is this data coming from? I.e., what in your code is feed
Simply put, a single call to afterGettingFrame in my file sink is receiving a
single large "frame" that actually consists of an SPS, PPS, and the I frame in
one single byte stream with no prefix or separator between them.
https://dl.dropboxusercontent.com/u/2931731/Bugs/largesps.dat
That is dumpe
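(One piece of background that may help here: the NAL units a liveMedia H.264
RTP source hands to a sink carry no Annex-B start codes, so a file sink has to
write its own 0x00000001 prefix before each delivered unit - the stock
H264VideoFileSink does essentially this. A minimal sketch, with illustrative
parameter names:)

    #include <cstdio>

    // Sketch: write one NAL unit, as delivered by the H.264 RTP source, to a
    // raw .h264 file.  "out", "nalUnit" and "nalSize" are illustrative names.
    void writeNALUnit(std::FILE* out, unsigned char const* nalUnit, unsigned nalSize) {
      // The source strips the Annex-B start code, so write our own prefix
      // before each delivered unit (SPS, PPS and slices alike):
      static unsigned char const startCode[4] = { 0x00, 0x00, 0x00, 0x01 };
      std::fwrite(startCode, 1, sizeof startCode, out);
      std::fwrite(nalUnit, 1, nalSize, out);
    }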
I’m having a very hard time understanding exactly what problem you’re having.
In particular:
> It's missing the H.264 prefixes between NALUs and joined them together in one
> afterGettingFrame call
What is “It”? I.e., where is this data coming from? I.e., are you a client, a
server, a proxy
Good morning all.
We're trying to track down an issue we're seeing very occasionally.
This manifested itself as a crash in our code when storing the SPS frames, where
the size overran the 1 KiB buffer we’d allocated for it (now fixed).
I was originally thinking it was memory corruption elsewhere ca
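(A common defence against this class of crash: liveMedia itself never writes
past the maxSize passed to getNextFrame(), and anything that did not fit is
reported through the numTruncatedBytes argument of the after-getting callback,
so the remaining risk is in the application's own copies. A sketch of a bounded
copy into a fixed-size SPS store - all names are illustrative:)

    #include <cstring>

    // Sketch: bound the copy before storing a NAL unit into a fixed-size
    // buffer such as the 1 KiB SPS store mentioned above.
    enum { kSpsStoreSize = 1024 };

    bool storeSPS(unsigned char* store, unsigned& storedLen,
                  unsigned char const* nal, unsigned nalSize) {
      if (nalSize > kSpsStoreSize) {
        // Oversized SPS: reject it (or reallocate) instead of overrunning.
        return false;
      }
      std::memcpy(store, nal, nalSize);
      storedLen = nalSize;
      return true;
    }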
I've inherited most of this code and I'm supporting it, but I'm not really an
expert at it.
>I'd still like to know exactly where/how/why the crash was happening, to make
>sure that it's not a problem with our code. (As always, I assume that you're
>using the latest version of the code.)
Yes