In MultiFramedRTPSink we have:
Boolean MultiFramedRTPSink::isTooBigForAPacket(unsigned numBytes) const {
// Check whether a 'numBytes'-byte frame - together with a RTP header and
// (possible) special headers - would be too big for an output packet:
// (Later allow for RTP extension header!
I have, I think, a unique problem that uses the same section of code, so it
seems a good time to ask a related question.
I have reuseFirstSource set because it is a live source.
In this test I have 2 or more clients connected; one of them is a slow
consumer, causing the OS to back up the socket until a
In RTSPClient, would it be better to have the cast from char* to u_int8_t*
inside the write function, so those operating systems that expect send() to
take a char* will not need re-casting?
(If not, an alternate solution like a typedef of SOCKET and DATATYPE in a common
platform header or compiler define
I can diff the code once it downloads but I thought it might be good to ask
at a high level.
Does this encrypt only the RTSP part of the conversation? Is the video
still in the clear?
I know there are three ways to stream in live555: TCP interleaved, tunneled,
and UDP.
Did we get SRTP and SRTCP in this
In GenericMediaServer:
Is there a memory leak if I call AddUser with the same credentials more
than once?
void UserAuthenticationDatabase::addUserRecord(char const* username,
                                               char const* password) {
  fTable->Add(username, (void*)(strDup(password)));
}
strDup is called each time, but th
I did check that and we have plenty available; we are using about 104 sockets
out of 1024.
On Wed, Sep 18, 2019 at 10:56 AM Ross Finlayson wrote:
> > On Sep 18, 2019, at 7:30 AM, Jeff Shanab wrote:
> > I have been using Live555 for many years so I am fairly familiar
I have been using Live555 for many years so I am fairly familiar with the
code. Most of my time has been with the client side, but I am currently
having some trouble with a server implementation and was hoping someone
might have some insight.
The issue is after 3 to 10 days of constant running serv
I have experimented with a few ideas for buffer management in live555,
admittedly mostly as a client, not a server.
New GetNextFrame implementations:
Pass in a reference to a vector and let it resize and fill in the data member.
Worked well; not sure how efficient.
Pass in a pointer to a struct that has v
In netcommon.h we have the following code:
...
1> /* Windows */
2> #if defined(WINNT) || defined(_WINNT) || defined(__BORLANDC__) ||
defined(__MINGW32__) || defined(_WIN32_WCE) || defined(_MSC_VER)
3> #define _MSWSOCK_
4> #include <winsock2.h>
5> #include <ws2tcpip.h>
6> #endif
7> #include <windows.h>
...
From my understanding a
I have tested for over 100 hours with instrumentation to confirm the code
is being used.
This happens during rtsp-over-http and appears to be where the device is
not expecting Receiver Reports back on the GET connection.
RTPInterface calls the following function in GroupsockHelper.cpp with 500
mil
As far back as 6 years ago, I see that Live555 supports rtsp-over-http.
To quote Ross:
"The protocol tunnels RTSP over HTTP, but RTP (and RTCP) packets
are also tunneled over the RTSP channel. So, strictly speaking, it's
(RTP/RTCP-over-RTSP)-over-HTTP."
But when I Wireshark it, I see the Receiver Report
In OnDemandServerMediaSubsession, in StreamState::startPlaying (line
530), we call
void RTCPInstance::setSpecificRRHandler(netAddressBits fromAddress,
Port fromPort, TaskFunc* handlerTask, void* clientData)
With "dests->tcpStreamSock", a socket number, for the first argument.
I
I was looking at what it would take to update the ugly groupsock, as
mentioned in past posts, for the purpose of adding things like
rtsp-over-https and SRTP.
The socket usage for rtsp-over-http is simple and direct inside RTSPClient,
and I see the 2 sockets created for GET and POST. But then the
Med
In my last project my server collected camera stats and served them up in a
web page using an embedded (Mongoose) web server. I hooked into the
callback functions of my own code. This was security cameras to disk, HTTP
streaming, and my own browser plugin streaming.
On Wed, Feb 15, 2017 at 3:32 P
Other than the way the two OSes report it?
Windows divides by the number of cores; Linux shows you per core.
On my 8-core machine, Windows will show 12% while Linux shows 96%.
On Mon, Oct 3, 2016 at 9:54 AM, Dr. Monotosh Das <
monotosh@videonetics.com> wrote:
> Dear Ross,
>
> Is there any evidence that live
I am dealing with a mixture of security cameras that can be from 640x480
to 4000x3000 (4K) and beyond.
These 4K cameras (H264) can throw some large NAL units on the keyframe
when the quality is high and the lighting is just right. I hate to make the
buffer in the client large for all cameras ju
I have been using a network emulator to introduce network issues to test
the robustness of our code and found this conversation timely.
In some of my tests, which are RTP-over-TCP, the lost packets can be
re-transmitted. The ReorderingPacketBuffer then can take the RTP packets
inside that may be out o
Why not add a metadata subsession? Then you can put whatever you like in,
for example, an XML payload.
In security cameras they put motion data, analytics, dewarp parameters,
multicamera calibration etc.
Small amounts of information have also been put into the SEI. But it sounds
like you want to pu
Sorry, I cannot.
On a side note, I just stumbled across something you may be interested in
for a live site. (yeah got it, pun intended)
http://doxygraph.sourceforge.net/
I found it useful because it lets me look at a subset interactively.
On Mon, Apr 4, 2016 at 1:03 PM, Ross Finlayson
wrote:
I have used live555 for quite a while, and what you need to understand is
that live555 does internally what an operating system does at a more
generic level: jump between tasks. An operating system has no knowledge,
so it slices tasks and must preserve and restore context. By using an event
loop t
Check your quoting; it sounds like maybe there is an unbalanced quote, so
the CR is ignored and it is waiting for the completion?
On Wed, Mar 9, 2016 at 6:47 PM, Kelley Klassen wrote:
> When I run openRTSP with commands that were working fine yesterday, I only
> get
> Like a secondary pro
Windows machine is something *other than* "c:\Program
>Files (x86)\Microsoft Visual Studio 14.0[..or latest release]\Vc", change
>the "TOOLS32 =" line in the file "win32config".
>4. In a command shell, 'cd' to the "live" directory, an
I personally use CMake to generate Visual Studio, Xcode, makefile and
embedded projects of live555 and test programs.
A bit of a learning curve at first but I've grown to love it.
On Thu, Mar 3, 2016 at 3:37 AM, Deanna Earley
wrote:
> You need to add all referenced classes (the .cpp at least).
I finally got around to trying this to see how much space could be saved
and it is about 40% depending on target platform.
This is the result of my first attempt at this.
- I had to ifdef out a section of code in RTSPClient as it wants to pull in
server components when it implements the REGISTER c
I was wondering too because my first thought is to keep it simple and work in
the language the library supports. (Software engineering for me is "same
stuff, different language.")
C/C++ is the best cross-platform compiled language for me;
Python is the best cross-platform scripting language for me.
Ha
Is it possible, and allowed in the license, to build a client-only DLL
to save size?
If I am pulling from RTSP sources only, to record them for example, I do not
need the server or proxy server code and could save a lot of space.
I concur.
Here are some data points.
I have been streaming 4K cameras recently; one of them is actually
4000x3000 resolution.
A Sony is the normal 3840x2160, streaming at 30 fps. It puts out 5
slices for each frame type (I, P, B). Slices also enable us to parallelize
the decoding so it can be r
. Indeed it was CPU not keeping up. We have had
> good experiences with threads dedicated for decoding sessions with streams
> of even more than 5MP. Problems come up when we have too many of these, of
> course there is so much you can ask the CPU.
>
> For me it was that one of the sou
I work with 5MP and larger streams a lot.
This is probably not a live555 issue, but there are two things that come to
mind that the larger streams stress without using a lot of CPU.
Decoding not keeping up. (Buffers on the client side; watch client memory to
see this.)
Single-threaded decoding can tak
I think Ross may have missed the bit about different subnets.
I noticed your email indicates you work at Samsung. I work at Exacq
Technologies and have done a lot with Samsung cameras, and we use live555.
My usual contact at Samsung is ByungJin Son [byungjin@samsung.com].
When there is more th
When I build live555 on OS X, I have to add a definition for SOCKLEN_T.
#ifdef SOLARIS
#define u_int64_t uint64_t
#define u_int32_t uint32_t
#define u_int16_t uint16_t
#define u_int8_t uint8_t
#endif
#endif
#ifdef __APPLE__ << I add these
#define SOCKLEN_T unsigned i
I have used Boost atomics, or just a Boost mutex with a scoped_lock, when I
needed to protect data. It implements, cross-platform, the best available on
a platform.
On Tue, Mar 17, 2015 at 9:07 AM, Robert Smith wrote:
> Ok, I will implement my own TaskScheduler class, but operations that
> read-modify-wri
Since it is 1 to 2 seconds in delay, which could be a GOP in size: is it
possible, since Axis makes it optional to insert the SPS and PPS, that there
is a delay getting the info necessary for the SDP package that ends up
persisting?
On Tue, Jan 27, 2015 at 6:49 AM, Ross Finlayson
wrote:
> No, whe
Safari, or Apple in general, is very picky about the segment length and what
is in them. I did get this to work but had to modify one of the live555
classes. It was a few years ago, but I did send it to this list as a
suggestion, so it is in the history somewhere. While the standard allows
you to no
How are you playing the file?
The container format you use is in charge of maintaining the timing
information, and the player you use or write uses the stored timestamps to
gate out the frames.
Double speed is ominous, though. Could it be that you have interleaved
frames?
If it is AVI, then the c
and
> perform the near-real time trick play. I hope that makes sense and better
> clarifies our goal. That is why we are looking into the tsx file generation
> possibilities.
>
> Thank you,
>
> Michael Chapman
>
> *From:* live-devel [mailto:liv
That seems similar to security video, and if it is really just the recent
past, it can be done entirely in the client. It may be preferred for the
smooth transition from now to recent past and back to realtime.
Note: you would only cache uncompressed video. I maintained 2 GOPs' worth
decoded and the re
Security cameras keep it simple. They will not usually have bidirectionally
predictive frames, as that generally takes a two-pass encoder and adds
latency. Either way, that is not the concern at the streaming level, although
I think live555 will reorder frames if need be.
Remember that this is an RTS
processing that you want on these
> incoming H.264 NAL units. Once again, I suggest that you review the code
> for “DummyRTPSink::afterGettingFrame()” ("testRTSPClient.cpp”, lines
> 500-521). In your own application, you would rewrite this code to do
> whatever you want to the incoming NAL un
You need to create a filter and insert it into the chain.
I had this exact scenario, and what I had was my own filter that handled the
incoming frames. All my frames were small POD classes with a bit of metadata
and a buffer holding the frame. I had a pool of these of different
sizes and they were
I think someone mentioned this camera is ONVIF. If so, the URI is in the
XML. Maybe an open-source tool like ONVIF Device Manager can extract it for you.
On Fri, Oct 31, 2014 at 7:23 AM, Dnyanesh Gate <
dnyanesh.g...@intelli-vision.com> wrote:
> Hi,
>
> We do use some swann network cameras for our product
Cool. Seeing as I just spent the last week working on our Hikvision plugin,
I am actually quite familiar with their API. LOL.
Hikvision OEMs a lot:
Interlogix, for one; my previous employer and now my current employer.
There is actually a lot of that in the industry, companies like UDP and
Sercomm, etc.
I have been wanting to get one of these. I will look tomorrow at work to
see if we have it documented anywhere.
On Oct 30, 2014 10:14 PM, "Ross Finlayson" wrote:
> Not off the top of my head but start wireshark and use whatever software
> they provide.
>
> That’s the thing. The manufacturer w
Not off the top of my head, but start Wireshark and use whatever software
they provide. You may be able to filter the traffic for tcp.dstport == 554
and then follow the stream.
On Thu, Oct 30, 2014 at 8:23 PM, Ross Finlayson
wrote:
> I’m trying to connect (using LIVE555 RTSP client software, of co
Set up a VPN tunnel; RTSP just happens to be the connection protocol on top
of it then.
On Wed, Oct 8, 2014 at 8:54 AM, Alejandro Ferrari <
alejandro.ferr...@vixionar.com> wrote:
> Hi Guys,
>
> I'm reading about some way to protect RTSP streaming, we have a security
> cam, that stream over RTSP, b
f i r e w a l l ???
Wireshark may help here. If it is a sea of red, then that may indicate
issues.
Also check the measurements. Are all users getting the same? i.e., 1000 at 1
fps vs 800 at 10 fps? So check the bandwidth to account for loss too.
On Thu, Oct 2, 2014 at 5:47 PM, Pete Pulliam wrote:
Are you making a copy for each connected viewer?
The system I worked on just over a year ago could stream around 400 streams,
but never was it 400 of 1 stream; it was 5 or 10 each of 100-200 sources.
Even then I used a buffer pool and a shared pointer, so when the last
unicast client was sent the packet,
Does Open Broadcaster need raw video as input? Live555 is a streamer only.
I decoded my frames using libavcodec (FFmpeg) and then applied them to a
texture using OpenGL or DirectX on Windows, Linux, Mac, Android or iPhone;
same procedure for all.
Libavcodec decodes to YUV422 data, that is, a luminance value
If I am understanding the question correctly, I did something similar using
live555 in a previous project. Originally this was only because the device
(a Sercomm security camera) had some special code on it. I later wrote a
small app that allowed any RTSP camera inside the network to reach out
from
The first of the 3 images looks like the information has been truncated, at
least on the luminance plane, but with all grey it is not certain. The first
frame in the full-color pictures from the earlier email, the one of a cube
and pyramid, is definitely truncated. With the decoders I've used, when the
decod
Bandwidth is not determined by live555.
The bandwidth of H264 video is determined by the resolution, quality,
GOP size and frames per second used during encoding, pretty much in that
order. The file would need to be transcoded into a new file with another
tool like ffmpeg to change its bandwidth
I am lucky with most encoders; I deal with about 23 brands.
At least 6 major brands I communicate with, and we have a healthy
relationship in which we get changes in their FW on request. Indeed,
sometimes it seems we are debugging their firmware for them. It rolls both
ways. (A lot of them use live555
A related technical question: can we split a large I-frame NAL into
slices after the fact? Sometimes the encoder is a closed piece of
hardware/firmware. Is it possible to split it into slices at a macroblock
boundary?
On Wed, Jun 18, 2014 at 10:19 AM, Vikram Singh
wrote:
> Hi ross,
>
I wrote an HLS streamer for live streams in my last job that used live555 to
pull from security cameras. I only had to make a small modification to the
MPEG2TransportStream class to make a deterministic PAT packet (or was that
PES; sorry, this is from memory). These packets are currently inserted on a
t
What is the resolution? At higher resolutions and/or encoding qualities the
encoder/decoder may require multiple calls to pump out the frame. I assume
that you are sticking with a simple base profile specifically to avoid B
frames. (Bidirectionally predictive B frames would require latency,
obviously.)
Check byte alignment, pixel format, and encoder slices.
How are you encoding them? If the frames are large, then the encoder may
spit out more than one frame with the same timestamp and different sequence
number.
NAL units may be [7][8][5][5][5][1][1][1]... instead of simply
[7][8][5][1][1][1]
In my archiver, restreamer, and player I have a watchdog that is kicked
when frames come in. If I fail to get frames, the watchdog expires, or if
ever there is a socket error. It is a do-over: I start again with the
DESCRIBE. I did my own server, and there was never more than 1 stream from
the camera
Sounds like you have a good understanding of the NAL units and H264 traffic
in general, so it might be a decoder issue.
I have found discrepancies in interpretations of the standard.
From experience...
Each NAL unit needs a header before it goes to the decoder. This is the 00
00 00 01 before the
Because of this suspicion, I ran without the tool at very low resolutions
and used Windows' right-click mini-dump feature. I then post-processed
these dumps and found the same exact objects: the 68-byte BufferPacket
structure and the 10K BufferPackets growing. Surprisingly, they are not in
lock step
I have a strange problem showing up while trying to solve a memory leak
elsewhere in my code. When I attach a memory tool, it somehow messes up
timing or something, and live555 begins to create BufferPacket structures of
68 bytes with a payload of 10K. Somehow they are not released, so the code
to ge
Thanks. I do understand how the event model works (quite elegant, BTW).
The fact that it throws away complete NAL units if a piece of a fragment has
loss explains why it appears as dropping NALs.
Here is the only part I cannot figure out: why does it not lose any packets
if I use openRTSP on th
Thank you.
I think I was able to confirm this with a printing breakpoint in
doGetNextFrame1(), inside the if (fPacketLossInFragmentedFrame).
I also get a TCP zero-window warning if TCP, and socket unreachable if UDP.
Could it be that the packets are incorrect and live555 cannot find the
beginni
frames. The
entire NAL is dropped. Almost like there is some random drop-packet test
code going on.
This has been happening for a long time, but I now have a camera that is
very sensitive to it, and instead of occasional artifacts, it is a key frame
then grey.
TIA
Jeff Shanab, Manager
I have solved this in my code by caching and inserting into the stream (using
a filter).
The decoder always needs it and, as stated, some encoders do not (Axis
default, Pelco), some do (GVI, Samsung), and for some it is a setting (Axis).
ot work
deterministically.
Maybe an #ifdef NEED_LIVE555_BOOLEAN wrapper and let us #define or not?
Jeff Shanab, Manager-Software Engineering
D 630.633.4515 | C 630.453.7764 | F 630.633.4815 |
jsha...@smartwire.com<mailto:jsha...@smartwire.com>
Thank you very much! I will update immediately.
Unfortunately, the cameras themselves are purchased both by us and resold,
and out of band by the customer.
Some can and are willing to update; others are not even open to it.
en 3 times in an hour, but most of the time it shows up after
about 2-3 days of operation.
We have a product that uses live555 along with libavcodec to play streams
from IP cameras to mobile devices, browsers and an archiver/restreamer.
Using live555 in some environments means you are not using the built-in
libraries and therefore not hooked into their hardware decoder. But a single
IP
..."
I am looking for suggestions on where to look for the source of this problem.
Simple or scaled-down examples do not have a problem. Valgrind-like tools on
Windows have a high overhead and also seem to affect the occurrence.
(Indicates a race!)
Thanks
Just an FYI.
I have noticed that many items are not initialized on construction.
Mostly this shows up during incomplete usage by the programmer (me), but it
is important to note that the checks for null that are in the code fail when
in DEBUG on Windows.
GCC generally zeroes things; Microsof
I deal with security cameras, and over time the resolutions have been
getting higher.
They love to crank up the quality and resolution to give users that initial
"Wow, this camera has a nice picture!"
But our customers are bandwidth-sensitive, and the first thing we have to do
is change the setti
to have changes that affect this
course addServerMediaSession is necessarily called from a different thread
than the one running the event loop.
I just built and ran live555 and the proxyRtsptest app on my Raspberry Pi,
and I didn't have to change a thing.
I record video from a security camera. I do not want to throw away up to the
first 2 seconds, so I measure the offset and adjust my timestamps to the
timebase of the machine. When I get the sync notification, I adjust the
adjustment. This adjustment is continually applied to all timestamps.
It is
/port change without restarting the whole application)
Thanks. I have gotten it to crash 4 or 5 times now, and in all cases the
Packet is in varying states of destruction. The previous observation about
max packet size was not repeatable. This points to a race condition. The
code is identical to all the other camera models, but it is possible this is
needs to be done to handle this case.
is 2, but the memory at
that location is only 23 bytes, and the rest is the cdcdcd... pattern, which
is "clean memory"; it means it is newly initialized, so even that looks OK.
Any suggestions on how to track this down? This was after a few minutes of
streaming video.
The namespace would allow us to get away from the #define, which is a text
substitution and is what causes the clash.
However, what about something like:
#if defined(USE_LIVE555_NAMESPACES)
# define OPENLIVENS namespace LIVE555 {
# define CLOSELIVENS } // close namespace LIVE555
# define LIVENS LIVE555
rland for years.
de the tool yet.
My code is very much like the openRTSP example. It appears as if the command
to get OPTIONS does not return a value, but the event loop is started
already, and the teardown tries to delete the buffer that never got any
data.
I am currently designing an app around the RTSP proxy server to run on ARM
(Raspberry Pi).
As a test I let it run 4 channels and 4 clients (1 per channel) for 48 hours.
I did not see any leak or CPU % growth in that time.
If you tell me your setup so I can reproduce it exactly, I can run it on my
Pi.
This
I allow snapshots from my video to be saved when a user pauses and navigates
to a frame.
This is done by encoding the RGB image into a JPEG image with avcodec.
I also allow pulling a snapshot from a file at a timestamp.
In this case I navigate to the closest keyframe, decode it from H264 and en
I had the same problem a while back. I also use live555 feeding libavcodec.
While the standard only says you need to have a 7 and an 8 before the first
5, and that after that a 5 or 1 is valid, I have had decoding trouble
because of it.
So while all the following are technically legal:
7,8,5,1,1,
I implement fast forward, reverse play, and stepping forward and backward,
but I do it from a buffer on the client.
I thought reverse required a buffer.
Can RTSP stream backwards? By GOP, obviously, since diff frames depend on
the keyframe
If you are using HTTP Live Streaming, which depends on the MPEG-2 Transport
Stream, I implemented this on live cameras by gorilla subclassing
(cut-n-paste and modify) the MPEG2TransportStreamFromESSource into a
MPEG2TransportStreamFromESSource4iOS class. I changed the inserting of the
PAT and
The containers using RTSP already define stream types for metadata beyond
video and audio, often used for analytics data for security-camera video,
for example. Is this what we are talking about?
's memory footprint. (A
good trick for isolating system load from the app on Windows: run a 32-bit
process on a 64-bit OS.)
Oh, sorry. Less than 25% CPU with the 400 streams, leading me to believe I
can handle a lot more on a rack server with multiple interfaces and a better
upstream.
Indeed, we now have over 500 cameras on a co-located DL380.
I would like to know what is the best way in such a situation to know when
I can only give a few data points. A lot depends on the bandwidth per stream.
I receive and save to disk from security cameras. These are, on average, set
for 10fps D1 (704x480).
On one desktop PC (i7-950, 12GB RAM and a solid-state drive) I was testing
my software throughput. A rather high-end machine to eli
No direct support; you have to write something, a plugin, but the live555
libraries help a lot!
I use the live555 libraries in a browser plugin written using the FireBreath
cross-browser plugin framework.
I am connecting directly to RTSP streams of security cameras as well as our
own HTTP.
I also
In the simplest case, H264 defines a protocol that starts with a key frame
and has a succession of difference frames that are much smaller, i.e. the
24KByte key frame has all the information needed to draw the whole frame and
the 1KByte diff frame has only the changes needed to update the key fram
>>Yes it's playable with vlc (and ffplay as mentioned), doesn't make it
>>any less un-playable with PS3 and mplayer though.
When I first got started with video I used VLC as my gauge that things were
working correctly. I learned over time that VLC does an absolutely
remarkable job of playing vi
I agree it seems unlikely that it would be stuck in recvfrom() following a
successful select(), but it does. I have proven this with debugging.
I checked the Windows docs, and if the flags are not set correctly on socket
creation, the default will block forever. I am looking for where this socket
I have a problem with pulling RTSP streams that needs to be reliable. If the
stream stops, I have a watchdog that times out after 5 seconds and changes
my watch variable so the event loop in BasicTaskScheduler will exit.
The problem is, in one edge case, I have debugged and found that the flag is
n
RTSP is a protocol; so is HTTP. You need a plugin.
Browsers natively handle HTTP but they do not yet directly support RTSP; it
requires a plug-in. Players like VLC, QuickTime and Flash can handle RTSP
and usually have a plugin available; indeed Flash is the most common de
facto plugin out there.
I
This situation with the large keyframes reminds me to ask if live555 can
handle Periodic Intra Refresh. x264 supports it, and if I can find an
embedded device that can handle it, it would help me a lot on my project.
I subclassed the MediaSink to receive from MPEG2TransportStreamFramer.
I gorilla subclassed MPEG2TransportStreamMultiplexor, calling it
MPEG2TransportStreamMultiplexor4iOS, and MPEG2TransportStreamFromESSource,
calling it MPEG2TransportStreamFromESSource4iOS.
The trick I used was to set up a stan
From: Jeff Shanab
Sent: Monday, April 30, 2012 9:15 AM
To: live-devel@lists.live555.com
Subject: 100% CPU in tcpReadHandler in extreme case
I have read through the archives and found mention of this issue before, but
also that it had definitely been fixed in 2010.03.14.
I am running 2012.2.29.