Thank you for the quick answer, Ross!

On 6 Apr 2014, at 22:54, Ross Finlayson <finlay...@live555.com> wrote:

>> Now I'd like to proxy some remote surveillance cameras. At first I started
>> working on some proxying code of my own, but that didn't work too well, and
>> then I found ProxyServerMediaSubsession, which seems to do exactly what I
>> want for now. However, I'd like the streams my application serves to be
>> multicast, and ProxyServerMediaSubsession explicitly states that it uses
>> unicast only.
> 
> That's correct.  That code is used to allow a single 'back-end' RTSP/RTP 
> stream (which can be either unicast or multicast) to be proxied - *unicast* - 
> to one or more 'front-end' clients.  It is not intended to be used for 
> multicast front-end streaming, because that would be based upon completely 
> different code ("PassiveServerMediaSubsession" rather than 
> "OnDemandServerMediaSubsession").

So OnDemandServerMediaSubsession and its subclasses are always for unicast
streaming, while PassiveServerMediaSubsession can perform multicast streaming?
I had understood the latter as a class that simply wraps an existing sink, not
that there would be any difference in the way the data is streamed. I guess
it's not as simple as making a version of ProxyServerMediaSubsession that is
based on PassiveServerMediaSubsession instead?
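
To be concrete, what I had in mind is the way testH264VideoStreamer wraps its
already-created multicast RTP sink. This is only my paraphrase of that example;
"videoSink" and "rtcp" stand for the sink and RTCP instance the example creates
earlier, and the exact signatures may differ between live555 versions:

    // The already-running multicast RTP sink (and its RTCP instance) are
    // simply wrapped in a PassiveServerMediaSubsession and announced via RTSP:
    ServerMediaSession* sms = ServerMediaSession::createNew(*env, "testStream",
        "proxyStream", "Session streamed by my proxy", True /*SSM*/);
    sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
    rtspServer->addServerMediaSession(sms);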


> Fortunately, however, you can proxy to a multicast 'front-end' very simply, 
> using just a modification to the existing tools:
> (Assuming that your 'back-end' camera streams H.264 video.)
> 1/ In the "testH264VideoStreamer.cpp" code (in "testProgs"), change 
> "inputFileName" from "test.264" to "stdin".
> 2/ Then run "openRTSP" on the 'back-end' stream, piping its output to your 
> modified "testH264VideoStreamer" application.  I.e.:
>       openRTSP -v rtsp://back-end-rtsp-url | 
> your_modified_testH264VideoStreamer

The back-end cameras are quite normal and output H.264 or MJPEG. But as my
application is supposed to be able to proxy several of those cameras, as well
as do optional scaling, frame rate dropping and grabbing of still images, I do
need to have it all in a more or less monolithic application. Based on your
description above it's perfectly doable, as long as I can figure out how to
properly create the source -> sink chain. I haven't yet understood how to take
the data that one example application sends to a dummy sink and turn it into a
real source for the part of the application that does the multicasting.
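
If I read testH264VideoStreamer correctly, the multicast end of that chain
would be wired up roughly like this (just a sketch based on my reading of the
example, with the RTSP/RTCP parts left out; "afterPlaying" is the example's
completion callback, and the constructor signatures may differ between live555
versions):

    // Multicast destination for the front-end stream:
    struct in_addr destinationAddress;
    destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
    const Port rtpPort(18888);
    const unsigned char ttl = 255;
    Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
    rtpGroupsock.multicastSendOnly();

    // RTP sink that multicasts the H.264 elementary stream:
    OutPacketBuffer::maxSize = 100000;
    RTPSink* videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

    // Source chain: read from "stdin" (the openRTSP pipe) and frame it:
    ByteStreamFileSource* fileSource =
        ByteStreamFileSource::createNew(*env, "stdin");
    H264VideoStreamFramer* videoSource =
        H264VideoStreamFramer::createNew(*env, fileSource);

    // Start the chain: source -> framer -> multicast RTP sink:
    videoSink->startPlaying(*videoSource, afterPlaying, videoSink);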

> 
>> Also, I'd probably want to be able to decode the incoming stream in order
>> to grab still images from it and do some manipulation on it before
>> re-encoding it. Is there some good way to "duplicate" the incoming stream
>> into my custom code for the image manipulation while at the same time
>> keeping the normal proxied stream working?
> 
> Yes, you can use the "StreamReplicator" class.  (Note the "testReplicator" 
> demo application - in "testProgs" - that illustrates how to use this.)

Aha, I was looking for things like "proxying" and "forwarding". That example
indeed looks very promising, and the StreamReplicator class seems easy to use.

> If you use the mechanism that I suggested above (piping "openRTSP" into a 
> modified "testH264VideoStreamer"), then you could update the 
> "testH264VideoStreamer" some more: feed the "StreamReplicator" from the 
> input "ByteStreamFileSource" (from "stdin"), create two replicas, then feed 
> one replica into the "H264VideoStreamFramer" (and thus the 
> "H264VideoRTPSink", for streaming), and feed another replica into your 
> decoder.

I think I have to do it all in one application in this case, but it doesn't
seem impossible at all.
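
Just to check that I understand the replicator part, the wiring would then be
something along these lines. This is only my own sketch of what you describe:
"videoSink" is the multicast RTP sink from before, and "myDecoderSink" is a
placeholder for whatever MediaSink will feed my decoder:

    // Feed the replicator from the piped-in H.264 elementary stream:
    ByteStreamFileSource* inputSource =
        ByteStreamFileSource::createNew(*env, "stdin");
    StreamReplicator* replicator =
        StreamReplicator::createNew(*env, inputSource);

    // Replica #1: frame the stream and multicast it via the RTP sink:
    FramedSource* replica1 = replicator->createStreamReplica();
    H264VideoStreamFramer* framedReplica =
        H264VideoStreamFramer::createNew(*env, replica1);
    videoSink->startPlaying(*framedReplica, afterPlaying, videoSink);

    // Replica #2: hand the unframed stream to my own decoder sink:
    FramedSource* replica2 = replicator->createStreamReplica();
    myDecoderSink->startPlaying(*replica2, NULL, NULL);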

Thank you again for the help, I'm learning more every day. :)


Best regards,
    Jan Ekholm


