Dear Mr. Roach,

Your answer below concerned recording the video stream from a WebRTC 
PeerConnection. I am currently building a WebRTC tool in which a user should 
be able to start a recording. This is part of a bachelor's thesis at my 
university.

So in the browser I get the stream using getUserMedia(). On the server side, 
a Node.js server is running that currently supports only datachannels. My 
main goal is to send the video and audio streams over the datachannel to the 
server and record them there. How would that be possible? Below you state: 
"The server starts receiving DTLS/SRTP packets from the browser, which it 
then does whatever it wants to, up to and including storing in an easily 
readable format on a local hard drive."
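
For the receiving end, I picture something like the sketch below on the 
server. It is only a sketch: it assumes the datachannel has already been 
negotiated (for example with the node-webrtc module) and that the browser 
sends the recording as binary chunks.

    // Server-side sketch: `channel` is assumed to be an already negotiated
    // datachannel object (e.g. from the node-webrtc module).
    var fs = require('fs');

    function recordChannel(channel, path) {
      var file = fs.createWriteStream(path);

      channel.binaryType = 'arraybuffer';

      channel.onmessage = function (event) {
        // Each message is assumed to be one binary chunk of the recording.
        file.write(Buffer.from(event.data));
      };

      channel.onclose = function () {
        file.end(); // flush the recording once the browser stops sending
      };
    }
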
Since the MediaRecorder API is not yet completely implemented, the DTLS/SRTP 
approach you describe would currently be the only solution I can think of. 
Grabbing frames through a canvas from a 30 FPS webcam video seems fairly 
unrealistic.
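
If MediaRecorder does become usable, the browser half of the datachannel 
idea might look roughly like this. Again just a sketch: the channel label 
and the one-second chunk interval are my own assumptions, and chunks larger 
than the channel's message-size limit would still need to be split further.

    // Browser-side sketch: `pc` is assumed to be an RTCPeerConnection whose
    // signaling with the Node.js server has already completed.
    var channel = pc.createDataChannel('recording');

    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then(function (stream) {
        var recorder = new MediaRecorder(stream);

        recorder.ondataavailable = function (event) {
          if (event.data.size === 0) { return; }
          // A Blob has to become an ArrayBuffer before it can be sent
          // over the datachannel.
          var reader = new FileReader();
          reader.onload = function () { channel.send(reader.result); };
          reader.readAsArrayBuffer(event.data);
        };

        recorder.start(1000); // ask for a chunk roughly every second
      });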

Thank you very much in advance, and excuse me for reviving such an old topic,
Kaj-Sören

On Friday, March 29, 2013 at 3:58:12 AM UTC+1, Adam Roach wrote:
> On 3/28/13 20:45, Michael Heuberger wrote:
> > Thanks Adam
> >
> > So you're saying that it should be possible? If so:
> > - where can I see some examples?
> > - what function must be called to send the video to the server?
> 
> While I can't point you to any ready-made examples off the top of my 
> head (although I suspect they exist), the general information flow for 
> real-time server-based recording of a media stream would be something 
> along the lines of:
> 
>  1. Browser retrieves a webpage with JavaScript in it.
>  2. Browser executes the JavaScript, which:
>      1. Gets a handle to the camera using getUserMedia,
>      2. Creates an RTCPeerConnection
>      3. Calls "createOffer" and "setLocalDescription" on the
>         RTCPeerConnection
>      4. Sends a request to the server containing the offer (in SDP format)
>  3. The server processes the offer SDP and generates its own answer SDP,
>     which it returns to the browser in its response.
>  4. The JavaScript passes the server's answer to "setRemoteDescription"
>     on the RTCPeerConnection to start the media flowing.
>  5. The server starts receiving DTLS/SRTP packets from the browser,
>     which it then does whatever it wants to, up to and including storing
>     in an easily readable format on a local hard drive.
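> 
> In code, the browser's half of that flow (steps 2 and 4) might look 
> roughly like the sketch below. The "/offer" endpoint and its JSON shape 
> are invented for illustration, and ICE candidate handling is left out 
> entirely.
> 
>     // Step 2: get the camera, create an offer, and send it to the server.
>     var pc = new RTCPeerConnection();
> 
>     navigator.mediaDevices.getUserMedia({ video: true, audio: true })
>       .then(function (stream) {
>         stream.getTracks().forEach(function (track) {
>           pc.addTrack(track, stream); // hand camera and mic to the connection
>         });
>         return pc.createOffer();
>       })
>       .then(function (offer) { return pc.setLocalDescription(offer); })
>       .then(function () {
>         // Step 2.4: ship the offer SDP to the server ("/offer" is made up).
>         return fetch('/offer', {
>           method: 'POST',
>           headers: { 'Content-Type': 'application/json' },
>           body: JSON.stringify(pc.localDescription)
>         });
>       })
>       .then(function (response) { return response.json(); })
>       .then(function (answer) {
>         // Steps 3 and 4: the server's answer SDP starts the media flowing.
>         return pc.setRemoteDescription(new RTCSessionDescription(answer));
>       });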
> 
> 
> Clearly, I've glossed over the details, but I hope that's enough to point 
> you in the right direction with a little more research on your end.
> 
> > - do Mozilla and Chrome use different video codecs for the same 
> > implementation?
> 
> Presently, both Mozilla and Chrome use VP8 for their video codec.
> 
> >
> > I am also confused, what's the difference between RTCWEB and WebRTC?
> 
> The standardization effort to enable real-time communications in web 
> browsers is a cross-organizational endeavor, with the 
> javascript-to-browser interface being defined in the W3C's "WebRTC" 
> working group, and the browser-to-network interface being defined in the 
> IETF's "RTCWEB" working group.
> 
> The term "WebRTC" is used in the press to refer to both halves of the 
> effort. You won't generally see "RTCWEB" unless someone is making a 
> specific reference to the IETF working group.
> 
> 
> -- 
> Adam Roach
> Principal Platform Engineer
> [email protected]
> +1 650 903 0800 x863

_______________________________________________
dev-media mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-media
