Hi,

So far I have managed to stream H.264/H.265 video, JPEG, audio, and text 
simultaneously with live555, fed from my video and audio encoders. I created a 
class that inherits from FramedSource; I use an event trigger to signal new 
frames from my threads, and I implemented a semaphore-based mechanism to wait 
for frame completion, so essentially I have a blocking API.


I know that live555 uses memmove() and memcpy() to put frames into buffers. 
First my FramedSource subclass does a memmove() into 'fTo' (this happens for 
every session); then the H264or5 sink does a memmove(), as does the text sink; 
finally the BSD 'sendto()' does a memcpy() so the buffers can be released 
immediately. In my case this causes a lot of processor load, because the 
metadata I feed into the text session is sometimes megabytes per frame. Since 
I have a blocking mechanism, I am wondering: is it possible to skip the 
memmove() and just pass a reference to my buffers?

Thank you!
Sorin.
_______________________________________________
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel
