Re: WebSocket progress report

2012-02-06 Thread Jonathan Drake
I'm one of the three CS grad students working on WebSocket (along with Petr
Praus).

Just wanted to give an update on our progress, to let you know what we're
working on:

Adding support for fragmented payloads:
Right now, after receiving a frame, StreamInbound unmasks the payload in a
WsInputStream and passes it up to the servlet via
onBinaryData()/onTextData().
To support fragmented frames, we add an intermediate step: after unmasking
the payload, write it to a PipedOutputStream connected to a
PipedInputStream that we pass upward via onBinaryData()/onTextData(). When
the next fragment arrives, keep streaming data through the pipe.

This has the advantage of also allowing us to stream huge payloads (RFC
6455 allows for a 64-bit extended length field---way too much data to
buffer in memory all at once).
It has the minor disadvantage of breaking the ByteBuffer wrappers from
MessageInbound (we can still use them for small payloads by buffering the
fragments in memory).
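
To make the idea concrete, here is a rough sketch of the hand-off (the class
and method names below are simplified/hypothetical, not the actual Tomcat
internals):

import java.io.IOException;
import java.io.InputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Simplified sketch of streaming fragmented frames to the servlet
// through a pipe instead of buffering the whole message in memory.
public class FragmentedMessageReceiver {

    private PipedOutputStream pipeOut; // written by the frame-reading side

    // Called when the first frame of a (possibly fragmented) message
    // arrives. The returned stream is what we would hand upward via
    // onBinaryData()/onTextData().
    public InputStream startMessage() throws IOException {
        pipeOut = new PipedOutputStream();
        return new PipedInputStream(pipeOut);
    }

    // Called for every frame (first, continuation, and final) once the
    // payload has been unmasked.
    public void onFrame(byte[] unmaskedPayload, boolean finalFragment)
            throws IOException {
        pipeOut.write(unmaskedPayload);
        if (finalFragment) {
            pipeOut.close(); // signals end-of-message to the reader
        }
    }
}

The main caveat with piped streams is that the JDK expects the writer and the
reader to run on different threads; if the servlet reads on the same thread
that parses frames, the pipe's fixed-size internal buffer will block once it
fills.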

I'm working on a patch that implements this; it should be ready in a day or two.

I'd appreciate any early criticism you may have; otherwise, I mainly just
want to avoid duplicate work by explaining what we're up to.

Cheers,

Jonathan Drake


Re: WebSocket TODOs

2012-02-23 Thread Jonathan Drake
On 2012/02/23 17:24, "Mark Thomas" wrote:

>On 23/02/2012 12:42, Mark Thomas wrote:
>> All,
>> 
>> The bulk of the WebSocket implementation is complete. There are,
>> however, still quite a few TODOs.
>> 
>> 1. Autobahn failure of test 7.5.1 (close handling)
>Fixed.
>
>> 2. Autobahn UTF-8 failures. Invalid UTF-8 is not being detected by the
>> Reader. Needs further investigating.
>Fixed. (Thank you Apache Harmony).
>
>> 3. Autobahn performance failures. Not all these tests are completing.
>> Need to figure out why.
>The implementation is horribly slow. See point 6.


We ran into the same performance issues in our WebSocket branch and found
that buffering the processor I/O streams vastly improved performance.

Trying the same thing on the current trunk gave an order-of-magnitude
improvement.

The current trunk runs Autobahn case 9.1.1 (64K payload) in about 500 ms and
case 9.1.5 (8M payload) in about 55 seconds; case 9.1.6 (16M) fails.

But wrapping the [Input|Output]Streams in UpgradeBioProcessor
with Buffered[Input|Output]Streams gives the following:
9.1.1 (64K) ~120 ms (versus 500 ms)
9.1.5 (8M)  ~6 sec (versus 55 sec)
9.1.6 (16M) ~12 sec (versus >100 sec)

I haven't played with Upgrade[Nio/Apr]Processor to see how they behave.
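
For reference, the change is essentially just wrapping the streams once at
upgrade time, along these lines (the class and field names here are
illustrative, not the actual trunk code):

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative only: wrap the raw socket streams once, at upgrade time,
// so that the many small per-frame reads and writes hit an in-memory
// buffer instead of going straight to the socket.
public class BufferedUpgradeStreams {

    private final InputStream in;
    private final OutputStream out;

    public BufferedUpgradeStreams(InputStream rawIn, OutputStream rawOut) {
        // 8 KiB is the JDK default buffer size; the exact value matters
        // less than avoiding a system call per byte.
        this.in = new BufferedInputStream(rawIn, 8192);
        this.out = new BufferedOutputStream(rawOut, 8192);
    }

    public InputStream getInputStream() {
        return in;
    }

    public OutputStream getOutputStream() {
        // Callers need to flush() after writing a complete frame, or the
        // client will not see it until the buffer fills up.
        return out;
    }
}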


>
>The rest are still TODO.
>
>Mark
>
>> 4. i18n. Fix all the TODOs and ensure there are no hard-coded English
>> messages.
>> 
>> 5. Threading. Fix the TODOs associated with multiple threads trying to
>> send messages at the same time.
>> 
>> 6. Profiling. Take a look with YourKit and Autobahn's performance tests.
>> If there are obvious bottlenecks, fix them.
>> 
>> 7. Add some documentation but mainly rely on the examples and the
>> Javadoc.
>> 
>> 
>> Once the above is complete, I intend back-porting the implementation to
>> 7.0.x.
>> 
>> I'd also like to see a lot more examples.



We wrote a nifty chat demo on our branch; we'll see about porting it to the
current trunk.



>> 
>> I will probably back-port the generic upgrade support first before the
>> above is complete.
>> 
>> Help with any/all of the above welcome.
>> 
>> Mark
>> 


