Irving, Dave wrote:

There's no reason to use crap like a ByteArrayOutputStream. Just append
byte arrays to a buffer.

I'll take a look at how this is done in Tomcat at the moment.

The lower-level code doesn't deal with InputStream/OutputStream constructs, only with byte arrays.
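
A minimal sketch of that idea (the class name, sizes and method names below are mine, not existing Tomcat code): a growable byte buffer the connector appends to directly, with no InputStream/OutputStream involved.

    // Hypothetical helper: accumulate bytes as plain byte arrays.
    public class AppendBuffer {

        private byte[] buf = new byte[4096];
        private int len = 0;

        public void append(byte[] src, int off, int count) {
            if (len + count > buf.length) {
                // grow geometrically so repeated appends stay cheap
                byte[] bigger = new byte[Math.max(buf.length * 2, len + count)];
                System.arraycopy(buf, 0, bigger, 0, len);
                buf = bigger;
            }
            System.arraycopy(src, off, buf, len, count);
            len += count;
        }

        public byte[] array() { return buf; }
        public int length()   { return len; }
    }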

- It looks like I'll have to implement the ActionHook stuff to
deal with call-backs from the request / response. Is there
anything else I'll need to do?

No.
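
For what it's worth, the ActionHook side is a single callback method. A rough sketch, assuming the Coyote ActionHook/ActionCode types of that era (the class and its private helpers are hypothetical, and the exact set of ActionCode constants varies between Tomcat versions):

    import org.apache.coyote.ActionCode;
    import org.apache.coyote.ActionHook;

    // Hypothetical connector-side hook: the container calls back into this
    // whenever the Response needs something done at the protocol level.
    public class NioActionHook implements ActionHook {

        public void action(ActionCode actionCode, Object param) {
            if (actionCode == ActionCode.ACTION_COMMIT) {
                // serialize the status line and headers ahead of any body bytes
                commitResponse();
            } else if (actionCode == ActionCode.ACTION_CLOSE) {
                // flush whatever is pending and mark the connection for closing
                finishResponse();
            }
            // ACTION_ACK, ACTION_RESET, etc. as needed
        }

        private void commitResponse() { /* connector-specific */ }
        private void finishResponse() { /* connector-specific */ }
    }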

That's handy. The thing that's currently puzzling me a little is that
the Response seems to have an associated stream, but doesn't really
write to it itself. When should I actually go about pushing stuff into
that buffer at the connector level? In response to action call-backs?

There's Response.doWrite, and that's it.
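
In other words, the processor registers an org.apache.coyote.OutputBuffer on the Response, and everything the container writes funnels through doWrite(). A sketch assuming the OutputBuffer/ByteChunk API of that era (NioOutputBuffer and the AppendBuffer from the earlier sketch are hypothetical names):

    import java.io.IOException;

    import org.apache.coyote.OutputBuffer;
    import org.apache.coyote.Response;
    import org.apache.tomcat.util.buf.ByteChunk;

    // Hypothetical: body bytes pile up here until the selector thread can
    // write them to the socket channel without blocking.
    public class NioOutputBuffer implements OutputBuffer {

        private final AppendBuffer pending = new AppendBuffer();

        public int doWrite(ByteChunk chunk, Response response) throws IOException {
            int len = chunk.getLength();
            // copy the chunk into the connector-level buffer; the actual
            // non-blocking write happens later, from the selector loop
            pending.append(chunk.getBytes(), chunk.getStart(), len);
            return len;
        }

        public AppendBuffer pending() { return pending; }
    }

The wiring is then just response.setOutputBuffer(new NioOutputBuffer()) in the processor, so Response.doWrite ends up delegating here.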

- Im planning to have my "NIOHttpProcessor" use a CoyoteAdapter
the same way the Http11Processor does. Does this sound reasonable
enough?

I don't care how you name your classes ;)

I was asking more about whether the existing CoyoteAdapter is likely
to be reusable in such a scenario.

If it's not reusable, then you are in trouble ;)
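
Assuming it is, the glue is small: the processor fills the Coyote Request/Response and hands both to Adapter.service(), much like Http11Processor does. A sketch reusing the hypothetical classes above (the wiring calls are the Coyote API as I recall it, so treat the exact signatures as assumptions):

    import org.apache.coyote.Adapter;
    import org.apache.coyote.Request;
    import org.apache.coyote.Response;

    // Hypothetical processor: parse HTTP off the channel, fill the Coyote
    // objects, and let the existing CoyoteAdapter do the container-side work.
    public class NioHttpProcessor {

        private final Adapter adapter;                 // the container's CoyoteAdapter
        private final Request request = new Request();
        private final Response response = new Response();
        private final NioOutputBuffer out = new NioOutputBuffer();

        public NioHttpProcessor(Adapter adapter) {
            this.adapter = adapter;
            request.setResponse(response);
            response.setRequest(request);
            response.setHook(new NioActionHook());     // call-backs from the container
            response.setOutputBuffer(out);             // Response.doWrite lands here
        }

        // called once the request line, headers and (buffered) body are parsed
        public void process() throws Exception {
            adapter.service(request, response);
        }
    }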

Any pointers / help / advice would be gratefully received.

Obviously, you can feel free to experiment all you want, but such a
specific connector will not be included in Tomcat. Scalability will
be far more limited with your design than even with the fully
threaded Tomcat. If all the server is doing is serving small requests
and responses, it will work well (although the hybrid model also
likes that scenario, so I don't think it will even improve on that);
otherwise it will just break.

Sure, there's no way I could see this being included in Tomcat
"proper". Like I said, it's just a prototype to see if it solves a
specific problem I'm experiencing (I just can't configure Tomcat with
20,000 threads). However, your reply confuses me somewhat. These are
not going to be small requests / responses: in fact, they are likely
to be fairly large multi-part messages (SOAP and the like). In
addition, the processing latency is going to be large (potentially up
to 15 seconds per request). Surely this is a reason *for* breaking
the thread-per-connection model? Why would it break with anything
other than small requests and responses? (There's nothing to stop
parsing from being offloaded to a small, probably CPU-count-matched,
thread pool.)
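
To make the shape of that concrete, a purely illustrative sketch (not Tomcat code, with the parsing and write-back details hand-waved): one selector thread owns all the sockets, and a small CPU-sized pool does the parsing and servlet work once bytes arrive.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class SelectorSketch {

        public static void main(String[] args) throws IOException {
            // CPU-count-matched worker pool for parsing + servlet execution
            ExecutorService workers = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());

            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.configureBlocking(false);
            server.socket().bind(new InetSocketAddress(8080));
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();
                for (Iterator it = selector.selectedKeys().iterator(); it.hasNext();) {
                    SelectionKey key = (SelectionKey) it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ,
                                        ByteBuffer.allocate(8192));
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = (ByteBuffer) key.attachment();
                        if (client.read(buf) > 0) {
                            key.interestOps(0);  // park the channel until we're done
                            workers.execute(new Runnable() {
                                public void run() {
                                    // parse the accumulated bytes, run the request
                                    // through the adapter, then queue the response
                                    // and re-register the channel for OP_WRITE
                                }
                            });
                        }
                    }
                }
            }
        }
    }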

If you get large responses, then I figure GC (and maybe memory usage) is going to be a problem: all your buffers will pile up, and most likely they are not going to be short-lived objects. A thread will still be needed to run the servlet, so I hope the amount of time spent in the service method will be lower than 15 seconds. Otherwise, there's no real solution besides having a large number of threads.
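
To put rough, assumed numbers on that: 20,000 in-flight responses buffered at, say, 256 KB each is already about 5 GB of live byte arrays before counting the request side, and with 15-second latencies those buffers survive long enough to be promoted out of the young generation, which is the expensive case for the collector.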

Personally, I experimented with and implemented a hybrid model for connection handling, which improves on some aspects of the regular thread-per-connection model while keeping its benefits.

Rémy
