On 10/15/2010 09:58 AM, Mark Thomas wrote:
On 15/10/2010 08:02, Mladen Turk wrote:
Hi,

I've been working for quite some time on a lightweight
VFS layer (with java.io.* as the only provider at
the moment) to be used for Tomcat's physical file
system access. The ultimate goal is to be able to run
Tomcat on top of things like Hadoop or similar distributed
file systems (e.g. GFS) simply by using the provider module.

Sounds interesting. What is the use case? Resilience?


Current users who use NFS for sharing common
data among a cluster of Tomcats.
Real cloud deployments.
Some more that I cannot talk about at this moment :)

How far does just a DirContext implementation get you?


That depends on the provider and the API it offers.
Basically, anything that touches the file system directly
has its wrapper in o.a.t.vfs.
Yes, for some providers it can be very complex, but in
the end it's a pluggable module, so it lives outside of core
Tomcat and as such does not affect the stability of the Tomcat code.

Now, the amount of change is pretty huge, but it mostly
involves replacing the java.io.* classes with their
o.a.t.vfs.* counterparts.
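The idea described above can be sketched roughly as follows. The actual o.a.t.vfs API is not shown anywhere in this thread, so all names below (VfsProvider, VfsFile, LocalFileProvider) are assumptions for illustration only, not the real interfaces:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical VFS abstraction: callers code against these interfaces
// instead of java.io.File directly.
interface VfsFile {
    boolean exists();
    long length();
    InputStream open() throws IOException;
}

interface VfsProvider {
    VfsFile resolve(String path);
}

// Default provider: a thin wrapper over java.io.*, analogous to the
// "java.io as the only provider" mentioned in the thread.
class LocalFileProvider implements VfsProvider {
    @Override
    public VfsFile resolve(String path) {
        final File f = new File(path);
        return new VfsFile() {
            public boolean exists() { return f.exists(); }
            public long length() { return f.length(); }
            public InputStream open() throws IOException {
                return new FileInputStream(f);
            }
        };
    }
}
```

With a default provider like this, the wrapper is thin enough for the JIT to optimize away, while a Hadoop or GFS provider would simply supply a different resolve() implementation without touching the calling code.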

"Huge" makes me nervous, but if it is mostly search and replace that is
less of a concern. Any feeling at the moment for the amount of overhead
this adds?


For the default provider, which just wraps java.io, almost none;
the majority of it is simply optimized away by the JIT.
For other providers, e.g. if you use a custom zip layer, load time
can even drop dramatically. Sure, if running on top of Hadoop things
will be noticeably slower than on a physical file system, but
then we are comparing apples and pears.


Finally, it'll live in the sandbox, and I really have no
plan to touch core Tomcat until it is proven stable and
performant, and then eventually accepted.


Regards
--
^TM

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org
