On Wed, Apr 27, 2011 at 8:55 PM, Felix Schumacher <felix.schumac...@internetallee.de> wrote:
> On Wednesday, 27.04.2011, 19:20 +0100, Guillaume Favier wrote:
> > Felix,
> >
> > Did you check my workaround?
> >
> > On Wed, Apr 27, 2011 at 7:01 PM, Felix Schumacher <felix.schumac...@internetallee.de> wrote:
> > > On Wednesday, 27.04.2011, 10:21 +0200, Felix Schumacher wrote:
> > > > On Wed, 27 Apr 2011 09:58:45 +0200, Felix Schumacher wrote:
> > > > > On Tue, 26 Apr 2011 21:24:16 +0100, Guillaume Favier wrote:
> > > > > > Thanks for your answer, Felix.
> > > > > Well, after rethinking my original answer, I think you will have to define two clusters:
> > > > >
> > > > > worker.list=cluster1,cluster2
> > > > > ...
> > > > > worker.c2t2.type=ajp13
> > > > > worker.c2t2.host=localhost
> > > > > worker.c2t2.port=9002
> > > > > worker.c2t2.redirect=c1t1
> > > > Aargh, this should be
> > > > worker.c2t2.redirect=c2t1
> > > Ok, last correction: redirect takes the name of the jvmRoute, not that of the worker. So those two configuration entries should be
> > >
> > > worker.c2t2.redirect=tomcat1
> > > worker.c1t1.redirect=tomcat2
> >
> > Argh, you're right, but with my workaround you can avoid dealing with the route; it is a bit more scalable.
> >
> > I implemented a workaround by playing with lbfactor:
> >
> > worker.c1t1.lbfactor=100
> > worker.c1t1.redirect=cluster1
> >
> > worker.c1t2.lbfactor=1
> > worker.c1t2.redirect=cluster1
> > #worker.c1t2.activation=disabled
> >
> > It is very unlikely that I get 100 requests on one server. This does look good, but it is a pretty complex configuration if we move up to three servers, and the complexity will increase with the number of servers. It seems that load balancing is easier than failover.
>
> I don't think lbfactor is the right solution for your problem, but I haven't checked it. I think your setup will pass 100 requests to worker c1t1 and then 1 request to worker c1t2 (I have probably simplified it quite a lot). That will trigger your servlets on your "failover" instance, which you wanted to circumvent.

I am not convinced by either my workaround or your solution, but for now that is the best one; I am still looking for something better. I will put an lbfactor of 100000, which will prevent any request from reaching c1t2. And if c1t1 fails, c1t2 will take all the requests -> I have a de facto working failover. It is also scalable, because I can add a c1t3 with an lbfactor of 1.

> As stated in my correction above, redirect takes the name of the jvmRoute, and I doubt that your tomcat instance is called cluster1, so that statement will be wrong.

I got it, and thanks for pointing that out. If I had read the manual correctly earlier, I might have spared myself quite a lot of time.

> You are right that load balancing is simpler than my example with two clusters, but that is because your original requirements were more complex than simple load balancing.
>
> If you have the memory resources for simple load balancing, I would go for it.

I can't afford it. As I noted in my original mail, one webapp is around 400M, and at any time I have 6-8 webapps (and increasing) started on each server. I would go from 5-6 GB to potentially 10-12 GB; I am pretty sure some people might disagree with that (me, for one). With this solution almost all the memory is used (little spare), and if something fails I have enough time to react.

Regards,
gui

> Regards
> Felix
>
> > gui
> > > Regards
> > > Felix
> > > > Bye
> > > > Felix
> > > > > You will have to set "jvmRoute" in your tomcats to "tomcat1" and "tomcat2".
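For reference, a minimal sketch of how that jvmRoute is set, on the Engine element of each instance's server.xml (the route names are the ones used above; the other attributes are just the usual defaults):

<!-- server.xml of the first instance -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">

<!-- server.xml of the second instance -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">

Tomcat appends this route to its session IDs, and it is what mod_jk matches route/redirect values against.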
> > > > > To mount your webapps, you can use
> > > > >
> > > > > JkMount /ABC* cluster1
> > > > > JkMount /DEF* cluster2
> > > > >
> > > > > Regards
> > > > > Felix
> > > > > > On Tue, Apr 26, 2011 at 8:36 PM, Felix Schumacher <felix.schumac...@internetallee.de> wrote:
> > > > > > > On Mon, 25 Apr 2011 09:40:59 +0100, Guillaume Favier wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I have 2 Tomcat 5.5 servers, each of them handling a set (50+) of third-party webapps named /ABC* and /DEF*. Each of these webapps is quite memory-consuming when started (more than 300M). I would like all connections to the /ABC* webapps to be handled by tomcat server 1, and connections to the /DEF* webapps to be handled by tomcat server 2.
> > > > > > > >
> > > > > > > > My objectives are:
> > > > > > > > * server 1 to be the failover of server 2, and server 2 the failover of server 1.
> > > > > > > > * any webapp should be instantiated on only one server, otherwise it might trigger a memory overload.
> > > > > > > >
> > > > > > > > So I set up my httpd.conf like this:
> > > > > > > >
> > > > > > > > JkWorkersFile "conf/worker.properties"
> > > > > > > > JkOptions +ForwardKeySize +ForwardURICompat
> > > > > > > >
> > > > > > > > and my worker.properties like this:
> > > > > > > >
> > > > > > > > worker.list = failover
> > > > > > > >
> > > > > > > > # ------------------------
> > > > > > > > # template
> > > > > > > > # ------------------------
> > > > > > > > worker.template.type=ajp13
> > > > > > > > worker.template.lbfactor=1
> > > > > > > > worker.template.connection_pool_timeout=600
> > > > > > > > worker.template.socket_timeout=1000
> > > > > > > > worker.template.fail_on_status=500
> > > > > > > >
> > > > > > > > # ------------------------
> > > > > > > > # tomcat1
> > > > > > > > # ------------------------
> > > > > > > > worker.tomcat1.reference=worker.template
> > > > > > > > worker.tomcat1.port=9001
> > > > > > > > worker.tomcat1.host=localhost
> > > > > > > > worker.tomcat1.mount=/ABC* /ABC/*
> > > > > > > > worker.tomcat1.redirect=failover
> > > > > > > >
> > > > > > > > # ------------------------
> > > > > > > > # tomcat2
> > > > > > > > # ------------------------
> > > > > > > > worker.tomcat2.reference=worker.template
> > > > > > > > worker.tomcat2.port=9002
> > > > > > > > worker.tomcat2.host=localhost
> > > > > > > > worker.tomcat1.mount=/DEF* /DEF/*
> > > > > > >
> > > > > > > ^ is this correct or a typo?
> > > > > >
> > > > > > Sorry for the typo, you're right: it is in fact
> > > > > > worker.tomcat2.mount=/DEF* /DEF/*
> > > > > >
> > > > > > > > worker.tomcat2.redirect=failover
> > > > > > > >
> > > > > > > > # ------------------------
> > > > > > > > # failover
> > > > > > > > # ------------------------
> > > > > > > > worker.failover.type=lb
> > > > > > > > worker.failover.balance_workers=tomcat1,tomcat2
> > > > > > > >
> > > > > > > > The jvmRoute is set in both server.xml files.
> > > > > > > >
> > > > > > > > Previously I had put the JkMount directives in httpd.conf, but I couldn't make the failover work, so I moved them into worker.properties. Tomcat doesn't seem to take the mount directives from worker.properties into account: a webapp is started indifferently on either server.
> > > > > > >
> > > > > > > Tomcat starts all webapps it can find, not only those you specified by a jk mount. Servlets will only start if you specify a startup order or trigger a request to a servlet.
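As an aside, the "startup order" mentioned above is the load-on-startup element in a webapp's web.xml: a servlet declared with it is instantiated when the webapp is deployed, otherwise only on its first request. A minimal sketch, with a made-up servlet name and class:

<servlet>
    <servlet-name>ExampleServlet</servlet-name>
    <servlet-class>com.example.ExampleServlet</servlet-class>
    <!-- any non-negative value means "instantiate at deploy time" -->
    <load-on-startup>1</load-on-startup>
</servlet>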
> > > > > > Ok, maybe I should clarify that:
> > > > > > 1) Tomcat starts all webapps.
> > > > > > 2) When a user connects to a specific webapp, all its objects are instantiated and therefore the memory footprint drastically increases.
> > > > > > I want to work on the second point: a webapp should be instantiated on only one server.
> > > > > >
> > > > > > > So I don't think it is possible to prevent a webapp from starting in the "failover" tomcat. But it should be possible to limit its memory footprint.
> > > > > >
> > > > > > I have done some optimisation here and already removed all shared classes, jars, etc.
> > > > > >
> > > > > > > That said, I find it strange that you define a special failover worker instead of a direct redirect like
> > > > > > >
> > > > > > > worker.tomcat1.redirect=tomcat2
> > > > > > > worker.tomcat2.redirect=tomcat1
> > > > > >
> > > > > > But that would mean (solution already tested): I have to declare it in the worker list, so when a server fails httpd will keep trying to contact it instead of contacting the failover worker and finding another worker -> even if it works, it would only work for 2 servers, not for 3.
> > > > > >
> > > > > > Thanks
> > > > > > gui
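For what it is worth, the pieces discussed above (a redirect that names the backup's route, plus the activation=disabled line that is commented out in the lbfactor workaround) are usually combined into a hot-standby member, which avoids relying on a very large lbfactor. A sketch for the /ABC* cluster only; the worker names, ports and routes are assumed from the thread, not copied from a tested configuration:

worker.list=cluster1

worker.cluster1.type=lb
worker.cluster1.balance_workers=c1t1,c1t2

# primary: the instance with jvmRoute "tomcat1"
worker.c1t1.type=ajp13
worker.c1t1.host=localhost
worker.c1t1.port=9001
worker.c1t1.route=tomcat1
# on failure, send traffic to the member whose route is "tomcat2"
worker.c1t1.redirect=tomcat2

# hot standby: the instance with jvmRoute "tomcat2"; gets no new requests while c1t1 is up
worker.c1t2.type=ajp13
worker.c1t2.host=localhost
worker.c1t2.port=9002
worker.c1t2.route=tomcat2
worker.c1t2.activation=disabled

The /DEF* cluster would mirror this with the roles swapped, and the URLs would be mapped to the lb workers with the JkMount lines Felix gave earlier (JkMount /ABC* cluster1, JkMount /DEF* cluster2).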