JKStatus Bug?

2006-05-16 Thread dhay
Hi,

We're using 3 load balancers to separate our requests (client, admin,
etc.) and numerous Tomcat servers (we're running Apache in front).

We need to be able to disable servers at the load-balancer level - say,
disable a particular server in 2 of the 3 load balancers.  We can
do this fine using the jkstatus page, but when the machines are restarted,
the changes don't seem to have been persisted.

And it seems that workers.properties is not fine-grained enough to handle
this?

Our setup is below...

Any ideas how to get around this?

cheers,

David


mod-jk.conf snippet:

JkMount /framework/admin/* adminloadbalancer
JkMount /framework/httpadaptor/* adaptorloadbalancer
JkMount /framework/* clientloadbalancer

# if you wanted to only load-balance a sub-context, you could
# map the module differently, such as:
# JkMount /myContext/* loadbalancer

JkMount /status/* jkstatus




workers.properties:

worker.LAUREL.type=ajp13
worker.LAUREL.lbfactor=1
worker.LAUREL.cachesize=25
worker.LAUREL.port=8009
worker.LAUREL.host=LAUREL.mw.prtdev.lexmark.com

worker.BLUFF.type=ajp13
worker.BLUFF.lbfactor=1
worker.BLUFF.cachesize=25
worker.BLUFF.port=8009
worker.BLUFF.host=BLUFF.mw.prtdev.lexmark.com


worker.adminloadbalancer.type=lb
worker.adminloadbalancer.method=B
worker.adminloadbalancer.sticky_session=1
worker.adminloadbalancer.sticky_session_force=1
worker.adminloadbalancer.local_worker_only=1
worker.adminloadbalancer.balanced_workers=BLUFF,LAUREL

worker.clientloadbalancer.type=lb
worker.clientloadbalancer.method=B
worker.clientloadbalancer.sticky_session=1
worker.clientloadbalancer.sticky_session_force=1
worker.clientloadbalancer.local_worker_only=1
worker.clientloadbalancer.balanced_workers=BLUFF,LAUREL

worker.adaptorloadbalancer.local_worker_only=1
worker.adaptorloadbalancer.type=lb
worker.adaptorloadbalancer.method=B
worker.adaptorloadbalancer.sticky_session=1
worker.adaptorloadbalancer.sticky_session_force=1
worker.adaptorloadbalancer.balanced_workers=BLUFF,LAUREL

worker.jkstatus.type=status
worker.list=jkstatus,adminloadbalancer,clientloadbalancer,adaptorloadbalancer


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: JKStatus Bug?

2006-05-16 Thread dhay
Hi Rainer,

Thanks for the reply.

As far as configuration change suggestions, how about making things more
fine-grained, so you can specify the worker within the balancer - eg:

worker.adminloadbalancer.BLUFF.disabled=1

Presumably something like that is happening within jkstatus?
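Applied to our setup, that hypothetical syntax might look something like this (a sketch only - this nesting is not supported by mod_jk today):

```properties
# Hypothetical per-balancer syntax (not currently supported):
# disable BLUFF in the admin and client balancers only, while
# leaving it active in the adaptor balancer.
worker.adminloadbalancer.BLUFF.disabled=1
worker.clientloadbalancer.BLUFF.disabled=1
worker.adaptorloadbalancer.BLUFF.disabled=0
```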

cheers,

David
x54680


From: Rainer Jung <[EMAIL PROTECTED]>
Date: 05/16/2006 01:36 PM
Reply-To: "Tomcat Developers List"
To: Tomcat Developers List
Subject: Re: JKStatus Bug?



Hi,

It's true that jkstatus doesn't persist changes. There is no
functionality there to write a workers.properties file (it's somewhere
near the end of the TODO list).

Concerning disabled: yes, at the moment disabled is an attribute
belonging to a worker, and when using stickiness you can only have one
worker per jvmRoute.

So if you want to use a worker in several balancers with different
enable/disable or start/stop values, workers.properties gives you no
way to configure that.
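For example, with the current syntax the flag lives on the worker itself, so a single setting affects every balancer that references it (a sketch using the worker names from your config):

```properties
# "disabled" belongs to the worker, not to a (balancer, worker)
# pair - so this takes BLUFF out of rotation in *every* lb worker
# that lists it in balanced_workers:
worker.BLUFF.disabled=1
```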

Any ideas what such a configuration could look like? If the idea looks
good, I might implement it :)

If you only have further user questions, please take them to
[EMAIL PROTECTED]

Concerning improvement of the configuration syntax in
workers.properties, this thread is the right place.

Regards,

Rainer

P.S.: local_worker and local_worker_only no longer exist; they were
removed some time before 1.2.15. The attributes are now ignored.

[EMAIL PROTECTED] wrote:
> [original message quoted in full; trimmed]

-

Re: JKStatus Bug?

2006-05-16 Thread dhay
Actually, that's the other way my colleague and I came up with.  It seems
a little clumsier, but it will work for us.

> I'll see, what I can do ...
Thanks.  What kind of timeframe are we looking at?

cheers,

David



From: Rainer Jung <[EMAIL PROTECTED]>
Date: 05/16/2006 02:34 PM
Reply-To: "Tomcat Developers List"
To: Tomcat Developers List
Subject: Re: JKStatus Bug?



Hi David,

No - internally it works like this: an lb worker has an internal
representation of the status of its balanced workers. This is created
during initialization, so each lb worker has its own view of the
balanced workers. They don't share any information, such as disable/stop,
error state, or usage counters/load (and that's something I want to keep).

The alternative approach I was thinking about is giving each (non-lb)
worker an optional attribute "jvmroute". The attribute is never used
except when the worker is configured as a balanced worker of a
load balancer.

When a load balancer looks up the config of its balanced workers, it
checks for the attribute "jvmroute". If it exists, it uses this name for
stickiness. If it does not exist, it uses the name of the worker.

The change is completely compatible with existing configs. In your case,
you would configure one worker per Tomcat target *and* per lb. Say you've
got TC1 and TC2 as Tomcats and LB1 and LB2 as lbs. Then you configure 4
workers:

worker.lb1tc1.type=ajp13
worker.lb1tc1.host=HOST_OF_TC1
...
worker.lb1tc1.jvmroute=TC1
worker.lb1tc1.disabled=WHATEVER

... 2 further workers ...

worker.lb2tc2.type=ajp13
worker.lb2tc2.host=HOST_OF_TC2
...
worker.lb2tc2.jvmroute=TC2
worker.lb2tc2.disabled=WHATEVER

and two lb workers:

worker.lb1.type=lb
worker.lb1.balanced_workers=lb1tc1,lb1tc2

worker.lb2.type=lb
worker.lb2.balanced_workers=lb2tc1,lb2tc2

That way you can configure all attributes of the balanced workers per lb,
and the implementation changes are far less risky. I'll see what I can
do ...

Rainer

[EMAIL PROTECTED] wrote:
> [original message quoted in full; trimmed]

JKStatus bug - disabled=true with only one Tomcat?

2006-05-23 Thread dhay
Hi,

It seems that if there is only one Tomcat server connected via mod_jk,
and disabled is set to true (i.e. disabled=1) for that server, the
setting is ignored and requests still make it through.

Is this a bug, or a feature?!

cheers,

David





Re: mod_jk 1.2.19 release candidate: ready to test

2006-09-19 Thread dhay
Just a quick question - does this REQUIRE Apache 2.0.58, or is it just
verified to work with it?  We're using Apache 2.0.54 but want to upgrade
only our mod_jk.

cheers,

David
x54680


From: Jim Jagielski <[EMAIL PROTECTED]>
Date: 19/09/2006 15:07
Reply-To: "Tomcat Developers List"
To: "Tomcat Users List"
cc: Tomcat Developers List
Subject: Re: mod_jk 1.2.19 release candidate: ready to test



+1 for release:

Tested OS X, Sol8 and Suse 10.0




