On 06.03.2009 14:19, Mladen Turk wrote:
Rainer Jung wrote:
On 06.03.2009 13:32, Mladen Turk wrote:
Rainer Jung wrote:
All this should never touch the global state
if there are live connections.
Let the live connection decide for itself when it gets serviced.
Anything else is just plain 'guessing'.
That was my general rule of thumb,
because the point is to be as robust as possible.
JK_CLIENT_ERROR: it does not touch the global state (well, it sets it
to OK), but you do touch the local state. I argued why I would set
the local state to OK as well. Any answer?
Well you said:
"It doesn't matter for the logic, but makes keeping track of the
differences easier."
I think the opposite. Marking local as error and global as OK makes
things easier to track. A local error means that this worker won't
be tested in the next lb loop; a global error means it won't be tested
until the retry timeout expires.
Since it is not a functional change, I can stick to your definition of
"easier", so we keep it as is.
Explicitly setting global to OK reads as
'It wasn't our fault, it was client's fault, we are still in
contact with backend'
But that's actually what we do: we do set global explicitly to OK, and
we always did. I was talking about the local part here. But I think it's
OK to leave it as it is.
But I agree, setting anything here is irrelevant, but like
you said "It makes things easier to track and read"
OK.
JK_STATUS_FATAL_ERROR: The whole purpose of the fail_on_status
configuration item is to tell us something else. E.g. the backend
returns a specific status if the context is not available, or the app
could have a filter returning a special status. For me it does not make
sense to simply ignore what the admin configured using fail_on_status.
OK, this should probably be propagated to the global state as well if
configured explicitly. It will mean that all the sessions will be lost,
however.
OK. Yes, they'll be lost, but it is up to the admin to choose (or even
set via a filter) good status codes. But I think the most common case
is that the app is not deployed for some time.
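For reference, a workers.properties fragment along these lines (the worker name and status codes are just examples); 503 would cover the "app not deployed" case:

```
# Put worker "aw" into error state whenever the backend answers
# with one of these HTTP status codes, e.g. 503 while the webapp
# is undeployed. Values are illustrative.
worker.aw.fail_on_status=503,404
```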
JK_REPLY_TIMEOUT: Again I'm talking about the situation where we have
more timeouts than max_reply_timeouts allows. By default we do not have
any reply timeout set, so the admin instructed us to react on reply
timeouts.
JkMount /foo aw
JkMount /bar aw
Now, if /bar is slow and gets a timeout, that would mean that
/foo will be banned as well (although it might work perfectly).
But I see your point. Since it is explicitly configured, it should
be banned immediately. However this requires that admins behave
'smart' and deploy their applications to different instances and
use different workers.
Ideal would be for us to have per JkMount status in shared
memory. This is something for the future definitely.
Yeah, there's such a dependency between mounts and workers. But exactly
for this case we now have reply timeouts, which can be set per mount,
because here one size doesn't fit all. By default all reply timeouts
are off though.
So I think going into global error here is safe.
Note: it's only in the first half of the timeout handling, when we are
already above max_reply_timeouts, so we are not talking about isolated
timeouts.
(iv) JK_SERVER_ERROR
We only get this, if a memory allocation fails.
I'm fine with what you decided, although actually I see no reason why
allocation should work better or worse for one of the lb members.
We can leave it as is; to keep track of the differences it would be
easier to set the local state to OK too.
Well, again I disagree. Actually with the worker or prefork MPM the
child will simply die without setting the global error in shm.
If we set the global error here, it will again kill all the
sessions if one child had some memory issues.
I didn't suggest setting global to ERROR, I suggested setting local to
OK, because I don't get what a local ERROR helps here, and keeping both
equal is easier to understand.
Hmm, right, but setting local to OK probably won't help.
It might end up with some garbage sent to Tomcat, so it's
better to mark it as a local error.
OK. Agreed.
Regards,
Rainer
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org