DO NOT REPLY [Bug 47061] JDBCStore for saving sessions doesn't support datasource

2009-05-11 Thread bugzilla
https://issues.apache.org/bugzilla/show_bug.cgi?id=47061





--- Comment #3 from Steve Pugh  2009-05-11 02:46:00 PST ---
Created an attachment (id=23639)
 --> (https://issues.apache.org/bugzilla/attachment.cgi?id=23639)
JDBCStore.java patch 1




DO NOT REPLY [Bug 47061] JDBCStore for saving sessions doesn't support datasource

2009-05-11 Thread bugzilla
https://issues.apache.org/bugzilla/show_bug.cgi?id=47061


Steve Pugh  changed:

           What                |Removed |Added
 -----------------------------------------------
  Attachment #23521 is obsolete|0       |1




--- Comment #4 from Steve Pugh  2009-05-11 02:46:48 PST ---
Created an attachment (id=23640)
 --> (https://issues.apache.org/bugzilla/attachment.cgi?id=23640)
JDBCStore.java patch 2




DO NOT REPLY [Bug 47061] JDBCStore for saving sessions doesn't support datasource

2009-05-11 Thread bugzilla
https://issues.apache.org/bugzilla/show_bug.cgi?id=47061





--- Comment #5 from Steve Pugh  2009-05-11 02:57:37 PST ---

The patch does indeed move the remove() method. In the original code, the call
to remove() sits inside the block of save() that obtains the connection, but
remove() then obtains the connection again. That isn't much of a problem for
the "direct connection" case, because the connection is left open after
remove() has finished with it. But when a datasource is being used, remove()
returns the connection to the pool, and the connection is then no longer
available to the rest of the save() method.
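
To illustrate (a minimal sketch of the flow described above, inside a Store
subclass; the method signatures follow Tomcat's JDBCStore, but the bodies are
simplified and are not the actual code):

public void save(Session session) throws IOException {
    synchronized (this) {
        Connection conn = getConnection();   // obtains the connection
        if (conn == null)
            return;
        remove(session.getIdInternal());     // calls getConnection() again;
                                             // in the datasource case remove()
                                             // hands the connection back to
                                             // the pool when it finishes
        // ... the INSERT that follows still expects 'conn' to be usable,
        // which no longer holds once the connection has been returned
    }
}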

As you mentioned, this may be a separate issue, so I have posted it as a
separate patch (patch 1); if you decide to treat it separately, that will be
easier to do.

The other patch (patch 2) adds the code required for datasource support using
a JNDI lookup, as before. I have also made the new member variables private,
as suggested. This patch needs to be applied after patch 1.
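
For reference, the usual shape of a JNDI datasource lookup in a container
component looks like this (a sketch only; the attribute name 'dataSourceName'
and the "java:comp/env" prefix are illustrative of common Tomcat usage, not
necessarily what the patch does):

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// 'dataSourceName' would be a configurable attribute on the Store element,
// e.g. dataSourceName="jdbc/SessionStore" (illustrative value).
private String dataSourceName;

protected Connection getConnection() throws NamingException, SQLException {
    Context initCtx = new InitialContext();
    // Tomcat exposes configured resources under java:comp/env.
    DataSource ds = (DataSource) initCtx.lookup("java:comp/env/" + dataSourceName);
    // Each call hands out a pooled connection; the caller must close()
    // it to return it to the pool.
    return ds.getConnection();
}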




Proposed mod_jk logging patch

2009-05-11 Thread Jess Holle
I have noticed that mod_jk logging about a dead server is overly 
verbose in some circumstances.


If you use it to load balance over a number of servers and one is dead, 
you'll get several lines of error logging every time it retries the 
server (to see if it's alive yet).  This can get rather obnoxious when 
you're balancing over a number of ports which may or may not have a 
server listening at the time -- and when you're allowing retries of dead 
servers with any frequency.
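
For concreteness, a hypothetical workers.properties for this sort of setup
might look like the following (worker names and ports are illustrative only):

# Balance over a bank of ports, only some of which may have a server
# listening at any given moment.
worker.list=lb

worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=localhost
worker.node2.port=8010

worker.lb.type=lb
worker.lb.balance_workers=node1,node2
# Retry workers in the error state after 60 seconds.
worker.lb.recover_time=60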


The attached patch changes the level of such logging to debug for 
retries of a worker known to be in an error state, leaving the level at 
error for other cases.  The result is that you get error logging when a 
server is first determined to be unavailable -- and then are simply not 
bothered about this any longer.


Is there any chance of merging this patch into mod_jk?  The current 
level of log verbosity just isn't acceptable in cases where one is load 
balancing over a sparsely populated range of server ports, for instance.


--
Jess Holle

P.S. I already proposed a similar patch for mod_proxy_balancer/ajp.  
There appear to be additional issues there (having to do with load 
balancing getting "stuck" on a subset of the members), however, which 
are pushing us back to mod_jk anyway.


--- native/common/jk_ajp_common.c.orig  2009-04-07 12:56:25.926105900 -0500
+++ native/common/jk_ajp_common.c   2009-04-07 12:53:22.408773900 -0500
@@ -1392,7 +1392,8 @@
 static int ajp_send_request(jk_endpoint_t *e,
                             jk_ws_service_t *s,
                             jk_logger_t *l,
-                            ajp_endpoint_t * ae, ajp_operation_t * op)
+                            ajp_endpoint_t * ae, ajp_operation_t * op,
+                            int probing) /* Added 'probing', which is true when probing/retrying failed worker [PTC] */
 {
     int err_conn = 0;
     int err_cping = 0;
@@ -1504,6 +1505,14 @@
     /* Connect to the backend.
      */
     if (ajp_connect_to_endpoint(ae, l) != JK_TRUE) {
+        /* Log at debug level rather than error level when 'probing' [PTC]
+         */
+        if (probing)
+            jk_log(l, JK_LOG_DEBUG,
+                   "(%s) connecting to backend failed. Tomcat is probably not started "
+                   "or is listening on the wrong port (errno=%d)",
+                   ae->worker->name, ae->last_errno);
+        else
         jk_log(l, JK_LOG_ERROR,
                "(%s) connecting to backend failed. Tomcat is probably not started "
                "or is listening on the wrong port (errno=%d)",
@@ -2189,6 +2198,7 @@
     int rc = JK_UNSET;
     char *msg = "";
     int retry_interval;
+    int probing;  /* Added [PTC] */
 
     JK_TRACE_ENTER(l);
 
@@ -2286,6 +2296,10 @@
     aw->s->busy++;
     if (aw->s->state == JK_AJP_STATE_ERROR)
         aw->s->state = JK_AJP_STATE_PROBE;
+    /* Set 'probing' to true when aw->s->state == JK_AJP_STATE_PROBE;
+       indicates when worker is being probed/retried [PTC]
+     */
+    probing = (aw->s->state == JK_AJP_STATE_PROBE);
     if (aw->s->busy > aw->s->max_busy)
         aw->s->max_busy = aw->s->busy;
     retry_interval = p->worker->retry_interval;
@@ -2317,7 +2331,7 @@
     log_error = JK_TRUE;
     rc = JK_UNSET;
     msg = "";
-    err = ajp_send_request(e, s, l, p, op);
+    err = ajp_send_request(e, s, l, p, op, probing); /* pass 'probing' to ajp_send_request() [PTC] */
     e->recoverable = op->recoverable;
     if (err == JK_CLIENT_RD_ERROR) {
         *is_error = JK_HTTP_BAD_REQUEST;
@@ -2463,6 +2477,13 @@
         ajp_next_connection(p, l);
     }
     /* Log the error only once per failed request. */
+    /* Log at debug level rather than error level when 'probing' [PTC]
+     */
+    if (probing)
+        jk_log(l, JK_LOG_DEBUG,
+               "(%s) connecting to tomcat failed.",
+               aw->name);
+    else
     jk_log(l, JK_LOG_ERROR,
            "(%s) connecting to tomcat failed.",
            aw->name);
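
(A note on behavior, for clarity: with this change the probe-failure messages
are emitted at debug level, so they appear only when mod_jk's log level is
raised, e.g. in httpd.conf:)

# Only needed when diagnosing; at the default info level the demoted
# probe messages stay out of the log.
JkLogFile logs/mod_jk.log
JkLogLevel debug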



Re: Proposed mod_jk logging patch

2009-05-11 Thread Rainer Jung
Hi Jess,

On 11.05.2009 18:43, Jess Holle wrote:
> I have noticed that mod_jk logging about a dead server is overly
> verbose in some circumstances.
> 
> If you use it to load balance over a number of servers and one is dead,
> you'll get several lines of error logging every time it retries the
> server (to see if it's alive yet).  This can get rather obnoxious when
> you're balancing over a number of ports which may or may not have a
> server listening at the time -- and when you're allowing retries of dead
> servers with any frequency.
> 
> The attached patch changes the level of such logging to debug for
> retries of a worker known to be in an error state, leaving the level at
> error for other cases.  The result is that you get error logging when a
> server is first determined to be unavailable -- and then are simply not
> bothered about this any longer.
> 
> Is there any chance of merging this patch into mod_jk?  The current
> level of log verbosity just isn't acceptable in cases where one is load
> balancing over a sparsely populated range of server ports, for instance.
> 
> -- 
> Jess Holle
> 
> P.S. I already proposed a similar patch for mod_proxy_balancer/ajp. 
> There appear to be additional issues there (having to do with load
> balancing getting "stuck" on a subset of the members), however, which
> are pushing us back to mod_jk anyway.

I find it hard to decide between your case (we know the nodes are not
available and we don't need a reminder every minute; instead we want to
see the "real" errors) and the most common case (we didn't see the
single initial message a few days ago and so didn't realize our nodes
were partially down for a long time).

So let me first ask: why don't you "stop" the nodes you know are out of
service? If you let the balancer know what your admins know about the
state of the system, the balancer will no longer throw errors.
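
For example, assuming a standard workers.properties setup, a balanced worker
can be taken out of service statically as below, or at runtime through the
jkstatus status worker (the worker name is illustrative):

# Mark node2 as stopped: the balancer will neither use nor probe it,
# so no errors are logged for it.
worker.node2.activation=S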

Regards,

Rainer




Re: Proposed mod_jk logging patch

2009-05-11 Thread Jess Holle

Rainer Jung wrote:

> I find it hard to decide between your case (we know the nodes are not
> available and we don't need a reminder every minute; instead we want to
> see the "real" errors) and the most common case (we didn't see the
> single initial message a few days ago and so didn't realize our nodes
> were partially down for a long time).
> 
> So let me first ask: why don't you "stop" the nodes you know are out of
> service? If you let the balancer know what your admins know about the
> state of the system, the balancer will no longer throw errors.

This is really, in some respects, a mod_cluster sort of thing.  I have a 
bank of ports in which a smaller number of server processes (embedding 
Tomcat) will be dynamically started.  These will continue to reside on 
these ports unless/until they hang or die -- at which point a 
daemon/manager process will start other server processes in the port 
range, on whatever ports they can successfully bind to.


Having the daemon/manager process constantly tell mod_jk which ports to 
start/stop seems like undesirable complexity and tight coupling.  Ideally 
the servers shouldn't even know which Apache(s) are targeting them, which 
module is being used (mod_jk, mod_proxy_balancer/ajp, or possibly 
mod_cluster at some point), etc.


Perhaps there should be a configurable boolean as to whether this should 
be logged noisily or quietly, to meet both use cases?  [Note that I need 
IIS and SJWS support as well as Apache 2.2, so I will have to rely on the 
jk/tc connectors in those cases in any case, and will need to be able to 
configure any such setting in all of them.]
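
A hypothetical shape for such a setting is sketched below; the directive name
is invented here purely for illustration and does not exist in mod_jk:

# Hypothetical property (not an existing mod_jk directive): when true,
# log failed probes of a worker already in the error state at debug
# level instead of error level.
worker.lb.quiet_probing=true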


--
Jess Holle

