Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Henri Gomez

Something we should also check is the CPU load of the Tomcat instance.
Maybe it would also be useful to let users/admins add their own
counters to the load estimation.

For example, if some admins consider that we should base the
load balancing on HTTP requests or SQL accesses, and they have these
counters in their webapp, it would be useful to be able
to fetch them from Tomcat and send them back to the jk balancer.

It shouldn't be too hard, and it would be very welcome for many Tomcat sites.
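
For illustration, such a counter could already be published from the webapp
as a plain JMX MBean that a management channel (or a status webapp) could
read; the interface, class and object name below are made up for the example
and are not an existing Tomcat API:

import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.ObjectName;

interface LoadCounterMBean {
    long getHttpRequestCount();
    long getSqlAccessCount();
}

public class LoadCounter implements LoadCounterMBean {
    private final AtomicLong http = new AtomicLong();
    private final AtomicLong sql = new AtomicLong();

    public void httpRequest() { http.incrementAndGet(); }  // called by the webapp
    public void sqlAccess()   { sql.incrementAndGet(); }

    public long getHttpRequestCount() { return http.get(); }
    public long getSqlAccessCount()   { return sql.get(); }

    // register under a name the balancer side would have to agree on
    public static LoadCounter register() throws Exception {
        LoadCounter counter = new LoadCounter();
        ManagementFactory.getPlatformMBeanServer().registerMBean(
                counter, new ObjectName("myapp:type=LoadCounter"));
        return counter;
    }
}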

2007/7/4, Rainer Jung <[EMAIL PROTECTED]>:

Hi,

implementing a management communication channel between the lb and the
backend is on the roadmap for jk3. It is somewhat unlikely that this will
help in your situation, because while doing a GC the JVM will no longer
respond on the management channel. A traditional Mark-Sweep-Compact GC is
not distinguishable from the outside from a halt of the backend. Of
course we could think of a webapp trying to use the JMX info on memory
consumption to estimate GC activity in advance, but I doubt that this
would be a stable solution. There are notifications when GCs happen, but
at the moment I'm not sure whether such events exist before, or only
after, a GC.
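
For what it is worth, the Java 5 management API already exposes
usage-threshold notifications on the memory pools; they fire when a pool
crosses a configured fill level (so after the fact, not before a collection
as such). A rough, purely illustrative sketch of a webapp-side watcher:

import java.lang.management.*;
import javax.management.*;

public class OldGenWatcher {
    public static void install() {
        // arm a usage threshold on the heap pools that support it (on the
        // Sun JVM that is essentially the tenured/old generation)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.8)); // 80% full
                }
            }
        }
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener(new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                        .equals(n.getType())) {
                    // this is where the webapp could report "close to a
                    // major collection" to the balancer
                    System.out.println("heap pool over threshold: " + n.getMessage());
                }
            }
        }, null, null);
    }
}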

I think a first step (and a better solution) would be to use modern GC
algorithms like Concurrent Mark Sweep, which will most of the time
reduce the GC stop times to some 10s or 100s of milliseconds (depending
on heap size). CMS comes at a cost, a little more memory and a
little more CPU, but the dramatically decreased stop times are
worth it. It has also been quite robust for about 1-2 years now.
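
For reference, a typical way to switch a Sun 5.0/6 JVM to CMS and get GC
timing logged; the flag names are the standard HotSpot ones, the occupancy
fraction is only an illustrative value that has to be tuned per heap:

JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log"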

Other components will not like long GC pauses either, for instance
cluster replication. There you configure the longest pause you accept
for missing heartbeat packets before assuming a node is dead. Assuming a
node is dead because of GC pauses, and then having the node suddenly resume
work without having noticed itself that its outside world has changed, is a
very bad situation too.

What we plan as a first step for jk3 is putting mod_jk on top of
the Apache APR libraries. Then we can relatively easily use our own
management threads to monitor the backend status and influence the
balancing decisions. As long as we do everything on top of the request
handling threads we can't do complex things in a stable way.

Getting jk3 out of the door will take some more time (maybe 6 to 12
months for a release). People willing to help are welcome.

Concerning the SLAs: it always makes sense to put a percentage limit on
the maximum response times and error rates. A "100% below some limit"
clause will always be too expensive. But of course, if you can't reduce
GC times and the GC runs too often, there will be no acceptable
percentage for long-running requests.

Thank you for sharing your experiences at Langen with us!

Regards,

Rainer

Yefym Dmukh wrote:
> Hi all,
> sorry for the stress, but it seems it is time to come back to the
> discussion related to load balancing for JVMs (Tomcat).
>
> Prehistory:
> Recently we ran benchmark and smoke tests of our product at the Sun
> high tech centre in Langen (Germany).
>
> Apache 2.2.4 was used as the webserver, 10x Tomcat 5.5.25 as the
> containers, and the JK connector 1.2.23 with the busyness algorithm as
> the load balancer.
>
> Under high load a strange behaviour was observed: some Tomcat workers
> temporarily got a non-proportional load, often 10 times higher than
> the others, for relatively long periods. As a result the response
> times, which usually stay under 500 ms, went up to 20+ seconds, which
> in turn made the overall test results almost two times worse than
> estimated.
>
> At the beginning we were quite confused, because we were sure that it
> was not a problem of JVM configuration and supposed that the reason
> was in the LB logic of mod_jk, and both assumptions were right.
>
> Actually the following was happening: the LB sends requests and keeps
> the session sticky, continuously sending the upcoming requests to the
> same cluster node. At a certain point in time the JVM started a major
> garbage collection (full GC) and spent the 20 seconds mentioned above.
> At the same time jk continued to send new requests, plus the
> sticky-to-node requests, which led us to the situation where that one
> node broke the SLA on response times.
>
> I've been searching the web for a while for a load-balancer
> implementation that takes GC activity into account and reduces the
> load accordingly when a JVM is close to a major collection, but found
> nothing.
>
> Once again, load balancing of JVMs under load is really an issue for
> production, and with optimally distributed load you are not only able
> to lower the costs, but also able to prevent bad customer experience,
> not to mention broken SLAs.
>
> Feature request:
>
> All lb algorithms should be extended with a bidirectional
> connection to the jvm:
>  Jvm -> Lb: old gen size and the current occupancy
>  Lb -> Jvm: pr

Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread jean-frederic clere

Henri Gomez wrote:

Something we should also check is the CPU load of the Tomcat instance.
Maybe it would also be useful to let users/admins add their own
counters to the load estimation.


If you want to add this to Tomcat, remember that this stuff needs a JNI
module to collect information from the OS/hardware, and that is OS-dependent code.


Cheers

Jean-Frederic



For example, if some admins consider that we should base the
load balancing on HTTP requests or SQL accesses, and they have these
counters in their webapp, it would be useful to be able
to fetch them from Tomcat and send them back to the jk balancer.

It shouldn't be too hard, and it would be very welcome for many Tomcat sites.

2007/7/4, Rainer Jung <[EMAIL PROTECTED]>:

Hi,

implementing a management communication channel between the lb and the
backend is on the roadmap for jk3. It is somewhat unlikely that this will
help in your situation, because while doing a GC the JVM will no longer
respond on the management channel. A traditional Mark-Sweep-Compact GC is
not distinguishable from the outside from a halt of the backend. Of
course we could think of a webapp trying to use the JMX info on memory
consumption to estimate GC activity in advance, but I doubt that this
would be a stable solution. There are notifications when GCs happen, but
at the moment I'm not sure whether such events exist before, or only
after, a GC.

I think a first step (and a better solution) would be to use modern GC
algorithms like Concurrent Mark Sweep, which will most of the time
reduce the GC stop times to some 10s or 100s of milliseconds (depending
on heap size). CMS comes at a cost, a little more memory and a
little more CPU, but the dramatically decreased stop times are
worth it. It has also been quite robust for about 1-2 years now.

Other components will not like long GC pauses either, for instance
cluster replication. There you configure the longest pause you accept
for missing heartbeat packets before assuming a node is dead. Assuming a
node is dead because of GC pauses, and then having the node suddenly resume
work without having noticed itself that its outside world has changed, is a
very bad situation too.

What we plan as a first step for jk3 is putting mod_jk on top of
the Apache APR libraries. Then we can relatively easily use our own
management threads to monitor the backend status and influence the
balancing decisions. As long as we do everything on top of the request
handling threads we can't do complex things in a stable way.

Getting jk3 out of the door will take some more time (maybe 6 to 12
months for a release). People willing to help are welcome.

Concerning the SLAs: it always makes sense to put a percentage limit on
the maximum response times and error rates. A "100% below some limit"
clause will always be too expensive. But of course, if you can't reduce
GC times and the GC runs too often, there will be no acceptable
percentage for long-running requests.

Thank you for sharing your experiences at Langen with us!

Regards,

Rainer

Yefym Dmukh wrote:
> Hi all,
> sorry for the stress, but it seems it is time to come back to the
> discussion related to load balancing for JVMs (Tomcat).
>
> Prehistory:
> Recently we ran benchmark and smoke tests of our product at the Sun
> high tech centre in Langen (Germany).
>
> Apache 2.2.4 was used as the webserver, 10x Tomcat 5.5.25 as the
> containers, and the JK connector 1.2.23 with the busyness algorithm as
> the load balancer.
>
> Under high load a strange behaviour was observed: some Tomcat workers
> temporarily got a non-proportional load, often 10 times higher than
> the others, for relatively long periods. As a result the response
> times, which usually stay under 500 ms, went up to 20+ seconds, which
> in turn made the overall test results almost two times worse than
> estimated.
>
> At the beginning we were quite confused, because we were sure that it
> was not a problem of JVM configuration and supposed that the reason
> was in the LB logic of mod_jk, and both assumptions were right.
>
> Actually the following was happening: the LB sends requests and keeps
> the session sticky, continuously sending the upcoming requests to the
> same cluster node. At a certain point in time the JVM started a major
> garbage collection (full GC) and spent the 20 seconds mentioned above.
> At the same time jk continued to send new requests, plus the
> sticky-to-node requests, which led us to the situation where that one
> node broke the SLA on response times.
>
> I've been searching the web for a while for a load-balancer
> implementation that takes GC activity into account and reduces the
> load accordingly when a JVM is close to a major collection, but found
> nothing.
>
> Once again, load balancing of JVMs under load is really an issue for
> production, and with optimally distributed load you are not only able
> to lower the costs, but also able to prevent bad customer experience,
> not to mention bro

svn commit: r553410 - /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/ServerCookie.java

2007-07-05 Thread jfclere
Author: jfclere
Date: Thu Jul  5 01:13:06 2007
New Revision: 553410

URL: http://svn.apache.org/viewvc?view=rev&rev=553410
Log:
Escape the " in the cookie value.

Modified:
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/ServerCookie.java

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/ServerCookie.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/ServerCookie.java?view=diff&rev=553410&r1=553409&r2=553410
==
--- tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/ServerCookie.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/ServerCookie.java Thu 
Jul  5 01:13:06 2007
@@ -130,6 +130,7 @@
 //
 // private static final String tspecials = "()<>@,;:\\\"/[]?={} \t";
 private static final String tspecials = ",; ";
+private static final String tspecials2 = ",; \"";
 
 /*
  * Tests a string and returns true if the string counts as a
@@ -154,6 +155,19 @@
return true;
 }
 
+public static boolean isToken2(String value) {
+   if( value==null) return true;
+   int len = value.length();
+
+   for (int i = 0; i < len; i++) {
+   char c = value.charAt(i);
+
+   if (c < 0x20 || c >= 0x7f || tspecials2.indexOf(c) != -1)
+   return false;
+   }
+   return true;
+}
+
 public static boolean checkName( String name ) {
if (!isToken(name)
|| name.equalsIgnoreCase("Comment") // rfc2019
@@ -213,7 +227,7 @@
 // this part is the same for all cookies
buf.append( name );
 buf.append("=");
-maybeQuote(version, buf, value);
+maybeQuote2(version, buf, value);
 
// XXX Netscape cookie: "; "
// add version 1 specific information
@@ -283,6 +297,17 @@
 buf.append('"');
 }
 }
+public static void maybeQuote2 (int version, StringBuffer buf,
+String value) {
+// special case - a \n or \r  shouldn't happen in any case
+if (isToken2(value)) {
+buf.append(value);
+} else {
+buf.append('"');
+buf.append(escapeDoubleQuotes(value));
+buf.append('"');
+}
+}
 
 // log
 static final int dbg=1;
@@ -306,12 +331,14 @@
 }
 
 StringBuffer b = new StringBuffer();
+char p = s.charAt(0);
 for (int i = 0; i < s.length(); i++) {
 char c = s.charAt(i);
-if (c == '"')
+if (c == '"' && p != '\\')
 b.append('\\').append('"');
 else
 b.append(c);
+p = c;
 }
 
 return b.toString();
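
A quick illustration of the quoting path added above (calling the methods
from this diff; the version argument is not used on this path, so 0 is fine
here):

StringBuffer buf = new StringBuffer();
ServerCookie.maybeQuote2(0, buf, "a\"b");   // value contains a double quote
// isToken2() rejects the value, so it is wrapped in quotes and the inner
// quote escaped: buf now contains "a\"b" (including the surrounding quotes)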



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Henri Gomez

Well, many beans are already available:

java.lang.Threading
java.lang.MemoryPool

I can also see some beans related to CPU load, but only when using a Sun
JVM (not on IBM).
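
A small sketch of reading those beans in-process with the standard
java.lang.management API; the ProcessCpuTime attribute is the Sun-specific
part and will simply be missing on other vendors' JVMs:

import java.lang.management.*;
import javax.management.*;

public class BeanProbe {
    public static void main(String[] args) throws Exception {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + threads.getThreadCount());

        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.println(pool.getName() + ": " + u.getUsed() + "/" + u.getMax());
        }

        // Sun JVM only: cumulative CPU time of this process in nanoseconds
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName os = new ObjectName("java.lang:type=OperatingSystem");
        try {
            System.out.println("process cpu time: "
                    + server.getAttribute(os, "ProcessCpuTime"));
        } catch (Exception e) {
            System.out.println("ProcessCpuTime not available on this JVM");
        }
    }
}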

2007/7/5, jean-frederic clere <[EMAIL PROTECTED]>:

Henri Gomez wrote:
> Something we should also check is the CPU load of the Tomcat instance.
> Maybe it would also be useful to let users/admins add their own
> counters to the load estimation.

If you want to add this to Tomcat, remember that this stuff needs a JNI
module to collect information from the OS/hardware, and that is OS-dependent code.

Cheers

Jean-Frederic

>
> For example, if some admins consider that we should base the
> load balancing on HTTP requests or SQL accesses, and they have these
> counters in their webapp, it would be useful to be able
> to fetch them from Tomcat and send them back to the jk balancer.
>
> It shouldn't be too hard, and it would be very welcome for many Tomcat sites.
>
> 2007/7/4, Rainer Jung <[EMAIL PROTECTED]>:
>> Hi,
>>
>> implementing a management communication channel between the lb and the
>> backend is on the roadmap for jk3. It is somewhat unlikely that this
>> will help in your situation, because while doing a GC the JVM will no
>> longer respond on the management channel. A traditional
>> Mark-Sweep-Compact GC is not distinguishable from the outside from a
>> halt of the backend. Of course we could think of a webapp trying to
>> use the JMX info on memory consumption to estimate GC activity in
>> advance, but I doubt that this would be a stable solution. There are
>> notifications when GCs happen, but at the moment I'm not sure whether
>> such events exist before, or only after, a GC.
>>
>> I think a first step (and a better solution) would be to use modern GC
>> algorithms like Concurrent Mark Sweep, which will most of the time
>> reduce the GC stop times to some 10s or 100s of milliseconds
>> (depending on heap size). CMS comes at a cost, a little more memory
>> and a little more CPU, but the dramatically decreased stop times are
>> worth it. It has also been quite robust for about 1-2 years now.
>>
>> Other components will not like long GC pauses either, for instance
>> cluster replication. There you configure the longest pause you accept
>> for missing heartbeat packets before assuming a node is dead. Assuming
>> a node is dead because of GC pauses, and then having the node suddenly
>> resume work without having noticed itself that its outside world has
>> changed, is a very bad situation too.
>>
>> What we plan as a first step for jk3 is putting mod_jk on top of the
>> Apache APR libraries. Then we can relatively easily use our own
>> management threads to monitor the backend status and influence the
>> balancing decisions. As long as we do everything on top of the request
>> handling threads we can't do complex things in a stable way.
>>
>> Getting jk3 out of the door will take some more time (maybe 6 to 12
>> months for a release). People willing to help are welcome.
>>
>> Concerning the SLAs: it always makes sense to put a percentage limit
>> on the maximum response times and error rates. A "100% below some
>> limit" clause will always be too expensive. But of course, if you
>> can't reduce GC times and the GC runs too often, there will be no
>> acceptable percentage for long-running requests.
>>
>> Thank you for sharing your experiences at Langen with us!
>>
>> Regards,
>>
>> Rainer
>>
>> Yefym Dmukh wrote:
>> > Hi all,
>> > sorry for the stress, but it seems it is time to come back to the
>> > discussion related to load balancing for JVMs (Tomcat).
>> >
>> > Prehistory:
>> > Recently we ran benchmark and smoke tests of our product at the Sun
>> > high tech centre in Langen (Germany).
>> >
>> > Apache 2.2.4 was used as the webserver, 10x Tomcat 5.5.25 as the
>> > containers, and the JK connector 1.2.23 with the busyness algorithm
>> > as the load balancer.
>> >
>> > Under high load a strange behaviour was observed: some Tomcat
>> > workers temporarily got a non-proportional load, often 10 times
>> > higher than the others, for relatively long periods. As a result
>> > the response times, which usually stay under 500 ms, went up to 20+
>> > seconds, which in turn made the overall test results almost two
>> > times worse than estimated.
>> >
>> > At the beginning we were quite confused, because we were sure that
>> > it was not a problem of JVM configuration and supposed that the
>> > reason was in the LB logic of mod_jk, and both assumptions were
>> > right.
>> >
>> > Actually the following was happening: the LB sends requests and
>> > keeps the session sticky, continuously sending the upcoming
>> > requests to the same cluster node. At a certain point in time the
>> > JVM started a major garbage collection (full GC) and spent the 20
>> > seconds mentioned above. At the same time jk continued to send new
>> > requests, plus the sticky-to-node requests, which led us to the
>> > situation where that one node broke the SLA on
>> > response

Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Yefym Dmukh
Hi Rainer,
it seems we still have an issue in the JVM config: we had not noticed the
concurrent mode GC failures that led to the Tomcat overload and bad responses.

14896.148: [Full GC 14896.148: [CMS (concurrent mode failure): 
1102956K->326641K(2359296K), 4.0684463 secs] 1280549K->326641K(3067136K), 
[CMS Perm : 131071K->62120K(131072K)], 4.0690589 secs]

15346.161: [Full GC 15346.161: [CMS (concurrent mode failure): 
1546160K->324956K(2359296K), 3.4321099 secs] 1637284K->324956K(3067136K), 
[CMS Perm : 131071K->6K(131072K)], 3.4327402 secs]



Unfortunately the concurrent collectors in Java themselves need 30%-50% more
memory, so if the JVM still allocates the objects in the tenured generation
and there is not enough memory for the concurrent GC work, it falls back to a
stop-the-world collection. Anyway, these events are fired in the JVM, and it
would be really, really nice if the load balancer could be smart enough to
react to them, e.g. by redirecting the load.

Agreed that this problem is not easy to solve and maybe requires some
features from the JVM side that are not implemented yet.


Regards,
Y.


Yefym Dmukh
InterComponentWare AG
R&D Personal Health Record
Industriestraße 41
69190 Walldorf (Baden)
Germany 

Phone:  +49 6227 385 122
Fax:+49 6227 385 588
Mail:   [EMAIL PROTECTED]
--
** www.icw.de** 
**   www.LifeSensor.com **

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * 
IHE-compliant and highly scalable:

The ICW Master Patient Index successfully took part in the IHE Connectathon
in Berlin and, at the HP European Performance and Benchmark Center, found
patient data in a database of 100 million records within fractions of a
second.
More information on ICW's innovative networking solutions
can be found at http://www.icw.de
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

InterComponentWare AG: 
Executive Board: Peter Reuschel (Chairman), Norbert Olsacher, Dr. med. Frank Warda /
Chairman of the Supervisory Board: Michael Kranich
Registered office: 69190 Walldorf, Industriestr. 41 / AG Mannheim HRB 351761 /
VAT ID: DE 198388516



Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Mladen Turk

Yefym Dmukh wrote:


Actually the following was happening: the LB sends requests and keeps the
session sticky, continuously sending the upcoming requests to the same
cluster node. At a certain point in time the JVM started a major
garbage collection (full GC) and spent the 20 seconds mentioned above. At
the same time jk continued to send new requests, plus the sticky-to-node
requests, which led us to the situation where that one node broke the SLA
on response times.



You have an oxymoron here. With session stickiness you willingly
tear down the load balancer's correctness because you don't want, or
can't have, session replication.
Even with the smartest LB, and even with two-way communication,
it's only possible to make that work in a non-sticky-session
topology. Otherwise you would lose sessions on each GC cycle.
However, like Rainer said, the solution is to choose
the appropriate GC strategy for a web-based application.

Regards,
Mladen.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Mladen Turk

Yefym Dmukh wrote:

You have oxymoron here. With session stickiness you are willingly
tear down the load balancer correctness because you don't wish/can
have session replication.


Generally you are right, but the ideal world is not the reality:
we use the Apache MyFaces implementation of JSF, where the core session
class size is about 500KB; the compression of state kills the CPU, and
the size kills the session replication/failover approach. So we have
what we have, and we are trying to get the best out of it.



For 10 nodes the replication cost would be too high even for a smaller
session class, but you can at least use the domain clustering model
and slice that into 5x2 nodes.
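
A minimal workers.properties sketch of that 5x2 slicing (worker names, hosts
and the balancing method are only examples; directive names as in mod_jk 1.2):

worker.list=loadbalancer

worker.node1.type=ajp13
worker.node1.host=host1
worker.node1.port=8009
worker.node1.domain=dom1

worker.node2.type=ajp13
worker.node2.host=host2
worker.node2.port=8009
worker.node2.domain=dom1

# node3/node4 -> dom2, node5/node6 -> dom3, ... node9/node10 -> dom5

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2,node3,node4,node5,node6,node7,node8,node9,node10
worker.loadbalancer.sticky_session=true
worker.loadbalancer.method=B
# method=B selects the busyness balancing algorithm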

BTW, what about the bidirectional jvm-lb connection and stop-the-world
GC managed by the lb, as a keep-it-simple approach?



This won't help much. The sticky session requests must be served
by the same node (or group of nodes if using domain clustering),
and your requests will still be delayed by the JVM instance's GC cycle
(it has to happen sometime, and you cannot depend on request
void intervals).

Of course, since the LB updates its statistics after the request, and if
the request is delayed, right now we cannot react proactively to queued
requests, so new requests can be delayed as well instead of being passed
to a node that is not within a GC cycle (during the GC cycle itself).

To solve the latter problem we don't need the two-way communication,
because it can be solved by the LB by taking into account the number
of queued requests as well, but we need it for different things.

However, all this is a major technology upgrade and it's part of the JK3
roadmap, because it requires both protocol and substantial code changes.


Regards,
Mladen.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Yefym Dmukh
>You have an oxymoron here. With session stickiness you willingly
>tear down the load balancer's correctness because you don't want, or
>can't have, session replication.
>Even with the smartest LB, and even with two-way communication,
>it's only possible to make that work in a non-sticky-session
>topology. Otherwise you would lose sessions on each GC cycle.
>However, like Rainer said, the solution is to choose
>the appropriate GC strategy for a web-based application.

Generally you are right, but the ideal world is not the reality:
we use the Apache MyFaces implementation of JSF, where the core session
class size is about 500KB; the compression of state kills the CPU, and
the size kills the session replication/failover approach. So we have
what we have, and we are trying to get the best out of it.

BTW, what about the bidirectional jvm-lb connection and stop-the-world
GC managed by the lb, as a keep-it-simple approach?






Mladen Turk <[EMAIL PROTECTED]> 
05.07.2007 13:19
Please respond to
"Tomcat Developers List" 


To
Tomcat Developers List 
cc

Subject
Re: Feature request /Discussion: JK loadbalancer improvements for high 
load






Yefym Dmukh wrote:
> 
> Actually the following was happening: the LB sends requests and keeps
> the session sticky, continuously sending the upcoming requests to the
> same cluster node. At a certain point in time the JVM started a major
> garbage collection (full GC) and spent the 20 seconds mentioned above.
> At the same time jk continued to send new requests, plus the
> sticky-to-node requests, which led us to the situation where that one
> node broke the SLA on response times.
> 

You have an oxymoron here. With session stickiness you willingly
tear down the load balancer's correctness because you don't want, or
can't have, session replication.
Even with the smartest LB, and even with two-way communication,
it's only possible to make that work in a non-sticky-session
topology. Otherwise you would lose sessions on each GC cycle.
However, like Rainer said, the solution is to choose
the appropriate GC strategy for a web-based application.

Regards,
Mladen.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




svn commit: r553505 - /tomcat/container/branches/tc5.0.x/catalina/src/share/org/apache/naming/resources/ResourceCache.java

2007-07-05 Thread fhanik
Author: fhanik
Date: Thu Jul  5 06:45:42 2007
New Revision: 553505

URL: http://svn.apache.org/viewvc?view=rev&rev=553505
Log:
Backport fix for windows problem with Math.floor

Modified:

tomcat/container/branches/tc5.0.x/catalina/src/share/org/apache/naming/resources/ResourceCache.java

Modified: 
tomcat/container/branches/tc5.0.x/catalina/src/share/org/apache/naming/resources/ResourceCache.java
URL: 
http://svn.apache.org/viewvc/tomcat/container/branches/tc5.0.x/catalina/src/share/org/apache/naming/resources/ResourceCache.java?view=diff&rev=553505&r1=553504&r2=553505
==
--- 
tomcat/container/branches/tc5.0.x/catalina/src/share/org/apache/naming/resources/ResourceCache.java
 (original)
+++ 
tomcat/container/branches/tc5.0.x/catalina/src/share/org/apache/naming/resources/ResourceCache.java
 Thu Jul  5 06:45:42 2007
@@ -1,42 +1,50 @@
 /*
  * Copyright 1999,2004 The Apache Software Foundation.
- * 
+ *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
- * 
+ *
  *  http://www.apache.org/licenses/LICENSE-2.0
- * 
+ *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
- */ 
+ */
 
 package org.apache.naming.resources;
 
 import java.util.HashMap;
+import java.util.Random;
+
 
 /**
  * Implements a special purpose cache.
- * 
+ *
  * @author mailto:[EMAIL PROTECTED]">Remy Maucherat
  * @version $Revision$
  */
 public class ResourceCache {
-
-
+
+
 // --- Constructors
-
-
+
+
 public ResourceCache() {
 }
-
-
+
+
 // - Instance Variables
-
-
+
+
+/**
+ * Random generator used to determine elements to free.
+ */
+protected Random random = new Random();
+
+
 /**
  * Cache.
  * Path -> Cache entry.
@@ -98,7 +106,7 @@
 
 /**
  * Return the access count.
- * Note: Update is not synced, so the number may not be completely 
+ * Note: Update is not synced, so the number may not be completely
  * accurate.
  */
 public long getAccessCount() {
@@ -148,7 +156,7 @@
 
 /**
  * Return the number of cache hits.
- * Note: Update is not synced, so the number may not be completely 
+ * Note: Update is not synced, so the number may not be completely
  * accurate.
  */
 public long getHitsCount() {
@@ -227,11 +235,9 @@
 // Randomly select an entry in the array
 int entryPos = -1;
 boolean unique = false;
-int count = 0;
 while (!unique) {
 unique = true;
-entryPos = (int) Math.floor(Math.random() 
-* (cache.length - 1));
+entryPos = random.nextInt(cache.length) ;
 // Guarantee uniqueness
 for (int i = 0; i < entriesFound; i++) {
 if (toRemove[i] == entryPos) {
@@ -239,7 +245,7 @@
 }
 }
 }
-long entryAccessRatio = 
+long entryAccessRatio =
 ((cache[entryPos].accessCount * 100) / accessCount);
 if (entryAccessRatio < desiredEntryAccessRatio) {
 toRemove[entriesFound] = entryPos;
@@ -289,6 +295,7 @@
 if ((pos != -1) && (name.equals(currentCache[pos].name))) {
 cacheEntry = currentCache[pos];
 }
+
 if (cacheEntry == null) {
 try {
 cacheEntry = (CacheEntry) notFoundCache.get(name);
@@ -306,11 +313,14 @@
 
 public void load(CacheEntry entry) {
 if (entry.exists) {
-insertCache(entry);
+if (insertCache(entry)) {
+cacheSize += entry.size;
+}
 } else {
+int sizeIncrement = (notFoundCache.get(entry.name) == null) ? 1 : 
0;
 notFoundCache.put(entry.name, entry);
+cacheSize += sizeIncrement;
 }
-cacheSize += entry.size;
 }
 
 
@@ -401,13 +411,12 @@
 if ((pos != -1) && (name.equals(oldCache[pos].name))) {
 CacheEntry[] newCache = new CacheEntry[cache.length - 1];
 System.arraycopy(oldCache, 0, newCache, 0, pos);
-System.arraycopy(oldCache, pos + 1, newCache, pos, 
+System.arraycopy(oldCache, pos + 1, newCache, pos,
  oldCache.length - pos - 1);
 cache =
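
For context on the hunk above: the old expression
(int) Math.floor(Math.random() * (cache.length - 1)) can never pick the last
slot of the array and goes through double arithmetic, while Random.nextInt(n)
returns an int drawn uniformly from [0, n):

java.util.Random random = new java.util.Random();
int n = 10;
int pos = random.nextInt(n);   // always 0 <= pos < n, every index reachable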

Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Yefym Dmukh
Mladen wrote:
>This won't help much. The sticky session requests must be served
>by the same node (group of nodes if using domain clustering),
>and your requests will still be delayed by the JVM instance GC cycle
>(It has to happen sometime, and you cannot depend on request
>void intervals)

The main point was to use the stop-the-world collectors PROACTIVELY
managed by the lb in order to:
- eliminate the memory and CPU usage overhead of the ConcurrentXXX and
  ParallelXXX collectors (i.e. up to 50% for memory),
- prevent the overloading of the single node that falls into a GC cycle,
- reduce the complexity of JVM tuning.

With an algorithm like the following (see the sketch below):
when the heap is almost full -> do not send requests to the node
any more and load the other nodes (wait some time to give the current
workers the chance to finish their jobs), then advise a stop-the-world
collection. Heap free again? Resume the load.
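
In (hypothetical) code on the balancer side, assuming the backend reported
old-gen usage over some management channel; none of these names exist in
mod_jk, this is only a sketch of the proposed decision:

public class GcAwareBalancerSketch {
    static final double HIGH_WATER = 0.90;  // "heap is almost full"
    static final double LOW_WATER  = 0.60;  // "heap free again"

    private boolean draining = false;       // node taken out of rotation for GC

    /** Decide whether new requests may still be routed to this node. */
    public synchronized boolean acceptsNewLoad(long oldGenUsed, long oldGenMax) {
        double fill = (double) oldGenUsed / oldGenMax;
        if (!draining && fill >= HIGH_WATER) {
            // stop sending new load; after a grace period for the running
            // workers, advise the node to run its stop-the-world collection
            draining = true;
        } else if (draining && fill <= LOW_WATER) {
            draining = false;               // collection done, resume the load
        }
        return !draining;
    }
}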


Certainly this makes sense (if at all) only for setups with many small
JVMs that have little CPU (or CPU limited to the current needs) and a huge
memory size, for stateless applications or applications with a short
session lifecycle.

It might also make sense for domained installations; during GC
cycles the other domain members would take over the jobs.
All this makes sense only if the load balancer manages not only
the load but also the GC cycles: a kind of ideal JVM-level load balancer
for high load.

Regards,
Y.

Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Rainer Jung
OK, this could make sense, i.e. heap nearly full on node1, fail over to
another cluster member node2 (replication assumed) and trigger a GC via
System.gc() on node1. Interesting idea ... (for JK3 and beyond).
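
That trigger already exists in the platform MBeans, assuming the backend JVM
exposes a remote JMX connector (the service URL below is the usual RMI form
and only an example):

import javax.management.*;
import javax.management.remote.*;

public class RemoteGcTrigger {
    public static void triggerGc(String host, int port) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // equivalent to calling System.gc() inside the target JVM
            mbsc.invoke(new ObjectName("java.lang:type=Memory"), "gc", null, null);
        } finally {
            connector.close();
        }
    }
}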


Yefym Dmukh wrote:

Mladen wrote:

This won't help much. The sticky session requests must be served
by the same node (group of nodes if using domain clustering),
and your requests will still be delayed by the JVM instance GC cycle
(It has to happen sometime, and you cannot depend on request
void intervals)


The main point was to use the stop-the-world collectors PROACTIVELY
managed by the lb in order to:
- eliminate the memory and CPU usage overhead of the ConcurrentXXX and
  ParallelXXX collectors (i.e. up to 50% for memory),
- prevent the overloading of the single node that falls into a GC cycle,
- reduce the complexity of JVM tuning.


With an algorithm like:
when the heap is almost full -> do not send requests to the node
any more and load the other nodes (wait some time to give the current
workers the chance to finish their jobs), then advise a stop-the-world
collection. Heap free again? Resume the load.



Certainly this makes sense (if at all) only for setups with many small
JVMs that have little CPU (or CPU limited to the current needs) and a huge
memory size, for stateless applications or applications with a short
session lifecycle.


It might also make sense for domained installations; during GC
cycles the other domain members would take over the jobs.
All this makes sense only if the load balancer manages not only
the load but also the GC cycles: a kind of ideal JVM-level load balancer
for high load.


Regards,
Y.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Filip Hanik - Dev Lists


Question: why would you need GC, heap or CPU stats from the Tomcat
machine at all?

In most cases, none of these stats can predict response times.
The best way would be to simply record a history of response times, and
then mod_jk can have algorithms (maybe even pluggable) on how to use
those stats for future requests.
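
As an illustration of that idea (written in Java for readability; mod_jk
itself is C and none of these names exist there):

public class ResponseTimeHistory {
    private final double alpha;       // smoothing factor, e.g. 0.2
    private double ewmaMillis = -1;   // exponentially weighted moving average

    public ResponseTimeHistory(double alpha) {
        this.alpha = alpha;
    }

    /** Record the response time of one completed request. */
    public synchronized void record(long millis) {
        ewmaMillis = (ewmaMillis < 0)
                ? millis
                : alpha * millis + (1 - alpha) * ewmaMillis;
    }

    /** Lower score = more attractive worker for the next request. */
    public synchronized double score() {
        return (ewmaMillis < 0) ? 0.0 : ewmaMillis;
    }
}

A balancer would then route to the worker with the lowest score, possibly
combined with the current busyness count.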


Filip

Yefym Dmukh wrote:

Mladen wrote:
  

This won't help much. The sticky session requests must be served
by the same node (group of nodes if using domain clustering),
and your requests will still be delayed by the JVM instance GC cycle
(It has to happen sometime, and you cannot depend on request
void intervals)



The main point was to use the stop-the-world collectors PROACTIVELY
managed by the lb in order to:
- eliminate the memory and CPU usage overhead of the ConcurrentXXX and
  ParallelXXX collectors (i.e. up to 50% for memory),
- prevent the overloading of the single node that falls into a GC cycle,
- reduce the complexity of JVM tuning.


With an algorithm like:
when the heap is almost full -> do not send requests to the node
any more and load the other nodes (wait some time to give the current
workers the chance to finish their jobs), then advise a stop-the-world
collection. Heap free again? Resume the load.



Certainly this makes sense (if at all) only for setups with many small
JVMs that have little CPU (or CPU limited to the current needs) and a huge
memory size, for stateless applications or applications with a short
session lifecycle.


It might also make sense for domained installations; during GC
cycles the other domain members would take over the jobs.
All this makes sense only if the load balancer manages not only
the load but also the GC cycles: a kind of ideal JVM-level load balancer
for high load.


Regards,
Y.
  



  



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Feature request /Discussion: JK loadbalancer improvements for high load

2007-07-05 Thread Yefym Dmukh
>Question: why would you need GC, heap or CPU stats from the Tomcat
>machine at all?
>In most cases, none of these stats can predict response times.
>The best way would be to simply record a history of response times, and
>then mod_jk can have algorithms (maybe even pluggable) on how to use
>those stats for future requests.

For me the busyness algorithm in mod_jk is almost perfect, both as an
idea and as an implementation;
the only weak point is that it balances the JVMs from the outside, without
knowing what happens inside them. The point is that a JVM has such an
artifact as GC, which under some circumstances amounts to a denial of
service for a while :) and the load balancer notices it too late; even
worse, for various reasons (stickiness or internal logic with job queues)
the lb doesn't react accordingly at the right time.

All other issues like threading and CPU are not from me. In my opinion
they are useless. I was talking only about JVM load balancing that reacts
appropriately to JVM-specific things.



svn commit: r553628 - in /tomcat/sandbox/bayeux/java/org/apache/comet: bayeux/ json/

2007-07-05 Thread fhanik
Author: fhanik
Date: Thu Jul  5 13:50:13 2007
New Revision: 553628

URL: http://svn.apache.org/viewvc?view=rev&rev=553628
Log:
Added license headers
Implemented a streaming parser for JSON reading, so that we can read data 
directly from the reader without having to parse it twice

Added:
tomcat/sandbox/bayeux/java/org/apache/comet/json/
tomcat/sandbox/bayeux/java/org/apache/comet/json/ReaderCharIterator.java
Modified:
tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxChannel.java
tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxClient.java
tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxFilter.java
tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxListener.java
tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxPolicy.java
tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxServlet.java
tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/TomcatBayeux.java

Modified: tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxChannel.java
URL: 
http://svn.apache.org/viewvc/tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxChannel.java?view=diff&rev=553628&r1=553627&r2=553628
==
--- tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxChannel.java 
(original)
+++ tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxChannel.java Thu 
Jul  5 13:50:13 2007
@@ -1,8 +1,27 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
 package org.apache.comet.bayeux;
 
 import dojox.cometd.Channel;
 import dojox.cometd.Client;
-
+/**
+ * @author Filip Hanik
+ * @version 1.0
+ */
 public class BayeuxChannel implements Channel {
 public BayeuxChannel() {
 }

Modified: tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxClient.java
URL: 
http://svn.apache.org/viewvc/tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxClient.java?view=diff&rev=553628&r1=553627&r2=553628
==
--- tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxClient.java 
(original)
+++ tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxClient.java Thu 
Jul  5 13:50:13 2007
@@ -1,3 +1,19 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
 package org.apache.comet.bayeux;
 
 import java.util.Map;
@@ -5,6 +21,10 @@
 
 import dojox.cometd.Client;
 import dojox.cometd.Listener;
+/**
+ * @author Filip Hanik
+ * @version 1.0
+ */
 
 public class BayeuxClient implements Client {
 public BayeuxClient() {

Modified: tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxFilter.java
URL: 
http://svn.apache.org/viewvc/tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxFilter.java?view=diff&rev=553628&r1=553627&r2=553628
==
--- tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxFilter.java 
(original)
+++ tomcat/sandbox/bayeux/java/org/apache/comet/bayeux/BayeuxFilter.java Thu 
Jul  5 13:50:13 2007
@@ -1,8 +1,28 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance wi

svn commit: r553631 - in /tomcat/sandbox/bayeux: java/org/apache/comet/json/ReaderCharIterator.java test/ test/org/ test/org/apache/ test/org/apache/comet/ test/org/apache/comet/test/ test/org/apache/

2007-07-05 Thread fhanik
Author: fhanik
Date: Thu Jul  5 13:55:07 2007
New Revision: 553631

URL: http://svn.apache.org/viewvc?view=rev&rev=553631
Log:
Added test and fixed the streaming reader to work with multiple messages

Added:
tomcat/sandbox/bayeux/test/
tomcat/sandbox/bayeux/test/org/
tomcat/sandbox/bayeux/test/org/apache/
tomcat/sandbox/bayeux/test/org/apache/comet/
tomcat/sandbox/bayeux/test/org/apache/comet/test/
tomcat/sandbox/bayeux/test/org/apache/comet/test/TestJSONReader.java
Modified:
tomcat/sandbox/bayeux/java/org/apache/comet/json/ReaderCharIterator.java

Modified: 
tomcat/sandbox/bayeux/java/org/apache/comet/json/ReaderCharIterator.java
URL: 
http://svn.apache.org/viewvc/tomcat/sandbox/bayeux/java/org/apache/comet/json/ReaderCharIterator.java?view=diff&rev=553631&r1=553630&r2=553631
==
--- tomcat/sandbox/bayeux/java/org/apache/comet/json/ReaderCharIterator.java 
(original)
+++ tomcat/sandbox/bayeux/java/org/apache/comet/json/ReaderCharIterator.java 
Thu Jul  5 13:55:07 2007
@@ -52,8 +52,12 @@
 
 
 public void recycle() {
-reader = null;
+recycle(null);
+}
+public void recycle(Reader r) {
+reader = r;
 buffer.delete(0,buffer.length());
+pos = -1;
 }
 
 public void setReader(Reader r){

Added: tomcat/sandbox/bayeux/test/org/apache/comet/test/TestJSONReader.java
URL: 
http://svn.apache.org/viewvc/tomcat/sandbox/bayeux/test/org/apache/comet/test/TestJSONReader.java?view=auto&rev=553631
==
--- tomcat/sandbox/bayeux/test/org/apache/comet/test/TestJSONReader.java (added)
+++ tomcat/sandbox/bayeux/test/org/apache/comet/test/TestJSONReader.java Thu 
Jul  5 13:55:07 2007
@@ -0,0 +1,87 @@
+package org.apache.comet.test;
+
+import junit.framework.*;
+import org.stringtree.json.JSONReader;
+import org.stringtree.json.JSONValidatingReader;
+import org.stringtree.json.JSONValidator;
+import org.stringtree.json.JSONErrorListener;
+import java.io.StringReader;
+import org.apache.comet.json.ReaderCharIterator;
+
+public class TestJSONReader extends TestCase {
+String s = "[\n"+
+"  {\n"+
+"  \"channel\":\"/meta/handshake\",\n"+
+"  \"version\":0.1,\n"+
+"  \"minimumVersion\": 0.1,\n"+
+"  \"supportedConnectionTypes\":   [\"iframe\", \"flash\", 
\"http-polling\"],\n"+
+"  \"authScheme\": \"SHA1\",\n"+
+"  \"authUser\":   \"alex\",\n"+
+"  \"authToken\":  \"HASHJIBBERISH\"\n"+
+"  }\n"+
+"  ]\n";
+
+
+
+
+public TestJSONReader(String name) {
+super(name);
+}
+
+protected void setUp() throws Exception {
+super.setUp();
+}
+
+protected void tearDown() throws Exception {
+super.tearDown();
+}
+
+public void testJSON() throws Exception {
+JSONReader reader = new JSONReader();
+Object o = reader.read(s);
+System.out.println(o);
+assertNotNull("JSON Test failed, returned null.",o);
+JSONValidatingReader vr = new JSONValidatingReader(new 
ErrorListener());
+o = vr.read(s);
+System.out.println(o);
+assertNotNull("JSON Test failed, returned null.",o);
+}
+
+public void testJSONStreaming() throws Exception {
+JSONReader reader = new JSONReader();
+StringReader sr = new StringReader(s);
+ReaderCharIterator rci = new ReaderCharIterator(sr);
+Object o = reader.read(rci,JSONReader.FIRST);
+System.out.println(o);
+assertNotNull("JSON Test failed, returned null.",o);
+}
+
+public void testJSONStreamingTwoMsgs() throws Exception {
+JSONReader reader = new JSONReader();
+StringReader sr = new StringReader(s+s.replaceAll("SHA1","MD5"));
+ReaderCharIterator rci = new ReaderCharIterator(sr);
+Object o = reader.read(rci,JSONReader.FIRST);
+System.out.println(o);
+assertNotNull("JSON Test failed, returned null.",o);
+rci.recycle();
+rci.setReader(sr);
+o = reader.read(rci,JSONReader.NEXT);
+System.out.println(o);
+assertNotNull("JSON Test failed, returned null.",o);
+}
+
+
+public static class ErrorListener implements JSONErrorListener {
+public void start(String string) {
+System.out.println("Start:"+string);
+}
+public void error(String string, int _int) {
+System.out.println("Error:"+string+" Pos:"+_int);
+}
+public void end(){
+System.out.println("End:");
+}
+
+}
+
+}



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additi

svn commit: r553634 - /tomcat/sandbox/bayeux/README.txt

2007-07-05 Thread fhanik
Author: fhanik
Date: Thu Jul  5 14:08:31 2007
New Revision: 553634

URL: http://svn.apache.org/viewvc?view=rev&rev=553634
Log:
some added readme notes

Modified:
tomcat/sandbox/bayeux/README.txt

Modified: tomcat/sandbox/bayeux/README.txt
URL: 
http://svn.apache.org/viewvc/tomcat/sandbox/bayeux/README.txt?view=diff&rev=553634&r1=553633&r2=553634
==
--- tomcat/sandbox/bayeux/README.txt (original)
+++ tomcat/sandbox/bayeux/README.txt Thu Jul  5 14:08:31 2007
@@ -1,5 +1,6 @@
 Bayeux definition can be found at 
http://svn.xantus.org/shortbus/trunk/bayeux/bayeux.html
 JSON structure - http://www.json.org/ and 
http://www.ietf.org/rfc/rfc4627.txt?number=4627
 StringTree implementation for JSON parsing - 
http://www.stringtree.org/stringtree-json.html
+Currently we are using the source version of the StringTree parser as it has 
been refactored to support our needs, a release is due in July, 2007 at that 
time I will be adding the new jar to the build script
 
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



DO NOT REPLY [Bug 42822] New: - Entity resolution in JSP documents

2007-07-05 Thread bugzilla
DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
http://issues.apache.org/bugzilla/show_bug.cgi?id=42822.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=42822

   Summary: Entity resolution in JSP documents
   Product: Tomcat 5
   Version: 5.0.23
  Platform: Other
OS/Version: other
Status: NEW
  Severity: normal
  Priority: P2
 Component: Jasper
AssignedTo: [EMAIL PROTECTED]
ReportedBy: [EMAIL PROTECTED]


It is not currently possible to use entity references in JSP Documents in a
reasonable way.  A JSP such as this:

  [example JSP document markup lost in archiving: a jsp:root page titled
   "The test page", with body text "And here it is:" followed by an entity
   reference]


will quite naturally fail to parse.  But while one might expect to be able to
include a doctype declaration with an appropriate entity decl, that doesn't work
either.  That is, if you try

  [DOCTYPE declaration with an internal entity declaration, followed by the
   same jsp:root document -- markup lost in archiving; only the closing "]>"
   of the internal subset and an elided body survived]


Jasper says: Element type "jsp:root" must be declared.

This is apparently because Jasper's parser turns on validation when it sees any
sort of DTD at all (see JspDocumentParser:1413), and since there's no external
DTD provided all elements are going to fail validation.

I can see the logic here, but it seems to go too far.  It is not necessarily the
case that if I've provided a DTD, I want validation.  Entity resolution can
happen w/o validation, so all that needs to happen is to allow the parse to
continue in non-validating mode.

I gather that the "solution" is to not actually use entity references, but
instead to use something like   and let the browser resolve the
"reference".  I find this confusing, to say the least.

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



svn commit: r553700 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http: AcceptLanguage.java BaseRequest.java Cookies.java MimeMap.java Parameters.java ServerCookie.java

2007-07-05 Thread markt
Author: markt
Date: Thu Jul  5 19:36:32 2007
New Revision: 553700

URL: http://svn.apache.org/viewvc?view=rev&rev=553700
Log:
o.a.t.util.http
Tabs -> 8 spaces
Fix compiler warnings
No functional change

Modified:
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/AcceptLanguage.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/BaseRequest.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/Cookies.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/MimeMap.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/Parameters.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/ServerCookie.java

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/AcceptLanguage.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/AcceptLanguage.java?view=diff&rev=553700&r1=553699&r2=553700
==
--- tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/AcceptLanguage.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/http/AcceptLanguage.java 
Thu Jul  5 19:36:32 2007
@@ -38,44 +38,46 @@
 public class AcceptLanguage {
 
 public static Locale getLocale(String acceptLanguage) {
-   if( acceptLanguage == null ) return Locale.getDefault();
+if( acceptLanguage == null ) return Locale.getDefault();
 
-Hashtable languages = new Hashtable();
-Vector quality=new Vector();
-processAcceptLanguage(acceptLanguage, languages,quality);
+Hashtable<String,Vector<String>> languages =
+new Hashtable<String,Vector<String>>();
+Vector quality = new Vector();
+processAcceptLanguage(acceptLanguage, languages, quality);
 
 if (languages.size() == 0) return Locale.getDefault();
 
-Vector l = new Vector();
+Vector l = new Vector();
 extractLocales( languages,quality, l);
 
 return (Locale)l.elementAt(0);
 }
 
 public static Enumeration getLocales(String acceptLanguage) {
-   // Short circuit with an empty enumeration if null header
+// Short circuit with an empty enumeration if null header
 if (acceptLanguage == null) {
-Vector v = new Vector();
+Vector v = new Vector();
 v.addElement(Locale.getDefault());
 return v.elements();
 }
-   
-Hashtable languages = new Hashtable();
-Vector quality=new Vector();
-   processAcceptLanguage(acceptLanguage, languages , quality);
+
+Hashtable<String,Vector<String>> languages =
+new Hashtable<String,Vector<String>>();
+Vector quality=new Vector();
+processAcceptLanguage(acceptLanguage, languages , quality);
 
 if (languages.size() == 0) {
-Vector v = new Vector();
+Vector v = new Vector();
 v.addElement(Locale.getDefault());
 return v.elements();
 }
-   Vector l = new Vector();
-   extractLocales( languages, quality , l);
-   return l.elements();
+Vector l = new Vector();
+extractLocales( languages, quality , l);
+return l.elements();
 }
 
 private static void processAcceptLanguage( String acceptLanguage,
- Hashtable languages, Vector q)
+Hashtable<String,Vector<String>> languages, Vector<Double> q)
 {
 StringTokenizer languageTokenizer =
 new StringTokenizer(acceptLanguage, ",");
@@ -90,7 +92,7 @@
 if (qValueIndex > -1 &&
 qValueIndex < qIndex &&
 qIndex < equalIndex) {
-   String qValueStr = language.substring(qValueIndex + 1);
+String qValueStr = language.substring(qValueIndex + 1);
 language = language.substring(0, qValueIndex);
 qValueStr = qValueStr.trim().toLowerCase();
 qValueIndex = qValueStr.indexOf('=');
@@ -110,11 +112,11 @@
 
 if (! language.equals("*")) {
 String key = qValue.toString();
-Vector v;
+Vector v;
 if (languages.containsKey(key)) {
-v = (Vector)languages.get(key) ;
+v = languages.get(key) ;
 } else {
-v= new Vector();
+v= new Vector();
 q.addElement(qValue);
 }
 v.addElement(language);
@@ -123,7 +125,8 @@
 }
 }
 
-private static void extractLocales(Hashtable languages, Vector q,Vector l)
+private static void extractLocales(Hashtable languages, Vector q,
+Vector l)
 {
 // XXX We will need to order by q value Vector in the Future ?
 Enumeration e = q.elements();
@@ -132,9 +135,9 @@
 (Vector)languages.get(((Double)e.nextElement()).toString());
 Enumeration le = v.elements();
 while (le.hasMoreElements(

svn commit: r553716 - /tomcat/tc6.0.x/trunk/java/org/apache/catalina/util/CookieTools.java

2007-07-05 Thread markt
Author: markt
Date: Thu Jul  5 20:31:04 2007
New Revision: 553716

URL: http://svn.apache.org/viewvc?view=rev&rev=553716
Log:
Remove old util class that is no longer used.

Removed:
tomcat/tc6.0.x/trunk/java/org/apache/catalina/util/CookieTools.java


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]