DO NOT REPLY [Bug 40812] - change antiJARLocking to work on webapp-dir, but don't delete it when redeploying

2006-10-26 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=40812


[EMAIL PROTECTED] changed:

           What    |Removed |Added
--------------------------------------
         Status    |NEW     |RESOLVED
     Resolution    |        |WONTFIX




--- Additional Comments From [EMAIL PROTECTED]  2006-10-26 01:23 ---
I think this behavior is useless, so no.




Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Remy Maucherat

[EMAIL PROTECTED] wrote:

Author: fhanik
Date: Wed Oct 25 15:11:10 2006
New Revision: 467787

URL: http://svn.apache.org/viewvc?view=rev&rev=467787
Log:
Documented socket properties
Added in the ability to cache bytebuffers based on number of channels or number 
of bytes
Added in nonGC poller events to lower CPU usage during high traffic 


I'm starting to get emails again, so sorry for not replying.

I am testing with the default VM settings, which basically means that 
excessive GC will have a very visible impact. I am testing to optimize, 
not to see which connector would be faster in the real world (probably 
neither unless testing scalability), so I think it's reasonable.


This fixes the paranormal behavior I was seeing on Windows, so the NIO 
connector works properly now. Great! However, I still have NIO which is 
slower than java.io, which is slower than APR. It's OK if some solutions 
are better than others on certain platforms, of course.


Rémy




Re: Source for Packages org.apache.tomcat.dbcp and below?

2006-10-26 Thread Remy Maucherat

Fernando Nasser wrote:
And we have the same problem on JPackage, and as a consequence on Red Hat, 
Fedora, SuSE, Mandriva...


I wonder if the magic could not be done by having the original 
commons-dbcp JAR as input and doing some manipulation on it to move the 
classes to the desired package at Tomcat build time...


Normally, it's difficult to do. You can easily patch Tomcat to have it 
use the regular commons-dbcp by default (it's a constant), but it will 
have the usual drawbacks (you will expose a bunch of JARs to webapps).
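
As a rough illustration of the "it's a constant" remark, a sketch of the
idea (the class and field names below are invented for illustration; only
the two factory class names are real):

// Hypothetical sketch: the object factory Tomcat instantiates for JNDI
// DataSource resources is named by a single class-name constant, so
// repointing it at plain commons-dbcp is a one-line patch.
public final class DataSourceFactorySketch {
    // Renamed, webapp-invisible copy bundled with Tomcat:
    public static final String BUNDLED_DBCP_FACTORY =
        "org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory";
    // Plain commons-dbcp, with the drawback Remy mentions (its JARs
    // become visible to webapps):
    public static final String PLAIN_DBCP_FACTORY =
        "org.apache.commons.dbcp.BasicDataSourceFactory";
}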


Rémy




Re: [TC6] Double AJP connector implementation

2006-10-26 Thread Remy Maucherat

Filip Hanik - Dev Lists wrote:
I don't have any preference either way; since we are pretty few active 
folks at the moment, less code is usually better


My plan was not to do that (org.apache.jk is not that huge), and to keep 
people happy.


Rémy




Re: Source for Packages org.apache.tomcat.dbcp and below?

2006-10-26 Thread Marcus Better
Remy Maucherat wrote:
> Normally, it's difficult to do. You can easily patch Tomcat to have it
> use the regular commons-dbcp by default (it's a constant), but it will
> have the usual drawbacks (you will expose a bunch of JARs to webapps).

Excuse my ignorance, but why is this a problem?

Marcus






Re: [build]fail message

2006-10-26 Thread Yoav Shapira

Hi,


On 10/24/06, Sean Qiu <[EMAIL PROTECTED]> wrote:

I have also tried "ant test".
It seems that it is based on unit tests.


I don't use "ant test" and have no time to delve into what it is.  Use
the tester version.


I find there are only three testing classes and a few test cases.


The tester contains a little more than 100 unit tests, as you can see
from its output when you run it.


Is any configuration needed to make the unit tests succeed?


None besides a proper configuration to start with.  Proper meaning:
don't put anything in ANT_HOME/lib, on the CLASSPATH environment
variable (though it's ignored anyhow), or in JRE_HOME/lib.  Proper also
meaning: copy build/build.properties.default to build/build.properties
and edit it to reflect your environment.  And that's it.


And what's the difference between "ant run-tester" and "ant test"? Thanks
again.


See first answer above.


BTW, when I run "ant test", it fails because of missing packages.
So I added the relevant JAR files from the binary distribution to the
"javac" task's classpath parameter.
It works, but of course it is probably not the right way :) Please correct me.


See first answer above.

Yoav




Re: svn commit: r465417 - in /tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11: Http11NioProcessor.java InternalNioInputBuffer.java

2006-10-26 Thread Remy Maucherat

Filip Hanik - Dev Lists wrote:
I get occasional phantom slowdowns with APR as well; not sure where 
they come from. I might dig into this afterwards.


Let me know if you find something.

Rémy




Testing Tomcat 6.0.0 alpha

2006-10-26 Thread Remy Maucherat

Hi,

It would be good to test the build, and I'll post a stability vote for 
it next week (capped at beta, since some - very minor - test failures 
would need to be addressed first).


I've updated the website at people.apache.org, but it's not updated 
correctly and the download page does not work (the syncing seems broken 
following the extended downtime of people.apache.org).


The d/l location will be:
http://tomcat.apache.org/download-60.cgi

The build itself has been mirrored correctly.

Question: I don't remember where the Maven repository I should upload 
the build to is located. Does someone know?


Rémy




svn commit: r467989 - in /tomcat/tc6.0.x/trunk/java: javax/servlet/ServletException.java org/apache/catalina/core/StandardWrapper.java org/apache/catalina/valves/ErrorReportValve.java

2006-10-26 Thread remm
Author: remm
Date: Thu Oct 26 06:08:58 2006
New Revision: 467989

URL: http://svn.apache.org/viewvc?view=rev&rev=467989
Log:
- Refactor exception reporting using Throwable.getCause, since TC 6 does not 
have the restrictions for modifications
  to the API implementation classes.
- ServletException.getRootCause now calls getCause.
- Also add some tweaks for robustness to cap recursion.
- Let me know if I did it wrong.

Modified:
tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java
tomcat/tc6.0.x/trunk/java/org/apache/catalina/core/StandardWrapper.java
tomcat/tc6.0.x/trunk/java/org/apache/catalina/valves/ErrorReportValve.java

Modified: tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java?view=diff&rev=467989&r1=467988&r2=467989
==
--- tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java (original)
+++ tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java Thu Oct 26 
06:08:58 2006
@@ -23,30 +23,15 @@
  *
  * @author Various
  * @version$Version$
- *
  */
-
-
 public class ServletException extends Exception {
 
-private Throwable rootCause;
-
-
-
-
-
 /**
  * Constructs a new servlet exception.
- *
  */
-
 public ServletException() {
-   super();
+super();
 }
-
-   
-
-
 
 /**
  * Constructs a new servlet exception with the
@@ -56,16 +41,10 @@
  * @param message  a String 
  * specifying the text of 
  * the exception message
- *
  */
-
 public ServletException(String message) {
-   super(message);
+super(message);
 }
-
-   
-   
-
 
 /**
  * Constructs a new servlet exception when the servlet 
@@ -73,7 +52,6 @@
  * about the "root cause" exception that interfered with its 
  * normal operation, including a description message.
  *
- *
  * @param message  a String containing 
  * the text of the exception message
  *
@@ -81,18 +59,11 @@
  * that interfered with the servlet's
  * normal operation, making this servlet
  * exception necessary
- *
  */
-
 public ServletException(String message, Throwable rootCause) {
-   super(message);
-   this.rootCause = rootCause;
+super(message, rootCause);
 }
 
-
-
-
-
 /**
  * Constructs a new servlet exception when the servlet 
  * needs to throw an exception and include a message
@@ -110,33 +81,18 @@
  * that interfered with the servlet's
  * normal operation, making the servlet exception
  * necessary
- *
  */
-
 public ServletException(Throwable rootCause) {
-   super(rootCause.getLocalizedMessage());
-   this.rootCause = rootCause;
+this(rootCause.getLocalizedMessage(), rootCause);
 }
-  
-  
- 
- 
-
+
 /**
  * Returns the exception that caused this servlet exception.
  *
- *
  * @return the Throwable 
  * that caused this servlet exception
- *
  */
-
 public Throwable getRootCause() {
-   return rootCause;
+return getCause();
 }
 }
-
-
-
-
-

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/catalina/core/StandardWrapper.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/catalina/core/StandardWrapper.java?view=diff&rev=467989&r1=467988&r2=467989
==
--- tomcat/tc6.0.x/trunk/java/org/apache/catalina/core/StandardWrapper.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/catalina/core/StandardWrapper.java Thu 
Oct 26 06:08:58 2006
@@ -679,17 +679,13 @@
 Throwable rootCause = e;
 Throwable rootCauseCheck = null;
 // Extra aggressive rootCause finding
+int loops = 0;
 do {
-try {
-rootCauseCheck = (Throwable)IntrospectionUtils.getProperty
-(rootCause, "rootCause");
-if (rootCauseCheck!=null)
-rootCause = rootCauseCheck;
-
-} catch (ClassCastException ex) {
-rootCauseCheck = null;
-}
-} while (rootCauseCheck != null);
+loops++;
+rootCauseCheck = rootCause.getCause();
+if (rootCauseCheck != null)
+rootCause = rootCauseCheck;
+} while (rootCauseCheck != null && (loops < 20));
 return rootCause;
 }
 

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/catalina/valves/ErrorReportValve.java
URL: 
http://svn.apache.or

Re: Testing Tomcat 6.0.0 alpha

2006-10-26 Thread Yoav Shapira

Hi,


On 10/26/06, Remy Maucherat <[EMAIL PROTECTED]> wrote:

The d/l location will be:
http://tomcat.apache.org/download-60.cgi

The build itself has been mirrored correctly.


Cool.


Question: I don't remember where the Maven repository I should upload
the build to is. Does someone know ?


It's on people.apache.org, but I think it might still be hosed.  The
upload directions are at:
http://www.apache.org/dev/release-publishing.html#maven-repo

Once we have a formal release (i.e. voted / approved by the PMC), we
can also upload to ibiblio, aka the Maven Central Repository,
following the directions at
http://maven.apache.org/guides/mini/guide-ibiblio-upload.html

Yoav




Re: svn commit: r467989 - in /tomcat/tc6.0.x/trunk/java: javax/servlet/ServletException.java org/apache/catalina/core/StandardWrapper.java org/apache/catalina/valves/ErrorReportValve.java

2006-10-26 Thread Tim Funk

Adding this to both loops may be helpful too:
if (rootCause == rootCause.getCause()) {
break;
}


-Tim

[EMAIL PROTECTED] wrote:

Author: remm
Date: Thu Oct 26 06:08:58 2006
New Revision: 467989

URL: http://svn.apache.org/viewvc?view=rev&rev=467989
Log:
- Refactor exception reporting using Throwable.getCause, since TC 6 does not 
have the restrictions for modifications
  to the API implementation classes.
- ServletException.getRootCause now calls getCause.
- Also add some tweaks for robustness to cap recursion.
- Let me know if I did it wrong.






svn commit: r467995 - /tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java

2006-10-26 Thread remm
Author: remm
Date: Thu Oct 26 06:24:22 2006
New Revision: 467995

URL: http://svn.apache.org/viewvc?view=rev&rev=467995
Log:
- Also use the parent constructor here.

Modified:
tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java

Modified: tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java?view=diff&rev=467995&r1=467994&r2=467995
==
--- tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java (original)
+++ tomcat/tc6.0.x/trunk/java/javax/servlet/ServletException.java Thu Oct 26 
06:24:22 2006
@@ -83,7 +83,7 @@
  * necessary
  */
 public ServletException(Throwable rootCause) {
-this(rootCause.getLocalizedMessage(), rootCause);
+super(rootCause);
 }
 
 /**






Re: svn commit: r467989 - in /tomcat/tc6.0.x/trunk/java: javax/servlet/ServletException.java org/apache/catalina/core/StandardWrapper.java org/apache/catalina/valves/ErrorReportValve.java

2006-10-26 Thread Remy Maucherat

Tim Funk wrote:

Adding this to both loops may be helpful too:
if (rootCause == rootCause.getCause()) {
break;
}


Throwable.getCause returns null in that case, so the loop should exit:

public Throwable getCause() {
return (cause==this ? null : cause);
}

Rémy




Re: Testing Tomcat 6.0.0 alpha

2006-10-26 Thread Remy Maucherat

Yoav Shapira wrote:

It's on people.apache.org, but I think it might still be hosed.  The
upload directions are at:
http://www.apache.org/dev/release-publishing.html#maven-repo

Once we have a formal release (i.e. voted / approved by the PMC), we
can also upload to ibiblio, aka the Maven Central Repository,
following the directions at
http://maven.apache.org/guides/mini/guide-ibiblio-upload.html


Ok, it's not totally trivial then. I'll automate it (except the signing 
part, as usual) and I will start uploading builds to Maven for Tomcat 6.0.1.


Rémy




Re: svn commit: r467989 - in /tomcat/tc6.0.x/trunk/java: javax/servlet/ServletException.java org/apache/catalina/core/StandardWrapper.java org/apache/catalina/valves/ErrorReportValve.java

2006-10-26 Thread Tim Funk

It's specifically to address bad user code. For example:

http://issues.apache.org/bugzilla/show_bug.cgi?id=39088

It's the case where ServletException.getCause() returns an instance of a 
user's custom Throwable, and then that custom Throwable returns itself as 
the root cause.


-Tim

Remy Maucherat wrote:

Tim Funk wrote:

Adding this to both loops may be helpful too:
if (rootCause == rootCause.getCause()) {
break;
}


Throwable.getCause does return null in that case, so the loop should get 
out:

public Throwable getCause() {
return (cause==this ? null : cause);
}






Re: svn commit: r467989 - in /tomcat/tc6.0.x/trunk/java: javax/servlet/ServletException.java org/apache/catalina/core/StandardWrapper.java org/apache/catalina/valves/ErrorReportValve.java

2006-10-26 Thread Remy Maucherat

Tim Funk wrote:

It's specifically to address bad user code. For example:

http://issues.apache.org/bugzilla/show_bug.cgi?id=39088

It's the case where ServletException.getCause() returns an instance of a 
user's custom Throwable, and then that custom Throwable returns itself as 
the root cause.


I don't understand the situation you're talking about. Is it about an 
exception which would override getCause? Feel free to make any changes; 
I'm OK with them.


Rémy





Re: [TC6] Double AJP connector implementation

2006-10-26 Thread Mladen Turk

Remy Maucherat wrote:

Filip Hanik - Dev Lists wrote:
I don't have any preference either way, since we are pretty few active 
folks at the moment, the less code is usually better


My plan was not to do that (org.apache.jk is not that huge) and keep 
people happy.




Nevertheless, the Apache/IIS config generator will need some rewrite anyhow.
I suppose if we came up with an alternate solution for
the config generator, org.apache.jk could be marked as dormant.

Anyhow, I would like us to have org.apache.coyote.ajp as the default
AJP connector, both for APR and JIO. It would give more stability
to both of them, though.

Regards,
Mladen.




DO NOT REPLY [Bug 40820] New: - Default JSP factory not initialized early enough

2006-10-26 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=40820

   Summary: Default JSP factory not initialized early enough
   Product: Tomcat 6
   Version: 6.0.0
  Platform: Other
OS/Version: other
Status: NEW
  Severity: normal
  Priority: P2
 Component: Jasper
AssignedTo: tomcat-dev@jakarta.apache.org
ReportedBy: [EMAIL PROTECTED]


With the latest TC6 code, I'm seeing a problem that did not exist on earlier TC6
drivers.  Sorry that I can't put a finger on when this problem arose.  I looked
into relevant source files (like JspRuntimeContext) but haven't found the source
of the problem. 

Here's the issue (a testcase will be attached). 

The app is very simple: it installs a ServletContextListener for the purpose
of adding a custom ELResolver.  It accomplishes this via:

public void contextInitialized(ServletContextEvent evt) {
    ServletContext context = evt.getServletContext();
    JspApplicationContext jspContext =
        JspFactory.getDefaultFactory().getJspApplicationContext(context);
    jspContext.addELResolver(new ChipsELResolver());
}


The problem is that JspFactory.getDefaultFactory() is returning null; see below.

Oct 4, 2006 5:32:18 PM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\javaFor6.0BuildJDK15\java\jre\bin;.;C:\javaFor6.0BuildJDK15\java\bin;c:\mantis2.1\mantis\bin;C:\setupIBASE;C:\Perl\bin\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Python22;C:\Program Files\PC-Doctor for Windows\services;c:\cvsnt-2.0.4;c:\eclipse;w:\;w:\bin;C:\Program Files\QuickTime\QTSystem\;C:\Diskeeper\;C:\CMVC\exe;C:\CMVC\exe\bin;;C:\CMVCDC50;C:\CMVCDC50;C:\CMVCDC50;
Oct 4, 2006 5:32:18 PM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Oct 4, 2006 5:32:19 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 1234 ms
Oct 4, 2006 5:32:19 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Oct 4, 2006 5:32:19 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.0-dev
Oct 4, 2006 5:32:19 PM org.apache.catalina.core.StandardHost start
INFO: XML validation disabled
Oct 4, 2006 5:32:19 PM org.apache.catalina.startup.HostConfig deployWAR
INFO: Deploying web application archive ELResolverTest.war
ChipsListener.contextInitialized  evt= javax.servlet.ServletContextEvent[source=[EMAIL PROTECTED]
ChipsListener.contextInitialized  context= org.apache.catalina.core.ApplicationC[EMAIL PROTECTED]
ChipsListener.contextInitialized  JspFactory.getDefaultFactory()= null
Oct 4, 2006 5:32:20 PM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
Oct 4, 2006 5:32:20 PM org.apache.catalina.core.StandardContext start
SEVERE: Context [/ELResolverTest] startup failed due to previous errors
Oct 4, 2006 5:32:21 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Oct 4, 2006 5:32:21 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
Oct 4, 2006 5:32:21 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/63  config=null
Oct 4, 2006 5:32:21 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 2657 ms


This is easily reproducible when you deploy ELResolverTest.war *before* the
server has started (I assume JspFactory.setDefaultFactory() hasn't been invoked
at the time the listeners are being installed).  If you deploy the WAR *after*
the server starts, the problem does not manifest and the app works, until you
stop and restart the server.
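
For anyone hitting the same symptom, a minimal defensive variant of the
listener above (a sketch only; the null guard reports the init-order problem
clearly instead of fixing it, and ChipsELResolver is the reporter's class):

import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.jsp.JspFactory;

public class ChipsListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent evt) {
        ServletContext context = evt.getServletContext();
        JspFactory factory = JspFactory.getDefaultFactory();
        if (factory == null) {
            // Fail with a clear message instead of an opaque NPE (bug 40820).
            throw new IllegalStateException(
                "JspFactory default factory not initialized yet");
        }
        factory.getJspApplicationContext(context)
               .addELResolver(new ChipsELResolver());
    }

    public void contextDestroyed(ServletContextEvent evt) {
        // nothing to clean up
    }
}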




DO NOT REPLY [Bug 40820] - Default JSP factory not initialized early enough

2006-10-26 Thread bugzilla

http://issues.apache.org/bugzilla/show_bug.cgi?id=40820





--- Additional Comments From [EMAIL PROTECTED]  2006-10-26 07:54 ---
Created an attachment (id=19042)
 --> (http://issues.apache.org/bugzilla/attachment.cgi?id=19042&action=view)
This zip file contains the WAR in the ELResolverTest\dist directory; the
source and build scripts are provided as well.





Re: Testing Tomcat 6.0.0 alpha

2006-10-26 Thread Filip Hanik - Dev Lists

Remy Maucherat wrote:

Yoav Shapira wrote:

It's on people.apache.org, but I think it might still be hosed.  The
upload directions are at:
http://www.apache.org/dev/release-publishing.html#maven-repo

Once we have a formal release (i.e. voted / approved by the PMC), we
can also upload to ibiblio, aka the Maven Central Repository,
following the directions at
http://maven.apache.org/guides/mini/guide-ibiblio-upload.html


Ok, it's not totally trivial then. I'll automate it (except the 
signing part, as usual) and I will start uploading builds to Maven for 
Tomcat 6.0.1.
You can automate the signing; I did it for 5.5 with a Windows batch 
script, checked into 5.5/build/sign.bat.

It takes the key password as its parameter.

Filip




svn commit: r468035 - in /tomcat/tc6.0.x/trunk/java/org/apache: coyote/http11/InternalNioOutputBuffer.java tomcat/util/net/NioEndpoint.java

2006-10-26 Thread fhanik
Author: fhanik
Date: Thu Oct 26 08:24:24 2006
New Revision: 468035

URL: http://svn.apache.org/viewvc?view=rev&rev=468035
Log:
Reverted the removal of the "socket buffer": writing to a ByteBuffer byte 
by byte is extremely slow, so writes should only be done in chunks

Modified:

tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioEndpoint.java
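
The rationale can be seen in a standalone sketch (this is not Tomcat code):
staging bytes in a plain byte[], as the restored buf/pos fields do, and
handing the ByteBuffer one bulk put() avoids a bounds-checked call per byte.

import java.nio.ByteBuffer;

public class ChunkedWriteSketch {
    public static void main(String[] args) {
        byte[] chunk = new byte[8 * 1024];
        ByteBuffer bbuf = ByteBuffer.allocateDirect(chunk.length);

        // Slow path: one put() call per byte.
        long t0 = System.nanoTime();
        for (byte b : chunk) {
            bbuf.put(b);
        }
        long perByte = System.nanoTime() - t0;
        bbuf.clear();

        // Fast path: one bulk put() for the whole staged chunk.
        t0 = System.nanoTime();
        bbuf.put(chunk, 0, chunk.length);
        long bulk = System.nanoTime() - t0;

        System.out.println("per-byte: " + perByte + " ns, bulk: " + bulk + " ns");
    }
}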

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java?view=diff&rev=468035&r1=468034&r2=468035
==
--- 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java 
(original)
+++ 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java 
Thu Oct 26 08:24:24 2006
@@ -69,8 +69,15 @@
 this.response = response;
 headers = response.getMimeHeaders();
 
-//buf = new byte[headerBufferSize];
+buf = new byte[headerBufferSize];
 
+if (headerBufferSize < (8 * 1024)) {
+bbufLimit = 6 * 1500;
+} else {
+bbufLimit = (headerBufferSize / 1500 + 1) * 1500;
+}
+//bbuf = ByteBuffer.allocateDirect(bbufLimit);
+
 outputStreamOutputBuffer = new SocketOutputBuffer();
 
 filterLibrary = new OutputFilter[0];
@@ -128,7 +135,7 @@
 /**
  * Pointer to the current write buffer.
  */
-//protected byte[] buf;
+protected byte[] buf;
 
 
 /**
@@ -440,12 +447,11 @@
 /**
  * Send the response status line.
  */
-public void sendStatus() throws IOException  {
+public void sendStatus() {
 
 // Write protocol name
 write(Constants.HTTP_11_BYTES);
-addToBB(Constants.SP);
-pos++;
+buf[pos++] = Constants.SP;
 
 // Write status code
 int status = response.getStatus();
@@ -463,8 +469,7 @@
 write(status);
 }
 
-addToBB(Constants.SP);
-pos++;
+buf[pos++] = Constants.SP;
 
 // Write message
 String message = response.getMessage();
@@ -475,10 +480,8 @@
 }
 
 // End the response status line
-addToBB(Constants.CR);
-pos++;
-addToBB(Constants.LF);
-pos++;
+buf[pos++] = Constants.CR;
+buf[pos++] = Constants.LF;
 
 }
 
@@ -489,18 +492,14 @@
  * @param name Header name
  * @param value Header value
  */
-public void sendHeader(MessageBytes name, MessageBytes value) throws 
IOException {
+public void sendHeader(MessageBytes name, MessageBytes value) {
 
 write(name);
-addToBB(Constants.COLON);
-pos++;
-addToBB(Constants.SP);
-pos++;
+buf[pos++] = Constants.COLON;
+buf[pos++] = Constants.SP;
 write(value);
-addToBB(Constants.CR);
-pos++;
-addToBB(Constants.LF);
-pos++;
+buf[pos++] = Constants.CR;
+buf[pos++] = Constants.LF;
 
 }
 
@@ -511,18 +510,15 @@
  * @param name Header name
  * @param value Header value
  */
-public void sendHeader(ByteChunk name, ByteChunk value) throws IOException 
{
+public void sendHeader(ByteChunk name, ByteChunk value) {
 
 write(name);
-addToBB(Constants.COLON);
-pos++;
-addToBB(Constants.SP);
-pos++;
+buf[pos++] = Constants.COLON;
+buf[pos++] = Constants.SP;
 write(value);
-addToBB(Constants.CR);
-pos++;
-addToBB(Constants.LF);
-pos++;
+buf[pos++] = Constants.CR;
+buf[pos++] = Constants.LF;
+
 }
 
 
@@ -535,16 +531,11 @@
 public void sendHeader(String name, String value) {
 
 write(name);
-addToBB(Constants.COLON);
-pos++;
-addToBB(Constants.SP);
-pos++;
+buf[pos++] = Constants.COLON;
+buf[pos++] = Constants.SP;
 write(value);
-addToBB(Constants.CR);
-pos++;
-addToBB(Constants.LF);
-pos++;
-
+buf[pos++] = Constants.CR;
+buf[pos++] = Constants.LF;
 
 }
 
@@ -554,10 +545,8 @@
  */
 public void endHeaders() {
 
-addToBB(Constants.CR);
-pos++;
-addToBB(Constants.LF);
-pos++;
+buf[pos++] = Constants.CR;
+buf[pos++] = Constants.LF;
 
 }
 
@@ -609,28 +598,17 @@
 
 if (pos > 0) {
 // Sending the response header buffer
-//flushBuffer();//do we need this?
+addToBB(buf, 0, pos);
 }
 
 }
 
 int total = 0;
-private void addToBB(byte b)  {
-ByteBuffer bytebuffer = socket.getBufHandler().getWriteBuffer();
-final int length = 1;
-if (bytebuffer.remaining() <= length) {
-try { flushBuffer()

Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Filip Hanik - Dev Lists

Remy Maucherat wrote:
> [...]


Thanks for the feedback. I'm testing with larger files now, 100k+, and 
also see APR -> JIO -> NIO.
NIO has a very funny CPU telemetry graph; it fluctuates way too much, so 
I have to find where in the code it would do this, so there is still 
some work to do.
I'd like to see a nearly flat CPU usage when running my test, but 
instead the CPU goes from 20-80%, up and down, up and down.


During my test
(for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400 
http://localhost:$PORT/104k.jpg 2>&1 | grep "Requests per"; done)


my memory usage goes up to 40MB, then after a full GC it goes down to 
10MB again, so I want to figure out where that comes from as well. My 
guess is that all that data is actually in the java.net.Socket classes, 
as I am seeing the same results with the JIO connector, but not with 
APR (because APR allocates memory using pools).
BTW, I had to put the byte[] buffer back into 
InternalNioOutputBuffer.java; ByteBuffers are way too slow.


With APR, I think the connections might be lingering too long, as 
eventually, during my test, it stops accepting connections, usually 
around the 89th iteration of the test.
I'm going to keep working on this for a bit, as I think I am getting to a 
point with the NIO connector where it is a viable alternative.


Filip




Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Remy Maucherat

Filip Hanik - Dev Lists wrote:
Thanks for the feedback. I'm testing with larger files now, 100k+, and 
also see APR -> JIO -> NIO.
NIO has a very funny CPU telemetry graph; it fluctuates way too much, so 
I have to find where in the code it would do this, so there is still 
some work to do.
I'd like to see a nearly flat CPU usage when running my test, but 
instead the CPU goes from 20-80%, up and down, up and down.


It's a bit mysterious.


During my test
(for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400 
http://localhost:$PORT/104k.jpg 2>&1 | grep "Requests per"; done)


my memory usage goes up to 40MB, then after a full GC it goes down to 
10MB again, so I want to figure out where that comes from as well. My 
guess is that all that data is actually in the java.net.Socket classes, 
as I am seeing the same results with the JIO connector, but not with 
APR (because APR allocates memory using pools).
BTW, I had to put the byte[] buffer back into 
InternalNioOutputBuffer.java; ByteBuffers are way too slow.


With APR, I think the connections might be lingering too long, as 
eventually, during my test, it stops accepting connections, usually 
around the 89th iteration of the test.
I'm going to keep working on this for a bit, as I think I am getting to a 
point with the NIO connector where it is a viable alternative.


I agree: it seems it's better than java.io (although it's not faster, it 
seems more scalable, and I'm getting fewer errors during the tests).


Rémy




Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Rainer Jung
Hi Filip,

the fluctuation reminds me of something: depending on the client
behaviour, connections will end up in TIME_WAIT state. Usually you run
into trouble (throughput stalls) once you have around 30K of them. They
will be cleaned up every now and then by the kernel (talking about the
Unix/Linux style mechanisms), and then throughput (and CPU usage) start
again.

With modern systems handling 10-20k requests per second, one can run into
trouble much faster than the usual cleanup intervals.

Check with "netstat -an" if you can see a lot of TIME_WAIT connections
(thousands). If not, it's something different :(

Regards,

Rainer

Filip Hanik - Dev Lists wrote:
> [...]



Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Filip Hanik - Dev Lists
That's some very good info; it looks like my system never does go over 
30k, and cleaning it up seems to be working really well.

BTW, do you know where I change the cleanup intervals for the Linux 2.6 kernel?

I figured out what the problem was:
Somewhere I have a lock/wait problem.

For example, this runs perfectly:
./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i

If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1 second.

So what was happening in my test was running 1000 requests over 400 
connections, then invoking 1 request over 1 connection, and repeating.
Every time I did the single connection request, it hit a 1-second delay, 
and this causes the CPU to drop.


So basically, the NIO connector sucks majorly if you are a single user 
:), I'll trace this one down.
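
One speculative way such a flat one-second delay can arise in a
selector-based connector (a sketch only, not the actual NioEndpoint code;
the 1000 ms timeout and the event queue are assumptions):

import java.io.IOException;
import java.nio.channels.Selector;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PollerSketch implements Runnable {

    private final Selector selector;
    private final Queue<Runnable> events = new ConcurrentLinkedQueue<Runnable>();

    public PollerSketch() throws IOException {
        this.selector = Selector.open();
    }

    public void addEvent(Runnable event) {
        events.offer(event);
        // Without this wakeup, a poller blocked in select(1000) does not
        // see the new event until the timeout expires: with one idle
        // connection, that shows up as a ~1 second delay per request.
        selector.wakeup();
    }

    public void run() {
        while (true) {
            Runnable r;
            while ((r = events.poll()) != null) {
                r.run(); // e.g. register or update interest ops
            }
            try {
                selector.select(1000); // 1s poll timeout
            } catch (IOException e) {
                break;
            }
            // ... process selected keys ...
        }
    }
}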

Filip


Rainer Jung wrote:
> [...]



Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Peter Rossbach

Hi Filip and Rainer,

I found the following info on reducing TIME_WAIT on Windows:

===

The TIME_WAIT problem is a very common one for Windows NT systems.
Unlike most Unix systems, Windows NT does not have a generic setting
for the TIME_WAIT interval modification. To modify this setting, you
should create an entry in the Windows NT Registry (the information
below is taken from the http://www.microsoft.com site):

Run Registry Editor (RegEdit.exe).
Go to the following key in the registry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\tcpip\Parameters
Choose Add Value from the Edit menu and create the following entry:
  Value Name: TcpTimedWaitDelay
  Data Type:  REG_DWORD
  Value:      30-300 (decimal) - time in seconds
  Default:    0xF0 (240 decimal), not in the registry by default
Quit the Registry Editor.
Restart the computer for the registry change to take effect.

Description: This parameter determines the length of time that a
connection will stay in the TIME_WAIT state when being closed. While
a connection is in the TIME_WAIT state, the socket pair cannot be
reused. This is also known as the "2MSL" state, as by RFC the value
should be twice the maximum segment lifetime on the network. See
RFC 793 for further details.





Regards
Peter



On 26.10.2006, at 20:58, Filip Hanik - Dev Lists wrote:
> [...]

Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Peter Rossbach

Hi,

For other server OSes I found:

=
For AIX: To see the current TCP_TIMEWAIT value, run the following
command:

/usr/sbin/no -a | grep tcp_timewait

To set the TCP_TIMEWAIT value to 15 seconds, run the following command:

/usr/sbin/no -o tcp_timewait=1

The tcp_timewait option is used to configure how long connections are
kept in the timewait state. It is given in 15-second intervals, and
the default is 1.


For Linux: Set the timeout_timewait parameter using the following
command:

/sbin/sysctl -w net.ipv4.vs.timeout_timewait=30

This will set TIME_WAIT to 30 seconds.


For Solaris: Set the tcp_time_wait_interval to 3 milliseconds as
follows:

/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 3

==

Tips for tuning Mac OS X 10.4 are very welcome :-(

Regards
Peter Roßbach
[EMAIL PROTECTED]



On 26.10.2006, at 20:58, Filip Hanik - Dev Lists wrote:
> [...]

Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Rainer Jung
Hi Filip,

that's one of the not so nice things with Linux. As far as I know it's
not configurable with standard Linux. There exist kernel patches for
this, and there is an IP filter module that lets you do it, but some
say that module is very bad for IP performance (and high performance
would be the major reason to decrease the TIME_WAIT interval).

It's shrinkable for Solaris (ndd -set /dev/tcp tcp_time_wait_interval
VALUE_IN_SECONDS), but even there the thread cleaning up the tables runs
only every 5 seconds.

Concerning the one request / one connection case: I have often noticed
strange behaviour (unclean shutdown) of ab concerning the last request
in a connection. I never analysed it, though. If you can easily reproduce
the "one request over one connection is slow" problem without high load,
you might want to run tcpdump to check whether it's really slow on the
server side.

Just my 0.9 cents ...

Rainer

Filip Hanik - Dev Lists wrote:
> [...]

Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Rainer Jung
Sorry: Solaris VALUE_IN_SECONDS -> VALUE_IN_MILLISECONDS

Rainer Jung schrieb:
> Hi Filip,
> 
> that's one of the not so nice things with linux. As far as I know it's
> not configurable with standard linux. There exist kernel patches for
> this and there is an ip filter module that lets you do that, but some
> say that module is very bad for IP performance (and high performance
> would be the major reason to decrease the time_wait interval).
> 
> It' shrinkable for solaris (ndd -set /dev/tcp tcp_time_wait_interval
> VALUE_IN_SECONDS), but even there the thread cleaning up the tables runs
> only every 5 seconds.
> 
> Concerning the one request 1 connection case: I often realized strange
> behaviour (unclean shutdown) of ab concerning the last request in a
> connection. I never analysed it though. If you can easily reproduce the
> "one request over one connection is slow" problem without high load, you
> might want to tcpdump to check, if it's really slow on the server side.
> 
> Just my 0.9 cents ...
> 
> Rainer
> 
> Filip Hanik - Dev Lists schrieb:
>> That's some very good info, it looks like my system never does go over
>> 30k and cleaning it up seems to be working really well.
>> btw. do you know where I change the cleanup intervals for linux 2.6 kernel?
>>
>> I figured out what the problem was:
>> Somewhere I have a lock/wait problem
>>
>> for example, this runs perfectly:
>> ./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i
>>
>> If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1 second.
>>
>> so what was happening in my test was running 1000 requests over 400
>> connections, then invoking 1 request over 1 connection, and repeat.
>> Every time I did the single connection request, it does a 1sec delay,
>> this cause the CPU to drop.
>>
>> So basically, the NIO connector sucks majorly if you are a single user
>> :), I'll trace this one down.
>> Filip
>>
>>
>> Rainer Jung wrote:
>>> Hi Filip,
>>>
>>> the fluctuation reminds me of something: depending on the client
>>> behaviour connections will end up in TIME_WAIT state. Usually you run
>>> into trouble (throughput stalls) once you have around 30K of them. They
>>> will be cleaned up every now and then by the kernel (talking about the
>>> unix/Linux style mechanisms) and then throughput (and CPU usage) start
>>> again.
>>>
>>> With modern systems handling 10-20k requests per second one can run into
>>> trouble much faster than the usual cleanup intervals.
>>>
>>> Check with "netstat -an" if you can see a lot of TIME_WAIT connections
>>> (thousands). If not it's something different :(
>>>
>>> Regards,
>>>
>>> Rainer
>>>
>>> Filip Hanik - Dev Lists wrote:
>>>  
 Remy Maucherat wrote:

> [EMAIL PROTECTED] wrote:
>  
>> Author: fhanik
>> Date: Wed Oct 25 15:11:10 2006
>> New Revision: 467787
>>
>> URL: http://svn.apache.org/viewvc?view=rev&rev=467787
>> Log:
>> Documented socket properties
>> Added in the ability to cache bytebuffers based on number of channels
>> or number of bytes
>> Added in nonGC poller events to lower CPU usage during high traffic
>> 
> I'm starting to get emails again, so sorry for not replying.
>
> I am testing with the default VM settings, which basically means that
> excessive GC will have a very visible impact. I am testing to
> optimize, not to see which connector would be faster in the real world
> (probably neither unless testing scalability), so I think it's
> reasonable.
>
> This fixes the paranormal behavior I was seeing on Windows, so the NIO
> connector works properly now. Great ! However, I still have NIO which
> is slower than java.io which is slower than APR. It's ok if some
> solutions are better than others on certain platforms of course.
>
>   
 thanks for the feedback, I'm testing with larger files now, 100k+ and
 also see APR->JIO->NIO
 NIO has a very funny CPU telemetry graph, it fluctuates way too much, so
 I have to find where in the code it would do this, so there is still
 some work to do.
 I'd like to see a nearly flat CPU usage when running my test, but
 instead the CPU goes from 20-80% up and down, up and down.

 during my test
 (for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400
 http://localhost:$PORT/104k.jpg 2>1 |grep "Requests per"; done)

 my memory usage goes up to 40MB, then after a FullGC it goes down to
 10MB again, so I wanna figure out where that comes from as well. My
 guess is that all that data is actually in the java.net.Socket classes,
 as I am seeing the same results with the JIO connector, but not with
 APR(cause APR allocates mem using pools)
 Btw, had to put in the byte[] buffer back into the
 InternalNioOutputBuffer.java, ByteBuffers are way too slow.

 With APR, I think the connections might be lingering too long as
 eventually, during my test, it stops accepting connections. Usually
 around the 89th iteration of the test.
 I'm gonna keep working on this for a bit, as I think I am getting to a
 point with the NIO connector where it is a viable alternative.

DO NOT REPLY [Bug 40822] New: - Session conflict

2006-10-26 Thread bugzilla
DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=40822

   Summary: Session conflict
   Product: Tomcat 5
   Version: 5.5.17
  Platform: Other
OS/Version: Linux
Status: NEW
  Severity: critical
  Priority: P2
 Component: Unknown
AssignedTo: tomcat-dev@jakarta.apache.org
ReportedBy: [EMAIL PROTECTED]


Well, I developed an application that creates an object with an id and a name;
this object is loaded into the session every time the user logs in, and
tomcat manages the sessions in distinct spaces. But after some weeks of using the
system with Tomcat 5.5.17 and JDK 1.5_8, the data of one user was printed in
another user's session. This only happened two times, but it happened. When I used
tomcat 5.5.16 and JDK 1.5_6, this never happened. Does anybody have some idea
of what it is?

Att,

Fred

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Filip Hanik - Dev Lists

Rainer Jung wrote:

Concerning the one request 1 connection case: I have often noticed strange
behaviour (unclean shutdown) of ab concerning the last request in a
connection. I never analysed it though. If you can easily reproduce the
"one request over one connection is slow" problem without high load, you
might want to tcpdump to check, if it's really slow on the server side.
  

you got it, that is the problem.
Filip


Just my 0.9 cents ...

Rainer

Filip Hanik - Dev Lists wrote:
  

That's some very good info, it looks like my system never does go over
30k and cleaning it up seems to be working really well.
btw. do you know where I change the cleanup intervals for linux 2.6 kernel?

I figured out what the problem was:
Somewhere I have a lock/wait problem

for example, this runs perfectly:
./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i

If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1 second.

so what was happening in my test was running 1000 requests over 400
connections, then invoking 1 request over 1 connection, and repeat.
Every time I did the single connection request, it does a 1sec delay,
this causes the CPU to drop.

So basically, the NIO connector sucks majorly if you are a single user
:), I'll trace this one down.
Filip


Rainer Jung wrote:


Hi Filip,

the fluctuation reminds me of something: depending on the client
behaviour connections will end up in TIME_WAIT state. Usually you run
into trouble (throughput stalls) once you have around 30K of them. They
will be cleaned up every now and then by the kernel (talking about the
unix/Linux style mechanisms) and then throughput (and CPU usage) start
again.

With modern systems handling 10-20k requests per second one can run into
trouble much faster than the usual cleanup intervals.

Check with "netstat -an" if you can see a lot of TIME_WAIT connections
(thousands). If not it's something different :(

Regards,

Rainer
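A quick way to automate that check on Linux (a sketch, assuming a /proc
filesystem; in /proc/net/tcp the "st" column holds the socket state in hex,
and 06 is TIME_WAIT):

import java.io.BufferedReader;
import java.io.FileReader;

public class TimeWaitCount {
    public static void main(String[] args) throws Exception {
        BufferedReader r = new BufferedReader(new FileReader("/proc/net/tcp"));
        try {
            String line = r.readLine(); // skip the header row
            int count = 0;
            while ((line = r.readLine()) != null) {
                String[] f = line.trim().split("\\s+");
                // column 3 ("st") is the state, 06 == TIME_WAIT
                if (f.length > 3 && "06".equals(f[3])) count++;
            }
            System.out.println("TIME_WAIT sockets: " + count);
        } finally {
            r.close();
        }
    }
}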

Filip Hanik - Dev Lists wrote:
 
  

Remy Maucherat wrote:
   


[EMAIL PROTECTED] wrote:
 
  

Author: fhanik
Date: Wed Oct 25 15:11:10 2006
New Revision: 467787

URL: http://svn.apache.org/viewvc?view=rev&rev=467787
Log:
Documented socket properties
Added in the ability to cache bytebuffers based on number of channels
or number of bytes
Added in nonGC poller events to lower CPU usage during high traffic



I'm starting to get emails again, so sorry for not replying.

I am testing with the default VM settings, which basically means that
excessive GC will have a very visible impact. I am testing to
optimize, not to see which connector would be faster in the real world
(probably neither unless testing scalability), so I think it's
reasonable.

This fixes the paranormal behavior I was seeing on Windows, so the NIO
connector works properly now. Great ! However, I still have NIO which
is slower than java.io which is slower than APR. It's ok if some
solutions are better than others on certain platforms of course.

  
  

thanks for the feedback, I'm testing with larger files now, 100k+ and
also see APR->JIO->NIO
NIO has a very funny CPU telemetry graph, it fluctuates way too much, so
I have to find where in the code it would do this, so there is still
some work to do.
I'd like to see a nearly flat CPU usage when running my test, but
instead the CPU goes from 20-80% up and down, up and down.

during my test
(for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400
http://localhost:$PORT/104k.jpg 2>1 |grep "Requests per"; done)

my memory usage goes up to 40MB, then after a FullGC it goes down to
10MB again, so I wanna figure out where that comes from as well. My
guess is that all that data is actually in the java.net.Socket classes,
as I am seeing the same results with the JIO connector, but not with
APR(cause APR allocates mem using pools)
Btw, had to put in the byte[] buffer back into the
InternalNioOutputBuffer.java, ByteBuffers are way too slow.

With APR, I think the connections might be lingering too long as
eventually, during my test, it stops accepting connections. Usually
around the 89th iteration of the test.
I'm gonna keep working on this for a bit, as I think I am getting to a
point with the NIO connector where it is a viable alternative.

Filip

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



  
  

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



  



-

Re: [TC6] Double AJP connector implementation

2006-10-26 Thread Jean-frederic Clere

Mladen Turk wrote:


Remy Maucherat wrote:


Filip Hanik - Dev Lists wrote:

I don't have any preference either way, since we are pretty few 
active folks at the moment, the less code is usually better



My plan was not to do that (org.apache.jk is not that huge) and keep 
people happy.




Nevertheless, the Apache/IIS Config will need some rewrite anyhow.
I suppose if we came up with the alternate solution for
the Config generator, the org.apache.jk can be marked as dormant.

Anyhow, I would like we have the org.apache.coyote.ajp as default
AJP connector, both for APR and JIO. It would give more stability
to both of them, though.


That won't be bad... I am always testing org.apache.jk instead of
org.apache.coyote.ajp ;-)


Cheers

Jean-Frederic



Regards,
Mladen.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Jean-frederic Clere

Peter Rossbach wrote:


Hi,

for other server os's I found:

=
For AIX: To see the current TCP_TIMEWAIT value, run the following  
command:

/usr/sbin/no -a | grep tcp_timewait

To set the TCP_TIMEWAIT values to 15 seconds, run the following command:
/usr/sbin/no -o tcp_timewait=1

The tcp_timewait option is used to configure how long connections are  
kept in the timewait state. It is given in 15-second intervals, and  
the default is 1.


For Linux: Set the timeout_timewait parameter using the following
command:

/sbin/sysctl -w net.ipv4.vs.timeout_timewait=30
This will set TIME_WAIT to 30 seconds.



No... My machine (debian 2.6.13) says:
+++
[EMAIL PROTECTED]:~$ sudo /sbin/sysctl -w net.ipv4.vs.timeout_timewait=30
error: "net.ipv4.vs.timeout_timewait" is an unknown key
+++
net.ipv4.tcp_fin_timeout is probably the thing to use:
+++
[EMAIL PROTECTED]:~$ more  /proc/sys/net/ipv4/tcp_fin_timeout
60
+++

Cheers

Jean-Frederic




For Solaris: Set the tcp_time_wait_interval to 30000 milliseconds as
follows:

/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 30000

==

Tips for tuning Mac OS X 10.4 are very welcome :-(

Regards
Peter Roßbach
[EMAIL PROTECTED]



On 26.10.2006 at 20:58, Filip Hanik - Dev Lists wrote:

That's some very good info, it looks like my system never does go  
over 30k and cleaning it up seems to be working really well.
btw. do you know where I change the cleanup intervals for linux 2.6  
kernel?


I figured out what the problem was:
Somewhere I have a lock/wait problem

for example, this runs perfectly:
./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i

If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1  
second.


so what was happening in my test was running 1000 requests over 400  
connections, then invoking 1 request over 1 connection, and repeat.
 Every time I did the single connection request, it does a 1sec
 delay, this causes the CPU to drop.


So basically, the NIO connector sucks majorly if you are a single  
user :), I'll trace this one down.

Filip


Rainer Jung wrote:


Hi Filip,

the fluctuation reminds me of something: depending on the client
behaviour connections will end up in TIME_WAIT state. Usually you run
into trouble (throughput stalls) once you have around 30K of them.  
They

will be cleaned up every now and then by the kernel (talking about  the
unix/Linux style mechanisms) and then throughput (and CPU usage)  start
again.

With modern systems handling 10-20k requests per second one can  run 
into

trouble much faster than the usual cleanup intervals.

Check with "netstat -an" if you can see a lot of TIME_WAIT  connections
(thousands). If not it's something different :(

Regards,

Rainer

Filip Hanik - Dev Lists wrote:


Remy Maucherat wrote:


[EMAIL PROTECTED] wrote:


Author: fhanik
Date: Wed Oct 25 15:11:10 2006
New Revision: 467787

URL: http://svn.apache.org/viewvc?view=rev&rev=467787
Log:
Documented socket properties
Added in the ability to cache bytebuffers based on number of  
channels

or number of bytes
Added in nonGC poller events to lower CPU usage during high  traffic


I'm starting to get emails again, so sorry for not replying.

I am testing with the default VM settings, which basically means  
that

excessive GC will have a very visible impact. I am testing to
optimize, not to see which connector would be faster in the real  
world

(probably neither unless testing scalability), so I think it's
reasonable.

This fixes the paranormal behavior I was seeing on Windows, so  
the NIO
connector works properly now. Great ! However, I still have NIO  
which

is slower than java.io which is slower than APR. It's ok if some
solutions are better than others on certain platforms of course.



thanks for the feedback, I'm testing with larger files now, 100k+  and
also see APR->JIO->NIO
 NIO has a very funny CPU telemetry graph, it fluctuates way too
 much, so

I have to find where in the code it would do this, so there is still
some work to do.
I'd like to see a nearly flat CPU usage when running my test, but
instead the CPU goes from 20-80% up and down, up and down.

during my test
(for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400
http://localhost:$PORT/104k.jpg 2>1 |grep "Requests per"; done)

my memory usage goes up to 40MB, then after a FullGC it goes down to
10MB again, so I wanna figure out where that comes from as well. My
guess is that all that data is actually in the java.net.Socket  
classes,

as I am seeing the same results with the JIO connector, but not with
APR(cause APR allocates mem using pools)
Btw, had to put in the byte[] buffer back into the
 InternalNioOutputBuffer.java, ByteBuffers are way too slow.

 With APR, I think the connections might be lingering too long as
 eventually, during my test, it stops accepting connections. Usually
around the 89th iteration of the test.
I'm gonna keep working on this for a bit, as I think I am getting  
to a

 point with the NIO connector where it is a viable alternative.

DO NOT REPLY [Bug 40817] - servlet-cgi throws index out of bounds exception on certain cgi

2006-10-26 Thread bugzilla
DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=40817





--- Additional Comments From [EMAIL PROTECTED]  2006-10-26 12:59 ---
This is likely a configuration issue.  Make sure that you aren't defining the
CGIServlet init-param cgiPathPrefix as '/' in your CGI servlet definition. 
Remove the cgiPathPrefix init-param and it should work as expected.  Setting
cgiPathPrefix to '/' was the only way that I could repro this issue.  By having
that set you are unnecessarily adding an extra '/' to the path:

INFO http-8080-Processor25
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/cgi-test] - cgi:
findCGI: path=/test.pl, /home/chris/apache-tomcat-5.5.20/webapps/cgi-test//

The CGIServlet is already set up to trim any trailing file separator from the
webAppRootDir, but it only trims one:

if ((webAppRootDir != null)
&& (webAppRootDir.lastIndexOf(File.separator) ==
(webAppRootDir.length() - 1))) {
//strip the trailing "/" from the webAppRootDir
webAppRootDir =
webAppRootDir.substring(0, (webAppRootDir.length() - 1));
}

A possibly more appropriate patch would trim an arbitrary number of file
separators from webAppRootDir, though right now I can't think of another case
where that would be needed.
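A sketch of that more general trim, for illustration only (it assumes, like
the existing code, that webAppRootDir has already been null-checked):

// keep stripping until no trailing separator remains
while (webAppRootDir.endsWith(File.separator)) {
    webAppRootDir =
        webAppRootDir.substring(0, webAppRootDir.length() - 1);
}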

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



DO NOT REPLY [Bug 40823] New: - Specifying default context w/empty path outside of server.xml

2006-10-26 Thread bugzilla
DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=40823

   Summary: Specifying default context w/empty path outside of
server.xml
   Product: Tomcat 5
   Version: 5.5.20
  Platform: All
OS/Version: other
Status: NEW
  Severity: normal
  Priority: P2
 Component: Unknown
AssignedTo: tomcat-dev@jakarta.apache.org
ReportedBy: [EMAIL PROTECTED]


The documentation (http://tomcat.apache.org/tomcat-5.5-
doc/config/context.html) encourages users to define their contexts outside of 
the server.xml file to enable hot-deployments.  The documentation also 
says "If you specify a context path of an empty string (""), you are defining 
the default web application for this Host, which will process all requests not 
assigned to other Contexts"  Unfortunately, unless the context is defined 
inside the server.xml file, Tomcat doesn't recognize the empty string path 
value.  The workaround suggested on the user mailing list was:
To specify the default app, you must first delete the existing webapps/ROOT 
directory, then install your app in webapps/ROOT (or webapps/ROOT.war) or put 
your <Context> element in conf/[engine]/[host]/ROOT.xml.
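For illustration, such a ROOT.xml can be as small as this (the docBase path
is hypothetical):

<!-- conf/[engine]/[host]/ROOT.xml, e.g. conf/Catalina/localhost/ROOT.xml -->
<Context docBase="/opt/apps/myapp" />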

This bug is to request the documentation be updated to include the 
instructions for specifying the default context when it's defined physically 
outside of the server.xml file.  

The dev mailing list indicated this is also how Tomcat 6 works.

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



DO NOT REPLY [Bug 40824] New: - Tomcat doesn't honor use of an empty string ("") to define the default web application for a Host outside server.xml

2006-10-26 Thread bugzilla
DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=40824

   Summary: Tomcat doesn't honor use of an empty string ("") to
define the default web application for a Host outside
server.xml
   Product: Tomcat 5
   Version: 5.5.20
  Platform: Other
OS/Version: other
Status: NEW
  Severity: normal
  Priority: P2
 Component: Unknown
AssignedTo: tomcat-dev@jakarta.apache.org
ReportedBy: [EMAIL PROTECTED]


The documentation (http://tomcat.apache.org/tomcat-5.5-
doc/config/context.html) encourages users to define their contexts outside of 
the server.xml file to enable hot-deployments.  The documentation also 
says "If you specify a context path of an empty string (""), you are defining 
the default web application for this Host, which will process all requests not 
assigned to other Contexts"  Unfortunately, unless the context is defined 
inside the server.xml file, Tomcat doesn't recognize the empty string path 
value.  The workaround suggested on the user mailing list was:
To specify the default app, you must first delete the existing webapps/ROOT 
directory, then install your app in webapps/ROOT (or webapps/ROOT.war) or put 
your <Context> element in conf/[engine]/[host]/ROOT.xml.

This bug is to request a single mechanism for specifying the default context 
regardless of if it's physically defined inside or outside of the server.xml 
file.  

The dev mailing list indicated this is also how Tomcat 6 works.
Also bug# 40823 is a stop-gap solution to call out the workaround.

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Rainer Jung
Jean-frederic Clere wrote:
> Peter Rossbach wrote:
>> For Linux: Set the timeout_timewait parameter using the following 
>> command:
>> /sbin/sysctl -w net.ipv4.vs.timeout_timewait=30
>> This will set TIME_WAIT to 30 seconds.
> 
> 
> No... My machine (debian 2.6.13) says:
> +++
> [EMAIL PROTECTED]:~$ sudo /sbin/sysctl -w net.ipv4.vs.timeout_timewait=30
> error: "net.ipv4.vs.timeout_timewait" is an unknown key

It's an extension of Linux Virtual Server that does not exist in a standard kernel.

> +++
> net.ipv4.tcp_fin_timeout is probably the thing to use:
> +++
> [EMAIL PROTECTED]:~$ more  /proc/sys/net/ipv4/tcp_fin_timeout
> 60

No, that's something different: it's responsible for FIN_WAIT2 and not for
TIME_WAIT. I'm still pretty sure the TIME_WAIT interval is (unfortunately)
not tunable on a standard 2.6 kernel.

> +++
> 
> Cheers
> 
> Jean-Frederic
> 
>>
>> 
>> For Solaris: Set the tcp_time_wait_interval to 30000 milliseconds as 
>> follows:
>> /usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 30000
>>
>> ==
>>
>> Tips for tuning Mac OS X 10.4 are very welcome :-(
>>
>> Regards
>> Peter Roßbach
>> [EMAIL PROTECTED]
>>
>>
>>
>> On 26.10.2006 at 20:58, Filip Hanik - Dev Lists wrote:
>>
>>> That's some very good info, it looks like my system never does go 
>>> over 30k and cleaning it up seems to be working really well.
>>> btw. do you know where I change the cleanup intervals for linux 2.6 
>>> kernel?
>>>
>>> I figured out what the problem was:
>>> Somewhere I have a lock/wait problem
>>>
>>> for example, this runs perfectly:
>>> ./ab -n 1 -c 100 http://localhost:$PORT/run.jsp?run=TEST$i
>>>
>>> If I change -c 100 (100 sockets) to -c 1, each JSP request takes 1 
>>> second.
>>>
>>> so what was happening in my test was running 1000 requests over 400 
>>> connections, then invoking 1 request over 1 connection, and repeat.
>>> Every time I did the single connection request, it does a 1sec 
>>> delay, this causes the CPU to drop.
>>>
>>> So basically, the NIO connector sucks majorly if you are a single 
>>> user :), I'll trace this one down.
>>> Filip
>>>
>>>
>>> Rainer Jung wrote:
>>>
 Hi Filip,

 the fluctuation reminds me of something: depending on the client
 behaviour connections will end up in TIME_WAIT state. Usually you run
 into trouble (throughput stalls) once you have around 30K of them. 
 They
 will be cleaned up every now and then by the kernel (talking about  the
 unix/Linux style mechanisms) and then throughput (and CPU usage)  start
 again.

 With modern systems handling 10-20k requests per second one can  run
 into
 trouble much faster than the usual cleanup intervals.

 Check with "netstat -an" if you can see a lot of TIME_WAIT  connections
 (thousands). If not it's something different :(

 Regards,

 Rainer

 Filip Hanik - Dev Lists wrote:

> Remy Maucherat wrote:
>
>> [EMAIL PROTECTED] wrote:
>>
>>> Author: fhanik
>>> Date: Wed Oct 25 15:11:10 2006
>>> New Revision: 467787
>>>
>>> URL: http://svn.apache.org/viewvc?view=rev&rev=467787
>>> Log:
>>> Documented socket properties
>>> Added in the ability to cache bytebuffers based on number of 
>>> channels
>>> or number of bytes
>>> Added in nonGC poller events to lower CPU usage during high  traffic
>>
>> I'm starting to get emails again, so sorry for not replying.
>>
>> I am testing with the default VM settings, which basically means 
>> that
>> excessive GC will have a very visible impact. I am testing to
>> optimize, not to see which connector would be faster in the real 
>> world
>> (probably neither unless testing scalability), so I think it's
>> reasonable.
>>
>> This fixes the paranormal behavior I was seeing on Windows, so 
>> the NIO
>> connector works properly now. Great ! However, I still have NIO 
>> which
>> is slower than java.io which is slower than APR. It's ok if some
>> solutions are better than others on certain platforms of course.
>>
>>
> thanks for the feedback, I'm testing with larger files now, 100k+  and
> also see APR->JIO->NIO
> NIO has a very funny CPU telemetry graph, it fluctuates way too
> much, so
> I have to find where in the code it would do this, so there is still
> some work to do.
> I'd like to see a nearly flat CPU usage when running my test, but
> instead the CPU goes from 20-80% up and down, up and down.
>
> during my test
> (for i in $(seq 1 100); do echo -n "$i."; ./ab -n 1000 -c 400
> http://localhost:$PORT/104k.jpg 2>1 |grep "Requests per"; done)
>
> my memory usage goes up to 40MB, then after a FullGC it goes down to
> 10MB again, so I wanna figure out where that comes from as well. My
> guess is that all that data is actually in the java.net.Socket 
> classes,
> as I am seeing the same results with the JIO connector, but not with
> APR(cause APR allocates mem using pools)

svn commit: r468124 - in /tomcat/tc6.0.x/trunk/java/org/apache: coyote/http11/ tomcat/util/net/

2006-10-26 Thread fhanik
Author: fhanik
Date: Thu Oct 26 13:37:40 2006
New Revision: 468124

URL: http://svn.apache.org/viewvc?view=rev&rev=468124
Log:
Make sure the socket buffer is not bigger than anticipated header size
Reuse the key attachment objects properly

Modified:
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProtocol.java

tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalOutputBuffer.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioChannel.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioEndpoint.java

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java?view=diff&rev=468124&r1=468123&r2=468124
==
--- tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProcessor.java 
Thu Oct 26 13:37:40 2006
@@ -83,7 +83,7 @@
 // --- Constructors
 
 
-public Http11NioProcessor(int rxBufSize, int txBufSize, NioEndpoint 
endpoint) {
+public Http11NioProcessor(int rxBufSize, int txBufSize, int 
maxHttpHeaderSize, NioEndpoint endpoint) {
 
 this.endpoint = endpoint;
 
@@ -95,12 +95,12 @@
 readTimeout = timeout;
 //readTimeout = -1;
 }
-inputBuffer = new InternalNioInputBuffer(request, 
rxBufSize,readTimeout);
+inputBuffer = new InternalNioInputBuffer(request, 
maxHttpHeaderSize,readTimeout);
 request.setInputBuffer(inputBuffer);
 
 response = new Response();
 response.setHook(this);
-outputBuffer = new InternalNioOutputBuffer(response, 
txBufSize,readTimeout);
+outputBuffer = new InternalNioOutputBuffer(response, 
maxHttpHeaderSize,readTimeout);
 response.setOutputBuffer(outputBuffer);
 request.setResponse(response);
 

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProtocol.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProtocol.java?view=diff&rev=468124&r1=468123&r2=468124
==
--- tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProtocol.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/Http11NioProtocol.java 
Thu Oct 26 13:37:40 2006
@@ -655,8 +655,9 @@
 
 public Http11NioProcessor createProcessor() {
 Http11NioProcessor processor = new Http11NioProcessor(
-  
Math.max(proto.maxHttpHeaderSize,proto.ep.getSocketProperties().getRxBufSize()),
-  
Math.max(proto.maxHttpHeaderSize,proto.ep.getSocketProperties().getRxBufSize()),
 
+  proto.ep.getSocketProperties().getRxBufSize(),
+  proto.ep.getSocketProperties().getTxBufSize(), 
+  proto.maxHttpHeaderSize,
   proto.ep);
 processor.setAdapter(proto.adapter);
 processor.setMaxKeepAliveRequests(proto.maxKeepAliveRequests);

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java?view=diff&rev=468124&r1=468123&r2=468124
==
--- 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java 
(original)
+++ 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java 
Thu Oct 26 13:37:40 2006
@@ -605,7 +605,7 @@
 
 int total = 0;
 private void addToBB(byte[] buf, int offset, int length) throws 
IOException {
-if (socket.getBufHandler().getWriteBuffer().remaining() <= length) {
+if (socket.getBufHandler().getWriteBuffer().remaining() < length) {
 flushBuffer();
 }
 socket.getBufHandler().getWriteBuffer().put(buf, offset, length);
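For reference, the off-by-one this hunk fixes: a ByteBuffer still accepts a
write when remaining() equals the write length, so flushing on "<=" flushed
one write too early. A small illustration (not part of the commit):

import java.nio.ByteBuffer;

public class RemainingDemo {
    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(8);
        byte[] data = new byte[8];
        // remaining() == length: the buffer holds exactly this write,
        // so the old "<=" test would have triggered a needless flush
        System.out.println(bb.remaining() == data.length); // true
        bb.put(data); // fits without overflow
        System.out.println(bb.remaining()); // 0
    }
}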

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalOutputBuffer.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalOutputBuffer.java?view=diff&rev=468124&r1=468123&r2=468124
==
--- 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalOutputBuffer.java 
(original)
+++ 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalOutputBuffer.java 
Thu Oct 26 13:37:40 2006
@@ -61,6 +61,7 @@
 public InternalOutputBuffer(Response response, int headerBufferSize) {
 
 this.response = response;
+
 

svn commit: r468132 - in /tomcat/tc6.0.x/trunk/java/org/apache: coyote/http11/InternalNioInputBuffer.java coyote/http11/InternalNioOutputBuffer.java tomcat/util/net/NioSelectorPool.java tomcat/util/ne

2006-10-26 Thread fhanik
Author: fhanik
Date: Thu Oct 26 13:57:28 2006
New Revision: 468132

URL: http://svn.apache.org/viewvc?view=rev&rev=468132
Log:
Ooops, forgot to pass in the double buffered channel to the selector pool for 
write and read operations

Modified:

tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java

tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioSelectorPool.java
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/SecureNioChannel.java

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java?view=diff&rev=468132&r1=468131&r2=468132
==
--- 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java 
(original)
+++ 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java 
Thu Oct 26 13:57:28 2006
@@ -570,7 +570,7 @@
 Selector selector = null;
 try { selector = getSelectorPool().get(); }catch ( IOException x ) 
{}
 try {
-nRead = 
getSelectorPool().read(socket.getBufHandler().getReadBuffer(),socket.getIOChannel(),selector,rto);
+nRead = 
getSelectorPool().read(socket.getBufHandler().getReadBuffer(),socket,selector,rto);
 } catch ( EOFException eof ) {
 nRead = -1;
 } finally { 

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java?view=diff&rev=468132&r1=468131&r2=468132
==
--- 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java 
(original)
+++ 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioOutputBuffer.java 
Thu Oct 26 13:57:28 2006
@@ -431,7 +431,7 @@
 //ignore
 }
 try {
-written = getSelectorPool().write(bytebuffer, 
socket.getIOChannel(), selector, writeTimeout);
+written = getSelectorPool().write(bytebuffer, socket, selector, 
writeTimeout);
 //make sure we are flushed 
 do {
 if (socket.flush(selector)) break;

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioSelectorPool.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioSelectorPool.java?view=diff&rev=468132&r1=468131&r2=468132
==
--- tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioSelectorPool.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net/NioSelectorPool.java 
Thu Oct 26 13:57:28 2006
@@ -99,7 +99,7 @@
  * @throws SocketTimeoutException if the write times out
  * @throws IOException if an IO Exception occurs in the underlying socket 
logic
  */
-public int write(ByteBuffer buf, SocketChannel socket, Selector selector, 
long writeTimeout) throws IOException {
+public int write(ByteBuffer buf, NioChannel socket, Selector selector, 
long writeTimeout) throws IOException {
 SelectionKey key = null;
 int written = 0;
 boolean timedout = false;
@@ -118,7 +118,7 @@
 }
 if ( selector != null ) {
 //register OP_WRITE to the selector
-if (key==null) key = socket.register(selector, 
SelectionKey.OP_WRITE);
+if (key==null) key = 
socket.getIOChannel().register(selector, SelectionKey.OP_WRITE);
 else key.interestOps(SelectionKey.OP_WRITE);
 keycount = selector.select(writeTimeout);
 }
@@ -147,7 +147,7 @@
  * @throws SocketTimeoutException if the read times out
  * @throws IOException if an IO Exception occurs in the underlying socket 
logic
  */
-public int read(ByteBuffer buf, SocketChannel socket, Selector selector, 
long readTimeout) throws IOException {
+public int read(ByteBuffer buf, NioChannel socket, Selector selector, long 
readTimeout) throws IOException {
 SelectionKey key = null;
 int read = 0;
 boolean timedout = false;
@@ -163,7 +163,7 @@
 }
 if ( selector != null ) {
 //register OP_WRITE to the selector
-if (key==null) key = socket.register(selector, 
SelectionKey.OP_READ);
+if (key==null) key = 
socket.getIOChannel().register(selector, SelectionKey.OP_READ);
 else key.interestOps(SelectionKey.OP_READ);
 keycount = selector.select(readTimeout);
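Both methods implement the same pattern: a blocking-style operation on a
non-blocking channel that parks on a temporary Selector from the pool. A
self-contained sketch of the read side, with illustrative names rather than
the real Tomcat signatures:

import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class BlockingReadSketch {
    // ch is assumed to be configured non-blocking; sel is a pooled Selector
    public static int read(SocketChannel ch, ByteBuffer buf,
                           Selector sel, long timeout) throws IOException {
        int n = ch.read(buf);
        if (n != 0) {
            return n; // got data (or -1 on EOF) without waiting
        }
        SelectionKey key = ch.register(sel, SelectionKey.OP_READ);
        try {
            if (sel.select(timeout) == 0) {
                throw new SocketTimeoutException("read timed out");
            }
            return ch.read(buf);
        } finally {
            key.cancel();
            sel.selectNow(); // flush the cancelled key out of the selector
        }
    }
}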
  

DO NOT REPLY [Bug 40817] - servlet-cgi throws index out of bounds exception on certain cgi

2006-10-26 Thread bugzilla
DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=40817





--- Additional Comments From [EMAIL PROTECTED]  2006-10-26 14:08 ---
Well, I double-checked the init-param but it doesn't have a '/'.
It's "blank":
  <param-name>cgiPathPrefix</param-name>
  <param-value></param-value>
(also, though it's a different issue, I found that "SCRIPT_NAME" was wrong
too... it was returning "/test.pltest.pl" or "/test/test.pltest/test.pl"... fixed
it in the if statement a few lines down... scriptname = cginame and scriptname =
contextpath + cginame, respectively, but that's another issue).

wait... I see... line 918 adds an extra '/' if the pathprefix setting is null.

anyways, "cginame = (currentLocation.getParent() +
File.separator).substring(webAppRootDir.length()) + name;" seems to work.


btw, how do I submit a feature request? (I added it myself as I was having
problems with PHP CGI... I made it so that under certain circumstances, it will use
"php" instead of "perl" as the cgiexecutable and lo-and-behold, it
works... though I also had to add the env "SCRIPT_FILENAME" [which is just an
exact copy of "X_TOMCAT_SCRIPT_PATH"]). I plan to make the php "enhancement"
part of the init-param so it can be turned on or off as need be (as well as
be able to define what constitutes "PHP" mode... as right now, it's hard-coded to
look for commands that end with ".php" ".php3" ".php4" ".phps").

anyways... maybe a regex for removing the trailing '/' might do...
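Something along those lines (an untested sketch; the character class covers
both separator styles):

// strip any run of trailing separators in one go
webAppRootDir = webAppRootDir.replaceAll("[/\\\\]+$", "");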

(In reply to comment #2)
> This is likely a configuration issue.  Make sure that you aren't defining the
> CGIServlet init-param cgiPathPrefix as '/' in your CGI servlet definition. 
> Remove the cgiPathPrefix init-param and it should work as expected.  Setting
> cgiPathPrefix to '/' was the only way that I could repro this issue.  By 
> having
> that set you are unnecessarily adding an extra '/' to the path:
> 
> INFO http-8080-Processor25
> org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/cgi-test] - 
> cgi:
> findCGI: path=/test.pl, /home/chris/apache-tomcat-5.5.20/webapps/cgi-test//
> 
> The CGIServlet is already set up to trim any trailing file separator from the
> webAppRootDir, but it only trims one:
> 
> if ((webAppRootDir != null)
> && (webAppRootDir.lastIndexOf(File.separator) ==
> (webAppRootDir.length() - 1))) {
> //strip the trailing "/" from the webAppRootDir
> webAppRootDir =
> webAppRootDir.substring(0, (webAppRootDir.length() - 1));
> }
> 
> A possibly more appropriate patch would trim an arbitrary number of file
> separators from webAppRootDir, though right now I can't think of another case
> where that would be needed.

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



svn commit: r468166 - /tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java

2006-10-26 Thread fhanik
Author: fhanik
Date: Thu Oct 26 15:04:24 2006
New Revision: 468166

URL: http://svn.apache.org/viewvc?view=rev&rev=468166
Log:
Cleaned up imports

Modified:

tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java

Modified: 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java?view=diff&rev=468166&r1=468165&r2=468166
==
--- 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java 
(original)
+++ 
tomcat/tc6.0.x/trunk/java/org/apache/coyote/http11/InternalNioInputBuffer.java 
Thu Oct 26 15:04:24 2006
@@ -20,21 +20,16 @@
 
 import java.io.EOFException;
 import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.nio.channels.CancelledKeyException;
-import java.nio.channels.SelectionKey;
+import java.nio.channels.Selector;
 
 import org.apache.coyote.InputBuffer;
 import org.apache.coyote.Request;
 import org.apache.tomcat.util.buf.ByteChunk;
 import org.apache.tomcat.util.buf.MessageBytes;
 import org.apache.tomcat.util.http.MimeHeaders;
-import org.apache.tomcat.util.net.NioEndpoint.KeyAttachment;
-import org.apache.tomcat.util.net.NioEndpoint.Poller;
-import org.apache.tomcat.util.res.StringManager;
 import org.apache.tomcat.util.net.NioChannel;
 import org.apache.tomcat.util.net.NioSelectorPool;
-import java.nio.channels.Selector;
+import org.apache.tomcat.util.res.StringManager;
 
 /**
  * Implementation of InputBuffer which provides HTTP request header parsing as



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: svn commit: r467787 - in /tomcat/tc6.0.x/trunk/java/org/apache/tomcat/util/net: NioChannel.java NioEndpoint.java SecureNioChannel.java SocketProperties.java

2006-10-26 Thread Filip Hanik - Dev Lists

Remy Maucherat wrote:

[EMAIL PROTECTED] wrote:

Author: fhanik
Date: Wed Oct 25 15:11:10 2006
New Revision: 467787

URL: http://svn.apache.org/viewvc?view=rev&rev=467787
Log:
Documented socket properties
Added in the ability to cache bytebuffers based on number of channels 
or number of bytes
Added in nonGC poller events to lower CPU usage during high traffic 


I'm starting to get emails again, so sorry for not replying.

I am testing with the default VM settings, which basically means that 
excessive GC will have a very visible impact. I am testing to 
optimize, not to see which connector would be faster in the real world 
(probably neither unless testing scalability), so I think it's 
reasonable.


This fixes the paranormal behavior I was seeing on Windows, so the NIO 
connector works properly now. Great ! However, I still have NIO which 
is slower than java.io which is slower than APR. It's ok if some 
solutions are better than others on certain platforms of course.
The NIO implementation seems to be very GC intensive; here are a couple of
graphs from a profiler that show objects that were collected during a
test run.


http://www.hanik.com/gc-by-object-count.html
http://www.hanik.com/gc-by-object-size.html

As you can see, the majority is IO related; the rest of it, like the
ConcurrentLinkedQueue, is pretty wasteful as well, and all the HashMap
and LinkedList instances are traced to calls from some java IO function. So
naturally, running our JIO connector, we get the same GC behavior.


Filip



Rémy

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]






-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[PROPOSAL] More Non-Non blocking NIO crap

2006-10-26 Thread Filip Hanik - Dev Lists
gents, so I finally think I have a stable NIO implementation that is 
doing fairly well.


I have an idea for a next generation of the NIO connector that I wanted 
to present so that you can comment on it and, if you'd like, help out with it.


NIO GEN 2

Current Implementation
--
* Non blocking until the entire request has been received
* Blocking for servlet read
* Blocking for servlet write

Suggested implementation

* Non blocking until the entire request has been received(same as above)
* Blocking for servlet read (same as above)
* Non blocking for servlet write (new feature)

Explanation
---
This feature is very much like the SENDFILE feature for static content,
but will work with dynamic content and keep-alive connections.
Of course this could work as a java sendfile for static content as well.

Goal

Thread reduction, as each worker thread only spends the time it takes to 
generate the data.

Thread count is no longer tied to concurrent requests being handled/written
Slow clients will no longer tie up server threads

Features

Ability to have a pluggable "data pool", i.e., to avoid overrunning the
java heap with data to be written;
this component can be swapped for one that writes to disk, and can be
made fairly intelligent.



Negatives
-
Much more memory consumption, and I mean much much more if the data pool 
is purely in memory



Where some work needs to be done

Decouple the InternalXXXOutputBuffer from the socket, to write to a 
memory pool of data to be written
Register the memory pool for writes to a shared selector, the same way we
poll for reads
Recycling the request/response object pair can no longer be done at the 
end of the thread, instead it is done when the data write has completed
When the write is complete, the writer thread re-registers the connection
for READ, making it ready for the next request if keep-alive is requested.
It should be configurable when to use a blocking write (today) vs a
non-blocking write (proposal)

Need to consider how this would work with Comet stuff

I believe this to be a pretty cool idea, one that could work out to
greatly increase the scalability of tomcat. With this implemented, thread
count will no longer be an issue and the ratio of threads to connections
will dramatically change.
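To make the idea concrete, here is a rough, self-contained sketch of such a
write path (all names are illustrative, nothing here exists in Tomcat today;
a single writer thread drains queued response buffers as sockets become
writable):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class NonBlockingWriteSketch implements Runnable {

    // per-connection "data pool" entry: the bytes still to be written
    static final class PendingWrite {
        final SocketChannel channel; // assumed configured non-blocking
        final Queue<ByteBuffer> data = new ConcurrentLinkedQueue<ByteBuffer>();
        PendingWrite(SocketChannel channel) { this.channel = channel; }
    }

    private final Selector selector;
    private final Queue<PendingWrite> toRegister = new ConcurrentLinkedQueue<PendingWrite>();

    public NonBlockingWriteSketch(Selector selector) { this.selector = selector; }

    // called by a worker thread: hand off the response bytes and return at once
    public void queue(PendingWrite pw, ByteBuffer buf) {
        pw.data.add(buf);
        toRegister.add(pw); // registration is done on the writer thread
        selector.wakeup();
    }

    // the single writer thread
    public void run() {
        while (true) {
            try {
                PendingWrite pw;
                while ((pw = toRegister.poll()) != null) {
                    pw.channel.register(selector, SelectionKey.OP_WRITE, pw);
                }
                selector.select(1000);
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    drain(key);
                }
            } catch (IOException x) {
                // sketch only: real code would close the channel and recycle
            }
        }
    }

    private void drain(SelectionKey key) throws IOException {
        PendingWrite pw = (PendingWrite) key.attachment();
        ByteBuffer buf;
        while ((buf = pw.data.peek()) != null) {
            pw.channel.write(buf);
            if (buf.hasRemaining()) return; // socket buffer full, wait for next OP_WRITE
            pw.data.poll();
        }
        // all data written: this is where the connection would be handed back
        // to the read poller, and the request/response pair recycled
        key.cancel();
    }
}

The recycle/keep-alive handoff at the end of drain() is exactly the part the
list of work items above calls out as still needing design.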


I won't be making any commits to trunk on this, as I don't wanna 
jeopardize our 6.0 release. However, if there are people that wanna work 
on this as a team, we could do something inside Tomcat's sandbox.



Ok dokie dudes and dudettes, that's all for now

Filip





-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



svn commit: r468186 - in /tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler: Generator.java JspConfig.java Validator.java

2006-10-26 Thread remm
Author: remm
Date: Thu Oct 26 16:19:13 2006
New Revision: 468186

URL: http://svn.apache.org/viewvc?view=rev&rev=468186
Log:
- Some deferred expressions handling fixes.

Modified:
tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java
tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/JspConfig.java
tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Validator.java

Modified: tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java?view=diff&rev=468186&r1=468185&r2=468186
==
--- tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java Thu Oct 
26 16:19:13 2006
@@ -892,9 +892,9 @@
 
 public void visit(Node.ELExpression n) throws JasperException {
 n.setBeginJavaLine(out.getJavaLine());
-if (!pageInfo.isELIgnored()) {
+if (!pageInfo.isELIgnored() && (n.getEL() != null)) {
 out.printil("out.write("
-+ JspUtil.interpreterCall(this.isTagFile, "${"
++ JspUtil.interpreterCall(this.isTagFile, n.getType() 
+ "{"
 + new String(n.getText()) + "}", String.class,
 n.getEL().getMapName(), false) + ");");
 } else {

Modified: tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/JspConfig.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/JspConfig.java?view=diff&rev=468186&r1=468185&r2=468186
==
--- tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/JspConfig.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/JspConfig.java Thu Oct 
26 16:19:13 2006
@@ -114,7 +114,7 @@
 String isXml = null;
 Vector includePrelude = new Vector();
 Vector includeCoda = new Vector();
-String deferedSyntaxAllowedAsLiteral = null;
+String deferredSyntaxAllowedAsLiteral = null;
 String trimDirectiveWhitespaces = null;
 
 while (list.hasNext()) {
@@ -137,7 +137,7 @@
 else if ("include-coda".equals(tname))
 includeCoda.addElement(element.getBody());
 else if 
("deferred-syntax-allowed-as-literal".equals(tname))
-deferedSyntaxAllowedAsLiteral = element.getBody();
+deferredSyntaxAllowedAsLiteral = element.getBody();
 else if ("trim-directive-whitespaces".equals(tname))
 trimDirectiveWhitespaces = element.getBody();
 }
@@ -195,7 +195,7 @@
 pageEncoding,
 includePrelude,
 includeCoda,
-deferedSyntaxAllowedAsLiteral,
+deferredSyntaxAllowedAsLiteral,
 trimDirectiveWhitespaces);
 JspPropertyGroup propertyGroup =
 new JspPropertyGroup(path, extension, property);

Modified: tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Validator.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Validator.java?view=diff&rev=468186&r1=468185&r2=468186
==
--- tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Validator.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Validator.java Thu Oct 
26 16:19:13 2006
@@ -664,9 +664,11 @@
 
 // JSP.2.2 - '#{' not allowed in template text
 if (n.getType() == '#') {
-if (pageInfo.isDeferredSyntaxAllowedAsLiteral())
+if (!pageInfo.isDeferredSyntaxAllowedAsLiteral()) {
+err.jspError(n, "jsp.error.el.template.deferred");
+} else {
 return;
-err.jspError(n, "jsp.error.el.template.deferred");
+}
 }
 
 // build expression
@@ -1007,10 +1009,7 @@
 // Attribute does not accept any expressions.
 // Make sure its value does not contain any.
 if (isExpression(n, attrs.getValue(i))) {
-err
-.jspError(
-n,
-
"jsp.error.attribute.custom.non_rt_with_expr",
+err .jspError(n, 
"jsp.error.attribute.custom.non_rt_with_expr",

Re: [PROPOSAL] More Non-Non blocking NIO crap

2006-10-26 Thread Remy Maucherat

Filip Hanik - Dev Lists wrote:
gents, so I finally think I have a stable NIO implementation that is 
doing fairly well.


I have an idea for a next generation of the NIO connector that I wanted 
to present so that you can comment on it and, if you'd like, help out with it.


NIO GEN 2

Current Implementation
--
* Non blocking until the entire request has been received
* Blocking for servlet read
* Blocking for servlet write

Suggested implementation

* Non blocking until the entire request has been received(same as above)
* Blocking for servlet read (same as above)
* Non blocking for servlet write (new feature)

Explanation
---
This feature is very much like the SENDFILE feature for static content,
but will work with dynamic content and keep-alive connections.
Of course this could work as a java sendfile for static content as well.

Goal

Thread reduction, as each worker thread only spends the time it takes to 
generate the data.

Thread count is no longer tied to concurrent requests being handled/written
Slow clients will no longer tie up server threads

Features

Ability to have a pluggable "data pool", i.e., to avoid overrunning the
java heap with data to be written;
this component can be swapped for one that writes to disk, and can be
made fairly intelligent.



Negatives
-
Much more memory consumption, and I mean much much more if the data pool 
is purely in memory


I guess it's the trick I documented in the Asynchronous writes section 
of chapter 25: if there's a "large" amount of content that won't change 
too often, then you write it to a dumb file, and then use sendfile.
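For the static-file variant, something similar can already be driven from a
servlet when the connector supports sendfile; a sketch using the request
attributes the APR connector documents (the file path is hypothetical):

import java.io.File;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SendfileServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        File f = new File("/tmp/pregenerated-response.bin"); // hypothetical
        if (Boolean.TRUE.equals(req.getAttribute("org.apache.tomcat.sendfile.support"))) {
            // the connector transmits the file after the servlet returns,
            // without tying up this worker thread for the transfer
            resp.setContentLength((int) f.length());
            req.setAttribute("org.apache.tomcat.sendfile.filename", f.getAbsolutePath());
            req.setAttribute("org.apache.tomcat.sendfile.start", Long.valueOf(0));
            req.setAttribute("org.apache.tomcat.sendfile.end", Long.valueOf(f.length()));
        } else {
            // fall back to a plain blocking write of the file (omitted)
        }
    }
}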


However, using memory to buffer (which could well be the only practical 
implementation if it's to be magically done by the container) is indeed 
expensive, and given threads are simple and not that expensive, I didn't 
see it as very practical (and problem: what if the servlet does a flush 
?). In a way, it's as if the buffer size for the servlet was infinite, 
right ?


It's a nice experiment, though, and could give interesting results.

Rémy

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



svn commit: r468205 - in /tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler: Generator.java Node.java

2006-10-26 Thread remm
Author: remm
Date: Thu Oct 26 17:24:37 2006
New Revision: 468205

URL: http://svn.apache.org/viewvc?view=rev&rev=468205
Log:
- Implement the JspIdConsumer feature.

Modified:
tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java
tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Node.java

Modified: tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java?view=diff&rev=468205&r1=468204&r2=468205
==
--- tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java 
(original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Generator.java Thu Oct 
26 17:24:37 2006
@@ -35,6 +35,7 @@
 
 import javax.el.MethodExpression;
 import javax.el.ValueExpression;
+import javax.servlet.jsp.tagext.JspIdConsumer;
 import javax.servlet.jsp.tagext.TagAttributeInfo;
 import javax.servlet.jsp.tagext.TagInfo;
 import javax.servlet.jsp.tagext.TagVariableInfo;
@@ -2151,7 +2152,7 @@
 out.print(" ");
 out.print(tagHandlerVar);
 out.print(" = ");
-if (isPoolingEnabled) {
+if (isPoolingEnabled && 
!(JspIdConsumer.class.isAssignableFrom(tagHandlerClass))) {
 out.print("(");
 out.print(tagHandlerClassName);
 out.print(") ");
@@ -2305,7 +2306,7 @@
 .println(".doEndTag() == 
javax.servlet.jsp.tagext.Tag.SKIP_PAGE) {");
 out.pushIndent();
 if (!n.implementsTryCatchFinally()) {
-if (isPoolingEnabled) {
+if (isPoolingEnabled && 
!(JspIdConsumer.class.isAssignableFrom(n.getTagHandlerClass( {
 out.printin(n.getTagHandlerPoolName());
 out.print(".reuse(");
 out.print(tagHandlerVar);
@@ -2835,7 +2836,7 @@
 sb.append(getJspContextVar());
 sb.append(".getELContext()");
 sb.append(")");
-} 
+}
 attrValue = sb.toString();
 } else if (attr.isDeferredMethodInput()
 || MethodExpression.class.getName().equals(type)) {
@@ -2925,6 +2926,14 @@
 TagHandlerInfo handlerInfo, boolean simpleTag)
 throws JasperException {
 
+// Set the id of the tag
+if (JspIdConsumer.class.isAssignableFrom(n.getTagHandlerClass())) {
+out.printin(tagHandlerVar);
+out.print(".setJspId(\"");
+out.print(n.getId());
+out.println("\");");
+}
+
 // Set context
 if (simpleTag) {
 // Generate alias map

Modified: tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Node.java
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Node.java?view=diff&rev=468205&r1=468204&r2=468205
==
--- tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Node.java (original)
+++ tomcat/tc6.0.x/trunk/java/org/apache/jasper/compiler/Node.java Thu Oct 26 
17:24:37 2006
@@ -1354,6 +1354,8 @@
  */
 public static class CustomTag extends Node {
 
+private static int id = 0;
+
 private String uri;
 
 private String prefix;
@@ -1624,6 +1626,10 @@
 return this.numCount;
 }
 
+public String getId() {
+return "_" + (++id);
+}
+
 public void setScriptingVars(Vector vec, int scope) {
 switch (scope) {
 case VariableInfo.AT_BEGIN:
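For context, a tag handler opts into this by implementing
javax.servlet.jsp.tagext.JspIdConsumer; per the Generator change above, such
a handler is excluded from handler pooling and receives a generated id before
use. A minimal hypothetical example:

import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.JspIdConsumer;
import javax.servlet.jsp.tagext.TagSupport;

// Hypothetical tag: setJspId() is called before the lifecycle methods run,
// and instances are never handed out by a tag handler pool.
public class IdAwareTag extends TagSupport implements JspIdConsumer {
    private String jspId;

    public void setJspId(String id) {
        this.jspId = id;
    }

    public int doStartTag() throws JspException {
        try {
            pageContext.getOut().print("jspId=" + jspId);
        } catch (java.io.IOException e) {
            throw new JspException(e);
        }
        return SKIP_BODY;
    }
}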



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]