Author: fhanik
Date: Fri Jan  5 08:42:38 2007
New Revision: 493078

URL: http://svn.apache.org/viewvc?view=rev&rev=493078
Log:
Completed clustering documentation to a more understandable state.

Modified:
    tomcat/tc6.0.x/trunk/webapps/docs/cluster-howto.xml
    tomcat/tc6.0.x/trunk/webapps/docs/config/cluster-manager.xml
    tomcat/tc6.0.x/trunk/webapps/docs/tribes/introduction.xml

Modified: tomcat/tc6.0.x/trunk/webapps/docs/cluster-howto.xml
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/webapps/docs/cluster-howto.xml?view=diff&rev=493078&r1=493077&r2=493078
==============================================================================
--- tomcat/tc6.0.x/trunk/webapps/docs/cluster-howto.xml (original)
+++ tomcat/tc6.0.x/trunk/webapps/docs/cluster-howto.xml Fri Jan  5 08:42:38 2007
@@ -16,8 +16,7 @@
 
 
 <section name="Important Note">
-<p><b>This document is pending an update to the latest implementation.<br/>
-   You can also check the <a href="config/cluster.html">configuration 
reference documentation.</a></b>
+<p><b>You can also check the <a href="config/cluster.html">configuration 
reference documentation.</a></b>
 </p>
 </section>
 
@@ -27,12 +26,17 @@
     to your <code>&lt;Engine&gt;</code> or your <code>&lt;Host&gt;</code> 
element to enable clustering.
   </p>
   <p>
-    Using the above configuration will enable all to all session replication
-    using the <code>DeltaManager</code> to replicate session deltas.<br/>
+    Using the above configuration will enable all-to-all session replication
+    using the <code>DeltaManager</code> to replicate session deltas. By
+    all-to-all we mean that the session gets replicated to all the other
+    nodes in the cluster. This works great for smaller clusters, but we don't
+    recommend it for larger clusters (many Tomcat nodes).
+    Also, when using the DeltaManager, sessions are replicated to all nodes,
+    even nodes that don't have the application deployed.<br/>
+    To get around this problem, you'll want to use the BackupManager. This
+    manager only replicates the session data to one backup
+    node, and only to nodes that have the application deployed. The downside
+    of the BackupManager is that it is not quite as battle tested as the
+    DeltaManager.
+    <br/>
     Here are some of the important default values:<br/>
     1. Multicast address is 228.0.0.4<br/>
-    2. Multicast port is 45564<br/>
-    3. The IP broadcasted is 
<code>java.net.InetAddress.getLocalHost().getHostAddress()</code><br/>
+    2. Multicast port is 45564 (the port and the address together
+    determine cluster membership)<br/>
+    3. The IP broadcasted is
+    <code>java.net.InetAddress.getLocalHost().getHostAddress()</code> (make
+    sure you don't broadcast 127.0.0.1; this is a common error)<br/>
     4. The TCP port listening for replication messages is the first available 
server socket in range <code>4000-4100</code><br/>
     5. Two listeners are configured <code>ClusterSessionListener</code> and 
<code>JvmRouteSessionIDBinderListener</code><br/>
     6. Two interceptors are configured <code>TcpFailureDetector</code> and 
<code>MessageDispatch15Interceptor</code><br/>
@@ -67,6 +71,7 @@
 
           &lt;Valve 
className=&quot;org.apache.catalina.ha.tcp.ReplicationValve&quot;
                  filter=&quot;&quot;/&gt;
+          &lt;Valve 
className=&quot;org.apache.catalina.ha.session.JvmRouteBinderValve&quot;/&gt;
 
           &lt;Deployer 
className=&quot;org.apache.catalina.ha.deploy.FarmWarDeployer&quot;
                     tempDir=&quot;/tmp/war-temp/&quot;
@@ -79,6 +84,7 @@
         &lt;/Cluster&gt;    
     </source>
   </p>
+  <p>This configuration will be covered in more detail later in this document.</p>
 </section>
 
 <section name="Cluster Basics">
@@ -86,23 +92,25 @@
 <p>To run session replication in your Tomcat 6.0 container, the following steps
 should be completed:</p>
 <ul>
-<li>All your session attributes must implement 
<code>java.io.Serializable</code></li>
-<li>Uncomment the <code>Cluster</code> element in server.xml</li>
-<li>Uncomment the <code>Valve(ReplicationValve)</code> element in 
server.xml</li>
-<li>If your Tomcat instances are running on the same machine, make sure the 
<code>tcpListenPort</code>
-    attribute is unique for each instance.</li>
-<li>Make sure your <code>web.xml</code> has the 
<code>&lt;distributable/&gt;</code> element 
-    or set at your <code>&lt;Context distributable="true" /&gt;</code></li>
-<li>If you are using mod_jk, make sure that jvmRoute attribute is set at your 
Engine <code>&lt;Engine name="Catalina" jvmRoute="node01" &gt;</code>
-    and that the jvmRoute attribute value matches your worker name in 
workers.properties</li>
-<li>Make sure that all nodes have the same time and sync with NTP service!</li>
-<li>Make sure that your loadbalancer is configured for sticky session 
mode.</li>
+  <li>All your session attributes must implement 
<code>java.io.Serializable</code></li>
+  <li>Uncomment the <code>Cluster</code> element in server.xml</li>
+  <li>If you have defined custom cluster valves, make sure you have the
+      <code>ReplicationValve</code> defined as well under the Cluster element
+      in server.xml</li>
+  <li>If your Tomcat instances are running on the same machine, make sure the
+      <code>tcpListenPort</code> attribute is unique for each instance. In
+      most cases Tomcat is smart enough to resolve this on its own by
+      autodetecting available ports in the range 4000-4100</li>
+  <li>Make sure your <code>web.xml</code> has the 
<code>&lt;distributable/&gt;</code> element 
+      or set at your <code>&lt;Context distributable="true" /&gt;</code></li>
+  <li>If you are using mod_jk, make sure that jvmRoute attribute is set at 
your Engine <code>&lt;Engine name="Catalina" jvmRoute="node01" &gt;</code>
+      and that the jvmRoute attribute value matches your worker name in 
workers.properties</li>
+  <li>Make sure that all nodes have the same time and sync with NTP 
service!</li>
+  <li>Make sure that your loadbalancer is configured for sticky session 
mode.</li>
 </ul>
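To illustrate the jvmRoute step above, a server.xml fragment might look like the sketch below. The worker name "node01" is a placeholder and must match the corresponding worker name in your workers.properties.

```xml
<!-- Sketch only: jvmRoute on the Engine must match the corresponding
     worker name in workers.properties ("node01" is a placeholder) -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
  <!-- Host, Cluster, etc. -->
</Engine>
```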
 <p>Load balancing can be achieved through many techniques, as seen in the
 <a href="balancer-howto.html">Load Balancing</a> chapter.</p>
 <p>Note: Remember that your session state is tracked by a cookie, so your URL
    must look the same from the outside; otherwise, a new session will be
    created.</p>
 <p>Note: Clustering support currently requires the JDK version 1.5 or 
later.</p>
+<p>The Cluster module uses the Tomcat JULI logging framework, so you can
+   configure logging through the regular logging.properties file. To track
+   messages, you can enable logging on the key
+   <code>org.apache.catalina.tribes.MESSAGES</code>.</p>
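As a sketch, enabling that logging key in conf/logging.properties might look like this; the level value is an assumption, and any JULI level such as FINE or FINER can be used:

```properties
# Hypothetical logging.properties fragment: turn up logging for the
# Tribes message key mentioned above (level chosen for illustration)
org.apache.catalina.tribes.MESSAGES.level = FINE
```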
 </section>
 
 
@@ -112,16 +120,15 @@
 <ol>
   <li>Using session persistence, and saving the session to a shared file 
system (PersistenceManager + FileStore)</li>
   <li>Using session persistence, and saving the session to a shared database 
(PersistenceManager + JDBCStore)</li>
-  <li>Using in-memory-replication, using the SimpleTcpCluster that ships with 
Tomcat 5 (server/lib/catalina-cluster.jar)</li>
+  <li>Using in-memory-replication, using the SimpleTcpCluster that ships with 
Tomcat 6 (lib/catalina-tribes.jar + lib/catalina-ha.jar)</li>
 </ol>
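For reference, option 1 above could be sketched roughly as follows in a Context configuration; the shared directory path is a placeholder, and the Manager component reference should be consulted for the authoritative attribute list:

```xml
<!-- Illustrative sketch of PersistentManager + FileStore;
     the shared directory path is a placeholder -->
<Context>
  <Manager className="org.apache.catalina.session.PersistentManager"
           maxIdleBackup="10">
    <Store className="org.apache.catalina.session.FileStore"
           directory="/mnt/shared/sessions"/>
  </Manager>
</Context>
```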
 
-<p>In this release of session replication, Tomcat performs an all-to-all 
replication of session state.
-
-   This is an algorithm that is only efficient when the clusters are small. 
For large clusters, the next
-   release will support a primary-secondary session replication where the 
session will only be stored at one
-   or maybe two backup servers. 
+<p>In this release of session replication, Tomcat can perform an all-to-all
+   replication of session state using the <code>DeltaManager</code>, or
+   perform backup replication to only one node using the
+   <code>BackupManager</code>.
+   The all-to-all replication is an algorithm that is only efficient when the
+   clusters are small. For larger clusters, simply set up the BackupManager to
+   use a primary-secondary session replication, where the session will only be
+   stored at one backup server. <br/>
    Currently you can use the domain worker attribute (mod_jk &gt; 1.2.8) to 
build cluster partitions
-   with the potential of very scaleable cluster solution.
+   with the potential of a more scalable cluster solution with the
+   DeltaManager (you'll need to configure the domain interceptor for this).
    In order to keep the network traffic down in an all-to-all environment, you 
can split your cluster
    into smaller groups. This can be easily achieved by using different 
multicast addresses for the different groups.
    A very simple setup would look like this:
@@ -165,41 +172,32 @@
     sent over the wire and reinstantiated on all the other cluster nodes.
     Synchronous vs asynchronous is configured using the 
<code>channelSendOptions</code>
     flag and is an integer value. The default value for the 
<code>SimpleTcpCluster/DeltaManager</code> combo is
-    8, which is asynchronous. You can read more on the <a 
href="#pointer-to-Tribes-Channel-Javadoc-here">send flag</a>.
+    8, which is asynchronous. You can read more on the
+    <a href="tribes/introduction.html">send flag (overview)</a> or the
+    <a href="http://tomcat.apache.org/tomcat-6.0-doc/api/org/apache/catalina/tribes/Channel.html">send flag (javadoc)</a>.
     During async replication, the request is returned before the data has
     been replicated. Async replication yields shorter request times, and
     synchronous replication guarantees the session to be replicated before
     the request returns.
 </p>
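As a rough guide, the individual flag values below are taken from the Tribes Channel interface constants (verify them against the javadoc for your version); they can be OR'd together:

```xml
<!-- Sketch of common channelSendOptions values (from the Channel
     interface constants; verify against the javadoc):
       2 = SEND_OPTIONS_USE_ACK
       4 = SEND_OPTIONS_SYNCHRONIZED_ACK
       8 = SEND_OPTIONS_ASYNCHRONOUS
     e.g. 6 = 2 + 4 (synchronous, acknowledged replication) -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <!-- ... -->
</Cluster>
```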
-
+</section>
 
 <section name="Bind session after crash to failover node">
 <p>
-  As you configure more then two nodes at same cluster for backup, most 
loadbalancer
-  send don't all your requests after failover to the same node.
+    If you are using mod_jk and not using sticky sessions, or for some reason
+    sticky sessions don't work, or you are simply failing over, the session id
+    will need to be modified, as it previously contained the worker id of the
+    previous Tomcat (as defined by jvmRoute in the Engine element).
+    To solve this, we will use the JvmRouteBinderValve.
 </p>
 <p> 
-    The JvmRouteBinderValve handle tomcat jvmRoute takeover using mod_jk 
module after node
-    failure. After a node crashed the next request going to other cluster 
node. The JvmRouteBinderValve 
-    now detect the takeover and rewrite the jsessionid
-    information to the backup cluster node. After the next response all client
-    request goes direct to the backup node. The change sessionid send also to 
all
-    other cluster nodes. Well, now the session stickyness work directly to the
-    backup node, but traffic don't go back too restarted cluster nodes!<br/>
-    As jsessionid was created by cookie, the change JSESSIONID cookie resend 
with next response.
+    The JvmRouteBinderValve rewrites the session id to ensure that the next
+    request will remain sticky (and not fall back to random nodes, since the
+    worker is no longer available) after a fail over.
+    The valve rewrites the JSESSIONID value in the cookie with the same name.
+    Not having this valve in place will make it harder to ensure stickiness in
+    case of a failure for the mod_jk module.
 </p>
 <p>
-    You must add JvmRouteBinderValve and the corresponding cluster message 
listener JvmRouteSessionIDBinderListener.
-    As you add the new listener you must also add the default 
ClusterSessionListener that receiver the normal cluster messages.
-
-<source>
-&lt;Cluster className="org.apache.catalina.tcp.SimpleTcpCluster" &gt;
-...
-     &lt;Valve 
className="org.apache.catalina.cluster.session.JvmRouteBinderValve"
-               enabled="true" sessionIdAttribute="takeoverSessionid"/&gt;      
-     &lt;ClusterListener 
className="org.apache.catalina.cluster.session.JvmRouteSessionIDBinderListener" 
/&gt;
-     &lt;ClusterListener 
className="org.apache.catalina.cluster.session.ClusterSessionListener" /&gt;
-...
-&lt;Cluster&gt;
-</source>
+    By default, if no valves are configured, the JvmRouteBinderValve is added
+    automatically.
+    The cluster message listener called JvmRouteSessionIDBinderListener is
+    also defined by default, and is used to actually rewrite the
+    session id on the other nodes in the cluster once a fail over has occurred.
+    Remember, if you are adding your own valves or cluster listeners in
+    server.xml, then the defaults are no longer valid;
+    make sure that you add in all the appropriate valves and listeners as
+    defined by the default.
 </p>
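A sketch of what explicitly configuring those defaults might look like, using the class names shown elsewhere in this document (treat this as illustrative rather than authoritative):

```xml
<!-- Sketch: valves and listeners matching the described defaults -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <!-- ... channel configuration ... -->
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
```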
 <p>
     <b>Hint:</b><br/>
@@ -214,12 +212,12 @@
     This use case means that only requested sessions are migrated.
 </p>
 
-</section>
+
 
 </section>
 
 <section name="Configuration Example">
-<source>
+    <source>
         &lt;Cluster 
className=&quot;org.apache.catalina.ha.tcp.SimpleTcpCluster&quot;
                  channelSendOptions=&quot;6&quot;&gt;
 
@@ -263,7 +261,153 @@
 
           &lt;ClusterListener 
className=&quot;org.apache.catalina.ha.session.ClusterSessionListener&quot;/&gt;
         &lt;/Cluster&gt;
-</source>
+    </source>
+    <p>
+      Break it down!!
+    </p>
+    <source>
+        &lt;Cluster 
className=&quot;org.apache.catalina.ha.tcp.SimpleTcpCluster&quot;
+                 channelSendOptions=&quot;6&quot;&gt;
+    </source>
+    <p>
+      The main element; inside this element all cluster details can be
+      configured.
+      The <code>channelSendOptions</code> is the flag that is attached to each
+      message sent by the SimpleTcpCluster class or any objects that are
+      invoking the SimpleTcpCluster.send method.
+      The description of the send flags is available at <a href="http://tomcat.apache.org/tomcat-6.0-doc/api/org/apache/catalina/tribes/Channel.html">
+      our javadoc site</a>.
+      The <code>DeltaManager</code> sends information using the
+      SimpleTcpCluster.send method, while the backup manager
+      sends it directly through the channel.
+      <br/>For more info, please visit the <a href="config/cluster.html">reference documentation</a>
+    </p>
+    <source>
+          &lt;Manager 
className=&quot;org.apache.catalina.ha.session.BackupManager&quot;
+                   expireSessionsOnShutdown=&quot;false&quot;
+                   notifyListenersOnReplication=&quot;true&quot;
+                   mapSendOptions=&quot;6&quot;/&gt;
+          &lt;!--
+          &lt;Manager 
className=&quot;org.apache.catalina.ha.session.DeltaManager&quot;
+                   expireSessionsOnShutdown=&quot;false&quot;
+                   notifyListenersOnReplication=&quot;true&quot;/&gt;
+          --&gt;        
+    </source>
+    <p>
+        This is a template for the manager configuration that will be used if
+        no manager is defined in the &lt;Context&gt; element. In Tomcat 5.x,
+        each webapp marked distributable had to use the same manager; this is
+        no longer the case. Since Tomcat 6 you can define a manager class for
+        each webapp, so that you can mix managers in your cluster.
+        Obviously the manager for an application on one node has to correspond
+        to the same manager for the same application on the other nodes.
+        If no manager has been specified for the webapp, and the webapp is
+        marked &lt;distributable/&gt;, Tomcat will take this manager
+        configuration and create a manager instance cloning this configuration.
+        <br/>For more info, please visit the <a href="config/cluster-manager.html">reference documentation</a>
+    </p>
+    <source>
+          &lt;Channel 
className=&quot;org.apache.catalina.tribes.group.GroupChannel&quot;&gt;
+    </source>
+    <p>
+        The channel element is <a href="tribes/introduction.html">Tribes</a>,
+        the group communication framework used inside Tomcat. This element
+        encapsulates everything that has to do with communication and
+        membership logic.
+        <br/>For more info, please visit the <a href="config/cluster-channel.html">reference documentation</a>
+    </p>
+    <source>
+            &lt;Membership 
className=&quot;org.apache.catalina.tribes.membership.McastService&quot;
+                        address=&quot;228.0.0.4&quot;
+                        port=&quot;45564&quot;
+                        frequency=&quot;500&quot;
+                        dropTime=&quot;3000&quot;/&gt;
+    </source>
+    <p>
+        Membership is done using multicasting. Please note that Tribes also
+        supports static memberships using the
+        <code>StaticMembershipInterceptor</code> if you want to extend your
+        membership to points beyond multicasting.
+        The address attribute is the multicast address used, and the port is
+        the multicast port. These two together
+        create the cluster separation. If you want a QA cluster and a
+        production cluster, the easiest config is to
+        have the QA cluster on a separate multicast address/port
+        combination from the production cluster.<br/>
+        The membership component broadcasts the TCP address/port of itself to
+        the other nodes so that communication between
+        nodes can be done over TCP. Please note that the address being
+        broadcasted is the one of the
+        <code>Receiver.address</code> attribute.
+        <br/>For more info, please visit the <a href="config/cluster-membership.html">reference documentation</a>
+    </p>
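As a rough sketch of the static membership alternative mentioned above (host, port and uniqueId values are placeholders; consult the interceptor documentation before relying on this):

```xml
<!-- Illustrative only: one static member defined inside the channel -->
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
  <!-- ... receiver, sender, membership ... -->
  <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
    <Member className="org.apache.catalina.tribes.membership.StaticMember"
            host="192.168.1.10"
            port="4000"
            uniqueId="{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
  </Interceptor>
</Channel>
```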
+    <source>
+            &lt;Receiver 
className=&quot;org.apache.catalina.tribes.transport.nio.NioReceiver&quot;
+                      address=&quot;auto&quot;
+                      port=&quot;5000&quot;
+                      selectorTimeout=&quot;100&quot;
+                      maxThreads=&quot;6&quot;/&gt;
+    </source>
+    <p>
+        In Tribes, the logic of sending and receiving data has been broken
+        into two functional components. The Receiver, as the name suggests,
+        is responsible for receiving messages. Since the Tribes stack is
+        threadless (a popular improvement now adopted by other frameworks as
+        well), there is a thread pool in this component that has a maxThreads
+        and minThreads setting.<br/>
+        The address attribute is the host address that will be broadcasted by
+        the membership component to the other nodes.
+        <br/>For more info, please visit the <a href="config/cluster-receiver.html">reference documentation</a>
+    </p>
+    <source>
+
+            &lt;Sender 
className=&quot;org.apache.catalina.tribes.transport.ReplicationTransmitter&quot;&gt;
+              &lt;Transport 
className=&quot;org.apache.catalina.tribes.transport.nio.PooledParallelSender&quot;/&gt;
+            &lt;/Sender&gt;
+    </source>
+    <p>
+        The sender component, as the name indicates, is responsible for
+        sending messages to other nodes.
+        The sender has a shell component, the
+        <code>ReplicationTransmitter</code>, but the real work is done in the
+        sub-component, <code>Transport</code>.
+        Tribes supports having a pool of senders, so that messages can be sent
+        in parallel, and if using the NIO sender,
+        you can send messages concurrently as well.<br/>
+        Concurrently means one message to multiple senders at the same time,
+        and parallel means multiple messages to multiple senders
+        at the same time.
+        <br/>For more info, please visit the <a href="config/cluster-sender.html">reference documentation</a>
+    </p>
+    <source>
+            &lt;Interceptor 
className=&quot;org.apache.catalina.tribes.group.interceptors.TcpFailureDetector&quot;/&gt;
+            &lt;Interceptor 
className=&quot;org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor&quot;/&gt;
+            &lt;Interceptor 
className=&quot;org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor&quot;/&gt;
+          &lt;/Channel&gt;
+    </source>
+    <p>
+        Tribes uses a stack to send messages through. Each element in the 
stack is called an interceptor, and works much like the valves do 
+        in the Tomcat servlet container.
+        Using interceptors, logic can be broken into more manageable pieces of
+        code. The interceptors configured above are:<br/>
+        TcpFailureDetector - verifies crashed members through TCP; if
+        multicast packets get dropped, this interceptor protects against false
+        positives, i.e. a node being marked as crashed even though it is still
+        alive and running.<br/>
+        MessageDispatch15Interceptor - dispatches messages to a thread (thread
+        pool) to send messages asynchronously.<br/>
+        ThroughputInterceptor - prints out simple stats on message
+        traffic.<br/>
+        Please note that the order of interceptors is important. The way they
+        are defined in server.xml is the way they are represented in the
+        channel stack. Think of it as a linked list, with the head being the
+        first interceptor and the tail the last.
+        <br/>For more info, please visit the <a href="config/cluster-interceptor.html">reference documentation</a>
+    </p>
+    <source>
+          &lt;Valve 
className=&quot;org.apache.catalina.ha.tcp.ReplicationValve&quot;
+                 
filter=&quot;.*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;&quot;/&gt;
+    </source>
+    <p>
+        The cluster uses valves to track requests to web applications; we've
+        mentioned the ReplicationValve and the JvmRouteBinderValve above.
+        The &lt;Cluster&gt; element itself is not part of the pipeline in
+        Tomcat; instead the cluster adds the valve to its parent container.
+        If the &lt;Cluster&gt; element is configured in the &lt;Engine&gt;
+        element, the valves get added to the engine, and so on.
+        <br/>For more info, please visit the <a href="config/cluster-valve.html">reference documentation</a>
+    </p>
+    <source>
+          &lt;Deployer 
className=&quot;org.apache.catalina.ha.deploy.FarmWarDeployer&quot;
+                    tempDir=&quot;/tmp/war-temp/&quot;
+                    deployDir=&quot;/tmp/war-deploy/&quot;
+                    watchDir=&quot;/tmp/war-listen/&quot;
+                    watchEnabled=&quot;false&quot;/&gt;
+    </source>
+    <p>
+        The default Tomcat cluster supports farmed deployment, i.e. the
+        cluster can deploy and undeploy applications on the other nodes.
+        The state of this component is currently in flux but will be addressed
+        soon. There was a change in the deployment algorithm
+        between Tomcat 5.0 and 5.5, and at that point the logic of this
+        component changed to where the deploy dir has to match the
+        webapps directory.
+        <br/>For more info, please visit the <a href="config/cluster-deployer.html">reference documentation</a>
+    </p>
+    <source>
+          &lt;ClusterListener 
className=&quot;org.apache.catalina.ha.session.ClusterSessionListener&quot;/&gt;
+        &lt;/Cluster&gt;
+    </source>
+    <p>
+        Since the SimpleTcpCluster itself is a sender and receiver of the
+        Channel object, components can register themselves as listeners to
+        the SimpleTcpCluster. The listener above,
+        <code>ClusterSessionListener</code>, listens for DeltaManager
+        replication messages and applies the deltas to the manager, which in
+        turn applies them to the session.
+        <br/>For more info, please visit the <a href="config/cluster-listener.html">reference documentation</a>
+    </p>
+    
 </section>
 
 <section name="Cluster Architecture">
@@ -287,28 +431,34 @@
         |             -- Manager
         |                   \
         |                   -- DeltaManager
+        |                   -- BackupManager
         |
      ---------------------------
         |                       \
       Channel                    \
     ----------------------------- \
-     |          |         |        \
-   Receiver    Sender   Membership  \
-                                     -- Valve
-                                     |      \
-                                     |       -- ReplicationValve
-                                     |       -- JvmRouteBinderValve 
-                                     |
-                                     -- LifecycleListener 
-                                     |
-                                     -- ClusterListener 
-                                     |      \
-                                     |       -- ClusterSessionListener
-                                     |       -- JvmRouteSessionIDBinderListener
-                                     |
-                                     -- Deployer 
-                                            \
-                                             -- FarmWarDeployer
+        |                          \
+     Interceptor_1 ..               \
+        |                            \
+     Interceptor_N                    \
+    -----------------------------      \
+     |          |         |             \
+   Receiver    Sender   Membership       \
+                                         -- Valve
+                                         |      \
+                                         |       -- ReplicationValve
+                                         |       -- JvmRouteBinderValve 
+                                         |
+                                         -- LifecycleListener 
+                                         |
+                                         -- ClusterListener 
+                                         |      \
+                                         |       -- ClusterSessionListener
+                                         |       -- 
JvmRouteSessionIDBinderListener
+                                         |
+                                         -- Deployer 
+                                                \
+                                                 -- FarmWarDeployer
       
       
 </source>
@@ -497,39 +647,6 @@
     <td>The complete cluster element</td>
     <td><code>type=Cluster</code></td>
     <td><code>type=Cluster,host=${HOST}</code></td>
-  </tr>
- 
-  <tr>
-    <td>ClusterSender</td>
-    <td>Configuration and stats of the sender infrastructure</td>
-    <td><code>type=ClusterSender</code></td>
-    <td><code>type=ClusterSender,host=${HOST}</code></td>
-  </tr>
- 
-  <tr>
-    <td>ClusterReceiver</td>
-    <td>Configuration and stats of the recevier infrastructure</td>
-    <td><code>type=ClusterReceiver</code></td>
-    <td><code>type=ClusterReceiver,host=${HOST}</code></td>
-  </tr>
-
-  <tr>
-    <td>ClusterMembership</td>
-    <td>Configuration and stats of the membership infrastructure</td>
-    <td><code>type=ClusterMembership</code></td>
-    <td><code>type=ClusterMembership,host=${HOST}</code></td>
-  </tr>
-
-  <tr>
-    <td>IDataSender</td>
-    <td>For every cluster member it exist a sender mbeans. 
-    It exists speziall MBeans to all replication modes</td>
-    <td><code>type=IDataSender,
-        senderAddress=${MEMBER.SENDER.IP},
-        senderPort=${MEMBER.SENDER.PORT}</code></td>
-    <td><code>type=IDataSender,host=${HOST},
-        senderAddress=${MEMBER.SENDER.IP},
-        senderPort=${MEMBER.SENDER.PORT}</code></td>
   </tr>
  
   <tr>

Modified: tomcat/tc6.0.x/trunk/webapps/docs/config/cluster-manager.xml
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/webapps/docs/config/cluster-manager.xml?view=diff&rev=493078&r1=493077&r2=493078
==============================================================================
--- tomcat/tc6.0.x/trunk/webapps/docs/config/cluster-manager.xml (original)
+++ tomcat/tc6.0.x/trunk/webapps/docs/config/cluster-manager.xml Fri Jan  5 
08:42:38 2007
@@ -43,17 +43,10 @@
 </section>
 
 <section name="Attributes">
-
   <subsection name="Common Attributes">
     <attributes>
      <attribute name="className" required="true">
      </attribute>
-     <attribute name="domainReplication" required="false">
-      Set to true if you wish sessions to be replicated only to members that 
have the same logical
-      domain set. If set to false, session replication will ignore the domain 
setting the 
-      <code><a href="cluster-membership.html">&lt;Membership&gt;</a></code>
-      element.
-     </attribute>
      <attribute name="name" required="false">
      <b>The name of this cluster manager; the name is used to identify a
      session manager on a node.
      The name might get modified by the <code>Cluster</code> element to make
      it unique in the container.</b>
@@ -65,26 +58,39 @@
        Set to <code>true</code> if you wish to have session listeners notified 
when
        session attributes are being replicated or removed across Tomcat nodes 
in the cluster.
      </attribute>
+     <attribute name="expireSessionsOnShutdown" required="false">
+       When a web application is being shut down, Tomcat issues an expire call
+       to each session to notify all the listeners. If you wish for all
+       sessions to expire on all nodes when a shutdown occurs on one node, set
+       this value to <code>true</code>.
+       Default value is <code>false</code>.
+     </attribute>
+
     </attributes>
   </subsection> 
   <subsection name="org.apache.catalina.ha.session.DeltaManager Attributes">
     <attributes>
+     <attribute name="domainReplication" required="false">
+      Set to true if you wish sessions to be replicated only to members that
+      have the same logical domain set. If set to false, session replication
+      will ignore the domain setting in the
+      <code><a href="cluster-membership.html">&lt;Membership&gt;</a></code>
+      element.
+     </attribute>
      <attribute name="expireSessionsOnShutdown" required="false">
        When a webapplication is being shutdown, Tomcat issues an expire call 
to each session to 
        notify all the listeners. If you wish for all sessions to expire on all 
nodes when
        a shutdown occurs on one node, set this value to <code>true</code>.
        Default value is <code>false</code>.
      </attribute>
-
-  </attributes>
-
-
+    </attributes>
+  </subsection>
+  <subsection name="org.apache.catalina.ha.session.BackupManager Attributes">
+    <attributes>
+     <attribute name="mapSendOptions" required="false">
+       The backup manager uses a replicated map; this map is used to send and
+       receive messages. You can set up the flag for how this map sends
+       messages; the default value is <code>8</code> (asynchronous).
+     </attribute>
+    </attributes>
   </subsection>
-
-
 </section>
-
-
 </body>
-
 </document>

Modified: tomcat/tc6.0.x/trunk/webapps/docs/tribes/introduction.xml
URL: 
http://svn.apache.org/viewvc/tomcat/tc6.0.x/trunk/webapps/docs/tribes/introduction.xml?view=diff&rev=493078&r1=493077&r2=493078
==============================================================================
--- tomcat/tc6.0.x/trunk/webapps/docs/tribes/introduction.xml (original)
+++ tomcat/tc6.0.x/trunk/webapps/docs/tribes/introduction.xml Fri Jan  5 
08:42:38 2007
@@ -2,7 +2,7 @@
 <!DOCTYPE document [
   <!ENTITY project SYSTEM "project.xml">
 ]>
-<document url="introduction.html">
+<document url="tribes.html">
 
     &project;
 
@@ -244,87 +244,12 @@
 
 <section name="Where can I get Tribes">
   <p>
-    I hope you have enjoyed this short introduction to Tribes. You can 
download <a href="../apache-tribes.jar">Tribes here</a>
-    or you can download Tribes <a href="../tribes-all.zip">including javadoc 
and this doc</a>
+    
   </p>
 
 
 </section>
 
-<!--
-<section name="Cluster Configuration for ReplicationTransmitter">
-<p>
-List of Attributes<br/>
-<table border="1" cellpadding="5">
-
-  <tr>
-    <th align="center" bgcolor="aqua">Attribute</th>
-    <th align="center" bgcolor="aqua">Description</th>
-    <th align="center" bgcolor="aqua">Default value</th>
-  </tr>
-
-  <tr>
-    <td>replicationMode</td>
-    <td>replication mode (<em>synchronous</em>, <em>pooled</em>, 
<em>asynchronous</em> or <em>fastasyncqueue</em>)
-    </td>
-    <td><code>pooled</code></td>
-  </tr>
-
-  <tr>
-    <td>processSenderFrequency</td>
-    <td>Control the sender keepalive status and drop sender socket connection 
after timeout is reached.
-    Check every processSenderFrequency value engine background ticks.
-    </td>
-    <td><code>2</code></td>
-  </tr>
-
-  <tr>
-    <td>compress</td>
-    <td>compress bytes before sending (consume memory, but reduce network 
traffic - GZIP)</td>
-    <td><code>false</code></td>
-  </tr>
-
-  <tr>
-    <td>ackTimeout</td>
-    <td>acknowledge timeout and only usefull it waitForAck is true</td>
-    <td><code>15000</code></td>
-  </tr>
-
-  <tr>
-    <td>waitForAck</td>
-    <td>Wait for ack after data send</td>
-    <td><code>false</code></td>
-  </tr>
-
-  <tr>
-    <td>autoConnect</td>
-    <td>is sender disabled, fork a new socket</td>
-    <td><code>false</code></td>
-  </tr>
-
-  <tr>
-    <td>doTransmitterProcessingStats</td>
-    <td>create processing time stats</td>
-    <td><code>false</code></td>
-  </tr>
-</table>
-</p>
-<p>
-Example to get statistic information, wait for ack at every message send and 
transfer at compressed mode<br/>
-<source>
-    &lt;Sender
-      className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
-      replicationMode="fastasyncqueue"
-      compress="true"
-      doTransmitterProcessingStats="true"
-      ackTimeout="15000"
-      waitForAck="true"
-      autoConnect="false"/&gt;
-</source>
-</p>
-</section>
-
--->
 </body>
 
 </document>


