TCP state timeout shouldn't make a big difference; it will just cause a little more communication to re-create the state. When a packet comes from the midtier server to the arserver and a state does not exist on the firewall, the packet will be dropped; the midtier should then send a SYN packet, which creates a new state, at which point the packet will be forwarded and an ACK will go back to the midtier server.
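The drop-then-re-establish behavior described above can be sketched with a toy state table. This is a hypothetical model for illustration only, not any real firewall's API; the `StateTable` name, the one-hour default, and the key scheme are all assumptions:

```python
import time


class StateTable:
    """Toy model of a stateful firewall's TCP state table.

    A packet for an unknown connection is dropped unless it is a SYN,
    which creates a new state; states idle longer than `timeout`
    seconds are purged, forcing the client to re-SYN.
    """

    def __init__(self, timeout=3600.0):  # 1 hour, as in the thread
        self.timeout = timeout
        self.states = {}  # (src, dst) -> last-seen timestamp

    def handle(self, src, dst, syn=False, now=None):
        now = time.monotonic() if now is None else now
        key = (src, dst)
        last = self.states.get(key)
        if last is not None and now - last > self.timeout:
            del self.states[key]  # idle state expired
            last = None
        if last is None and not syn:
            return "DROP"         # no state: midtier must send a new SYN
        self.states[key] = now    # SYN creates a state; traffic refreshes it
        return "PASS"
```

With a one-hour timeout, a mid-conversation packet after an idle hour is dropped, and the next SYN from the midtier re-creates the state, which matches the symptom described later in the thread.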
1 hour is rather short for TCP, but it can be justified in high-traffic environments. I use the following on my firewalls; these are the defaults for the platform I use and are fairly typical of other platforms as well. Note that established TCP states are valid for 24 hours. The firewall is smart enough to start expiring states sooner (adaptive) once a certain threshold of state entries is reached.

TIMEOUTS:
  tcp.first            120s
  tcp.opening           30s
  tcp.established    86400s
  tcp.closing          900s
  tcp.finwait           45s
  tcp.closed            90s
  tcp.tsdiff            30s
  udp.first             60s
  udp.single            30s
  udp.multiple          60s
  icmp.first            20s
  icmp.error            10s
  other.first           60s
  other.single          30s
  other.multiple        60s
  frag                  30s
  interval              10s
  adaptive.start     18000 states
  adaptive.end       36000 states
  src.track              0s

LIMITS:
  states         hard limit    30000
  src-nodes      hard limit    10000
  frags          hard limit     5000
  tables         hard limit     1000
  table-entries  hard limit   200000

I would still suggest looking at the latency between the arserver and the midtier server. If you are getting beyond 20 ms, it's going to cause a noticeable impact on performance. Ideally, in a data center environment, you want to see < 2 ms latency between your servers if the traffic is routed and < 1 ms if the traffic is not routed (switched).

Axton Grams

On Tue, Nov 29, 2011 at 10:22 AM, L G Robinson <[email protected]> wrote:
> ** Hi Folks,
> Thanks to Axton, I believe we have identified the problem. When he suggested
> a "traceroute" between the Mid-tier server and the AR Server, I discovered
> that ICMP traffic was not being routed. That got me thinking that there
> might be some other "issues" with the firewall that had been imposed on me.
> After describing the symptoms to the firewall guy, he instantly knew what
> the problem was and corrected it in seconds. Apparently, their default
> behavior is to set up a one-hour timeout on TCP traffic unless there is a
> Linux machine involved.
> He suggested that the person who set up the rules for
> this particular configuration did not realize that the Mid-tier servers were
> running on Linux. He adjusted the timeout and I am now waiting for an hour
> to elapse so I can see what happens. But confidence is high.
> Thanks to all who replied with suggestions. This job would be a lot harder
> and more frustrating without all of you who are willing to share your
> knowledge and your time. I appreciate you.
> Larry
>
> On Mon, Nov 28, 2011 at 3:41 PM, Axton <[email protected]> wrote:
>>
>> You have a number of variables in play that's going to make it hard to
>> pinpoint the cause of the slowness:
>> - midtier local to arserver host in fast configuration
>> - different tomcat versions
>> - different operating systems
>> - physical vs. virtual machines
>> - different java versions
>>
>> The fact that one is on the same box as the arserver and the other is
>> on a separate physical host is the first thing I'd look at.
>> - What is the route between the linux midtier and the solaris arserver
>>   (latency and hops)?
>>
>> The physical vs. virtual is the next thing I'd look at:
>> - What is the load on the physical server that the virtual server is
>>   running on (mem, cpu, disk)?
>>
>> I don't think the version differences with Java, Tomcat, and the
>> mid-tier software will make a difference, but it could.
>>
>> Axton Grams
>
_attend WWRUG12 www.wwrug.com ARSlist: "Where the Answers Are"_
_______________________________________________________________________________
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
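Since ICMP was being filtered in this environment, one way to check the latency figures discussed in the thread without ping is to time TCP connection setup, which costs roughly one SYN/SYN-ACK round trip. A minimal sketch; `tcp_rtt_ms` and its arguments are hypothetical placeholders, and you would point it at a port the arserver actually listens on:

```python
import socket
import statistics
import time


def tcp_rtt_ms(host, port, samples=5):
    """Estimate round-trip latency by timing TCP connection setup.

    A rough stand-in for ping when ICMP is blocked: each connect()
    costs about one round trip. Returns the median over `samples`
    connections, in milliseconds.
    """
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        rtts.append((time.monotonic() - start) * 1000.0)
    return statistics.median(rtts)
```

Per the guidance above, a median much over 20 ms between the midtier and the arserver would be worth investigating, and under about 2 ms (routed) or 1 ms (switched) is the data-center ideal.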

