On Fri, Jun 22, 2012 at 12:41 PM, Stan Hoeppner <s...@hardwarefreak.com> wrote:
> On 6/22/2012 2:22 AM, Muhammad Yousuf Khan wrote:
>> On Thu, Jun 21, 2012 at 9:45 PM, Stan Hoeppner <s...@hardwarefreak.com>
>> wrote:
>>> On 6/21/2012 8:54 AM, Muhammad Yousuf Khan wrote:
>>>
>>>> Yes, I am aware of jumbo frames and have played a bit with them in
>>>> Openfiler; thanks for reminding me of that. BTW, are you getting
>>>> 600Mbps with jumbo frames?
>>>
>>> I don't use jumbo frames here because:
>>>
>>> 1. Not all the desktop NICs support them
>>> 2. No single host _needs_ maximum GbE throughput;
>>>    we don't do large single-file transfers
>>> 3. The servers can hit wire speed doing parallel xfers
>>>    without using jumbo frames
>>> 4. My SAN is fibre channel
>>>
>>> I have done testing with GbE and 9000 byte frames and the information I
>>> [...]
>>
>> With reference to Bruno's point: he says the bottleneck could be at the
>> HD end regardless of what size of RAM or processor we are using. So my
>> question is: have you tested this on RAID 1?
>
> Before you even progress to the things below, you must run iperf to
> obtain a maximum baseline performance. That is the measure of your TCP
> transmit/receive throughput. Then you know what your target maximum is
> when you tune these other things.
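(A side note on the jumbo frame part above: when I re-test them, raising
the MTU on Linux looks roughly like the sketch below. eth0 is just an
example interface name, and the switch and the far end must also pass
9000-byte frames. A sketch, not a tuning recommendation.)

    # raise the MTU for a quick test (not persistent across reboots)
    ip link set dev eth0 mtu 9000
    # verify that 9000-byte frames pass end to end without fragmenting
    # (payload 8972 = 9000 - 20 byte IP header - 8 byte ICMP header)
    ping -M do -s 8972 10.X.X.7
    # then re-run iperf and compare against the MTU-1500 baseline
    iperf -c 10.X.X.7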
OK, here you go with the details. This is the storage server (the
NAS/SAN box):

root@nasbox:/# iperf -c 10.X.X.7 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.X.X.7, TCP port 5001
TCP window size: 65.2 KByte (default)
------------------------------------------------------------
[  5] local 10.X.X.15 port 33819 connected with 10.X.X.7 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   744 MBytes   624 Mbits/sec
[  4] local 10.X.X.15 port 5001 connected with 10.X.X.7 port 59971
[  4]  0.0-10.0 sec   876 MBytes   734 Mbits/sec

And here is my virtualization server, based on Lenny with Qemu/KVM:

lion:/mnt/vmbk# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.X.X.7 port 5001 connected with 10.X.X.15 port 33819
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   744 MBytes   623 Mbits/sec
------------------------------------------------------------
Client connecting to 10.X.X.15, TCP port 5001
TCP window size:  539 KByte (default)
------------------------------------------------------------
[  4] local 10.X.X.7 port 59971 connected with 10.X.X.15 port 5001
Waiting for server threads to complete. Interrupt again to force quit.
[  4]  0.0-10.0 sec   876 MBytes   735 Mbits/sec

>> as I believe reads/writes will heavily affect the performance.
>>
>> Second question: have you tested this on common SATA drives?
>>
>> 3. Are you using Linux iSCSI, or other sharing methods like FTP,
>> Samba, etc.? And if iSCSI, how reliable can it be? I have had a
>> somewhat bad experience with Openfiler and iSCSI connections to XP
>> clients, so I want to ask your opinion.
>>
>> 4. Are the test results you have shown only tests, or are you running
>> this in production? (You know, reliability is also something I need
>> to know, as I am going to be trying this in production.)
>>
>> 5. Would you please share some details of your SAN box,
>> like hardware and OS level?
>
> Let's take things one step at a time, please.
>
> --
> Stan
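Sure, one step at a time works for me. In the meantime, to check Bruno's
HD-bottleneck point on my side, I am thinking of a quick local disk
baseline along these lines (the file path and md device are just
examples for my RAID 1 volume; a rough sketch, not a proper benchmark):

    # sequential write test; fdatasync makes dd wait until data hits disk
    dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 conv=fdatasync
    # drop the page cache so the read test is not served from RAM
    echo 3 > /proc/sys/vm/drop_caches
    # sequential read test
    dd if=/mnt/testfile of=/dev/null bs=1M
    # quick buffered-read benchmark of the array itself
    hdparm -t /dev/md0

If the local disk numbers come out well above the roughly 90 MByte/s
that the ~730 Mbits/sec iperf result corresponds to, the network is the
limit; if they come out below it, the disks are.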