On Fri, Oct 11, 2013 at 4:12 PM, Stan Hoeppner wrote:
> On 10/11/2013 2:42 AM, Muhammad Yousuf Khan wrote:
> > [Cut].
> > > Are dual and quad port Intel NICs available in your country?
> >
> > Not very easily, but yes, we can arrange it. I personally have a PCIe
> > 4-port Intel NIC, so this can be arranged.
>
> I recommend Intel NICs because they simply work, every time.
>
> Before a person makes a first attempt at using the Linux bonding driver,
> s/he typically thinks that it will magically [cut]...
On 10/9/2013 5:51 AM, Muhammad Yousuf Khan wrote:
> [cut]...
>
> > What workload do you have that requires 400 MB/s of parallel stream TCP
> > throughput at the server? NFS, FTP, iSCSI? If this is a business
> > requirement and you actually need this much bandwidth to/from one
> > server, you will achieve far better results putting a 10GbE card in the
> > server.
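A quick way to sanity-check whether a workload really sustains that kind of
parallel-stream TCP throughput is iperf with multiple streams; the address
below is just the bond address from this thread, used as a placeholder:

  iperf -s                     # on the server
  iperf -c 10.5.X.200 -P 4     # on a client: 4 parallel TCP streams

The aggregate bandwidth it reports is the number any bonding setup would
have to beat.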
On Tue, Oct 08, 2013 at 02:41:12PM +0500, Muhammad Yousuf Khan wrote:
> I am using bond mode balance-alb, and here is my "/etc/network/interfaces"
> file:
>
> ##
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> # The primary network interface
> allow-hotplug eth0
> iface eth0 inet static
>         address 10.X.X.221
>         netmask 255.255.255.0
>
> auto bond0
> iface bond0 inet static
>         address 10.5.X.200
>         netmask 255.255.255.0
>         network 10.5.x.0
>         gateway 10.5.x.9
>         slaves eth2 eth3
>         #bond-mode active-backup
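Note that the stanza above never actually sets a mode (the bond-mode line
is commented out), so the driver comes up with its default unless the mode
is set elsewhere. A minimal balance-alb stanza in the style Debian's
ifenslave package documents, with placeholder addresses and slave names,
would look something like:

  auto bond0
  iface bond0 inet static
          address 10.5.X.200
          netmask 255.255.255.0
          gateway 10.5.x.9
          bond-slaves eth2 eth3
          bond-mode balance-alb
          bond-miimon 100

bond-miimon 100 makes the driver check link state every 100 ms, which
balance-alb relies on to move traffic off a dead slave.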
On 10/31/2011 6:00 PM, Jesus arteche wrote:
> Hey guys,
>
> Do you know if it is possible to make Ethernet bonding between several
> machines? Each server has a bandwidth limit, and I want to expand this
> limit.
>
> Is there some way to create a high availability load balancer with
> several machines, with a bandwidth equal to the sum of all the
> bandwidths?
>
> Thanks in advance

On Tue, Nov 1, 2011 at 6:05 AM, Stan Hoeppner wrote:
> If I correctly understand what you're asking, no, it is not possible.
Hello,

I just set up a system with two hardware Ethernet devices (eth0 and eth1)
to share these devices in a bonding device. I use 'mode=0 miimon=100
downdelay=2000 updelay=200' as options to the bonding kernel module, and
my /etc/network/interfaces contains the following:

auto bond0
iface bond0 inet [cut]...
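A minimal sketch of how those module options are usually made persistent on
Debian, assuming the modprobe.d mechanism rather than ifenslave's bond-*
options; mode=0 is balance-rr (round-robin):

  # /etc/modprobe.d/bonding.conf
  alias bond0 bonding
  options bonding mode=0 miimon=100 downdelay=2000 updelay=200

downdelay=2000 waits two seconds before disabling a slave whose link has
dropped, and updelay=200 waits 200 ms before re-enabling one; both are
multiples of the 100 ms miimon interval, as the driver requires.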
Birju Prajapati wrote:
Hi,

I have two network cards that are both displaying as such:

[EMAIL PROTECTED]:~$ sudo mii-tool
eth0: no autonegotiation, 100baseTx-HD, link ok
eth1: no autonegotiation, 100baseTx-HD, link ok

I have bonded them in modprobe.d as such:

alias bond0 bonding
options bonding mode=0 miimon=100 downdelay=[cut]...
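The -HD in that mii-tool output means both cards are running half duplex,
which will badly throttle a mode=0 bond. If the switch ports support it,
mii-tool can force full duplex; a sketch, assuming both NICs accept
100baseTx-FD:

  sudo mii-tool -F 100baseTx-FD eth0
  sudo mii-tool -F 100baseTx-FD eth1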
Trying to do a bit of research on the bond interfaces. In the description it
talks about sending data out the bonded interfaces in a round-robin type
fashion. I was under the impression that the bond interfaces were for
failover usage.
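Both impressions are partly right: the driver's behavior depends on its
mode; mode 0 (balance-rr) does round-robin transmission, while mode 1
(active-backup) is pure failover. A running bond reports its mode and slave
status in procfs; bond0 here is a placeholder for whatever the bond device
is named:

  cat /proc/net/bonding/bond0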
I've got 9 Debian servers broken up into 4 different VLANs [cut]...
Jacob Friis Larsen wrote:

I have an HP ProLiant DL140 with Integrated Dual Broadcom 10/100/1000 NICs.
Am I correct that duplex means that I can use the two jacks as one?

Duplex means that the interface can send and receive information at the
same time. It can be used on switched networks. It cannot [cut]...
> Has anyone figured out how to configure Ethernet bonding using
> two or more NICs?

Enable bridging in the kernel and use the bridge-utils package.

-Igor Mozetic
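For reference, the bridge-utils side of that suggestion looks roughly like
this (interface names are placeholders); note that a bridge forwards frames
between segments rather than aggregating link bandwidth the way the bonding
driver does:

  brctl addbr br0
  brctl addif br0 eth0
  brctl addif br0 eth1
  ifconfig br0 up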
Has anyone figured out how to configure Ethernet
bonding using two or more NICs? I'm using Woody with
2.4.

Thanks,
Rocky.