> > This is not quite always the case.  Ethernet's CSMA/CD (Carrier Sense
> > Multiple Access with Collision Detection) was invented during a time when
> > a hub or bus were the primary method of connection.  Collision was indeed
> > a problem then, and keeping the LAN small was a way to ensure network
> > performance.
> > 
> > However, these days, switches are much cheaper and are easily within the
> > reach of most organizations.
> > 
> > If your users are hooked up to a switch instead of a hub, you can ignore
> > the "collisions problem", as it no longer exists.  At that point, the
> > limiting factors are the speed/RAM of the gateway and the speed/RAM of the
> > switch.
> > 
> > A good, short explanation can be found at
> > http://www.duxcw.com/faq/network/hubsw.htm
> > 
> > 
> > Ben
> > 
> Actually, CSMA/CD is the problem on a large single-area network.  I read the
> article and see the point.  Here is the problem: when a node transmits, it
> first listens for silence on the wire, then tries to transmit.  If a
> collision occurs, it runs a backoff algorithm and tries again after
> selecting a new time slot.  The larger the number of nodes, the greater the
> probability that two of them will select the same time slot and collide
> again, and so on.  Therefore large networks always bottleneck under
> Ethernet, and that is why no company will place a large number of computers
> in the same area, no matter what the transport medium is.  There is always
> an optimum number that should be in an area before it is broken down.
> That's the theory.

Otto,

I'm sorry, but you have absolutely no idea as to how Ethernet works in a 
*switched* environment.  You are describing a LAN that is on a hub.  In 
that case, you are correct in all of your comments above.
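For what it's worth, the retransmission scheme you describe is truncated
binary exponential backoff.  A minimal sketch of it (my own illustration,
not anything from the 802.3 source text; slot times are just counted, not
timed):

```python
import random

def backoff_slots(collisions, max_exp=10, give_up=16):
    """After the n-th consecutive collision, wait a random number of
    slot times drawn uniformly from 0 .. 2^min(n, 10) - 1.
    After 16 collisions the frame is dropped (returns None)."""
    if collisions >= give_up:
        return None                    # excessive collisions: give up
    k = min(collisions, max_exp)       # range doubles per collision, capped
    return random.randrange(2 ** k)    # 0 .. 2^k - 1 slot times
```

The range doubles after each collision, so a few contending hosts spread
out quickly; but as you say, with enough hosts on one collision domain the
odds of two picking the same slot stay high, and throughput collapses.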

On the other hand, in a situation where all of the hosts are connected to 
a *switch*, instead of a hub, then each "segment" consists of exactly two 
devices: the host and the switch port.  In that case, the chance for 
collision is greatly reduced.  The reason for this is simple: 

If host-a transmits a packet while connected to a hub, then all other 
hosts connected to that hub will see the packet, whether or not it is 
intended for them.

If host-a transmits a packet while connected to a switch, then something 
entirely different happens.  The switch looks at the packet and decides 
where it is to go.  If it is a multicast or broadcast packet, then most or 
all of the other hosts on the switch will see it (I won't go into those 
rules; they're beyond the scope of this particular discussion.)  

If, on the other hand, it is a *unicast* packet, and the destination host 
is on the switch, then the switch will only transmit it on the port to 
which that destination host is connected.  If the host is not connected to 
the switch, then it will send it on its "uplink" port, depending upon 
configuration.
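The forwarding decision above can be sketched as a toy learning switch
(this is an illustration of the idea, not real switch firmware; the
"flood" result stands in for "send on every port except the one the frame
arrived on, including the uplink"):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class Switch:
    """Toy model of a learning switch's per-frame forwarding decision."""

    def __init__(self):
        self.mac_table = {}  # source MAC -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        # Learning step: remember which port the sender lives on.
        self.mac_table[src_mac] = in_port
        # Broadcast, or a unicast address we haven't learned yet:
        # flood out all other ports so the frame still reaches its target.
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            return "flood"
        # Known unicast: forward on exactly one port, so no other
        # host's NIC ever sees the frame.
        return self.mac_table[dst_mac]
```

Once both hosts have spoken once, every unicast frame between them goes
out exactly one port, which is why the rest of the switch never sees it.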

Now, what this means is that any host that is connected to a switch will 
only see broadcast traffic, multicast traffic to which it is subscribed, 
or unicast traffic that is addressed to it.  This sharply decreases the 
number of inbound packets that the host's Ethernet card sees, which in 
turn reduces the number of collisions during transmission and increases 
the perceived bandwidth.

At that point the bottleneck becomes the switch's backplane capabilities, 
not collisions.

Switched ethernet is vastly superior to ethernet on a hub, and has become 
very cheap.  It is now easily in reach for most organizations, including 
the home office.

Ben


-- 
redhat-list mailing list
unsubscribe mailto:[EMAIL PROTECTED]
https://www.redhat.com/mailman/listinfo/redhat-list
