| T.R | Title | User | Personal Name | Date | Lines |
|---|---|---|---|---|---|
| 589.1 | Known behavior | QUIVER::POULIN |  | Wed May 27 1992 13:03 | 34 | 
    
    .0 is correct in that a periodic check of the port (every 15 seconds)
    occurs to determine if the port is "OK".  Actually, a loopback test is
    performed which would fail if the transceiver, AUI cable or coax is
    broken (or missing).  A detailed discussion of how this is done would
    involve proprietary information.
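
    (Purely as an illustration of the general idea, and not the actual
    DECbridge implementation, which is proprietary: a periodic loopback
    check can be sketched in Python roughly as below.  The Port class,
    the injected loopback_test callable, and the 15-second interval as
    a constant are hypothetical.)

        import time
        from dataclasses import dataclass
        from typing import Callable

        CHECK_INTERVAL = 15.0   # seconds between checks, per the note above

        @dataclass
        class Port:
            name: str
            # loopback_test is injected: it should send a self-addressed
            # frame out the physical port and report whether it came back.
            loopback_test: Callable[[], bool]
            ok: bool = True

        def check_ports_once(ports):
            # A broken or missing transceiver, AUI cable, or coax would
            # make the loopback fail, and the port is marked not "OK".
            for p in ports:
                p.ok = p.loopback_test()

        def monitor(ports, cycles=None):
            # Poll forever (or for a fixed number of cycles in a test).
            n = 0
            while cycles is None or n < cycles:
                check_ports_once(ports)
                time.sleep(CHECK_INTERVAL)
                n += 1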
    
    See Note 482.*.
    
    This is being documented.
    
    Some comments concerning the performance data in .0:
    
    1.  The DECbridge does not forward all packets in all cases (the case
        in .0 is just one example).
    
    2.  The bridge is designed to operate at "well above" the performance
        requirements of "most" customers.
    
    3.  Design decisions were made that traded off performance for network
    	stability (something our competitors sometimes fail to do).
    
    4.  If a customer is experiencing performance problems that appear to
    	involve the DECbridge, there is a strong possibility that the
    	network segments attached to the bridge are operating at such a
    	high level of utilization that the segments themselves are unstable
    	or operating beyond recommended limits.
    
    
    John
    
    
    
| 589.2 | performance is needed | ABACUS::BUKOWSKI |  | Wed May 27 1992 15:35 | 10 | 
    re:-1
    
    	So what are the recommended segment utilization levels?  I have
        segments that typically run at 60% and spend a few hours at 80+%
    	and even peak in the high 80s that I need to convert over to bridge
    	610's.  Am I going to receive timeouts on the LAT protocol?  Will the
    	bridge drop packets?  Should I buy 3 bridge 510s for every bridge
    	610 that I plan on purchasing?
    
    	Mike
| 589.3 | 60% is high, 80% is overloaded | LEVERS::S_JACOBS | Live Free and Prosper | Wed May 27 1992 16:10 | 18 | 
    If you are seeing "utilization" in the 80% range, it is likely that a
    large portion of that utilization is collisions.  That segment should
    be divided up into smaller segments wherever traffic can be localized. 
    
    If the traffic cannot be localized, then an FDDI backbone with multiple
    DB620's serving segments with reasonable station counts should help the
    overall network throughput.  If it is a case of lots of clients ganging
    up on one server, look into putting the server directly on the FDDI
    ring.
    
    Using three DB520's instead of one DB620 will give you increased
    bandwidth between FDDI and Ethernet if the average packet sizes are
    small (less than 100 bytes) and the utilization of all three Ethernets 
    is in the 60% range.  Note however that Ethernet to Ethernet traffic 
    will then be going over the FDDI ring instead of being bridged directly 
    to the outbound port.
    
    Steve
| 589.4 |  | ABACUS::BUKOWSKI |  | Thu May 28 1992 12:05 | 36 | 
    RE:-1
    
    	In my network, we have both types of traffic situations that you
    had mentioned.  We do try to localize traffic where we are able to,
    but having approximately 3500 customers on one LAN constantly moving
    around doesn't make for an ideal network.   We have ordered 610's to
    be placed on our FDDI backbone which will replace all of our 10/10
    bridges.  It should work out fine, since most of the high utilization
    segments (50+%) are localized traffic (very large MIVC's), and those
    bridges are filtering most of the traffic.
    
    	You mentioned that if many clients were using a server node, then
    the server node should be connected directly to the FDDI ring.  Well,
    that would force the DB610's to forward many more packets, and that
    is what these bridges can't do very fast.  So did I misinterpret
    something?
    
    	You also mentioned that the DB5xx's will provide increased
    performance between the Ethernet port and FDDI ring, but only if the
    packet sizes were less than 100 bytes and utilization less than 60%.
    Why 100 bytes and 60%?   Are you approximating or is there a guideline paper
    available?
    
    	One last topic.  You mentioned that Ethernet to Ethernet traffic
    will have to go over the FDDI ring instead of being bridged directly to
    the outbound port.  Does this mean that the DB6xx's bridge amongst the
    three Ethernet ports and then go out over the FDDI port only if
    needed?   I thought that the DB6xx's were like having three bridges in
    one box and that the FDDI port was shared, and that all packets
    originating on Ethernet A and not destined for Ethernet A would be
    sent out Ethernet B, Ethernet C, and the FDDI port.  Am I incorrect?
    
    Mike
    
    
    	
| 589.5 | packets per second more important | QUIVER::POULIN |  | Thu May 28 1992 12:56 | 56 | 
    
    Please do not assume that the DECbridges "can't forward packets very
    fast".  This is not true.  The DECbridges are very high performers.
    
    They do not perform, however, at the highest possible rate.  They do
    not need to.  They were designed to handle traffic more than
    adequately.
    
    Packet sizes are important for the following reasons:
    
    1.  The maximum packet per second rate for 1518 byte packets (the
    	largest possible on Ethernet) is about 780 pps.  The maximum pps
    	rate for 64 byte packets (the smallest possible on Ethernet) is
    	about 14,881 pps.
    
    2.  Generally speaking, the bridge performs the same amount of work on
    	a packet independent of packet size.
    
    3.  If the Ethernet segment has all 1518-byte packets on it, it takes
    	only 780 pps to reach 100% utilization.  If the segment has all
    	minimum-size packets, it takes 14,881 pps to reach 100%.
    
    4.  When speaking about segment utilization, one must consider the
    	distribution of packet sizes in order to completely characterize the
    	"offered load" (the load presented to the bridge) because of (2)
    	(i.e., 80% at 100 bytes per packet is very different from 80% at
    	1500 bytes per packet from a packets-per-second point of view; see
    	the sketch just after this list).
    
    5.  Most packet size distributions fall within the range that the
    	bridge can process.
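
    As a rough check of the figures in (1) and (3), here is a small
    Python sketch of the standard 10 Mb/s Ethernet frame-rate
    arithmetic.  The 20 bytes of per-frame preamble plus interframe gap
    is the usual assumption and is not stated in this note; with it, the
    64-byte figure comes out at 14,881 pps exactly, and the 1518-byte
    figure lands in the same ballpark as the ~780 pps quoted above.

        ETHERNET_BPS = 10_000_000     # 10 Mb/s Ethernet
        PER_FRAME_OVERHEAD = 8 + 12   # preamble + interframe gap, in bytes (assumed)

        def max_pps(frame_bytes):
            # Theoretical maximum frames/second with back-to-back frames
            # of one size, i.e. the packet rate at 100% utilization.
            bits_on_wire = (frame_bytes + PER_FRAME_OVERHEAD) * 8
            return ETHERNET_BPS / bits_on_wire

        print(round(max_pps(64)))     # ~14881 pps (minimum-size frames)
        print(round(max_pps(1518)))   # ~813 pps (maximum-size frames)
        print(round(max_pps(100)))    # ~10417 pps; 60% of this is ~6250 pps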
    
    
    Try repeating your experiment with packet sizes approximating the
    actual offered load (although this kind of work has already been done
    by the NaC Performance Evaluation Group).
    
    About the bridging of packets: the bridge forwards a packet only to the
    port on which it learned the packet's destination address.  It does not
    forward the packet to all other ports.  In your experiment, the bridge
    did not get a
    chance to learn the destination port of the packet that you sent to the
    bridge because the bridge must "see" the address as a source address on
    a port in order to learn it.  In your case the address was always a
    destination, never a source.
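
    The learning behavior described above is standard transparent
    bridging.  Purely as an illustration (not the DECbridge
    implementation, whose internals are proprietary), a minimal Python
    sketch of a per-port address table might look like this:

        class LearningBridge:
            # Learn source addresses per port; forward to the learned
            # port when the destination is known, otherwise flood.

            def __init__(self, ports):
                self.ports = set(ports)
                self.table = {}     # station address -> port it was seen on

            def handle(self, src, dst, in_port):
                # Learning happens only on *source* addresses, which is
                # why a station that only ever appears as a destination
                # (as in the experiment above) is never learned.
                self.table[src] = in_port

                out = self.table.get(dst)
                if out is None:
                    return self.ports - {in_port}   # unknown: flood
                if out == in_port:
                    return set()                    # same segment: filter
                return {out}                        # known: forward to one port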
    
    
    John
    
    
    
    
    
    
    
    
    
| 589.6 | more on DB6XX performance | LEVERS::S_JACOBS | Live Free and Prosper | Thu May 28 1992 15:25 | 55 | 
Hi Mike:
    
    Here's some feedback on your questions:
    
        
>    	You mentioned that if many clients were using a server node, then
>    the server node should be connected directly to the FDDI ring.  Well,
>    that would force the DB610's to forward many more packets, and that
>    is what these bridges can't do very fast.  So did I misinterpret
>    something?
 
    When I said to put the server on FDDI, I had assumed that the clients
    were scattered over many separate Enet LAN segments.  If you have an
    Ethernet LAVC with a local server, then it makes sense to put the server 
    directly on the Ethernet.
       
>    	You also mentioned that the DB5xx's will provide increased
>    performance between the Ethernet port and FDDI ring, but only if the
>    packet sizes were less than 100 bytes and utilization less than 60%.
>    Why 100 bytes and 60%?   Are you approximating or is there a guideline paper
>    available?
    
    There is a "white paper" on this subject.  I don't have a soft copy. 
    If some other noter has one, maybe they can post it here.  I would hope
    that you could get one through Marketing, since that is the whole
    reason we wrote the darn thing.
    
    The translation performance is the same for the 5XX and 6XX bridges. 
    It works out to be slightly less than 20Kpps.  So you can see that with
    a single ethernet port the FDDI-ENET translation is full performance.
    
    With 3 ethernet ports, the 20Kpps is divided among the three ports.
    ONLY THE PACKETS THAT GO BETWEEN FDDI AND ETHERNET ARE TRANSLATED.  Any
    ethernet to ethernet traffic stays away from FDDI entirely.
    This leaves about 6.5Kpps of translation per port.  6.5Kpps is what you
    get when an Ethernet is 60% utilized with packets that have an average
    size of 100 bytes.
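
    A quick back-of-the-envelope check of those numbers (the ~20Kpps
    translation figure is taken from this note; the 20 bytes of
    preamble-plus-gap per frame is the same assumption used in the
    sketch in .5):

        TRANSLATION_PPS = 20_000    # FDDI<->Ethernet translation rate quoted above
        ETHERNET_PORTS  = 3         # DB6xx Ethernet port count
        ETHERNET_BPS = 10_000_000
        PER_FRAME_OVERHEAD = 20     # preamble + interframe gap, bytes (assumed)

        per_port_budget = TRANSLATION_PPS / ETHERNET_PORTS   # ~6.7Kpps per port

        def offered_pps(utilization, avg_frame_bytes):
            # Packets/second a segment presents at a given utilization
            # and average frame size.
            bits_per_frame = (avg_frame_bytes + PER_FRAME_OVERHEAD) * 8
            return utilization * ETHERNET_BPS / bits_per_frame

        print(round(per_port_budget))           # ~6667
        print(round(offered_pps(0.60, 100)))    # ~6250, roughly the 6.5Kpps cited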
    
    
>    	One last topic.  You mentioned that Ethernet to Ethernet traffic
>    will have to go over the FDDI ring instead of being bridged directly to
>    the outbound port.  Does this mean that the DB6xx's bridge amongst the
>    three Ethernet ports and then go out over the FDDI port only if
>    needed?   I thought that the DB6xx's were like having three bridges in
>    one box and that the FDDI port was shared, and that all packets
>    originating on Ethernet A and not destined for Ethernet A would be
>    sent out Ethernet B, Ethernet C, and the FDDI port.  Am I incorrect?
    
    As John mentioned, the DB6XX bridges are true multiport bridges.  They
    bridge from any port to any port without the packet showing up on any
    unnecessary port.  The DB6XX is not the same as three DB5XX in a box.
    You would only use multiple DB5XX in place of a single DB6XX if a
    customer ABSOLUTELY demanded full performance.  There are studies and
    references in the literature that specify 60% as a maximum practical
    load for Ethernet.