               <<< MOUSE::USER$:[NOTES$LIBRARY]ETHERNET.NOTE;1 >>>
                                -< Ethernet V2 >-
================================================================================
Note 703.3                   questions for smarthub                       3 of 9
PRNSYS::LOMICKAJ "Jeffrey A. Lomicka"               162 lines   8-APR-1991 10:59
                        -< More answers than you need >-
--------------------------------------------------------------------------------
Hi, I'm one of the engineers on the DECbridge 90.  I will try to answer
your questions.
>    1.  What is the reason for the maximum of 200 nodes in the workgroup
>    LAN?  Is it a memory constraint?  What happens if there are more
>    than 200 workstations (lower performance?)?  Do we need to count
>    the DECserver 90L, the DECrepeater 90C, 90T, and even the DECbridge
>    90 itself as entries as well?
The 200 node limit is a memory constraint.  The value was chosen as a
"reasonable" cost/feature trade-off.  The DECbridge 90 uses a 256-entry
content addressable memory to implement the address table.  We use 56 of
these to implement the protocol filters, our own station address, the
addresses reserved by the Bridge Architecture, and other internal stuff,
and leave 200 for the work group.
If there are more than 200, the "work group size exceeded" lamp lights.
The work group size exceeded counter is incremented when a message arrives
from an address that cannot be learned due to the 200 node limit.
The 201st is not entered into the table.  This way, it is the NEW station
that suffers, not any of the existing stations.
What happens is that messages from the backbone that are destined for the
new station(s) will not be received.  Performance of existing stations is
not affected.
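As a concrete illustration of the learning behavior just described, here is
a minimal Python sketch.  It is not the actual firmware (the class and names
are invented); only the 256/56/200 split, the exceeded counter, and the
don't-learn-the-201st behavior come from the description above.

    WORKGROUP_LIMIT = 256 - 56   # 256-entry CAM minus 56 internal entries = 200

    class AddressTable:
        """Toy model of the DECbridge 90 work group address table."""
        def __init__(self, limit=WORKGROUP_LIMIT):
            self.limit = limit
            self.learned = set()        # learned work group addresses
            self.size_exceeded = 0      # "work group size exceeded" counter

        def learn(self, source):
            """Called for each source address seen on the work group port."""
            if source in self.learned:
                return True
            if len(self.learned) >= self.limit:
                self.size_exceeded += 1   # lamp lights; the 201st is not entered
                return False              # only the NEW station suffers
            self.learned.add(source)
            return True

        def forward_from_backbone(self, dest):
            """Backbone traffic is forwarded into the work group only for
            destinations that were successfully learned there."""
            return dest in self.learned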
To get the 200 count, you count every station address.  The bridge itself
does NOT count.  The DECserver 90L counts as 1.  The DECrepeater units do
not count, nor would DEMPR's or DELNI's or similar things.  Bridges in the
work group (officially not allowed, but keep on reading) would count as
2, because they have two addresses.  Everything else counts as 1, but you
may want to be a little bit careful with counting LAVC workstations, as
they have one address when they boot, and another after DECnet is started.
If you boot an entire work group of 150 workstations at once, you may have
a transient condition (lasting the "age time", 900 seconds by default)
where your work group size is exceeded (to 300) due to this use of two
addresses.  If you only boot 50 at a time, there would be no problem.
(Rumor has it that this DECnet behavior goes away with Phase V.)
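As a back-of-the-envelope check of that transient, here is a tiny Python
sketch (the numbers are the illustrative ones from the paragraph above):

    AGE_TIME = 900   # seconds; default bridge age time
    LIMIT = 200      # work group address limit

    def transient_entries(stations_booted_at_once):
        # Until the boot-time address ages out (AGE_TIME seconds), each
        # LAVC workstation holds two table entries: the address it booted
        # with and the address DECnet assigns after it starts.
        return 2 * stations_booted_at_once

    print(transient_entries(150) > LIMIT)   # True: 300 entries, limit exceeded
    print(transient_entries(50) > LIMIT)    # False: 100 entries, no problem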
Note, by the way, there is NO LIMIT on the backbone side.  You could have
100,000 or more stations out there, and the DECbridge will still be happy.
>    2. What is the reason that a normal bridge (eg LB200) cannot be
>    placed between two DECbridge 90s?  What is the meaning of 'end-node
>    bridge'?  Does that imply the DECbridge 90 is not spanning tree
>    compliant, or that only a subset of spanning tree has been
>    implemented in it?  Does it support spanning tree, 802.1d, or both?
>    If only 802.1d, does it imply we cannot use it in a spanning tree
>    network (eg. network with LB100)?
You have to be careful how you quantify "between".
By "end node bridge" we mean "no bridges connected to the work group
side".  You can connect anything you want to the backbone side.  The
reason for this is beacuse of the 200 node limit (I'll get into this).
The spanning tree in the DECbridge 90 is a full Epoch 2 algorithm.  (Both
DNA and 802.1d.)  You can use it with both the LANbridge 100 and IEEE
bridges, no problem.
The preferred configuration is:
(backbone A, over 200)  +---------+  (backbone B, over 200)
        --------+-------+  LB200  +-----+-----------+--------
                |       +---------+     |           |
            +---+---+               +---+---+   +---+---+
            | '90 A |               | '90 B |   | '90 C |
            +---+---+               +---+---+   +---+---+
                |                       |           |
        --------+------         --------+------ ----+--------
         (work group A)          (work group B)   (work group C)
The problem is that you cannot set up a condition where there is ANY
POSSIBILITY that traffic between the two backbones may be routed through
the work group.  Don't be tempted to redundantly feed one work group from
two backbones like this:
Really bad thing to do:
(backbone A, over 200)  +---------+  (backbone B, over 200)
        --------+-------+  LB200  +-----+--------
                |       +---------+     |
            +---+---+               +---+---+
            | '90 A |               | '90 B |
            +---+---+               +---+---+
                |(Work group, under 200)|
        --------+-----------------------+---------
Initially, the spanning tree will select one of '90A or '90B to go into
backup mode, and it will work fine.  Then one day somebody trips over the
power cord to the LB200, and suddenly you find both DECbridge 90 units are
forwarding, and both have over 200 nodes in their work group!
Party line:
	You may not connect any bridges to the work group.  Redundant
	bridging is not supported.  This prevents anyone from configuring
	the above scenario, accidentally or otherwise.
Real truth:
	A:  If there are fewer than 200 nodes TOTAL in the ENTIRE extended
	LAN, you can do whatever you want with DECbridge 90 units, just as
	if they were full bridges.
	B:  You can put all the bridges you want in the work group, so
	long as the total count of stations (counting bridges as two) does
	not exceed 200 nodes, and none of these bridges attach to anything
	outside the work group.  The work group MUST be a CLOSED AREA, for
	which the only way in is via one DECbridge 90.
	C:  There is one "almost safe" redundant bridge situation.
	However, the only thing this redundant bridge situation protects
	you against is loss of power to, or failure of, the work group
	bridge itself.  It REQUIRES that both bridges are fed from taps on
	the SAME backbone cable.
	(backbone A, over 200)
        --------+-----------+--------   Both bridges attached to the SAME
                |           |           CABLE!!!  No equipment of ANY KIND
            +---+---+   +---+---+       between them.  No repeaters, no
            | '90 A |   | '90 B |       bridges, nothing.  SAME CABLE.
            +---+---+   +---+---+
                |           |
        --------+-----------+--------
	(work group A, under 200)
	This is based on the idea that there is no possible failure mode
	where both bridges would be forwarding at the same time.
Remember, although these configurations will work, there is a danger that
if they are later re-configured so that a path exists out of the work
group other than the one DECbridge 90, a network failure that reconfigures
the spanning tree through the work group will blow away the 200 node
limit, and the network will become unusable.  You are best advised to play
by the rules, using the DECbridge 90 only when no bridges are required in
the work group, and using the LANbridge 150 or 200 for all other cases.
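To gather the party line and the "real truth" in one place, here is a
hypothetical Python sketch of the configuration check implied by points A
through C.  The function and parameter names are invented for illustration;
the two conditions it encodes (station count with bridges counting as two,
and no path out of the work group other than the one DECbridge 90) come
straight from the rules above.

    def workgroup_is_safe(stations, bridges_inside, paths_out_besides_wgb):
        """
        stations              -- ordinary stations in the work group
        bridges_inside        -- bridges wholly contained in the work group
                                 (each counts as two addresses, per rule B)
        paths_out_besides_wgb -- paths out of the work group other than the
                                 one DECbridge 90 (must be zero: closed area)
        """
        count = stations + 2 * bridges_inside
        return count <= 200 and paths_out_besides_wgb == 0

    # The "really bad" redundant configuration fails the check: the second
    # bridge is a second path out, so a spanning tree reconfiguration can
    # route an over-200-node backbone through the work group.
    print(workgroup_is_safe(150, 0, paths_out_besides_wgb=1))   # False
    print(workgroup_is_safe(150, 2, paths_out_besides_wgb=0))   # True (rule B)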
>    3. How can we set up a redundant path for the workgroup LAN?  For
>    the normal LAN bridge family, we can do so by simply adding an
>    additional bridge in parallel with the primary bridge.  How can
>    we do that for the DECbridge 90?
See "C" above.  Note that you MUST allow the backbone wire to be a single
failure point.  You cannot do FULL REDUNDANT networking with '90s.  You
can, in an unsupported way, protect against independent power failure of
the '90's or of the '90 itself.
>    4. If there is no DECbridge 90 installed in the DEChub, can we connect
>    a normal LAN bridge (say, LB150) to one of the segments of the
>    DECrepeater 90C ? 
Yes.  You can do that.
>    Simply speaking, can we say DECrepeater
>    90C = DEMPR and DECrepeater 90T = DETPR?  If so, does that mean
>    it is time to say goodbye to the DEMPR and DETPR?
Well, the DEMPR and DETPR have an AUI port; the 90T and 90C do not.  The
DEMPR has 8 outgoing ports; the 90C has 6.  The older repeaters still have
applications.  There is no good way to connect a 90C or 90T to a thickwire
without introducing another repeater or a bridge with an AUI port.  (You
can't do it with, for example, a DESTA.  It's the wrong end of the AUI
cable.)  I don't think it's time to retire the DEMPR or DETPR for that
reason.
This document can be found in the public directory
EMDS::$1$DUA4:[SMART_HUB90.MANAGEMENT]CABLETRON.TXT
Subj:	Cabletron Hub Timing Problems
This can be used for competitive selling against Cabletron.  Our design of
repeaters and management does not have this problem.
DESCRIPTION OF PROBLEM:
Poor network performance under moderate load; possible lost packets.
PROBABLE CAUSE:
It appears that the individual repeater modules must contend for access to
the IRM module, which provides the actual retiming/signal amplification
function for the MMAC hub.  In order to accommodate traffic jams, the IRM
module provides buffers, which slow performance.  However, since the
buffers are of limited size, even moderate Ethernet loads can exceed the
buffer space, which means packets are dropped and must be retransmitted,
further degrading performance.  The loss of packets is identified by the
error message "no resource available".  Field information indicates that
the MMAC-3 has seen excessive collisions at 3-4% sustained load and that
the MMAC-8 is overwhelmed beyond 10-20% sustained load.
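The failure mode described amounts to a small shared buffer in front of a
single retiming stage.  The toy Python simulation below (all parameters are
invented, not Cabletron's actual buffer sizes) shows how such a design drops
packets once bursty arrivals approach the retiming stage's capacity:

    import random

    random.seed(1)
    BUFFER_SLOTS = 8   # assumed size of the shared IRM buffer (invented)
    MODULES = 4        # repeater modules contending for the IRM (invented)

    def drop_rate(per_module_load, ticks=10000):
        queued, dropped, offered = 0, 0, 0
        for _ in range(ticks):
            # Bursty arrivals: each module may hand over a packet this tick.
            arrivals = sum(1 for _ in range(MODULES)
                           if random.random() < per_module_load)
            offered += arrivals
            for _ in range(arrivals):
                if queued < BUFFER_SLOTS:
                    queued += 1
                else:
                    dropped += 1          # "no resource available"
            queued = max(0, queued - 1)   # IRM retimes one packet per tick
        return dropped / offered if offered else 0.0

    # Saturation is at 0.25 here (4 modules x 0.25 = 1 packet/tick service);
    # drops appear as load approaches it and climb steeply past it.
    for load in (0.10, 0.20, 0.25, 0.40):
        print(f"{load:.0%} per-module load -> {drop_rate(load):.1%} dropped")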
DEChub 90 SOLUTION:
At one account where Cabletron repeaters were replaced with DE*MR repeaters,
system performance improved so significantly that users were calling the
system manager to ask what had been done to the network.
DETAILS FROM THE FIELD:
                  I N T E R O F F I C E   M E M O R A N D U M
                                        Date:     05-May-1992 07:58am CDT
                                        From:     TONY REMBOWSKI
                                                  REMBOWSKI.TONY
                                        Dept:     SOFTWARE SERVICES
                                        Tel No:   713-953-3754
Subject: Cabletron vs DECrepeater 90's at ***
                            LAVC Transition History
    
    	The following information is based on Tony Rembowski's involvement 
    at *** concerning Local Area VAX Cluster (LAVC) transitions.
    
    PROBLEM
    
    	The indications are: the LAVC is running slow, there are
    unexplained losses and regains of LAVC nodes, and LAVC transitions
    are unpredictable.  Timing issues?
    
    
    BASE LINE CONFIGURATION
    
    	The VAX cluster resides on the FDDI ring and has an IEEE 
    802.3/Ethernet connection also.  *** is utilizing a 10Base2 (Thinnet) 
    over twisted pair IEEE 802.3/Ethernet implementation.  The following is 
    a basic configuration for discussion purposes; it does not represent 
    the total configuration:
    
    VAXcluster-->FDDI Concentrator-->FDDI 620 Bridge-->ThinWire-->Cabletron 
    MMAC3-->Pair Tamer-->Twisted Pair Wire-->Pair Tamer-->Workstation
    
    It should be noted that an IEEE 802.3/Ethernet segment is attached to 
    the cluster and provides another path for LAVC traffic.
    
    
    RESOLUTION TEAM MEMBERS
    
    	    	Digital Members:  Czarena Siebert, VAX Cluster Engineering, 
    			  Local Digital Services,  FDDI Engineering, Tony 
    		 	  Rembowski
    
    ACTION LOG
    
    Week of January 20
    
    	o  Cabletron MMAC3 identified as network traffic bottleneck.  ***
           upgraded the IRM module to the IRM2 for its performance
           improvements.  Performance improved, and the IRM2 module
           provided more operating information.  The IRM2 logs "No
           Resource Available", i.e., lack of buffer space.  The MMAC3
           still appears to be a bottleneck.
    
    	o  Digital loans a DECrepeater 90C to *** for evaluation;
           performance improvements noticed immediately.
    
    	o  Digital recommends *** evaluate the Network Professor System.
           During the evaluation period, excessive collision conditions
           were noted.  Evaluation period January 24th - February 12th.
    
    	o  FDDI and BI adapters repositioned on the XMI in the 6440's to
           determine if performance would improve; no significant change
           was noted.
    
    
    Week of January 27
    
    	o  Phone support provided by Technically Elite Concepts January 29.
    
    	o  Begin acquisition process of DEChub 90's and DECrepeater 90C's 
           to replace Cabletron MMAC3's.
    
    	o  Digital provides quotation for onsite Network Consulting 
           Services.
    
    	o  Network performance is suspect; LAVC losses and regains can be
           tied to network errors, i.e., the IRM2 upgrade and the
           DECrepeater reduced their occurrence.
    
    
    	Six Cabletron MMAC3's were completely replaced with DEChub 90's 
    and DECrepeater 90C's, 130 connections.  To verify that the MMAC3 is a 
    bottleneck, a single VAXstation 3100 was attached to one, and the user 
    noticed slower response immediately.  Slower response is described as 
    slower screen updates, file gets from the server taking longer, 
    applications starting slower, and windows opening slower.
    
    	There are 10 segments in this facility.  The six LAVC legs 
    experience about 20% sustained traffic with peaks of 60%, the remaining 
    segments sustain about 6% utilization with peaks of 30%.  While 
    investigating performance issues I found a note in the Ethernet notes 
    conference that stated the MMAC3's have performance problems with 
    sustained traffic of 3-4%.  Cabletron technical support stated the 
    MMAC8's may experience similar conditions with sustained traffic of 
    10-20%, i.e., reconfiguration may be required.
    
    	The team does not have hard numbers to prove that Cabletron's 
    MMAC3 and MMAC8 concentrators add significant delay under heavy load 
    conditions, if 6% to 20% can even be considered heavy load.  The user 
    community's phone calls ("What did you do to my workstation?  It's 
    running faster!") after installation of the DEChub and 90C's left the 
    team with the impression that the Cabletron concentrators do add 
    significant delay.  Another indicator: the losses and regains of LAVC 
    nodes have virtually stopped.
    
    