| T.R | Title | User | Personal Name | Date | Lines |
|---|---|---|---|---|---|
| 2131.1 | Well....not quite 0 latency  :-) | NETCAD::BATTERSBY |  | Wed Mar 22 1995 09:53 | 8 | 
|  |     It is mentioned in the Bridge/Switching Products section on the DECHUB
    web page that Scott Bradner's test equipment has an accuracy tolerance of
    +/- 64 microseconds.  We measured 35 microseconds in our lab; this is
    also mentioned under Performance.  Oh, and we do not use cut-through
    switching.  The 35 usecs is all the time we need to store-and-forward
    the packets.
    
    Bob
 | 
| 2131.2 | Further clarification.... | NETCAD::BATTERSBY |  | Wed Mar 22 1995 09:56 | 5 | 
|  |     Also, the statement in 716.3 *does* in fact mention that there is a
    small but finite amount of latency, thus alluding to some accuracy
    allowance in Bradner's test.
    
    Bob
 | 
| 2131.3 | How Scott timed it. | CGOS01::DMARLOWE | Wow! Reality, what a concept! | Wed Mar 22 1995 16:14 | 5 | 
|  |     Also, Scott's timing starts when the last byte of the packet (the CRC)
    arrives and stops when the first byte starts coming out of the destination
    port.  Thus the 35 uS measured in the lab falls within the +/- 64 uS
    accuracy of Scott's clock.
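
    A minimal sketch of that timing convention, purely for illustration; the
    timestamps below are made up to reproduce the 35 uS figure, and this is
    not Bradner's actual test code:

        # Illustrative only: store-and-forward latency is timed from the
        # last byte in (the CRC) to the first byte out the destination port.
        CLOCK_TOLERANCE_US = 64.0   # +/- 64 microseconds, per this thread

        def measured_latency_us(t_last_byte_in_us, t_first_byte_out_us):
            """Latency = last byte in -> first byte out."""
            return t_first_byte_out_us - t_last_byte_in_us

        # Hypothetical timestamps chosen to give the 35 uS quoted above.
        latency = measured_latency_us(1000.0, 1035.0)
        print(latency)                              # 35.0 us
        print(abs(latency) <= CLOCK_TOLERANCE_US)   # True: reads as 0 on that clock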
    
    dave
 | 
| 2131.4 | More on Latency Comparisons | DELNI::PIEPER |  | Wed Mar 22 1995 18:28 | 33 | 
|  |     It is important to note that latency for store-and-forward devices is
    measured from receipt of the LAST bit of the incoming frame to the
    transmission of the first bit of the outgoing frame.  This is per an
    IETF spec for bridges and routers.  As stated in the other responses,
    the 35 uS latency was too fine for Bradner's equipment to measure, so we
    ended up with a 0 latency number (as did most of the other vendors).
    Bradner now has better equipment for measuring latency and has been
    retesting equipment, so you will probably see the 35 uS number showing up
    in his latest reports.
    
    Latency for cut-thru devices is typically measured from the receipt
    of the FIRST bit of the incoming frame to the transmission of the
    first bit of the outgoing frame.  Quite different from the IETF
    specification!
    
    If you want to try to compare cut-thru switches and store-and-forward
    switches, you would need to use a new definition instead of latency.
    Let's call it "delay".  So the "delay" for the cut-thru device is just
    its latency, and the "delay" for the store-and-forward device is the
    time to receive the whole packet plus the latency.
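
    A rough worked example of that "delay" comparison, with made-up numbers
    (a 1518-byte frame on 10 Mb Ethernet and an assumed cut-thru latency;
    the only figure taken from this thread is the 35 uS store-and-forward
    latency):

        # Illustrative "delay" comparison per the definitions above.
        FRAME_BYTES = 1518          # max-size Ethernet frame (example choice)
        LINE_RATE_BPS = 10_000_000  # 10 Mb/s Ethernet
        SF_LATENCY_US = 35.0        # store-and-forward latency from this thread
        CT_LATENCY_US = 20.0        # assumed cut-thru latency (first bit in -> out)

        def frame_time_us(frame_bytes, line_rate_bps):
            """Time to clock a whole frame off the wire, in microseconds."""
            return frame_bytes * 8 / line_rate_bps * 1e6

        delay_cut_thru = CT_LATENCY_US
        delay_store_forward = frame_time_us(FRAME_BYTES, LINE_RATE_BPS) + SF_LATENCY_US

        print(round(delay_cut_thru, 1))        # 20.0 us
        print(round(delay_store_forward, 1))   # 1249.4 us for a max-size frame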
    
    Don't get all excited about this apparent "delay" advantage, though, since
    cut-thru switches suffer from many other disadvantages: forwarding of
    runt packets caused by collisions, inability to interconnect
    dissimilar networks (FDDI to Ethernet) or dissimilar speeds (10 Mb
    Ethernet to 100 Mb Ethernet), and lack of filtering options.
    
    Also realize that cut-thru switches can only cut through if the
    outbound port is NOT busy; otherwise they too have to store and
    forward!  So if more than one port tries to get to a single outbound
    port, the cut-thru switch needs to revert to store and forward.  This
    scenario will happen quite a bit in a normally busy network.  When
    this happens, the "delay" advantage evaporates!
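
    A minimal sketch of that fallback, just to illustrate the point; the
    port-idle check and names here are assumptions, not any vendor's actual
    forwarding logic:

        # Illustrative only: a cut-thru switch can start transmitting once it
        # has the destination address, but only if the outbound port is idle.
        # Under contention the frame must be buffered in full anyway.
        def forward_mode(outbound_port_busy):
            if outbound_port_busy:
                # The "delay" advantage disappears for this frame.
                return "store-and-forward"
            return "cut-through"

        # Two inbound ports targeting the same outbound port: one must queue.
        print(forward_mode(outbound_port_busy=False))  # cut-through
        print(forward_mode(outbound_port_busy=True))   # store-and-forward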
 | 
| 2131.5 | Ethernet cut thru not worth reduced latency | CGOS01::DMARLOWE | Wow! Reality, what a concept! | Thu Mar 23 1995 12:16 | 7 | 
|  |     Also discussed in ETHERNET_V2 1023 were some comments about cut-thru
    switching (bridging).  With low traffic, cut-thru is only marginally
    faster than store-and-forward.  With medium and high traffic, even
    Digital News and Review didn't like cut-thru (Kalpana) due to packet
    loss caused by small buffers.
    
    dave
 |