    As usual, the answer is: it depends...
    
    First off, the DEChub 900 does not have an FDDI MAC, so it 
    does not contribute to the setting of TRT. It sounds like 
    there are only 3 nodes in this network: the DECconcentrator 900 
    and the two SGI machines. 
    
    SGI has long advocated the use of very high TRT values because, 
    theoretically, the ring's utilization under load increases from 
    roughly 80% to over 90%. They ignore the fact that latency will 
    also go up. 
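
    For the curious, the usual textbook approximation for a timed-token 
    ring puts the maximum utilization at n(T - D)/(nT + D), where n is 
    the number of MACs, T is TTRT, and D is the ring latency, while the 
    worst-case token rotation time can reach 2 x TTRT. Here's a rough 
    Python sketch of that trade-off; the 1 ms ring latency is a made-up 
    number, not anything measured on this ring:

        # Textbook timed-token trade-off; all numbers are illustrative.
        def max_utilization(n, ttrt, latency):
            """Approximate max ring utilization: n*(T - D) / (n*T + D)."""
            return n * (ttrt - latency) / (n * ttrt + latency)

        n = 3             # concentrator MAC plus the two SGI stations
        latency = 1.0e-3  # 1 ms ring latency, assumed for illustration

        for ttrt in (7.99e-3, 168e-3):
            u = max_utilization(n, ttrt, latency)
            print(f"TTRT {ttrt * 1e3:7.2f} ms -> utilization ~{u:.1%}, "
                  f"worst-case rotation ~{2 * ttrt * 1e3:.1f} ms")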
    
    I'm again assuming that there are only 3 nodes in this ring. In 
    the claim process, the station bidding the lowest T_Req wins, and 
    its bid becomes T_Neg. So if you raise T_Req of the concentrator 
    to 168 ms, one of the SGI machines will win the claim and set the 
    value of T_Neg. You will be able to see this value via HUBwatch, 
    on the FDDI MAC Summary screen. When the change is made, T_Neg 
    will change from 7.99 ms to the value determined by the SGI 
    machine. That value is, effectively, the maximum time a station 
    can transmit before having to give up the token. 
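
    To make the claim rule concrete, here is a minimal Python sketch 
    of the bidding: every MAC bids its T_Req and the lowest bid becomes 
    T_Neg. The T_Req values below are made up for illustration; the 
    real SGI defaults may differ:

        # FDDI claim resolution: the lowest T_Req bid becomes T_Neg.
        # Station names and values are illustrative only.
        bids = {
            "DECconcentrator 900": 168e-3,  # raised per the suggestion
            "SGI station 1": 165e-3,        # assumed value
            "SGI station 2": 166e-3,        # assumed value
        }

        # Real FDDI breaks T_Req ties on MAC address; min() suffices here.
        winner = min(bids, key=bids.get)
        print(f"{winner} wins the claim; T_Neg = {bids[winner] * 1e3:.1f} ms")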
    
    Now, if the station has the ring all to itself, all you do is 
    increase the number of back-to-back packets that can get out of 
    its transmitter before it gives up the token. Since it is 
    effectively the only station that wants the token, it doesn't seem 
    that you will get much of a change in performance. If you do, it 
    may be because of something in their adapter besides just the 
    setting of TTRT. 
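
    As a back-of-the-envelope check: at 100 Mb/s, a maximum-size FDDI 
    frame (about 4500 bytes) takes roughly 360 microseconds on the 
    wire, and on an otherwise idle ring the token returns almost at 
    once, so the holding budget is close to T_Neg. A quick sketch 
    under those assumptions:

        # How many max-size frames fit in the token holding budget?
        # Assumes the token returns immediately, so the budget is ~T_Neg.
        FDDI_RATE = 100e6       # bits per second
        MAX_FRAME = 4500 * 8    # max FDDI frame size, in bits

        frame_time = MAX_FRAME / FDDI_RATE  # ~360 us per frame

        for t_neg in (7.99e-3, 168e-3):
            n_frames = int(t_neg // frame_time)
            print(f"T_Neg {t_neg * 1e3:7.2f} ms -> "
                  f"~{n_frames} back-to-back max-size frames")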
    
    It's worth trying, though, just to see what we can learn about 
    these SGI stations' behavior. Please post the results back here 
    so that we can see what happened. 
    
    If you have an FDDI analyzer, try to get before-and-after pictures 
    of the packets on the wire. I'd be curious to know how many 
    back-to-back packets get transmitted under each condition. 
    
    