| T.R | Title | User | Personal Name | Date | Lines |
|---|---|---|---|---|---|
| 2064.1 | I'm not imagining this! | KLUSTR::GARDNER | The secret word is Mudshark. | Thu Mar 02 1995 16:14 | 9 | 
|  | 	fwiw I have now verified the results in .0 using a variety of PCs
	with a variety of NICs on the Ethernet side and a variety of Alphas
	running either DEC OSF/1 (aka Digital UNIX) or OpenVMS...the results
	consistently show a two-order-of-magnitude difference in
	Kbyte/second performance...
	I'm stumped...it's gotta be something simple but what?? HELP!
	_kelley
 | 
| 2064.2 | Maybe IP MTU Discovery is happening ? | NETCAD::KRISHNA |  | Thu Mar 02 1995 17:25 | 30 | 
|  |     
    
    On the DB900MX we implement the IP MTU discovery algorithm. One
    possibility is that when the FTP session is started, the OSF/1
    station tries to use a maximum-size packet (4500 bytes) with the
    Don't Fragment bit set. Since we cannot pass the packet to Ethernet
    without fragmenting it, the DB900MX would send an ICMP message to
    the OSF/1 telling it that it can't pass the packet. Now the OSF/1
    has to retry with a smaller packet. I am not sure about the
    following:
    
    1. What is the retry algorithm? In other words, does the OSF/1 drop
    the packet size to 1500 on receiving an ICMP msg, or does it step
    down by trial and error, each attempt resulting in another ICMP msg
    from the bridge, until it hits 1500-byte packets?
    
    2. Once we send the ICMP message back, at what priority is it
    serviced, and how quickly does it notify the FTP application? Could
    there be a high latency there?
    
    The best way to debug this problem is to get an FDDI analyzer and
    monitor the exchanges between the bridge and the OSF/1 system,
    looking at the timestamps of the messages.
    
    For those who are interested, the IP MTU Discovery algorithm is
    specified in RFC 1191 and is implemented in the DB900MX, DS900EF and
    the PE900TX.
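    [Editor's note: the stepwise fallback asked about in question 1 can
    be sketched roughly as follows. This is a minimal illustration of
    RFC 1191's plateau-table fallback, not the actual OSF/1 code; the
    function name and values are taken from the RFC, section 7.1.]

```python
# Hedged sketch of RFC 1191 Path MTU Discovery fallback.
# Plateau values are from RFC 1191, section 7.1.
PLATEAUS = [65535, 32000, 17914, 8166, 4352, 2002, 1492, 1006, 508, 296, 68]

def next_pmtu(current_mtu, icmp_next_hop_mtu=0):
    """Pick the next Path MTU after an ICMP 'fragmentation needed'.

    RFC 1191-aware routers report the next-hop MTU in the ICMP
    message; older ones report 0, in which case the host steps down
    the plateau table one entry at a time -- the trial-and-error case,
    with one ICMP message per attempt.
    """
    if icmp_next_hop_mtu:                 # router told us the answer
        return icmp_next_hop_mtu
    for plateau in PLATEAUS:              # otherwise step down one plateau
        if plateau < current_mtu:
            return plateau
    return 68                             # IPv4 minimum MTU

# FDDI host hears 'fragmentation needed' from an Ethernet bridge:
print(next_pmtu(4352, icmp_next_hop_mtu=1500))  # -> 1500 (one round trip)
print(next_pmtu(4352))                          # -> 2002 (plateau fallback)
```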
    
    Hope this helps,
    Krishna
 | 
| 2064.3 | yeah, but... | KLUSTR::GARDNER | The secret word is Mudshark. | Thu Mar 02 1995 17:51 | 14 | 
|  | 	re: .2
	hmmm...thought of that, and some variation of that is what I would
	expect *IF* IP Fragmentation in the DB900MX was Disabled...thing
	is I get the same result with IP Fragmentation Enabled, which I
	thought would avoid the packet-size renegotiation, and from what
	I've read in this conf. I also thought the DB900MX could
	saturate an Ethernet segment in this mode (i.e. IP Frag Enabled
	shouldn't impose a performance penalty)...OR AM I MISSING
	SOMETHING???
	_kelley
	ps - I'll see what I can do about getting an analyzer on FDDI
	(maybe there's a software based one I can use)....
 | 
| 2064.4 | never mind | KLUSTR::GARDNER | The secret word is Mudshark. | Fri Mar 03 1995 17:13 | 10 | 
|  | 	well it turns out this has something to do with the particular
	network software I'm using on the PCs here...in short my setup
	involves using Novell's ODI Netware client and Microsoft's
	TCP/IP-32a stack...it's some sort of tuning problem but, as is
	typical when products from Novell *and* Microsoft are involved,
	there is little hope of finding a quick answer...
	sorry for the noise ;-)
	_kelley
 | 
| 2064.5 | Raw 802.3 problem? | PTOJJD::DANZAK | Pittsburgher | Sun Mar 05 1995 14:27 | 5 | 
|  |     Did you need to set the "enable raw 802.3" option on the
    DECswitches...?  If they are running RAW 802.3 that could cause
    issues...
    j
    
 | 
| 2064.6 | not the problem | KLUSTR::GARDNER | The secret word is Mudshark. | Mon Mar 06 1995 09:46 | 11 | 
|  | 	re: .5
	- my Netware world is set up to use (the now default for Novell)
	  802.2, not the bogus "raw 802.3"...
	- the problem I'm having is with IP, not IPX...indeed, IPX is
	  performing as expected.....
	(aside: our implementations of Netware for Unix on both OpenVMS
	 and Digital UNIX do not support "raw 802.3" mode)...
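	[Editor's note: the framing difference is easy to spot on the
	wire. Novell's "raw 802.3" puts the IPX header, whose checksum
	field is always 0xFFFF, directly after the 802.3 length field,
	where an 802.2 LLC header would otherwise begin. A rough
	illustrative classifier, not any product's actual decoder:]

```python
def classify_frame(frame: bytes) -> str:
    """Classify an Ethernet frame's encapsulation (hedged sketch).

    Bytes 12-13 hold either an EtherType (Ethernet II, >= 0x0600) or a
    length (802.3).  For 802.3, Novell's 'raw' framing is recognizable
    because the IPX checksum field (always 0xFFFF) lands where the
    802.2 LLC DSAP/SSAP bytes would otherwise sit.
    """
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len >= 0x0600:
        return "Ethernet II"
    if frame[14:16] == b"\xff\xff":
        return "raw 802.3 (Novell IPX)"
    if frame[14:16] == b"\xaa\xaa":
        return "802.3 + 802.2 SNAP"
    return "802.3 + 802.2 LLC"

# 12 bytes of MAC addresses, a length field, then 0xFFFF IPX checksum:
raw = b"\x00" * 12 + b"\x01\x00" + b"\xff\xff" + b"\x00" * 20
print(classify_frame(raw))  # -> raw 802.3 (Novell IPX)
```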
	_kelley
 | 
| 2064.7 | window sizes? | WOTVAX::BANKST | Network Mercenary | Mon Mar 06 1995 12:42 | 9 | 
    Is there perhaps a mismatch of TCP window sizes, with the OSF/1
    system having a smaller max window by default than the PC?  So it
    would request a small window when you initiate from the OSF/1, but
    use a bigger window when accepting PC connects?
    
    Or am I crediting TCP/IP with DECnet/OSI like functionality it doesn't
    have?? :-)
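    [Editor's note: the suspected mismatch is plausible; TCP does
    advertise a receive window, bounded by the receiver's socket
    buffer. A quick, hedged way to inspect and raise that bound uses
    the portable SO_RCVBUF socket option; actual default values are
    OS-specific, and the 64 KB figure below is illustrative only:]

```python
import socket

# The receive buffer (SO_RCVBUF) caps the TCP window a host can
# advertise, so comparing defaults on both ends is a quick check for
# a window-size mismatch.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("default SO_RCVBUF:", default_rcvbuf)

# Raising the buffer before connecting raises the window ceiling
# (64 KB here is an arbitrary illustrative value):
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
print("tuned SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```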
    
    Tim
 |