    re .0,
    First things first: an n-dimensional matrix usually means an n x n matrix.
    re .2,
    What you are talking about is a so-called tensor operation (a tensor
    being a kind of generalization of a matrix).  Under this generalization
    a matrix is a tensor of type (1,1).  Matrix multiplication is the same
    as a tensor product followed by a tensor contraction, i.e. if we
    consider matrices A and B as tensors, then the normal definition of
    A*B is the same as C(AXB), where X means tensor product and C means
    contraction (with the appropriate indices).
    Now if you want to deal with tensors of higher types, you have to
    specify whether a given tensor is of type (1,2) or of type (2,1).
    Then you can generalize the so-called "multiplication" in several
    ways, depending on your needs, by combining tensor product and tensor
    contraction.  In short, there are many ways of defining your "high
    dimension matrices" and many ways of defining your "multiplication".
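    (An illustration of the point above, sketched in Python with NumPy;
    the code and variable names below are not from the original posts.
    It recovers the ordinary matrix product as an outer (tensor) product
    followed by a contraction of the two inner indices.)

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((3, 4))      # indices (i, k)
        B = rng.standard_normal((4, 5))      # indices (k', j)

        T = np.multiply.outer(A, B)          # tensor product AXB: indices (i, k, k', j)
        AB = np.trace(T, axis1=1, axis2=2)   # contraction C: sum over k = k'
        print(np.allclose(AB, A @ B))        # True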
    
    Eugene 
                                
>>	>    Is multiplication defined for n-dimensional matrices?  If so,
>>	>    what are the rules for computing the product?
>>    
>>    
>>    yes.  AB(I,J) = SUM over k of A(I,K) * B(K,J)
>>    
>>    Did I understand the question?
	"Matrix" usually implies a two-dimensional array of numbers.
	By "n-dimensional matrices" the author of .0 meant arrays
	with potentially more than two dimensions, not two-dimensional
	arrays where both dimensions have length n.
	You can multiply A by B where A has, say, 3 dimensions and
	B has, say, 2 dimensions by forming C with dim(A) + dim(B) = 5
	dimensions and defining C[i,j,k,l,m] = A[i,j,k] B[l,m].  Then
	you can contract that in various ways, for example, assuming the
	middle A dimension and the final B dimension both have length n,
	you can form
		D[i,k,l] = sum(j = 1,...,n) C[i,j,k,l,j]
			 = sum(j = 1,...,n) A[i,j,k] B[l,j]
	In fact it is common tensor notation (the Einstein summation
	convention) to have a doubled index mean summation over its range,
	so that the above might be written as just D[i,k,l] = A[i,j,k] B[l,j].
	The physics (or whatever) underlying the arbitrary-dimensional
	arrays will determine which dimensions it makes sense to "pair up"
	this way.  The usual "2-d" matrix multiplication just sums along
	the second dimension of one matrix and the first dimension of the
	other (which of course must be the same length).
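	(A sketch of the example above in Python with NumPy; the code is
	not part of the original posts.  It builds the five-index outer
	product C, contracts A's middle index against B's second index,
	and checks that against the "doubled index" form.)

		import numpy as np

		rng = np.random.default_rng(1)
		A = rng.standard_normal((2, 4, 3))   # indices (i, j, k); middle length 4
		B = rng.standard_normal((5, 4))      # indices (l, m);    last   length 4

		# C[i,j,k,l,m] = A[i,j,k] * B[l,m]
		C = np.multiply.outer(A, B)
		# D[i,k,l] = sum over j of C[i,j,k,l,j]
		D = np.einsum('ijklj->ikl', C)
		# Same thing written directly with the doubled index j:
		D2 = np.einsum('ijk,lj->ikl', A, B)
		print(np.allclose(D, D2))            # True

		# The usual 2-d matrix product is the special case
		# np.einsum('ik,kj->ij', A2, B2), i.e. pairing the second
		# index of one matrix with the first index of the other.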
	Dan