Friday, April 8, 2011

Random matrices and the Kronecker product

The distribution of a zero-mean, jointly Gaussian column vector x is pretty basic stuff in probability: we get the covariance matrix R, given by

$$R = E\left[\mathbf{x}\mathbf{x}^T\right],$$

where the superscript T represents transposition. Then we find the probability density function (pdf)

$$f(\mathbf{x}) = \frac{1}{(2\pi)^{k/2}\,\det(R)^{1/2}} \exp\!\left(-\frac{1}{2}\,\mathbf{x}^T R^{-1}\,\mathbf{x}\right),$$

where k is the number of elements in x.
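
As a sanity check, here is a minimal NumPy/SciPy sketch of this formula; the helper name gaussian_pdf and the example covariance are my own choices, not from the post:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, R):
    """Zero-mean jointly Gaussian pdf from the formula above (x is a 1-D array)."""
    k = x.size
    quad = x @ np.linalg.solve(R, x)        # x^T R^{-1} x, without forming R^{-1}
    norm = (2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(R))
    return np.exp(-0.5 * quad) / norm

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
R = M @ M.T + 3 * np.eye(3)                 # an arbitrary positive-definite covariance
x = rng.standard_normal(3)

print(gaussian_pdf(x, R))                   # should match SciPy's built-in density
print(multivariate_normal(mean=np.zeros(3), cov=R).pdf(x))
```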

But suppose you have not a random vector, but a jointly Gaussian, zero-mean k by k random matrix X. How do you express its pdf compactly? And can you compactly represent the pdf of matrix products involving X?

The easiest way to get the pdf of X is to turn X into a vector: for example, we can form the "matrix stack" of X, s(X), by stacking the columns of the matrix on top of each other as a column vector (in the literature, this operation is usually written vec(X)). That is, if X can be written

$$X = \begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_k \end{bmatrix},$$

where x_i is the i-th column of X, then s(X) is given by

$$s(X) = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \vdots \\ \mathbf{x}_k \end{bmatrix}.$$
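
In NumPy, this stacking is just a column-major (Fortran-order) reshape; a quick sketch with my own toy matrix:

```python
import numpy as np

X = np.array([[1, 2],
              [3, 4]])

# s(X): read the entries column by column and stack them into one vector.
sX = X.reshape(-1, 1, order="F")
print(sX.ravel())   # [1 3 2 4] -- first column, then second
```
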
Now, letting

$$R_s = E\left[s(X)\, s(X)^T\right],$$

we can compactly express the pdf of X as

$$f(X) = \frac{1}{(2\pi)^{k^2/2}\,\det(R_s)^{1/2}} \exp\!\left(-\frac{1}{2}\, s(X)^T R_s^{-1}\, s(X)\right),$$

which is just the vector pdf from before, applied to the k²-element vector s(X).
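
Concretely, reusing the gaussian_pdf helper and rng from the first sketch (again my own names, with an identity covariance assumed purely for illustration):

```python
k = 2
Rs = np.eye(k * k)                    # assume i.i.d. unit-variance entries for X
X = rng.standard_normal((k, k))
sX = X.reshape(-1, order="F")         # s(X) as a 1-D vector
print(gaussian_pdf(sX, Rs))           # f(X) evaluated via the stack
```
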
Finally, we add the following twist: say we want to do some signal processing on the random matrix X. In particular, we want to calculate the matrix Y, given by

$$Y = A X B,$$

where A and B are u by k and k by v deterministic matrices, respectively (u and v can be arbitrary, but the inner dimensions must equal X's dimension k so that the multiplications work out). Then Y is a u by v matrix.

To get the pdf of Y, we need to calculate the covariance matrix of its stack s(Y). All we know is the covariance matrix of s(X), so it would be most convenient to express the former in terms of the latter.

To do this, we can use the Kronecker product, represented by ⊗. It is not too hard to show that the stack s(Y) of Y is given by

$$s(Y) = \left(B^T \otimes A\right) s(X).$$
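
This identity is easy to check numerically with np.kron (the dimensions below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
u, k, v = 4, 3, 2
A = rng.standard_normal((u, k))
B = rng.standard_normal((k, v))
X = rng.standard_normal((k, k))

def stack(M):
    """s(M): the columns of M stacked into one column vector."""
    return M.reshape(-1, 1, order="F")

lhs = stack(A @ X @ B)               # s(Y) computed directly
rhs = np.kron(B.T, A) @ stack(X)     # (B^T kron A) s(X)
print(np.allclose(lhs, rhs))         # True
```
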
Thus, we have

$$R_Y = E\left[s(Y)\, s(Y)^T\right] = \left(B^T \otimes A\right) E\left[s(X)\, s(X)^T\right] \left(B^T \otimes A\right)^T = \left(B^T \otimes A\right) R_s \left(B \otimes A^T\right),$$

where the last step uses the transpose property (B^T ⊗ A)^T = B ⊗ A^T. At last, f(Y) can be expressed in terms of s(Y) and s(Y)'s covariance matrix R_Y, just like f(X) was in terms of s(X) and R_s.
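
As a final sanity check, here is a small Monte Carlo sketch of the covariance formula (sample count, dimensions, and the identity choice of R_s are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
u, k, v = 3, 2, 3
A = rng.standard_normal((u, k))
B = rng.standard_normal((k, v))

Rs = np.eye(k * k)                    # X drawn with i.i.d. standard normal entries
K = np.kron(B.T, A)

n = 100_000
sY = np.empty((n, u * v))
for i in range(n):
    X = rng.standard_normal((k, k))
    sY[i] = (A @ X @ B).reshape(-1, order="F")   # s(Y) for this sample

R_empirical = sY.T @ sY / n           # sample covariance of s(Y) (zero mean)
R_theory = K @ Rs @ K.T               # (B^T kron A) R_s (B kron A^T)
print(np.max(np.abs(R_empirical - R_theory)))    # small, shrinking like 1/sqrt(n)
```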

See also: A. Graham, Kronecker Products and Matrix Calculus with Applications, New York: Wiley, 1981.
