For the Kronecker product of representations of symmetric groups, see Kronecker coefficient.
In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a specialization of the tensor product (which is denoted by the same symbol) from vectors to matrices and gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called matrix direct product.[1]
The Kronecker product is named after the German mathematician Leopold Kronecker (1823–1891), even though there is little evidence that he was the first to define and use it. The Kronecker product has also been called the Zehfuss matrix, and the Zehfuss product, after Johann Georg Zehfuss, who in 1858 described this matrix operation, but Kronecker product is currently the most widely used term.[2][3] The misattribution to Kronecker rather than Zehfuss was due to Kurt Hensel.[4]
Definition
If A is an m × n matrix and B is a p × q matrix, then the Kronecker product A ⊗ B is the pm × qn block matrix:

A ⊗ B = ⎡ a11B ⋯ a1nB ⎤
        ⎢  ⋮   ⋱   ⋮  ⎥
        ⎣ am1B ⋯ amnB ⎦

For the usual numbering starting from 1, one obtains the entry of A ⊗ B in row p(i − 1) + k and column q(j − 1) + l as aij bkl, for i = 1, …, m, j = 1, …, n, k = 1, …, p, and l = 1, …, q.
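This block structure is easy to check numerically. The following sketch (assuming NumPy; np.kron is NumPy's built-in Kronecker product, and kron_blocks is an illustrative helper) builds A ⊗ B directly from the definition:

```python
import numpy as np

def kron_blocks(A, B):
    """Kronecker product built block by block from the definition."""
    m, n = A.shape
    p, q = B.shape
    K = np.zeros((m * p, n * q))
    for i in range(m):
        for j in range(n):
            # Block (i, j) of the result is a_ij * B.
            K[i * p:(i + 1) * p, j * q:(j + 1) * q] = A[i, j] * B
    return K

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 5.0], [6.0, 7.0]])
assert np.array_equal(kron_blocks(A, B), np.kron(A, B))
```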
If A and B represent the linear transformations V1 → W1 and V2 → W2, respectively, then the tensor product of the two maps, V1 ⊗ V2 → W1 ⊗ W2, is represented by A ⊗ B.
In general, A ⊗ B and B ⊗ A are different matrices. However, A ⊗ B and B ⊗ A are permutation equivalent, meaning that there exist permutation matrices P and Q such that[5]

A ⊗ B = P (B ⊗ A) Q.

If A and B are square matrices, then A ⊗ B and B ⊗ A are even permutation similar, meaning that we can take P = Qᵀ.
The matrices P and Q are perfect shuffle matrices, known as commutation matrices.[6] The commutation matrix Sp,q can be constructed by taking slices of the Ir identity matrix, where r = pq:

Sp,q = [ Ir(1 : p : r, :) ; Ir(2 : p : r, :) ; … ; Ir(p : p : r, :) ].

MATLAB colon notation is used here to indicate submatrices, and Ir is the r × r identity matrix. The matrix Sp,q satisfies Sp,q vec(X) = vec(Xᵀ) for every p × q matrix X. If A is m × n and B is p × q, then

B ⊗ A = Sp,m (A ⊗ B) Sq,nᵀ.
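The shuffle identity can be verified numerically. The sketch below (assuming NumPy; the helper names are illustrative) builds Sp,q from its defining action Sp,q vec(X) = vec(Xᵀ) and checks the permutation-equivalence formula:

```python
import numpy as np

def commutation_matrix(p, q):
    """S_{p,q} with S_{p,q} @ vec(X) == vec(X.T) for any p-by-q matrix X."""
    S = np.zeros((p * q, p * q))
    for i in range(p):
        for j in range(q):
            # vec(X)[j*p + i] = X[i, j] must land at vec(X.T)[i*q + j].
            S[i * q + j, j * p + i] = 1.0
    return S

def vec(M):
    return M.flatten(order="F")  # stack columns (column-major order)

rng = np.random.default_rng(0)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

# Defining action on a p x q matrix.
X = rng.standard_normal((p, q))
assert np.allclose(commutation_matrix(p, q) @ vec(X), vec(X.T))

# B ⊗ A = S_{p,m} (A ⊗ B) S_{q,n}^T
S_pm = commutation_matrix(p, m)
S_qn = commutation_matrix(q, n)
assert np.allclose(np.kron(B, A), S_pm @ np.kron(A, B) @ S_qn.T)
```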
The mixed-product property:
If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then[7]

(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).

This is called the mixed-product property, because it mixes the ordinary matrix product and the Kronecker product.
As an immediate consequence (obtained by taking identity matrices for two of the four factors),

A ⊗ B = (A ⊗ Ip)(In ⊗ B) = (Im ⊗ B)(A ⊗ Iq).
In particular, using the transpose property from below, this means that if

A = Q ⊗ U

and Q and U are orthogonal (or unitary), then A is also orthogonal (resp., unitary).
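A quick numerical check of the mixed-product property and of the orthogonality consequence, assuming NumPy (np.kron, np.linalg.qr):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3)); C = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 2)); D = rng.standard_normal((2, 6))

# Mixed-product property: (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Consequence: a Kronecker product of orthogonal matrices is orthogonal.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
QU = np.kron(Q, U)
assert np.allclose(QU.T @ QU, np.eye(12))
```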
The mixed Kronecker matrix-vector product can be written as:

(A ⊗ B) vec(V) = vec(BVAᵀ),

where vec(V) is the vectorization operator applied on V (formed by reshaping the matrix into a column vector by stacking its columns).
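This matrix-vector identity can be checked as follows (assuming NumPy; vec is an illustrative helper for column-major vectorization):

```python
import numpy as np

def vec(M):
    return M.flatten(order="F")  # stack the columns of M

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A is m x n
B = rng.standard_normal((5, 6))   # B is p x q
V = rng.standard_normal((6, 4))   # V is q x n so that B V A^T is defined

# (A ⊗ B) vec(V) = vec(B V A^T)
assert np.allclose(np.kron(A, B) @ vec(V), vec(B @ V @ A.T))
```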
The mixed-product property also works for the element-wise product. If A and C are matrices of the same size, and B and D are matrices of the same size, then[7]

(A ⊗ B) ∘ (C ⊗ D) = (A ∘ C) ⊗ (B ∘ D),

where ∘ denotes the Hadamard (element-wise) product.
The inverse of a Kronecker product:
It follows that A ⊗ B is invertible if and only if both A and B are invertible, in which case the inverse is given by

(A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹.
In the language of category theory, the mixed-product property of the Kronecker product (and the more general tensor product) shows that the category MatF of matrices over a field F is in fact a monoidal category, with objects the natural numbers n, morphisms n → m the n×m matrices with entries in F, composition given by matrix multiplication, identity arrows simply the n × n identity matrices In, and the tensor product given by the Kronecker product.[9]
MatF is a concrete skeleton category for the equivalent category FinVectF of finite-dimensional vector spaces over F, whose objects are such finite-dimensional vector spaces V, arrows are F-linear maps L : V → W, and identity arrows are the identity maps of the spaces. The equivalence of categories amounts to simultaneously choosing a basis in every finite-dimensional vector space V over F; matrices' elements represent these maps with respect to the chosen bases; and likewise the Kronecker product is the representation of the tensor product in the chosen bases.
We have the following formula for the matrix exponential, which is useful in some numerical evaluations:[10]

exp(A ⊕ B) = exp(A) ⊗ exp(B),

where A ⊕ B = A ⊗ Im + In ⊗ B is the Kronecker sum of the n × n matrix A and the m × m matrix B.
Kronecker sums appear naturally in physics when considering ensembles of non-interacting systems.[citation needed] Let Hk be the Hamiltonian of the kth such system. Then the total Hamiltonian of the ensemble is

HTot = ⊕k Hk.
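The exponential formula can be checked numerically. The sketch below sticks to symmetric matrices so that the matrix exponential can be computed from an eigendecomposition with plain NumPy (expm_sym is an illustrative helper, not a library function):

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = (A + A.T) / 2   # n x n, symmetric
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2   # m x m, symmetric

# Kronecker sum: A ⊕ B = A ⊗ I_m + I_n ⊗ B (symmetric again).
KS = np.kron(A, np.eye(4)) + np.kron(np.eye(3), B)

# exp(A ⊕ B) = exp(A) ⊗ exp(B)
assert np.allclose(expm_sym(KS), np.kron(expm_sym(A), expm_sym(B)))
```

The identity holds because A ⊗ Im and In ⊗ B commute, so the exponential of their sum factors.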
Let A be an m × n matrix and B a p × q matrix. When the order of the Kronecker product and vectorization is interchanged, the two operations can be linked linearly through a function that involves the commutation matrix Sq,m. That is, vec(A ⊗ B) and vec(A) ⊗ vec(B) have the following relationship:

vec(A ⊗ B) = (In ⊗ Sq,m ⊗ Ip)(vec(A) ⊗ vec(B)).

Furthermore, the above relation can be rearranged in terms of either vec(A ⊗ B) or vec(A) ⊗ vec(B) as follows:

vec(A) ⊗ vec(B) = (In ⊗ Sm,q ⊗ Ip) vec(A ⊗ B),

where Sm,q = Sq,mᵀ = Sq,m⁻¹, since a commutation matrix is a permutation matrix and hence orthogonal.
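A numerical check of the vectorization relation (assuming NumPy; vec and commutation_matrix are illustrative helpers following the convention Sp,q vec(X) = vec(Xᵀ) for a p × q matrix X):

```python
import numpy as np

def vec(M):
    return M.flatten(order="F")  # stack columns

def commutation_matrix(p, q):
    """S_{p,q} with S_{p,q} @ vec(X) == vec(X.T) for any p-by-q matrix X."""
    S = np.zeros((p * q, p * q))
    for i in range(p):
        for j in range(q):
            S[i * q + j, j * p + i] = 1.0
    return S

rng = np.random.default_rng(0)
m, n, p, q = 2, 3, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

# vec(A ⊗ B) = (I_n ⊗ S_{q,m} ⊗ I_p)(vec(A) ⊗ vec(B))
T = np.kron(np.eye(n), np.kron(commutation_matrix(q, m), np.eye(p)))
assert np.allclose(vec(np.kron(A, B)), T @ np.kron(vec(A), vec(B)))
```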
Outer product:
If u and v are arbitrary vectors, then the outer product between u and v is defined as uvᵀ. The Kronecker product is related to the outer product by: u ⊗ v = vec(vuᵀ).
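The outer-product relation can be confirmed in a few lines (assuming NumPy, where np.outer(v, u) forms v uᵀ):

```python
import numpy as np

def vec(M):
    return M.flatten(order="F")  # stack columns

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

# u ⊗ v = vec(v u^T): column i of v u^T is u_i * v, and stacking
# those columns reproduces the Kronecker product of the vectors.
assert np.allclose(np.kron(u, v), vec(np.outer(v, u)))
```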
Suppose that A and B are square matrices of size n and m respectively. Let λ1, ..., λn be the eigenvalues of A and μ1, ..., μm be those of B (listed according to multiplicity). Then the eigenvalues of A ⊗ B are

λi μj,   i = 1, …, n,  j = 1, …, m.

It follows that the trace and determinant of a Kronecker product are given by

tr(A ⊗ B) = tr(A) tr(B)   and   det(A ⊗ B) = (det A)ᵐ (det B)ⁿ.
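A numerical check of the spectrum, trace, and determinant formulas, assuming NumPy and restricting to symmetric matrices so the eigenvalues are real and can be compared after sorting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2  # symmetric n x n
B = rng.standard_normal((m, m)); B = (B + B.T) / 2  # symmetric m x m

# Eigenvalues of A ⊗ B are all products λ_i μ_j.
lam = np.linalg.eigvalsh(A)
mu = np.linalg.eigvalsh(B)
products = np.sort(np.kron(lam, mu))
eigs = np.sort(np.linalg.eigvalsh(np.kron(A, B)))
assert np.allclose(products, eigs)

# tr(A ⊗ B) = tr(A) tr(B),  det(A ⊗ B) = det(A)^m det(B)^n.
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A) ** m * np.linalg.det(B) ** n)
```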
The Kronecker product of matrices corresponds to the abstract tensor product of linear maps. Specifically, if the vector spaces V, W, X, and Y have bases {v1, ..., vm}, {w1, ..., wn}, {x1, ..., xd}, and {y1, ..., ye}, respectively, and if the matrices A and B represent the linear transformations S : V → X and T : W → Y, respectively, in the appropriate bases, then the matrix A ⊗ B represents the tensor product of the two maps, S ⊗ T : V ⊗ W → X ⊗ Y, with respect to the basis {v1 ⊗ w1, v1 ⊗ w2, ..., v2 ⊗ w1, ..., vm ⊗ wn} of V ⊗ W and the similarly defined basis of X ⊗ Y, with the property that (A ⊗ B)(vi ⊗ wj) = (Avi) ⊗ (Bwj), where i and j are integers in the proper range.[11]
The Kronecker product can be used to get a convenient representation for some matrix equations. Consider for instance the equation AXB = C, where A, B and C are given matrices and the matrix X is the unknown. We can use the "vec trick" to rewrite this equation as

(Bᵀ ⊗ A) vec(X) = vec(AXB) = vec(C).

Here, vec(X) denotes the vectorization of the matrix X, formed by stacking the columns of X into a single column vector.
It now follows from the properties of the Kronecker product that the equation AXB = C has a unique solution if and only if A and B are invertible (Horn & Johnson 1991, Lemma 4.3.1).
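The vec trick turns the matrix equation into an ordinary linear system, which can be solved directly (a sketch assuming NumPy):

```python
import numpy as np

def vec(M):
    return M.flatten(order="F")  # stack columns

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # invertible with probability 1
B = rng.standard_normal((4, 4))
X_true = rng.standard_normal((3, 4))
C = A @ X_true @ B

# AXB = C  <=>  (B^T ⊗ A) vec(X) = vec(C)
x = np.linalg.solve(np.kron(B.T, A), vec(C))
X = x.reshape((3, 4), order="F")
assert np.allclose(X, X_true)
```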
If X and C are row-ordered into the column vectors u and v, respectively, then (Jain 1989, 2.8 Block Matrices and Kronecker Products)

v = (A ⊗ Bᵀ) u.
Another application arises when a matrix can be factored as a Kronecker product: multiplication by that matrix can then be performed faster by using the formula above. This can be applied recursively, as done in the radix-2 FFT and the Fast Walsh–Hadamard transform. Splitting a known matrix into the Kronecker product of two smaller matrices is known as the "nearest Kronecker product" problem, and can be solved exactly[13] by using the SVD. To split a matrix into the Kronecker product of more than two matrices, in an optimal fashion, is a difficult problem and the subject of ongoing research; some authors cast it as a tensor decomposition problem.[14][15]
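The nearest Kronecker product problem can be sketched as follows, assuming NumPy: rearrange the blocks of M into a matrix whose best rank-1 approximation (computed with the SVD) yields vec(A) and vec(B). This follows the Van Loan–Pitsianis rearrangement idea; the function name and argument layout here are illustrative:

```python
import numpy as np

def nearest_kronecker(M, m, n, p, q):
    """Minimize ||M - A ⊗ B||_F over A (m x n) and B (p x q).

    Assumes M has shape (m*p, n*q). Each p x q block of M becomes one
    row of a rearranged matrix R, so that ||M - A ⊗ B||_F equals
    ||R - vec(A) vec(B)^T||_F; the best rank-1 factor comes from the SVD.
    """
    assert M.shape == (m * p, n * q)
    R = np.empty((m * n, p * q))
    for j in range(n):
        for i in range(m):
            block = M[i * p:(i + 1) * p, j * q:(j + 1) * q]
            R[j * m + i, :] = block.flatten(order="F")  # row in vec(A) order
    U, s, Vt = np.linalg.svd(R)
    a = np.sqrt(s[0]) * U[:, 0]
    b = np.sqrt(s[0]) * Vt[0, :]
    A = a.reshape((m, n), order="F")
    B = b.reshape((p, q), order="F")
    return A, B

# If M is exactly a Kronecker product, it is recovered exactly
# (the sign ambiguity of the SVD cancels in A ⊗ B).
rng = np.random.default_rng(1)
A0 = rng.standard_normal((2, 3))
B0 = rng.standard_normal((4, 5))
M = np.kron(A0, B0)
A, B = nearest_kronecker(M, 2, 3, 4, 5)
assert np.allclose(np.kron(A, B), M)
```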
Two related matrix operations are the Tracy–Singh and Khatri–Rao products, which operate on partitioned matrices. Let the m × n matrix A be partitioned into the mi × nj blocks Aij and the p × q matrix B into the pk × qℓ blocks Bkℓ, with of course Σi mi = m, Σj nj = n, Σk pk = p and Σℓ qℓ = q. The Tracy–Singh product is defined as

A ∘ B = (Aij ∘ B)ij = ((Aij ⊗ Bkℓ)kℓ)ij,

which means that the (ij)-th subblock of the mp × nq product A ∘ B is the mi p × nj q matrix Aij ∘ B, of which the (kℓ)-th subblock equals the mi pk × nj qℓ matrix Aij ⊗ Bkℓ. Essentially the Tracy–Singh product is the pairwise Kronecker product for each pair of partitions in the two matrices.
For example, if A and B are both 2 × 2 partitioned matrices:
Van Loan, C.; Pitsianis, N. (1992). Approximation with Kronecker Products. Ithaca, NY: Cornell University Press.
King Keung Wu; Yam, Yeung; Meng, Helen; Mesbahi, Mehran (2016). "Kronecker product approximation with multiple factor matrices via the tensor product algorithm". 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). pp. 004277–004282. doi:10.1109/SMC.2016.7844903. ISBN 978-1-5090-1897-0. S2CID 30695585.
Liu, Shuangzhe; Trenkler, Götz (2008). "Hadamard, Khatri-Rao, Kronecker and other matrix products". International Journal of Information and Systems Sciences. 4 (1): 160–177.
Ahle, Thomas Dybdahl; Knudsen, Jakob Bæk Tejs (2019-09-03). "Almost optimal tensor sketch". arXiv:1909.01821 [cs.DS].
Ninh, Pham; Pagh, Rasmus (2013). Fast and scalable polynomial kernels via explicit feature maps. SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery. CiteSeerX 10.1.1.718.2766. doi:10.1145/2487575.2487591.
Steeb, Willi-Hans (1997). Matrix Calculus and Kronecker Product with Applications and C++ Programs. World Scientific Publishing. ISBN 978-981-02-3241-2.