In mathematics, a Euclidean distance matrix is an n×n matrix representing the spacing of a set of n points in Euclidean space.
For points x_1, x_2, …, x_n in k-dimensional space ℝk, the elements of their Euclidean distance matrix A are given by squares of distances between them. That is, A = (a_ij), where

a_ij = d_ij² = ‖x_i − x_j‖²

and d_ij denotes the distance between x_i and x_j.
In the context of (not necessarily Euclidean) distance matrices, the entries are usually defined directly as distances, not their squares.
However, in the Euclidean case, squares of distances are used to avoid computing square roots and to simplify relevant theorems and algorithms.
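As an illustration of working directly with squared distances, here is a minimal sketch (assuming NumPy; the helper name `edm` is ours, not standard):

```python
import numpy as np

def edm(points):
    """Squared Euclidean distance matrix of the rows of `points`."""
    # Uses ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j,
    # so no square roots are ever taken.
    sq = np.sum(points**2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * points @ points.T

pts = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0]])
A = edm(pts)
# A[0, 2] = 3^2 + 4^2 = 25, the squared distance from (0,0) to (3,4)
```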
In practical applications, distances are noisy measurements or come from arbitrary dissimilarity estimates (not necessarily metric).
The goal may be to visualize such data by points in Euclidean space whose distance matrix approximates a given dissimilarity matrix as well as possible — this is known as multidimensional scaling.
Alternatively, given two sets of data already represented by points in Euclidean space, one may ask how similar they are in shape, that is, how closely can they be related by a distance-preserving transformation — this is Procrustes analysis.
Some of the distances may also be missing or come unlabelled (as an unordered set or multiset instead of a matrix), leading to more complex algorithmic tasks, such as the graph realization problem or the turnpike problem (for points on a line).[1][2]
Properties
By the fact that Euclidean distance is a metric, the matrix A has the following properties: its entries are non-negative, its diagonal is zero (A is a hollow matrix), it is symmetric (a_ij = a_ji), and its entries satisfy the triangle inequality √a_ij ≤ √a_ik + √a_kj (the square roots appear because the entries are squared distances).
In dimension k, a Euclidean distance matrix has rank less than or equal to k+2. If the points are in general position, the rank is exactly min(n, k + 2).
Distances can be shrunk by any power to obtain another Euclidean distance matrix. That is, if A = (a_ij) is a Euclidean distance matrix, then (a_ij^s) is a Euclidean distance matrix for every 0 < s < 1.[3]
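The rank bound is easy to probe numerically; a small sketch with NumPy (random points, with sizes and seed chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 10
pts = rng.standard_normal((n, k))                   # n random points in R^k
sq = np.sum(pts**2, axis=1)
A = sq[:, None] + sq[None, :] - 2 * pts @ pts.T     # squared-distance matrix

# Rank is at most k + 2; random points are in general position,
# so here the rank is exactly min(n, k + 2) = 5.
assert np.linalg.matrix_rank(A, tol=1e-8) == k + 2
```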
Relation to Gram matrix
The Gram matrix of a sequence of points x_1, x_2, …, x_n in k-dimensional space ℝk is the n×n matrix G = (g_ij) of their dot products (here a point x_i is thought of as a vector from 0 to that point):

g_ij = x_i · x_j = ‖x_i‖ ‖x_j‖ cos θ, where θ is the angle between the vectors x_i and x_j.

In particular,

g_ii = ‖x_i‖²

is the square of the distance of x_i from 0.
Thus the Gram matrix describes norms and angles of the vectors (from 0 to) x_1, x_2, …, x_n.
Let X be the k×n matrix containing x_1, x_2, …, x_n as columns.
Then

G = XᵀX, because g_ij = x_iᵀ x_j (seeing x_i as a column vector).

Matrices that can be decomposed as XᵀX, that is, Gram matrices of some sequence of vectors (the columns of X), are well understood: these are precisely the positive semidefinite matrices.
To relate the Euclidean distance matrix to the Gram matrix, observe that

a_ij = ‖x_i − x_j‖² = ‖x_i‖² + ‖x_j‖² − 2 x_i · x_j = g_ii + g_jj − 2 g_ij.

That is, the norms and angles determine the distances.
Note that the Gram matrix contains additional information: the distances from 0.
Conversely, distances a_ij between pairs of n+1 points x_0, x_1, …, x_n determine dot products between the n vectors x_i − x_0 (1 ≤ i ≤ n):

g_ij = (x_i − x_0) · (x_j − x_0) = (a_0i + a_0j − a_ij) / 2.
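This recovery of dot products from distances alone can be checked numerically (a NumPy sketch with arbitrary random points):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.standard_normal((5, 3))                   # points x_0, ..., x_4 in R^3
sq = np.sum(pts**2, axis=1)
A = sq[:, None] + sq[None, :] - 2 * pts @ pts.T     # a_ij = ||x_i - x_j||^2

# g_ij = (a_0i + a_0j - a_ij) / 2 for the vectors x_i - x_0 (i >= 1),
# built from the distance matrix only.
G = (A[0, 1:][:, None] + A[0, 1:][None, :] - A[1:, 1:]) / 2

V = pts[1:] - pts[0]                                # the actual vectors x_i - x_0
assert np.allclose(G, V @ V.T)
```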
For an n×n matrix A, a sequence of points x_1, x_2, …, x_n in k-dimensional Euclidean space ℝk is called a realization of A in ℝk if A is their Euclidean distance matrix.
One can assume without loss of generality that x_1 = 0 (because translating by −x_1 preserves distances).
Theorem[4] (Schoenberg criterion,[5] independently shown by Young & Householder[6]) — A symmetric hollow n×n matrix A with real entries admits a realization in ℝk if and only if the (n−1)×(n−1) matrix G = (g_ij) (2 ≤ i, j ≤ n) defined by

g_ij = (a_1i + a_1j − a_ij) / 2

is positive semidefinite and has rank at most k.
This follows from the previous discussion, because G is positive semidefinite of rank at most k if and only if it can be decomposed as G = XᵀX, where X is a k×(n−1) matrix.[7]
Moreover, the columns of X, together with x_1 = 0, give a realization in ℝk.
Therefore, any method to decompose G allows one to find a realization.
The two main approaches are variants of Cholesky decomposition or using spectral decompositions to find the principal square root of G, see Definite matrix#Decomposition.
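The spectral route can be sketched as follows (a minimal NumPy illustration; the function name `realize` and the clipping of round-off eigenvalues are choices of this sketch, not part of the theorem):

```python
import numpy as np

def realize(A, k):
    """Points in R^k realizing the squared-distance matrix A
    (assumes A passes the Schoenberg criterion for dimension k)."""
    # Gram matrix of x_2, ..., x_n relative to x_1 = 0.
    G = (A[0, 1:][:, None] + A[0, 1:][None, :] - A[1:, 1:]) / 2
    # Spectral decomposition gives G = X^T X with X = diag(sqrt(w)) V^T.
    w, V = np.linalg.eigh(G)
    w = np.clip(w, 0.0, None)               # clear tiny negative round-off
    idx = np.argsort(w)[::-1][:k]           # keep the k largest eigenvalues
    X = np.sqrt(w[idx])[:, None] * V[:, idx].T
    # Prepend x_1 = 0; return the points as rows.
    return np.vstack([np.zeros(k), X.T])

# Round trip: the distance matrix of a rectangle is realized exactly.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
sq = np.sum(pts**2, axis=1)
A = sq[:, None] + sq[None, :] - 2 * pts @ pts.T
Y = realize(A, 2)
sqy = np.sum(Y**2, axis=1)
B = sqy[:, None] + sqy[None, :] - 2 * Y @ Y.T
assert np.allclose(A, B)                    # same distances, up to rigid motion
```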
The statement of the theorem distinguishes the first point x_1. A more symmetric variant of the same theorem is the following:
Corollary[8] — A symmetric hollow n×n matrix A with real entries admits a realization if and only if A is negative semidefinite on the hyperplane H = {v ∈ ℝn : v_1 + ⋯ + v_n = 0}, that is

vᵀAv ≤ 0 for every v ∈ ℝn with v_1 + ⋯ + v_n = 0.
In practice, the definiteness or rank conditions may fail due to numerical errors, noise in measurements, or due to the data not coming from actual Euclidean distances.
Points that realize optimally similar distances can then be found by semidefinite approximation (and low rank approximation, if desired) using linear algebraic tools such as singular value decomposition or semidefinite programming.
This is known as multidimensional scaling.
Variants of these methods can also deal with incomplete distance data.
Unlabeled data, that is, a set or multiset of distances not assigned to particular pairs, is much more difficult to deal with.
Such data arises, for example, in DNA sequencing (specifically, genome recovery from partial digest) or phase retrieval.
Two sets of points are called homometric if they have the same multiset of distances (but are not necessarily related by a rigid transformation).
Deciding whether a given multiset of n(n-1)/2 distances can be realized in a given dimension k is strongly NP-hard.
In one dimension this is known as the turnpike problem; it is an open question whether it can be solved in polynomial time.
When the multiset of distances is given with error bars, even the one dimensional case is NP-hard.
Nevertheless, practical algorithms exist for many cases, e.g. random points.[10][11][12]
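For the exact, noise-free one-dimensional case, the classical place-the-largest-remaining-distance backtracking often works well in practice (a Python sketch; the name `turnpike` is ours, and worst-case running time is still exponential):

```python
from collections import Counter

def turnpike(dists):
    """Reconstruct points on a line from the multiset of their pairwise
    distances. Returns a sorted list of positions with the leftmost point
    at 0, or None if no realization exists."""
    D = Counter(dists)
    width = max(D)                       # the largest distance is the total span
    D[width] -= 1
    points = [0, width]

    def place(D):
        if sum(D.values()) == 0:
            return True                  # every distance has been accounted for
        y = max(d for d, c in D.items() if c > 0)
        # The point realizing the largest remaining distance y must sit
        # at y or at width - y; try both and backtrack on failure.
        for cand in (y, width - y):
            new = Counter(abs(cand - p) for p in points)
            if all(D[d] >= c for d, c in new.items()):
                D.subtract(new)
                points.append(cand)
                if place(D):
                    return True
                points.pop()
                D.update(new)            # undo and try the other placement
        return False

    return sorted(points) if place(D) else None

# Distances of the points {0, 2, 4, 7} are {2, 2, 3, 4, 5, 7}.
print(turnpike([2, 4, 7, 2, 5, 3]))   # -> [0, 3, 5, 7], a mirror image of {0, 2, 4, 7}
```

Note that the algorithm may return a reflected copy of the original points, since mirror images are homometric.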
Uniqueness of representations
Theorem — Let x_1, x_2, …, x_n and y_1, y_2, …, y_n be two sequences of points in k-dimensional Euclidean space ℝk.
The distances ‖x_i − x_j‖ and ‖y_i − y_j‖ are equal (for all 1 ≤ i, j ≤ n) if and only if there is a rigid transformation of ℝk mapping x_i to y_i (for all 1 ≤ i ≤ n).
Proof
Rigid transformations preserve distances so one direction is clear.
Suppose the distances ‖x_i − x_j‖ and ‖y_i − y_j‖ are equal.
Without loss of generality we can assume x_1 = y_1 = 0, by translating the points by −x_1 and −y_1, respectively.
Then the (n−1)×(n−1) Gram matrix of the remaining vectors x_2, …, x_n is identical to the Gram matrix of the vectors y_2, …, y_n, because the Gram matrix is determined by the distances (by the formula relating dot products to distances above).
That is, XᵀX = YᵀY, where X and Y are the k×(n−1) matrices containing the respective vectors as columns.
This implies there exists an orthogonal k×k matrix Q such that QX = Y; see Definite symmetric matrix#Uniqueness up to unitary transformations.
Q describes an orthogonal transformation of ℝk (a composition of rotations and reflections, without translations) which maps x_i to y_i for 2 ≤ i ≤ n (and 0 to 0).
The final rigid transformation is described by T(x) = Q(x − x_1) + y_1.
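In practice, the orthogonal matrix Q in the proof can be computed from the singular value decomposition of YXᵀ (the orthogonal Procrustes solution); a NumPy sketch with a randomly chosen hidden rotation:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 6
X = rng.standard_normal((k, n))              # vectors x_i as columns

# Apply a hidden orthogonal map Q0 (a random rotation/reflection via QR).
Q0, _ = np.linalg.qr(rng.standard_normal((k, k)))
Y = Q0 @ X                                   # then X^T X = Y^T Y

# Recover an orthogonal Q with Q X = Y from the SVD of Y X^T
# (the orthogonal Procrustes solution).
U, _, Vt = np.linalg.svd(Y @ X.T)
Q = U @ Vt
assert np.allclose(Q @ X, Y)
assert np.allclose(Q.T @ Q, np.eye(k))       # Q is orthogonal
```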
^ Schoenberg, I. J. (1935). "Remarks to Maurice Fréchet's Article "Sur La Definition Axiomatique D'Une Classe D'Espace Distances Vectoriellement Applicable Sur L'Espace De Hilbert"". Annals of Mathematics. 36 (3): 724–732. doi:10.2307/1968654. ISSN 0003-486X. JSTOR 1968654.
^ Young, Gale; Householder, A. S. (1938-03-01). "Discussion of a set of points in terms of their mutual distances". Psychometrika. 3 (1): 19–22. doi:10.1007/BF02287916. ISSN 1860-0980. S2CID 122400126.