In probability theory and statistics, the covariance function describes how much two random variables change together (their covariance) with varying spatial or temporal separation. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x, y) gives the covariance of the values of the random field at the two locations x and y:

C(x, y) := Cov(Z(x), Z(y)) = E[(Z(x) - E[Z(x)]) (Z(y) - E[Z(y)])].
The same C(x, y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that x and y refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(Z(x1), Y(x2))).[1]
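As a rough numerical illustration of this definition (a minimal sketch using a synthetic process chosen only for convenience, not one taken from the cited sources), the covariance at two locations can be estimated by averaging over many independent realizations of the process:

```python
import numpy as np

# Minimal sketch: estimate C(x, y) = Cov(Z(x), Z(y)) by averaging over many
# independent realizations of a synthetic random process (illustrative choice:
# Z(x) = A*cos(x) + B*sin(x) with independent standard-normal A and B, whose
# covariance function is exactly cos(x - y)).
rng = np.random.default_rng(0)

n_realizations = 100_000
A = rng.standard_normal(n_realizations)
B = rng.standard_normal(n_realizations)

def Z(x):
    """Values of all realizations of the process at location x."""
    return A * np.cos(x) + B * np.sin(x)

x, y = 1.3, 2.9
empirical = np.cov(Z(x), Z(y))[0, 1]   # sample covariance across realizations
print(f"empirical C(x, y) ~ {empirical:.3f}, exact value = {np.cos(x - y):.3f}")
```

The process Z(x) = A cos(x) + B sin(x) is used here only because its covariance function, cos(x - y), is known in closed form, so the Monte Carlo estimate can be checked directly.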
Admissibility
For locations x1, x2, ..., xN ∈ D, the variance of every linear combination

X = Σi=1..N wi Z(xi) = w1 Z(x1) + w2 Z(x2) + ... + wN Z(xN)

can be computed as

Var(X) = Σi=1..N Σj=1..N wi wj C(xi, xj).
A function is a valid covariance function if and only if[2] this variance is non-negative for all possible choices of N and weights w1, ..., wN. A function with this property is called positive semidefinite.
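On any finite set of locations this admissibility condition can be checked numerically: form the matrix with entries C(xi, xj) and verify that the quadratic form Σi Σj wi wj C(xi, xj) is non-negative, or equivalently that the matrix has no negative eigenvalues. The sketch below is an illustrative check only (passing it for one finite set of locations does not prove validity in general); it uses the exponential covariance function defined in the next paragraph and, as a contrast, a boxcar-shaped candidate that is not a valid covariance function:

```python
import numpy as np

def covariance_matrix(cov_fn, locations):
    """Matrix with entries C[i, j] = cov_fn(d(x_i, x_j)) for 1-D locations."""
    locations = np.asarray(locations, dtype=float)
    d = np.abs(locations[:, None] - locations[None, :])   # pairwise distances
    return cov_fn(d)

def passes_psd_check(matrix, tol=1e-8):
    """True if no eigenvalue falls below -tol (finite-sample admissibility check)."""
    return np.linalg.eigvalsh(matrix).min() >= -tol

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=50)          # locations x1, ..., xN in D
w = rng.standard_normal(50)                  # weights w1, ..., wN

# Exponential covariance function (V = 2): a valid, positive semidefinite choice.
C_exp = covariance_matrix(lambda d: np.exp(-d / 2.0), x)
print("variance of linear combination:", w @ C_exp @ w)   # non-negative
print("exponential kernel passes:", passes_psd_check(C_exp))

# A boxcar candidate (1 if d < 1, else 0) is NOT a valid covariance function
# and typically produces negative eigenvalues on a finite set of locations.
C_box = covariance_matrix(lambda d: (d < 1.0).astype(float), x)
print("boxcar candidate passes:", passes_psd_check(C_box))
```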
For a given variance, a simple stationary parametric covariance function is the "exponential covariance function"

C(d) = exp(-d/V),

where V is a scaling parameter (correlation length), and d = d(x, y) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth. The "squared exponential" (or "Gaussian") covariance function

C(d) = exp(-d²/V²)
is a stationary covariance function with smooth sample paths.
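A rough numerical illustration of this smoothness contrast (a sketch under assumed parameter values, not part of the cited material) is to draw one zero-mean Gaussian process sample path under each covariance function on the same grid and compare how wiggly the paths are, for example via the size of successive increments:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 400)              # grid of locations
d = np.abs(x[:, None] - x[None, :])          # pairwise distances d(x, y)
V = 1.0                                      # correlation length (illustrative value)

kernels = {
    "exponential": np.exp(-d / V),                 # rough sample paths
    "squared exponential": np.exp(-d**2 / V**2),   # smooth sample paths
}

for name, C in kernels.items():
    # One sample path of a zero-mean Gaussian process with covariance matrix C;
    # the small jitter keeps the Cholesky factorization numerically stable.
    L = np.linalg.cholesky(C + 1e-6 * np.eye(len(x)))
    z = L @ rng.standard_normal(len(x))
    # Mean absolute increment between neighbouring grid points: a crude
    # roughness measure that is noticeably larger for the exponential kernel.
    print(f"{name:>20s}: mean |increment| = {np.mean(np.abs(np.diff(z))):.4f}")
```

With the same grid and correlation length, the exponential path typically shows much larger point-to-point increments than the squared-exponential path, matching the rough-versus-smooth behaviour described above.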