Blum's speedup theorem

In computational complexity theory, Blum's speedup theorem, first stated by Manuel Blum in 1967, is a fundamental theorem about the complexity of computable functions.

Each computable function has infinitely many different program representations in a given programming language. In the theory of algorithms one often strives to find a program with the smallest complexity for a given computable function and a given complexity measure (such a program could be called optimal). Blum's speedup theorem shows that for any complexity measure, there exists a computable function with no optimal program: for every program computing it, there is another program of lower complexity. This also rules out the idea that there is a way to assign to an arbitrary function $f$ its computational complexity, understood as the complexity of an optimal program for $f$. This does not, of course, exclude the possibility of finding the complexity of an optimal program for certain specific functions.
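In symbols (the notation here is illustrative and not part of Blum's original formulation): writing $\Phi_i(x)$ for the complexity of program $i$ on input $x$ under the chosen measure, a program $i$ computing a given function would be optimal if, for every program $j$ computing the same function,

$\Phi_i(x) \leq \Phi_j(x)$ for almost all $x$.

The speedup theorem exhibits a computable function for which no program is optimal in this sense.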

Speedup theorem

Given a Blum complexity measure $(\varphi, \Phi)$ and a total computable function $f$ with two parameters, then there exists a total computable predicate $g$ (a boolean-valued computable function) so that for every program $i$ for $g$, there exists a program $j$ for $g$ so that for almost all $x$

$f(x, \Phi_j(x)) \leq \Phi_i(x)$

$f$ is called the speedup function. The fact that it may be as fast-growing as desired (as long as it is computable) means that the phenomenon of always having a program of smaller complexity remains even if by "smaller" we mean "significantly smaller" (for instance, quadratically smaller or exponentially smaller).
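As a concrete illustration (a standard instance, with the speedup function chosen here only as an example), take $f(x, y) = 2^y$. The theorem then gives a computable predicate $g$ such that for every program $i$ for $g$ there is a program $j$ for $g$ with

$2^{\Phi_j(x)} \leq \Phi_i(x)$, i.e. $\Phi_j(x) \leq \log_2 \Phi_i(x)$,

for almost all $x$: every program for $g$ admits an exponentially faster program, which in turn admits another, and so on.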
