The Mersenne Twister was designed specifically to rectify most of the flaws found in older PRNGs.
The most commonly used version of the Mersenne Twister algorithm is based on the Mersenne prime 2^19937 − 1. The standard implementation of that, MT19937, uses a 32-bit word length. There is another implementation (with five variants[3]) that uses a 64-bit word length, MT19937-64; it generates a different sequence.
k-distribution
A pseudorandom sequence of w-bit integers of period P is said to be k-distributed to v-bit accuracy if the following holds.
Let trunc_v(x) denote the number formed by the leading v bits of x, and consider P of the k·v-bit vectors
(trunc_v(x_i), trunc_v(x_{i+1}), …, trunc_v(x_{i+k−1})),    0 ≤ i < P.
Then each of the 2^(kv) possible combinations of bits occurs the same number of times in a period, except for the all-zero combination that occurs once less often.
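As an illustration of the definition, trunc_v is just a right shift that keeps the leading v bits. The following minimal C sketch assumes w = 32; the function name is chosen here for illustration only. A k-distribution test would then tally, over one full period, every k-tuple (trunc_v(x_i), …, trunc_v(x_{i+k−1})) and compare the counts.

#include <stdint.h>

// Illustrative only: trunc_v(x) from the definition above, assuming a
// w = 32-bit word; valid for 1 <= v <= 32 (a shift by 32 would be undefined).
static uint32_t trunc_v(uint32_t x, int v)
{
    return x >> (32 - v);   // keep the leading v bits, right-aligned
}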
Algorithmic detail
For a w-bit word length, the Mersenne Twister generates integers in the range [0, 2^w − 1].
The Mersenne Twister algorithm is based on a matrix linear recurrence over the finite binary field F2. The algorithm is a twisted generalised feedback shift register[4] (twisted GFSR, or TGFSR) of rational normal form (TGFSR(R)), with state bit reflection and tempering. The basic idea is to define a series x_i through a simple recurrence relation, and then output numbers of the form x_i T, where T is an invertible F2-matrix called a tempering matrix.
The general algorithm is characterized by the following quantities:
w: word size (in number of bits)
n: degree of recurrence
m: middle word, an offset used in the recurrence relation defining the series x, 1 ≤ m < n
r: separation point of one word, or the number of bits of the lower bitmask, 0 ≤ r ≤ w − 1
a: coefficients of the rational normal form twist matrix
b, c: TGFSR(R) tempering bitmasks
s, t: TGFSR(R) tempering bit shifts
u, d, l: additional Mersenne Twister tempering bit shifts/masks
with the restriction that 2^(nw − r) − 1 is a Mersenne prime. This choice simplifies the primitivity test and k-distribution test that are needed in the parameter search.
The series x is defined as a series of w-bit quantities with the recurrence relation:
x_{k+n} := x_{k+m} ⊕ ((x_k^u ∥ x_{k+1}^l) A),    k = 0, 1, 2, …
where ∥ denotes concatenation of bit vectors (with upper bits on the left), ⊕ the bitwise exclusive or (XOR), x_k^u means the upper w − r bits of x_k, and x_{k+1}^l means the lower r bits of x_{k+1}.
The subscripts may all be offset by −n:
x_k := x_{k−(n−m)} ⊕ ((x_{k−n}^u ∥ x_{k−(n−1)}^l) A),    k = n, n + 1, n + 2, …
where now the LHS, x_k, is the next generated value in the series in terms of values generated in the past, which are on the RHS.
The twist transformation A is defined in rational normal form as:

A = ( 0         I_{w−1}          )
    ( a_{w−1}   a_{w−2} ⋯ a_0    )

with I_{w−1} as the (w − 1) × (w − 1) identity matrix. The rational normal form has the benefit that multiplication by A can be efficiently expressed as (remember that here matrix multiplication is being done in F2, and therefore bitwise XOR takes the place of addition):

x A = { x >> 1           if x_0 = 0
      { (x >> 1) ⊕ a    if x_0 = 1

where x_0 is the lowest order bit of x.
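In code, this case analysis is a single shift and a conditional XOR. A minimal sketch (assuming w = 32 and the MT19937 value of a shown in the C code further below; the function name is an illustrative choice, not canonical):

#include <stdint.h>

// Sketch of x -> xA for the twist matrix in rational normal form, w = 32,
// using the MT19937 constant a = 0x9908b0df (see the C code below).
static uint32_t mult_A(uint32_t x)
{
    uint32_t xA = x >> 1;       // xA = x >> 1 when the lowest-order bit x_0 is 0
    if (x & 1u)
        xA ^= 0x9908b0dfUL;     // ... XORed with a when x_0 is 1
    return xA;
}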
Like TGFSR(R), the Mersenne Twister is cascaded with a tempering transform to compensate for the reduced dimensionality of equidistribution (because of the choice of A being in the rational normal form). Note that this is equivalent to using the conjugated matrix T^(−1)AT for T an invertible matrix, and therefore the analysis of the characteristic polynomial mentioned below still holds.
As with A, we choose a tempering transform to be easily computable, and so do not actually construct T itself. This tempering is defined in the case of Mersenne Twister as
y := x ⊕ ((x >> u) & d)
y := y ⊕ ((y << s) & b)
y := y ⊕ ((y << t) & c)
z := y ⊕ (y >> l)
where x is the next value from the series, y is a temporary intermediate value, and z is the value returned from the algorithm, with << and >> as the bitwise left and right shifts, and & as the bitwise AND. The first and last transforms are added in order to improve lower-bit equidistribution. From the property of TGFSR, s + t ≥ ⌊w/2⌋ − 1 is required to reach the upper bound of equidistribution for the upper bits.
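Written out in C with the MT19937 constants listed next, the tempering is four shift-mask-XOR steps. This is a sketch only; the function name is an illustrative choice.

#include <stdint.h>

// Sketch of the MT19937 tempering transform (u, d, s, b, t, c, l as below).
static uint32_t temper(uint32_t x)
{
    uint32_t y = x ^ ((x >> 11) & 0xffffffffUL);   // (x >> u) & d; d is all ones, so the AND is a no-op
    y ^= (y << 7)  & 0x9d2c5680UL;                 // (y << s) & b
    y ^= (y << 15) & 0xefc60000UL;                 // (y << t) & c
    return y ^ (y >> 18);                          // y >> l
}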
The coefficients for MT19937 are:
(w, n, m, r) = (32, 624, 397, 31)
a = 0x9908B0DF
(u, d) = (11, 0xFFFFFFFF)
(s, b) = (7, 0x9D2C5680)
(t, c) = (15, 0xEFC60000)
l = 18
Note that 32-bit implementations of the Mersenne Twister generally have d = 0xFFFFFFFF. As a result, d is occasionally omitted from the algorithm description, since the bitwise AND with d in that case has no effect.
The state needed for a Mersenne Twister implementation is an array of n values of w bits each. To initialize the array, a w-bit seed value is used to supply x_0 through x_{n−1} by setting x_0 to the seed value and thereafter setting
x_i = f × (x_{i−1} ⊕ (x_{i−1} >> (w − 2))) + i
for i from 1 to n − 1.
The first value the algorithm then generates is based on x_n, not on x_0.
The constant f forms another parameter to the generator, though not part of the algorithm proper.
The value for f for MT19937 is 1812433253.
The value for f for MT19937-64 is 6364136223846793005.[5]
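For comparison, the corresponding 64-bit seeding loop might look like the sketch below. It assumes the commonly published MT19937-64 parameters n = 312 and w = 64 (so the shift is w − 2 = 62); the array and function names are illustrative, not canonical.

#include <stdint.h>

#define N64 312   // assumed MT19937-64 state size (n)

// Illustrative MT19937-64 seeding, mirroring the 32-bit recurrence above
// with f = 6364136223846793005 and w - 2 = 62.
static void seed_mt19937_64(uint64_t state[N64], uint64_t seed)
{
    state[0] = seed;
    for (int i = 1; i < N64; i++)
    {
        state[i] = 6364136223846793005ULL
                   * (state[i - 1] ^ (state[i - 1] >> 62)) + (uint64_t)i;
    }
}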
C code
#include <stdint.h>

#define n 624
#define m 397
#define w 32
#define r 31
#define UMASK (0xffffffffUL << r)
#define LMASK (0xffffffffUL >> (w-r))
#define a 0x9908b0dfUL
#define u 11
#define s 7
#define t 15
#define l 18
#define b 0x9d2c5680UL
#define c 0xefc60000UL
#define f 1812433253UL

typedef struct
{
    uint32_t state_array[n];    // the array for the state vector
    int state_index;            // index into state vector array, 0 <= state_index <= n-1 always
} mt_state;

void initialize_state(mt_state* state, uint32_t seed)
{
    uint32_t* state_array = &(state->state_array[0]);

    state_array[0] = seed;                          // suggested initial seed = 19650218UL

    for (int i = 1; i < n; i++)
    {
        seed = f * (seed ^ (seed >> (w - 2))) + i;  // Knuth TAOCP Vol2. 3rd Ed. P.106 for multiplier.
        state_array[i] = seed;
    }

    state->state_index = 0;
}

uint32_t random_uint32(mt_state* state)
{
    uint32_t* state_array = &(state->state_array[0]);

    int k = state->state_index;     // point to current state location
                                    // 0 <= state_index <= n-1 always

//  int k = k - n;                  // point to state n iterations before
//  if (k < 0) k += n;              // modulo n circular indexing
                                    // the previous 2 lines actually do nothing
                                    // for illustration only

    int j = k - (n - 1);            // point to state n-1 iterations before
    if (j < 0) j += n;              // modulo n circular indexing

    uint32_t x = (state_array[k] & UMASK) | (state_array[j] & LMASK);

    uint32_t xA = x >> 1;
    if (x & 0x00000001UL) xA ^= a;

    j = k - (n - m);                // point to state n-m iterations before
    if (j < 0) j += n;              // modulo n circular indexing

    x = state_array[j] ^ xA;        // compute next value in the state
    state_array[k++] = x;           // update new state value

    if (k >= n) k = 0;              // modulo n circular indexing
    state->state_index = k;

    uint32_t y = x ^ (x >> u);      // tempering
    y = y ^ ((y << s) & b);
    y = y ^ ((y << t) & c);
    uint32_t z = y ^ (y >> l);

    return z;
}
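As a usage note, a minimal driver for the listing above might look like the following sketch. The main() function and the chosen seed are not part of the original code (the seed simply follows the suggestion in the comment in initialize_state), and the block assumes it is appended after the listing so the type and functions are in scope.

// Hypothetical usage sketch, not part of the original listing.
int main(void)
{
    mt_state state;
    initialize_state(&state, 19650218UL);      // suggested initial seed from the comment above

    uint32_t first  = random_uint32(&state);   // first output, based on x_n
    uint32_t second = random_uint32(&state);   // each call advances the state by one word

    return (first ^ second) ? 0 : 1;           // trivial use of the values, avoids unused warnings
}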
MTGP is a variant of Mersenne Twister optimised for graphics processing units published by Mutsuo Saito and Makoto Matsumoto.[8] The basic linear recurrence operations are extended from MT and parameters are chosen to allow many threads to compute the recursion in parallel, while sharing their state space to reduce memory load. The paper claims improved equidistribution over MT and performance on an old (2008-era) GPU (Nvidia GTX260 with 192 cores) of 4.7 ms for 5×10^7 random 32-bit integers.
The SFMT (SIMD-oriented Fast Mersenne Twister) is a variant of Mersenne Twister, introduced in 2006,[9] designed to be fast when it runs on 128-bit SIMD.
It is roughly twice as fast as Mersenne Twister.[10]
TinyMT is a variant of Mersenne Twister, proposed by Saito and Matsumoto in 2011.[12] TinyMT uses just 127 bits of state space, a significant decrease compared to the original's 2.5 KiB of state. However, it has a period of 2^127 − 1, far shorter than the original, so it is only recommended by the authors in cases where memory is at a premium.
Characteristics
Advantages:
Passes numerous tests for statistical randomness, including the Diehard tests and most, but not all, of the TestU01 tests.[13]
A very long period of 2^19937 − 1. Note that while a long period is not a guarantee of quality in a random number generator, short periods, such as the 2^32 common in many older software packages, can be problematic.[14]
k-distributed to 32-bit accuracy for every 1 ≤ k ≤ 623 (for a definition of k-distributed, see above)
Implementations generally create random numbers faster than hardware-implemented methods. A study found that the Mersenne Twister creates 64-bit floating point random numbers approximately twenty times faster than the hardware-implemented, processor-based RDRAND instruction set.[15]
Disadvantages:
Relatively large state buffer, of almost 2.5 kB, unless the TinyMT variant is used.
Mediocre throughput by modern standards, unless the SFMT variant (discussed below) is used.[16]
Exhibits two clear failures (linear complexity) in both Crush and BigCrush in the TestU01 suite. The test, like Mersenne Twister, is based on an F2-algebra.[13]
Multiple instances that differ only in seed value (but not other parameters) are not generally appropriate for Monte-Carlo simulations that require independent random number generators, though there exists a method for choosing multiple sets of parameter values.[17][18]
Poor diffusion: can take a long time to start generating output that passes randomness tests, if the initial state is highly non-random—particularly if the initial state has many zeros. A consequence of this is that two instances of the generator, started with initial states that are almost the same, will usually output nearly the same sequence for many iterations, before eventually diverging. The 2002 update to the MT algorithm has improved initialization, so that beginning with such a state is very unlikely.[19] The GPU version (MTGP) is said to be even better.[20]
Contains subsequences with more 0's than 1's. This compounds the poor diffusion property above, making recovery from many-zero states difficult.
Is not cryptographically secure, unless the CryptMT variant (discussed below) is used. The reason is that observing a sufficient number of iterations (624 in the case of MT19937, since this is the size of the state vector from which future iterations are produced) allows one to predict all future iterations; a sketch of the key step, inverting the tempering transform, is given after this list.
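To see why observation suffices for prediction, note that the tempering transform is invertible, so each observed output yields one word of the internal state; once 624 consecutive words are recovered, the recurrence can simply be run forward. The following C sketch inverts the MT19937 tempering by fixed-point iteration; the function names are illustrative and not from any particular library.

#include <stdint.h>

// Sketch of inverting the MT19937 tempering; names are illustrative.
// Each helper undoes one tempering step by fixed-point iteration:
// every pass recovers `shift` more correct bits, so a handful of
// passes suffices for 32-bit words.

static uint32_t undo_right_shift_xor(uint32_t z, int shift)
{
    uint32_t x = z;
    for (int i = 0; i < 32 / shift + 1; i++)
        x = z ^ (x >> shift);            // refine: the high bits are already correct
    return x;
}

static uint32_t undo_left_shift_and_xor(uint32_t z, int shift, uint32_t mask)
{
    uint32_t x = z;
    for (int i = 0; i < 32 / shift + 1; i++)
        x = z ^ ((x << shift) & mask);   // refine: the low bits are already correct
    return x;
}

// Given one output z of MT19937, recover the untempered state word x.
static uint32_t untemper(uint32_t z)
{
    uint32_t y = undo_right_shift_xor(z, 18);            // invert y ^= y >> l
    y = undo_left_shift_and_xor(y, 15, 0xefc60000UL);    // invert y ^= (y << t) & c
    y = undo_left_shift_and_xor(y, 7, 0x9d2c5680UL);     // invert y ^= (y << s) & b
    return undo_right_shift_xor(y, 11);                  // invert y ^= (y >> u) & d, d = all ones
}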
Applications
The Mersenne Twister is used as the default PRNG by the following software:
The Mersenne Twister is one of two PRNGs in SPSS: the other generator is kept only for compatibility with older programs, and the Mersenne Twister is stated to be "more reliable".[54] The Mersenne Twister is similarly one of the PRNGs in SAS: the other generators are older and deprecated.[55] The Mersenne Twister is the default PRNG in Stata; the other one, KISS, is kept for compatibility with older versions of Stata.[56]
Alternatives
An alternative generator, WELL ("Well Equidistributed Long-period Linear"), offers quicker recovery, equal randomness, and nearly equal speed.[57]
Marsaglia's xorshift generators and variants are the fastest in the class of LFSRs.[58]
64-bit MELGs ("64-bit Maximally Equidistributed F2-Linear Generators with Mersenne Prime Period") are completely optimized in terms of the k-distribution properties.[59]
The ACORN family (published 1989) is another k-distributed PRNG, which shows similar computational speed to MT, and better statistical properties as it satisfies all the current (2019) TestU01 criteria; when used with appropriate choices of parameters, ACORN can have arbitrarily long period and precision.
The PCG family is a more modern long-period generator, with better cache locality, and less detectable bias using modern analysis methods.[60]
^ E.g. Marsland S. (2011) Machine Learning (CRC Press), §4.1.1. Also see the section "Adoption in software systems".
^ John Savard. "The Mersenne Twister". A subsequent paper, published in the year 2000, gave five additional forms of the Mersenne Twister with period 2^19937 − 1. All five were designed to be implemented with 64-bit arithmetic instead of 32-bit arithmetic.
^ Note: 2^19937 is approximately 4.3 × 10^6001; this is many orders of magnitude larger than the estimated number of particles in the observable universe, which is 10^87.