In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. New samples are added to the sequence in two steps: first a new sample is proposed based on the previous sample, then the proposed sample is either added to the sequence or rejected depending on the value of the probability distribution at that point. The resulting sequence can be used to approximate the distribution (e.g. to generate a histogram) or to compute an integral (e.g. an expected value).
Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. adaptive rejection sampling) that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods.
History
Some controversy exists with regard to credit for development of the Metropolis algorithm. Metropolis, who was familiar with the computational aspects of the method, had coined the term "Monte Carlo" in an earlier article with Stanisław Ulam, and led the group in the Theoretical Division that designed and built the MANIAC I computer used in the experiments in 1952. However, prior to 2003 there was no detailed account of the algorithm's development. Shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics".[4] Further historical clarification is made by Gubernatis in a 2005 journal article[5] recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time.
This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 article worked together for "days (and nights)".[6] In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach – a topic which he says he had discussed often with John von Neumann. Arianna Rosenbluth recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. In an oral history recorded shortly before his death,[7] Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer.
Description
The Metropolis–Hastings algorithm can draw samples from any probability distribution with probability density $P(x)$, provided that we know a function $f(x)$ proportional to the density $P$ and the values of $f(x)$ can be calculated. The requirement that $f(x)$ must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice.
The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively in such a way that the distribution of the next sample depends only on the current sample value, which makes the sequence of samples a Markov chain. Specifically, at each iteration, the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted, in which case the candidate value is used in the next iteration, or it is rejected, in which case the candidate value is discarded and the current value is reused in the next iteration. The probability of acceptance is determined by comparing the values of the function $f(x)$ at the current and candidate sample values with respect to the desired distribution.
The method used to propose new candidates is characterized by the probability distribution $g(x' \mid x)$ (sometimes written $Q(x' \mid x)$) of a new proposed sample $x'$ given the previous sample $x$. This is called the proposal density, proposal function, or jumping distribution. A common choice for $g(x' \mid x)$ is a Gaussian distribution centered at $x$, so that points closer to $x$ are more likely to be visited next, making the sequence of samples into a Gaussian random walk. In the original paper by Metropolis et al. (1953), $g(x' \mid x)$ was suggested to be a uniform distribution limited to some maximum distance from $x$. More complicated proposal functions are also possible, such as those of Hamiltonian Monte Carlo, Langevin Monte Carlo, or preconditioned Crank–Nicolson.
For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below.
Let $f(x)$ be a function that is proportional to the desired probability density function $P(x)$ (a.k.a. a target distribution)[a].
Initialization: Choose an arbitrary point $x_0$ to be the first observation in the sample and choose a proposal function $g(x \mid y)$. In this section, $g$ is assumed to be symmetric; in other words, it must satisfy $g(x \mid y) = g(y \mid x)$.
For each iteration $t$:
Propose a candidate $x'$ for the next sample by picking from the distribution $g(x' \mid x_t)$.
Calculate the acceptance ratio $\alpha = f(x') / f(x_t)$, which will be used to decide whether to accept or reject the candidate[b]. Because $f$ is proportional to the density of $P$, we have that $\alpha = f(x') / f(x_t) = P(x') / P(x_t)$.
Accept or reject:
Generate a uniform random number $u \in [0, 1]$.
If $u \le \alpha$, then accept the candidate by setting $x_{t+1} = x'$.
If $u > \alpha$, then reject the candidate and set $x_{t+1} = x_t$ instead.
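To make the steps above concrete, here is a minimal Python sketch of the Metropolis algorithm with a symmetric Gaussian random-walk proposal. The function names, the step-size parameter, and the example target density are illustrative choices, not part of the original presentation.

```python
import numpy as np

def metropolis(f, x0, n_samples, step=1.0, rng=None):
    """Metropolis sampler with a symmetric Gaussian random-walk proposal.

    f         : function proportional to the target density P(x)
    x0        : starting point (scalar or 1-D array)
    n_samples : number of samples to generate
    step      : standard deviation of the Gaussian proposal
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    fx = f(x)
    samples = np.empty((n_samples, x.size))
    for t in range(n_samples):
        # Propose a candidate x' from a Gaussian centred at the current state x_t.
        x_new = x + step * rng.standard_normal(x.size)
        f_new = f(x_new)
        # Acceptance ratio alpha = f(x') / f(x_t); accept when u <= alpha.
        if rng.random() <= f_new / fx:
            x, fx = x_new, f_new
        samples[t] = x
    return samples

# Example: sample from an unnormalised standard normal density.
target = lambda x: np.exp(-0.5 * np.sum(x ** 2))
chain = metropolis(target, x0=0.0, n_samples=50_000, step=1.0)
print(chain.mean(), chain.std())   # should be roughly 0 and 1
```

Because only the ratio $f(x')/f(x_t)$ enters the acceptance test, the target needs to be known only up to a constant factor, exactly as stated above.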
This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. Note that the acceptance ratio $\alpha$ indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is $P(x)$. If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of $P(x)$, corresponding to an $\alpha > 1 \ge u$), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of $P(x)$, while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density $P(x)$.
Compared with an algorithm like adaptive rejection sampling[8] that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
The samples are autocorrelated. Even though over the long term they do correctly follow $P(x)$, a set of nearby samples will be correlated with each other and not correctly reflect the distribution. This means that effective sample sizes can be significantly lower than the number of samples actually taken, leading to large errors.
Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a burn-in period is typically necessary,[9] where an initial number of samples are thrown away.
On the other hand, most simple rejection sampling methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, does not have this problem to such a degree, and such methods are thus often the only solutions available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from hierarchical Bayesian models and other high-dimensional statistical models used nowadays in many disciplines.
In multivariate distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the suitable jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as Gibbs sampling, involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. That way, the problem of sampling from a potentially high-dimensional space is reduced to a collection of low-dimensional sampling problems.[10] This is especially applicable when the multivariate distribution is composed of a set of individual random variables in which each variable is conditioned on only a small number of other variables, as is the case in most typical hierarchical models. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the adaptive rejection sampling methods,[8] the adaptive rejection Metropolis sampling algorithm,[11] a simple one-dimensional Metropolis–Hastings step, or slice sampling.
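As an illustration of the one-dimension-at-a-time idea, the following Python sketch performs single-component ("Metropolis-within-Gibbs") updates, in which each coordinate is updated in turn with a one-dimensional symmetric Metropolis step. The function names and per-coordinate step sizes are assumptions made for the example; exact Gibbs sampling would instead draw each coordinate from its full conditional distribution.

```python
import numpy as np

def metropolis_within_gibbs(log_f, x0, n_samples, steps, rng=None):
    """Update one coordinate at a time with a 1-D symmetric Metropolis step.

    log_f : log of a function proportional to the joint target density
    x0    : starting point (1-D array)
    steps : per-coordinate proposal standard deviations
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    logf_x = log_f(x)
    samples = np.empty((n_samples, x.size))
    for t in range(n_samples):
        for i in range(x.size):
            # Perturb only coordinate i, holding all others fixed.
            x_prop = x.copy()
            x_prop[i] += steps[i] * rng.standard_normal()
            logf_prop = log_f(x_prop)
            # Symmetric 1-D proposal, so only the density ratio matters.
            if np.log(rng.random()) <= logf_prop - logf_x:
                x, logf_x = x_prop, logf_prop
        samples[t] = x
    return samples
```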
Formal derivation
The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution $P(x)$. To accomplish this, the algorithm uses a Markov process, which asymptotically reaches a unique stationary distribution $\pi(x)$ such that $\pi(x) = P(x)$.[12]
A Markov process is uniquely defined by its transition probabilities $P(x' \mid x)$, the probability of transitioning from any given state $x$ to any other given state $x'$. It has a unique stationary distribution $\pi(x)$ when the following two conditions are met:[12]
Existence of stationary distribution: there must exist a stationary distribution $\pi(x)$. A sufficient but not necessary condition is detailed balance, which requires that each transition $x \to x'$ is reversible: for every pair of states $x, x'$, the probability of being in state $x$ and transitioning to state $x'$ must be equal to the probability of being in state $x'$ and transitioning to state $x$, $\pi(x)\, P(x' \mid x) = \pi(x')\, P(x \mid x')$.
Uniqueness of stationary distribution: the stationary distribution must be unique. This is guaranteed by ergodicity of the Markov process, which requires that every state must (1) be aperiodic—the system does not return to the same state at fixed intervals; and (2) be positive recurrent—the expected number of steps for returning to the same state is finite.
The Metropolis–Hastings algorithm involves designing a Markov process (by constructing transition probabilities) that fulfills the two above conditions, such that its stationary distribution $\pi(x)$ is chosen to be $P(x)$. The derivation of the algorithm starts with the condition of detailed balance:
$$P(x' \mid x)\, P(x) = P(x \mid x')\, P(x'),$$
which is re-written as
$$\frac{P(x' \mid x)}{P(x \mid x')} = \frac{P(x')}{P(x)}.$$
The approach is to separate the transition into two sub-steps: the proposal and the acceptance-rejection. The proposal distribution $g(x' \mid x)$ is the conditional probability of proposing a state $x'$ given $x$, and the acceptance distribution $A(x', x)$ is the probability to accept the proposed state $x'$. The transition probability can be written as the product of them:
$$P(x' \mid x) = g(x' \mid x)\, A(x', x).$$
Inserting this relation in the previous equation, we have
$$\frac{A(x', x)}{A(x, x')} = \frac{P(x')}{P(x)} \frac{g(x \mid x')}{g(x' \mid x)}.$$
The next step in the derivation is to choose an acceptance ratio that fulfills the condition above. One common choice is the Metropolis choice:
$$A(x', x) = \min\left(1, \frac{P(x')}{P(x)} \frac{g(x \mid x')}{g(x' \mid x)}\right).$$
For this Metropolis acceptance ratio $A$, either $A(x', x) = 1$ or $A(x, x') = 1$ and, either way, the condition is satisfied.
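As a brief check (not part of the original text), substituting the Metropolis choice back into the detailed-balance condition shows why either factor being 1 is enough. Assume, without loss of generality, that $P(x')\, g(x \mid x') \ge P(x)\, g(x' \mid x)$, so that $A(x', x) = 1$ and $A(x, x') = \frac{P(x)\, g(x' \mid x)}{P(x')\, g(x \mid x')}$. Then
$$P(x)\, g(x' \mid x)\, A(x', x) = P(x)\, g(x' \mid x) = P(x')\, g(x \mid x') \cdot \frac{P(x)\, g(x' \mid x)}{P(x')\, g(x \mid x')} = P(x')\, g(x \mid x')\, A(x, x'),$$
which is exactly the detailed-balance condition $P(x)\, P(x' \mid x) = P(x')\, P(x \mid x')$ with $P(x' \mid x) = g(x' \mid x)\, A(x', x)$.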
The Metropolis–Hastings algorithm can thus be written as follows:
Initialise
Pick an initial state $x_0$.
Set $t = 0$.
Iterate
Generate a random candidate state $x'$ according to $g(x' \mid x_t)$.
Calculate the acceptance probability $A(x', x_t) = \min\left(1, \frac{P(x')}{P(x_t)} \frac{g(x_t \mid x')}{g(x' \mid x_t)}\right)$.
Accept or reject:
generate a uniform random number $u \in [0, 1]$;
if $u \le A(x', x_t)$, then accept the new state and set $x_{t+1} = x'$;
if $u > A(x', x_t)$, then reject the new state and copy the old state forward: $x_{t+1} = x_t$.
Increment: set $t = t + 1$.
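The listing above translates more or less directly into code. The following Python sketch assumes the user supplies the log of a function proportional to $P(x)$, a proposal sampler, and the proposal log-density, so that the Hastings correction for asymmetric proposals is included; the function and argument names are illustrative.

```python
import numpy as np

def metropolis_hastings(log_p, propose, log_g, x0, n_samples, rng=None):
    """Generic Metropolis–Hastings with a possibly asymmetric proposal.

    log_p   : log of a function proportional to the target density P(x)
    propose : propose(x, rng) -> candidate state x'
    log_g   : log_g(x_to, x_from) = log g(x_to | x_from), the proposal log-density
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp_x = log_p(x)
    chain = [x.copy()]
    for _ in range(n_samples):
        x_new = propose(x, rng)
        logp_new = log_p(x_new)
        # log A = log P(x') - log P(x_t) + log g(x_t | x') - log g(x' | x_t)
        log_alpha = (logp_new - logp_x) + (log_g(x, x_new) - log_g(x_new, x))
        if np.log(rng.random()) <= log_alpha:
            x, logp_x = x_new, logp_new
        chain.append(x.copy())
    return np.array(chain)
```

Working with logarithms avoids numerical underflow when the densities span many orders of magnitude; for a symmetric proposal the two `log_g` terms cancel and the update reduces to the Metropolis algorithm above.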
Provided that specified conditions are met, the empirical distribution of saved states $x_0, \ldots, x_T$ will approach $P(x)$. The number of iterations ($T$) required to effectively estimate $P(x)$ depends on a number of factors, including the relationship between $P(x)$ and the proposal distribution and the desired accuracy of estimation.[13] For distributions on discrete state spaces, it has to be of the order of the autocorrelation time of the Markov process.[14]
It is important to notice that it is not clear, in a general problem, which proposal distribution $g(x' \mid x)$ one should use or how many iterations are necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand.
Use in numerical integration
A common use of the Metropolis–Hastings algorithm is to compute an integral. Specifically, consider a space $\Omega \subset \mathbb{R}^n$ and a probability distribution $P(x)$ over $\Omega$, $x \in \Omega$. Metropolis–Hastings can estimate an integral of the form of
$$P(E) = \int_\Omega A(x)\, P(x)\, dx,$$
where $A(x)$ is a (measurable) function of interest.
For example, consider a statistic $E(x)$ and its probability distribution $P(E)$, which is a marginal distribution. Suppose that the goal is to estimate $P(E)$ for $E$ on the tail of $P(E)$. Formally, $P(E)$ can be written as
$$P(E) = \int_\Omega P(E \mid x)\, P(x)\, dx = \int_\Omega \delta\big(E - E(x)\big)\, P(x)\, dx,$$
and, thus, estimating $P(E)$ can be accomplished by estimating the expected value of the indicator function $A_E(x) \equiv \mathbf{1}_E(x)$, which is 1 when $E(x) \in [E, E + \Delta E]$ and zero otherwise.
Because $E$ is on the tail of $P(E)$, the probability to draw a state $x$ with $E(x)$ on the tail of $P(E)$ is proportional to $P(E)$, which is small by definition. The Metropolis–Hastings algorithm can be used here to sample (rare) states more likely and thus increase the number of samples used to estimate $P(E)$ on the tails. This can be done e.g. by using a sampling distribution $\pi(x)$ to favor those states (e.g. $\pi(x) \propto e^{aE}$ with $a > 0$).
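As a simple illustration, the expectation of the indicator function can be estimated directly from a Metropolis–Hastings chain. The target density, statistic, and threshold below are assumed for the example, and the reweighting required when sampling from a tilted distribution $\pi(x) \propto e^{aE}$, as suggested above, is omitted from this sketch. It reuses the `metropolis` sampler sketched earlier.

```python
import numpy as np

# Illustrative choices: P(x) is an (unnormalised) standard normal, E(x) = x,
# and we estimate the tail probability P(E > 3) as the mean of an indicator
# function evaluated over the chain.
target = lambda x: np.exp(-0.5 * np.sum(x ** 2))
chain = metropolis(target, x0=0.0, n_samples=200_000, step=1.0)

statistic = chain[:, 0]                  # E(x) = x in this example
tail_prob = np.mean(statistic > 3.0)     # Monte Carlo estimate of P(E > 3)
print(tail_prob)                         # compare with 1 - Phi(3) ≈ 0.00135
```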
Step-by-step instructions
Suppose that the most recent value sampled is $x_t$. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state $x'$ with probability density $g(x' \mid x_t)$ and calculate a value
$$a = a_1 a_2,$$
where
$$a_1 = \frac{P(x')}{P(x_t)}$$
is the probability (e.g., Bayesian posterior) ratio between the proposed sample $x'$ and the previous sample $x_t$, and
$$a_2 = \frac{g(x_t \mid x')}{g(x' \mid x_t)}$$
is the ratio of the proposal density in two directions (from $x_t$ to $x'$ and conversely).
This is equal to 1 if the proposal density is symmetric.
Then the new state $x_{t+1}$ is chosen according to the following rules.
If $a \ge 1$: $x_{t+1} = x'$;
else: $x_{t+1} = x'$ with probability $a$, or $x_{t+1} = x_t$ with probability $1 - a$.
The Markov chain is started from an arbitrary initial value $x_0$, and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as burn-in. The remaining set of accepted values of $x$ represent a sample from the distribution $P(x)$.
The algorithm works best if the proposal density matches the shape of the target distribution $P(x)$, from which direct sampling is difficult, that is $g(x' \mid x_t) \approx P(x')$.
If a Gaussian proposal density $g$ is used, the variance parameter $\sigma^2$ has to be tuned during the burn-in period.
This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that is accepted in a window of the last $N$ samples.
The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is about 50%, decreasing to about 23% for an $N$-dimensional Gaussian target distribution.[15] These guidelines can work well when sampling from sufficiently regular Bayesian posteriors, as they often follow a multivariate normal distribution, as can be established using the Bernstein–von Mises theorem.[16]
If $\sigma^2$ is too small, the chain will mix slowly (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to $P(x)$). On the other hand,
if $\sigma^2$ is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so $a_1$ will be very small, and again the chain will converge very slowly. One typically tunes the proposal distribution so that the algorithm accepts on the order of 30% of all samples – in line with the theoretical estimates mentioned in the previous paragraph.
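One simple way to carry out this tuning during burn-in is to monitor the acceptance rate over a window of recent proposals and scale the proposal standard deviation up or down accordingly. The Python sketch below uses assumed constants (window size, scaling factors, a 30% target rate) purely for illustration; adaptation should stop after burn-in so that the chain targeting $P(x)$ is left unchanged.

```python
import numpy as np

def tune_step(f, x0, n_burn=5000, window=100, target_rate=0.3,
              step=1.0, rng=None):
    """Crude burn-in tuning of the Gaussian proposal's standard deviation.

    After every `window` proposals the step size is scaled up if the
    acceptance rate exceeds `target_rate` and scaled down otherwise.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    fx = f(x)
    accepted = 0
    for t in range(1, n_burn + 1):
        x_new = x + step * rng.standard_normal(x.size)
        f_new = f(x_new)
        if rng.random() <= f_new / fx:
            x, fx = x_new, f_new
            accepted += 1
        if t % window == 0:
            rate = accepted / window
            step *= 1.1 if rate > target_rate else 0.9
            accepted = 0
    return x, step   # final state and tuned proposal scale
```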
Use in Bayesian inference
MCMC can be used to draw samples from the posterior distribution of a statistical model.
The acceptance probability is given by:
$$P_{\mathrm{acc}}(\theta_i \to \theta^*) = \min\left(1, \frac{\mathcal{L}(y \mid \theta^*)\, P(\theta^*)}{\mathcal{L}(y \mid \theta_i)\, P(\theta_i)} \frac{Q(\theta_i \mid \theta^*)}{Q(\theta^* \mid \theta_i)}\right),$$
where $\mathcal{L}$ is the likelihood, $P(\theta)$ the prior probability density and $Q$ the (conditional) proposal probability.
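In practice this acceptance probability is usually computed on the log scale to avoid numerical underflow. The following sketch assumes user-supplied `log_likelihood`, `log_prior`, and `log_q` functions; these names are placeholders for illustration, not a fixed API.

```python
import numpy as np

def log_acceptance(theta_new, theta_old, y, log_likelihood, log_prior, log_q):
    """Log of the Metropolis–Hastings acceptance probability for a posterior.

    log_likelihood(y, theta), log_prior(theta) and log_q(theta_to, theta_from)
    are user-supplied; working with logs keeps the ratio numerically stable.
    """
    log_ratio = (log_likelihood(y, theta_new) + log_prior(theta_new)
                 - log_likelihood(y, theta_old) - log_prior(theta_old)
                 + log_q(theta_old, theta_new) - log_q(theta_new, theta_old))
    return min(0.0, log_ratio)   # log of min(1, ratio)

# Accept the proposal theta_new when np.log(rng.random()) <= log_acceptance(...).
```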
References
^ a b Gilks, W. R.; Wild, P. (1992). "Adaptive Rejection Sampling for Gibbs Sampling". Journal of the Royal Statistical Society. Series C (Applied Statistics). 41 (2): 337–348. doi:10.2307/2347565. JSTOR 2347565.
^ Gilks, W. R.; Best, N. G.; Tan, K. K. C. (1995). "Adaptive Rejection Metropolis Sampling within Gibbs Sampling". Journal of the Royal Statistical Society. Series C (Applied Statistics). 44 (4): 455–472. doi:10.2307/2986138. JSTOR 2986138.
^ In the original paper by Metropolis et al. (1953), $f(x)$ was actually the Boltzmann distribution, as it was applied to physical systems in the context of statistical mechanics (e.g., a maximal-entropy distribution of microstates for a given temperature at thermal equilibrium). Consequently, the acceptance ratio was itself an exponential of the difference in the parameters of the numerator and denominator of this ratio.
Further reading
Bernd A. Berg. Markov Chain Monte Carlo Simulations and Their Statistical Analysis. Singapore, World Scientific, 2004.