[Figure caption:] The ratio of the density functions shown above is monotone in $x$, so $\frac{f(x)}{g(x)}$ satisfies the monotone likelihood ratio property.
In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions $f(x)$ and $g(x)$ bear the property if for every $x_2 > x_1$,

$$\frac{f(x_2)}{g(x_2)} \geq \frac{f(x_1)}{g(x_1)},$$

that is, if the ratio is nondecreasing in the argument $x$.

If the functions are first-differentiable, the property may sometimes be stated

$$\frac{\partial}{\partial x}\left(\frac{f(x)}{g(x)}\right) \geq 0.$$
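For example (an illustration added here, not from the original text), take $f(x) = e^{-x}$ and $g(x) = 2e^{-2x}$ on $x \geq 0$, two exponential densities. Then

$$\frac{f(x)}{g(x)} = \tfrac{1}{2}\, e^{x},$$

which is increasing in $x$, so this pair satisfies the MLRP.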
For two distributions that satisfy the definition with respect to some argument $x$, we say they "have the MLRP in $x$." For a family of distributions that all satisfy the definition with respect to some statistic $T(X)$, we say they "have the MLR in $T(X)$."
The MLRP is used to represent a data-generating process that enjoys a straightforward relationship between the magnitude of some observed variable and the distribution it draws from. If $f(x)$ satisfies the MLRP with respect to $g(x)$, the higher the observed value $x$, the more likely it was drawn from distribution $f$ rather than $g$. As usual for monotonic relationships, the likelihood ratio's monotonicity comes in handy in statistics, particularly when using maximum-likelihood estimation. Also, distribution families with MLR have a number of well-behaved stochastic properties, such as first-order stochastic dominance and increasing hazard ratios. Unfortunately, as is also usual, the strength of this assumption comes at the price of realism: many processes in the world do not exhibit a monotonic correspondence between input and output.
Suppose you are working on a project, and you can either work hard or slack off. Call your choice of effort $e$ and the quality of the resulting project $q$. If the MLRP holds for the distribution of $q$ conditional on your effort $e$, then the higher the quality, the more likely you worked hard; conversely, the lower the quality, the more likely you slacked off.
Hence an employer conducting a "performance review" can infer the employee's behavior from the quality of the work.
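A minimal numerical sketch of this inference, assuming (hypothetically) that quality is normally distributed around effort, $q \sim N(e, 1)$, with effort levels $e_{\text{hard}} = 2$ and $e_{\text{slack}} = 0$; the posterior odds of "worked hard" rise monotonically with the observed quality:

    from statistics import NormalDist

    # Hypothetical quality distributions: q ~ N(e, 1) under each effort level.
    q_given_hard = NormalDist(mu=2.0, sigma=1.0)   # assumed "worked hard" technology
    q_given_slack = NormalDist(mu=0.0, sigma=1.0)  # assumed "slacked off" technology

    def posterior_odds_hard(q, prior_odds=1.0):
        """Posterior odds of high effort after observing quality q.
        Because the two normals satisfy the MLRP, the likelihood ratio
        (and hence the posterior odds) is increasing in q."""
        likelihood_ratio = q_given_hard.pdf(q) / q_given_slack.pdf(q)
        return prior_odds * likelihood_ratio

    # Higher observed quality -> higher odds the employee worked hard.
    for q in (-1.0, 0.0, 1.0, 2.0, 3.0):
        print(q, posterior_odds_hard(q))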
Statistical models often assume that data are generated by a distribution from some family and seek to determine which member of the family generated them. This task is simplified if the family has the monotone likelihood ratio property (MLRP).
A family of density functions $\{\, f_\theta(x) \mid \theta \in \Theta \,\}$ indexed by a parameter $\theta$ taking values in an ordered set $\Theta$ is said to have a monotone likelihood ratio (MLR) in the statistic $T(X)$ if for any $\theta_1 < \theta_2$,

$$\frac{f_{\theta_2}(x)}{f_{\theta_1}(x)} \quad \text{is a nondecreasing function of } T(x).$$

Then we say the family of distributions "has MLR in $T(X)$".
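For instance (an illustration added here), a sample of $n$ i.i.d. Bernoulli($\theta$) observations has MLR in $T(x) = \sum_i x_i$: for $\theta_1 < \theta_2$,

$$\frac{f_{\theta_2}(x)}{f_{\theta_1}(x)} = \left(\frac{\theta_2\,(1-\theta_1)}{\theta_1\,(1-\theta_2)}\right)^{T(x)} \left(\frac{1-\theta_2}{1-\theta_1}\right)^{n},$$

which is increasing in $T(x)$ because the bracketed base exceeds $1$.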
If the family of random variables has the MLRP in $T(X)$, a uniformly most powerful test can easily be determined for the hypothesis $H_0 : \theta \leq \theta_0$ versus $H_1 : \theta > \theta_0$.
Example: Let $e$ be an input into a stochastic technology – worker's effort, for instance – and $y$ its output, the likelihood of which is described by a probability density function $f(y;e)$. Then the monotone likelihood ratio property (MLRP) of the family $f$ is expressed as follows: for any $e_1, e_2$, the fact that $e_2 > e_1$ implies that the ratio $\frac{f(y;e_2)}{f(y;e_1)}$ is increasing in $y$.
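A concrete case (an added illustration): if output is normally distributed around effort, $f(y;e) = \tfrac{1}{\sqrt{2\pi}}\, e^{-(y-e)^2/2}$, then for $e_2 > e_1$

$$\frac{f(y;e_2)}{f(y;e_1)} = \exp\!\left((e_2 - e_1)\, y - \tfrac{1}{2}\left(e_2^2 - e_1^2\right)\right),$$

which is increasing in $y$, so this family satisfies the MLRP.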
Monotone likelihoods are used in several areas of statistical theory, including point estimation and hypothesis testing, as well as in probability models.
One-parameter exponential families have monotone likelihood functions. In particular, the one-dimensional exponential family of probability density functions or probability mass functions with

$$f_\theta(x) = c(\theta)\, h(x)\, \exp\!\big(\pi(\theta)\, T(x)\big)$$

has a monotone non-decreasing likelihood ratio in the sufficient statistic $T(x)$, provided that $\pi(\theta)$ is non-decreasing.
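To see why (a one-line check added here): for $\theta_1 < \theta_2$,

$$\frac{f_{\theta_2}(x)}{f_{\theta_1}(x)} = \frac{c(\theta_2)}{c(\theta_1)}\, \exp\!\Big(\big(\pi(\theta_2) - \pi(\theta_1)\big)\, T(x)\Big),$$

which is nondecreasing in $T(x)$ whenever $\pi(\theta_2) \geq \pi(\theta_1)$.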
Monotone likelihood functions are used to construct uniformly most powerful tests, according to the Karlin–Rubin theorem.[1] Consider a scalar measurement having a probability density function parameterized by a scalar parameter $\theta$, and define the likelihood ratio $\ell(x) = \frac{f_{\theta_1}(x)}{f_{\theta_0}(x)}$. If $\ell(x)$ is monotone non-decreasing in $x$ for any pair $\theta_1 \geq \theta_0$ (meaning that the greater $x$ is, the more likely $H_1$ is), then the threshold test

$$\varphi(x) = \begin{cases} 1 & \text{if } x > x_0 \\ 0 & \text{if } x < x_0 \end{cases}, \qquad \text{where } x_0 \text{ is chosen so that } \operatorname{E}_{\theta_0}\!\big[\varphi(X)\big] = \alpha,$$

is the UMP test of size $\alpha$ for testing $H_0 : \theta \leq \theta_0$ vs. $H_1 : \theta > \theta_0$.
Note that exactly the same test is also UMP for testing $H_0 : \theta = \theta_0$ vs. $H_1 : \theta > \theta_0$.
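A minimal sketch in Python, assuming a single observation from a unit-variance normal family $N(\theta, 1)$ (which has MLR in $x$); the function name is hypothetical:

    from statistics import NormalDist

    def ump_threshold_test(x, theta0, alpha=0.05):
        """One-sided UMP test of H0: theta <= theta0 vs H1: theta > theta0
        for one observation x ~ N(theta, 1).  By Karlin-Rubin, 'reject iff
        x exceeds a threshold' is UMP; the threshold x0 is set so that
        P_{theta0}(X > x0) = alpha."""
        x0 = theta0 + NormalDist().inv_cdf(1 - alpha)  # size-alpha threshold
        return x > x0  # True = reject H0

    # Example: observing x = 2.1 with theta0 = 0 rejects at alpha = 0.05,
    # since the threshold is about 1.645.
    print(ump_threshold_test(2.1, theta0=0.0))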
Monotone likelihood functions are used to construct median-unbiased estimators, using methods specified by Johann Pfanzagl and others.[2][3] One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure for mean-unbiased estimation, but for a larger class of loss functions.[3]: 713
If a family of distributions $f_\theta(x)$ has the monotone likelihood ratio property in $T(X)$, then the family exhibits first-order stochastic dominance in $\theta$ and monotone hazard rates, as shown in the proofs below.
But not conversely: neither monotone hazard rates nor stochastic dominance imply the MLRP.
Let distribution family $f_\theta$ satisfy MLR in $x$, so that for $\theta_1 > \theta_0$ and $x_1 > x_0$:

$$\frac{f_{\theta_1}(x_1)}{f_{\theta_0}(x_1)} \geq \frac{f_{\theta_1}(x_0)}{f_{\theta_0}(x_0)},$$

or equivalently:

$$f_{\theta_1}(x_1)\, f_{\theta_0}(x_0) \geq f_{\theta_1}(x_0)\, f_{\theta_0}(x_1).$$

Integrating this expression twice, we obtain:

1. With respect to $x_0$ over the range $(-\infty, x]$ for $x \leq x_1$: integrate and rearrange to obtain

$$f_{\theta_1}(x_1)\, F_{\theta_0}(x) \geq F_{\theta_1}(x)\, f_{\theta_0}(x_1).$$

2. With respect to $x_1$ over the range $[x, \infty)$ for $x \geq x_0$: integrate and rearrange to obtain

$$\big[1 - F_{\theta_1}(x)\big]\, f_{\theta_0}(x_0) \geq f_{\theta_1}(x_0)\, \big[1 - F_{\theta_0}(x)\big].$$

Combine the two inequalities above, each evaluated at $x_0 = x_1 = x$, to get first-order dominance:

$$F_{\theta_1}(x) \leq F_{\theta_0}(x) \quad \text{for all } x.$$

Use only the second inequality above, evaluated at $x_0 = x$, to get a monotone hazard rate:

$$\frac{f_{\theta_1}(x)}{1 - F_{\theta_1}(x)} \leq \frac{f_{\theta_0}(x)}{1 - F_{\theta_0}(x)} \quad \text{for all } x.$$
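These implications can be spot-checked numerically; a small sketch, using the unit-variance normal family $N(\theta, 1)$ as the assumed example:

    from statistics import NormalDist

    # The normal location family N(theta, 1) has MLR in x, so for theta1 > theta0
    # we expect F_theta1 <= F_theta0 pointwise (first-order stochastic dominance)
    # and a pointwise-lower hazard rate f/(1 - F).
    low, high = NormalDist(mu=0.0), NormalDist(mu=1.0)  # theta0 = 0 < theta1 = 1

    for i in range(-40, 41):
        x = i / 10.0
        assert high.cdf(x) <= low.cdf(x) + 1e-12          # stochastic dominance
        hazard_low = low.pdf(x) / (1.0 - low.cdf(x))
        hazard_high = high.pdf(x) / (1.0 - high.cdf(x))
        assert hazard_high <= hazard_low + 1e-12          # monotone hazard rate
    print("both orderings hold on the grid")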
The MLR is an important condition on the type distribution of agents in mechanism design and the economics of information, where Paul Milgrom defined "favorableness" of signals (in terms of stochastic dominance) as a consequence of MLR.[4] Most solutions to mechanism-design models assume type distributions satisfying the MLR, in order to take advantage of solution methods that may be easier to apply and interpret.