In probability theory and statistics, the inverse gamma distribution is a two-parameter family of continuous probability distributions on the positive real line, which is the distribution of the reciprocal of a variable distributed according to the gamma distribution.
Perhaps the chief use of the inverse gamma distribution is in Bayesian statistics, where the distribution arises as the marginal posterior distribution for the unknown variance of a normal distribution, if an uninformative prior is used, and as an analytically tractable conjugate prior, if an informative prior is required.[1] It is common among some Bayesians to consider an alternative parametrization of the normal distribution in terms of the precision, defined as the reciprocal of the variance, which allows the gamma distribution to be used directly as a conjugate prior. Other Bayesians prefer to parametrize the inverse gamma distribution differently, as a scaled inverse chi-squared distribution.
Characterization
Probability density function
The inverse gamma distribution's probability density function is defined over the support $x > 0$:

$$f(x;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,(1/x)^{\alpha+1}\exp\left(-\beta/x\right)$$
with shape parameter $\alpha$ and scale parameter $\beta$.[2] Here $\Gamma(\cdot)$ denotes the gamma function.
Unlike the gamma distribution, which contains a somewhat similar exponential term, $\beta$ is a scale parameter, as the density function satisfies:

$$f(x;\alpha,\beta)=\frac{f(x/\beta;\alpha,1)}{\beta}$$
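As a quick numerical illustration (a minimal sketch, not part of the original article, assuming that SciPy's `invgamma` with shape `a = α` and `scale = β` corresponds to the parametrization above), the density formula can be checked directly:

```python
# Minimal sketch: evaluate the pdf from the formula above and compare it with
# scipy.stats.invgamma, assuming SciPy's shape a = alpha and scale = beta
# correspond to the parametrization used here.
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.stats import invgamma

def invgamma_pdf(x, alpha, beta):
    """f(x; alpha, beta) = beta^alpha / Gamma(alpha) * (1/x)^(alpha+1) * exp(-beta/x)."""
    return beta**alpha / gamma_fn(alpha) * (1.0 / x)**(alpha + 1) * np.exp(-beta / x)

alpha, beta = 3.0, 2.0          # illustrative values
x = np.linspace(0.1, 5.0, 50)
assert np.allclose(invgamma_pdf(x, alpha, beta), invgamma.pdf(x, a=alpha, scale=beta))
```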
Cumulative distribution function
The cumulative distribution function is the regularized gamma function

$$F(x;\alpha,\beta)=\frac{\Gamma\left(\alpha,\frac{\beta}{x}\right)}{\Gamma(\alpha)}=Q\left(\alpha,\frac{\beta}{x}\right)$$

where the numerator is the upper incomplete gamma function and the denominator is the gamma function. Many math packages allow direct computation of $Q$, the regularized gamma function.
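For instance, in Python the CDF can be computed through `scipy.special.gammaincc`, which implements the regularized upper incomplete gamma function $Q$ (a minimal sketch, again assuming SciPy's shape/scale parametrization matches the one above):

```python
# Minimal sketch: the CDF as the regularized upper incomplete gamma function Q.
# scipy.special.gammaincc(a, z) computes Q(a, z) = Gamma(a, z) / Gamma(a).
import numpy as np
from scipy.special import gammaincc
from scipy.stats import invgamma

alpha, beta, x = 3.0, 2.0, 1.5          # illustrative values
F = gammaincc(alpha, beta / x)          # Q(alpha, beta / x)
assert np.isclose(F, invgamma.cdf(x, a=alpha, scale=beta))
```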
Moments
Provided that $\alpha > n$, the $n$-th moment of the inverse gamma distribution is given by[3]

$$\mathrm{E}[X^{n}]=\beta^{n}\frac{\Gamma(\alpha-n)}{\Gamma(\alpha)}=\frac{\beta^{n}}{(\alpha-1)\cdots(\alpha-n)}.$$
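This closed form can be verified numerically; the sketch below (illustrative parameter values, SciPy assumed) integrates $x^n f(x;\alpha,\beta)$ directly and compares it with the formula:

```python
# Minimal sketch: check the closed-form n-th moment against numerical
# integration of x^n * f(x; alpha, beta) over (0, inf); requires alpha > n.
import numpy as np
from scipy.special import gammaln
from scipy.stats import invgamma
from scipy.integrate import quad

alpha, beta, n = 5.0, 2.0, 3            # illustrative values with alpha > n
closed_form = beta**n * np.exp(gammaln(alpha - n) - gammaln(alpha))
numeric, _ = quad(lambda x: x**n * invgamma.pdf(x, a=alpha, scale=beta), 0, np.inf)
assert np.isclose(closed_form, numeric)  # both equal beta^n / ((alpha-1)...(alpha-n))
```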
Characteristic function
The inverse gamma distribution has characteristic function

$$\frac{2\left(-i\beta t\right)^{\frac{\alpha}{2}}}{\Gamma(\alpha)}K_{\alpha}\left(\sqrt{-4i\beta t}\right)$$

where $K_{\alpha}$ is the modified Bessel function of the second kind.
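The expression can be sanity-checked against a Monte Carlo estimate of $\mathrm{E}[e^{itX}]$; the sketch below is an assumption-laden illustration, taking principal branches for the complex power and square root and using `scipy.special.kv` for $K_\alpha$:

```python
# Minimal sketch: Monte Carlo sanity check of the characteristic function,
# assuming principal branches for the complex power and square root and using
# scipy.special.kv for the modified Bessel function of the second kind.
import numpy as np
from scipy.special import kv, gamma as gamma_fn
from scipy.stats import invgamma

alpha, beta, t = 3.0, 2.0, 0.7                      # illustrative values
z = -1j * beta * t
phi_formula = 2.0 * z**(alpha / 2) / gamma_fn(alpha) * kv(alpha, np.sqrt(4.0 * z))
rng = np.random.default_rng(0)
samples = invgamma.rvs(a=alpha, scale=beta, size=1_000_000, random_state=rng)
phi_mc = np.mean(np.exp(1j * t * samples))
print(abs(phi_formula - phi_mc))                    # Monte Carlo error, roughly 1e-3
```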
Properties
For $\alpha > 0$ and $\beta > 0$,

$$\mathbb{E}[\ln(X)]=\ln(\beta)-\psi(\alpha)$$

and

$$\mathbb{E}[X^{-1}]=\frac{\alpha}{\beta}.$$
The information entropy is

$$\begin{aligned}\operatorname{H}(X)&=\operatorname{E}[-\ln(p(X))]\\&=\operatorname{E}\left[-\alpha\ln(\beta)+\ln(\Gamma(\alpha))+(\alpha+1)\ln(X)+\frac{\beta}{X}\right]\\&=-\alpha\ln(\beta)+\ln(\Gamma(\alpha))+(\alpha+1)\ln(\beta)-(\alpha+1)\psi(\alpha)+\alpha\\&=\alpha+\ln(\beta\,\Gamma(\alpha))-(\alpha+1)\psi(\alpha),\end{aligned}$$

where $\psi(\alpha)$ is the digamma function.
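These expectations and the entropy expression can be checked by simulation; the sketch below (illustrative values, not from the article) uses `digamma` and `gammaln` from `scipy.special` and assumes SciPy's `scale` argument corresponds to $\beta$:

```python
# Minimal sketch: simulation check of E[ln X], E[1/X], and the entropy formula,
# using digamma/gammaln from scipy.special and assuming scale = beta in SciPy.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.stats import invgamma

alpha, beta = 3.0, 2.0                              # illustrative values
rng = np.random.default_rng(1)
x = invgamma.rvs(a=alpha, scale=beta, size=1_000_000, random_state=rng)

print(np.mean(np.log(x)), np.log(beta) - digamma(alpha))       # E[ln X]
print(np.mean(1.0 / x), alpha / beta)                          # E[1/X]
entropy_formula = alpha + np.log(beta) + gammaln(alpha) - (alpha + 1) * digamma(alpha)
print(entropy_formula, invgamma.entropy(a=alpha, scale=beta))  # H(X)
```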
The Kullback–Leibler divergence of Inverse-Gamma($\alpha_p$, $\beta_p$) from Inverse-Gamma($\alpha_q$, $\beta_q$) is the same as the KL-divergence of Gamma($\alpha_p$, $\beta_p$) from Gamma($\alpha_q$, $\beta_q$):

$$D_{\mathrm{KL}}(\alpha_{p},\beta_{p};\alpha_{q},\beta_{q})=\mathbb{E}\left[\log\frac{\rho(X)}{\pi(X)}\right]=\mathbb{E}\left[\log\frac{\rho(1/Y)}{\pi(1/Y)}\right]=\mathbb{E}\left[\log\frac{\rho_{G}(Y)}{\pi_{G}(Y)}\right],$$

where $\rho,\pi$ are the pdfs of the inverse-gamma distributions, $\rho_{G},\pi_{G}$ are the pdfs of the gamma distributions, and $Y$ is Gamma($\alpha_p$, $\beta_p$) distributed.
$$D_{\mathrm{KL}}(\alpha_{p},\beta_{p};\alpha_{q},\beta_{q})=(\alpha_{p}-\alpha_{q})\psi(\alpha_{p})-\log\Gamma(\alpha_{p})+\log\Gamma(\alpha_{q})+\alpha_{q}(\log\beta_{p}-\log\beta_{q})+\alpha_{p}\frac{\beta_{q}-\beta_{p}}{\beta_{p}}.$$
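As an illustration (a minimal sketch with hypothetical parameter values, SciPy assumed), the closed form can be compared with a Monte Carlo estimate of $\mathbb{E}[\log(\rho(X)/\pi(X))]$ under $X\sim\text{Inv-Gamma}(\alpha_p,\beta_p)$:

```python
# Minimal sketch: the closed-form KL divergence versus a Monte Carlo estimate
# of E[log(rho(X)/pi(X))] with X ~ Inv-Gamma(alpha_p, beta_p); illustrative values.
import numpy as np
from scipy.special import digamma, gammaln
from scipy.stats import invgamma

def kl_invgamma(ap, bp, aq, bq):
    """Closed-form KL divergence of Inv-Gamma(ap, bp) from Inv-Gamma(aq, bq)."""
    return ((ap - aq) * digamma(ap) - gammaln(ap) + gammaln(aq)
            + aq * (np.log(bp) - np.log(bq)) + ap * (bq - bp) / bp)

ap, bp, aq, bq = 3.0, 2.0, 4.0, 1.0
rng = np.random.default_rng(2)
x = invgamma.rvs(a=ap, scale=bp, size=1_000_000, random_state=rng)
mc = np.mean(invgamma.logpdf(x, a=ap, scale=bp) - invgamma.logpdf(x, a=aq, scale=bq))
print(kl_invgamma(ap, bp, aq, bq), mc)   # the two values should agree closely
```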
If $X\sim\text{Inv-Gamma}(\alpha,\beta)$ then $kX\sim\text{Inv-Gamma}(\alpha,k\beta)$, for $k>0$.

If $X\sim\text{Inv-Gamma}(\alpha,\tfrac{1}{2})$ then $X\sim\text{Inv-}\chi^{2}(2\alpha)$ (inverse-chi-squared distribution).

If $X\sim\text{Inv-Gamma}(\tfrac{\alpha}{2},\tfrac{1}{2})$ then $X\sim\text{Scaled Inv-}\chi^{2}(\alpha,\tfrac{1}{\alpha})$ (scaled inverse chi-squared distribution).

If $X\sim\text{Inv-Gamma}(\tfrac{1}{2},\tfrac{c}{2})$ then $X\sim\text{Levy}(0,c)$ (Lévy distribution).

If $X\sim\text{Inv-Gamma}(1,c)$ then $\tfrac{1}{X}\sim\text{Exp}(c)$ (exponential distribution).
If $X\sim\text{Gamma}(\alpha,\beta)$ (gamma distribution with rate parameter $\beta$) then $\tfrac{1}{X}\sim\text{Inv-Gamma}(\alpha,\beta)$ (see the derivation in the next section for details; a sampling sketch of this relation is given after this list).

Note that if $X\sim\text{Gamma}(k,\theta)$ (gamma distribution with scale parameter $\theta$) then $1/X\sim\text{Inv-Gamma}(k,1/\theta)$.
The inverse gamma distribution is a special case of the type 5 Pearson distribution.

A multivariate generalization of the inverse-gamma distribution is the inverse-Wishart distribution.

For the distribution of a sum of independent inverted gamma variables, see Witkovsky (2001).
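The reciprocal relation between the gamma and inverse gamma distributions noted above can be illustrated by simulation (a minimal sketch; SciPy's `gamma` takes a scale parameter, so rate $\beta$ corresponds to `scale = 1/β`):

```python
# Minimal sketch: sampling illustration that 1/X ~ Inv-Gamma(alpha, beta) when
# X ~ Gamma(alpha) with rate beta; SciPy's gamma takes a scale, so scale = 1/beta.
import numpy as np
from scipy.stats import gamma, invgamma, kstest

alpha, beta = 3.0, 2.0                              # illustrative values
rng = np.random.default_rng(3)
x = gamma.rvs(a=alpha, scale=1.0 / beta, size=100_000, random_state=rng)
# Kolmogorov-Smirnov test of 1/x against Inv-Gamma(alpha, beta); a large p-value is expected.
print(kstest(1.0 / x, invgamma(a=alpha, scale=beta).cdf))
```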
Derivation from Gamma distribution
Let $X\sim\text{Gamma}(\alpha,\beta)$, and recall that the pdf of the gamma distribution is

$$f_{X}(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\beta x},\quad x>0.$$

Note that $\beta$ is the rate parameter from the perspective of the gamma distribution.
Define the transformation $Y=g(X)=\tfrac{1}{X}$. Then, the pdf of $Y$ is

$$\begin{aligned}f_{Y}(y)&=f_{X}\left(g^{-1}(y)\right)\left|\frac{d}{dy}g^{-1}(y)\right|\\&=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\left(\frac{1}{y}\right)^{\alpha-1}\exp\left(\frac{-\beta}{y}\right)\frac{1}{y^{2}}\\&=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\left(\frac{1}{y}\right)^{\alpha+1}\exp\left(\frac{-\beta}{y}\right)\\&=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,y^{-\alpha-1}\exp\left(\frac{-\beta}{y}\right)\end{aligned}$$
Note that $\beta$ is the scale parameter from the perspective of the inverse gamma distribution. This can be straightforwardly demonstrated by seeing that $\beta$ satisfies the conditions for being a scale parameter:
$$\begin{aligned}\frac{f(y/\beta;\alpha,1)}{\beta}&=\frac{1}{\beta}\,\frac{1}{\Gamma(\alpha)}\left(\frac{y}{\beta}\right)^{-\alpha-1}\exp\left(-\frac{1}{y/\beta}\right)\\&=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,y^{-\alpha-1}\exp\left(-\frac{\beta}{y}\right)\\&=f(y;\alpha,\beta).\end{aligned}$$
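The scale-parameter identity can also be confirmed numerically (a minimal sketch using SciPy's `invgamma`, assuming its `scale` argument plays the role of $\beta$):

```python
# Minimal sketch: numerical confirmation that beta acts as a scale parameter,
# i.e. f(y; alpha, beta) = f(y/beta; alpha, 1) / beta, using scipy.stats.invgamma.
import numpy as np
from scipy.stats import invgamma

alpha, beta = 3.0, 2.0                              # illustrative values
y = np.linspace(0.1, 5.0, 50)
lhs = invgamma.pdf(y / beta, a=alpha, scale=1.0) / beta
rhs = invgamma.pdf(y, a=alpha, scale=beta)
assert np.allclose(lhs, rhs)
```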
Occurrence
See also
References
Witkovsky, V. (2001). "Computing the Distribution of a Linear Combination of Inverted Gamma Variables". Kybernetika. 37 (1): 79–90. MR 1825758. Zbl 1263.62022.