Modified Kumaraswamy
Continuous probability distribution
[Infobox plots: probability density function; cumulative distribution function]

Parameters: \alpha > 0 (real), \beta > 0 (real)
Support: x \in (0,1)
PDF: \frac{\alpha\beta\,\mathrm{e}^{\alpha-\alpha/x}\,(1-\mathrm{e}^{\alpha-\alpha/x})^{\beta-1}}{x^{2}}
CDF: 1-(1-\mathrm{e}^{\alpha-\alpha/x})^{\beta}
Quantile: \frac{\alpha}{\alpha-\log\!\left(1-(1-u)^{1/\beta}\right)}
Mean: \alpha\beta\,\mathrm{e}^{\alpha}\sum_{i=0}^{\infty}(-1)^{i}\binom{\beta-1}{i}\mathrm{e}^{\alpha i}\,\Gamma\!\left[0,(i+1)\alpha\right]
Variance: \alpha^{2}\beta\,\mathrm{e}^{\alpha}\sum_{i=0}^{\infty}(-1)^{i}\binom{\beta-1}{i}\mathrm{e}^{\alpha i}(i+1)\,\Gamma\!\left[-1,(i+1)\alpha\right]-\mu^{2}
Moments: \mathrm{E}(X^{h})=\alpha\beta\,\mathrm{e}^{\alpha}\sum_{i=0}^{\infty}(-1)^{i}\binom{\beta-1}{i}\mathrm{e}^{\alpha i}(\alpha+\alpha i)^{h-1}\,\Gamma\!\left[1-h,(i+1)\alpha\right]
In probability theory, the Modified Kumaraswamy (MK) distribution is a two-parameter continuous probability distribution defined on the interval (0,1). It serves as an alternative to the beta and Kumaraswamy distributions for modeling double-bounded random variables. The MK distribution was originally proposed by Sagrillo, Guerra, and Bayer[1] through a transformation of the Kumaraswamy distribution.
Its density exhibits an increasing-decreasing-increasing shape, which is not characteristic of the beta or Kumaraswamy distributions. The motivation for this proposal stemmed from applications in hydro-environmental problems.
Definitions
Probability density function
The probability density function of the Modified Kumaraswamy distribution is
f_{X}(x;\boldsymbol{\theta}) = \frac{\alpha\beta\,\mathrm{e}^{\alpha-\alpha/x}\,(1-\mathrm{e}^{\alpha-\alpha/x})^{\beta-1}}{x^{2}}
where \boldsymbol{\theta} = (\alpha,\beta)^{\top}, and \alpha > 0 and \beta > 0 are shape parameters.
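A direct implementation of this density is straightforward; the sketch below is a minimal NumPy version (the function name mk_pdf is illustrative, not from the source).

```python
import numpy as np

def mk_pdf(x, alpha, beta):
    """Density of the Modified Kumaraswamy distribution on (0, 1)."""
    z = np.exp(alpha - alpha / x)                    # e^(alpha - alpha/x)
    return alpha * beta * z * (1.0 - z) ** (beta - 1.0) / x ** 2

# Example: evaluate the density on a grid for alpha = 2, beta = 3.
x = np.linspace(0.01, 0.99, 5)
print(mk_pdf(x, alpha=2.0, beta=3.0))
```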
Cumulative distribution function
The cumulative distribution function of the Modified Kumaraswamy distribution is given by

F_{X}(x;\boldsymbol{\theta}) = 1-(1-\mathrm{e}^{\alpha-\alpha/x})^{\beta}
where \boldsymbol{\theta} = (\alpha,\beta)^{\top}, and \alpha > 0 and \beta > 0 are shape parameters.
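Since the density above is the derivative of this CDF, a finite-difference comparison gives a quick numerical consistency check; the sketch below (illustrative names and step size) assumes NumPy.

```python
import numpy as np

def mk_cdf(x, alpha, beta):
    """CDF of the Modified Kumaraswamy distribution."""
    return 1.0 - (1.0 - np.exp(alpha - alpha / x)) ** beta

def mk_pdf(x, alpha, beta):
    z = np.exp(alpha - alpha / x)
    return alpha * beta * z * (1.0 - z) ** (beta - 1.0) / x ** 2

# A central finite difference of the CDF should approximate the PDF.
x0, h = 0.4, 1e-6
fd = (mk_cdf(x0 + h, 2.0, 3.0) - mk_cdf(x0 - h, 2.0, 3.0)) / (2.0 * h)
print(fd, mk_pdf(x0, 2.0, 3.0))   # the two values should agree closely
```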
Quantile function
The inverse cumulative distribution function (quantile function) is
Q_{X}(u;\boldsymbol{\theta}) = \frac{\alpha}{\alpha-\log\!\left(1-(1-u)^{1/\beta}\right)}
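Because the quantile function is available in closed form, inverse-transform sampling is immediate: if U ~ Uniform(0, 1), then Q(U) follows the MK distribution. A minimal sketch (illustrative names):

```python
import numpy as np

def mk_quantile(u, alpha, beta):
    """Quantile function Q(u) of the Modified Kumaraswamy distribution."""
    return alpha / (alpha - np.log(1.0 - (1.0 - u) ** (1.0 / beta)))

# Inverse-transform sampling: Q(U) with U ~ Uniform(0, 1).
rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)
samples = mk_quantile(u, alpha=2.0, beta=3.0)
print(samples.min(), samples.max())   # all draws lie in (0, 1)
```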
Properties
Moments
The h-th statistical moment of X is given by

\mathrm{E}\left(X^{h}\right) = \alpha\beta\,\mathrm{e}^{\alpha}\sum_{i=0}^{\infty}(-1)^{i}\binom{\beta-1}{i}\mathrm{e}^{\alpha i}(\alpha+\alpha i)^{h-1}\,\Gamma\!\left[1-h,(i+1)\alpha\right],

where \Gamma[s,x] denotes the upper incomplete gamma function.
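One way to evaluate this series numerically is with mpmath, whose binomial and gammainc accept real arguments (including the non-positive first argument of the upper incomplete gamma). The sketch below is illustrative: the truncation n_terms is an arbitrary choice, and the quadrature version provides an independent check.

```python
import mpmath as mp

def mk_moment_series(h, alpha, beta, n_terms=60):
    """Truncated series for E(X^h); Gamma[1-h, .] is the upper incomplete gamma."""
    s = mp.mpf(0)
    for i in range(n_terms):
        s += ((-1) ** i * mp.binomial(beta - 1, i) * mp.e ** (alpha * i)
              * (alpha + alpha * i) ** (h - 1)
              * mp.gammainc(1 - h, (i + 1) * alpha, mp.inf))
    return alpha * beta * mp.e ** alpha * s

def mk_moment_quad(h, alpha, beta):
    """The same moment by numerical integration of x^h f(x) over (0, 1)."""
    f = lambda x: (x ** h * alpha * beta * mp.e ** (alpha - alpha / x)
                   * (1 - mp.e ** (alpha - alpha / x)) ** (beta - 1) / x ** 2)
    return mp.quad(f, [0, 1])

# The two evaluations should agree (here the first moment, h = 1).
print(mk_moment_series(1, 2, 3), mk_moment_quad(1, 2, 3))
```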
Mean and Variance
The mean (\mu) of X, a measure of central tendency, is

\mu = \mathrm{E}(X) = \alpha\beta\,\mathrm{e}^{\alpha}\sum_{i=0}^{\infty}(-1)^{i}\binom{\beta-1}{i}\mathrm{e}^{\alpha i}\,\Gamma\!\left[0,(i+1)\alpha\right]
and its variance (\sigma^{2}) is

\sigma^{2} = \mathrm{E}(X^{2})-\mu^{2} = \alpha^{2}\beta\,\mathrm{e}^{\alpha}\sum_{i=0}^{\infty}(-1)^{i}\binom{\beta-1}{i}\mathrm{e}^{\alpha i}(i+1)\,\Gamma\!\left[-1,(i+1)\alpha\right]-\mu^{2}.
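As a sanity check (the truncation, sample size, and helper names below are illustrative), these expressions can be evaluated with SciPy, using \Gamma[0,x] = exp1(x) and the incomplete-gamma recurrence \Gamma[-1,x] = \mathrm{e}^{-x}/x - \Gamma[0,x], and compared with Monte Carlo estimates obtained through the quantile function.

```python
import numpy as np
from scipy.special import binom, exp1

def mk_mean_var(alpha, beta, n_terms=60):
    """Mean and variance from the truncated series."""
    mu = m2 = 0.0
    for i in range(n_terms):
        b = binom(beta - 1, i)
        if b == 0.0:
            continue                      # remaining terms vanish for integer beta
        a_i = (i + 1) * alpha
        g0 = exp1(a_i)                    # Gamma[0, (i+1) alpha]
        g1 = np.exp(-a_i) / a_i - g0      # Gamma[-1, (i+1) alpha] by recurrence
        c = (-1.0) ** i * b * np.exp(alpha * i)
        mu += c * g0
        m2 += c * (i + 1) * g1
    mu *= alpha * beta * np.exp(alpha)
    m2 *= alpha ** 2 * beta * np.exp(alpha)
    return mu, m2 - mu ** 2

# Monte Carlo cross-check via inverse-transform sampling.
alpha, beta = 2.0, 3.0
rng = np.random.default_rng(1)
u = rng.uniform(size=200_000)
x = alpha / (alpha - np.log(1.0 - (1.0 - u) ** (1.0 / beta)))
print(mk_mean_var(alpha, beta))   # series values
print(x.mean(), x.var())          # sample values should be close
```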
Parameter estimation
Sagrillo, Guerra, and Bayer[1] suggested using the maximum likelihood method for parameter estimation of the MK distribution. The log-likelihood function for the MK distribution, given a sample x_{1},\ldots,x_{n}, is

\ell(\boldsymbol{\theta}) = n\alpha + n\log(\alpha) + n\log(\beta) - \alpha\sum_{i=1}^{n}\frac{1}{x_{i}} - 2\sum_{i=1}^{n}\log(x_{i}) + (\beta-1)\sum_{i=1}^{n}\log\!\left(1-\mathrm{e}^{\alpha-\alpha/x_{i}}\right).
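Translated directly into code, a minimal NumPy sketch of this log-likelihood (the function name is illustrative) is:

```python
import numpy as np

def mk_loglik(theta, x):
    """Log-likelihood of MK(alpha, beta) for a sample x with values in (0, 1)."""
    alpha, beta = theta
    n = x.size
    return (n * alpha + n * np.log(alpha) + n * np.log(beta)
            - alpha * np.sum(1.0 / x)
            - 2.0 * np.sum(np.log(x))
            + (beta - 1.0) * np.sum(np.log(1.0 - np.exp(alpha - alpha / x))))
```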
The components of the score vector

U(\boldsymbol{\theta}) = \left[\frac{\partial\ell(\boldsymbol{\theta})}{\partial\alpha}, \frac{\partial\ell(\boldsymbol{\theta})}{\partial\beta}\right]
are
\frac{\partial\ell(\boldsymbol{\theta})}{\partial\alpha} = n + \frac{n}{\alpha} + (\beta-1)\,\mathrm{e}^{\alpha}\sum_{i=1}^{n}\frac{x_{i}-1}{x_{i}\left(\mathrm{e}^{\alpha}-\mathrm{e}^{\alpha/x_{i}}\right)} - \sum_{i=1}^{n}\frac{1}{x_{i}}
and
\frac{\partial\ell(\boldsymbol{\theta})}{\partial\beta} = \frac{n}{\beta} + \sum_{i=1}^{n}\log\!\left(1-\mathrm{e}^{\alpha-\alpha/x_{i}}\right).
The MLEs of \boldsymbol{\theta}, denoted by \hat{\boldsymbol{\theta}} = (\hat{\alpha},\hat{\beta})^{\top}, are obtained as the simultaneous solution of \boldsymbol{U}(\boldsymbol{\theta}) = \boldsymbol{0}, where \boldsymbol{0} is a two-dimensional null vector.
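In practice this system has no closed-form solution, so the estimates are found numerically. The sketch below maximizes the log-likelihood with scipy.optimize.minimize; the simulated data, starting values, and bounds are illustrative choices, not prescriptions from the source.

```python
import numpy as np
from scipy.optimize import minimize

def mk_negloglik(theta, x):
    alpha, beta = theta
    n = x.size
    return -(n * alpha + n * np.log(alpha) + n * np.log(beta)
             - alpha * np.sum(1.0 / x) - 2.0 * np.sum(np.log(x))
             + (beta - 1.0) * np.sum(np.log(1.0 - np.exp(alpha - alpha / x))))

# Simulate a sample through the quantile function, then fit by maximum likelihood.
rng = np.random.default_rng(42)
alpha_true, beta_true = 2.0, 3.0
u = rng.uniform(size=2_000)
x = alpha_true / (alpha_true - np.log(1.0 - (1.0 - u) ** (1.0 / beta_true)))

res = minimize(mk_negloglik, x0=np.array([1.0, 1.0]), args=(x,),
               method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
print(res.x)   # should be close to (alpha_true, beta_true)
```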
Related distributions

If X \sim \mathrm{MK}(\alpha,\beta), then \exp\left\{1-\frac{1}{X}\right\} \sim \mathrm{K}(\alpha,\beta) (Kumaraswamy distribution).
If X \sim \mathrm{MK}(\alpha,\beta), then \frac{1}{X}-1 follows the exponentiated exponential (EE) distribution.[2]
If X \sim \mathrm{MK}(1,\beta), then \exp\left\{1-\frac{1}{X}\right\} \sim \mathrm{Beta}(1,\beta) (beta distribution).
If X \sim \mathrm{MK}(\alpha,1), then \exp\left\{1-\frac{1}{X}\right\} \sim \mathrm{Beta}(\alpha,1).
If X \sim \mathrm{MK}(\alpha,1), then \frac{1}{X}-1 \sim \mathrm{Exp}(\alpha) (exponential distribution).
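These relations are easy to verify by simulation. For example, a quick check (illustrative sample size and seed) that 1/X - 1 is exponential with rate \alpha when \beta = 1:

```python
import numpy as np
from scipy import stats

# Draw from MK(alpha, 1) via the quantile function; with beta = 1 it
# simplifies to Q(u) = alpha / (alpha - log(u)).
rng = np.random.default_rng(7)
alpha = 2.0
u = rng.uniform(size=50_000)
x = alpha / (alpha - np.log(u))
y = 1.0 / x - 1.0                 # should follow Exp(alpha)

# Kolmogorov-Smirnov test against the exponential with scale 1/alpha.
print(stats.kstest(y, "expon", args=(0.0, 1.0 / alpha)))
```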
Applications
The Modified Kumaraswamy distribution was introduced for modeling hydro-environmental data. It has been shown to outperform the beta and Kumaraswamy distributions in modeling the useful volume of water reservoirs in Brazil.[1]
References
1. Sagrillo, M.; Guerra, R. R.; Bayer, F. M. (2021). "Modified Kumaraswamy distributions for double bounded hydro-environmental data". Journal of Hydrology. 603. doi:10.1016/j.jhydrol.2021.127021.
2. Gupta, R. D.; Kundu, D. (1999). "Theory & Methods: Generalized exponential distributions". Australian & New Zealand Journal of Statistics. 41: 173–188. doi:10.1111/1467-842X.00072.