# The big 3 claim frequency models

We now turn our attention to discrete distributions, in particular counting distributions. These are the discrete distributions that place positive probabilities only on the non-negative integers 0, 1, 2, 3, … One important application is finding suitable counting distributions for modeling the number of losses or the number of claims to an insurer, or more generally the number of other random events of interest in actuarial applications. From a claim perspective, these counting distributions are models for claim frequency. Combining frequency models with models for claim severity provides a more complete picture of the insurer's exposure to risk than using claim severity alone. This post and several subsequent posts are preparation for the discussion on modeling aggregate losses and claims. Another upcoming topic is the effect of insurance coverage modifications (e.g. deductibles) on claim frequency and claim severity.

This post focuses on the three commonly used counting distributions – Poisson, binomial and negative binomial (the big 3). These three distributions are the basis for defining a large class of other counting distributions.

Probability Generating Function

Let $Y$ be a random variable with positive probabilities only on the non-negative integers, i.e. $P(Y=k)$ is positive only for $k=0,1,2,\cdots$. The function $P(Y=k)$ is the probability of the occurrence of the event $Y=k$, i.e. the observed value of the random variable $Y$ is $k$. It is called the probability function of the random variable $Y$ (also called probability mass function). From the probability function, many other distributional quantities can be derived, e.g. mean, variance and higher moments.

We can also elicit information about $Y$ from its generating function. The generating function (or probability generating function) of $Y$ is defined by:

$\displaystyle P_Y(z)=p_0+p_1 z +p_2 z^2 + \cdots=\sum \limits_{j=0}^\infty p_j z^j$

where each $p_j=P(Y=j)$. The generating function $P_Y(z)$ is defined wherever the infinite sum converges. At minimum, $P_Y(z)$ converges for $\lvert z \rvert \le 1$. For some distributions, $P_Y(z)$ converges for all real $z$, e.g. when $Y$ has a Poisson distribution (see below).

One reason for paying attention to the generating function is that the moments of $Y$ can be generated from $P_Y(z)$. Taking the $n$th derivative of $P_Y(z)$ and evaluating it at $z=1$ gives the $n$th factorial moment of $Y$.

$\displaystyle E[Y (Y-1) (Y-2) \cdots (Y-(n-1))]=P_Y^{(n)}(1)$

The above expectation is called the $n$th factorial moment. It follows that $E(Y)=P_{Y}^{(1)}(1)$. Since $E[Y (Y-1)]=P_Y^{(2)}(1)$, the second moment is $E(Y^2)=P_{Y}^{(1)}(1)+P_Y^{(2)}(1)$. In general, the $n$th moment $E(Y^n)$ can be expressed in terms of $P_Y^{(k)}(1)$ for $k=1,2,\cdots,n$.
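For instance, for $n=3$, note that $Y(Y-1)(Y-2)=Y^3-3Y^2+2Y$, so the third moment is recovered from the first three derivatives evaluated at $z=1$:

$\displaystyle E(Y^3)=P_Y^{(3)}(1)+3 E(Y^2)-2 E(Y)=P_Y^{(3)}(1)+3 P_Y^{(2)}(1)+P_Y^{(1)}(1)$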

Another application of the generating function is that $P_Y(z)$ encodes the probability function $P(Y=k)$, which is obtained by taking the derivatives of $P_Y(z)$ and then evaluating at $z=0$.

$\displaystyle P(Y=n)=\frac{P_{Y}^{(n)}(0)}{n!}$

where $n=0,1,2,3,\cdots$. Another useful property of the generating function is that the probability distribution of a random variable is uniquely determined by its generating function. This fundamental property is useful in determining the distribution of an independent sum. The generating function of an independent sum of random variables is simply the product of the individual generating functions. If the product is the generating function of a certain distribution, then the independent sum must have that distribution.

For a more detailed discussion on probability generating function, see this blog post in a companion blog.

Poisson Distribution

We now describe the three counting distributions indicated at the beginning of the post. We start with the Poisson distribution. Consider a random variable $X$ that only takes on the non-negative integers. For each $k=0,1,2,\cdots$, let $p_k=P(X=k)$.

The random variable $X$ has a Poisson distribution if its probability function is:

$\displaystyle p_k=\frac{e^{-\lambda} \lambda^k}{k!} \ \ \ \ \ \ k=0,1,2,\cdots$

for some positive constant $\lambda$. This constant $\lambda$ is the parameter of the Poisson distribution in question. It is also the mean and variance of the Poisson distribution. The following is the probability generating function of the Poisson distribution.

$\displaystyle P_X(z)=e^{\lambda \ (z-1)}$

The Poisson generating function is defined for all real numbers $z$. The mean and variance and higher moments can be computed using the generating function.

$E(X)=P_X^{(1)}(1)=\lambda$

$E[X (X-1)]=P_X^{(2)}(1)=\lambda^2$

$E[X^2]=P_X^{(1)}(1)+P_X^{(2)}(1)=\lambda+\lambda^2$

$Var(X)=E(X^2)-E(X)^2=\lambda$
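As a quick sanity check of these derivatives, the following sketch (assuming the sympy library is available) differentiates the Poisson generating function symbolically and recovers the mean, the second factorial moment and the variance.

```python
import sympy as sp

z, lam = sp.symbols('z lambda', positive=True)
P = sp.exp(lam * (z - 1))            # Poisson probability generating function

m1 = sp.diff(P, z, 1).subs(z, 1)     # E(X) = P'(1)
m2 = sp.diff(P, z, 2).subs(z, 1)     # E[X(X-1)] = P''(1)

EX = sp.simplify(m1)                 # lambda
EX2 = sp.simplify(m1 + m2)           # E(X^2) = lambda + lambda^2
VarX = sp.simplify(EX2 - EX**2)      # lambda

print(EX, EX2, VarX)
```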

One interesting characteristic of the Poisson distribution is that its mean is the same as its variance. From a mathematical standpoint, the Poisson distribution arises from the Poisson process (see a more detailed discussion here). Another discussion of the Poisson distribution is found here.

One useful characteristic of the Poisson distribution is that combining independent Poisson distributions results in another Poisson distribution. Suppose that $X_1,X_2,\cdots,X_n$ are independent Poisson random variables with means $\lambda_1,\lambda_2,\cdots,\lambda_n$, respectively. Then the probability generating function of the sum $X=X_1+X_2+\cdots +X_n$ is simply the product of the individual probability generating functions.

$\displaystyle P_X(z)=\prod \limits_{j=1}^n e^{\lambda_j (z-1)}=e^{\lambda \ (z-1)}$

where $\lambda=\lambda_1+\lambda_2+\cdots +\lambda_n$. The probability generating function of the sum $X$ is the generating function of a Poisson distribution. Thus an independent sum of Poisson distributions is a Poisson distribution whose parameter is the sum of the individual Poisson parameters.

Another useful property is that of splitting a Poisson distribution. For example, suppose that the number of claims $N$ in a given year follows a Poisson distribution with mean $\lambda$ per year. Also suppose that the claims can be classified into $W$ distinct types such that the probability of a claim being of type $i$ is $p_i$, $i=1,2,\cdots, W$ and such that $p_1+\cdots+p_W=1$. If we are interested in studying the number $N_i$ of claims in a year that are of type $i$, $i=1,2,\cdots,W$, then $N_1,N_2,\cdots,N_W$ are independent Poisson random variables with means $\lambda p_1, \lambda p_2,\cdots,\lambda p_W$, respectively. For a mathematical discussion of this Poisson splitting phenomenon, see this blog post in a companion blog.
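The splitting property can be checked with a small simulation. The sketch below is only an illustration with hypothetical values ($\lambda=4$ and type probabilities 0.5, 0.3, 0.2, assuming numpy is available): it simulates Poisson claim counts, assigns each claim a type, and compares the sample mean and variance of each type count with $\lambda p_i$.

```python
import numpy as np

rng = np.random.default_rng(seed=2018)
lam = 4.0                       # overall Poisson mean (hypothetical)
p = np.array([0.5, 0.3, 0.2])   # hypothetical type probabilities, summing to 1
n_years = 50_000

N = rng.poisson(lam, size=n_years)                     # claims per simulated year
counts = np.array([rng.multinomial(n, p) for n in N])  # split each year's claims by type

for i in range(len(p)):
    print(f"type {i+1}: mean={counts[:, i].mean():.3f}, "
          f"var={counts[:, i].var():.3f}, lambda*p_i={lam * p[i]:.3f}")
```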

Binomial Distribution

Consider a series of independent events each of which results in one of two distinct outcomes (one called Success and the other Failure) in such a way that the probability of observing a Success is constant across all trials (these are called Bernoulli trials). For a binomial distribution, we observe $n$ such trials and count the number of Successes in these $n$ trials.

More specifically, let $p$ be the probability of observing a Success in a Bernoulli trial. Let $X$ be the number of Successes observed in $n$ independent trials. Then the random variable $X$ is said to have a binomial distribution with parameters $n$ and $p$.

Note that the random variable $X$ is the independent sum of $X_1,X_2,\cdots,X_n$ where $X_i$ is the number of Successes in the $i$th Bernoulli trial. Thus $X_i$ is 1 with probability $p$ and 0 with probability $1-p$. Its probability generating function is:

$\displaystyle g(z)=(1-p) z^0 +p z^1=1-p+p z$

As a result, the probability generating function for $X$ is $g(z)$ raised to the power $n$.

$\displaystyle P_X(z)=(1-p+p z)^n$

The generating function $P_X(z)$ is defined for all real values $z$. Differentiating $P_X(z)$ twice produces the mean and variance.

$E(X)=n p$

$E[X (X-1)]=n (n-1) p^2$

$Var(X)=n p (1-p)$

By differentiating $P_X(z)$ and evaluating at $z=0$, we obtain the probability function.

$\displaystyle P(X=k)=\frac{P_X^{(k)}(0)}{k!}=\binom{n}{k} p^k (1-p)^{n-k}$

where $k=0,1,2,\cdots,n$.
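As a small check (not part of the original discussion, assuming sympy is available), the sketch below expands $(1-p+pz)^n$ for $n=5$ and confirms that the coefficient of $z^k$ matches $\binom{5}{k} p^k (1-p)^{5-k}$.

```python
import sympy as sp

z, p = sp.symbols('z p', positive=True)
n = 5
P = (1 - p + p * z) ** n          # binomial probability generating function

poly = sp.expand(P)
for k in range(n + 1):
    coeff = poly.coeff(z, k)                               # P(X=k) read off the PGF
    direct = sp.binomial(n, k) * p**k * (1 - p)**(n - k)   # usual binomial pmf
    print(k, sp.simplify(coeff - direct) == 0)             # True for every k
```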

By taking the product of probability generating functions, it follows that the independent sum $Y=Y_1+Y_2+\cdots+Y_m$, where each $Y_i$ has a binomial distribution with parameters $n_i$ and $p$, has a binomial distribution with parameters $n=n_1+\cdots+n_m$ and $p$. In other words, as long as the probability of success $p$ is identical in the binomial distributions, the independent sum is always a binomial distribution.

Note that the variance of the binomial distribution is less than the mean. Thus the binomial distribution is a suitable candidate for modeling claim frequency in situations where the sample variance is smaller than the sample mean.

Negative Binomial Distribution

As mentioned above, the Poisson distribution requires that the mean and the variance be equal. The binomial distribution requires that the variance be smaller than the mean. Thus these two counting distributions are not appropriate in all cases. The negative binomial distribution is an excellent alternative to the Poisson distribution and the binomial distribution, especially in cases where the observed variance is greater than the observed mean.

The negative binomial distribution naturally arises from the same probability experiment that generates the binomial distribution. Consider a series of independent Bernoulli trials each of which results in one of two distinct outcomes (called success and failure) in such a way that the probability of success $p$ is constant across the trials. Instead of observing the outcomes in a fixed number of trials, we now observe the trials until $r$ successes have occurred.

As we observe the Bernoulli trials, let $Y$ be the number of failures until the $r$th success has occurred. The random variable $Y$ has a negative binomial distribution with parameters $r$ and $p$. The parameter $r$ is necessarily a positive integer and the parameter $p$ is a real number between 0 and 1. The following is the probability function of the random variable $Y$.

\displaystyle \begin{aligned} P(Y=k)&=\binom{r+k-1}{k} \ p^r (1-p)^k \\&=\frac{(r+k-1)!}{k! \ (r-1)!} \ p^r (1-p)^k \ \ \ \ \ \ k=0,1,2,\cdots \\& \end{aligned}

In the above probability function, the parameter $r$ must be a positive integer. The binomial coefficient $\binom{r+k-1}{k}$ is computed by its usual definition. The above probability function can be relaxed so that $r$ can be any positive real number. The key to the relaxation is a reformulation of the binomial coefficient.

$\displaystyle \binom{n}{j}=\left\{ \begin{array}{ll} \displaystyle \frac{n (n-1) (n-2) \cdots (n-(j-1))}{j!} &\ n>0, j=1,2,3,\cdots \\ \text{ } & \text{ } \\ \displaystyle 1 &\ n>0, j=0 \end{array} \right.$

Note that in the above formulation, the $n$ in $\binom{n}{j}$ does not have to be an integer. If $n$ were to be a positive integer, the usual definition $\binom{n}{j}=\frac{n!}{j! (n-j)!}$ would lead to the same calculation. The reformulation is a generalization of the usual binomial coefficient definition.

With the new definition of binomial coefficient, the following is the probability function of the negative binomial distribution in the general case.

$\displaystyle P(Y=k)=\binom{r+k-1}{k} \ p^r (1-p)^k \ \ \ \ \ \ k=0,1,2,\cdots$

The following is the same probability function with the binomial coefficient explicitly written out.

$\displaystyle P(Y=k)=\left\{ \begin{array}{ll} \displaystyle p^r &\ k=0 \\ \text{ } & \text{ } \\ \displaystyle \frac{(k-1+r) \cdots (1+r) r}{k!} \ p^r \ (1-p)^k &\ k=1,2,\cdots \end{array} \right.$

For either of the above versions, the mean and variance are:

$\displaystyle E(Y)=\frac{r (1-p)}{p}$

$\displaystyle Var(Y)=\frac{r (1-p)}{p^2}$

Another formulation of the negative binomial distribution is that it is a Poisson-gamma mixture. The following is the probability function.

$\displaystyle P(Y=k)=\left\{ \begin{array}{ll} \displaystyle \biggl(\frac{1}{\beta+1} \biggr)^r &\ k=0 \\ \text{ } & \text{ } \\ \displaystyle \frac{(k-1+r) \cdots (1+r) r}{k!} \ \biggl(\frac{1}{\beta+1} \biggr)^r \biggl(\frac{\beta}{\beta+1} \biggr)^k &\ k=1,2,\cdots \end{array} \right.$

It is still a 2-parameter discrete distribution. The parameters $r$ and $\beta$ originate from the parameters of the gamma distribution in the Poisson-gamma mixture. The mean and variance are:

$\displaystyle E(Y) = r \ \beta$

$\displaystyle Var(Y)=r \ \beta \ (1+\beta)$
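As a quick check of this parametrization (assuming scipy is available), scipy's nbinom counts the number of failures before the $r$th success with success probability $p$; setting $p=1/(1+\beta)$ links the two versions. The parameter values below are hypothetical.

```python
from scipy.stats import nbinom

r, beta = 3.5, 2.0        # hypothetical parameters; r need not be an integer
p = 1 / (1 + beta)        # success probability in the (r, p) parametrization

dist = nbinom(r, p)       # number of failures before the r-th success
print(dist.mean(), r * beta)               # both equal 7.0
print(dist.var(), r * beta * (1 + beta))   # both equal 21.0
```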

The negative binomial distribution has been discussed at length in blog posts in several companion blogs. For the natural interpretation of negative binomial distribution based on counting the number of failures until the $r$th success, see this blog post. This is an excellent introduction.

For the general version of the negative binomial distribution where the parameter $r$ can be any positive real number, see this blog post.

For the version of negative binomial distribution from a Poisson-Gamma mixture point of view, see this blog post.

This blog post has additional facts about the negative binomial distribution. This blog post summarizes the various versions and focuses on the calculation of probabilities.

More Counting Distributions

The three counting distributions – Poisson, binomial and negative binomial – provide a versatile tool kit in modeling the number of random events such as losses to the insured or claims to the insurer. The tool kit can be greatly expanded by modifying these three distributions to generate additional distributions. The new distributions belong to the (a,b,0) and (a,b,1) classes. This topic is discussed in the subsequent posts.


$\copyright$ 2018 – Dan Ma

# An introduction to risk measures

This post is an introduction to measures of risk. Examples of risk measures are considered. The post also focuses on coherent risk measures, which possess several desirable properties for risk measures.

In actuarial applications, an important focus is on developing loss distributions for insurance products. When the loss distribution is a probability based model, we can also use the model to derive a description of risk. Often times, a measure of risk is given by one number that describes the level of exposure to risk. One example of a measure of risk is value-at-risk (VaR), discussed here, which can quantify the likelihood of an adverse outcome. VaR can then be used to determine the amount of capital required to withstand such adverse outcomes. Actuaries and risk managers along with investors, regulators and rating agencies are particularly interested in using VaR and other measures of risk to quantify the ability of an enterprise to withstand various potential adverse events.

Two important examples of measures of risk – value-at-risk and tail-value-at-risk – have been discussed in this previous post.

Examples of Risk Measures

A measure of risk (or a risk measure) is a mapping that maps a given loss distribution to the real numbers. In a general discussion, we use the Greek letter $\rho$ to denote the mapping. For example, if $X$ is the random variable that describes the losses, then $\rho(X)$ is a number that is intended to quantify the risk exposure. We first look at some examples.

Some of the early risk measures in actuarial science were based on the so called premium principles. The purpose is to develop an appropriate premium to charge for a given risk. The following gives several risk measures based on premium principles.

$\rho(X)=E[X]$ ………………………………………………….(Equivalence Principle)

$\rho(X)=(1+k) \ E[X]$ ……………………………………….(Expected Value Principle)

$\rho(X)=E[X]+ k \ Var(X)$…………………………………(Variance Principle)

$\rho(X)=\mu+k \ \sqrt{Var(X)}$…………………………………..(Standard Deviation Principle)

Each case is a rule for assigning a premium to an insurance risk. In each of the last three cases, $k$ is a fixed constant where $k \ge 0$. The fixed constant $k$ reflects an adjustment that may include expenses and profit and/or a risk loading. The equivalence principle does not account for any risk loading; it simply charges the expected losses, i.e. the pure premium.

Example 2 (Standard Deviation Principle)
This example highlights the standard deviation premium principle:

$\rho(X)=\mu+k \ \sigma$

where $\mu=E[X]$ and $\sigma^2$ is the variance of $X$.

To shed light on the standard deviation principle, suppose that the loss $X$ has a normal distribution. Then $\rho(X)$ resembles a quantile of the normal distribution where $k$ plays the role of a z-score. With $k=1.645$, $P(X>\mu+1.645 \ \sigma)=0.05$. This means that $\mu+1.645 \ \sigma$ is a threshold indicating a level of adverse outcome such that the probability of exceeding it is 5%. With $k=2.326$, $P(X>\mu+2.326 \ \sigma)=0.01$. The higher threshold $\mu+2.326 \ \sigma$ is the level of adverse outcome with a 1% chance of being exceeded.
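The stated z-scores can be verified with a quick check (a minimal sketch assuming scipy is available; the mean and standard deviation below are hypothetical).

```python
from scipy.stats import norm

mu, sigma = 100.0, 25.0    # hypothetical normal loss parameters

for k in (1.645, 2.326):
    threshold = mu + k * sigma
    exceed_prob = norm.sf(threshold, loc=mu, scale=sigma)   # P(X > mu + k*sigma)
    print(f"k={k}: threshold={threshold:.2f}, P(exceed)={exceed_prob:.4f}")
# prints roughly 0.0500 and 0.0100
```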

If the loss $X$ does not have a normal distribution, the coefficient $k$ would not be the same as the ones given for the normal distribution. Whatever the loss distribution is, the fixed constant $k$ is usually chosen so that the probability of losses exceeding the risk measure equals some pre-determined small probability level.

Example 3 (Value-at-Risk)
When the loss $X$ does not have a normal distribution, the same constants $k$ in Example 2 cannot be used. Instead, we can start with a small exceedance probability (5%, 1%, 0.5%, etc.) and determine the percentile of the loss distribution that is exceeded with that probability (the 95th, 99th or 99.5th percentile, respectively). The corresponding $k$ can then be determined. This is the basic idea behind the risk measure of value-at-risk.

The risk measure of value-at-risk (VaR) is mathematically a percentile of the loss distribution. Given that $X$ represents the losses, $\text{VaR}_p(X)$ is the $100p$th percentile of $X$. The value-at-risk (VaR) discussed here is for gauging the exposure to risk with respect to insurance losses; the probability of exceeding the threshold is $1-p$, where $p$ is the pre-determined security level. See this previous post for a more detailed discussion.

Example 4 (Tail-Value-at-Risk)
We now briefly discuss tail-value-at-risk (TVaR). Suppose that $X$ is the random variable that models losses. As before we assume that $X$ takes on positive real numbers. The tail-value-at-risk at the $100p$th security level, denoted by $\text{TVaR}_p(X)$, is the expected loss on the condition that the loss exceeds the $100p$th percentile of $X$. The following is a more succinct way of describing it.

$\text{TVaR}_p(X)=E(X \lvert X > \pi_p)$

where $\pi_p=\text{VaR}_p(X)$. Tail-value-at-risk is a risk measure that is in many ways superior to VaR. The risk measure VaR is merely a cutoff point and does not describe the tail behavior beyond the VaR threshold. TVaR reflects the shape of the tail beyond the VaR threshold. See this previous post for a more detailed discussion.

Desirable Properties of Risk Measures

One important consideration when choosing risk measures concerns combining risk entities in a given company or enterprise. For example, an insurance company may have different divisions specializing in different insurance products and services. The results of the risk measure applied individually to the different divisions should be consistent with the result of the risk measure applied to the entire company. For example, it is desirable for the risk measure of two risks combined to be no greater than the sum of the risk measures of the two risks taken individually. The following is a list of four desirable properties for a risk measure. Any risk measure that satisfies these four properties is said to be a coherent risk measure.

Coherent Risk Measure
Let $X$ and $Y$ be two loss random variables. The risk measure $\rho$ is a coherent risk measure if it satisfies the following four properties.

1. $\rho(X+Y) \le \rho(X)+\rho(Y)$ ……………………………………….(Subadditivity)
2. If $X \le Y$ with probability 1, then $\rho(X) \le \rho(Y)$ ………………..(Monotonicity)
3. For any constant $c >0$, $\rho(c X)=c \ \rho(X)$ ………………………(Positive Homogeneity)
4. For any constant $c >0$, $\rho(X+c)=\rho(X)+c$ ……………….(Translation Invariance)

The first one, subadditivity, requires that for any two random loss variables $X$ and $Y$, the risk measure for $X+Y$ be no greater than the sum of the risk measures for $X$ and $Y$ separately. This is a sensible requirement for a risk measure. It reflects the notion that combining risks leads to diversification and thus to a reduction of total overall risk. Otherwise, a large enterprise would simply be broken into smaller entities in order to reduce risk.

The property of monotonicity is another sensible requirement. If the random loss $X$ is always less than the random loss $Y$, then it makes sense for the risk measure of $X$ be no greater than the risk measure of $Y$. For example, if the risk measure represents the surplus or reserve that is required to cover a random loss, then we would want the reserve for the lesser loss to be no greater than the reserve for a larger potential loss.

The property of positive homogeneity specifies that the risk measure of a constant multiple of the random loss should be the constant multiple of the risk measure. For example, expressing the loss random variable in another currency should indicate the same level of reserve or surplus.

Translation invariance says that the risk measure of combining a random loss and a fixed loss should be the risk measure of the random loss plus the fixed loss. The reserve to cover a fixed loss should be just the fixed loss.

A Closer Look at Examples

Now that we have a list of desirable properties a risk measure should have, which of the risk measures discussed above possess these properties? We comment on these risk measures one by one.

Example 5 (Equivalence Principle)
The risk measure based on the equivalence principle is a coherent risk measure since it satisfies all four properties. The four properties follow directly from the properties of expected value.

• $\rho(X+Y)=E(X+Y)=E(X)+E(Y)=\rho(X)+\rho(Y)$
• If $X \le Y$ always, then $\rho(X)=E(X) \le E(Y)=\rho(Y)$.
• $\rho(cX)=E(c X)=c \ E(X)=c \ \rho(X)$
• $\rho(X+c)=E(X+c)=E(X)+c=\rho(X)+c$

Example 6 (Expected Value Principle)
For the four risk measures based on premium principles, the equivalence principle is the only one that is coherent. The risk measure based on the expected value principle fails translation invariance. Here’s the derivation.

• $\rho(X+Y)=(1+k) \ E(X+Y)=(1+k) \ E(X)+(1+k) \ E(Y)=\rho(X)+\rho(Y)$
• If $X \le Y$ always, then $\rho(X)=(1+k) \ E(X) \le (1+k) \ E(Y)=\rho(Y)$.
• $\rho(cX)=(1+k) \ E(c X)=c \ (1+k) \ E(X)=c \ \rho(X)$
• $\rho(X+c)=(1+k) \ E(X+c)=(1+k) \ E(X)+(1+k) \ c \ne \rho(X)+c$

Example 7 (Variance Principle)
Now, consider the risk measure based on the variance principle. Recall that the risk measure based on the variance principle is of the form $\rho(L)=E(L)+k \ Var(L)$ where $L$ represents the random loss and $k$ is some positive constant. First we show that it does not satisfy the subadditivity property.

\displaystyle \begin{aligned} \rho(X+Y)&=E(X+Y)+k Var(X+Y) \\&=E(X)+E(Y)+k Var(X)+k Var(Y)+2 k Cov(X,Y) \\&=\rho(X)+\rho(Y)+2 k Cov(X,Y) \end{aligned}

When $Cov(X,Y)>0$, i.e. the risk $X$ and the risk $Y$ are positively correlated, we have $\rho(X+Y)>\rho(X)+\rho(Y)$. There is no benefit of risk reduction in combining risks that are positively correlated.

Next we demonstrate why the variance principle fails monotonicity. Recall that $\rho$ satisfies monotonicity if $\rho(X) \le \rho(Y)$ for any risks $X$ and $Y$ such that $X \le Y$ in all scenarios. For the risk measure $\rho(L)=E(L)+k \ Var(L)$, we show that for each $k>0$, we can find random variables $X$ and $Y$ such that $X \le Y$ always and $\rho(X)>\rho(Y)$. To this end, we define random variables $X_n$ and $Y$ for each positive integer $n$.

Let $n$ be any positive integer. Let $X_n$ be a random variable such that $P(X_n=-n)=0.1$ and $P(X_n=90)=0.9$. Let $Y$ be a constant value of 100, i.e. $P(Y=100)=1$. Then it is straightforward to verify that

$E(X_n)=81-0.1n$

$E(X_n^2)=7290+0.1n^2$

$Var(X_n)=7290+0.1n^2-(81-0.1n)^2=0.09 n^2+16.2 n+729$

$E(Y)=100$

$Var(Y)=0$

For $\rho(X_n)=81-0.1 n+k \ (0.09 n^2+16.2 n+729)>\rho(Y)=100$, the constant $k$ must satisfy the following.

$\displaystyle k>\frac{19+0.1 n}{0.09 n^2+16.2 n+729} \longrightarrow 0$ as $n \longrightarrow \infty$

Note that the quantity on the right approaches zero as $n$ becomes large. For each $k>0$ (however small), we can always choose an integer $n$ large enough such that the above inequality is satisfied. Then for the pair $X_n$ and $Y$, $X_n \le Y$ always and $\rho(X_n) >\rho(Y)$. Thus monotonicity fails for the variance principle.
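A quick numeric check of this counterexample (with hypothetical choices $k=0.01$ and $n=1000$) confirms that $\rho(X_n)>\rho(Y)$ even though $X_n \le Y$ always.

```python
# rho(L) = E(L) + k*Var(L); k and n are hypothetical choices satisfying the inequality
k = 0.01
n = 1000

# X_n takes the value -n with probability 0.1 and 90 with probability 0.9
mean_X = -n * 0.1 + 90 * 0.9
var_X = (n ** 2) * 0.1 + 90 ** 2 * 0.9 - mean_X ** 2

rho_X = mean_X + k * var_X    # 1050.29
rho_Y = 100 + k * 0           # Y is the constant 100, so Var(Y) = 0

print(rho_X, rho_Y, rho_X > rho_Y)   # 1050.29 100 True
```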

The following derivation shows that positive homogeneity fails and translation invariance is satisfied.

$\rho(c \ X)=c \ E(X)+k \ c^2 \ Var(X) \ne c \ \rho(X)=c \ E(X)+k \ c \ Var(X)$

$\rho(X+c)=E(X+c)+k \ Var(X+c)=E(X)+c+k \ Var(X)=\rho(X)+c$

Example 8 (Standard Deviation Principle)
The risk measure according to the standard deviation principle is also not coherent even though it is an improvement over the variance principle. The standard deviation principle satisfies all the properties except monotonicity.

To show subadditivity, note that $\sqrt{Var(X+Y)} \le \sqrt{Var(X)}+\sqrt{Var(Y)}$. This is derived as follows:

\displaystyle \begin{aligned} Var(X+Y)&=Var(X)+Var(Y)+2 Cov(X,Y) \\&=Var(X)+Var(Y)+2 Corr(X,Y) \sqrt{Var(X)} \sqrt{Var(Y)} \\&\le Var(X)+Var(Y)+2 \sqrt{Var(X)} \sqrt{Var(Y)} \\&= (\sqrt{Var(X)}+\sqrt{Var(Y)})^2 \end{aligned}

In the above derivation, $Corr(X,Y)$ refers to the correlation coefficient, which is always $\le 1$. The following shows the subadditivity.

\displaystyle \begin{aligned} \rho(X+Y)&=E(X+Y)+k \sqrt{Var(X+Y)} \\&\le E(X)+E(Y) +k (\sqrt{Var(X)}+\sqrt{Var(Y)}) \\&=\rho(X)+\rho(Y) \end{aligned}

The following derivation verifies positive homogeneity and translation invariance.

\displaystyle \begin{aligned} \rho(c \ X)&=E(c \ X)+k \sqrt{Var(c \ X)} \\&=c \ E(X)+ k \sqrt{c^ 2 \ Var(X)} \\&=c \ E(X)+ k \ c \ \sqrt{Var(X)} \\&=c \ \rho(X) \end{aligned}

\displaystyle \begin{aligned} \rho(X+c)&=E(X+c)+k \ \sqrt{Var(X+c)} \\&=E(X)+c+k \ \sqrt{Var(X)} \\&=\rho(X)+c \end{aligned}

We now show that the standard deviation principle does not satisfy monotonicity. The proof is similar to the one for variance principle in Example 7 but with a crucial adjustment. For the risk measure $\rho(L)=E(L)+k \sqrt{Var(L)}$, we show that for each $k>0$, we can find random variables $X$ and $Y$ such that $X \le Y$ always and $\rho(X)>\rho(Y)$. To this end, we define random variables $X_n$ and $Y$ for each positive integer $n$.

Let $n$ be any positive integer. Let $X_n$ be a random variable such that $P(X_n=- x_n)=10^{-n}$ and $P(X_n=90)=1-10^{-n}$ where $x_n$ is the positive number defined by $x_n=10^{n+1}-90$. Let $Y$ be a constant value of 100, i.e. $P(Y=100)=1$. Then it is straightforward to verify that

$E(X_n)=80$

$E(X_n^2)=10^{n+2}+6300$

$Var(X_n)=10^{n+2}+6300-80^2=10^{n+2}-100$

$E(Y)=100$

$Var(Y)=0$

For $\rho(X_n)=80+k \ \sqrt{10^{n+2}-100} > \rho(Y)=100$, the constant $k$ must satisfy the following inequality.

$\displaystyle k > \frac{20}{\sqrt{10^{n+2}-100}} \longrightarrow 0$ as $n \longrightarrow \infty$

For any $k >0$ (however small), we can always find an integer $n$ large enough so that the above inequality holds. Then the pair of random variables $X_n$ and $Y$ are such that $X_n \le Y$ always and $\rho(X_n)>\rho(Y)$. Thus monotonicity fails for the standard deviation principle.

Example 9 (Value-at-Risk)
The risk measure of VaR is not coherent despite its attractive qualities. An example of the violation of subadditivity can be found in this article. Example 1 in that article gives two independent risks $X$ and $Y$ such that $\text{VaR}_{0.99}(X+Y)>\text{VaR}_{0.99}(X)+\text{VaR}_{0.99}(Y)$.

Example 10 (Tail-Value-at-Risk)
One advantage of TVaR is that it describes the tail behavior beyond the threshold of VaR. Another is that TVaR is a coherent risk measure. This fact has been verified in the article Artzner, P., Delbaen, F., Eber, J.-M., & Heath, D. (1997). Thinking Coherently. Risk, 10, 68-71.


$\copyright$ 2018 – Dan Ma

# Tail-value-at-risk of a mixture

The risk measures of value-at-risk and tail-value-at-risk are discussed in the preceding post. This post extends the preceding post with an algorithm on evaluating the tail-value-at-risk of a mixture distribution with discrete mixing weights.

The preceding post introduces the notions of value-at-risk (VaR) and tail-value-at-risk (TVaR). These are two particular examples of risk measures that are useful for insurance companies and other enterprises in a risk management context. For $0<p<1$, VaR at the security level $p$ gives the threshold such that the probability of a loss more adverse than the threshold is at most $1-p$. Thus in our context VaR is a percentile of the loss distribution. TVaR is a conditional expected value. At the security level $p$, TVaR is the expected value of the losses given that the losses exceed the threshold VaR.

The preceding post gives several representations of TVaR. It also gives the formula for TVaR for several distributions – exponential, Pareto, normal and lognormal. We now discuss the TVaR of a mixture distribution. If a distribution is the mixture of two distributions and if each of the individual distributions has a clear formulation of TVaR, can we mix the two TVaRs? The answer is that we can, provided some adjustments are made. The following gives the formula.

Suppose that the loss $X$ is a mixture of two distributions represented by the random variables $X_1$ and $X_2$, with weights $w$ and $1-w$, respectively. Let $\pi_p$ be the $100p$th percentile of the loss $X$, i.e. $\pi_p=\text{VaR}_p(X)$. Then the tail-value-at-risk at the $100p$ percent security level is:

\displaystyle \begin{aligned} \text{TVaR}_p(X)&=\pi_p+\frac{1}{1-p} \biggl[w \times P(X_1>\pi_p) \times e_{X_1}(\pi_p)\\& \ \ +(1-w) \times P(X_2>\pi_p) \times e_{X_2}(\pi_p)\biggr] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (a) \end{aligned}

The comparison is with formula (3) in the preceding post. That formula shows that TVaR is $\pi_p+e(\pi_p)$. In other words, TVaR is VaR plus the mean excess loss function evaluated at $\pi_p$. The content within the square brackets in formula (a) is a weighted average of the two individual mean excess loss functions, with the adjustment of multiplying by the probabilities $P(X_1>\pi_p)$ and $P(X_2>\pi_p)$. This formula is useful if $\pi_p$ (the VaR) can be calculated and if the mean excess loss functions are accessible.

We give an example and then show the derivation of Formula (a).

Example 1
The mean excess loss function of an exponential distribution is constant. Let’s consider the mixture of two exponential distributions. Suppose that losses follow a mixture of two exponential distributions where one distribution has mean 5 (75% weight) and the other has mean 10 (25% weight). Determine the VaR and TVaR at the security level 99%.

First, calculate the 99th percentile of the mixture, which is the solution to the following equation.

$\displaystyle 0.75 e^{-x/5}+0.25 e^{-x/10}=1-0.99=0.01$

By letting $y=e^{-x/10}$, we solve the following equation.

$\displaystyle 0.75 y^2+0.25 y-0.01=0$

Use the quadratic formula to solve for $y$. Then solve for $x$. The following is the 99th percentile of the loss $X$.

$\displaystyle \pi_p=-10 \times \text{ln} \biggl(\frac{-1+\sqrt{1.48}}{6} \biggr)=33.2168$

The following gives the TVaR.

\displaystyle \begin{aligned} \text{TVaR}_p(X)&=\pi_p+\frac{1}{1-0.99} \biggl[0.75 \times e^{-\pi_p/5} \times 5 +0.25 \times e^{-\pi_p/10} \times 10 \biggr] \\&=42.7283 \end{aligned}

Note that the mean excess loss for the first exponential distribution is 5 and for the second one is 10 (the unconditional means). The survival functions $P(X_1>\pi_p)$ and $P(X_2>\pi_p)$ are also easy to evaluate. As long as the percentile $\pi_p$ of the mixture is calculated, the formula is very useful. In this example, the two exponential parameters are set so that the calculation of percentiles uses the quadratic formula. If the parameters are set differently, then we can use software to evaluate the required percentile.
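The calculation can be reproduced numerically. The following sketch (assuming numpy and scipy are available) solves for the 99th percentile with a root finder and then applies formula (a).

```python
import numpy as np
from scipy.optimize import brentq

w1, w2 = 0.75, 0.25          # mixing weights
theta1, theta2 = 5.0, 10.0   # exponential means
p = 0.99

def S(x):
    """Survival function of the two-component exponential mixture."""
    return w1 * np.exp(-x / theta1) + w2 * np.exp(-x / theta2)

# 99th percentile: solve S(x) = 1 - p
pi_p = brentq(lambda x: S(x) - (1 - p), 0.0, 1000.0)

# Formula (a); for an exponential distribution the mean excess loss equals its mean
tvar = pi_p + (w1 * np.exp(-pi_p / theta1) * theta1 +
               w2 * np.exp(-pi_p / theta2) * theta2) / (1 - p)

print(round(pi_p, 4), round(tvar, 4))   # 33.2168 42.7283
```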

Deriving the formula

Suppose that $X$ is the mixture of $X_1$, with weight $w$, and $X_2$, with weight $1-w$. The density function for $X_1$ is $f_1(x)$ and the density function for $X_2$ is $f_2(x)$. The density function of $X$ is then $f(x)=w f_1(x)+(1-w) f_2(x)$. We derive from the basic definition of TVaR. Let $\pi_p$ be the $100p$th percentile of $X$.

\displaystyle \begin{aligned} \text{TVaR}_p(X)&=\frac{\int_{\pi_p}^\infty x f(x) \ dx}{1-p}\\&=\pi_p+\frac{\int_{\pi_p}^\infty (x-\pi_p) f(x) \ dx}{1-p} \\&=\pi_p+\frac{\int_{\pi_p}^\infty (x-\pi_p) (w f_1(x)+(1-w) f_2(x)) \ dx}{1-p} \\&=\pi_p+\frac{1}{1-p} \biggl[w \int_{\pi_p}^\infty (x-\pi_p) f_1(x) \ dx +(1-w) \int_{\pi_p}^\infty (x-\pi_p) f_2(x) \ dx\biggr] \\&=\pi_p+\frac{1}{1-p} \biggl[w \ P(X_1>\pi_p) \ \frac{\int_{\pi_p}^\infty (x-\pi_p) f_1(x) \ dx}{P(X_1>\pi_p)}\\& \ \ +(1-w) \ P(X_2>\pi_p) \ \frac{\int_{\pi_p}^\infty (x-\pi_p) f_2(x) \ dx}{P(X_2>\pi_p)} \biggr] \\&=\pi_p+\frac{1}{1-p} \biggl[w \ P(X_1>\pi_p) \ e_{X_1}(\pi_p) +(1-w) \ P(X_2>\pi_p) \ e_{X_2}(\pi_p) \biggr] \end{aligned}

The formula derived here is for mixtures of two distributions. It is straightforward to extend it to any finite mixture.

Practice Problems

Practice problems are available in the companion blog to reinforce the concepts of value-at-risk and tail-value-at-risk. Practice Problems 10-G and 10-H in that link are for TVaR of mixtures.


$\copyright$ 2018 – Dan Ma

# Value-at risk and tail-value-at-risk

In actuarial applications, an important focus is on developing loss distributions for insurance products. In such applications, it is desirable to employ risk measures to evaluate the exposure to risk. Such risk measures are indicators, often one or a small set of numbers, that inform actuaries and risk managers about the degree to which the risk bearing entity is subject to various aspects of risk. This post introduces two risk measures – value-at-risk (VaR) and tail-value-at-risk (TVaR).

Value-at-Risk

One natural question for a risk bearing entity (e.g. insurance companies and other enterprises) is: what is the chance of an adverse outcome? Value-at-risk (VaR) provides a ready answer to this question. Mathematically speaking, VaR is a quantile of the distribution of aggregate losses. For example, VaR at the 99% probability level indicates the level of adverse outcome such that the probability of exceeding this threshold is 1%. More broadly, VaR is the amount of capital required to ensure, with a high level of confidence, that the risk bearing entity does not become insolvent. The security level or probability level is chosen arbitrarily. In practice, it is usually a high number such as 95%, 99% or 99.5%. The preference is for a higher security level when evaluating the risk exposure for the entire enterprise. For a sub unit of the enterprise, the security level may be set to a lower number such as 95%.

Suppose that $X$ is a random variable that models the loss distribution in question. We assume that the support of $X$ is the set of positive real numbers or some appropriate subset. The value-at-risk (VaR) of $X$ at the $100p$th security level, denoted by $\text{VaR}_p(X)$, is the $100p$th percentile of $X$.

In the current discussion, we focus on loss distributions that are continuous random variables. Thus $\text{VaR}_p(X)$ is the value $\pi_p$ such that $P(X>\pi_p)=1-p$. In some ways, VaR is an attractive risk measure. Mathematically speaking, VaR has a clear and simple definition. For certain probability models, VaR can be evaluated in closed form. For those models that have no closed form for percentiles, VaR can be evaluated using software. However, VaR has limitations (this point will be briefly discussed below).

Example 1
Suppose that the loss $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$. Then $\text{VaR}_p(X)=\mu+z_p \cdot \sigma$ where $z_p$ is the $100p$th percentile of the standard normal distribution (i.e. normal with mean 0 and standard deviation 1).

VaR for normal distribution is identical to the risk measure called standard deviation principle. For any loss $X$ with mean $\mu$ and variance $\sigma^2$, the quantity $\mu+k \cdot \sigma$ for some fixed constant $k$ is a risk measure called the standard deviation principle. The constant $k$ is usually chosen such that losses will exceed $\mu+k \cdot \sigma$ with a pre-determined small probability. For losses that are normally distributed, $k=1.645$ for security level $p=0.95$ and $k=2.326$ for security level $p=0.99$.

Comment
The value-at-risk discussed here is for gauging the exposure to risk with respect to insurance losses. Thus we assume that the random variable $X$ takes on positive real values (or some appropriate subset of the positive real number line). In this context, the adverse outcomes are the extremely high values. In other words, the adverse outcomes we wish to guard against are in the right tail of the loss distribution. Thus $\text{VaR}_p(X)$ is evaluated for high values of $p$ such as 0.95 or 0.99 or higher. From the actuarial loss point of view, VaR lies in the far right tail of the loss distribution.

VaR is also used extensively in banking and investment industry. The random variable $X$ in that setting is a profit and loss distribution, which can extend into the negative real numbers (when there are losses). The adverse outcomes for banking and investment applications would be the extreme left tails of the profit and loss distribution. Thus when VaR is evaluated at the security level 95%, we actually calculate the 5th percentile of the profit and loss distribution.

Tail-Value-at-Risk

Tail-value-at-risk (TVaR) is a risk measure that is in many ways superior to VaR. The risk measure VaR is merely a cutoff point and does not describe the tail behavior beyond the VaR threshold. We will see that TVaR reflects the shape of the tail beyond the VaR threshold.

Suppose that $X$ is the random variable that models losses. As before we assume that $X$ takes on positive real numbers. The tail-value-at-risk at the $100p$th security level, denoted by $\text{TVaR}_p(X)$, is the expected loss on the condition that loss exceeds the $100p$th percentile of $X$.

Just as in the discussion of VaR, the discussion of TVaR focuses on continuous distributions. As before, we use $\pi_p$ to denote the $100p$th percentile of $X$. The quantity $\text{TVaR}_p(X)$ can be expressed as follows:

$\displaystyle (1) \ \ \ \ \ \text{TVaR}_p(X)=E(X \lvert X> \pi_p)=\frac{\int_{\pi_p}^\infty x f(x) \ dx}{1-F(\pi_p)}=\frac{\int_{\pi_p}^\infty x f(x) \ dx}{1-p}$

The above formulation of TVaR is the one based on the definition. To use (1) to obtain TVaR, evaluate VaR and then evaluate the integral on the right side. TVaR is obtained by dividing the integral by $1-p$. However, there are other formulations that give more insight into TVaR. The following is another formulation.

$\displaystyle (2) \ \ \ \ \ \text{TVaR}_p(X)=\frac{\int_p^1 \text{VaR}_w(X) \ dw}{1-p}$

The above is derived by the substitution $w=F(x)$ where $F(x)$ is the cumulative distribution function (CDF) of $X$. From a computation standpoint, (2) is not easy to use since it is usually hard to integrate VaR. The value of (2) is in the insight. From (2), we see that TVaR can be viewed as the average of all VaR at the level $w$ above $p$. Thus TVaR tells us much more about the tail of the loss distribution than VaR for just one security level. The following is another formulation.

\displaystyle \begin{aligned} (3) \ \ \ \ \ \text{TVaR}_p(X)&=\frac{\int_{\pi_p}^\infty x f(x) \ dx}{1-p} \\&=\pi_p+\frac{\int_{\pi_p}^\infty (x-\pi_p) f(x) \ dx}{1-p} \\&=\pi_p+e(\pi_p) \\&=\text{VaR}_p(X)+e(\text{VaR}_p(X))\end{aligned}

In (3), the function $e(x)$ is the mean excess loss function evaluated at $x$. Thus TVaR is VaR plus the mean excess loss evaluated at VaR, i.e. VaR plus the average excess of all losses exceeding the VaR threshold. As a result, TVaR gives information on the tail to the right of VaR. For parametric loss distributions that have accessible formulations for the limited expectation $E(X \wedge x)$, the following is a useful formulation for TVaR.

\displaystyle \begin{aligned} (4) \ \ \ \ \ \text{TVaR}_p(X)&=\text{VaR}_p(X)+e(\text{VaR}_p(X)) \\&=\text{VaR}_p(X)+\frac{E[X]-E[X \wedge \text{VaR}_p(X)]}{1-p} \end{aligned}
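One way to see why (4) follows from (3) is to note that $E[X \wedge d]=\int_0^{d} x f(x) \ dx + d \ S(d)$, so that

$\displaystyle E[X]-E[X \wedge d]=\int_d^\infty x f(x) \ dx - d \ S(d)=\int_d^\infty (x-d) \ f(x) \ dx$

Dividing by $S(d)$ gives the mean excess loss $e(d)$. Setting $d=\text{VaR}_p(X)$, so that $S(d)=1-p$, produces (4).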

See this blog post in a companion blog for more information on mean excess loss function and limited expectation $E(X \wedge x)$.

Tail-value-at-risk is also known as conditional tail expectation (CTE) as well as tail conditional expectation (TCE). CTE and TCE are widely used in North America. In Europe, TVaR is also known as expected shortfall (ES).

Formulas for VaR and TVaR

Many distributions have CDFs that allow relatively easy computation of percentiles. Thus VaR is quite accessible for these parametric distributions. This link has a table of distributional information for parametric distributions that includes value-at-risk. Distributions in this table that have formulas for VaR include Burr distribution, inverse Burr distribution, Pareto distribution, inverse Pareto distribution, loglogistic distribution, paralogistic distribution, inverse paralogistic distribution, Weibull distribution, inverse Weibull distribution, exponential distribution, inverse exponential distribution.

The calculation for TVaR is not so accessible in the table in the given link. In fact, for some distributions that have heavy tails, TVaR is not even defined since TVaR involves an average value of the excess of losses above a threshold. In the remainder of the post, we present formulas for TVaR for four distributions – exponential, Pareto, normal and lognormal.

| Distribution | VaR | TVaR |
|---|---|---|
| Exponential | $-\theta \ \text{ln}(1-p)$ | $\theta (1- \text{ln}(1-p))$ |
| Pareto ($\alpha>1$) | $\displaystyle \frac{\theta}{(1-p)^{1/\alpha}}-\theta$ | $\displaystyle \text{VaR}_p(X)+\frac{\theta+\text{VaR}_p(X)}{\alpha-1}$ |
| Pareto ($\alpha>1$), alternative | | $\displaystyle \text{VaR}_p(X) \ \frac{\alpha}{\alpha-1}+\frac{\theta}{\alpha-1}$ |
| Pareto ($\alpha>1$), alternative | | $\displaystyle \frac{\theta}{\alpha-1} \biggl[1+ \frac{\alpha}{\theta} \ \text{VaR}_p(X) \biggr]$ |
| Normal | $\mu+z_p \ \sigma$ | $\displaystyle \mu+ \sigma \ \frac{\phi(z_p)}{1-p}$ |
| Lognormal | $\displaystyle e^{\mu+z_p \ \sigma}$ | $\displaystyle e^{\mu+0.5 \ \sigma^2} \ \biggl[\frac{\Phi(\sigma-z_p)}{1-p} \biggr]$ |

The exponential distribution in the table is parametrized by a scale parameter $\theta$, which happens to be the mean of the distribution (see here for more information).

The Pareto distribution in the table is the Pareto Type II Lomax distribution discussed here. It is parametrized by a shape parameter $\alpha$ and a scale parameter $\theta$. The table gives three equivalent formulations of TVaR. Note that in order for the mean to exist, the shape parameter must be greater than 1. Thus TVaR is not defined when $\alpha \le 1$.

For the normal distribution, the parameters are $\mu$ (mean) and $\sigma$ (standard deviation). The lognormal distribution is parametrized by $\mu$ and $\sigma$, meaning that the logarithm of the lognormal distribution is a normal distribution with mean $\mu$ and standard deviation $\sigma$. To use the formulas for TVaR, note that $\phi$ and $\Phi$ are the probability density function and the cumulative distribution function of the standard normal distribution, respectively. Thus $\phi$ and $\Phi$ are:

$\displaystyle \phi(x)=\frac{1}{\sqrt{2 \pi}} \ e^{-0.5 x^2}$

$\displaystyle \Phi(x)=\int_{-\infty}^x \frac{1}{\sqrt{2 \pi}} \ e^{-0.5 t^2} \ dt$

Of course $\Phi(x)$ is evaluated by using a table or software and not by evaluating the integral.
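As a sanity check on the table (a sketch assuming scipy is available; the parameter values are hypothetical), the following compares each closed-form TVaR with a direct numerical integration of $x f(x)$ beyond the VaR threshold.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

p = 0.99

def tvar_numeric(dist, p):
    """TVaR by integrating x*f(x) over (VaR_p, infinity) and dividing by 1-p."""
    var_p = dist.ppf(p)
    integral, _ = quad(lambda x: x * dist.pdf(x), var_p, np.inf)
    return integral / (1 - p)

# Exponential with mean theta
theta = 10.0
print(tvar_numeric(stats.expon(scale=theta), p), theta * (1 - np.log(1 - p)))

# Pareto Type II (Lomax) with shape alpha > 1 and scale theta
alpha, theta = 3.0, 20.0
var_p = theta / (1 - p) ** (1 / alpha) - theta
print(tvar_numeric(stats.lomax(alpha, scale=theta), p),
      var_p + (theta + var_p) / (alpha - 1))

# Lognormal with parameters mu and sigma
mu, sigma = 1.0, 0.5
z_p = stats.norm.ppf(p)
print(tvar_numeric(stats.lognorm(sigma, scale=np.exp(mu)), p),
      np.exp(mu + 0.5 * sigma ** 2) * stats.norm.cdf(sigma - z_p) / (1 - p))
```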

Comment

The risk measure of value-at-risk, though simple in concept and calculation, has shortcomings. One undesirable aspect is that VaR does not possess certain desirable properties of risk measures; it is not a coherent risk measure. To be a coherent risk measure, it must satisfy four properties, one of which is subadditivity. When a risk measure is subadditive, the risk measure of two risks combined into one will not be greater than the sum of the risk measures of the two risks treated separately. It is desirable to have the benefits of diversification by combining several risks into one. When the risk measure is not subadditive, it cannot model the diversification of risks. It has been shown that VaR is not a subadditive risk measure. On the other hand, TVaR has been shown to be a coherent measure; it satisfies subadditivity and three other desirable properties for risk measures. The topic of risk measures is discussed in a subsequent post.

A Formula for a Mixture

A relatively easy to use formula for the tail-value-at-risk of a mixture distribution is shown in the next post.

Practice Problems

Practice problems are available in the companion blog to reinforce the concepts of value-at-risk and tail-value-at-risk.


$\copyright$ 2017 – Dan Ma

# A catalog of parametric severity models

Various parametric continuous probability models have been presented and discussed in this blog. The number of parameters in these models ranges from one to two, and in a small number of cases three. They are all potential candidates for models of severity in insurance applications and in other actuarial applications. This post highlights these models. The list presented here is not exhaustive; it is only a brief catalog. There are other models that are also suitable for actuarial applications but are not accounted for here. However, the list is a good place to begin. This post also serves as a navigation device (the listing shown below contains links to the blog posts).

A Catalog

Many of the models highlighted here are related to the gamma distribution either directly or indirectly. So the catalog starts with the gamma distribution at the top and then branches out to the other related models. Mathematically, the gamma distribution is a two-parameter continuous distribution defined using the gamma function. The gamma sub family includes the exponential distribution, Erlang distribution and chi-squared distribution. These are gamma distributions with certain restrictions on one or both of the gamma parameters. Other distributions are obtained by raising a distribution to a power. Others are obtained by mixing distributions.

Here’s a listing of the models. Click on the links to find out more about the distributions.

The models are organized by how they are derived:

- Gamma function
- Gamma sub families
- Independent sum of gamma
- Exponentiation
- Raising to a power: raising exponential to a positive power, raising exponential to a power, raising gamma to a power, raising Pareto to a power
- Burr sub families
- Mixture
- Others

The above listing categorizes the distributions according to how they are mathematically derived. For example, the gamma distribution is derived from the gamma function. The Pareto distribution is mathematically an exponential-gamma mixture. The Burr distribution is a transformed Pareto distribution, i.e. obtained by raising a Pareto distribution to a positive power. Even though these distributions can be defined simply by giving the PDF and CDF, knowing their mathematical origins informs us of the specific mathematical properties of the distributions. Organizing according to the mathematical origin gives us a concise summary of the models.


From a mathematical standpoint, the gamma distribution is defined using the gamma function.

$\displaystyle \Gamma(\alpha)=\int_0^\infty t^{\alpha-1} \ e^{-t} \ dt$

In the above integral, the argument $\alpha$ is a positive number. The expression $t^{\alpha-1} \ e^{-t}$ in the integrand is always positive. The area between the curve $t^{\alpha-1} \ e^{-t}$ and the x-axis is $\Gamma(\alpha)$. When this expression is normalized, i.e. divided by $\Gamma(\alpha)$, it becomes a density function.

$\displaystyle f(t)=\frac{1}{\Gamma(\alpha)} \ t^{\alpha-1} \ e^{-t}$

The above function $f(t)$ is defined over all positive $t$. The integral of $f(t)$ over all positive $t$ is 1. Thus $f(t)$ is a density function. It has only one parameter, $\alpha$, which is the shape parameter. Adding the scale parameter $\theta$ makes it a two-parameter distribution. The result is called the gamma distribution. The following is the density function.

$\displaystyle f(x)=\frac{1}{\Gamma(\alpha)} \ \biggl(\frac{1}{\theta}\biggr)^\alpha \ x^{\alpha-1} \ e^{-\frac{x}{\theta}} \ \ \ \ \ \ \ x>0$

Both parameters $\alpha$ and $\theta$ are positive real numbers. The first parameter $\alpha$ is the shape parameter and $\theta$ is the scale parameter.

As mentioned above, many of the distributions listed in the above catalog are related to the gamma distribution. Some of the distributions are sub families of the gamma family. For example, when $\alpha$ is a positive integer, the resulting distributions are called Erlang distributions (important in queuing theory). When $\alpha=1$, the results are the exponential distributions. When $\alpha=\frac{k}{2}$ and $\theta=2$ where $k$ is a positive integer, the results are the chi-squared distributions (the parameter $k$ is referred to as the degrees of freedom). The chi-squared distribution plays an important role in statistics.

Taking the independent sum of $n$ independent and identically distributed exponential random variables produces the Erlang distribution, a sub family of the gamma family. Taking the independent sum of $n$ exponential random variables with pairwise distinct means produces the hypoexponential distributions. On the other hand, the mixture of $n$ exponential distributions produces the hyperexponential distribution.

The Pareto distribution (Pareto Type II Lomax) is the mixture of exponential distributions with gamma mixing weights. Despite the connection with the gamma distribution, the Pareto distribution is a heavy tailed distribution. Thus the Pareto distribution is suitable for modeling extreme losses, e.g. in modeling rare but potentially catastrophic losses.

As mentioned earlier, raising a Pareto distribution to a positive power generates the Burr distribution. Restricting the parameters of a Burr distribution in a certain way produces the paralogistic distribution. The catalog indicates the relationships in a concise way. For details, go into the blog posts to get more information.

Tail Weight

Another informative way to categorize the distributions listed in the catalog is by looking at the tail weight. At first glance, all the distributions may look similar; they are all right skewed distributions. Upon closer look, some of the distributions put more weight (probability) on the larger values. Hence some of the models are more suitable for modeling phenomena with significantly higher probabilities of large or extreme values.

When a distribution puts significantly more probability on larger values, the distribution is said to be a heavy tailed distribution (or said to have a larger tail weight). In general, tail weight is a relative concept. For example, we say model A has a larger tail weight than model B (or model A has a heavier tail than model B). There are several ways to check the tail weight of a given distribution. Here are four criteria.

| | Tail Weight Measure | What to Look for |
|---|---|---|
| 1 | Existence of moments | The existence of more positive moments indicates a lighter tailed distribution. |
| 2 | Hazard rate function | An increasing hazard rate function indicates a lighter tailed distribution. |
| 3 | Mean excess loss function | An increasing mean excess loss function indicates a heavier tailed distribution. |
| 4 | Speed of decay of survival function | A survival function that decays rapidly to zero (as compared to another distribution) indicates a lighter tailed distribution. |

Existence of moments
For a positive real number $k$, the moment $E(X^k)$ is defined by the integral $\int_0^\infty x^k \ f(x) \ dx$ where $f(x)$ is the density function of the distribution in question. If the distribution puts significantly more probabilities in the larger values in the right tail, this integral may not exist (may not converge) for some $k$. Thus the existence of moments $E(X^k)$ for all positive $k$ is an indication that the distribution is a light tailed distribution.

In the above table, the only distributions for which all positive moments exist are gamma (including all gamma sub families such as exponential), Weibull, lognormal, hyperexponential, hypoexponential and beta. Such distributions are considered light tailed distributions.

If positive moments exist only up to a certain value of $k$, it is an indication that the distribution has a heavy right tail. All the other distributions in the catalog are considered heavy tailed distributions as compared to gamma, Weibull and lognormal. Consider a Pareto distribution with shape parameter $\alpha$ and scale parameter $\theta$. Note that the existence of the Pareto higher moments $E(X^k)$ is capped by the shape parameter $\alpha$. If the Pareto distribution is to model a random loss, and if the mean is infinite (when $\alpha \le 1$), the risk is uninsurable! On the other hand, when $\alpha \le 2$, the Pareto variance does not exist. This shows that for a heavy tailed distribution, the variance may not be a good measure of risk.
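To make the point about capped moments concrete for the Pareto Type II (Lomax) distribution, the $k$th moment has the closed form

$\displaystyle E(X^k)=\frac{\theta^k \ \Gamma(k+1) \ \Gamma(\alpha-k)}{\Gamma(\alpha)} \ \ \ \ \ \ 0<k<\alpha$

and the defining integral diverges for $k \ge \alpha$. Setting $k=1$ recovers the mean $\theta/(\alpha-1)$, which exists only when $\alpha>1$.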

Hazard rate function
The hazard rate function $h(x)$ of a random variable $X$ is defined as the ratio of the density function and the survival function.

$\displaystyle h(x)=\frac{f(x)}{S(x)}$

The hazard rate is called the force of mortality in a life contingency context and can be interpreted as the rate that a person aged $x$ will die in the next instant. The hazard rate is called the failure rate in reliability theory and can be interpreted as the rate that a machine will fail at the next instant given that it has been functioning for $x$ units of time.

Another indication of heavy tail weight is that the distribution has a decreasing hazard rate function. On the other hand, a distribution with an increasing hazard rate function is a light tailed distribution. If the hazard rate function is decreasing (over time if the random variable is a time variable), then the population dies off at a decreasing rate, hence a heavier tail for the distribution in question.

The Pareto distribution is a heavy tailed distribution since the hazard rate is $h(x)=\alpha/x$ (Pareto Type I) and $h(x)=\alpha/(x+\theta)$ (Pareto Type II Lomax). Both hazard rates are decreasing functions.

The Weibull distribution is a flexible model in that when its shape parameter is $0<\tau<1$, the Weibull hazard rate is decreasing and when $\tau>1$, the hazard rate is increasing. When $\tau=1$, Weibull is the exponential distribution, which has a constant hazard rate.

The point about decreasing hazard rate as an indication of a heavy tailed distribution has a connection with the fourth criterion. The idea is that a decreasing hazard rate means that the survival function decays to zero slowly. This point is due to the fact that the hazard rate function generates the survival function through the following.

$\displaystyle S(x)=e^{\displaystyle -\int_0^x h(t) \ dt}$

Thus if the hazard rate function is decreasing in $x$, then the survival function will decay more slowly to zero. To see this, let $H(x)=\int_0^x h(t) \ dt$, which is called the cumulative hazard rate function. As indicated above, $S(x)=e^{-H(x)}$. If $h(x)$ is decreasing in $x$, $H(x)$ has a lower rate of increase and consequently $S(x)=e^{-H(x)}$ has a slower rate of decrease to zero.
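As an illustration, integrating the Pareto Type II hazard rate $h(t)=\alpha/(t+\theta)$ recovers the Pareto survival function:

$\displaystyle S(x)=e^{-\int_0^x \frac{\alpha}{t+\theta} \ dt}=e^{-\alpha \ \text{ln} \frac{x+\theta}{\theta}}=\biggl(\frac{\theta}{x+\theta}\biggr)^\alpha$

The cumulative hazard grows only logarithmically, which is why this survival function decays slowly to zero.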

In contrast, the exponential distribution has a constant hazard rate function, making it a medium tailed distribution. As explained above, any distribution having an increasing hazard rate function is a light tailed distribution.

The mean excess loss function
The mean excess loss is the conditional expectation $e_X(d)=E(X-d \lvert X>d)$. If the random variable $X$ represents insurance losses, mean excess loss is the expected loss in excess of a threshold conditional on the event that the threshold has been exceeded. Suppose that the threshold $d$ is an ordinary deductible that is part of an insurance coverage. Then $e_X(d)$ is the expected payment made by the insurer in the event that the loss exceeds the deductible.

Whenever $e_X(d)$ is an increasing function of the deductible $d$, the loss $X$ is a heavy tailed distribution. If the mean excess loss function is a decreasing function of $d$, then the loss $X$ is a lighter tailed distribution.

The Pareto distribution can also be classified as a heavy tailed distribution based on an increasing mean excess loss function. For a Pareto distribution (Type I) with shape parameter $\alpha$ and scale parameter $\theta$, the mean excess loss is $e_X(d)=d/(\alpha-1)$. The mean excess loss for Pareto Type II Lomax is $e_X(d)=(d+\theta)/(\alpha-1)$. They are both increasing functions of the deductible $d$! This means that the larger the deductible, the larger the expected claim if such a large loss occurs! If the underlying distribution for a random loss is Pareto, it is a catastrophic risk situation.

In general, an increasing mean excess loss function is an indication of a heavy tailed distribution. On the other hand, a decreasing mean excess loss function indicates a light tailed distribution. The exponential distribution has a constant mean excess loss function and is considered a medium tailed distribution.
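As a rough numerical illustration (my own sketch, assuming SciPy and using the standard identity $e_X(d)=\int_d^\infty S(x) \, dx / S(d)$), the following compares the mean excess loss of a Pareto Type II (Lomax) distribution with that of an exponential distribution. The Pareto values grow with the deductible while the exponential values stay constant.

```python
# Numerical mean excess loss e(d) for Pareto (Lomax) versus exponential.
import numpy as np
from scipy.integrate import quad

alpha, theta = 2.0, 2.0          # Lomax parameters (illustrative)
lam = np.log(4) / 2              # exponential rate matching the same 75th percentile

def S_pareto(x):
    return (theta / (x + theta)) ** alpha

def S_expo(x):
    return np.exp(-lam * x)

def mean_excess(S, d):
    integral, _ = quad(S, d, np.inf)
    return integral / S(d)

for d in [0, 2, 10, 50]:
    print(d, mean_excess(S_pareto, d), mean_excess(S_expo, d))
# Pareto: e(d) = (d + theta)/(alpha - 1) grows with d; exponential stays at 1/lam.
```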

Speed of decay of the survival function to zero
The survival function $S(x)=P(X>x)$ captures the probability of the tail of a distribution. If the survival function of a distribution decays slowly to zero (equivalently the cdf goes slowly to one), it is another indication that the distribution is heavy tailed. This point is touched on in the discussion of the hazard rate function above.

The following is a comparison of a Pareto Type II survival function and an exponential survival function. The Pareto survival function has parameters ($\alpha=2$ and $\theta=2$). The two survival functions are set to have the same 75th percentile, which is $x=2$. The following table is a comparison of the two survival functions.

$\displaystyle \begin{array}{llllllll} \text{ } &x &\text{ } & \text{Pareto } S_X(x) & \text{ } & \text{Exponential } S_Y(x) & \text{ } & \displaystyle \frac{S_X(x)}{S_Y(x)} \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ } \\ \text{ } &2 &\text{ } & 0.25 & \text{ } & 0.25 & \text{ } & 1 \\ \text{ } &10 &\text{ } & 0.027777778 & \text{ } & 0.000976563 & \text{ } & 28 \\ \text{ } &20 &\text{ } & 0.008264463 & \text{ } & 9.54 \times 10^{-7} & \text{ } & 8666 \\ \text{ } &30 &\text{ } & 0.00390625 & \text{ } & 9.31 \times 10^{-10} & \text{ } & 4194304 \\ \text{ } &40 &\text{ } & 0.002267574 & \text{ } & 9.09 \times 10^{-13} & \text{ } & 2.49 \times 10^{9} \\ \text{ } &60 &\text{ } & 0.001040583 & \text{ } & 8.67 \times 10^{-19} & \text{ } & 1.20 \times 10^{15} \\ \text{ } &80 &\text{ } & 0.000594884 & \text{ } & 8.27 \times 10^{-25} & \text{ } & 7.19 \times 10^{20} \\ \text{ } &100 &\text{ } & 0.000384468 & \text{ } & 7.89 \times 10^{-31} & \text{ } & 4.87 \times 10^{26} \\ \text{ } &120 &\text{ } & 0.000268745 & \text{ } & 7.52 \times 10^{-37} & \text{ } & 3.57 \times 10^{32} \\ \text{ } &140 &\text{ } & 0.000198373 & \text{ } & 7.17 \times 10^{-43} & \text{ } & 2.76 \times 10^{38} \\ \text{ } &160 &\text{ } & 0.000152416 & \text{ } & 6.84 \times 10^{-49} & \text{ } & 2.23 \times 10^{44} \\ \text{ } &180 &\text{ } & 0.000120758 & \text{ } & 6.53 \times 10^{-55} & \text{ } & 1.85 \times 10^{50} \\ \text{ } & \text{ } \\ \end{array}$

Note that at the large values, the Pareto right tail retains much more probability. This is also confirmed by the ratio of the two survival functions, with the ratio approaching infinity. Using an exponential distribution to model a Pareto random phenomenon would be a severe modeling error even though the exponential distribution may be a good model for describing the loss up to the 75th percentile (in the above comparison). It is the large right tail that is problematic (and catastrophic)!
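The table above can be reproduced with a few lines of code. The following sketch (Python with NumPy assumed) calibrates the exponential rate to the common 75th percentile and prints both survival functions and their ratio at a few of the points shown.

```python
# Reproduce the survival comparison: Pareto Type II (alpha=2, theta=2) versus
# an exponential distribution calibrated to the same 75th percentile (x = 2).
import numpy as np

alpha, theta = 2.0, 2.0
lam = np.log(4) / 2        # solves exp(-2 * lam) = 0.25

def S_pareto(x):
    return (theta / (x + theta)) ** alpha

def S_expo(x):
    return np.exp(-lam * x)

for x in [2, 10, 20, 40, 100, 180]:
    sp, se = S_pareto(x), S_expo(x)
    print(f"{x:4d}  {sp:.9f}  {se:.3e}  ratio={sp/se:.3e}")
```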

Since the Pareto survival function and the exponential survival function have closed forms, we can also look at their ratio.

$\displaystyle \frac{\text{Pareto survival}}{\text{exponential survival}}=\frac{\displaystyle \frac{\theta^\alpha}{(x+\theta)^\alpha}}{e^{-\lambda x}}=\frac{\theta^\alpha e^{\lambda x}}{(x+\theta)^\alpha} \longrightarrow \infty \ \text{ as } x \longrightarrow \infty$

In the above ratio, the numerator has an exponential function with a positive quantity in the exponent, while the denominator has a polynomial in $x$. This ratio goes to infinity as $x \rightarrow \infty$.

In general, whenever the ratio of two survival functions diverges to infinity, it is an indication that the distribution in the numerator of the ratio has a heavier tail. When the ratio goes to infinity, the survival function in the numerator is said to decay slowly to zero as compared to the denominator.

It is important to examine the tail behavior of a distribution when considering it as a candidate for a model. The four criteria discussed here provide a crucial way to classify parametric models according to the tail weight.


$\copyright$ 2017 – Dan Ma

# Examples of mixtures

The notion of mixtures is discussed in this previous post. Many probability distributions useful for actuarial modeling are mixture distributions. The previous post touches on some examples – negative binomial distribution (a Poisson-Gamma mixture), Pareto distribution (an exponential-gamma mixture) and the normal-normal mixture. In this post we present additional examples. We discuss the following examples.

1. Poisson-Gamma mixture = Negative Binomial.
2. Normal-Normal mixture = Normal.
3. Exponential-Gamma mixture = Pareto.
4. Exponential-Inverse Gamma mixture = Pareto.
5. Gamma-Gamma mixture = Generalized Pareto.
6. Weibull-Exponential mixture = Loglogistic.
7. Gamma-Geometric mixture = Exponential.
8. Normal-Gamma mixture = Student t.

The first three examples are discussed in the previous post. We discuss the remaining examples in this post.

The Pareto Family

Examples 3 and 4 show that Pareto distributions are mixtures of exponential distributions with either gamma or inverse gamma mixing weights. In Example 3, $X \lvert \Theta$ is an exponential distribution with $\Theta$ being a rate parameter. When $\Theta$ follows a gamma distribution, the resulting mixture is a (Type II Lomax) Pareto distribution. In Example 4, $X \lvert \Theta$ is an exponential distribution with $\Theta$ being a scale parameter. When $\Theta$ follows an inverse gamma distribution, the resulting mixture is also a (Type II Lomax) Pareto distribution.

As a mixture, Example 5 is like Example 3, except that it is a gamma-gamma mixture resulting in a generalized Pareto distribution. Example 3 has been discussed in the previous post. We now discuss Example 4 and Example 5.

Example 4. Suppose that $X \lvert \Theta$ has an exponential distribution where $\Theta$ is a scale parameter.
Further suppose that the random parameter $\Theta$ follows an inverse gamma distribution with parameters $\alpha$ and $\beta$. Then the unconditional distribution for $X$ is a (Type II Lomax) Pareto distribution with shape parameter $\alpha$ and scale parameter $\beta$.

The following gives the cumulative distribution function (CDF) and survival function of the conditional random variable $X \lvert \Theta$.

$F(x \lvert \Theta)=1-e^{- x/\Theta}$

$S(x \lvert \Theta)=e^{- x/\Theta}$

The random parameter $\Theta$ follows an inverse gamma distribution with parameters $\alpha$ and $\beta$. The following is the pdf of $\Theta$:

$\displaystyle g(\theta)=\frac{1}{\Gamma(\alpha)} \ \biggl[\frac{\beta}{\theta}\biggr]^\alpha \ \frac{1}{\theta} \ e^{-\frac{\beta}{ \theta}} \ \ \ \ \ \theta>0$

We show that the unconditional survival function for $X$ is the survival function for the Pareto distribution with parameters $\alpha$ (shape parameter) and $\beta$ (scale parameter).

\displaystyle \begin{aligned} S(x)&=\int_0^\infty S(x \lvert \theta) \ g(\theta) \ d \theta \\&=\int_0^\infty e^{- x/\theta} \ \frac{1}{\Gamma(\alpha)} \ \biggl[\frac{\beta}{\theta}\biggr]^\alpha \ \frac{1}{\theta} \ e^{-\beta / \theta} \ d \theta \\&=\int_0^\infty \frac{1}{\Gamma(\alpha)} \ \biggl[\frac{\beta}{\theta}\biggr]^\alpha \ \frac{1}{\theta} \ e^{-(x+\beta) / \theta} \ d \theta \\&=\frac{\beta^\alpha}{(x+\beta)^\alpha} \ \int_0^\infty \frac{1}{\Gamma(\alpha)} \ \biggl[\frac{x+\beta}{\theta}\biggr]^\alpha \ \frac{1}{\theta} \ e^{-(x+\beta) / \theta} \ d \theta \\&=\biggl(\frac{\beta}{x+\beta} \biggr)^\alpha \end{aligned}

Note that the integrand in the last integral is a density function for an inverse gamma distribution (with parameters $\alpha$ and $x+\beta$). Thus the integral is 1 and can be eliminated. What remains is the survival function for a Pareto distribution with parameters $\alpha$ and $\beta$. The following gives the CDF and density function of this Pareto distribution.

$\displaystyle F(x)=1-\biggl(\frac{\beta}{x+\beta} \biggr)^\alpha$

$\displaystyle f(x)=\frac{\alpha \ \beta^{\alpha}}{(x+\beta)^{\alpha+1}}$

See here for further information on the Pareto Type II (Lomax) distribution.
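As a sanity check of Example 4, the following sketch (assuming SciPy, with illustrative parameter values of my choosing) integrates the conditional exponential survival function against the inverse gamma mixing density and compares the result to the Pareto survival function $(\beta/(x+\beta))^\alpha$.

```python
# Exponential-inverse gamma mixture should reproduce the Pareto (Lomax) survival function.
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, beta = 3.0, 5.0   # illustrative shape and scale values

def S_mixture(x):
    integrand = lambda t: np.exp(-x / t) * stats.invgamma.pdf(t, alpha, scale=beta)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

def S_pareto(x):
    return (beta / (x + beta)) ** alpha

for x in [0.5, 2.0, 10.0]:
    print(x, S_mixture(x), S_pareto(x))   # the two columns should agree
```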

Example 5. Suppose that $X \lvert \Theta$ has a gamma distribution with shape parameter $k$ (a known constant) and rate parameter $\Theta$. Further suppose that the random parameter $\Theta$ follows a gamma distribution with shape parameter $\alpha$ and rate parameter $\beta$. Then the unconditional distribution for $X$ is a generalized Pareto distribution with parameters $\alpha$, $\beta$ and $k$.

Conditional on $\Theta=\theta$, the following is the density function of $X$.

$\displaystyle f(x \lvert \theta)=\frac{1}{\Gamma(k)} \ \theta^k \ x^{k-1} \ e^{-\theta x} \ \ \ \ \ x>0$

The following is the density function of the random parameter $\Theta$.

$\displaystyle g(\theta)=\frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ \theta^{\alpha-1} \ e^{-\beta \theta} \ \ \ \ \ \ \theta>0$

The following gives the unconditional density function for $X$.

\displaystyle \begin{aligned} f(x)&=\int_0^\infty f(x \lvert \theta) \ g(\theta) \ d \theta \\&=\int_0^\infty \frac{1}{\Gamma(k)} \ \theta^k \ x^{k-1} \ e^{-\theta x} \ \frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ \theta^{\alpha-1} \ e^{-\beta \theta} \ d \theta \\&=\int_0^\infty \frac{1}{\Gamma(k)} \ \frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ x^{k-1} \ \theta^{\alpha+k-1} \ e^{-(x+\beta) \theta} \ d \theta \\&= \frac{1}{\Gamma(k)} \ \frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ x^{k-1} \frac{\Gamma(\alpha+k)}{(x+\beta)^{\alpha+k}} \int_0^\infty \frac{1}{\Gamma(\alpha+k)} \ (x+\beta)^{\alpha+k} \ \theta^{\alpha+k-1} \ e^{-(x+\beta) \theta} \ d \theta \\&=\frac{\Gamma(\alpha+k)}{\Gamma(\alpha) \ \Gamma(k)} \ \frac{\beta^\alpha \ x^{k-1}}{(x+\beta)^{\alpha+k}} \end{aligned}

Any distribution that has a density function described above is said to be a generalized Pareto distribution with the parameters $\alpha$, $\beta$ and $k$. Its CDF cannot be written in closed form but can be expressed using the incomplete beta function.

\displaystyle \begin{aligned} F(x)&=\int_0^x \frac{\Gamma(\alpha+k)}{\Gamma(\alpha) \ \Gamma(k)} \ \frac{\beta^\alpha \ t^{k-1}}{(t+\beta)^{\alpha+k}} \ dt \\&=\int_0^x \frac{\Gamma(\alpha+k)}{\Gamma(\alpha) \ \Gamma(k)} \ \biggl(\frac{t}{t+\beta} \biggr)^{k-1} \ \biggl(\frac{\beta}{t+\beta} \biggr)^{\alpha-1} \ \frac{\beta}{(t+\beta)^2} \ dt \\&=\frac{\Gamma(\alpha+k)}{\Gamma(\alpha) \ \Gamma(k)} \ \int_0^{\frac{x}{x+\beta}} u^{k-1} \ (1-u)^{\alpha-1} \ du, \ \ \ u=\frac{t}{t+\beta} \\&=\frac{\Gamma(\alpha+k)}{\Gamma(\alpha) \ \Gamma(k)} \ \int_0^{w} t^{k-1} \ (1-t)^{\alpha-1} \ dt, \ \ \ w=\frac{x}{x+\beta} \end{aligned}
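The incomplete beta representation lends itself to a quick numerical check. The sketch below (assuming SciPy; the parameter values are illustrative) evaluates $F(x)$ as the regularized incomplete beta function $I_w(k,\alpha)$ with $w=x/(x+\beta)$ and compares it with direct numerical integration of the density derived above.

```python
# Generalized Pareto CDF via the regularized incomplete beta function.
import numpy as np
from scipy.special import betainc, gammaln
from scipy.integrate import quad

alpha, beta, k = 2.5, 4.0, 1.5   # illustrative parameters

def pdf(x):
    logc = gammaln(alpha + k) - gammaln(alpha) - gammaln(k)
    return np.exp(logc) * beta**alpha * x**(k - 1) / (x + beta)**(alpha + k)

def cdf_beta(x):
    return betainc(k, alpha, x / (x + beta))   # regularized incomplete beta I_w(k, alpha)

for x in [1.0, 3.0, 10.0]:
    num, _ = quad(pdf, 0.0, x)
    print(x, cdf_beta(x), num)   # should match
```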

The moments of the generalized Pareto distribution can be derived easily, but only up to a limited order. Since it is a mixture distribution, the unconditional moments are the weighted averages of the conditional gamma moments.

\displaystyle \begin{aligned} E(X^w)&=\int_0^\infty E(X^w \lvert \theta) \ g(\theta) \ d \theta \\&=\int_0^\infty \frac{\Gamma(k+w)}{\theta^w \Gamma(k)} \ \frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ \theta^{\alpha-1} \ e^{-\beta \theta} \ d \theta \\&=\frac{\beta^w \ \Gamma(k+w) \ \Gamma(\alpha-w)}{\Gamma(k) \ \Gamma(\alpha)} \int_0^\infty \frac{1}{\Gamma(\alpha-w)} \ \beta^{\alpha-w} \ \theta^{\alpha-w-1} \ e^{-\beta \theta} \ d \theta \\&=\frac{\beta^w \ \Gamma(k+w) \ \Gamma(\alpha-w)}{\Gamma(k) \ \Gamma(\alpha)} \ \ \ \ \ -k<w<\alpha \end{aligned}

Note that $E(X)$ has a simple expression $E(X)=\frac{k \beta}{\alpha-1}$ when $1<\alpha$.

When the parameter $k=1$, the conditional distribution for $X \lvert \Theta$ is an exponential distribution. Then the situation reverts back to Example 3, leading to a Pareto distribution. Thus the Pareto distribution is a special case of the generalized Pareto distribution. Both the Pareto distribution and the generalized Pareto distribution have thicker and longer tails than the original conditional gamma distribution.

It turns out that the F distribution is also a special case of the generalized Pareto distribution. The F distribution with $r_1$ and $r_2$ degrees of freedom is the generalized Pareto distribution with parameters $k=r_1/2$, $\alpha=r_2/2$ and $\beta=r_2/r_1$. As a result, the following is the density function.

\displaystyle \begin{aligned} h(x)&=\frac{\Gamma(r_1/2 + r_2/2)}{\Gamma(r_1/2) \ \Gamma(r_2/2)} \ \frac{(r_2/r_1)^{r_2/2} \ x^{r_1/2-1}}{(x+r_2/r_1)^{r_1/2+r_2/2}} \\&=\frac{\Gamma(r_1/2 + r_2/2)}{\Gamma(r_1/2) \ \Gamma(r_2/2)} \ \frac{(r_1/r_2)^{r_1/2} \ x^{r_1/2-1}}{(1+(r_1/r_2)x)^{r_1/2+r_2/2}} \ \ \ \ 0<x<\infty \end{aligned}

Another way to generate the F distribution is as the ratio of two independent chi-squared random variables, each divided by its degrees of freedom (see Theorem 9 in this previous post). Of course, there is no need to use the explicit form of the density function of the F distribution. In a statistical application, the F distribution is accessed using tables or software.
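The following sketch (assuming SciPy, with illustrative degrees of freedom) compares the generalized Pareto density under the stated reparametrization with the F density computed by scipy.stats.f; the two should agree.

```python
# Check that the generalized Pareto with k = r1/2, alpha = r2/2, beta = r2/r1
# has the same density as the F distribution with r1 and r2 degrees of freedom.
import numpy as np
from scipy import stats
from scipy.special import gammaln

r1, r2 = 4, 7                       # illustrative degrees of freedom
k, alpha, beta = r1 / 2, r2 / 2, r2 / r1

def gen_pareto_pdf(x):
    logc = gammaln(alpha + k) - gammaln(alpha) - gammaln(k)
    return np.exp(logc) * beta**alpha * x**(k - 1) / (x + beta)**(alpha + k)

x = np.array([0.5, 1.0, 2.5, 5.0])
print(gen_pareto_pdf(x))
print(stats.f.pdf(x, r1, r2))       # the two rows should agree
```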

The Loglogistic Distribution

The loglogistic distribution can be derived as a mixture of Weibull distributions with exponential mixing weights.

Example 6. Suppose that $X \lvert \Lambda$ has a Weibull distribution with shape parameter $\gamma$ (a known constant) and a parameter $\Lambda$ such that the CDF of $X \lvert \Lambda$ is $F(x \lvert \Lambda)=1-e^{-\Lambda \ x^\gamma}$. Further suppose that the random parameter $\Lambda$ follows an exponential distribution with rate parameter $\theta^{\gamma}$. Then the unconditional distribution for $X$ is a loglogistic distribution with shape parameter $\gamma$ and scale parameter $\theta$.

The following gives the conditional survival function for $X \lvert \Lambda$ and the exponential mixing weight.

$\displaystyle S(x \lvert \lambda)=e^{-\lambda \ x^\gamma}$

$\displaystyle g(\lambda)=\theta^\gamma \ e^{-\theta^\gamma \ \lambda}$

The following gives the unconditional survival function and CDF of $X$ as well as the PDF.

\displaystyle \begin{aligned} S(x)&=\int_0^\infty S(x \lvert \lambda) \ g(\lambda) \ d \lambda \\&=\int_0^\infty e^{-\lambda \ x^\gamma} \ \theta^\gamma \ e^{-\theta^\gamma \ \lambda} \ d \lambda \\&=\int_0^\infty \theta^\gamma \ e^{-(x^\gamma+\theta^\gamma) \ \lambda} \ d \lambda \\&=\frac{\theta^\gamma}{(x^\gamma+\theta^\gamma)} \int_0^\infty (x^\gamma+\theta^\gamma) \ e^{-(x^\gamma+\theta^\gamma) \ \lambda} \ d \lambda \\&=\frac{\theta^\gamma}{x^\gamma+\theta^\gamma} \end{aligned}

\displaystyle \begin{aligned} F(x)&=1-S(x)=1-\frac{\theta^\gamma}{x^\gamma+\theta^\gamma} =\frac{x^\gamma}{x^\gamma+\theta^\gamma} =\frac{(x/\theta)^\gamma}{1+(x/\theta)^\gamma} \end{aligned}

$\displaystyle f(x)=\frac{d}{dx} \biggl( \frac{x^\gamma}{x^\gamma+\theta^\gamma} \biggr)=\frac{\gamma \ (x/\theta)^\gamma}{x [1+(x/\theta)^\gamma]^2}$

Any distribution that has any one of the above three distributional quantities is said to be a loglogistic distribution with shape parameter $\gamma$ and scale parameter $\theta$.
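A Monte Carlo check of Example 6 is straightforward. The sketch below (my own illustration with arbitrary parameter values) draws $\Lambda$ from the exponential mixing distribution, draws $X$ from the conditional Weibull distribution by inverting its CDF, and compares the empirical CDF with the loglogistic CDF (available in SciPy as the fisk distribution).

```python
# Weibull-exponential mixture simulated and compared with the loglogistic CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gamma_, theta = 2.5, 3.0           # illustrative shape and scale
n = 200_000

lam = rng.exponential(scale=1 / theta**gamma_, size=n)   # rate theta^gamma
u = rng.uniform(size=n)
x = (-np.log(1 - u) / lam) ** (1 / gamma_)               # invert Weibull CDF given lambda

for q in [1.0, 3.0, 8.0]:
    empirical = (x <= q).mean()
    loglogistic = stats.fisk.cdf(q, gamma_, scale=theta)  # (q/theta)^g / (1 + (q/theta)^g)
    print(q, empirical, loglogistic)
```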

One interesting point about the loglogistic distribution is that the reciprocal of a loglogistic random variable is another loglogistic random variable. Suppose that $X$ has a loglogistic distribution with shape parameter $\gamma$ and scale parameter $\theta$. Let $Y=\frac{1}{X}$. Then $Y$ has a loglogistic distribution with shape parameter $\gamma$ and scale parameter $\theta^{-1}$.

\displaystyle \begin{aligned} P[Y \le y]&=P[\frac{1}{X} \le y] =P[X \ge y^{-1}] =\frac{\theta^\gamma}{y^{-\gamma}+\theta^\gamma} \\&=\frac{\theta^\gamma \ y^\gamma}{1+\theta^\gamma \ y^\gamma} \\&=\frac{y^\gamma}{(\theta^{-1})^\gamma+y^\gamma} \end{aligned}

The above is the CDF of a loglogistic distribution with the desired parameters. Thus there is no need to specially call out the inverse loglogistic distribution.

In order to find the mean and higher moments of the loglogistic distribution, we take the approach of identifying the conditional Weibull moments and then weighting these moments by the exponential mixing weights. Note that the parameter $\Lambda$ in the conditional CDF $F(x \lvert \Lambda)=1-e^{-\Lambda \ x^\gamma}$ is not a scale parameter. The Weibull distribution in this conditional CDF is equivalent to a Weibull distribution with shape parameter $\gamma$ and scale parameter $\Lambda^{-1/\gamma}$. According to formula (4) in this previous post, the $k$th moment of this Weibull distribution is

$\displaystyle E[ (X \lvert \Lambda)^k]=\Gamma \biggl(1+\frac{k}{\gamma} \biggr) \Lambda^{-k/\gamma}$

The following gives the unconditional $k$th moment of the Weibull-exponential mixture.

\displaystyle \begin{aligned} E[X^k]&=\int_0^\infty E[ (X \lvert \Lambda)^k] \ g(\lambda) \ d \lambda \\&=\int_0^\infty \Gamma \biggl(1+\frac{k}{\gamma} \biggr) \lambda^{-k/\gamma} \ \theta^\gamma \ e^{-\theta^\gamma \ \lambda} \ d \lambda\\&=\Gamma \biggl(1+\frac{k}{\gamma} \biggr) \ \theta^\gamma \int_0^\infty \lambda^{-k/\gamma} \ e^{-\theta^\gamma \ \lambda} \ d \lambda \\&=\theta^k \ \Gamma \biggl(1+\frac{k}{\gamma} \biggr) \int_0^\infty t^{-k/\gamma} \ e^{-t} \ dt \ \ \text{ where } t=\theta^\gamma \lambda \\&=\theta^k \ \Gamma \biggl(1+\frac{k}{\gamma} \biggr) \int_0^\infty t^{[(\gamma-k)/\gamma]-1} \ e^{-t} \ dt \\&=\theta^k \ \Gamma \biggl(1+\frac{k}{\gamma} \biggr) \ \Gamma \biggl(1-\frac{k}{\gamma} \biggr) \ \ \ \ -\gamma<k<\gamma \end{aligned}

The range $-\gamma<k<\gamma$ follows from the fact that the arguments of the gamma function must be positive. Thus the $k$th moments of the loglogistic distribution exist only up to the shape parameter $\gamma$. If $\gamma=1$, then $E(X)$ does not exist. For a larger $\gamma$, more moments exist, but always only a finite number of moments. This is an indication that the loglogistic distribution has a thick (right) tail. This is not surprising since mixture distributions (loglogistic in this case) tend to have thicker tails than the conditional distributions (Weibull in this case). The thicker tail is a result of the uncertainty in the random parameter in the conditional distribution (the Weibull $\Lambda$ in this case).
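The moment formula can be verified numerically. The sketch below (assuming SciPy; parameter values are illustrative) compares $\theta^k \, \Gamma(1+k/\gamma) \, \Gamma(1-k/\gamma)$ with the moments of the loglogistic (fisk) distribution for the orders $k$ that exist.

```python
# Loglogistic moments: closed-form formula versus scipy.stats.fisk moments.
import numpy as np
from scipy import stats
from scipy.special import gamma as Gamma

gamma_, theta = 3.5, 2.0   # illustrative; moments exist only for k < gamma_

def loglogistic_moment(k):
    return theta**k * Gamma(1 + k / gamma_) * Gamma(1 - k / gamma_)

for k in [1, 2, 3]:
    print(k, loglogistic_moment(k), stats.fisk.moment(k, gamma_, scale=theta))
```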

Another Way to Obtain Exponential Distribution

We now consider Example 7. The following is a precise statement of the gamma-geometric mixture.

Example 7. Suppose that $X \lvert \alpha$ has a gamma distribution with shape parameter $\alpha$ that is a positive integer and rate parameter $\beta$ (a known constant). Further suppose that the random parameter $\alpha$ follows a geometric distribution with probability function $P[Y=\alpha]=p (1-p)^{\alpha-1}$ where $\alpha=1,2,3,\cdots$. Then the unconditional distribution for $X$ is an exponential distribution with rate parameter $\beta p$.

The conditional gamma distribution has an uncertain shape parameter $\alpha$ that can take on positive integers. The parameter $\alpha$ follows a geometric distribution. Here are the ingredients that go into the mixture.

$\displaystyle f(x \lvert \alpha)=\frac{1}{(\alpha-1)!} \ \beta^\alpha \ x^{\alpha-1} \ e^{-\beta x}$

$P[Y=\alpha]=p (1-p)^{\alpha-1}$

The following is the unconditional probability density function of $X$.

\displaystyle \begin{aligned} f(x)&=\sum \limits_{\alpha=1}^\infty f(x \lvert \alpha) \ P[Y=\alpha] \\&=\sum \limits_{\alpha=1}^\infty \frac{1}{(\alpha-1)!} \ \beta^\alpha \ x^{\alpha-1} \ e^{-\beta x} \ p (1-p)^{\alpha-1} \\&=\beta p \ e^{-\beta x} \sum \limits_{\alpha=1}^\infty \frac{[\beta(1-p) x]^{\alpha-1}}{(\alpha-1)!} \\&=\beta p \ e^{-\beta x} \sum \limits_{\alpha=0}^\infty \frac{[\beta(1-p) x]^{\alpha}}{(\alpha)!} \\&=\beta p \ e^{-\beta x} \ e^{\beta(1-p) x} \\&=\beta p \ e^{-\beta p x} \end{aligned}

The above density function is that of an exponential distribution with rate parameter $\beta p$.
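A quick Monte Carlo sketch of Example 7 (illustrative parameter values, my own code) draws the shape parameter from a geometric distribution and then draws from the corresponding gamma distribution; the resulting sample behaves like an exponential distribution with rate $\beta p$.

```python
# Gamma-geometric mixture simulated; compare against exponential with rate beta*p.
import numpy as np

rng = np.random.default_rng(42)
p, beta_rate = 0.3, 2.0
n = 200_000

alphas = rng.geometric(p, size=n)                 # shape values 1, 2, 3, ...
x = rng.gamma(shape=alphas, scale=1 / beta_rate)  # rate beta -> scale 1/beta

print(x.mean(), 1 / (beta_rate * p))              # both close to 1/(beta*p)
print((x > 1.0).mean(), np.exp(-beta_rate * p))   # tail probabilities agree
```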

Student t Distribution

Example 3 (discussed in the previous post) involves a normal distribution with a random mean. Example 8 involves a normal distribution with mean 0 and variance $1/\Lambda$, where the uncertain parameter $\Lambda$ follows a gamma distribution whose two parameters are related to a common parameter $r$, which will be the degrees of freedom of the student t distribution. The following is a precise description of the normal-gamma mixture.

Example 8. Suppose that $X \lvert \Lambda$ has a normal distribution with mean 0 and variance $1/\Lambda$. Further suppose that the random parameter $\Lambda$ follows a gamma distribution with shape parameter $\alpha$ and scale parameter $\theta$ such that $2 \alpha=\frac{2}{\theta}=r$ is a positive integer. Then the unconditional distribution for $X$ is a student t distribution with $r$ degrees of freedom.

The following gives the ingredients of the normal-gamma mixture. The first item is the conditional density function of $X$ given $\Lambda$. The second is the density function of the mixing weight $\Lambda$.

$\displaystyle f(x \lvert \lambda)=\frac{1}{\sqrt{1/\lambda} \ \sqrt{2 \pi}} \ e^{-(\lambda/2) \ x^2}=\sqrt{\frac{\lambda}{2 \pi}} \ e^{-(\lambda/2) \ x^2}$

$\displaystyle g(\lambda)=\frac{1}{\Gamma(\alpha)} \biggl( \frac{1}{\theta} \biggr)^\alpha \ \lambda^{\alpha-1} \ e^{-\lambda/\theta}$

The following calculation derives the unconditional density function of $X$.

\displaystyle \begin{aligned} f(x)&=\int_{0}^\infty f(x \lvert \lambda) \ g(\lambda) \ d \lambda \\&=\int_{0}^\infty \sqrt{\frac{\lambda}{2 \pi}} \ e^{-(\lambda/2) \ x^2} \ \frac{1}{\Gamma(\alpha)} \biggl( \frac{1}{\theta} \biggr)^\alpha \ \lambda^{\alpha-1} \ e^{-\lambda/\theta} \ d \lambda \\&=\frac{1}{\Gamma(\alpha)} \ \biggl( \frac{1}{\theta} \biggr)^\alpha \ \frac{1}{\sqrt{2 \pi}} \int_0^\infty \lambda^{\alpha+\frac{1}{2}-1} e^{-(\frac{x^2}{2}+\frac{1}{\theta} ) \lambda} \ d \lambda \\&=\frac{\Gamma(\alpha+\frac{1}{2})}{\Gamma(\alpha)} \ \biggl( \frac{1}{\theta} \biggr)^\alpha \ \frac{1}{\sqrt{2 \pi}} \ \biggl(\frac{2 \theta}{\theta x^2+2} \biggr)^{\alpha+\frac{1}{2}} \\& \times \int_0^\infty \frac{1}{\Gamma(\alpha+\frac{1}{2})} \ \biggl(\frac{\theta x^2+2}{2 \theta} \biggr)^{\alpha+\frac{1}{2}} \lambda^{\alpha+\frac{1}{2}-1} e^{-\frac{\theta x^2+2}{2 \theta} \lambda} \ d \lambda \\&=\frac{\Gamma(\alpha+\frac{1}{2})}{\Gamma(\alpha)} \ \biggl( \frac{1}{\theta} \biggr)^\alpha \ \frac{1}{\sqrt{2 \pi}} \ \biggl(\frac{2 \theta}{\theta x^2+2} \biggr)^{\alpha+\frac{1}{2}} \ \ \ \ \ -\infty<x<\infty \end{aligned}

The above density function is in terms of the two parameters $\alpha$ and $\theta$. In the assumptions, the two parameters are related to a common parameter $r$ such that $\alpha=\frac{r}{2}$ and $\theta=\frac{2}{r}$. The following derivation converts to the common $r$.

\displaystyle \begin{aligned} f(x)&=\frac{\Gamma(\frac{r}{2}+\frac{1}{2})}{\Gamma(\frac{r}{2})} \ \biggl( \frac{r}{2} \biggr)^{\frac{r}{2}} \ \frac{1}{\sqrt{2 \pi}} \ \biggl(\frac{2 \frac{2}{r}}{\frac{2}{r} x^2+2} \biggr)^{\frac{r}{2}+\frac{1}{2}} \\&=\frac{\Gamma(\frac{r}{2}+\frac{1}{2})}{\Gamma(\frac{r}{2})} \ \frac{r^{r/2}}{2^{r/2}} \ \frac{1}{2^{1/2} \sqrt{\pi}} \ \biggl(\frac{2/r}{x^2/r+1} \biggr)^{(r+1)/2} \\&=\frac{\Gamma \biggl(\displaystyle \frac{r+1}{2} \biggr)}{\Gamma \biggl(\displaystyle \frac{r}{2} \biggr)} \ \frac{1}{\sqrt{\pi r}} \ \frac{1 \ \ \ \ \ }{\biggl(1+\displaystyle \frac{x^2}{r} \biggr)^{(r+1)/2}} \ \ \ \ \ -\infty<x<\infty \end{aligned}

The above density function is that of a student t distribution with $r$ degrees of freedom. Of course, in performing tests of significance, the t distribution is accessed by using tables or software. A usual textbook definition of the student t distribution is the ratio of a standard normal random variable to the square root of an independent chi-squared random variable divided by its degrees of freedom (see Theorem 6 in this previous post).
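The normal-gamma mixture can also be checked by simulation. The sketch below (assuming SciPy; $r=5$ is an arbitrary illustrative choice) draws $\Lambda$ from the gamma mixing distribution, draws $X$ given $\Lambda$ from the normal distribution, and compares the empirical CDF with the t distribution with $r$ degrees of freedom.

```python
# Normal-gamma mixture simulated; compare against the t distribution with r df.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
r = 5
n = 300_000

lam = rng.gamma(shape=r / 2, scale=2 / r, size=n)     # gamma mixing weight
x = rng.normal(loc=0.0, scale=np.sqrt(1 / lam))       # normal given lambda

for q in [0.5, 1.0, 2.0]:
    print(q, (x <= q).mean(), stats.t.cdf(q, df=r))   # empirical vs t CDF
```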

$\text{ }$

$\text{ }$

$\text{ }$

$\copyright$ 2017 – Dan Ma

# Mixing probability distributions

This post discusses another way to generate new distributions from old, that of mixing distributions. The resulting distributions are called mixture distributions.

What is a Mixture?

First, let’s start with a continuous mixture. Suppose that $X$ is a continuous random variable with probability density function (pdf) $f_{X \lvert \Theta}(x \lvert \theta)$ where $\theta$ is a parameter in the pdf. There may be other parameters in the distribution but they are not relevant at the moment (e.g. these other parameters may be known constants). Suppose that the parameter $\theta$ is an uncertain quantity and is a random variable with pdf $h_\Theta(\theta)$ (if $\Theta$ is a continuous random variable) or with probability function $P(\Theta=\theta)$ (if $\Theta$ is a discrete random variable). Then taking the weighted average of $f_{X \lvert \Theta}(x \lvert \theta)$ with $h_\Theta(\theta)$ or $P(\Theta=\theta)$ as weight produces a mixture distribution. The following would be the pdf of the resulting mixture distribution.

$\displaystyle (1a) \ \ \ \ \ f_X(x)=\int_{-\infty}^\infty f_{X \lvert \Theta}(x \lvert \theta) \ h_\Theta(\theta) \ d \theta$

$\displaystyle (1b) \ \ \ \ \ f_X(x)=\sum \limits_{\theta} \biggl(f_{X \lvert \Theta}(x \lvert \theta) \ P(\Theta=\theta) \biggr)$

Thus a continuous random variable $X$ is said to be a mixture (or has a mixture distribution) if its probability density function $f_X(x)$ is a weighted average of a family of pdfs $f_{X \lvert \Theta}(x \lvert \theta)$ where the weight is the density function or probability function of the random parameter $\Theta$. The random variable $\Theta$ is said to be the mixing random variable and its pdf or probability function is called the mixing weight.

Another definition of mixture distribution is that the cumulative distribution function (cdf) of the random variable $X$ is the weighted average of a family of cumulative distribution functions indexed by the mixing random variable $\Theta$.

$\displaystyle (2a) \ \ \ \ \ F_X(x)=\int_{-\infty}^\infty F_{X \lvert \Theta}(x \lvert \theta) \ h_\Theta(\theta) \ d \theta$

$\displaystyle (2b) \ \ \ \ \ F_X(x)=\sum \limits_{\theta} \biggl(F_{X \lvert \Theta}(x \lvert \theta) \ P(\Theta=\theta) \biggr)$

The idea of discrete mixture is similar. A discrete random variable $X$ is said to be a mixture if its probability function $P(X=x)$ or cumulative distribution function $P(X \le x)$ is a weighted average of a family of probability functions or cumulative distributions indexed by the mixing random variable $\Theta$. The mixing weight can be discrete or continuous. The following shows the probability function and the cdf of a discrete mixture distribution.

$\displaystyle (3a) \ \ \ \ \ P(X=x)=\int_{-\infty}^\infty P(X=x \lvert \Theta=\theta) \ h_\Theta(\theta) \ d \theta$

$\displaystyle (3b) \ \ \ \ \ P(X \le x)=\int_{-\infty}^\infty P(X \le x \lvert \Theta=\theta) \ h_\Theta(\theta) \ d \theta$

$\text{ }$

$\displaystyle (4a) \ \ \ \ \ P(X=x)=\sum \limits_{\theta} \biggl(P(X=x \lvert \Theta=\theta) \ P(\Theta=\theta) \biggr)$

$\displaystyle (4b) \ \ \ \ \ P(X \le x)=\sum \limits_{\theta} \biggl(P(X \le x \lvert \Theta=\theta) \ P(\Theta=\theta) \biggr)$

When the mixture distribution is a weighted average of finitely many distributions, it is called an $n$-point mixture where $n$ is the number of distributions. Suppose that there are $n$ distributions with pdfs

$f_1(x),f_2(x),\cdots,f_n(x)$ (continuous case)

or probability functions

$P(X_1=x),P(X_2=x),\cdots,P(X_n=x)$ (discrete case)

with mixing probabilities $p_1,p_2,\cdots,p_n$ where the sum of the $p_i$ is 1. Then the following gives the pdf or the probability function of the mixture distribution.

$\displaystyle (5a) \ \ \ \ \ f_X(x)=\sum \limits_{j=1}^n p_j \ f_j(x)$

$\displaystyle (5b) \ \ \ \ \ P(X=x)=\sum \limits_{j=1}^n p_j \ P(X_j=x)$

The cdf for the $n$-point mixture is similarly obtained by weighting the respective conditional cdfs as in (4b).

Distributional Quantities

Once the pdf (or probability function) or cdf of a mixture is established, the other distributional quantities can be derived from the pdf or cdf. Some of the distributional quantities can be obtained by taking weighted average of the corresponding conditional counterparts. For example, the following gives the survival function and moments of a mixture distribution. We assume that the mixing weight is continuous. For discrete mixing weight, simply replace the integral with summation.

$\displaystyle (6a) \ \ \ \ \ S_X(x)=\int_{-\infty}^\infty S_{X \lvert \Theta}(x \lvert \theta) \ h_\Theta(\theta) \ d \theta$

$\displaystyle (6b) \ \ \ \ \ E(X)=\int_{-\infty}^\infty E(X \lvert \theta) \ h_\Theta(\theta) \ d \theta$

$\displaystyle (6c) \ \ \ \ \ E(X^k)=\int_{-\infty}^\infty E(X^k \lvert \theta) \ h_\Theta(\theta) \ d \theta$

Once the moments are obtained, all distributional quantities that are based on moments can be evaluated, such as variance, skewness, and kurtosis. Note that these quantities are not the weighted averages of the conditional quantities. For example, the variance of a mixture is not the weighted average of the variances of the conditional distributions. In fact, the variance of a mixture has two components.

$\displaystyle (7) \ \ \ \ \ Var(X)=E[Var(X \lvert \Theta)]+Var[E(X \lvert \Theta)]$

The relationship in (7) is called the law of total variance, which is the proper way of computing the unconditional variance $Var(X)$. The first component $E[Var(X \lvert \Theta)]$ is called the expected value of conditional variances, which is the weighted average of the conditional variances. The second component $Var[E(X \lvert \Theta)]$ is called the variance of the conditional means, which represents the additional variance as a result of the uncertainty in the parameter $\Theta$. If there is a great deal of variation among the conditional means $E(X \lvert \Theta)$, the variation will be reflected in $Var(X)$ through the second component $Var[E(X \lvert \Theta)]$. This will be further illustrated in the examples below.

Motivation

Some of the examples discussed below have gamma distributions as mixing weights. See here for basic information on the gamma distribution.

A natural interpretation of a mixture is that the uncertain parameter $\Theta$ in the conditional random variable $X \lvert \Theta$ describes an individual in a large population. For example, the parameter $\Theta$ describes a certain characteristic that varies across the units in a population. In this section, we describe the idea of mixture in an insurance setting. The example mixes Poisson distributions with a gamma distribution as mixing weight. We will see that the resulting mixture is a negative binomial distribution, which is more dispersed than the conditional Poisson distributions.

Consider a large group of insured drivers for auto collision coverage. Suppose that the claim frequency in a year for an insured driver has a Poisson distribution with mean $\theta$. The conditional probability function for the number of claims in a year for an insured driver is:

$\displaystyle P(X=x \lvert \Theta=\theta)=\frac{e^{-\theta} \ \theta^x}{x!} \ \ \ \ \ \ x=0,1,2,3,\cdots$ where $\theta>0$

The mean number of claims in a year for an insured driver is $\theta$. The parameter $\theta$ reflects the risk characteristics of an insured driver. Since the population of insured drivers is large, there is uncertainty in the parameter $\theta$. Thus it is more appropriate to regard $\theta$ as a random variable in order to capture the wide range of risk characteristics across the individuals in the population. As a result, the above probability function is not unconditional, but, rather, a conditional probability function of $X$.

What about the marginal (unconditional) probability function of $X$? Suppose that $\Theta$ has a gamma distribution with the following pdf:

$\displaystyle h_{\Theta}(\theta)=\frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ \theta^{\alpha-1} \ e^{-\beta \theta}$

where $\alpha>0$ and $\beta>0$ are known parameters of the gamma distribution. Then the unconditional probability function of $X$ is the weighted average of the conditional Poisson probability functions.

\displaystyle \begin{aligned} P(X=x)&=\int_0^\infty P(X=x \lvert \Theta=\theta) \ h_{\Theta}(\theta) \ d \theta \\&=\int_0^\infty \frac{e^{-\theta} \ \theta^x}{x!} \ \frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ \theta^{\alpha-1} \ e^{-\beta \theta} \ d \theta \\&= \frac{\beta^\alpha}{x! \Gamma(\alpha)} \int_0^\infty \theta^{x+\alpha-1} \ e^{-(\beta+1) \theta} \ d \theta \\&=\frac{\beta^\alpha}{x! \Gamma(\alpha)} \ \frac{\Gamma(x+\alpha)}{(\beta+1)^{x+\alpha}} \int_0^\infty \frac{1}{\Gamma(x+\alpha)} \ (\beta+1)^{x+\alpha} \ \theta^{x+\alpha-1} \ e^{-(\beta+1) \theta} \ d \theta \\&=\frac{\beta^\alpha}{x! \Gamma(\alpha)} \ \frac{\Gamma(x+\alpha)}{(\beta+1)^{x+\alpha}} \\&=\frac{\Gamma(x+\alpha)}{x! \ \Gamma(\alpha)} \ \biggl(\frac{\beta}{\beta+1} \biggr)^\alpha \biggl(\frac{1}{\beta+1} \biggr)^x \ \ x=0,1,2,\cdots \end{aligned}

Note that the integral in the 4th step is 1 since the integrand is a gamma density function. The probability function at the last step is that of a negative binomial distribution. If the parameter $\alpha$ is a positive integer, then the following gives the probability function of $X$ after simplifying the expression with gamma function.

$\displaystyle P(X=x)=\left\{ \begin{array}{ll} \displaystyle \biggl(\frac{\beta}{\beta+1} \biggr)^\alpha &\ x=0 \\ \text{ } & \text{ } \\ \displaystyle \frac{(x-1+\alpha) \cdots (1+\alpha) \alpha}{x!} \ \biggl(\frac{\beta}{\beta+1} \biggr)^\alpha \biggl(\frac{1}{\beta+1} \biggr)^x &\ x=1,2,\cdots \end{array} \right.$

This probability function can be further simplified as the following:

$\displaystyle P(X=x)=\binom{x+\alpha-1}{x} \biggl(\frac{\beta}{\beta+1} \biggr)^\alpha \biggl(\frac{1}{\beta+1} \biggr)^x$

where $x=0,1,2,\cdots$. This is one form of a negative binomial distribution. The mean is $E(X)=\frac{\alpha}{\beta}$ and the variance is $Var(X)=\frac{\alpha}{\beta} (1+\frac{1}{\beta})$. The variance of the negative binomial distribution is greater than the mean. In a Poisson distribution, the mean equals the variance. Thus the unconditional claim frequency $X$ is more dispersed than its conditional distributions. This is a characteristic of mixture distributions. The uncertainty in the parameter variable $\Theta$ has the effect of increasing the unconditional variance of the mixture distribution of $X$. Recall that the variance of a mixture distribution has two components, the weighted average of the conditional variances and the variance of the conditional means. The second component represents the additional variance introduced by the uncertainty in the parameter $\Theta$.
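The Poisson-gamma mixture can be verified by simulation. The following sketch (Python with NumPy and SciPy assumed; the parameter values are illustrative) simulates the two-stage experiment and compares the sample mean, variance and a probability with the negative binomial quantities derived above.

```python
# Poisson-gamma mixture: compare simulation with the negative binomial distribution
# (n = alpha, p = beta / (beta + 1)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, beta = 3.0, 2.0
n_sim = 300_000

theta = rng.gamma(shape=alpha, scale=1 / beta, size=n_sim)   # gamma mixing weight
x = rng.poisson(theta)                                        # Poisson given theta

print(x.mean(), alpha / beta)                                 # mean alpha/beta
print(x.var(), (alpha / beta) * (1 + 1 / beta))               # extra dispersion
print((x == 2).mean(), stats.nbinom.pmf(2, alpha, beta / (beta + 1)))
```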

More Examples

We now further illustrate the notion of mixture with a few more examples. Many familiar distributions are mixture distributions. The negative binomial distribution is a mixture of Poisson distributions with gamma mixing weight as discussed above. The Pareto distribution, more specifically Pareto Type II (Lomax), is a mixture of exponential distributions with gamma mixing weight (see Example 2 below). Example 3 discusses the normal-normal mixture. Example 1 demonstrates numerical calculation involving a finite mixture.

Example 1
Suppose that the size of an auto collision claim from a large group of insured drivers is a mixture of three exponential distributions with means 5, 8 and 10 (with weights 0.75, 0.15 and 0.10, respectively). Discuss the mixture distribution.

The pdf and cdf are the weighted averages of the respective exponential quantities.

\displaystyle \begin{aligned} f_X(x)&=0.75 \ (0.2 e^{-0.2x} )+0.15 \ (0.125 e^{-0.125x} )+0.10 (0.10 e^{-0.10x}) \\&\text{ } \\&=0.15 \ e^{-0.2x} +0.01875 \ e^{-0.125x}+0.01 \ e^{-0.10x} \end{aligned}

\displaystyle \begin{aligned} F_X(x)&=0.75 \ (1- e^{-0.2x} )+0.15 \ (1- e^{-0.125x} )+0.10 (1- e^{-0.10x}) \\&\text{ } \\&=1-0.75 \ e^{-0.2x} -0.15 \ e^{-0.125x}-0.10 \ e^{-0.10x} \end{aligned}

$\displaystyle S_X(x)=0.75 \ e^{-0.2x} +0.15 \ e^{-0.125x}+0.10 \ e^{-0.10x}$

For a randomly selected claim from this population of insured drivers, what is the probability that it exceeds 10? The answer is $S_X(10)=0.1813$. The pdf and cdf of the mixture allow us to derive other distributional quantities such as moments, which in turn give the skewness and kurtosis. The moments of an exponential distribution have a closed form. Thus the moments of the mixture distribution are simply the weighted averages of the exponential moments.

$\displaystyle E(X^k)=0.75 \ [5^k \ k!]+0.15 \ [8^k \ k!]+0.10 \ [10^k \ k!]$

where $k$ is a positive integer. The following evaluate the first four moments.

$\displaystyle E(X)=0.75 \ 5+0.15 \ 8+0.10 \ 10=5.95$

$\displaystyle E(X^2)=0.75 \ (5^2 \ 2!)+0.15 \ (8^2 \ 2!)+0.10 \ (10^2 \ 2!)=76.7$

$\displaystyle E(X^3)=0.75 \ (5^3 \ 3!)+0.15 \ (8^3 \ 3!)+0.10 \ (10^3 \ 3!)=1623.3$

$\displaystyle E(X^4)=0.75 \ (5^4 \ 4!)+0.15 \ (8^4 \ 4!)+0.10 \ (10^4 \ 4!)=49995.6$

The variance of $X$ is $Var(X)=76.7-5.95^2=41.2975$. The three conditional exponential variances are 25, 64 and 100. The weighted average of these would be 38.35. Because of the uncertainty resulting from not knowing which exponential distribution the claim is from, the unconditional variance is larger than 38.35.

The skewness of a distribution is the third standardized moment and the kurtosis is the fourth standardized moment. Each of them can be expressed in terms of the raw moments up to the third or fourth raw moment.

$\displaystyle \gamma=E\biggl[\biggl( \frac{X-\mu}{\sigma} \biggr)^3\biggr]=\frac{E(X^3)-3 \mu \sigma^2-\mu^3}{(\sigma^2)^{1.5}}$

$\displaystyle \text{Kurt}[X]=E\biggl[\biggl( \frac{X-\mu}{\sigma} \biggr)^4\biggr]=\frac{E(X^4)-4 \mu E(X^3)+6 \mu^2 E(X^2)-3 \mu^4}{\sigma^4}$

Note that $\mu=E(X)$ and $\sigma^2=Var(X)$. The expressions on the right hand side are in terms of the raw moments $E(X^k)$ up to $k=4$. Plugging in the raw moments produces the skewness $\gamma=2.5453$ and kurtosis $\text{Kurt}[X]=14.0097$. The excess kurtosis is then 11.0097 (subtracting 3 from the kurtosis).
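For readers who want to reproduce the Example 1 figures, the following short script (plain Python with NumPy) computes $S_X(10)$, the first four raw moments, and the resulting variance, skewness and kurtosis.

```python
# Reproduce the Example 1 numbers for the three-point exponential mixture.
import numpy as np
from math import factorial

weights = np.array([0.75, 0.15, 0.10])
means = np.array([5.0, 8.0, 10.0])

S10 = np.sum(weights * np.exp(-10.0 / means))
moments = {k: np.sum(weights * means**k * factorial(k)) for k in range(1, 5)}

mu = moments[1]
var = moments[2] - mu**2
skew = (moments[3] - 3 * mu * var - mu**3) / var**1.5
kurt = (moments[4] - 4 * mu * moments[3] + 6 * mu**2 * moments[2] - 3 * mu**4) / var**2

print(S10)              # 0.1813...
print(moments)          # 5.95, 76.7, 1623.3, 49995.6
print(var, skew, kurt)  # 41.2975, 2.5453, 14.0097 (excess kurtosis 11.0097)
```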

The skewness and excess kurtosis of an exponential distribution are always 2 and 6, respectively. One takeaway is that the skewness and kurtosis of a mixture are not the weighted averages of the conditional counterparts. In this particular case, the mixture is more skewed than the individual exponential distributions. Kurtosis is a measure of whether the data are heavy-tailed or light-tailed relative to a normal distribution (the kurtosis of a normal distribution is 3). Since the excess kurtosis of this mixture (11.0097) is even larger than that of the exponential distributions (6), the mixture distribution is considered to be heavy tailed and to have a higher likelihood of outliers.

Example 2 (Exponential-Gamma Mixture)
The Pareto distribution (Type II Lomax) is a mixture of exponential distributions with gamma mixing weight. Suppose $X$ has the exponential pdf $f_{X \lvert \Theta}(x \lvert \theta)=\theta \ e^{-\theta x}$, where $x>0$, conditional on the parameter $\Theta$. Suppose that $\Theta$ has a gamma distribution with the following pdf:

$\displaystyle h_{\Theta}(\theta)=\frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ \theta^{\alpha-1} \ e^{-\beta \theta}$

Then the following gives the unconditional pdf of the random variable $X$.

\displaystyle \begin{aligned} f_X(x)&=\int_0^\infty f_{X \lvert \Theta}(x \lvert \theta) \ h_{\Theta}(\theta) \ d \theta \\&=\int_0^\infty \theta \ e^{-\theta x} \ \frac{1}{\Gamma(\alpha)} \ \beta^\alpha \ \theta^{\alpha-1} \ e^{-\beta \theta} \ d \theta \\&= \frac{\beta^\alpha}{\Gamma(\alpha)} \int_0^\infty \theta^{\alpha+1-1} \ e^{-(x+\beta) \theta} \ d \theta \\&= \frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\Gamma(\alpha+1)}{(x+\beta)^{\alpha+1}} \int_0^\infty \frac{1}{\Gamma(\alpha+1)} \ (x+\beta)^{\alpha+1} \ \theta^{\alpha+1-1} \ e^{-(x+\beta) \theta} \ d \theta \\&=\frac{\beta^\alpha}{\Gamma(\alpha)} \frac{\Gamma(\alpha+1)}{(x+\beta)^{\alpha+1}} \\&= \frac{\alpha \ \beta^{\alpha}}{(x+\beta)^{\alpha+1}} \end{aligned}

The above is the density function of the Pareto Type II (Lomax) distribution. The Pareto distribution is discussed here. The example of exponential-gamma mixture is discussed here.

Example 3 (Normal-Normal Mixture)
Conditional on $\Theta=\theta$, consider a normal random variable $X$ with mean $\theta$ and variance $v$ where $v$ is known. The following is the conditional density function of $X$.

$\displaystyle f_{X \lvert \Theta}(x \lvert \theta)=\frac{1}{\sqrt{2 \pi v}} \ \text{exp}\biggl[-\frac{1}{2v}(x-\theta)^2 \biggr] \ \ \ -\infty<x<\infty$

Suppose that the parameter $\Theta$ is normally distributed with mean $\mu$ and variance $a$ (both known parameters). The following is the density function of $\Theta$.

$\displaystyle f_{\Theta}(\theta)=\frac{1}{\sqrt{2 \pi a}} \ \text{exp}\biggl[-\frac{1}{2a}(\theta-\mu)^2 \biggr] \ \ \ -\infty<\theta<\infty$

Determine the unconditional pdf of $X$.

\displaystyle \begin{aligned} f_X(x)&=\int_{-\infty}^\infty \frac{1}{\sqrt{2 \pi v}} \ \text{exp}\biggl[-\frac{1}{2v}(x-\theta)^2 \biggr] \ \frac{1}{\sqrt{2 \pi a}} \ \text{exp}\biggl[-\frac{1}{2a}(\theta-\mu)^2 \biggr] \ d \theta \\&=\frac{1}{2 \pi \sqrt{va}} \int_{-\infty}^\infty \text{exp}\biggl[-\frac{1}{2v}(x-\theta)^2 -\frac{1}{2a}(\theta-\mu)^2\biggr] \ d \theta \end{aligned}

The expression in the exponent has the following equivalent expression.

$\displaystyle \frac{(x-\theta)^2}{v}+\frac{(\theta-\mu)^2}{a}=\frac{a+v}{va} \biggl[\theta-\frac{ax+v \mu}{a+v}\biggr]^2 +\frac{(x-\mu)^2}{a+v}$

Continuing the derivation:

\displaystyle \begin{aligned} f_X(x)&=\frac{1}{2 \pi \sqrt{va}} \int_{-\infty}^\infty \text{exp}\biggl[-\frac{1}{2} \biggl(\frac{a+v}{va} \biggl[\theta-\frac{ax+v \mu}{a+v}\biggr]^2 +\frac{(x-\mu)^2}{a+v} \biggr) \biggr] \ d \theta \\&\displaystyle =\frac{\text{exp}\biggl[\displaystyle -\frac{(x-\mu)^2}{2(a+v)} \biggr]}{2 \pi \sqrt{va}} \int_{-\infty}^\infty \text{exp}\biggl[\displaystyle -\frac{1}{2} \biggl(\frac{a+v}{va} \biggl[\theta-\frac{ax+v \mu}{a+v}\biggr]^2 \biggr) \biggr] \ d \theta \\&=\frac{\text{exp}\biggl[\displaystyle -\frac{(x-\mu)^2}{2(a+v)} \biggr]}{\sqrt{2 \pi (a+v)} } \int_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \sqrt{\frac{a+v}{va}} \ \text{exp}\biggl[-\frac{1}{2} \biggl(\frac{a+v}{va} \biggl[\theta-\frac{ax+v \mu}{a+v}\biggr]^2 \biggr) \biggr] \ d \theta \\&=\frac{\text{exp}\biggl[\displaystyle -\frac{(x-\mu)^2}{2(a+v)} \biggr]}{\sqrt{2 \pi (a+v)} } \end{aligned}

Note that the integrand in the integral in the third line is the density function of a normal distribution with mean $\frac{ax+v \mu}{a+v}$ and variance $\frac{va}{a+v}$. Hence the integral is 1. The last expression is the unconditional pdf of $X$, repeated as follows.

$\displaystyle f_X(x)=\frac{1}{\sqrt{2 \pi (a+v)}} \ \text{exp}\biggl[-\frac{(x-\mu)^2}{2(a+v)} \biggr] \ \ \ \ -\infty<x<\infty$

The above is the pdf of a normal distribution with mean $\mu$ and variance $a+v$. Thus mixing a normal distribution with mean $\Theta$ and variance $v$, where the mixing weight $\Theta$ is normally distributed with mean $\mu$ and variance $a$, produces a normal distribution with mean $\mu$ (same mean as the mixing weight) and variance $a+v$ (sum of the conditional variance and the mixing variance).

The mean of the conditional normal distribution is uncertain. When the mean $\Theta$ follows a normal distribution with mean $\mu$, the mixture is a normal distribution that centers around $\mu$, however, with increased variance $a+v$. The increased variance of the unconditional distribution reflects the uncertainty of the parameter $\Theta$.
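A short simulation (my own sketch with arbitrary values of $\mu$, $a$ and $v$) confirms the normal-normal result: the simulated mixture has mean close to $\mu$ and variance close to $a+v$.

```python
# Normal-normal mixture: mean stays at mu, variance becomes a + v.
import numpy as np

rng = np.random.default_rng(3)
mu, a, v = 10.0, 4.0, 9.0
n = 300_000

theta = rng.normal(loc=mu, scale=np.sqrt(a), size=n)   # mixing weight Theta ~ N(mu, a)
x = rng.normal(loc=theta, scale=np.sqrt(v))            # X | Theta ~ N(Theta, v)

print(x.mean(), mu)          # mean stays at mu
print(x.var(), a + v)        # variance is the sum a + v
```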

Remarks

Mixture distributions can be used to model a statistical population with subpopulations, where the conditional density functions are the densities on the subpopulations, and the mixing weights are the proportions of each subpopulation in the overall population. If the population can be divided into a finite number of homogeneous subpopulations, then the model would be a finite mixture as in Example 1. In certain situations, continuous mixing weights may be more appropriate (e.g. the Poisson-gamma mixture).

Many other familiar distributions are mixture distributions and are discussed in the next post.

$\text{ }$

$\text{ }$

$\text{ }$

$\copyright$ 2017 – Dan Ma