This post discusses another way to generate new distributions from old, that of mixing distributions. The resulting distributions are called mixture distributions.
What is a Mixture?
First, let’s start with a continuous mixture. Suppose that $X$ is a continuous random variable with probability density function (pdf) $f_{X \mid \Theta}(x \mid \theta)$ where $\theta$ is a parameter in the pdf. There may be other parameters in the distribution but they are not relevant at the moment (e.g. these other parameters may be known constants). Suppose that the parameter $\theta$ is an uncertain quantity, i.e. the realized value of a random variable $\Theta$ with pdf $g(\theta)$ (if $\Theta$ is a continuous random variable) or with probability function $P(\Theta=\theta)$ (if $\Theta$ is a discrete random variable). Then taking the weighted average of $f_{X \mid \Theta}(x \mid \theta)$ with $g(\theta)$ or $P(\Theta=\theta)$ as weight produces a mixture distribution. The following would be the pdf of the resulting mixture distribution.

$$f_X(x)=\int_{-\infty}^{\infty} f_{X \mid \Theta}(x \mid \theta) \ g(\theta) \ d\theta \ \ \ \ \ \ \ \ (1a)$$

$$f_X(x)=\sum \limits_{\theta} f_{X \mid \Theta}(x \mid \theta) \ P(\Theta=\theta) \ \ \ \ \ \ \ \ (1b)$$
Thus a continuous random variable $X$ is said to be a mixture (or has a mixture distribution) if its probability density function $f_X(x)$ is a weighted average of a family of pdfs $f_{X \mid \Theta}(x \mid \theta)$ where the weight is the density function or probability function of the random parameter $\Theta$. The random variable $\Theta$ is said to be the mixing random variable and its pdf or probability function is called the mixing weight.
Another definition of a mixture distribution is that the cumulative distribution function (cdf) of the random variable $X$ is the weighted average of a family of cumulative distribution functions indexed by the mixing random variable $\Theta$.

$$F_X(x)=\int_{-\infty}^{\infty} F_{X \mid \Theta}(x \mid \theta) \ g(\theta) \ d\theta \ \ \ \ \ \ \ \ (2)$$
The idea of a discrete mixture is similar. A discrete random variable $X$ is said to be a mixture if its probability function or cumulative distribution function is a weighted average of a family of probability functions or cumulative distribution functions indexed by the mixing random variable $\Theta$. The mixing weight can be discrete or continuous. The following shows the probability function and the cdf of a discrete mixture distribution with a discrete mixing weight.

$$P(X=x)=\sum \limits_{\theta} P(X=x \mid \Theta=\theta) \ P(\Theta=\theta) \ \ \ \ \ \ \ \ (3a)$$

$$F_X(x)=\sum \limits_{\theta} F_{X \mid \Theta}(x \mid \theta) \ P(\Theta=\theta) \ \ \ \ \ \ \ \ (3b)$$
When the mixture distribution is a weighted average of finitely many distributions, it is called an $n$-point mixture where $n$ is the number of distributions. Suppose that there are $n$ distributions with pdfs

$$f_1(x), \ f_2(x), \ \dots, \ f_n(x) \ \ \ \ \ \ \ \ \text{(continuous case)}$$

or probability functions

$$P(X_1=x), \ P(X_2=x), \ \dots, \ P(X_n=x) \ \ \ \ \ \ \ \ \text{(discrete case)}$$

with mixing probabilities $p_1, p_2, \dots, p_n$, where the sum of the $p_i$ is 1. Then the following gives the pdf or the probability function of the mixture distribution.

$$f_X(x)=p_1 \ f_1(x) + p_2 \ f_2(x) + \cdots + p_n \ f_n(x) \ \ \ \ \ \ \ \ (4a)$$
The cdf for the $n$-point mixture is similarly obtained by weighting the respective conditional cdfs as in (4b).

$$F_X(x)=p_1 \ F_1(x) + p_2 \ F_2(x) + \cdots + p_n \ F_n(x) \ \ \ \ \ \ \ \ (4b)$$
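The $n$-point mixture formulas translate directly into code. The following is a minimal Python sketch; the two exponential components and their weights are illustrative choices, not values from any example in this post.

```python
import math

def mixture_pdf(x, weights, pdfs):
    """Pdf of an n-point mixture: the weighted average of the component pdfs, as in (4a)."""
    return sum(w * f(x) for w, f in zip(weights, pdfs))

def mixture_cdf(x, weights, cdfs):
    """Cdf of an n-point mixture: the weighted average of the component cdfs, as in (4b)."""
    return sum(w * F(x) for w, F in zip(weights, cdfs))

# Illustrative 2-point mixture of exponential distributions with means 2 and 5
weights = [0.6, 0.4]
pdfs = [lambda x: (1 / 2) * math.exp(-x / 2), lambda x: (1 / 5) * math.exp(-x / 5)]
cdfs = [lambda x: 1 - math.exp(-x / 2), lambda x: 1 - math.exp(-x / 5)]

print(mixture_pdf(1.0, weights, pdfs))
print(mixture_cdf(1.0, weights, cdfs))
```

Because each component cdf runs from 0 to 1 and the weights sum to 1, the mixture cdf automatically does as well.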
Distributional Quantities
Once the pdf (or probability function) or cdf of a mixture is established, the other distributional quantities can be derived from the pdf or cdf. Some of the distributional quantities can be obtained by taking the weighted average of the corresponding conditional counterparts. For example, the following gives the survival function and the moments of a mixture distribution. We assume that the mixing weight is continuous; for a discrete mixing weight, simply replace the integral with a summation.

$$S_X(x)=\int_{-\infty}^{\infty} S_{X \mid \Theta}(x \mid \theta) \ g(\theta) \ d\theta \ \ \ \ \ \ \ \ (5)$$

$$E[X^k]=\int_{-\infty}^{\infty} E[X^k \mid \Theta=\theta] \ g(\theta) \ d\theta \ \ \ \ \ \ \ \ (6)$$
Once the moments are obtained, all distributional quantities that are based on moments can be evaluated, such as the variance, skewness, and kurtosis. Note that these quantities are not the weighted averages of the conditional quantities. For example, the variance of a mixture is not the weighted average of the variances of the conditional distributions. In fact, the variance of a mixture has two components.

$$Var(X)=E[Var(X \mid \Theta)] + Var(E[X \mid \Theta]) \ \ \ \ \ \ \ \ (7)$$
The relationship in (7) is called the law of total variance, which is the proper way of computing the unconditional variance $Var(X)$. The first component $E[Var(X \mid \Theta)]$ is called the expected value of the conditional variances, which is the weighted average of the conditional variances. The second component $Var(E[X \mid \Theta])$ is called the variance of the conditional means, which represents the additional variance that results from the uncertainty in the parameter $\Theta$. If there is a great deal of variation among the conditional means $E[X \mid \Theta=\theta]$, the variation will be reflected in $Var(X)$ through the second component $Var(E[X \mid \Theta])$. This will be further illustrated in the examples below.
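The two components of the law of total variance can be verified directly for a small discrete mixture. The following sketch uses an illustrative two-point mixture of exponential distributions (not one of the examples below) and checks that the unconditional variance computed from raw moments equals the sum of the two components.

```python
# 2-point mixture: Theta = 2 with probability 0.6, Theta = 5 with probability 0.4;
# X | Theta = theta is exponential with mean theta (so conditional variance theta^2).
weights = [0.6, 0.4]
thetas = [2.0, 5.0]

# Unconditional raw moments are weighted averages of conditional raw moments.
ex = sum(w * t for w, t in zip(weights, thetas))          # E[X]; conditional mean is theta
ex2 = sum(w * 2 * t**2 for w, t in zip(weights, thetas))  # E[X^2]; conditional E[X^2] is 2*theta^2
var_x = ex2 - ex**2

# Law of total variance: Var(X) = E[Var(X | Theta)] + Var(E[X | Theta])
e_cond_var = sum(w * t**2 for w, t in zip(weights, thetas))             # E[Var(X | Theta)]
var_cond_mean = sum(w * t**2 for w, t in zip(weights, thetas)) - ex**2  # Var(E[X | Theta])

print(var_x)                      # 14.56
print(e_cond_var + var_cond_mean) # 14.56
```

Here the expected conditional variance is 12.4 and the variance of the conditional means is 2.16; the latter is exactly the extra dispersion introduced by not knowing which component generated the observation.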
Motivation
Some of the examples discussed below use the gamma distribution as the mixing weight. See here for basic information on the gamma distribution.
A natural interpretation of mixture is that the uncertain parameter $\Theta$ in the conditional random variable $X \mid \Theta$ describes an individual unit in a large population. For example, the parameter $\Theta$ describes a certain characteristic across the units in a population. In this section, we describe the idea of mixture in an insurance setting. The example mixes Poisson distributions with a gamma distribution as the mixing weight. We will see that the resulting mixture is a negative binomial distribution, which is more dispersed than the conditional Poisson distributions.
Consider a large group of insured drivers for auto collision coverage. Suppose that the claim frequency in a year for an insured driver has a Poisson distribution with mean $\lambda$. The conditional probability function for the number of claims $X$ in a year for an insured driver is:

$$P(X=n \mid \Lambda=\lambda)=\frac{e^{-\lambda} \ \lambda^n}{n!} \ \ \ \ \ \ \ \ n=0,1,2,\dots$$
The mean number of claims in a year for an insured driver is $\lambda$. The parameter $\lambda$ reflects the risk characteristics of an insured driver. Since the population of insured drivers is large, there is uncertainty in the parameter $\lambda$. Thus it is more appropriate to regard $\lambda$ as the realized value of a random variable $\Lambda$ in order to capture the wide range of risk characteristics across the individuals in the population. As a result, the above probability function is not unconditional, but, rather, a conditional probability function of $X$.
What about the marginal (unconditional) probability function of $X$? Suppose that $\Lambda$ has a gamma distribution with the following pdf:

$$g(\lambda)=\frac{\beta^\alpha}{\Gamma(\alpha)} \ \lambda^{\alpha-1} \ e^{-\beta \lambda} \ \ \ \ \ \ \ \ \lambda>0$$

where $\alpha$ and $\beta$ are known parameters of the gamma distribution. Then the unconditional probability function of $X$ is the weighted average of the conditional Poisson probability functions.

$$\begin{aligned} P(X=n)&=\int_0^\infty P(X=n \mid \Lambda=\lambda) \ g(\lambda) \ d\lambda \\&=\int_0^\infty \frac{e^{-\lambda} \ \lambda^n}{n!} \cdot \frac{\beta^\alpha}{\Gamma(\alpha)} \ \lambda^{\alpha-1} \ e^{-\beta \lambda} \ d\lambda \\&=\frac{\beta^\alpha}{n! \ \Gamma(\alpha)} \int_0^\infty \lambda^{n+\alpha-1} \ e^{-(\beta+1) \lambda} \ d\lambda \\&=\frac{\beta^\alpha}{n! \ \Gamma(\alpha)} \cdot \frac{\Gamma(n+\alpha)}{(\beta+1)^{n+\alpha}} \int_0^\infty \frac{(\beta+1)^{n+\alpha}}{\Gamma(n+\alpha)} \ \lambda^{n+\alpha-1} \ e^{-(\beta+1) \lambda} \ d\lambda \\&=\frac{\Gamma(n+\alpha)}{n! \ \Gamma(\alpha)} \ \biggl(\frac{\beta}{\beta+1}\biggr)^\alpha \ \biggl(\frac{1}{\beta+1}\biggr)^n \end{aligned}$$
Note that the integral in the 4th step is 1 since the integrand is a gamma density function (with shape parameter $n+\alpha$ and rate parameter $\beta+1$). The probability function at the last step is that of a negative binomial distribution. If the parameter $\alpha$ is a positive integer, then the following gives the probability function of $X$ after simplifying the expression with the gamma function.

$$P(X=n)=\binom{n+\alpha-1}{n} \ \biggl(\frac{\beta}{\beta+1}\biggr)^\alpha \ \biggl(\frac{1}{\beta+1}\biggr)^n \ \ \ \ \ \ \ \ n=0,1,2,\dots$$
This probability function can be further simplified as the following:

$$P(X=n)=\binom{n+\alpha-1}{n} \ p^\alpha \ (1-p)^n \ \ \ \ \ \ \ \ n=0,1,2,\dots$$

where $p=\frac{\beta}{\beta+1}$. This is one form of a negative binomial distribution. The mean is $E[X]=\frac{\alpha}{\beta}$ and the variance is $Var(X)=\frac{\alpha}{\beta} \cdot \frac{\beta+1}{\beta}$. The variance of the negative binomial distribution is thus greater than its mean. In a Poisson distribution, the mean equals the variance. Thus the unconditional claim frequency $X$ is more dispersed than its conditional distributions. This is a characteristic of mixture distributions. The uncertainty in the parameter $\Lambda$ has the effect of increasing the unconditional variance of the mixture distribution of $X$. Recall that the variance of a mixture distribution has two components: the weighted average of the conditional variances and the variance of the conditional means. The second component represents the additional variance introduced by the uncertainty in the parameter $\Lambda$.
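The Poisson–gamma result can be checked numerically. In this sketch, the gamma parameters $\alpha$ and $\beta$ are illustrative values; the mixture probability function, obtained by numerically integrating the Poisson probability function against the gamma mixing density, is compared with the closed-form negative binomial probability function derived above.

```python
import math

alpha, beta = 2.2, 1.5  # illustrative gamma shape and rate parameters

def nb_pf(n):
    """Closed-form negative binomial probability function from the mixture derivation.

    Computed on the log scale with lgamma so it stays stable for large n.
    """
    log_p = (math.lgamma(n + alpha) - math.lgamma(n + 1) - math.lgamma(alpha)
             + alpha * math.log(beta / (beta + 1)) + n * math.log(1 / (beta + 1)))
    return math.exp(log_p)

def mixture_pf(n, steps=100000, upper=60.0):
    """Numerically integrate the Poisson probability function against the gamma density."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps):
        lam = i * h
        poisson = math.exp(-lam) * lam**n / math.factorial(n)
        gamma_pdf = beta**alpha / math.gamma(alpha) * lam ** (alpha - 1) * math.exp(-beta * lam)
        total += poisson * gamma_pdf * h
    return total

for n in range(5):
    print(n, nb_pf(n), mixture_pf(n))
```

The two columns agree to several decimal places, confirming that integrating out the gamma-distributed Poisson mean produces the negative binomial distribution.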
More Examples
We now further illustrate the notion of mixture with a few more examples. Many familiar distributions are mixture distributions. The negative binomial distribution is a mixture of Poisson distributions with a gamma mixing weight, as discussed above. The Pareto distribution, more specifically the Pareto Type II (Lomax) distribution, is a mixture of exponential distributions with a gamma mixing weight (see Example 2 below). Example 3 discusses the normal-normal mixture. Example 1 demonstrates numerical calculations involving a finite mixture.
Example 1
Suppose that the size of an auto collision claim from a large group of insured drivers is a mixture of three exponential distributions with means 5, 8 and 10, with weights 0.75, 0.15 and 0.10, respectively. Discuss the mixture distribution.
The pdf and cdf are the weighted averages of the respective exponential quantities.

$$f_X(x)=0.75 \ \frac{1}{5} \ e^{-x/5} + 0.15 \ \frac{1}{8} \ e^{-x/8} + 0.10 \ \frac{1}{10} \ e^{-x/10}$$

$$F_X(x)=0.75 \ (1-e^{-x/5}) + 0.15 \ (1-e^{-x/8}) + 0.10 \ (1-e^{-x/10})$$
For a randomly selected claim from this population of insured drivers, what is the probability that it exceeds 10? The answer is $P(X>10)=0.75 \ e^{-10/5}+0.15 \ e^{-10/8}+0.10 \ e^{-10/10} \approx 0.1813$. The pdf and cdf of the mixture allow us to derive other distributional quantities such as moments, and then use the moments to derive skewness and kurtosis. The moments of an exponential distribution have a closed form, so the moments of the mixture distribution are simply the weighted averages of the exponential moments.
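The survival probability just computed can be reproduced in a few lines of Python, using only the weights and means given in the example.

```python
import math

weights = [0.75, 0.15, 0.10]
means = [5.0, 8.0, 10.0]

def s(x):
    """Survival function of the mixture: weighted average of exponential survival functions."""
    return sum(w * math.exp(-x / m) for w, m in zip(weights, means))

print(round(s(10), 4))  # 0.1813
```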
$$E[X^k]=k! \ \bigl(0.75 \cdot 5^k + 0.15 \cdot 8^k + 0.10 \cdot 10^k \bigr)$$

where $k$ is a positive integer. The following evaluates the first four moments.

$$E[X]=5.95 \ \ \ \ \ E[X^2]=76.7 \ \ \ \ \ E[X^3]=1623.3 \ \ \ \ \ E[X^4]=49995.6$$
The variance of $X$ is $Var(X)=76.7-5.95^2=41.2975$. The three conditional exponential variances are 25, 64 and 100, and the weighted average of these is 38.35. Because of the uncertainty resulting from not knowing which exponential distribution the claim comes from, the unconditional variance 41.2975 is larger than 38.35.
The skewness of a distribution is its standardized third central moment and the kurtosis is its standardized fourth central moment. Each of them can be expressed in terms of the raw moments up to the third or fourth raw moment.

$$\gamma_1=\frac{E[X^3]-3 \mu \sigma^2-\mu^3}{\sigma^3} \ \ \ \ \ \ \ \ \text{(skewness)}$$

$$\kappa=\frac{E[X^4]-4 \mu E[X^3]+6 \mu^2 E[X^2]-3 \mu^4}{\sigma^4} \ \ \ \ \ \ \ \ \text{(kurtosis)}$$

Note that $\mu=E[X]$ and $\sigma^2=Var(X)$. The expressions on the right-hand side are in terms of the raw moments up to $E[X^4]$. Plugging in the raw moments produces the skewness $\gamma_1 \approx 2.5453$ and kurtosis $\kappa \approx 14.0097$. The excess kurtosis is then 11.0097 (subtracting 3 from the kurtosis).
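The moment calculations in this example can be verified with a short script; only the weights and means from the example are used.

```python
import math

weights = [0.75, 0.15, 0.10]
means = [5.0, 8.0, 10.0]

def raw_moment(k):
    """E[X^k] of the mixture: weighted average of exponential raw moments k! * theta^k."""
    return math.factorial(k) * sum(w * m**k for w, m in zip(weights, means))

m1, m2, m3, m4 = (raw_moment(k) for k in range(1, 5))
var = m2 - m1**2
skew = (m3 - 3 * m1 * var - m1**3) / var**1.5
kurt = (m4 - 4 * m1 * m3 + 6 * m1**2 * m2 - 3 * m1**4) / var**2

print(m1, m2, m3, m4)  # approximately 5.95, 76.7, 1623.3, 49995.6
print(var)             # approximately 41.2975
print(skew, kurt)      # approximately 2.5453 and 14.0097
```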
The skewness and excess kurtosis of an exponential distribution are always 2 and 6, respectively. One takeaway is that the skewness and kurtosis of a mixture are not the weighted averages of the conditional counterparts. In this particular case, the mixture is more skewed than the individual exponential distributions. Kurtosis is a measure of whether a distribution is heavy-tailed or light-tailed relative to a normal distribution (the kurtosis of a normal distribution is 3). Since the excess kurtosis of this mixture (11.0097) exceeds even that of an exponential distribution (6), the mixture distribution is considered to be heavy tailed and to have a higher likelihood of outliers.
Example 2 (Exponential-Gamma Mixture)
The Pareto distribution (Type II Lomax) is a mixture of exponential distributions with a gamma mixing weight. Suppose that, conditional on the parameter $\theta$, $X$ has the exponential pdf $f_{X \mid \Theta}(x \mid \theta)=\theta \ e^{-\theta x}$, where $x>0$. Suppose that $\Theta$ has a gamma distribution with the following pdf:

$$g(\theta)=\frac{\beta^\alpha}{\Gamma(\alpha)} \ \theta^{\alpha-1} \ e^{-\beta \theta} \ \ \ \ \ \ \ \ \theta>0$$

Then the following gives the unconditional pdf of the random variable $X$.

$$\begin{aligned} f_X(x)&=\int_0^\infty \theta \ e^{-\theta x} \cdot \frac{\beta^\alpha}{\Gamma(\alpha)} \ \theta^{\alpha-1} \ e^{-\beta \theta} \ d\theta \\&=\frac{\beta^\alpha}{\Gamma(\alpha)} \int_0^\infty \theta^{\alpha} \ e^{-(x+\beta) \theta} \ d\theta \\&=\frac{\beta^\alpha}{\Gamma(\alpha)} \cdot \frac{\Gamma(\alpha+1)}{(x+\beta)^{\alpha+1}} \\&=\frac{\alpha \ \beta^\alpha}{(x+\beta)^{\alpha+1}} \end{aligned}$$
The above is the density of the Pareto Type II (Lomax) distribution. The Pareto distribution is discussed here. The example of the exponential-gamma mixture is discussed here.
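As a numerical sanity check, the integral defining the exponential-gamma mixture can be computed directly and compared with the closed-form Lomax density; the gamma parameters below are illustrative values.

```python
import math

alpha, beta = 3.0, 2.0  # illustrative gamma shape and rate parameters

def lomax_pdf(x):
    """Closed-form Pareto Type II (Lomax) pdf obtained from the mixture derivation."""
    return alpha * beta**alpha / (x + beta) ** (alpha + 1)

def mixture_pdf(x, steps=100000, upper=40.0):
    """Numerically integrate the exponential pdf against the gamma mixing density."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps):
        t = i * h
        exp_pdf = t * math.exp(-t * x)  # exponential pdf at x, with rate t
        gamma_pdf = beta**alpha / math.gamma(alpha) * t ** (alpha - 1) * math.exp(-beta * t)
        total += exp_pdf * gamma_pdf * h
    return total

for x in [0.5, 1.0, 2.0]:
    print(x, lomax_pdf(x), mixture_pdf(x))
```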
Example 3 (Normal-Normal Mixture)
Conditional on $\Theta=\theta$, consider a normal random variable $X$ with mean $\theta$ and variance $v$, where $v$ is known. The following is the conditional density function of $X$.

$$f_{X \mid \Theta}(x \mid \theta)=\frac{1}{\sqrt{2 \pi v}} \ e^{-\frac{(x-\theta)^2}{2 v}}$$

Suppose that the parameter $\Theta$ is normally distributed with mean $\mu$ and variance $a$ (both known parameters). The following is the density function of $\Theta$.

$$g(\theta)=\frac{1}{\sqrt{2 \pi a}} \ e^{-\frac{(\theta-\mu)^2}{2 a}}$$

Determine the unconditional pdf of $X$.
The unconditional pdf of $X$ is the weighted average of the conditional pdfs:

$$f_X(x)=\int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi v}} \ e^{-\frac{(x-\theta)^2}{2 v}} \cdot \frac{1}{\sqrt{2 \pi a}} \ e^{-\frac{(\theta-\mu)^2}{2 a}} \ d\theta$$

The expression in the exponent has the following equivalent expression.

$$-\frac{(x-\theta)^2}{2 v}-\frac{(\theta-\mu)^2}{2 a}=-\frac{(x-\mu)^2}{2 (v+a)}-\frac{\Bigl(\theta-\frac{a x+v \mu}{v+a}\Bigr)^2}{2 \ \frac{v a}{v+a}}$$
Continuing the derivation:

$$\begin{aligned} f_X(x)&=\int_{-\infty}^{\infty} \frac{1}{2 \pi \sqrt{v a}} \ e^{-\frac{(x-\mu)^2}{2 (v+a)}} \ e^{-\frac{(\theta-m)^2}{2 w}} \ d\theta \\&=\frac{1}{\sqrt{2 \pi (v+a)}} \ e^{-\frac{(x-\mu)^2}{2 (v+a)}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi w}} \ e^{-\frac{(\theta-m)^2}{2 w}} \ d\theta \\&=\frac{1}{\sqrt{2 \pi (v+a)}} \ e^{-\frac{(x-\mu)^2}{2 (v+a)}} \end{aligned}$$

where $m=\frac{a x+v \mu}{v+a}$ and $w=\frac{v a}{v+a}$. Note that the integrand in the integral in the second line is the density function of a normal distribution with mean $m$ and variance $w$. Hence the integral is 1. The last expression is the unconditional pdf of $X$, repeated as follows.

$$f_X(x)=\frac{1}{\sqrt{2 \pi (v+a)}} \ e^{-\frac{(x-\mu)^2}{2 (v+a)}}$$
The above is the pdf of a normal distribution with mean $\mu$ and variance $v+a$. Thus mixing normal distributions with mean $\theta$ and variance $v$, with the mixing weight $\Theta$ normally distributed with mean $\mu$ and variance $a$, produces a normal distribution with mean $\mu$ (the same mean as the mixing weight) and variance $v+a$ (the sum of the conditional variance and the mixing variance).
The mean of the conditional normal distribution is uncertain. When the mean $\Theta$ follows a normal distribution with mean $\mu$, the mixture is a normal distribution that centers around $\mu$, however with increased variance $v+a$. The increased variance of the unconditional distribution reflects the uncertainty in the parameter $\Theta$.
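The normal-normal result lends itself to a quick simulation check. The following sketch, with illustrative values of $\mu$, $a$ and $v$, draws from the two-stage mixture and compares the sample mean and variance with $\mu$ and $v+a$.

```python
import random
import statistics

random.seed(0)
mu, a = 10.0, 4.0  # mean and variance of the mixing normal distribution of Theta
v = 9.0            # known conditional variance of X given Theta

# Two-stage sampling from the mixture: draw theta, then draw X given theta
xs = []
for _ in range(200000):
    theta = random.gauss(mu, a**0.5)
    xs.append(random.gauss(theta, v**0.5))

print(statistics.mean(xs))      # close to mu = 10
print(statistics.variance(xs))  # close to v + a = 13
```

Note that the sample variance is close to $v+a=13$, not to the conditional variance $v=9$; the difference is exactly the variance of the conditional means.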
Remarks
Mixture distributions can be used to model a statistical population with subpopulations, where the conditional density functions are the densities on the subpopulations, and the mixing weights are the proportions of each subpopulation in the overall population. If the population can be divided into a finite number of homogeneous subpopulations, then the model would be a finite mixture as in Example 1. In certain situations, continuous mixing weights may be more appropriate (e.g. the Poisson-gamma mixture).
Many other familiar distributions are mixture distributions and are discussed in the next post.
2017 – Dan Ma