The Chi-Squared Distribution, Part 3b

This post is a continuation of the previous post (Part 3a) on the chi-squared test and is part of a series of posts on the chi-squared distribution. The first post (Part 1) introduces the chi-squared distribution. The second post (Part 2) discusses the chi-squared distribution as a mathematical tool for inference involving quantitative variables. Part 3, which focuses on inference on categorical variables using Pearson’s chi-squared statistic, is broken up into three posts. Part 3a introduces the chi-squared test statistic and explains how to perform the chi-squared goodness-of-fit test. Part 3b (this post) focuses on using the chi-squared statistic to compare several populations (the test of homogeneity). Part 3c (the next post) focuses on the test of independence.

_______________________________________________________________________________________________

Comparing Two Distributions

The interpretation of the chi-squared test discussed in this post is called the chi-squared test of homogeneity. In this post, we show how the chi-squared statistic is employed to test whether the cell probabilities for certain categories are identical across several populations. We start by examining the two-population case, which then extends readily to the case of more than two populations.

Suppose that a multinomial experiment can result in k distinct outcomes. Suppose that the experiment is performed twice, with the two samples drawn from two different populations. Let p_{1,j} be the probability that the outcome in the first experiment falls into the jth category (or cell j) and let p_{2,j} be the probability that the outcome in the second experiment falls into the jth category (or cell j), where j=1,2,\cdots,k. Furthermore, suppose that there are n_1 and n_2 independent multinomial trials in the first experiment and the second experiment, respectively.

We are interested in the random variables Y_{1,1}, Y_{1,2}, \cdots, Y_{1,k} and the random variables Y_{2,1}, Y_{2,2}, \cdots, Y_{2,k} where Y_{1,j} is the number of trials in the first experiment whose outcomes fall into cell j and Y_{2,j} is the number of trials in the second experiment whose outcomes fall into cell j. Then the sampling distribution of each of the following

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ p_{1,j})^2}{n_1 \ p_{1,j}}

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ p_{2,j})^2}{n_2 \ p_{2,j}}

has an approximate chi-squared distribution with k-1 degrees of freedom (discussed here). Because the two experiments are independent, the following sum

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ p_{1,j})^2}{n_1 \ p_{1,j}}+\sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ p_{2,j})^2}{n_2 \ p_{2,j}}

has an approximate chi-squared distribution with k-1+k-1=2k-2 degrees of freedom. This additivity can be checked by simulation, as in the sketch below.
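The following is a minimal simulation sketch in Python (the cell probabilities and sample sizes are hypothetical, chosen only for illustration): it simulates the two independent experiments many times, with the cell probabilities known, and compares the simulated sums against the chi-squared distribution with 2k-2 degrees of freedom.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p = np.array([0.2, 0.3, 0.1, 0.4])  # hypothetical cell probabilities (k = 4)
    n1, n2 = 400, 600                   # hypothetical numbers of trials

    def chisq_stat(counts, n, p):
        # the chi-squared statistic for one experiment with known cell probabilities
        expected = n * p
        return np.sum((counts - expected) ** 2 / expected)

    # simulate the two independent experiments many times and record the sum
    sums = [chisq_stat(rng.multinomial(n1, p), n1, p)
            + chisq_stat(rng.multinomial(n2, p), n2, p)
            for _ in range(10000)]

    k = len(p)
    print(np.mean(sums))                    # close to 2k - 2 = 6, the chi-squared mean
    print(np.quantile(sums, 0.95))          # close to the theoretical 95th percentile
    print(stats.chi2.ppf(0.95, 2 * k - 2))  # about 12.59

When p_{1,j} and p_{2,j}, j=1,2,\cdots,k, are unknown, we wish to test the following hypothesis.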

    H_0: p_{1,1}=p_{2,1}, p_{1,2}=p_{2,2}, \cdots, p_{1,j}=p_{2,j}, \cdots, p_{1,k}=p_{2,k}

In other words, we wish to test the hypothesis that the cell probabilities associated with the two independent experiments are equal. Since the cell probabilities are generally unknown, we use sample data to estimate p_{1,j} and p_{2,j}. How do we do that? If the null hypothesis H_0 is true, then the two independent experiments can be viewed as one combined experiment. Then the following ratio

    \displaystyle \hat{p}_j=\frac{Y_{1,j}+Y_{2,j}}{n_1+n_2}

is the sample frequency of the event corresponding to cell j, j=1,2,\cdots,k. Furthermore, we only have to estimate p_{1,j} and p_{2,j} using \hat{p}_j for j=1,2,\cdots,k-1, since the estimator of p_{1,k} and p_{2,k} is 1-\hat{p}_1-\hat{p}_2-\cdots-\hat{p}_{k-1}. With all this in mind, the following is the test statistic we will need.

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ \hat{p}_j)^2}{n_1 \ \hat{p}_j}+\sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ \hat{p}_j)^2}{n_2 \ \hat{p}_j}

Since k-1 parameters are estimated, the degrees of freedom of this test statistic is obtained by subtracting k-1 from 2k-2. Thus the degrees of freedom is 2k-2-(k-1)=k-1. We test the null hypothesis H_0 against all alternatives using the upper-tailed chi-squared test. We use two examples to demonstrate how this procedure is done.
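Before working through the examples by hand, here is a sketch in Python of the two-sample procedure just described (the function and variable names are mine, not standard): it pools the two samples to estimate the cell probabilities, forms the statistic above, and computes the p-value from the chi-squared distribution with k-1 degrees of freedom.

    import numpy as np
    from scipy import stats

    def homogeneity_test_two_samples(y1, y2):
        # y1, y2: observed cell counts (length k) for the two experiments
        y1, y2 = np.asarray(y1, dtype=float), np.asarray(y2, dtype=float)
        n1, n2 = y1.sum(), y2.sum()
        p_hat = (y1 + y2) / (n1 + n2)  # pooled estimates of the cell probabilities under H_0
        stat = (np.sum((y1 - n1 * p_hat) ** 2 / (n1 * p_hat))
                + np.sum((y2 - n2 * p_hat) ** 2 / (n2 * p_hat)))
        df = len(y1) - 1               # 2k - 2 less the k - 1 estimated parameters
        p_value = stats.chi2.sf(stat, df)
        return stat, df, p_value

Applied to the data of Example 1 below, this function reproduces the chi-squared statistic of 7.8412 and the p-value of 0.55.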

Example 1
A Million Random Digits with 100,000 Normal Deviates is a book of random numbers published by the RAND Corporation in 1955. Containing 1,000,000 random digits, it was an important work in statistics and was used extensively in random number generation in the 20th century. A typical way to pick random numbers from the book is to randomly select a page and then randomly select a starting point on that page (a row and a column), reading off the digits from that point (going down the column and then continuing with the next columns) until the desired number of digits is obtained. We selected 1,000 random digits in this manner from the book and compared them with 1,000 random digits generated in Excel using the Rand() function. The following table shows the frequency distributions of the digits from the two sources (MRD = Million Random Digits). Test whether the distributions of digits are the same between MRD and Excel.

    \displaystyle \begin{array}{rrr}
    \text{Digit} & \text{Frequency (MRD)} & \text{Frequency (Excel)} \\
    \hline
    0 & 90 & 93 \\
    1 & 112 & 96 \\
    2 & 104 & 105 \\
    3 & 102 & 95 \\
    4 & 84 & 112 \\
    5 & 110 & 103 \\
    6 & 106 & 98 \\
    7 & 101 & 114 \\
    8 & 101 & 106 \\
    9 & 90 & 78 \\
    \hline
    \text{Total} & 1000 & 1000
    \end{array}

The frequencies of the digits are for the most part similar between MRD and Excel except for digits 1 and 4. The null hypothesis H_0 is that the frequencies or probabilities for the digits are the same between the two populations. The following is a precise statement of the null hypothesis.

    H_0: p_{1,j}=p_{2,j}

where j=0,1,\cdots,9, p_{1,j} is the probability that a random MRD digit is j, and p_{2,j} is the probability that a random digit from Excel is j. Under H_0, an estimate of p_{1,j}=p_{2,j} is the ratio \frac{Y_{1,j}+Y_{2,j}}{2000}. For digits 0 and 1, the estimates are (90+93)/2000 = 0.0915 and (112+96)/2000 = 0.104, respectively. The sketch below computes the pooled estimates for all ten digits; the two tables that follow show the calculation for the chi-squared procedure.
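Here is a quick sketch computing all ten pooled estimates at once (the counts are copied from the table above).

    import numpy as np

    mrd   = np.array([90, 112, 104, 102,  84, 110, 106, 101, 101, 90])
    excel = np.array([93,  96, 105,  95, 112, 103,  98, 114, 106, 78])

    # pooled estimates of the common cell probabilities under H_0
    p_hat = (mrd + excel) / 2000
    print(p_hat)  # [0.0915 0.104 0.1045 0.0985 0.098 0.1065 0.102 0.1075 0.1035 0.084]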

    Chi-Squared Statistic (MRD)
    \displaystyle \begin{array}{ccccc}
    \text{Digit} & \text{Observed} & \text{Estimated} & \text{Expected} & \text{Chi-Squared} \\
    \text{ } & \text{Frequency} & \text{Cell Probability} & \text{Frequency} & \text{ } \\
    \hline
    0 & 90 & 0.0915 & 91.5 & \frac{(90-91.5)^2}{91.5} \\
    1 & 112 & 0.1040 & 104 & \frac{(112-104)^2}{104} \\
    2 & 104 & 0.1045 & 104.5 & \frac{(104-104.5)^2}{104.5} \\
    3 & 102 & 0.0985 & 98.5 & \frac{(102-98.5)^2}{98.5} \\
    4 & 84 & 0.0980 & 98 & \frac{(84-98)^2}{98} \\
    5 & 110 & 0.1065 & 106.5 & \frac{(110-106.5)^2}{106.5} \\
    6 & 106 & 0.1020 & 102 & \frac{(106-102)^2}{102} \\
    7 & 101 & 0.1075 & 107.5 & \frac{(101-107.5)^2}{107.5} \\
    8 & 101 & 0.1035 & 103.5 & \frac{(101-103.5)^2}{103.5} \\
    9 & 90 & 0.0840 & 84 & \frac{(90-84)^2}{84} \\
    \hline
    \text{Total} & 1000 & 1.0 & 1000 & 3.920599983
    \end{array}

    Chi-Squared Statistic (Excel)
    \displaystyle \begin{array}{ccccc}
    \text{Digit} & \text{Observed} & \text{Estimated} & \text{Expected} & \text{Chi-Squared} \\
    \text{ } & \text{Frequency} & \text{Cell Probability} & \text{Frequency} & \text{ } \\
    \hline
    0 & 93 & 0.0915 & 91.5 & \frac{(93-91.5)^2}{91.5} \\
    1 & 96 & 0.1040 & 104 & \frac{(96-104)^2}{104} \\
    2 & 105 & 0.1045 & 104.5 & \frac{(105-104.5)^2}{104.5} \\
    3 & 95 & 0.0985 & 98.5 & \frac{(95-98.5)^2}{98.5} \\
    4 & 112 & 0.0980 & 98 & \frac{(112-98)^2}{98} \\
    5 & 103 & 0.1065 & 106.5 & \frac{(103-106.5)^2}{106.5} \\
    6 & 98 & 0.1020 & 102 & \frac{(98-102)^2}{102} \\
    7 & 114 & 0.1075 & 107.5 & \frac{(114-107.5)^2}{107.5} \\
    8 & 106 & 0.1035 & 103.5 & \frac{(106-103.5)^2}{103.5} \\
    9 & 78 & 0.0840 & 84 & \frac{(78-84)^2}{84} \\
    \hline
    \text{Total} & 1000 & 1.0 & 1000 & 3.920599983
    \end{array}

The value of the chi-squared statistic is 7.841199966, the sum of the two individual ones. The degrees of freedom of the chi-squared statistic is 10-1=9. At level of significance \alpha=0.05, the critical value (the point beyond which the upper tail of the chi-squared density curve has area 0.05) is 16.919. Thus we do not reject the null hypothesis that the distributions of digits in these two sources of random numbers are the same. Given the value of the chi-squared statistic (7.84), the p-value is 0.55. Since the p-value is large, there is no reason to believe that the digit distributions are different between the two sources. \square
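For readers who want to verify the arithmetic with software, scipy.stats.chi2_contingency carries out this same Pearson chi-squared test when given the 2 × 10 table of counts. The following sketch reproduces the statistic, the degrees of freedom, and the p-value.

    from scipy import stats

    mrd   = [90, 112, 104, 102,  84, 110, 106, 101, 101, 90]
    excel = [93,  96, 105,  95, 112, 103,  98, 114, 106, 78]

    stat, p_value, df, expected = stats.chi2_contingency([mrd, excel])
    print(stat)     # about 7.8412
    print(df)       # 9
    print(p_value)  # about 0.55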

Example 2
Two groups of drivers (500 drivers in each group) are observed for a 3-year period. The frequencies of accidents of the two groups are shown below. Test whether the accident frequencies are the same between the two groups of drivers.

    \displaystyle \begin{array}{rrr}
    \text{Number of Accidents} & \text{Frequency (Group 1)} & \text{Frequency (Group 2)} \\
    \hline
    0 & 193 & 154 \\
    1 & 185 & 191 \\
    2 & 88 & 97 \\
    3 & 29 & 38 \\
    4 & 4 & 17 \\
    5 & 1 & 3 \\
    \hline
    \text{Total} & 500 & 500
    \end{array}

To ensure that the expected count in the last cell is not too small (a common rule of thumb calls for an expected count of at least 5 in each cell), we collapse two cells (4 and 5 accidents) into one. The following two tables show the calculation for the chi-squared procedure.

    Chi-Squared Statistic (Group 1)
    \displaystyle \begin{array}{ccccc}
    \text{Number of} & \text{Observed} & \text{Estimated} & \text{Expected} & \text{Chi-Squared} \\
    \text{Accidents} & \text{Frequency} & \text{Cell Probability} & \text{Frequency} & \text{ } \\
    \hline
    0 & 193 & 0.347 & 173.5 & \frac{(193-173.5)^2}{173.5} \\
    1 & 185 & 0.376 & 188 & \frac{(185-188)^2}{188} \\
    2 & 88 & 0.185 & 92.5 & \frac{(88-92.5)^2}{92.5} \\
    3 & 29 & 0.067 & 33.5 & \frac{(29-33.5)^2}{33.5} \\
    4+ & 5 & 0.025 & 12.5 & \frac{(5-12.5)^2}{12.5} \\
    \hline
    \text{Total} & 500 & 1.0 & 500 & 7.562911523
    \end{array}

    Chi-Squared Statistic (Group 2)
    \displaystyle \begin{array}{ccccc}
    \text{Number of} & \text{Observed} & \text{Estimated} & \text{Expected} & \text{Chi-Squared} \\
    \text{Accidents} & \text{Frequency} & \text{Cell Probability} & \text{Frequency} & \text{ } \\
    \hline
    0 & 154 & 0.347 & 173.5 & \frac{(154-173.5)^2}{173.5} \\
    1 & 191 & 0.376 & 188 & \frac{(191-188)^2}{188} \\
    2 & 97 & 0.185 & 92.5 & \frac{(97-92.5)^2}{92.5} \\
    3 & 38 & 0.067 & 33.5 & \frac{(38-33.5)^2}{33.5} \\
    4+ & 20 & 0.025 & 12.5 & \frac{(20-12.5)^2}{12.5} \\
    \hline
    \text{Total} & 500 & 1.0 & 500 & 7.562911523
    \end{array}

The total value of the chi-squared statistic is 15.1258 with df = 5-1 = 4. At level of significance \alpha=0.01, the critical value is 13.2767. Since the chi-squared statistic is larger than 13.2767, we reject the null hypothesis that the accident frequencies are the same between the two groups of drivers. We reach the same conclusion by looking at the p-value: the p-value of the chi-squared statistic of 15.1258 is 0.004447242, which is quite small. So we have reason to believe that a chi-squared statistic of 15.1258 is too large to be explained by random fluctuation, and thus that the two groups have different accident rates. \square
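The same software check works for Example 2; the sketch below includes the cell-collapsing step.

    import numpy as np
    from scipy import stats

    group1 = np.array([193, 185, 88, 29, 4, 1])
    group2 = np.array([154, 191, 97, 38, 17, 3])

    # collapse the cells for 4 and 5 accidents into a single "4+" cell
    g1 = np.append(group1[:4], group1[4:].sum())  # [193, 185, 88, 29, 5]
    g2 = np.append(group2[:4], group2[4:].sum())  # [154, 191, 97, 38, 20]

    stat, p_value, df, expected = stats.chi2_contingency([g1, g2])
    print(stat)     # about 15.1258
    print(df)       # 4
    print(p_value)  # about 0.00445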

_______________________________________________________________________________________________

Comparing Two or More Distributions

The procedure demonstrated in the previous section can be easily extended to handle more than two distributions. Suppose that the focus of interest is a certain multinomial experiment that results in k distinct outcomes. Suppose that the experiment is performed r times with the samples drawn from different populations. The r iterations of the experiment are independent. Note the following quantities.

    p_{i,j} is the probability that the outcome in the ith experiment falls into the jth cell where i=1,2,\cdots,r and j=1,2,\cdots,k.

    n_i is the number of independent multinomial trials in the ith experiment.

    Y_{i,j} is the number of trials in the ith experiment whose outcomes fall into cell j where i=1,2,\cdots,r and j=1,2,\cdots,k.

With these in mind, consider the following chi-squared statistics

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ p_{i,j})^2}{n_i \ p_{i,j}}

where i=1,2,\cdots,r. Each of these r statistics has an approximate chi-squared distribution with k-1 degrees of freedom. Since the experiments are independent, the sum of all these chi-squared statistics

    \displaystyle \sum \limits_{i=1}^r \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ p_{i,j})^2}{n_i \ p_{i,j}}

has an approximate chi-squared distribution with df = r(k-1). The null hypothesis is that the cell probabilities are the same across all populations. The following is the formal statement.

    H_0: p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j}

where j=1,2,\cdots,k. The unknown cell probabilities are to be estimated using sample data as follows:

    \displaystyle \hat{p}_j=\frac{Y_{1,j}+Y_{2,j}+\cdots+Y_{r,j}}{n_1+n_2+\cdots+n_r}

where j=1,2,\cdots,k. The reasoning behind \hat{p}_j is that if H_0 is true, then the r iterations of the experiment form just one large combined experiment. Then Y_{1,j}+Y_{2,j}+\cdots+Y_{r,j} is simply the number of observations that fall into cell j when H_0 is assumed to be true. Thus \hat{p}_j is an estimate of the common cell probability p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j}; as before, only the estimates for j=1,2,\cdots,k-1 are needed, since the cell probabilities sum to 1.

The next step is to replace the cell probabilities by the estimates \hat{p}_j to obtain the following chi-squared statistic.

    \displaystyle \sum \limits_{i=1}^r \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ \hat{p}_j)^2}{n_i \ \hat{p}_j}

Since we only have to estimate the cell probabilities p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j} for j up to k-1, the degrees of freedom of the above statistic is r(k-1)-(k-1)=(r-1)(k-1). In other words, the degrees of freedom is the product of the number of experiments less one and the number of cells less one.

Once all the components are in place, we compare the statistic with the critical value of the chi-squared distribution with the df indicated above at an appropriate level of significance to decide whether to reject the null hypothesis. The p-value approach can also be used.
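To tie the general case together, here is a sketch of a function (the names are illustrative) that carries out all of the steps above for an r × k table of counts: pooling the counts to obtain \hat{p}_j, forming the double-sum statistic, and comparing against the chi-squared distribution with (r-1)(k-1) degrees of freedom.

    import numpy as np
    from scipy import stats

    def homogeneity_test(counts, alpha=0.05):
        # counts: an r x k array, one row of observed cell counts per population
        counts = np.asarray(counts, dtype=float)
        r, k = counts.shape
        n = counts.sum(axis=1, keepdims=True)      # n_i, the total for each population
        p_hat = counts.sum(axis=0) / counts.sum()  # pooled estimates of the cell probabilities
        expected = n * p_hat                       # expected counts n_i * p_hat_j
        stat = np.sum((counts - expected) ** 2 / expected)
        df = (r - 1) * (k - 1)
        p_value = stats.chi2.sf(stat, df)
        critical = stats.chi2.ppf(1 - alpha, df)
        return stat, df, p_value, stat > critical

For instance, homogeneity_test([[193, 185, 88, 29, 5], [154, 191, 97, 38, 20]], alpha=0.01) reproduces the statistic 15.1258 of Example 2 with df = 4 and rejects H_0.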

_______________________________________________________________________________________________
\copyright \ 2017 - \text{Dan Ma}
