Chi-squared test

The chi-squared test is a useful and versatile test. There are several interpretations of the chi-squared test, which are discussed in three previous posts. The different uses of the same test can be confusing to students. This post connects the ideas in the three previous posts and supplements the earlier discussions.

The chi-squared test is based on the chi-squared statistic, which is a measure of the magnitude of the difference between the observed counts and the expected counts in an experimental design that involves one or more categorical variables. The null hypothesis is the assumption that the observed counts are consistent with the expected counts. A large value of the chi-squared statistic gives evidence for rejecting the null hypothesis.

The chi-squared test is also simple to use. The chi-squared statistic has an approximate chi-squared distribution, which makes it easy to evaluate the sample data. The chi-squared test is included in various software packages. For applications with a small number of categories, the calculation can even be done with a hand-held calculator.

_______________________________________________________________________________________________

The Goodness-of-Fit Test and the Test of Homogeneity

The three interpretations of the chi-squared test have been discussed in these posts: goodness-of-fit test, test of homogeneity and test of independence.

The three different uses of the test can be kept straight by having a firm understanding of the underlying experimental design.

For the goodness-of-fit test, there is only one population involved. The experiment is to measure one categorical variable on one population. Thus only one sample is used in applying the chi-squared test. The one-sample data produce the observed counts for the categorical variable in question. Let’s say the variable has k cells. Then there would be k observed counts. The expected counts for the k cells would come from a hypothesized distribution of the categorical variable. The chi-squared statistic is then the sum over the k cells of the squared differences between the observed and expected counts, each divided by the expected count. Essentially the hypothesized distribution is the null hypothesis. More specifically, the null hypothesis would be the statement that the cell probabilities are derived from the hypothesized distribution.
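
In symbols, writing O_j for the observed count and E_j for the expected count in cell j, the goodness-of-fit statistic is

    \displaystyle \chi^2=\sum \limits_{j=1}^k \frac{(O_j-E_j)^2}{E_j}

which has an approximate chi-squared distribution with k-1 degrees of freedom when the hypothesized distribution is fully specified (no parameters estimated from the data).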

As a quick example, we may want to answer the question of whether a given die is a fair die. We then observe n rolls of the die and classify the rolls into 6 cells (the values 1 through 6). The null hypothesis is that the values of the die follow a uniform distribution. Another way to state the hypothesis is that each cell probability is 1/6. Another example is testing whether the claim frequency of a group of insured drivers follows a Poisson distribution. The cell probabilities are then calculated based on the assumption of a Poisson distribution. In short, the goodness-of-fit test is to test whether the observed counts for one categorical variable come from (or fit) a hypothesized distribution. See Example 1 and Example 2 in the post on the goodness-of-fit test.
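
To make the die example concrete, the following is a minimal sketch in Python using the chisquare function in scipy.stats; the roll counts shown are hypothetical and serve only to illustrate the mechanics.

    # A minimal sketch of the goodness-of-fit test for a fair die.
    # The observed counts are hypothetical and only illustrate the calculation.
    from scipy.stats import chisquare

    observed = [95, 108, 101, 87, 110, 99]      # counts of faces 1 through 6 in 600 rolls
    expected = [sum(observed) / 6] * 6          # fair die: each cell probability is 1/6

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-squared = {stat:.4f}, p-value = {p_value:.4f}")
    # A large statistic (small p-value) is evidence against the fair-die hypothesis.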

In the test of homogeneity, the focus is to compare two or more populations (or two or more subpopulations of a population) on the same categorical variable, i.e. whether the categorical variable in question follows the same distribution across the different populations. For example, do two different groups of insured drivers exhibit the same claim frequency rates? Do adults with different educational attainment levels have the same proportions of current smokers, former smokers and never smokers? Are political affiliations similar across racial/ethnic groups? In this test, the goal is to determine whether the cells of the categorical variable have the same proportions across the populations, hence the name test of homogeneity. In the experiment, researchers would sample each population (or group) separately on the categorical variable in question. Thus there will be multiple samples (one for each group) and the samples are independent.

In the test of homogeneity, the calculation of the chi-squared statistic involves adding up, over all the samples, the squared differences between the observed counts and the expected counts (each divided by the expected count). For illustration, see Example 1 and Example 2 in the post on the test of homogeneity.

_______________________________________________________________________________________________

Test of Independence

The test of independence can be confused with the test of homogeneity. The objectives of the two tests can be similar. For example, a test of hypothesis might seek to determine whether the proportions of smoking statuses (current smoker, former smoker and never smoker) are the same across the groups with different education levels. This sounds like a test of homogeneity since it seeks to determine whether the distribution of smoking status is the same across the different groups (levels of educational attainment). However, a test of independence can also have this same objective.

The difference between the test of homogeneity and the test of independence is one of experimental design. In the test of homogeneity, the researchers sample each group (or population) separately. For example, they would sample individuals from groups with various levels of education separately and classify the individuals in each group by smoking status. The chi-squared test to use in this case is the test of homogeneity. In this experimental design, the experimenter might sample 1,000 individuals who are not high school graduates, 1,000 individuals who are high school graduates, 1,000 individuals who have some college and so on. Then the experimenter would compare the distribution of smoking status across the different samples.

An experimenter using a test of independence might try to answer the same question but proceeds in a different way. The experimenter would draw a single sample from a given population and observe two categorical variables (e.g. level of education and smoking status) on each individual.

Then the researchers would classify each individual into a cell in a two-way table. See Table 3b in the previous post on the test of independence. The values of the level of education go across the columns of the table (the column variable). The values of the smoking status go down the rows (the row variable). Each individual in the sample belongs to one cell in the table according to the values of the row and column variables. The two-way table helps determine whether the row variable and the column variable are associated in the given population. In other words, the experimenter is interested in finding out whether one variable explains the other (or one variable affects the other).

For ease of discussion, let’s say the column variable (level of education) is the explanatory variable. The experimenter would then be interested in whether the conditional distribution of the row variable (smoking status) is similar or different across the columns. If the conditional distributions are similar, the column variable does not affect the row variable (the two variables are not associated). This would also mean that the distribution of smoking status is the same across the different levels of education (a conclusion of homogeneity).

If the conditional distribution of the row variable (smoking status) differs across the columns, then the column variable does affect the row variable (the two variables are associated). This would also mean that the distributions of smoking status differ across the levels of education (a conclusion of non-homogeneity).
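
The following is a minimal sketch of how the conditional distributions in such a two-way table might be computed in Python with pandas; the small data set and its category labels are made up purely for illustration.

    # A sketch of building a two-way table and its conditional distributions.
    # The data below are hypothetical and only illustrate the mechanics.
    import pandas as pd

    data = pd.DataFrame({
        "education": ["No HS", "HS", "College", "HS", "College", "No HS", "College", "HS"],
        "smoking":   ["Current", "Never", "Never", "Former", "Never", "Current", "Former", "Never"],
    })

    # Two-way table of counts (rows = smoking status, columns = education level)
    counts = pd.crosstab(data["smoking"], data["education"])
    print(counts)

    # Conditional distribution of smoking status within each education level
    print(pd.crosstab(data["smoking"], data["education"], normalize="columns"))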

The test of independence and the test of homogeneity are based on two different experimental designs. Hence their implementations of the chi-squared statistic are different. However, each design can be structured to answer similar questions.

_______________________________________________________________________________________________
\copyright 2017 – Dan Ma

The Chi-Squared Distribution, Part 3b

This post is a continuation of the previous post (Part 3a) on the chi-squared test and is also part of a series of posts on the chi-squared distribution. The first post (Part 1) is an introduction to the chi-squared distribution. The second post (Part 2) is on the chi-squared distribution as a mathematical tool for inference involving quantitative variables. Part 3, which focuses on inference on categorical variables using Pearson’s chi-squared statistic, is broken up into three posts. Part 3a is an introduction to the chi-squared test statistic and explains how to perform the chi-squared goodness-of-fit test. Part 3b (this post) focuses on using the chi-squared statistic to compare several populations (test of homogeneity). Part 3c (the next post) focuses on the test of independence.

_______________________________________________________________________________________________

Comparing Two Distributions

The interpretation of the chi-squared test discussed in this post is called the chi-squared test of homogeneity. In this post, we show how the chi-squared statistic is employed to test whether the cell probabilities for certain categories are identical across several populations. We start by examining the two-population case, which is then easily extended to the case of more than two populations.

Suppose that a multinomial experiment can result in k distinct outcomes. Suppose that the experiment is performed two times with the two samples drawn from two different populations. Let p_{1,j} be the probability that the outcome in the first experiment falls into the jth category (or cell j) and let p_{2,j} be the probability that the outcome in the second experiment falls into the jth category (or cell j) where j=1,2,\cdots,k. Furthermore, suppose that there are n_1 and n_2 independent multinomial trials in the first experiment and the second experiment, respectively.

We are interested in the random variables Y_{1,1}, Y_{1,2}, \cdots, Y_{1,k} and the random variables Y_{2,1}, Y_{2,2}, \cdots, Y_{2,k} where Y_{1,j} is the number of trials in the first experiment whose outcomes fall into cell j and Y_{2,j} is the number of trials in the second experiment whose outcomes fall into cell j. Then each of the following

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ p_{1,j})^2}{n_1 \ p_{1,j}}

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ p_{2,j})^2}{n_2 \ p_{2,j}}

has an approximate chi-squared distribution with k-1 degrees of freedom (discussed here). Because the two experiments are independent, the following sum

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ p_{1,j})^2}{n_1 \ p_{1,j}}+\sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ p_{2,j})^2}{n_2 \ p_{2,j}}

has an approximate chi-squared distribution with k-1+k-1=2k-2 degrees of freedom. When p_{1,j} and p_{2,j}, j=1,2,\cdots,k, are unknown, we wish to test the following hypothesis.

    H_0: p_{1,1}=p_{2,1}, p_{1,2}=p_{2,2}, \cdots, p_{1,j}=p_{2,j}, \cdots, p_{1,k}=p_{2,k}

In other words, we wish to test the hypothesis that the cell probabilities associated with the two independent experiments are equal. Since the cell probabilities are generally unknown, we use sample data to estimate p_{1,j} and p_{2,j}. How do we do that? If the null hypothesis H_0 is true, then the two independent experiments can be viewed as one combined experiment. Then the following ratio

    \displaystyle \hat{p}_j=\frac{Y_{1,j}+Y_{2,j}}{n_1+n_2}

is the pooled sample proportion for cell j, j=1,2,\cdots,k. Furthermore, we only have to estimate p_{1,j} and p_{2,j} using \hat{p}_j for j=1,2,\cdots,k-1 since the estimate of p_{1,k} and p_{2,k} is 1-\hat{p}_1-\hat{p}_2-\cdots-\hat{p}_{k-1}. With all these in mind, the following is the test statistic we will need.

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ \hat{p}_j)^2}{n_1 \ \hat{p}_j}+\sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ \hat{p}_j)^2}{n_2 \ \hat{p}_j}

Since k-1 parameters are estimated, the degrees of freedom of this test statistic is obtained by subtracting k-1 from 2k-2. Thus the degrees of freedom is 2k-2-(k-1)=k-1. We test the null hypothesis H_0 against all alternatives using the upper tailed chi-squared test. We use two examples to demonstrate how this procedure is done.
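
The two-sample procedure can be coded directly from the formulas above. The following is a minimal sketch in Python; the two arrays of cell counts are hypothetical placeholders to be replaced by actual data.

    # A sketch of the two-sample chi-squared test of homogeneity built from the formulas above.
    # The cell counts y1 and y2 are hypothetical placeholders.
    import numpy as np
    from scipy.stats import chi2

    y1 = np.array([30, 45, 25])                  # observed counts, sample 1
    y2 = np.array([40, 38, 22])                  # observed counts, sample 2
    n1, n2 = y1.sum(), y2.sum()

    p_hat = (y1 + y2) / (n1 + n2)                # pooled estimates of the cell probabilities
    e1, e2 = n1 * p_hat, n2 * p_hat              # expected counts under H_0

    stat = ((y1 - e1) ** 2 / e1).sum() + ((y2 - e2) ** 2 / e2).sum()
    df = len(y1) - 1                             # k - 1 degrees of freedom
    p_value = chi2.sf(stat, df)                  # upper tail area
    print(f"chi-squared = {stat:.4f}, df = {df}, p-value = {p_value:.4f}")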

Example 1
A Million Random Digits with 100,000 Normal Deviates is a book of random numbers published by the RAND Corporation in 1955. It contains 1,000,000 random digits, was an important work in statistics, and was used extensively for random number generation in the 20th century. A typical way to pick random numbers from the book is to randomly select a page and then randomly select a point on that page (row and column). Then read off the random digits from that point (going down and then continuing with the next column) until the desired number of digits is obtained. We selected 1,000 random digits in this manner from the book and compared them with 1,000 random digits generated in Excel using the Rand() function. The following table shows the frequency distributions of the digits from the two sources. In the table, MRD = Million Random Digits. Test whether the distributions of digits are the same between MRD and Excel.

    \displaystyle \begin{array} {rrrrr} \text{Digit} & \text{ } & \text{Frequency (MRD)}  & \text{ } & \text{Frequency (Excel)}    \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ 0 & \text{ } & 90  & \text{ } & 93  \\ 1 & \text{ } & 112  & \text{ } & 96  \\ 2 & \text{ } & 104  & \text{ } & 105  \\ 3 & \text{ } & 102  & \text{ } & 95  \\ 4 & \text{ } & 84  & \text{ } & 112  \\ 5 & \text{ } & 110  & \text{ } & 103  \\ 6 & \text{ } & 106  & \text{ } & 98  \\ 7 & \text{ } & 101  & \text{ } & 114  \\ 8 & \text{ } & 101  & \text{ } & 106  \\ 9 & \text{ } & 90  & \text{ } & 78  \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ \text{Total} & \text{ } & 1000 & \text{ } & 1000     \end{array}

The frequencies of the digits are for the most part similar between MRD and Excel except for digits 1 and 4. The null hypothesis H_0 is that the frequencies or probabilities for the digits are the same between the two populations. The following is a precise statement of the null hypothesis.

    H_0: p_{1,j}=p_{2,j}

where j=0,1,\cdots,9 and p_{1,j} is the probability that a random MRD digit is j and p_{2,j} is the probability that a random digit from Excel is j. Under H_0, an estimate of p_{1,j}=p_{2,j} is the ratio \frac{Y_{1,j}+Y_{2,j}}{2000}. For digits 0 and 1, the estimates are (90+93)/2000 = 0.0915 and (112+96)/2000 = 0.104, respectively. The following two tables show the calculation for the chi-squared procedure.

    Chi-Squared Statistic (MRD)
    \displaystyle \begin{array} {ccccccccc} \text{Digit} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{ } & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 90  & \text{ } & 0.0915 & \text{ } & 91.5  & \text{ } & \frac{(90-91.5)^2}{91.5}      \\ 1 & \text{ } & 112  & \text{ } & 0.1040 & \text{ } & 104  & \text{ } & \frac{(112-104)^2}{104}    \\ 2 & \text{ } & 104  & \text{ } & 0.1045 & \text{ } & 104.5  & \text{ } & \frac{(104-104.5)^2}{104.5}      \\ 3 & \text{ } & 102  & \text{ } & 0.0985 & \text{ } & 98.5  & \text{ } & \frac{(102-98.5)^2}{98.5}      \\ 4 & \text{ } & 84  & \text{ } & 0.0980 & \text{ } & 98  & \text{ } & \frac{(84-98)^2}{98}    \\ 5 & \text{ } & 110  & \text{ } & 0.1065 & \text{ } & 106.5  & \text{ } & \frac{(110-106.5)^2}{106.5}      \\ 6 & \text{ } & 106  & \text{ } & 0.1020 & \text{ } & 102  & \text{ } & \frac{(106-102)^2}{102}      \\ 7 & \text{ } & 101  & \text{ } & 0.1075 & \text{ } & 107.5  & \text{ } & \frac{(101-107.5)^2}{107.5}      \\ 8 & \text{ } & 101  & \text{ } & 0.1035 & \text{ } & 103.5  & \text{ } & \frac{(101-103.5)^2}{103.5}      \\ 9 & \text{ } & 90  & \text{ } & 0.0840 & \text{ } & 84  & \text{ } & \frac{(90-84)^2}{84}  \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 1000 & \text{ } & 1.0 & \text{ } & 1000 & \text{ } & 3.920599983     \end{array}

    Chi-Squared Statistic (Excel)
    \displaystyle \begin{array} {ccccccccc} \text{Digit} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{ } & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 93  & \text{ } & 0.0915 & \text{ } & 91.5  & \text{ } & \frac{(93-91.5)^2}{91.5}      \\ 1 & \text{ } & 96  & \text{ } & 0.1040 & \text{ } & 104  & \text{ } & \frac{(96-104)^2}{104}    \\ 2 & \text{ } & 105  & \text{ } & 0.1045 & \text{ } & 104.5  & \text{ } & \frac{(105-104.5)^2}{104.5}      \\ 3 & \text{ } & 95  & \text{ } & 0.0985 & \text{ } & 98.5  & \text{ } & \frac{(95-98.5)^2}{98.5}      \\ 4 & \text{ } & 112  & \text{ } & 0.0980 & \text{ } & 98  & \text{ } & \frac{(112-98)^2}{98}    \\ 5 & \text{ } & 103  & \text{ } & 0.1065 & \text{ } & 106.5  & \text{ } & \frac{(103-106.5)^2}{106.5}      \\ 6 & \text{ } & 98  & \text{ } & 0.1020 & \text{ } & 102  & \text{ } & \frac{(98-102)^2}{102}      \\ 7 & \text{ } & 114  & \text{ } & 0.1075 & \text{ } & 107.5  & \text{ } & \frac{(114-107.5)^2}{107.5}      \\ 8 & \text{ } & 106  & \text{ } & 0.1035 & \text{ } & 103.5  & \text{ } & \frac{(106-103.5)^2}{103.5}      \\ 9 & \text{ } & 78  & \text{ } & 0.0840 & \text{ } & 84  & \text{ } & \frac{(78-84)^2}{84}  \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 1000 & \text{ } & 1.0 & \text{ } & 1000 & \text{ } & 3.920599983     \end{array}

The value of the chi-squared statistic is 7.841199966, the sum of the two individual ones. The degrees of freedom of the chi-squared statistic is 10 - 1 = 9. At level of significance \alpha=0.05, the critical value (the point beyond which the upper tail area of the chi-squared density curve with 9 degrees of freedom is 0.05) is 16.9. Thus we do not reject the null hypothesis that the distributions of digits in these two sources of random numbers are the same. Given the value of the chi-squared statistic (7.84), the p-value is 0.55. Since the p-value is large, there is no reason to believe that the digit distributions are different between the two sources. \square
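
For readers who prefer software, the calculation in Example 1 can be reproduced by arranging the counts as a 2 × 10 table and calling scipy.stats.chi2_contingency, which for a table of this form computes the same pooled-estimate statistic described above; this is a sketch assuming scipy is available.

    # Reproducing Example 1 with scipy: the two rows are the MRD and Excel digit counts.
    from scipy.stats import chi2_contingency

    mrd   = [90, 112, 104, 102, 84, 110, 106, 101, 101, 90]
    excel = [93, 96, 105, 95, 112, 103, 98, 114, 106, 78]

    stat, p_value, dof, expected = chi2_contingency([mrd, excel])
    print(f"chi-squared = {stat:.4f}, df = {dof}, p-value = {p_value:.4f}")
    # This should give a chi-squared value of about 7.8412 with 9 degrees of freedom and a p-value of about 0.55.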

Example 2
Two groups of drivers (500 drivers in each group) are observed for a 3-year period. The frequencies of accidents of the two groups are shown below. Test whether the accident frequencies are the same between the two groups of drivers.

    \displaystyle \begin{array} {rrrrr} \text{Number of Accidents} & \text{ } & \text{Frequency (Group 1)}  & \text{ } & \text{Frequency (Group 2)}    \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ 0 & \text{ } & 193  & \text{ } & 154  \\ 1 & \text{ } & 185  & \text{ } & 191  \\ 2 & \text{ } & 88  & \text{ } & 97  \\ 3 & \text{ } & 29  & \text{ } & 38  \\ 4 & \text{ } & 4  & \text{ } & 17  \\ 5 & \text{ } & 1  & \text{ } & 3    \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ \text{Total} & \text{ } & 500 & \text{ } & 500     \end{array}

To ensure that the expected count in the last cell is not too small, we collapse two cells (4 and 5 accidents) into one. The following two tables show the calculation for the chi-squared procedure.

    Chi-Squared Statistic (Group 1)
    \displaystyle \begin{array} {ccccccccc} \text{Number of} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{Accidents} & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 193  & \text{ } & 0.347 & \text{ } & 173.5  & \text{ } & \frac{(193-173.5)^2}{173.5}      \\ 1 & \text{ } & 185  & \text{ } &0.376  & \text{ } & 188  & \text{ } & \frac{(185-188)^2}{188}    \\ 2 & \text{ } & 88  & \text{ } &0.185  & \text{ } & 92.5  & \text{ } & \frac{(88-92.5)^2}{92.5}      \\ 3 & \text{ } & 29  & \text{ } & 0.067 & \text{ } & 33.5  & \text{ } & \frac{(29-33.5)^2}{33.5}      \\ 4+ & \text{ } & 5  & \text{ } & 0.025 & \text{ } & 12.5  & \text{ } & \frac{(5-12.5)^2}{12.5}      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 500 & \text{ } & 1.0 & \text{ } & 500 & \text{ } & 7.562911523     \end{array}

    Chi-Squared Statistic (Group 2)
    \displaystyle \begin{array} {ccccccccc} \text{Number of} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{Accidents} & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 154  & \text{ } & 0.347 & \text{ } & 173.5  & \text{ } & \frac{(154-173.5)^2}{173.5}      \\ 1 & \text{ } & 191  & \text{ } &0.376  & \text{ } & 188  & \text{ } & \frac{(191-188)^2}{188}    \\ 2 & \text{ } & 97  & \text{ } &0.185  & \text{ } & 92.5  & \text{ } & \frac{(97-92.5)^2}{92.5}      \\ 3 & \text{ } & 38  & \text{ } & 0.067 & \text{ } & 33.5  & \text{ } & \frac{(38-33.5)^2}{33.5}      \\ 4+ & \text{ } & 20  & \text{ } & 0.025 & \text{ } & 12.5  & \text{ } & \frac{(20-12.5)^2}{12.5}      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 500 & \text{ } & 1.0 & \text{ } & 500 & \text{ } & 7.562911523     \end{array}

The total value of the chi-squared statistic is 15.1258 with df = 4. At level of significance \alpha=0.01, the critical value is 13.2767. Since the chi-squared statistic is larger than 13.2767, we reject the null hypothesis that the accident frequencies are the same between the two groups of drivers. We reach the same conclusion by looking at the p-value. The p-value of the chi-squared statistic of 15.1258 is 0.004447242, which is quite small. So we have reason to believe that a chi-squared value of 15.1258 is too large to be explained by random fluctuation, and thus that the two groups have different accident rates. \square
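
As with Example 1, this result can be checked with software by passing the two rows of collapsed counts (the 4-accident and 5-accident cells combined) to scipy.stats.chi2_contingency; again this is a sketch assuming scipy is available.

    # Reproducing Example 2 with scipy after collapsing the 4 and 5 accident cells.
    from scipy.stats import chi2_contingency

    group1 = [193, 185, 88, 29, 5]     # 0, 1, 2, 3, 4+ accidents
    group2 = [154, 191, 97, 38, 20]

    stat, p_value, dof, expected = chi2_contingency([group1, group2])
    print(f"chi-squared = {stat:.4f}, df = {dof}, p-value = {p_value:.6f}")
    # This should give a chi-squared value of about 15.1258 with 4 degrees of freedom and a p-value of about 0.0044.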

_______________________________________________________________________________________________

Comparing Two or More Distributions

The procedure demonstrated in the previous section can be easily extended to handle more than two distributions. Suppose that the focus of interest is a certain multinomial experiment that results in k distinct outcomes. Suppose that the experiment is performed r times with the samples drawn from different populations. The r iterations of the experiment are independent. Note the following quantities.

    p_{i,j} is the probability that the outcome in the ith experiment falls into the jth cell where i=1,2,\cdots,r and j=1,2,\cdots,k.

    n_i is the number of independent multinomial trials in the ith experiment (the size of the ith sample) where i=1,2,\cdots,r.

    Y_{i,j} is the number of trials in the ith experiment whose outcomes fall into cell j where i=1,2,\cdots,r and j=1,2,\cdots,k.

With these in mind, consider the following chi-squared statistics

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ p_{i,j})^2}{n_i \ p_{i,j}}

where i=1,2,\cdots,r. Each of the above r statistics has an approximate chi-squared distribution with k-1 degrees of freedom. Since the experiments are independent, the sum of all these chi-squared statistics

    \displaystyle \sum \limits_{i=1}^r \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ p_{i,j})^2}{n_i \ p_{i,j}}

has an approximate chi-squared distribution with df = r(k-1). The null hypothesis is that the cell probabilities are the same across all populations. The following is the formal statement.

    H_0: p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j}

where j=1,2,\cdots,k. The unknown cell probabilities are to be estimated using sample data as follows:

    \displaystyle \hat{p}_j=\frac{Y_{1,j}+Y_{2,j}+\cdots+Y_{r,j}}{n_1+n_2+\cdots+n_r}

where j=1,2,\cdots,k. The reasoning behind \hat{p}_j is that if H_0 is true, then the r iterations of the experiment can be viewed as one large combined experiment. Then Y_{1,j}+Y_{2,j}+\cdots+Y_{r,j} is simply the number of observations in the combined experiment that fall into cell j. Thus \hat{p}_j is an estimate of the common cell probability p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j}, and only the estimates for j=1,2,\cdots,k-1 are needed.

The next step is to replace the cell probabilities by the estimates \hat{p}_j to obtain the following chi-squared statistic.

    \displaystyle \sum \limits_{i=1}^r \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ \hat{p}_j)^2}{n_i \ \hat{p}_j}

Since we only have to estimate the cell probabilities p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j} for j up to k-1, the degrees of freedom of the above statistic is r(k-1)-(k-1)=(r-1)(k-1). In other words, the degrees of freedom is the number of experiments less one, times the number of cells less one.

Once all the components are in place, we obtain the critical value of the chi-squared distribution of the df indicated above with an appropriate level of significance to decide on the rejection or acceptance of the null hypothesis. The p-value approach can also be used.
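
A minimal sketch of the general procedure in Python, written as a function that takes an r × k array of observed counts (one row per population), might look as follows; the counts passed in at the end are hypothetical.

    # A sketch of the general chi-squared test of homogeneity for r populations and k cells.
    import numpy as np
    from scipy.stats import chi2

    def homogeneity_test(counts):
        """counts: r x k array of observed cell counts, one row per population."""
        counts = np.asarray(counts, dtype=float)
        n_i = counts.sum(axis=1, keepdims=True)       # sample size of each population
        p_hat = counts.sum(axis=0) / counts.sum()     # pooled cell probability estimates
        expected = n_i * p_hat                        # expected counts under H_0
        stat = ((counts - expected) ** 2 / expected).sum()
        df = (counts.shape[0] - 1) * (counts.shape[1] - 1)
        return stat, df, chi2.sf(stat, df)            # statistic, df, p-value

    # Hypothetical counts for three populations over four cells
    stat, df, p_value = homogeneity_test([[25, 30, 28, 17],
                                          [31, 27, 22, 20],
                                          [29, 33, 25, 13]])
    print(f"chi-squared = {stat:.4f}, df = {df}, p-value = {p_value:.4f}")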

_______________________________________________________________________________________________

\copyright \ 2017 - \text{Dan Ma}