Chi-squared test

The chi-squared test is a useful and versatile test. There are several interpretations of the chi-squared test, which are discussed in three previous posts. The different uses of the same test can be confusing to students. This post connects the ideas in the three previous posts and supplements the previous discussions.

The chi-squared test is based on the chi-squared statistic, which measures the magnitude of the difference between the observed counts and the expected counts in an experimental design that involves one or more categorical variables. The expected counts are the counts we would expect to see if the null hypothesis were true; under the null hypothesis, the observed counts and the expected counts should be close. A large value of the chi-squared statistic gives evidence for the rejection of the null hypothesis.

The chi-squared test is also simple to use. The chi-squared statistic has an approximate chi-squared distribution, which makes it easy to evaluate the sample data. The chi-squared test is included in various software packages. For applications with a small number of categories, the calculation can even be done with a hand-held calculator.

_______________________________________________________________________________________________

The Goodness-of-Fit Test and the Test of Homogeneity

The three interpretations of the chi-squared test have been discussed in these posts: goodness-of-fit test, test of homogeneity and test of independence.

The three different uses of the test as discussed in the three previous posts can be kept straight by having a firm understanding of the underlying experimental design.

For the goodness-of-fit test, there is only one population involved. The experiment is to measure one categorical variable on one population. Thus only one sample is used in applying the chi-squared test. The one-sample data produce the observed counts for the categorical variable in question. Let’s say the variable has k cells. Then there are k observed counts. The expected counts for the k cells come from a hypothesized distribution of the categorical variable. The chi-squared statistic is then the sum of the k squared differences between the observed and expected counts (each normalized by dividing by the expected count). Essentially the hypothesized distribution is the null hypothesis. More specifically, the null hypothesis is the statement that the cell probabilities are those derived from the hypothesized distribution.

As a quick example, we may want to answer the question of whether a given die is a fair die. We observe n rolls of the die and classify the rolls into 6 cells (the values 1 to 6). The null hypothesis is that the values of the die follow a uniform distribution. Another way to state the hypothesis is that each cell probability is 1/6. Another example is testing whether the claim frequency of a group of insured drivers follows a Poisson distribution. The cell probabilities are then calculated based on the assumption of a Poisson distribution. In short, the goodness-of-fit test is to test whether the observed counts for one categorical variable come from (or fit) a hypothesized distribution. See Example 1 and Example 2 in the post on the goodness-of-fit test.
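
To make the mechanics concrete, the following is a minimal sketch of the fair-die test in Python (using SciPy's chisquare function). The roll counts are hypothetical and are only meant to show the shape of the calculation; the post's own examples are worked in the goodness-of-fit post.

    # A minimal sketch of the fair-die goodness-of-fit test (hypothetical counts).
    from scipy.stats import chisquare

    observed = [95, 108, 102, 89, 113, 93]   # hypothetical counts of faces 1-6 in 600 rolls
    n = sum(observed)
    expected = [n / 6] * 6                   # under H0, each cell probability is 1/6

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    print(stat, p_value)                     # df = 6 - 1 = 5; a large statistic (small p-value) rejects H0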

In the test of homogeneity, the focus is to compare two or more populations (or two or more subpopulations of a population) on the same categorical variable, i.e. to test whether the categorical variable in question follows the same distribution across the different populations. For example, do two different groups of insured drivers exhibit the same claim frequency rates? Do adults with different educational attainment levels have the same proportions of current smokers/former smokers/never smokers? Are political affiliations similar across racial/ethnic groups? In this test, the goal is to determine whether the cells of the categorical variable have the same proportions across the populations, hence the name test of homogeneity. In the experiment, researchers sample each population (or group) separately on the categorical variable in question. Thus there are multiple samples (one for each group) and the samples are independent.

In the test of homogeneity, the calculation of the chi-squared statistic would involve adding up the squared differences of the observed counts and expected counts for the multiple samples. For illustration, see Example 1 and Example 2 in the post on test of homogeneity.

_______________________________________________________________________________________________

Test of Independence

The test of independence can be confused with the test of homogeneity because the objectives of the two tests can be similar. For example, a test of hypothesis might seek to determine whether the proportions of smoking statuses (current smoker, former smoker and never smoker) are the same across the groups with different education levels. This sounds like a test of homogeneity since it seeks to determine whether the distribution of smoking status is the same across the different groups (levels of educational attainment). However, a test of independence can also have this same objective.

The difference between the test of homogeneity and the test of independence is one of experimental design. In the test of homogeneity, the researchers sample each group (or population) separately. For example, they would sample individuals from groups with various levels of education separately and classify the individuals in each group by smoking status. The chi-squared test to use in this case is the test of homogeneity. In this experimental design, the experimenter might sample 1,000 individuals who are not high school graduates, 1,000 individuals who are high school graduates, 1,000 individuals who have some college, and so on. Then the experimenter would compare the distribution of smoking status across the different samples.

An experimenter using a test of independence might try to answer the same question but proceeds in a different way. The experimenter would sample individuals from a given population and observe two categorical variables (e.g. level of education and smoking status) for each individual.

Then the researchers would classify each individual into a cell in a two-way table. See Table 3b in the previous post on the test of independence. The values of the level of education go across the columns of the table (the column variable). The values of the smoking status go down the rows (the row variable). Each individual in the sample belongs to one cell in the table according to the values of the row and column variables. The two-way table helps determine whether the row variable and the column variable are associated in the given population. In other words, the experimenter is interested in finding out whether one variable explains the other (or one variable affects the other).

For ease of discussion, let’s say the column variable (level of education) is the explanatory variable. The experimenter would then be interested in whether the conditional distributions of the row variable (smoking status) are similar or different across the columns. If they are similar, it means that the column variable does not affect the row variable (or the two variables are not associated). This would also mean that the distributions of smoking status are the same across the different levels of education (a conclusion of homogeneity).

If the conclusion is that the conditional distributions of the row variable (smoking status) are different across the columns, then the column variable does affect the row variable (or the two variables are associated). This would also mean that the distributions of smoking status are different across the different levels of education (a conclusion of non-homogeneity).

The test of independence and the test of homogeneity are based on two different experimental designs. Hence their implementations of the chi-squared statistic are different. However, each design can be structured to answer similar questions.

_______________________________________________________________________________________________
\copyright 2017 – Dan Ma

The Chi-Squared Distribution, Part 3c

This post is part of a series of posts on the chi-squared distribution. Three of the posts (this one and the previous two) deal with inference involving categorical variables. This post discusses the chi-squared test of independence. The previous two posts are on the chi-squared goodness-of-fit test (part 3a) and the chi-squared test of homogeneity (part 3b).

The first post in the series is an introduction to the chi-squared distribution. The second post is on several inference procedures that are based on the chi-squared distribution and deal with quantitative measurements.

The 3-part discussion in part 3a, part 3b and part 3c covers three different interpretations of the chi-squared test. Refer to the other two posts for the other two interpretations. We also make remarks below on the three chi-squared tests.

_______________________________________________________________________________________________

Two-Way Tables

In certain analyses of count data on categorical variables, we are interested in whether two categorical variables are associated with one another (or are related to one another). In such an analysis, it is useful to represent the count data in a two-way table or contingency table. The following gives two examples using survival data from the ocean liner Titanic.

    Table 1 – Survival status of the passengers in the Titanic by gender group
    \displaystyle \begin{array} {ccccccccc} \text{Survival} & \text{ } & \text{Women}  & \text{ } & \text{Children}  & \text{ } & \text{Men} & \text{ } & \text{ } \\  \text{Status} & \text{ } & \text{ }  & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Yes} & \text{ } & 304  & \text{ } & 56 & \text{ } & 130  & \text{ } & \text{ }      \\ \text{No} & \text{ } & 112  & \text{ } & 56 & \text{ } & 638  & \text{ } & \text{ }       \end{array}

    Table 2 – Survival status of the passengers in the Titanic by passenger class
    \displaystyle \begin{array} {ccccccccc} \text{Survival} & \text{ } & \text{First}  & \text{ } & \text{Second}  & \text{ } & \text{Third} & \text{ } & \text{ } \\  \text{Status} & \text{ } & \text{Class}  & \text{ } & \text{Class}  & \text{ } & \text{Class} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Yes} & \text{ } & 200  & \text{ } & 117 & \text{ } & 172  & \text{ } & \text{ }      \\ \text{No} & \text{ } & 119  & \text{ } & 152 & \text{ } & 527  & \text{ } & \text{ }       \end{array}

Table 1 shows the count data for the survival status (survived or not survived) and gender of the passengers in the one and only voyage of the passenger liner Titanic. Table 2 shows the survival status and the passenger class of the passengers of the Titanic. Both tables are contingency tables (or two-way tables) since each table relates two categorical variables – survival status and gender in Table 1, and survival status and passenger class in Table 2. Each table summarizes the categorical data by counting the number of observations that fall into each group for the two variables. For example, Table 1 shows that there were 304 women passengers who survived. In both tables, the survival status is the row variable. The column variable is gender (Table 1) or passenger class (Table 2).

It is clear from both tables that most of the deaths were either men or third class passengers. This observation is not surprising because of the mentality of “Women and Children First” and the fact that first class passengers were better treated than the other classes. Thus we can say that there is an association between gender and survival and an association between passenger class and survival in the sinking of the Titanic. More specifically, the survival rates for women and children were much higher than for men, and the survival rate for first class passengers was much higher than for the other two classes.

When a study measures two categorical variables on each individual in a random sample, the results can be summarized in a two-way table, which can then be used for studying the relationship between the two variables. As a first step, the joint distribution, marginal distributions and conditional distributions are analyzed. Table 1 is analyzed here. Table 2 is analyzed here. Though the Titanic survival data show a clear association between survival and gender (and passenger class), the discussion of the Titanic survival data in these two previous posts is still very useful. These two posts demonstrate how to analyze the relationship between two categorical variables by looking at the marginal distributions and conditional distributions.

This post goes one step further by analyzing the relationship in a two-way table using the chi-squared test. The method discussed here is called the chi-squared test of independence; the test determines whether there is a relationship between the two categorical variables displayed in a two-way table.

_______________________________________________________________________________________________

Test of Independence

We demonstrate the test of independence by working through the following example (Example 1). When describing how the method works in general, the two-way table has r rows and c columns (not including the total row and the total column).

Example 1
The following table shows the smoking status and the level of education of residents (aged 25 or over) of a medium-sized city on the East Coast of the United States, based on a survey conducted on a random sample of 1,078 adults aged 25 or older.

    Table 3 – Smoking Status and Level of Education
    \displaystyle \begin{array} {lllllllllll} \text{Smoking} & \text{ } & \text{Did Not}  & \text{ } & \text{High}  & \text{ } & \text{Some} & \text{ } & \text{Bachelor's} \\  \text{Status} & \text{ } & \text{Finish}  & \text{ } & \text{School}  & \text{ } & \text{College or} & \text{ } & \text{Degree}   \\ \text{ } & \text{ } & \text{High}  & \text{ } & \text{Graduate} & \text{ } & \text{Associate}  & \text{ } & \text{or}  \\ \text{ } & \text{ } & \text{School}  & \text{ } & \text{No College} & \text{ } & \text{Degree}  & \text{ } & \text{Higher}      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Current} & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Smoker} & \text{ } & 177  & \text{ } & 141 & \text{ } & 48  & \text{ } & 35    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Former} & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Smoker} & \text{ } & 89  & \text{ } & 70 & \text{ } & 26  & \text{ } & 36    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Never} & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Smoker} & \text{ } & 210  & \text{ } & 146 & \text{ } & 47  & \text{ } & 53         \end{array}

The researcher is interested in finding out whether smoking status is associated with the level of education among the adults in this city. Do the data in Table 3 provide sufficient evidence to indicate that smoking status is affected by the level of education among the adults in this city?

The two categorical variables in this example are smoking status (current, former and never smoker) and level of education (the 4 categories listed in the columns in Table 3). The researcher views level of education as an explanatory variable and smoking status as the response variable. Table 3 has 3 rows and 4 columns. Thus there are 12 cells in the table. It is helpful to obtain the total for each row and the total for each column.

    Table 3a – Smoking Status and Level of Education
    \displaystyle \begin{array} {lllllllllll} \text{Smoking} & \text{ } & \text{Did Not}  & \text{ } & \text{High}  & \text{ } & \text{Some} & \text{ } & \text{Bachelor's} & \text{ } & \text{Total}\\  \text{Status} & \text{ } & \text{Finish}  & \text{ } & \text{School}  & \text{ } & \text{College or} & \text{ } & \text{Degree}   \\ \text{ } & \text{ } & \text{High}  & \text{ } & \text{Graduate} & \text{ } & \text{Associate}  & \text{ } & \text{or}  \\ \text{ } & \text{ } & \text{School}  & \text{ } & \text{No College} & \text{ } & \text{Degree}  & \text{ } & \text{Higher}    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Current} & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Smoker} & \text{ } & 177  & \text{ } & 141 & \text{ } & 48  & \text{ } & 35 & \text{ } & 401      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Former} & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Smoker} & \text{ } & 89  & \text{ } & 70 & \text{ } & 26  & \text{ } & 36 & \text{ } & 221      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Never} & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Smoker} & \text{ } & 210  & \text{ } & 146 & \text{ } & 47  & \text{ } & 53 & \text{ } & 456    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Total} & \text{ } & 476  & \text{ } & 357 & \text{ } & 121  & \text{ } & 124 & \text{ } & 1078         \end{array}

The null hypothesis H_0 in a two-way table is the statement that there is no association between the row variable and the column variable, i.e., the row variable and the column variable are independent. The alternative hypothesis H_1 states that there is an association between the two variables. For Table 3, the null hypothesis is that there is no association between the smoking status and the level of education among the adults in this city.

In Table 3 or 3a, each column is a distribution of smoking status (one for each level of education). Another way to state the null hypothesis is that the distributions of smoking status are the same across the four levels of education. The alternative hypothesis is that the distributions are not all the same.

Our goal is to use the chi-squared statistic to evaluate the data in the two-way table. The chi-squared statistic is based on the squared differences between the observed counts in Table 3a and the expected counts derived under the null hypothesis. The following shows how to calculate the expected count for each cell assuming the null hypothesis.

    \displaystyle \text{Expected Cell Count}=\frac{\text{Row Total } \times \text{ Column Total}}{n}

The n in the denominator is the total number of observations in the two-way table. For Table 3a, n= 1078. For the cell of “Current Smoker” and “Did not Finish High School”, the expected count would be 476 x 401 / 1078 = 177.06. The other expected counts are calculated accordingly and are shown in Table 3b with the expected counts in parentheses.

    Table 3b – Smoking Status and Level of Education
    \displaystyle \begin{array} {lllllllllll} \text{Smoking} & \text{ } & \text{Did Not}  & \text{ } & \text{High}  & \text{ } & \text{Some} & \text{ } & \text{Bachelor's} & \text{ } & \text{Total}\\  \text{Status} & \text{ } & \text{Finish}  & \text{ } & \text{School}  & \text{ } & \text{College or} & \text{ } & \text{Degree}   \\ \text{ } & \text{ } & \text{High}  & \text{ } & \text{Graduate} & \text{ } & \text{Associate}  & \text{ } & \text{or}  \\ \text{ } & \text{ } & \text{School}  & \text{ } & \text{No College} & \text{ } & \text{Degree}  & \text{ } & \text{Higher}    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Current} & \text{ } & 177  & \text{ } & 141 & \text{ } & 48  & \text{ } & 35 & \text{ } & 401  \\ \text{Smoker} & \text{ } & (177.06)  & \text{ } & (132.80) & \text{ } & (45.01)  & \text{ } & (46.13) & \text{ } & (401)      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Former} & \text{ } & 89  & \text{ } & 70 & \text{ } & 26  & \text{ } & 36 & \text{ } & 221  \\ \text{Smoker} & \text{ } & (97.58)  & \text{ } & (73.19) & \text{ } & (24.81)  & \text{ } & (25.42) & \text{ } & (221)      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Never} & \text{ } & 210  & \text{ } & 146 & \text{ } & 47  & \text{ } & 53 & \text{ } & 456  \\ \text{Smoker} & \text{ } & (201.35)  & \text{ } & (151.01) & \text{ } & (51.18)  & \text{ } & (52.45) & \text{ } & (456)    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }  \\ \text{Total} & \text{ } & 476  & \text{ } & 357 & \text{ } & 121  & \text{ } & 124 & \text{ } & 1078  \\ \text{ } & \text{ } & (476)  & \text{ } & (357) & \text{ } & (121)  & \text{ } & (124) & \text{ } & (1078)       \end{array}

How do we know if the formula for the expected cell count is correct? Look at the right margin of Table 3a or 3b (the Total column). The counts 401, 221 and 456 as percentages of the total 1078 are 37.20%, 20.50% and 42.30%. If the null hypothesis that there is no relation between level of education and smoking status is true, we would expect these overall percentages to apply to each level of education. For example, there are 476 adults in the sample who did not complete high school. We would expect 37.20% of them to be current smokers, 20.50% of them to be former smokers and 42.30% of them to be never smokers if indeed smoking status is not affected by level of education. In particular, 37.20% x 476 = 177.072 (the same as 177.06 ignoring the rounding difference). Note that 37.20% is the fraction 401/1078. As a result, 37.20% x 476 is identical to 476 x 401 / 1078. This confirms the formula stated above.
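
The expected counts can also be produced in one step as the outer product of the row totals and the column totals divided by n. The following Python sketch (using NumPy) reproduces the parenthesized values in Table 3b from the observed counts in Table 3a.

    # Expected counts for Table 3a under the null hypothesis of no association.
    import numpy as np

    observed = np.array([[177, 141, 48, 35],    # current smoker
                         [ 89,  70, 26, 36],    # former smoker
                         [210, 146, 47, 53]])   # never smoker

    row_totals = observed.sum(axis=1)           # 401, 221, 456
    col_totals = observed.sum(axis=0)           # 476, 357, 121, 124
    n = observed.sum()                          # 1078

    # expected count = (row total x column total) / n for every cell
    expected = np.outer(row_totals, col_totals) / n
    print(np.round(expected, 2))                # matches the values in parentheses in Table 3b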

We now compute the chi-squared statistic. From a two-way table perspective, the chi-squared statistic is a measure of how much the observed cell counts deviate from the expected cell counts. The following formula makes this idea more explicit.

    \displaystyle \chi^2=\sum \frac{(\text{Observed Count}-\text{Expected Count})^2}{\text{Expected Count}}

The sum in the formula is over all r \times c cells in the two-way table, where r is the number of rows and c is the number of columns. Note that the calculation of the chi-squared statistic uses the expected counts discussed above, and the expected counts are derived under the null hypothesis. Thus the chi-squared statistic is computed assuming the null hypothesis.

When the observed counts and the expected counts are very different, the value of the chi-squared statistic will be large. Thus large values of the chi-squared statistic provide evidence against the null hypothesis. In order to evaluate the observed data as captured in the chi-squared statistic, we need to have information about the sampling distribution of the chi-squared statistic as defined here.

If the null hypothesis H_0 is true, the chi-squared statistic defined above has an approximate chi-squared distribution with (r-1)(c-1) degrees of freedom. Recall that r is the number of rows and c is the number of columns in the two-way table (not counting the total row and total column).

Thus the null hypothesis H_0 is rejected if the value of the chi-squared statistic exceeds the critical value, which is the point of the chi-squared distribution (with the appropriate df) whose upper-tail area is \alpha (the level of significance). The p-value approach can also be used. The p-value is the probability that a chi-squared random variable (under the assumption of H_0) is more extreme than the observed chi-squared statistic.

The calculation of the chi-squared statistic for Table 3b is best done in software. Performing the calculation in Excel gives \chi^2= 9.62834 with df = (3-1) x (4-1) = 6. At level of significance \alpha= 0.05, the critical value is 12.59. Thus the chi-squared statistic is not large enough to reject the null hypothesis. So the sample results do not provide enough evidence to conclude that smoking status is affected by level of education.

The p-value is 0.1412. Since this is a large p-value, we have the same conclusion that there is not sufficient evidence to reject the null hypothesis.

Both the critical value and the p-value are evaluated using the following functions in Excel.

    Critical Value = CHISQ.INV.RT(level of significance, df)
    p-value = 1 – CHISQ.DIST(test statistic, df, TRUE)
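
For readers working in Python rather than Excel, SciPy offers equivalent functions. The sketch below runs the test of independence on the Table 3a counts and should reproduce the statistic, critical value and p-value above up to rounding.

    # Test of independence for Table 3a using SciPy (should match the Excel results).
    import numpy as np
    from scipy.stats import chi2, chi2_contingency

    observed = np.array([[177, 141, 48, 35],
                         [ 89,  70, 26, 36],
                         [210, 146, 47, 53]])

    stat, p_value, df, expected = chi2_contingency(observed)
    print(stat, df, p_value)                    # about 9.63, 6, 0.141

    alpha = 0.05
    print(chi2.ppf(1 - alpha, df))              # critical value, about 12.59 (like CHISQ.INV.RT)
    print(chi2.sf(stat, df))                    # upper-tail p-value (like 1 - CHISQ.DIST(..., TRUE))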

_______________________________________________________________________________________________

Another Example

Example 2
A researcher wanted to determine whether race/ethnicity is associated with political affiliation among the residents of a medium-sized city in the Eastern United States. The following table shows the race/ethnicity and political affiliation of a random sample of adults in this city.

    Table 4 – Race/Ethnicity and Political Affiliations
    \displaystyle \begin{array} {ccccccccc} \text{Political} & \text{ } & \text{White}  & \text{ } & \text{Black}  & \text{ } & \text{Hispanic} & \text{ } & \text{Asian} \\  \text{Affiliation} & \text{ } & \text{ }  & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Independent} & \text{ } & 142  & \text{ } & 39 & \text{ } & 36  & \text{ } & 120      \\ \text{Democratic} & \text{ } & 130  & \text{ } & 62 & \text{ } & 46  & \text{ } & 116    \\ \text{Republican} & \text{ } & 165  & \text{ } & 38 & \text{ } & 29  & \text{ } & 95           \end{array}

Use the chi-squared test as described above to test whether political affiliation is affected by race and ethnicity.

The following table shows the calculation of the expected counts (in parentheses) and the total counts.

    Table 4a – Race/Ethnicity and Political Affiliations
    \displaystyle \begin{array} {ccccccccccc} \text{Political} & \text{ } & \text{White}  & \text{ } & \text{Black}  & \text{ } & \text{Hispanic} & \text{ } & \text{Asian} & \text{ } & \text{Total}\\  \text{Affiliation} & \text{ } & \text{ }  & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Independent} & \text{ } & 142  & \text{ } & 39 & \text{ } & 36  & \text{ } & 120 & \text{ } & 337  \\ \text{ } & \text{ } & (144.67)  & \text{ } & (46.01) & \text{ } & (36.75)  & \text{ } & (109.57) & \text{ } & (337)    \\ \text{Democratic} & \text{ } & 130  & \text{ } & 62 & \text{ } & 46  & \text{ } & 116 & \text{ } & 354  \\ \text{ } & \text{ } & (151.96)  & \text{ } & (48.34) & \text{ } & (38.60)  & \text{ } & (115.10) & \text{ } & (354)    \\ \text{Republican} & \text{ } & 165  & \text{ } & 38 & \text{ } & 29  & \text{ } & 95 & \text{ } & 327  \\ \text{ } & \text{ } & (140.37)  & \text{ } & (44.65) & \text{ } & (35.66)  & \text{ } & (106.32) & \text{ } & (327)    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ \text{Total} & \text{ } & 437  & \text{ } & 139 & \text{ } & 111  & \text{ } & 331 & \text{ } & 1018  \\ \text{ } & \text{ } & (437)  & \text{ } & (139) & \text{ } & (111)  & \text{ } & (331) & \text{ } & (1018)       \end{array}

The value of the chi-squared statistic, computed in Excel, is 18.35, with df = (3-1) x (4-1) = 6. The critical value at level of significance 0.01 is 16.81. Thus we reject the null hypothesis that there is no relation between race/ethnicity and political party affiliation. The two-way table provides evidence that political affiliation is affected by race/ethnicity.

The p-value is 0.0054. This is the probability of obtaining a calculated value of the chi-squared statistic that is 18.35 or greater (assuming the null hypothesis). Since this probability is so small, it is unlikely that the large chi-squared value of 18.35 occurred by chance alone.
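
The same calculation can be checked in Python; the following sketch should reproduce the statistic of about 18.35 (df = 6) and the p-value of about 0.005 up to rounding.

    # A quick check of Example 2 with SciPy.
    import numpy as np
    from scipy.stats import chi2_contingency

    table4 = np.array([[142, 39, 36, 120],      # Independent
                       [130, 62, 46, 116],      # Democratic
                       [165, 38, 29,  95]])     # Republican

    stat, p_value, df, expected = chi2_contingency(table4)
    print(stat, df, p_value)                    # about 18.35, 6, 0.0054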

One interesting point that should be made is that the chi-squared test of independence does not provide insight into the nature of the association between the row variable and the column variable. To help clarify the association, it will be helpful to conduct analysis using marginal distributions and conditional distributions (as discussed here and here for the Titanic survival data).

_______________________________________________________________________________________________

Remarks

Some students may confuse the test of independence discussed here with the chi-squared test of homogeneity discussed in this previous post. Both tests can be used to test whether the distributions of the row variable are the same across the columns.

Bear in mind that the test of independence as discussed here is a way to test whether two categorical variables (one on the rows and one on the columns) are associated with one another in a population. We discuss two examples here: level of education and smoking status in Example 1, and race/ethnicity and political affiliation in Example 2. In both cases, we want to see whether one of the variables is affected by the other.

The test of homogeneity is a way to test whether two or more subgroups in a population follow the same distribution of a categorical variable. For example, do adults with different educational attainment levels have the same proportions of current smokers/former smokers/never smokers? For example, do adults in different racial groups have different proportions of independents, Democrats and Republicans?

The examples cited for the test of homogeneity seem to be the same examples worked above for the test of independence. However, the two tests are indeed different. The difference is subtle and lies in the way the study is designed.

For the test of independence to be used, the observational units are collected at random from a single population and two categorical variables are observed for each unit. The results are then summarized in a two-way table. For the test of homogeneity, the data are collected by random sampling from each subgroup separately. If Example 2 were to use a test of homogeneity, the study would have to sample each racial/ethnic group separately (say 1,000 white adults, 1,000 black adults and so on) and then compare the proportions of party affiliations across the racial/ethnic groups. For Example 2 to work as a test of independence as discussed here, the study would have to draw one random sample of adults and observe the race/ethnicity and party affiliation of each unit.

Another chi-squared test is called the goodness-of-fit test, discussed here. This test is a way of testing whether a set of observed categorical data comes from a hypothesized distribution (e.g. a Poisson distribution).

All three tests use the same chi-squared statistic, but they are not the same test.

_______________________________________________________________________________________________

Reference

  1. Moore D. S., McCabe G. P., Craig B. A., Introduction to the Practice of Statistics, 7th ed., W. H. Freeman and Company, New York, 2012
  2. Wackerly D. D., Mendenhall III W., Scheaffer R. L., Mathematical Statistics with Applications, Thomson Learning, Inc., California, 2008

_______________________________________________________________________________________________
\copyright \ 2017 - \text{Dan Ma}

The Chi-Squared Distribution, Part 3b

This post is a continuation of the previous post (Part 3a) on the chi-squared test and is also part of a series of posts on the chi-squared distribution. The first post (Part 1) is an introduction to the chi-squared distribution. The second post (Part 2) is on the chi-squared distribution as a mathematical tool for inference involving quantitative variables. Part 3, which focuses on inference on categorical variables using Pearson’s chi-squared statistic, is broken up into three posts. Part 3a is an introduction to the chi-squared test statistic and explains how to perform the chi-squared goodness-of-fit test. Part 3b (this post) focuses on using the chi-squared statistic to compare several populations (test of homogeneity). Part 3c (the next post) focuses on the test of independence.

_______________________________________________________________________________________________

Comparing Two Distributions

The interpretation of the chi-squared test discussed in this post is called the chi-squared test of homogeneity. In this post, we show how the chi-squared statistic is employed to test whether the cell probabilities for certain categories are identical across several populations. We start by examining the two-population case, which can then be extended fairly easily to the case of more than two populations.

Suppose that a multinomial experiment can result in k distinct outcomes. Suppose that the experiment is performed two times with the two samples drawn from two different populations. Let p_{1,j} be the probability that the outcome in the first experiment falls into the jth category (or cell j) and let p_{2,j} be the probability that the outcome in the second experiment falls into the jth category (or cell j) where j=1,2,\cdots,k. Furthermore, suppose that there are n_1 and n_2 independent multinomial trials in the first experiment and the second experiment, respectively.

We are interested in the random variables Y_{1,1}, Y_{1,2}, \cdots, Y_{1,k} and the random variables Y_{2,1}, Y_{2,2}, \cdots, Y_{2,k} where Y_{1,j} is the number of trials in the first experiment whose outcomes fall into cell j and Y_{2,j} is the number of trials in the second experiment whose outcomes fall into cell j. Then the sampling distribution of each of the following

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ p_{1,j})^2}{n_1 \ p_{1,j}}

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ p_{2,j})^2}{n_2 \ p_{2,j}}

has an approximate chi-squared distribution with k-1 degrees of freedom (discussed here). Because the two experiments are independent, the following sum

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ p_{1,j})^2}{n_1 \ p_{1,j}}+\sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ p_{2,j})^2}{n_2 \ p_{2,j}}

has an approximate chi-squared distribution with k-1+k-1=2k-2 degrees of freedom. When p_{1,j} and p_{2,j}, j=1,2,\cdots,k, are unknown, we wish to test the following hypothesis.

    H_0: p_{1,1}=p_{2,1}, p_{1,2}=p_{2,2}, \cdots, p_{1,j}=p_{2,j}, \cdots, p_{1,k}=p_{2,k}

In other words, we wish to test the hypothesis that the cell probabilities associated with the two independent experiments are equal. Since the cell probabilities are generally unknown, we can use sample data to estimate p_{1,j} and p_{2,j}. How do we do that? If the null hypothesis H_0 is true, then the two independent experiments can be viewed as one combined experiment. Then the following ratio

    \displaystyle \hat{p}_j=\frac{Y_{1,j}+Y_{2,j}}{n_1+n_2}

is the pooled sample proportion of observations falling into cell j, j=1,2,\cdots,k. Furthermore, we only have to estimate p_{1,j} and p_{2,j} using \hat{p}_j for j=1,2,\cdots,k-1 since the estimate for cell k is 1-\hat{p}_1-\hat{p}_2-\cdots-\hat{p}_{k-1}. With all these in mind, the following is the test statistic we will need.

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{1,j}-n_1 \ \hat{p}_j)^2}{n_1 \ \hat{p}_j}+\sum \limits_{j=1}^k \frac{(Y_{2,j}-n_2 \ \hat{p}_j)^2}{n_2 \ \hat{p}_j}

Since k-1 parameters are estimated, the degrees of freedom of this test statistic is obtained by subtracting k-1 from 2k-2. Thus the degrees of freedom is 2k-2-(k-1)=k-1. We test the null hypothesis H_0 against all alternatives using the upper tailed chi-squared test. We use two examples to demonstrate how this procedure is done.
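
The two-sample procedure above translates directly into code. The following Python sketch (a helper written for this discussion, not part of the original posts) computes the pooled estimates, the test statistic, the degrees of freedom k-1, and the p-value.

    # Two-sample chi-squared test of homogeneity, computed directly from the formula above.
    import numpy as np
    from scipy.stats import chi2

    def two_sample_homogeneity(y1, y2):
        y1, y2 = np.asarray(y1, dtype=float), np.asarray(y2, dtype=float)
        n1, n2 = y1.sum(), y2.sum()
        p_hat = (y1 + y2) / (n1 + n2)           # pooled estimates of the common cell probabilities
        stat = np.sum((y1 - n1 * p_hat) ** 2 / (n1 * p_hat)) \
             + np.sum((y2 - n2 * p_hat) ** 2 / (n2 * p_hat))
        df = len(y1) - 1                        # 2k - 2 minus the k - 1 estimated parameters
        return stat, df, chi2.sf(stat, df)      # statistic, degrees of freedom, p-value

Applied to the two columns of digit frequencies in Example 1 below, this helper should reproduce the statistic of about 7.84.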

Example 1
A Million Random Digits with 100,000 Normal Deviates is a book of random numbers published by the RAND Corporation in 1955. It contains 1,000,000 random digits, was an important work in statistics, and was used extensively for random number generation in the 20th century. A typical way to pick random numbers from the book is to randomly select a page and then randomly select a starting point on that page (row and column), then read off the random digits from that point (going down the column and continuing with the next columns) until the desired number of digits is obtained. We selected 1,000 random digits in this manner from the book and compared them with 1,000 random digits generated in Excel using the Rand() function. The following table shows the frequency distributions of the digits from the two sources. In the table, MRD = Million Random Digits. Test whether the distributions of digits are the same between MRD and Excel.

    \displaystyle \begin{array} {rrrrr} \text{Digit} & \text{ } & \text{Frequency (MRD)}  & \text{ } & \text{Frequency (Excel)}    \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ 0 & \text{ } & 90  & \text{ } & 93  \\ 1 & \text{ } & 112  & \text{ } & 96  \\ 2 & \text{ } & 104  & \text{ } & 105  \\ 3 & \text{ } & 102  & \text{ } & 95  \\ 4 & \text{ } & 84  & \text{ } & 112  \\ 5 & \text{ } & 110  & \text{ } & 103  \\ 6 & \text{ } & 106  & \text{ } & 98  \\ 7 & \text{ } & 101  & \text{ } & 114  \\ 8 & \text{ } & 101  & \text{ } & 106  \\ 9 & \text{ } & 90  & \text{ } & 78  \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ \text{Total} & \text{ } & 1000 & \text{ } & 1000     \end{array}

The frequencies of the digits are for the most part similar between MRD and Excel except for digits 1 and 4. The null hypothesis H_0 is that the frequencies or probabilities for the digits are the same between the two populations. The following is a precise statement of the null hypothesis.

    H_0: p_{1,j}=p_{2,j}

where j=0,1,\cdots,9 and p_{1,j} is the probability that a random MRD digit is j and p_{2,j} is the probability that a random digit from Excel is j. Under H_0, an estimate of p_{1,j}=p_{2,j} is the ratio \frac{Y_{1,j}+Y_{2,j}}{2000}. For digits 0 and 1, they are (90+93)/2000 = 0.0915, (112+96)/2000 = 0.104, respectively. The following two tables show the calculation for the chi-squared procedure.

    Chi-Squared Statistic (MRD)
    \displaystyle \begin{array} {ccccccccc} \text{Digit} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{ } & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 90  & \text{ } & 0.0915 & \text{ } & 91.5  & \text{ } & \frac{(90-91.5)^2}{91.5}      \\ 1 & \text{ } & 112  & \text{ } & 0.1040 & \text{ } & 104  & \text{ } & \frac{(112-104)^2}{104}    \\ 2 & \text{ } & 104  & \text{ } & 0.1045 & \text{ } & 104.5  & \text{ } & \frac{(104-104.5)^2}{104.5}      \\ 3 & \text{ } & 102  & \text{ } & 0.0985 & \text{ } & 98.5  & \text{ } & \frac{(102-98.5)^2}{98.5}      \\ 4 & \text{ } & 84  & \text{ } & 0.0980 & \text{ } & 98  & \text{ } & \frac{(84-98)^2}{98}    \\ 5 & \text{ } & 110  & \text{ } & 0.1065 & \text{ } & 106.5  & \text{ } & \frac{(110-106.5)^2}{106.5}      \\ 6 & \text{ } & 106  & \text{ } & 0.1020 & \text{ } & 102  & \text{ } & \frac{(106-102)^2}{102}      \\ 7 & \text{ } & 101  & \text{ } & 0.1075 & \text{ } & 107.5  & \text{ } & \frac{(101-107.5)^2}{107.5}      \\ 8 & \text{ } & 101  & \text{ } & 0.1035 & \text{ } & 103.5  & \text{ } & \frac{(101-103.5)^2}{103.5}      \\ 9 & \text{ } & 90  & \text{ } & 0.0840 & \text{ } & 84  & \text{ } & \frac{(90-84)^2}{84}  \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 1000 & \text{ } & 1.0 & \text{ } & 1000 & \text{ } & 3.920599983     \end{array}

    Chi-Squared Statistic (Excel)
    \displaystyle \begin{array} {ccccccccc} \text{Digit} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{ } & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 93  & \text{ } & 0.0915 & \text{ } & 91.5  & \text{ } & \frac{(93-91.5)^2}{91.5}      \\ 1 & \text{ } & 96  & \text{ } & 0.1040 & \text{ } & 104  & \text{ } & \frac{(96-104)^2}{104}    \\ 2 & \text{ } & 105  & \text{ } & 0.1045 & \text{ } & 104.5  & \text{ } & \frac{(105-104.5)^2}{104.5}      \\ 3 & \text{ } & 95  & \text{ } & 0.0985 & \text{ } & 98.5  & \text{ } & \frac{(95-98.5)^2}{98.5}      \\ 4 & \text{ } & 112  & \text{ } & 0.0980 & \text{ } & 98  & \text{ } & \frac{(112-98)^2}{98}    \\ 5 & \text{ } & 103  & \text{ } & 0.1065 & \text{ } & 106.5  & \text{ } & \frac{(103-106.5)^2}{106.5}      \\ 6 & \text{ } & 98  & \text{ } & 0.1020 & \text{ } & 102  & \text{ } & \frac{(98-102)^2}{102}      \\ 7 & \text{ } & 114  & \text{ } & 0.1075 & \text{ } & 107.5  & \text{ } & \frac{(114-107.5)^2}{107.5}      \\ 8 & \text{ } & 106  & \text{ } & 0.1035 & \text{ } & 103.5  & \text{ } & \frac{(106-103.5)^2}{103.5}      \\ 9 & \text{ } & 78  & \text{ } & 0.0840 & \text{ } & 84  & \text{ } & \frac{(78-84)^2}{84}  \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 1000 & \text{ } & 1.0 & \text{ } & 1000 & \text{ } & 3.920599983     \end{array}

The value of the chi-squared statistic is 7.8412, the sum of the two individual sums. The degrees of freedom of the chi-squared statistic is 10 - 1 = 9. At level of significance \alpha=0.05, the critical value (the point whose upper-tail area under the chi-squared density curve is 0.05) is 16.92. Thus we do not reject the null hypothesis that the distributions of digits in these two sources of random numbers are the same. Given the value of the chi-squared statistic (7.84), the p-value is 0.55. Since the p-value is large, there is no reason to believe that the digit distributions are different between the two sources. \square
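
As a check, stacking the two frequency columns into a 2 x 10 table and running SciPy's chi2_contingency gives the same pooled expected counts, so it should reproduce the statistic of about 7.84 with df = 9 up to rounding.

    # A check of Example 1: the 2 x 10 table of digit counts gives the same statistic.
    import numpy as np
    from scipy.stats import chi2_contingency

    mrd   = [90, 112, 104, 102,  84, 110, 106, 101, 101, 90]
    excel = [93,  96, 105,  95, 112, 103,  98, 114, 106, 78]

    stat, p_value, df, expected = chi2_contingency(np.array([mrd, excel]))
    print(stat, df, p_value)                    # about 7.84, 9, 0.55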

Example 2
Two groups of drivers (500 drivers in each group) are observed for a 3-year period. The frequencies of accidents of the two groups are shown below. Test whether the accident frequencies are the same between the two groups of drivers.

    \displaystyle \begin{array} {rrrrr} \text{Number of Accidents} & \text{ } & \text{Frequency (Group 1)}  & \text{ } & \text{Frequency (Group 2)}    \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ 0 & \text{ } & 193  & \text{ } & 154  \\ 1 & \text{ } & 185  & \text{ } & 191  \\ 2 & \text{ } & 88  & \text{ } & 97  \\ 3 & \text{ } & 29  & \text{ } & 38  \\ 4 & \text{ } & 4  & \text{ } & 17  \\ 5 & \text{ } & 1  & \text{ } & 3    \\ \text{ } & \text{ } & \text{ } & \text{ } & \text{ }  \\ \text{Total} & \text{ } & 500 & \text{ } & 500     \end{array}

To ensure that the expected count in the last cell is not too small, we collapse two cells (4 and 5 accidents) into one. The following two tables show the calculation for the chi-squared procedure.

    Chi-Squared Statistic (Group 1)
    \displaystyle \begin{array} {ccccccccc} \text{Number of} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{Accidents} & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 193  & \text{ } & 0.347 & \text{ } & 173.5  & \text{ } & \frac{(193-173.5)^2}{173.5}      \\ 1 & \text{ } & 185  & \text{ } &0.376  & \text{ } & 188  & \text{ } & \frac{(185-188)^2}{188}    \\ 2 & \text{ } & 88  & \text{ } &0.185  & \text{ } & 92.5  & \text{ } & \frac{(88-92.5)^2}{92.5}      \\ 3 & \text{ } & 29  & \text{ } & 0.067 & \text{ } & 33.5  & \text{ } & \frac{(29-33.5)^2}{33.5}      \\ 4+ & \text{ } & 5  & \text{ } & 0.025 & \text{ } & 12.5  & \text{ } & \frac{(5-12.5)^2}{12.5}      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 500 & \text{ } & 1.0 & \text{ } & 500 & \text{ } & 7.562911523     \end{array}

    Chi-Squared Statistic (Group 2)
    \displaystyle \begin{array} {ccccccccc} \text{Number of} & \text{ } & \text{Frequency}  & \text{ } & \text{Estimate}  & \text{ } & \text{Frequency} & \text{ } & \text{Chi-Squared} \\  \text{Accidents} & \text{ } & \text{Observed}  & \text{ } & \text{Cell Probability}  & \text{ } & \text{Expected} & \text{ } & \text{ }    \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }    \\ 0 & \text{ } & 154  & \text{ } & 0.347 & \text{ } & 173.5  & \text{ } & \frac{(154-173.5)^2}{173.5}      \\ 1 & \text{ } & 191  & \text{ } &0.376  & \text{ } & 188  & \text{ } & \frac{(191-188)^2}{188}    \\ 2 & \text{ } & 97  & \text{ } &0.185  & \text{ } & 92.5  & \text{ } & \frac{(97-92.5)^2}{92.5}      \\ 3 & \text{ } & 38  & \text{ } & 0.067 & \text{ } & 33.5  & \text{ } & \frac{(38-33.5)^2}{33.5}      \\ 4+ & \text{ } & 20  & \text{ } & 0.025 & \text{ } & 12.5  & \text{ } & \frac{(20-12.5)^2}{12.5}      \\ \text{ } & \text{ } & \text{ }  & \text{ } & \text{ } & \text{ } & \text{ }  & \text{ } & \text{ }      \\ \text{Total} & \text{ } & 500 & \text{ } & 1.0 & \text{ } & 500 & \text{ } & 7.562911523     \end{array}

The total value of the chi-squared statistic is 15.1258 with df = 4. At level of significance \alpha=0.01, the critical value is 13.2767. Since the chi-squared statistic is larger than 13.2767, we reject the null hypothesis that the accident frequencies are the same between the two groups of drivers. We reach the same conclusion with the p-value approach. The p-value of the chi-squared statistic of 15.1258 is 0.0044, which is quite small. So we have reason to believe that a chi-squared value of 15.1258 is too large to be explained by random fluctuation alone, and thus that the two groups have different accident rates. \square
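
The sketch below repeats this calculation in Python, including the step of collapsing the sparse cells for 4 and 5 accidents into a single 4+ cell.

    # A check of Example 2, with the 4 and 5 accident cells collapsed into a 4+ cell.
    import numpy as np
    from scipy.stats import chi2_contingency

    group1 = np.array([193, 185, 88, 29, 4, 1])
    group2 = np.array([154, 191, 97, 38, 17, 3])

    def collapse_tail(counts, last=4):
        # keep cells 0..last-1 and merge everything from cell "last" onward into one cell
        return np.append(counts[:last], counts[last:].sum())

    table = np.array([collapse_tail(group1), collapse_tail(group2)])
    stat, p_value, df, expected = chi2_contingency(table)
    print(stat, df, p_value)                    # about 15.13, 4, 0.0044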

_______________________________________________________________________________________________

Comparing Two or More Distributions

The procedure demonstrated in the previous section can be easily extended to handle more than two distributions. Suppose that the focus of interest is a certain multinomial experiment that results in k distinct outcomes. Suppose that the experiment is performed r times with the samples drawn from different populations. The r iterations of the experiment are independent. Note the following quantities.

    p_{i,j} is the probability that the outcome in the ith experiment falls into the jth cell where i=1,2,\cdots,r and j=1,2,\cdots,k.

    n_i is the number of times the ith experiment is performed.

    Y_{i,j} is the number of trials in the ith experiment whose outcomes fall into cell j where i=1,2,\cdots,r and j=1,2,\cdots,k.

With these in mind, consider the following chi-squared statistics

    \displaystyle \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ p_{i,j})^2}{n_i \ p_{i,j}}

where i=1,2,\cdots,r. Each of the above r statistics has an approximate chi-squared distribution with k-1 degrees of freedom. Since the experiments are independent, the sum of all these chi-squared statistics

    \displaystyle \sum \limits_{i=1}^r \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ p_{i,j})^2}{n_i \ p_{i,j}}

has an approximate chi-squared distribution with df = r(k-1). The null hypothesis is that the cell probabilities are the same across all populations. The following is the formal statement.

    H_0: p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j}

where j=1,2,\cdots,k. The unknown cell probabilities are to be estimated using sample data as follows:

    \displaystyle \hat{p}_j=\frac{Y_{1,j}+Y_{2,j}+\cdots+Y_{r,j}}{n_1+n_2+\cdots+n_r}

where j=1,2,\cdots,k. The reasoning behind \hat{p}_j is that if H_0 is true, then the r iterations of the experiment are just one large combined experiment. Then Y_{1,j}+Y_{2,j}+\cdots+Y_{r,j} is simply the number of observations that fall into cell j when H_0 is assumed to be true. Thus \hat{p}_j is an estimate of the cell probabilities p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j} for j=1,2,\cdots,k-1.

The next step is to replace the cell probabilities by the estimates \hat{p}_j to obtain the following chi-squared statistic.

    \displaystyle \sum \limits_{i=1}^r \sum \limits_{j=1}^k \frac{(Y_{i,j}-n_i \ \hat{p}_j)^2}{n_i \ \hat{p}_j}

Since we only have to estimate the cell probabilities p_{1,j}=p_{2,j}=p_{3,j}=\cdots =p_{r,j} for j up to k-1, the degrees of freedom of the above statistic is r(k-1)-(k-1)=(r-1) (k-1). In other words, the degrees of freedom is the number of experiments minus one, multiplied by the number of cells minus one.

Once all the components are in place, we compare the test statistic with the critical value of the chi-squared distribution with the df indicated above at an appropriate level of significance to decide whether to reject the null hypothesis. The p-value approach can also be used.
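
The general r-population version is only a small extension of the two-sample helper shown earlier. The following Python sketch implements the double sum above with the pooled estimates and df = (r-1)(k-1); applied to the collapsed driver data of Example 2 it should reproduce the statistic of about 15.13.

    # General chi-squared test of homogeneity for r populations and k cells.
    import numpy as np
    from scipy.stats import chi2

    def homogeneity_test(counts):
        """counts: an r x k array; row i holds the k cell counts from population i."""
        counts = np.asarray(counts, dtype=float)
        r, k = counts.shape
        n_i = counts.sum(axis=1, keepdims=True)         # sample size of each population
        p_hat = counts.sum(axis=0) / counts.sum()       # pooled cell probability estimates
        expected = n_i * p_hat                          # n_i * p_hat_j for every cell
        stat = ((counts - expected) ** 2 / expected).sum()
        df = (r - 1) * (k - 1)
        return stat, df, chi2.sf(stat, df)

    # Example: the two driver groups from Example 2 (after collapsing to the 4+ cell)
    print(homogeneity_test([[193, 185, 88, 29, 5],
                            [154, 191, 97, 38, 20]]))   # about (15.13, 4, 0.0044)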

_______________________________________________________________________________________________

Reference

  1. Moore D. S., McCabe G. P., Craig B. A., Introduction to the Practice of Statistics, 7th ed., W. H. Freeman and Company, New York, 2012
  2. Wackerly D. D., Mendenhall III W., Scheaffer R. L., Mathematical Statistics with Applications, Thomson Learning, Inc., California, 2008

_______________________________________________________________________________________________
\copyright \ 2017 - \text{Dan Ma}

The Chi-Squared Distribution, Part 3a

This post is part 3 of a three-part series on the chi-squared distribution. In this post, we discuss the role played by the chi-squared distribution in experiments or random phenomena that result in measurements that are categorical rather than quantitative (part 2 deals with quantitative measurements). An introduction to the chi-squared distribution is found in part 1.

The chi-squared test discussed here is also referred to as Pearson’s chi-squared test, which was formulated by Karl Pearson in 1900. It can be used to assess three types of comparison on categorical variables – goodness of fit, homogeneity, and independence. As a result, we break up the discussion into 3 parts – part 3a (goodness of fit, this post), part 3b (test of homogeneity) and part 3c (test of independence).

_______________________________________________________________________________________________

Multinomial Experiments

Let’s look at the setting for Pearson’s goodness-of-fit test. Consider a random experiment consisting of a series of independent trials each of which results in exactly one of k categories. We are interested in summarizing the counts of the trials that fall into the k distinct categories. Some examples of such random experiments are:

  • In rolling a die n times, consider the counts of the faces of the die.
  • Perform a series of experiments each of which is a toss of three coins. Summarize the experiments according to the number of heads, 0, 1, 2, and 3, that occur in each experiment.
  • Blood donors can be classified into the blood types A, B, AB and O.
  • Record the number of automobile accidents per week in a one-mile stretch of highway. Classify the weekly accident counts into the groupings 0, 1, 2, 3, 4 and 5+.
  • A group of auto insurance policies is classified by claim frequency: 0, 1, 2, 3, 4+ accidents per year.
  • Auto insurance claims are classified into various claim size groupings, e.g. under 1000, 1000 to 5000, 5000 to 10000 and 10000+.
  • In auditing financial transactions in financial documents (accounting statements, expense reports etc), the leading digits of financial figures can be classified into 9 cells: 1, 2, 3, 4, 5, 6, 7, 8, and 9.

Each of these examples can be referred to as a multinomial experiment. The characteristics of such an experiment are

  • The experiment consists of performing n identical trials that are independent.
  • For each trial, the outcome falls into exactly one of k categories or cells.
  • The probability of the outcome of a trial falling into a particular cell is constant across all trials.

For cell j, let p_j be the probability of the outcome falling into cell j. Of course, p_1+p_2+\cdots+p_k=1. We are interested in the joint random variables Y_1,Y_2,\cdots,Y_k where Y_j is the number of trials whose outcomes fall into cell j.

If k=2 (only two categories for each trial), then the experiment is a binomial experiment. Then one of the categories can be called success (with cell probability p) and the other is called failure (with cell probability 1-p). If Y_1 is the count of the successes, then Y_1 has a binomial distribution with parameters n and p.

In general, the variables Y_1,Y_2,\cdots,Y_k have a multinomial distribution. To be a little more precise, the random variables Y_1,Y_2,\cdots,Y_{k-1} have a multinomial distribution with parameters n and p_1,p_2,\cdots,p_{k-1}. Note that the last variable Y_k is deterministic since Y_k=n-(Y_1+\cdots+Y_{k-1}).

In the discussion here, the objective is to make inference on the cell probabilities p_1,p_2,\cdots,p_k. The hypotheses in the statistical test are expressed in terms of specific values of p_j, j=1,2,\cdots,k. For example, the null hypothesis may be of the following form:

    H_0: p_j=p_{j,0} \text{ for } j=1,2,3,\cdots,k

where p_{j,0} are the hypothesized values of the cell probabilities. It is cumbersome to calculate the probabilities for the multinomial distribution. As a result, it would be difficult (if not impossible) to calculate the exact level of significance, which is the probability of type I error. Thus it is critical to use a test statistic that does not depend on the multinomial distribution. Fortunately this problem was solved by Karl Pearson. He formulated a test statistic that has an approximate chi-squared distribution.

_______________________________________________________________________________________________

Test Statistic

The random variables Y_1,Y_2,\cdots,Y_k discussed above have a multinomial distribution with parameters p_1,p_2,\cdots,p_k, respectively. Of course, each p_j is the probability that the outcome of a trial falls into cell j. The marginal distribution of each Y_j has a binomial distribution with parameters n and p_j with p_j being the probability of success. Thus the expected value and the variance of Y_j are E[Y_j]=n p_j and Var[Y_j]=n p_j (1-p_j). The following is the chi-squared test statistic.

    \displaystyle \chi^2=\sum \limits_{j=1}^k \frac{(Y_j-n \ p_j)^2}{n \ p_j}=\sum \limits_{j=1}^k \frac{(Y_j-E[Y_j])^2}{E[Y_j]} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)

The statistic defined in (1) was proposed by Karl Pearson in 1900. It is defined by summing the squares of the difference of the observed counts Y_j and the expected counts E[Y_j] where each squared difference is normalized by the expected count (i.e. divided by the expected count). On one level, the test statistic in (1) seems intuitive since it involves all the k deviations Y_j-E[Y_j]. If the observed values Y_j are close to the expected cell counts, then the test statistic in (1) would have a small value.

The chi-squared test statistic defined in (1) has an approximate chi-squared distribution when the number of trials n is large. The proof of this fact will not be discussed here. We demonstrate with the case for k=2.

    \displaystyle \begin{aligned} \sum \limits_{j=1}^2 \frac{(Y_j-n \ p_j)^2}{n \ p_j}&=\frac{p_2 \ (Y_1-n p_1)^2+p_1 \ (Y_2-n p_2)^2}{n \ p_1 \ p_2} \\&=\frac{(1-p_1) \ (Y_1-n p_1)^2+p_1 \ ((n-Y_1)-n (1-p_1))^2}{n \ p_1 \ (1-p_1)} \\&=\frac{(1-p_1) \ (Y_1-n p_1)^2+ p_1 \ (Y_1-n p_1)^2}{n \ p_1 \ (1-p_1)} \\&=\frac{(Y_1-n p_1)^2}{n \ p_1 \ (1-p_1)} \\&=\biggl( \frac{Y_1-n p_1}{\sqrt{n \ p_1 \ (1-p_1)}} \biggr)^2 =\biggl( \frac{Y_1-E[Y_1]}{\sqrt{Var[Y_1]}} \biggr)^2 \end{aligned}

The quantity inside the brackets in the last step is approximately standard normal according to the central limit theorem. Since the square of a standard normal random variable has a chi-squared distribution with one degree of freedom (see Part 1), the last step in the above derivation has an approximate chi-squared distribution with 1 df.
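
The algebra above can also be checked numerically. The sketch below uses arbitrary illustrative values of n, p_1 and Y_1 and confirms that, for k=2, the statistic in (1) equals the square of the standardized count.

    import math

    # illustrative values
    n, p1, y1 = 50, 0.3, 19
    y2, p2 = n - y1, 1 - p1

    chi_sq = (y1 - n * p1) ** 2 / (n * p1) + (y2 - n * p2) ** 2 / (n * p2)
    z_sq = ((y1 - n * p1) / math.sqrt(n * p1 * (1 - p1))) ** 2

    print(chi_sq, z_sq)   # the two values agree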

In order for the chi-squared distribution to provide an adequate approximation to the distribution of the test statistic in (1), a rule of thumb requires that the expected cell counts E[Y_j] be at least five. The null hypothesis to be tested is that the cell probabilities p_j are certain specified values p_{j,0} for j=1,2,\cdots,k. The following is the formal statement.

    H_0: p_j=p_{j,0} \text{ for } j=1,2,3,\cdots,k \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)

The null hypothesis is to be tested against all possible alternatives. In other words, the alternative hypothesis H_1 is the statement that p_j \ne p_{j,0} for at least one j.

The chi-squared test statistic in (1) can be used for a goodness-of-fit test, i.e. to test how well a probability model fits the sample data, in other words, to test whether the observed data come from a hypothesized probability distribution. Example 2 below tests whether the Poisson model is a good fit for a set of claim frequency data.

_______________________________________________________________________________________________

Degrees of Freedom

Now that we have addressed the distribution of the test statistic in (1), we need to address two more issues. One is the direction of the hypothesis test (one-tailed or two-tailed). The second is the degrees of freedom. The direction of the test is easy to see. Note that the chi-squared test statistic in (1) is always nonnegative. On the other hand, large differences between the observed and expected cell counts contradict the null hypothesis. Thus if the chi-squared statistic has a large value, we should reject the null hypothesis. So the correct test to use is the upper-tailed chi-squared test.

The number of degrees of freedom is obtained by subtracting one from the cell count k for each independent linear restriction placed on the cell probabilities. There is at least one linear restriction. The sum of all the cell probabilities must be 1. Thus the degrees of freedom must be the result of reducing k by one at least one time. This means that the degrees of freedom of the chi-squared statistic in (1) is at most k-1.

Furthermore, if any parameter in the calculation of the specified cell probabilities is unknown and has to be estimated from the data, then k-1 is reduced further by one for each such parameter. If there is any unknown parameter that needs to be estimated from data, a maximum likelihood estimator (MLE) should be used. All these points will be demonstrated by the examples below.

If the value of the chi-squared statistic in (1) is “large”, we reject the null hypothesis H_0 stated in (2). By large we mean the value of the chi-squared statistic exceeds the critical value for the desired level of significance. The critical value is the point of the chi-squared distribution (with the appropriate df) whose upper-tail area is \alpha, where \alpha is the desired level of significance (e.g. \alpha=0.05, \alpha=0.01 or some other appropriate level). Instead of using the critical value, the p-value approach can also be used. The critical value or the p-value can be looked up in a table or computed using software. For the examples below, chi-squared functions in Excel are used.
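
For readers not using Excel, the same quantities can be computed with other software. The following is a minimal sketch in Python, assuming SciPy is available; the test statistic, degrees of freedom and \alpha are illustrative values.

    from scipy.stats import chi2

    alpha, df, test_stat = 0.05, 5, 3.65        # illustrative values

    critical_value = chi2.ppf(1 - alpha, df)    # upper-tail critical value
    p_value = chi2.sf(test_stat, df)            # P[chi-squared > test statistic]

    print(critical_value, p_value)
    # reject H_0 when test_stat > critical_value, equivalently when p_value < alpha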

_______________________________________________________________________________________________

Examples

Example 1
Suppose that we wish to test whether a given die is a fair die. We roll the die 240 times and the following table shows the results.

    \displaystyle \begin{array} {rr} \text{Cell} & \text{Frequency} \\ 1 & 38  \\ 2 & 35 \\ 3 & 37 \\ 4 & 38  \\ 5 & 42 \\ 6 & 50 \\ \text{ } & \text{ } \\ \text{Total} & 240   \end{array}

The null hypothesis

    \displaystyle H_0: p_1=p_2=p_3=p_4=p_5=p_6=\frac{1}{6}

is tested against the alternative that at least one of the equalities is not true. This example is the simplest problem for testing cell probabilities. Since the specified values for the cell probabilities in H_0 are known, the degrees of freedom is one less than the cell count. Thus df = 5. The following is the chi-squared statistic based on the data and the null hypothesis.

    \displaystyle \begin{aligned} \chi^2&=\frac{(38-40)^2}{40}+\frac{(35-40)^2}{40}+\frac{(37-40)^2}{40} \\& \ \ + \frac{(38-40)^2}{40}+\frac{(42-40)^2}{40}+\frac{(50-40)^2}{40}=3.65  \end{aligned}

At the \alpha=0.05 level of significance, the chi-squared critical value at df = 5 is \chi_{0.05}^2(5)=11.07049769. Since 3.65 < 11.07, the hypothesis that the die is fair is not rejected at \alpha=0.05. The p-value is P[\chi^2 > 3.65] \approx 0.6. With such a large p-value, we also come to the conclusion that the null hypothesis is not rejected. \square

In all the examples, the critical values and the p-values are obtained by using the following functions in Excel.

    critical value
    =CHISQ.INV.RT(level of significance, df)

    p-value
    =1 - CHISQ.DIST(test statistic, df, TRUE)
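
Equivalently, the test in Example 1 can be reproduced in Python with scipy.stats.chisquare, which returns the statistic in (1) and the p-value; this is only a sketch of one possible alternative to the Excel functions above.

    from scipy.stats import chisquare

    observed = [38, 35, 37, 38, 42, 50]
    expected = [40] * 6                      # 240 rolls, each cell probability 1/6

    result = chisquare(observed, f_exp=expected)
    print(result.statistic, result.pvalue)   # approximately 3.65 and 0.60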

Example 2
We now give an example of the chi-squared goodness-of-fit test. The numbers of auto accident claims per year for 700 drivers are recorded by an insurance company. The claim frequency data is shown in the following table.

    \displaystyle \begin{array}{rrr} \text{Claim Count} & \text{ } & \text{Frequency} \\ 0 & \text{ } & 351  \\ 1 & \text{ } & 241 \\ 2 & \text{ } & 73 \\ 3 & \text{ } & 29  \\ 4 & \text{ } & 6 \\ 5+ & \text{ } & 0 \\ \text{ } & \text{ } & \text{ } \\ \text{Total} & \text{ } & 700   \end{array}

Test the hypothesis that the annual claim count for a driver has a Poisson distribution. Use \alpha=0.05. Assume that the claim frequencies of the drivers in question are independent.

The hypothesized distribution of the annual claim frequency is a Poisson distribution with unknown mean \lambda. The MLE of the parameter \lambda is the sample mean, which in this case is \hat{\lambda}=\frac{498}{700}=0.711428571 (a total of 498 claims from the 700 drivers).

Under the assumption that the claim frequency is Poisson with mean \hat{\lambda}, the cell probabilities are calculated using \hat{\lambda}.

    \displaystyle p_1=P[Y=0]=e^{-\hat{\lambda}}=0.4909

    \displaystyle p_2=P[Y=1]=\hat{\lambda} \ e^{-\hat{\lambda}}=0.3493

    \displaystyle p_3=P[Y=2]=\frac{1}{2} \ \hat{\lambda}^2 \ e^{-\hat{\lambda}}=0.1242

    \displaystyle p_4=P[Y=3]=\frac{1}{3!} \ \hat{\lambda}^3 \ e^{-\hat{\lambda}}=0.0295

    \displaystyle p_5=P[Y \ge 4]=1-P[Y=0]-P[Y=1]-P[Y=2]-P[Y=3]=0.0061

Then the null hypothesis is:

    H_0: p_1=0.4909, p_2=0.3493, p_3=0.1242, p_4=0.0295, p_5=0.0061

The null hypothesis is tested against all alternatives. The following table shows the calculation of the chi-squared statistic.

    \displaystyle \begin{array}{rrrrrrrrr}   \text{Cell} & \text{ } & \text{Claim Count} & \text{ } & \text{Cell Probability} & \text{ } & \text{Expected Count} & \text{ } & \text{Chi-Squared}   \\ 1 & \text{ } & 0 & \text{ } & 0.4909 & \text{ } & 343.63 & \text{ } & 0.15807   \\ 2 & \text{ } & 1 & \text{ } & 0.3493 & \text{ } & 244.51 & \text{ } & 0.05039   \\ 3 & \text{ } & 2 & \text{ } & 0.1242 & \text{ } & 86.94 & \text{ } & 2.23515   \\ 4 & \text{ } & 3 & \text{ } & 0.0295 & \text{ } & 20.65 & \text{ } & 3.37639    \\ 5 & \text{ } & 4+ & \text{ } & 0.0061 & \text{ } & 4.27 & \text{ } & 0.70091   \\ \text{ } & \text{ } & \text{ }   \\ \text{Total} & \text{ } & \text{ } & \text{ } & 1.0000 & \text{ } & \text{ } & \text{ } & 6.52091   \end{array}

The degrees of freedom of the chi-squared statistic is df = 5 - 1 - 1 = 3. The first reduction of one is due to the linear restriction that all cell probabilities sum to 1. The second reduction is due to the fact that one unknown parameter \lambda has to be estimated using sample data. Using Excel, the critical value is \chi_{0.05}^2(3)=7.814727903. The p-value is P[\chi^2 > 6.52091]=0.088841503. Thus the null hypothesis is not rejected at the level of significance \alpha=0.05. \square
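
The calculations in Example 2 can be reproduced programmatically. The following is a minimal sketch in Python (assuming SciPy is available); the resulting statistic and p-value differ slightly from the table above because the table uses cell probabilities rounded to four decimal places.

    import numpy as np
    from scipy.stats import poisson, chi2

    claim_counts = np.array([0, 1, 2, 3, 4])
    observed = np.array([351, 241, 73, 29, 6])
    n = observed.sum()                              # 700 drivers

    lam_hat = (claim_counts * observed).sum() / n   # MLE of lambda = sample mean = 498/700

    # cell probabilities: P[Y=0], ..., P[Y=3], with P[Y>=4] for the last cell
    probs = poisson.pmf([0, 1, 2, 3], lam_hat)
    probs = np.append(probs, 1 - probs.sum())

    expected = n * probs
    chi_sq = np.sum((observed - expected) ** 2 / expected)

    df = len(observed) - 1 - 1                      # one restriction plus one estimated parameter
    print(chi_sq, chi2.sf(chi_sq, df))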

Example 3
For many data sets, especially data sets whose values span multiple orders of magnitude, the first digits occur according to the probability distribution indicated below:

    Probability for leading digit 1 = 0.301
    Probability for leading digit 2 = 0.176
    Probability for leading digit 3 = 0.125
    Probability for leading digit 4 = 0.097
    Probability for leading digit 5 = 0.079
    Probability for leading digit 6 = 0.067
    Probability for leading digit 7 = 0.058
    Probability for leading digit 8 = 0.051
    Probability for leading digit 9 = 0.046

This probability distribution was discovered by Simon Newcomb in 1881 and was rediscovered by physicist Frank Benford in 1938. Since then this distribution has become known as Benford’s law. Thus in many data sets, the leading digit 1 occurs about 30% of the time. Data sets for which this law is applicable include demographic data (e.g. income data of a large population, census data such as populations of cities and counties) and scientific data. The law also applies to certain financial data, e.g. tax data, stock exchange data, corporate disbursement and sales data. Thus Benford’s law is a useful tool for forensic accounting and auditing.

The following shows the distribution of first digits in the population counts of all 3,143 counties in the United States (from US census data).

    Count for leading digit 1 = 972
    Count for leading digit 2 = 573
    Count for leading digit 3 = 376
    Count for leading digit 4 = 325
    Count for leading digit 5 = 205
    Count for leading digit 6 = 209
    Count for leading digit 7 = 179
    Count for leading digit 8 = 155
    Count for leading digit 9 = 149

Use the chi-squared goodness-of-fit test to test the hypothesis that the leading digits in the county population data follow Benford’s law. This example is also discussed in this blog post. \square
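
A minimal sketch in Python for carrying out the test in Example 3 is given below; the Benford probabilities and the observed digit counts are the values listed above, and the degrees of freedom is 9 - 1 = 8 since the cell probabilities are fully specified.

    import numpy as np
    from scipy.stats import chi2

    benford_probs = np.array([0.301, 0.176, 0.125, 0.097,
                              0.079, 0.067, 0.058, 0.051, 0.046])
    observed = np.array([972, 573, 376, 325, 205, 209, 179, 155, 149])

    n = observed.sum()                        # 3,143 counties
    expected = n * benford_probs

    chi_sq = np.sum((observed - expected) ** 2 / expected)
    df = len(observed) - 1                    # no parameters estimated from the data
    p_value = chi2.sf(chi_sq, df)

    print(chi_sq, df, p_value)
    # reject H_0 at level alpha if p_value < alpha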

For further information and more examples on the chi-squared test, please see the sources listed in the reference section.

_______________________________________________________________________________________________

Reference

  1. Moore D. S., McCabe G. P., Craig B. A., Introduction to the Practice of Statistics, 7th ed., W. H. Freeman and Company, New York, 2012
  2. Wackerly D. D., Mendenhall III W., Scheaffer R. L., Mathematical Statistics with Applications, Thomson Learning, Inc., California, 2008

_______________________________________________________________________________________________
\copyright \ 2017 - \text{Dan Ma}