
12.2: The Rank Sum Test and Rank Correlation

Created by: CK-12

Learning Objectives

  • Understand the conditions for use of the rank sum test to evaluate a hypothesis about non-paired data.
  • Calculate the mean and the standard deviation of rank from two non-paired samples and use these values to calculate a z-score.
  • Determine the correlation between two variables using the rank correlation test for situations that meet the appropriate criteria using the appropriate test statistic formula.

Introduction

In the previous lesson, we explored the concept of nonparametric tests. As review, we use nonparametric tests when analyzing data that are not normally distributed or homogeneous with respect to variance. While parametric tests are preferred since they have more ‘power,’ they are not always applicable in statistical research.

In the last section we explored two tests - the sign test and the sign rank test. We use these tests when analyzing matched data pairs or categorical data samples. In both of these tests, our null hypothesis states that there is no difference between the distributions of these variables. As mentioned, the sign rank test is a more precise test of this question, but the test statistic can be more difficult to calculate.

But what happens if we want to test whether two samples come from the same non-normal distribution? For this type of question, we use the rank sum test (also known as the Mann-Whitney U test) to assess whether two samples come from the same distribution. This test is sensitive to both the median and the distribution of the samples.

In this section we will learn how to conduct hypothesis tests using the Mann-Whitney U test and the situations in which it is appropriate to do so. In addition, we will explore how to determine the correlation between two variables from non-normal distributions using the rank correlation test for situations that meet the appropriate criteria.

Conditions for Use of the Rank-Sum Test to Evaluate Hypotheses about Non-Paired Data

As mentioned, the rank sum test evaluates the hypothesis that two independent samples are drawn from the same population. As a reminder, we use this test when we are not sure if the assumptions of normality or homogeneity of variance are met. Essentially, this test compares the medians and the distributions of the two independent samples. This test is considered stronger than other nonparametric tests that simply assess median values. For example, two samples can have the same median but very different distributions. If we were assessing just the median value, we would not realize that these samples actually have very different distributions.

When performing the rank sum test, there are several different conditions that need to be met. These include:

  • The observations must be continuously distributed, although the population need not be normally distributed or have homogeneity of variance.
  • The samples drawn from the population must be independent of one another.
  • Each sample must have 5 or more observations. The samples do not need to have the same number of observations.
  • The observations must be on a numeric or ordinal scale. They cannot be categorical variables.

Since the rank sum test evaluates both the median and the distribution of two independent samples, we establish two null hypotheses. Our null hypotheses state that the two medians and the distributions of the independent samples are equal. Symbolically, we could say that H_0: m_1 = m_2 and \sigma_1 = \sigma_2. The alternative hypotheses state that there is a difference in the medians and the standard deviations of the samples.

Calculating the Mean and the Standard Deviation of Rank to Calculate a Z-Score

When performing the rank sum test, we need to calculate a figure known as the U statistic. This statistic takes both the median and the total distribution of the two samples into account. The U statistic has its own distribution, which we use when working with small samples (in this test a ‘small sample’ is defined as a sample of fewer than 20 observations). This distribution is used in the same way that we would use the t and the chi-square distributions. Similar to the t distribution, the U distribution approaches the normal distribution as the size of both samples grows. When we have samples of 20 or more, we do not use the U distribution. Instead, we use the U statistic to calculate the standard z-score.

To calculate the U statistic we must first arrange and rank the data from our two independent samples. We rank all values from both samples from low to high, without regard to which sample each value belongs to. If two values are the same, they both get the average of the ranks for which they tie. The smallest number gets a rank of 1 and the largest number gets a rank of n, where n is the total number of values in the two groups. After we arrange and rank the data, we sum the ranks assigned to the observations in each sample. We record both the sum of these ranks and the number of observations in each of the samples. After we have this information, we can use the following formulas to determine the U statistic:

U_1 & = n_1n_2 + \frac{n_1(n_1 + 1)} {2} - R_1 \\U_2 & = n_1n_2 + \frac{n_2(n_2 + 1)} {2} - R_2

where:

n_1 = number of observations in sample 1

n_2  = number of observations in sample 2

R_1  = sum of the ranks assigned to sample 1

R_2  = sum of the ranks assigned to sample 2

We use the smaller of the two calculated test statistics (i.e., the lesser of U_1 or U_2) to evaluate our hypotheses in smaller samples or to calculate the z-score when working with larger samples.
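
Here is a minimal sketch of this calculation in Python, assuming SciPy is available for the average-rank handling of ties; the function and variable names are illustrative rather than part of the text.

from scipy.stats import rankdata

def rank_sum_u(sample1, sample2):
    """Compute U_1 and U_2 for two independent samples.

    All values are pooled and ranked from low to high; tied values
    receive the average of the ranks they occupy.
    """
    pooled = list(sample1) + list(sample2)
    ranks = rankdata(pooled)              # smallest value gets rank 1
    n1, n2 = len(sample1), len(sample2)
    r1 = ranks[:n1].sum()                 # sum of ranks assigned to sample 1
    r2 = ranks[n1:].sum()                 # sum of ranks assigned to sample 2
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u2 = n1 * n2 + n2 * (n2 + 1) / 2 - r2
    return u1, u2                         # use min(u1, u2) as the test statistic

A useful check on the ranking step is that U_1 + U_2 always equals n_1 n_2.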

When working with larger samples, we need to calculate two additional pieces of information: the mean of the sampling distribution (\mu_U) and the standard deviation of the sampling distribution (\sigma_U ). These calculations are relatively straightforward when we know the numbers of observations in each of the samples. To calculate these figures we use the following formulas:

\mu_U = \frac{n_1n_2} {2}

and

\sigma_U = \sqrt{\frac{(n_1)(n_2)(n_1 + n_2 + 1)} {12}}

Finally, we use the general formula for the test statistic to test our null hypothesis:

z = \frac{U - \mu_U} {\sigma_U}
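
Continuing the sketch above (again with illustrative names), these formulas translate directly into code:

from math import sqrt

def rank_sum_z(u, n1, n2):
    """Standard z-score for the rank sum test with large samples (20 or more per group)."""
    mu_u = n1 * n2 / 2                            # mean of the sampling distribution of U
    sigma_u = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # standard deviation of that distribution
    return (u - mu_u) / sigma_u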

Example:

Say that we are interested in determining the attitudes on the current status of the economy from women that work outside the home and from women that do not work outside the home. We take a sample of 20 women that work outside the home (sample 1) and a sample of 20 women that do not work outside the home (sample 2) and administer a questionnaire that measures their attitude about the economy. These data are found in the tables below:

Women Working Outside the Home
Score Rank
9 1
12 3
13 4
19 8
21 9
27 13
31 16
33 17
34 18
35 19
39 21
40 22
44 25
46 26
49 29
58 33
61 34
63 35
64 36
70 39
R_1 = 408
Women Not Working Outside the Home
Score Rank
10 2
15 5
17 6
18 7
23 10
24 11
25 12
28 14
30 15
37 20
41 23
42 24
47 27
48 28
52 30
55 31
56 32
65 37
69 38
71 40
R_2 = 412

Do these two groups of women have significantly different views on the issue?

Solution:

Since each of our samples has 20 observations, we need to calculate the standard z-score to test the hypothesis that these independent samples came from the same population. To calculate the z-score, we need to first calculate the U, the \mu_U and the \sigma_U statistics. To calculate the U for each of the samples, we use the formulas:

U_1 & = n_1n_2 + \frac{n_1(n_1 + 1)} {2} - R_1 = 20 * 20 + \frac{20(20 + 1)} {2} - 408 = 202 \\U_2 & = n_1n_2 + \frac{n_2(n_2 + 1)} {2} - R_2 = 20 * 20 + \frac{20(20 + 1)} {2} - 412 = 198

Since we use the smaller of the two U statistics, we set U = 198. When calculating the other two figures, we find:

\mu_U = \frac{n_1n_2} {2} = \frac{20 * 20} {2} = 200

and

\sigma_U=\sqrt{\frac{(n_1)(n_2)(n_1 + n_2 + 1)} {12}} = \sqrt{\frac{(20)(20)(20 + 20 + 1)} {12}} = \sqrt{\frac{(400)(41)} {12}} = 36.97

When calculating the z-statistic we find,

z = \frac{U - \mu_U} {\sigma_U} = \frac{198 - 200} {36.97} = -0.05

If we set \alpha = 0.05, the calculated test statistic of -0.05 does not fall beyond the critical values of \pm 1.96. Therefore, we fail to reject the null hypothesis and conclude that these two samples likely come from the same population.
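
As a quick check on this arithmetic, here is the same calculation run through the sketch functions defined earlier (assuming rank_sum_u and rank_sum_z are in scope):

working = [9, 12, 13, 19, 21, 27, 31, 33, 34, 35,
           39, 40, 44, 46, 49, 58, 61, 63, 64, 70]
not_working = [10, 15, 17, 18, 23, 24, 25, 28, 30, 37,
               41, 42, 47, 48, 52, 55, 56, 65, 69, 71]

u1, u2 = rank_sum_u(working, not_working)        # 202.0 and 198.0
z = rank_sum_z(min(u1, u2), len(working), len(not_working))
print(round(z, 2))                               # approximately -0.05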

We can use this z-score to evaluate our hypotheses just as we would with any other hypothesis test. When interpreting the results from the rank sum test, it is important to remember that we are really asking whether or not the populations have the same median and variance. In addition, we are assessing the chance that random sampling would result in medians and distributions as far apart (or as close together) as those observed in the test. If the z-score is large (meaning that we would have a small P-value), we can reject the idea that the difference is a coincidence. If the z-score is small, as in the example above (meaning that we would have a large P-value), we do not have any reason to conclude that the medians of the populations differ, and the samples likely came from the same population.

Determining the Correlation between Two Variables Using the Rank Correlation Test

As we learned in Chapter 9, it is possible to determine the correlation between two variables by calculating the Pearson product-moment correlation coefficient (more commonly known as the linear correlation coefficient or r). The correlation coefficient helps us determine the strength, magnitude and direction of the relationship between two variables with normal distributions.

We also use the Spearman rank correlation coefficient (also known simply as the ‘rank correlation’ coefficient) to measure the strength, magnitude and direction of the relationship between two variables. The test statistic from this test (\rho, or ‘rho’) is the nonparametric alternative to the correlation coefficient, and we use this test when the data do not meet the assumptions about normality. We also use the Spearman rank correlation test when one or both of the variables consist of ranks. The Spearman rank correlation coefficient is defined by the formula:

\rho = 1 - \frac{6 \textstyle\sum d^2} {n(n^2 - 1)}

where d is the difference in statistical rank of corresponding observations and n is the number of pairs of observations.

The test works by converting each of the observations to ranks, just like we learned about with the rank sum test. Therefore, if we were doing a rank correlation of scores on a final exam versus SAT scores, the lowest final exam score would get a rank of 1, the second lowest a rank of 2, etc. The lowest SAT score would get a rank of 1, the second lowest a rank of 2, etc. Similar to the rank sum test, if two observations are equal the average rank is used for both of the observations.

Once the observations are converted to ranks, a correlation analysis is performed on the ranks (note: this analysis is not performed on the observations themselves). The Spearman correlation coefficient is calculated from the columns of ranks. However, because the distributions are non-normal, a regression line is rarely used and we do not calculate a non-parametric equivalent of the regression line. It is easy to use a statistical programming package such as SAS or SPSS to calculate the Spearman rank correlation coefficient. However, for the purposes of this example we will perform this test by hand as shown in the example below.
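
As a minimal sketch of this by-hand approach (using SciPy's rankdata for the average-rank handling of ties; the function name is illustrative):

from scipy.stats import rankdata

def spearman_rho(x, y):
    """Spearman rank correlation via rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    rx, ry = rankdata(x), rankdata(y)   # tied observations get the average rank
    d = rx - ry                         # difference in ranks for each pair
    n = len(x)
    return 1 - 6 * (d ** 2).sum() / (n * (n ** 2 - 1))

Note that rankdata assigns rank 1 to the smallest value, while the worked example below ranks from the largest value down; because both variables are ranked in the same direction either way, the differences d only change sign, so \sum d^2 and therefore \rho are unchanged. Also, the shortcut formula above is the textbook form; statistical packages may give a slightly different value when ties are present.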

Example:

The head of the math department is interested in the correlation between scores on a final math exam and the math SAT score. She took a random sample of 15 students and recorded each student's final exam and math SAT scores. Since SAT scores are designed to be normally distributed, the Spearman rank correlation may be an especially effective tool for this comparison. Use the Spearman rank correlation test to determine the correlation coefficient. The data for this example are recorded below:

Math SAT Score Final Exam Score
595 68
520 55
715 65
405 42
680 64
490 45
565 56
580 59
615 56
435 42
440 38
515 50
380 37
510 42
565 53

Solution:

To calculate the Spearman rank correlation coefficient, we determine the ranks of each of the variables in the data set (above), calculate the difference and then calculate the squared difference for each of these ranks.

Math SAT Score (X) Final Exam Score (Y) X Rank Y Rank d d^2
595 68 4 1 3 9
520 55 8 7 1 1
715 65 1 2 -1 1
405 42 14 12 2 4
680 64 2 3 -1 1
490 45 11 10 1 1
565 56 6.5 5.5 1 1
580 59 5 4 1 1
615 56 3 5.5 -2.5 6.25
435 42 13 12 1 1
440 38 12 14 -2 4
515 50 9 9 0 0
380 37 15 15 0 0
510 42 10 12 -2 4
565 53 6.5 8 -1.5 2.25
Sum 0 36.50

Using the formula for the Spearman correlation coefficient, we find that:

\rho = 1 - \frac{6 \sum d^2} {n(n^2 - 1)} = 1 - \frac{6(36.50)} {15(225 - 1)} = 1 - 0.07 = 0.93

We interpret this rank correlation coefficient in the same way as we interpret the linear correlation coefficient. This coefficient states that there is a strong, positive correlation between the two variables.
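
The same number falls out of the spearman_rho sketch above (assuming that function is in scope):

sat   = [595, 520, 715, 405, 680, 490, 565, 580, 615, 435, 440, 515, 380, 510, 565]
final = [ 68,  55,  65,  42,  64,  45,  56,  59,  56,  42,  38,  50,  37,  42,  53]

print(round(spearman_rho(sat, final), 2))   # approximately 0.93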

Lesson Summary

1. We use the rank sum test (also known as the Mann-Whitney U test) to assess whether two samples come from the same distribution. This test is sensitive to both the median and the distribution of the samples.

2. When performing the rank sum test there are several conditions that need to be met: the observations must be continuously distributed (though the population need not be normally distributed or have homogeneity of variance), the samples must be independent of one another, each sample must have 5 or more observations, and the observations must be on a numeric or ordinal scale.

3. When performing the rank sum test, we need to calculate a figure known as the U statistic. This statistic takes both the median and the total distribution of both samples into account.

4. To calculate the test statistic for the rank sum test, we first must calculate the U statistic, which is derived from the ranks of the observations in both samples. When performing our hypothesis tests, we calculate the standard score, which is defined as

z = \frac{U - \mu_U} {\sigma_U}

5. We use the Spearman rank correlation coefficient (also known as simply the ‘rank correlation’ coefficient) to measure the strength, magnitude and direction of the relationship between two variables from non-normal distributions.

\rho = 1 - \frac{6 \textstyle\sum d^2} {n(n^2 - 1)}
