
8.5: Testing a Hypothesis for Dependent and Independent Samples

Created by: CK-12

Learning Objectives

  • Identify situations that contain dependent or independent samples.
  • Calculate the pooled estimate of variance for two independent samples.
  • Calculate the test statistic to test hypotheses about dependent data pairs.
  • Calculate the test statistic to test hypotheses about independent data pairs for both large and small samples.
  • Calculate the test statistic to test hypotheses about the difference of proportions between two independent samples.

Introduction

In the previous lessons, we learned about hypothesis testing for proportions and means in large and small samples. However, in the examples in those lessons, only one sample was involved. In this lesson, we will apply the principles of hypothesis testing to situations involving two samples. There are many situations in everyday life where we would perform statistical analysis involving two samples. For example, we might want to test a hypothesis about the effects of two medications on curing an illness, or we might want to test the difference between the mean SAT scores of males and females. In both of these cases, we would analyze both samples, and the hypothesis would address the difference between the two sample means.

In this lesson, we will identify situations with different types of samples, learn to calculate the test statistic, calculate the estimate for population variance for both samples, and calculate the test statistic to test hypotheses about the difference of proportions or means between samples.

Dependent and Independent Samples

When we are working with one sample, we know that we have to randomly select the sample from the population, measure that sample's statistics, and then make a hypothesis about the population based on that sample. When we work with two independent samples, we assume that if the samples are selected at random (or, in the case of medical research, the subjects are randomly assigned to a group), the two samples will vary only by chance, and the difference will not be statistically significant. In short, when we have independent samples, we assume that one sample does not affect the other.

Independent samples can occur in two scenarios.

In one, when testing the difference of the means between two fixed populations, we test the differences between samples from each population. When both samples are randomly selected, we can make inferences about the populations.

In the other, when working with subjects (people, pets, etc.), if we select a random sample and then randomly assign half of the subjects to one group and half to another, we can make inferences about the population.

Dependent samples are a bit different. Two samples of data are dependent when each observation in one sample is paired with a specific observation in the other sample. In short, these types of samples are related to each other. Dependent samples can occur in two scenarios. In one, a group may be measured twice, such as in a pre-test/post-test situation (scores on a test before and after the lesson). The other scenario is one in which an observation in one sample is matched with an observation in the second sample.

To distinguish between tests of hypotheses for independent and dependent samples, we use a different symbol for hypotheses with dependent samples. For dependent sample hypotheses, we use the delta symbol, \delta, to represent the difference between the means of the two samples. Therefore, in our null hypothesis, we state that the difference between the means of the two samples is equal to 0, or \delta=0. This can be summarized as follows:

H_0: \delta=\mu_1-\mu_2=0

Calculating the Pooled Estimate of Population Variance

When testing a hypothesis about two independent samples, we follow a similar process as when testing one random sample. However, when computing the test statistic, we need to calculate the estimated standard error of the difference between sample means, s_{\bar{x}_1-\bar{x}_2}=\sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}.

Here, n_1 and n_2 are the sizes of the two samples, and s^2, the pooled estimate of variance, is calculated with the formula s^2=\frac{\sum(x_1-\bar{x}_1)^2+\sum(x_2-\bar{x}_2)^2}{n_1+n_2-2}. Often, the top part of the formula for pooled estimate of variance is simplified by substituting the symbol SS for the sum of the squared deviations. Therefore, the formula is often expressed as s^2=\frac{SS_1+SS_2}{n_1+n_2-2}.
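As a rough illustration of how this formula can be carried out, the following Python sketch computes the pooled estimate of variance from two lists of raw scores (the function name pooled_variance is our own choice, not part of the text):

def pooled_variance(sample1, sample2):
    # Pooled estimate of variance: s^2 = (SS_1 + SS_2) / (n_1 + n_2 - 2)
    n1, n2 = len(sample1), len(sample2)
    mean1 = sum(sample1) / n1
    mean2 = sum(sample2) / n2
    ss1 = sum((x - mean1) ** 2 for x in sample1)  # sum of squared deviations, sample 1
    ss2 = sum((x - mean2) ** 2 for x in sample2)  # sum of squared deviations, sample 2
    return (ss1 + ss2) / (n1 + n2 - 2)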

Example: Suppose we have two independent samples of student reading scores. Calculate s^2.

The data are as follows:

Sample 1: 7, 8, 10, 4, 6
Sample 2: 12, 14, 18, 13, 11, 10

From these samples, we can calculate a number of descriptive statistics that will help us solve for the pooled estimate of variance:

Descriptive Statistic Sample 1 Sample 2
Number (n) 5 6
Sum of Observations (\sum x) 35 78
Mean of Observations (\bar{x}) 7 13
Sum of Squared Deviations (\sum^n_{i=1} (x_i-\bar{x})^2) 20 40

Using the formula for the pooled estimate of variance, we find that s^2=6.67.
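This result can be double-checked with a few lines of Python using only the standard library, since the sum of squared deviations for each sample equals (n - 1) times its sample variance (a quick verification sketch, not part of the original lesson):

import statistics

sample1 = [7, 8, 10, 4, 6]
sample2 = [12, 14, 18, 13, 11, 10]

n1, n2 = len(sample1), len(sample2)
ss1 = (n1 - 1) * statistics.variance(sample1)  # 20
ss2 = (n2 - 1) * statistics.variance(sample2)  # 40
print(round((ss1 + ss2) / (n1 + n2 - 2), 2))   # 6.67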

We will use this information to calculate the test statistic needed to evaluate the hypotheses.

Testing Hypotheses with Independent Samples

When testing hypotheses with two independent samples, we follow steps similar to those when testing one random sample:

  • State the null and alternative hypotheses.
  • Choose \alpha.
  • Set the criterion (critical values) for rejecting the null hypothesis.
  • Compute the test statistic.
  • Make a decision: reject or fail to reject the null hypothesis.
  • Interpret the decision within the context of the problem.

When stating the null hypothesis, we assume there is no difference between the means of the two independent samples. Therefore, our null hypothesis in this case would be the following:

H_0: \mu_1=\mu_2 \ \text{or} \ H_0: \mu_1-\mu_2=0

Similar to the one-sample test, the critical values that we set to evaluate these hypotheses depend on our alpha level, and our decision regarding the null hypothesis is carried out in the same manner. However, since we have two samples, we calculate the test statistic a bit differently and use the formula shown below:

t=\frac{(\bar{x}_1-\bar{x}_2)-(\mu_1-\mu_2)}{s_{\bar{x}_1-\bar{x}_2}}

where:

\bar{x}_1-\bar{x}_2 is the difference between the sample means.

\mu_1-\mu_2 is the difference between the hypothesized population means.

s_{\bar{x}_1-\bar{x}_2} is the standard error of the difference between sample means.
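Because the example below works from descriptive statistics (means, sums of squared deviations, and sample sizes), the test statistic can be computed as in this minimal Python sketch under the null hypothesis \mu_1-\mu_2=0 (the function name t_independent is our own):

from math import sqrt

def t_independent(mean1, mean2, ss1, ss2, n1, n2):
    # Pooled estimate of variance and standard error of the difference
    s2 = (ss1 + ss2) / (n1 + n2 - 2)
    se = sqrt(s2 * (1 / n1 + 1 / n2))
    # Test statistic under H_0: mu_1 - mu_2 = 0
    return (mean1 - mean2) / se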

Example: The head of the English department is interested in the difference in writing scores between remedial freshman English students who are taught by different teachers. The incoming freshmen needing remedial services are randomly assigned to one of two English teachers and are given a standardized writing test after the first semester. We take a sample of eight students from one class and nine from the other. Is there a difference in achievement on the writing test between the two classes? Use a 0.05 significance level.

First, we would generate our hypotheses based on the two samples as follows:

H_0: \mu_1=\mu_2
H_a: \mu_1 \neq \mu_2

This is a two-tailed test. In this example, we have two independent samples from the population, and we are examining a total of 17 students. Since the sample sizes are small, we use the t-distribution. Here we have 15 degrees of freedom, which is the total number of observations in the two samples minus 2. With a 0.05 significance level and the t-distribution, we find that our critical values are 2.13 standard scores above and below the mean.

To calculate the test statistic, we first need to find the pooled estimate of variance from our sample. The data from the two groups are as follows:

Sample 1: 35, 51, 66, 42, 37, 46, 60, 55, 53
Sample 2: 52, 87, 76, 62, 81, 71, 55, 67

From these samples, we can calculate several descriptive statistics that will help us solve for the pooled estimate of variance:

Descriptive Statistic Sample 1 Sample 2
Number (n) 9 8
Sum of Observations (\sum x) 445 551
Mean of Observations (\bar{x}) 49.44 68.88
Sum of Squared Deviations (\sum^n_{i=1}(x_i-\bar{x})^2) 862.22 1058.88

Therefore, the pooled estimate of variance can be calculated as shown:

s^2=\frac{SS_1+SS_2}{n_1+n_2-2}=128.07

This means that the standard error of the difference of the sample means can be calculated as follows:

s_{\bar{x}_1-\bar{x}_2}=\sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2} \right)}=\sqrt{128.07 \left(\frac{1}{9}+\frac{1}{8}\right)} \approx 5.50

Using this information, we can finally solve for the test statistic:

t=\frac{(\bar{x}_1-\bar{x}_2)-(\mu_1-\mu_2)}{s_{\bar{x}_1-\bar{x}_2}}=\frac{(49.44-68.88)-(0)}{5.50} \approx -3.53

Since -3.53 is less than the critical value of -2.13, we decide to reject the null hypothesis and conclude that there is a significant difference in the achievement of the students assigned to different teachers.
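If SciPy happens to be available, the same example can be cross-checked with scipy.stats.ttest_ind, which performs the pooled-variance two-sample t-test when equal_var=True (a verification sketch, assuming SciPy is installed):

from scipy import stats

class1 = [35, 51, 66, 42, 37, 46, 60, 55, 53]
class2 = [52, 87, 76, 62, 81, 71, 55, 67]

t_stat, p_value = stats.ttest_ind(class1, class2, equal_var=True)
print(round(t_stat, 2))  # approximately -3.53
print(p_value < 0.05)    # True, so we reject the null hypothesis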

Testing Hypotheses about the Difference in Proportions between Two Independent Samples

Suppose we want to test whether there is a difference between the proportions of two independent samples. As discussed in the previous lesson, proportions are used extensively in polling and surveys, especially by people trying to predict election results. It is possible to test a hypothesis about the proportions of two independent samples by using a method similar to that described above. We might perform these hypothesis tests in the following scenarios:

  • When examining the proportions of children living in poverty in two different towns.
  • When investigating the proportions of freshman and sophomore students who report test anxiety.
  • When testing whether the proportions of high school boys and girls who smoke cigarettes are equal.

In testing hypotheses about the difference in proportions of two independent samples, we state the hypotheses and set the criterion for rejecting the null hypothesis in ways similar to those used for the other hypothesis tests. In these types of tests, we set the proportions of the samples equal to each other in the null hypothesis, H_0: p_1=p_2, and use the appropriate standard table to determine the critical values. Remember, for small samples, we generally use the t-distribution, and for samples over 30, we generally use the z-distribution.

When solving for the test statistic in large samples, we use the following formula:

z=\frac{(\hat{p}_1-\hat{p}_2)-(p_1-p_2)}{s_{p_1-p_2}}

where:

\hat{p}_1 and \hat{p}_2 are the observed sample proportions.

p_1 and p_2 are the population proportions under the null hypothesis.

s_{p_1-p_2} is the standard error of the difference between independent proportions.

Similar to the standard error of the difference between independent means, we need to do a bit of work to calculate the standard error of the difference between independent proportions. To find the standard error under the null hypothesis, we assume that p_1=p_2=p, and we use all the data to calculate \hat{p} as an estimate for p as follows:

\hat{p}=\frac{n_1 \hat{p}_1+n_2 \hat{p}_2}{n_1+n_2}

Now the standard error of the difference between independent proportions is \sqrt{\hat{p}(1-\hat{p}) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}.

This means that the test statistic is now z=\frac{(\hat{p}_1-\hat{p}_2)-(0)}{\sqrt{\hat{p}(1-\hat{p}) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}.
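In code, the pooled proportion and the resulting z statistic might be computed as in the sketch below (Python; the function name two_proportion_z is our own choice):

from math import sqrt

def two_proportion_z(p1_hat, n1, p2_hat, n2):
    # Pooled estimate of the common proportion under H_0: p_1 = p_2
    p_pooled = (n1 * p1_hat + n2 * p2_hat) / (n1 + n2)
    # Standard error of the difference between independent proportions
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
    return (p1_hat - p2_hat) / se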

Example: Suppose that we are interested in finding out which of two cities is more satisfied with the services provided by the city government. We take a survey and find the following results:

Number Satisfied City 1 City 2
Yes 122 84
No 78 66
Sample Size n_1=200 n_2=150
Proportion Who Said Yes 0.61 0.56

Is there a statistical difference in the proportions of citizens who are satisfied with the services provided by the city government? Use a 0.05 level of significance.

First, we establish the null and alternative hypotheses:

H_0: p_1=p_2
H_a: p_1 \neq p_2

Since we have large sample sizes, we will use the z-distribution. At a 0.05 level of significance, our critical values are \pm 1.96. To solve for the test statistic, we must first solve for the standard error of the difference between proportions:

\hat{p}=\frac{(200)(0.61)+(150)(0.56)}{350}=0.589

s_{p_1-p_2}=\sqrt{(0.589)(0.411) \left(\frac{1}{200}+\frac{1}{150}\right)} \approx 0.053

Therefore, the test statistic can be calculated as shown:

z=\frac{(0.61-0.56)-(0)}{0.053} \approx 0.94

Since 0.94 does not exceed the critical value of 1.96, the null hypothesis is not rejected. Therefore, we can conclude that the difference in the proportions could have occurred by chance and that there is no difference in the level of satisfaction between citizens of the two cities.
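For readers who have the statsmodels package installed, the same example can be cross-checked with its proportions_ztest function, which uses the pooled proportion by default (a verification sketch under that assumption):

from statsmodels.stats.proportion import proportions_ztest

yes_counts = [122, 84]     # citizens who said "Yes" in City 1 and City 2
sample_sizes = [200, 150]

z_stat, p_value = proportions_ztest(yes_counts, sample_sizes)
print(round(z_stat, 2))    # approximately 0.94
print(p_value > 0.05)      # True, so we fail to reject the null hypothesis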

Testing Hypotheses with Dependent Samples

When testing a hypothesis about two dependent samples, we follow the same process as when testing one random sample or two independent samples:

  • State the null and alternative hypotheses.
  • Choose the level of significance.
  • Set the criterion (critical values) for rejecting the null hypothesis.
  • Compute the test statistic.
  • Make a decision: reject or fail to reject the null hypothesis.
  • Interpret our results.

As mentioned in the section above, our null hypothesis for two dependent samples states that there is no difference between the means of the two samples. In other words, the null hypothesis is H_0: \delta=\mu_1-\mu_2=0. We set the criterion for evaluating the hypothesis in the same way that we do in our other examples: we first establish an alpha level and then find the critical values using a t-distribution table. Calculating the test statistic for dependent samples is a bit different, since we are dealing with two sets of paired data. The first quantity we need is \bar{d}, the mean of the differences between the paired observations, which equals the difference between the two sample means. This means that \bar{d}=\bar{x}_1-\bar{x}_2. We also need to know the standard error of \bar{d}. Since the population variance of the differences is unknown, we estimate it by first using the following formula for the standard deviation of the differences:

s^2_d=\frac{\sum (d-\bar{d})^2}{n-1}

s_d=\sqrt{\frac{\sum d^2-\frac{\left(\sum d\right)^2}{n}}{n-1}}

where:

s^2_d is the variance of the differences.

d is the difference between corresponding pairs within the two samples.

\bar{d} is the mean of the differences, which equals the difference between the means of the two samples.

n is the number of pairs (the number of observations in each sample).

s_d is the standard deviation of the differences.

With the standard deviation of the samples, we can calculate the standard error of the difference between the two samples using the following formula:

s_{\bar{d}}=\frac{s_d}{\sqrt{n}}

After we calculate the standard error, we can use the general formula for the test statistic as shown below:

t=\frac{\bar{d}-\delta}{s_{\bar{d}}}
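The whole dependent-samples calculation can be gathered into one short Python sketch (the function name paired_t is our own; it tests the null hypothesis \delta=0):

from math import sqrt

def paired_t(sample1, sample2):
    # Differences between corresponding pairs
    diffs = [x1 - x2 for x1, x2 in zip(sample1, sample2)]
    n = len(diffs)
    d_bar = sum(diffs) / n  # mean of the differences
    # Standard deviation of the differences (computational formula from the text)
    s_d = sqrt((sum(d * d for d in diffs) - sum(diffs) ** 2 / n) / (n - 1))
    se = s_d / sqrt(n)      # standard error of d_bar
    return d_bar / se       # test statistic under H_0: delta = 0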

Example: A math teacher wants to determine the effectiveness of her statistics lesson and gives a pre-test and a post-test to 9 students in her class. Our null hypothesis is that there is no difference between the means of the two samples, and our alternative hypothesis is that the two means are not equal. In other words, we are testing whether the paired scores differ, on average, with the following hypotheses:

H_0: \delta=\mu_1-\mu_2=0
H_a: \delta=\mu_1-\mu_2 \neq 0

The results for the pre-test and post-test are shown below:

Subject Pre-test Score Post-test Score Difference (d) d^2
1 78 80 2 4
2 67 69 2 4
3 56 70 14 196
4 78 79 1 1
5 96 96 0 0
6 82 84 2 4
7 84 88 4 16
8 90 92 2 4
9 87 92 5 25
Sum 718 750 32 254
Mean 79.8 83.3 3.6

Using the information from the table above, we can first solve for the standard deviation of the samples, then the standard error of the difference between the two samples, and finally the test statistic.

Standard Deviation:

s_d=\sqrt{\frac{\sum d^2-\frac{(\sum d)^2}{n}}{n-1}}=\sqrt{\frac{254-\frac{(32)^2}{9}}{8}} \approx 4.19

Standard Error of the Difference:

s_{\bar{d}}=\frac{s_d}{\sqrt{n}}=\frac{4.19}{\sqrt{9}}=1.40

Test Statistic (t-test):

t=\frac{\bar{d}-\delta}{s_{\bar{d}}}=\frac{3.6-0}{1.40} \approx 2.57

With 8 degrees of freedom (the number of pairs minus 1) and a significance level of 0.05, we find our critical values to be \pm 2.31. Since our test statistic exceeds 2.31, we can reject the null hypothesis that the means of the two samples are equal and conclude that the lesson had an effect on student achievement.
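If SciPy is available, this paired example can be cross-checked with scipy.stats.ttest_rel. Because the hand calculation above rounds \bar{d} to 3.6, it gives 2.57, while the unrounded statistic is closer to 2.55; the conclusion is the same either way (a verification sketch, assuming SciPy is installed):

from scipy import stats

pre  = [78, 67, 56, 78, 96, 82, 84, 90, 87]
post = [80, 69, 70, 79, 96, 84, 88, 92, 92]

t_stat, p_value = stats.ttest_rel(post, pre)  # paired (dependent-samples) t-test
print(round(t_stat, 2))  # approximately 2.55
print(p_value < 0.05)    # True, so we reject the null hypothesis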

Lesson Summary

In addition to testing single samples associated with a mean, we can also perform hypothesis tests with two samples. We can test two independent samples, which are samples that do not affect one another, or dependent samples, which are samples that are related to each other.

When testing a hypothesis about two independent samples, we follow a similar process as when testing one random sample. However, when computing the test statistic, we need to calculate the estimated standard error of the difference between sample means, which is found by using the following formula:

s_{\bar{x}_1-\bar{x}_2} = \sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2}\right)} \ \text{with} \ s^2=\frac{SS_1+SS_2}{n_1+n_2-2}

We carry out the test on the means of two independent samples in a way similar to that of testing one random sample. However, we use the following formula to calculate the test statistic, with the standard error defined above:

t=\frac{(\bar{x}_1-\bar{x}_2)-(\mu_1-\mu_2)}{s_{\bar{x}_1-\bar{x}_2}}

We can also test the proportions associated with two independent samples. In order to calculate the test statistic associated with two independent samples, we use the formula shown below:

z=\frac{(\hat{p}_1-\hat{p}_2)-(0)}{\sqrt{\hat{p}(1-\hat{p}) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}} \ \text{with} \ \hat{p}=\frac{n_1 \hat{p}_1+n_2 \hat{p}_2}{n_1+n_2}

We can also test the likelihood that two dependent samples are related. To calculate the test statistic for two dependent samples, we use the following formula:

t=\frac{\bar{d}-\delta}{s_{\bar{d}}} \ \text{with} \ s_{\bar{d}}=\frac{s_d}{\sqrt{n}} \ \text{and} \ s_d=\sqrt{\frac{\sum d^2 - \frac{(\sum d)^2}{n}}{n-1}}

Review Questions

  1. In hypothesis testing, we have scenarios that have both dependent and independent samples. Give an example of an experiment with dependent samples and an experiment with independent samples.
  2. True or False: When we test the difference between the means of males and females on the SAT, we are using independent samples.
  3. A study is conducted on the effectiveness of a drug on the hyperactivity of laboratory rats. Two random samples of rats are used for the study. One group is given Drug A, and the other group is given Drug B. The number of times that each rat pushes a lever is recorded. The following results for this test were calculated:
Drug A Drug B
\bar{x} 75.6 72.8
n 18 24
s^2 12.25 10.24
s 3.5 3.2

(a) Does this scenario involve dependent or independent samples? Explain.

(b) What would the hypotheses be for this scenario?

(c) Compute the pooled estimate for population variance.

(d) Calculate the estimated standard error for this scenario.

(e) What is the test statistic, and at an alpha level of 0.05, what conclusions would you make about the null hypothesis?

  4. A survey is conducted on attitudes towards drinking. A random sample of eight married couples is selected, and the husbands and wives respond to an attitude-toward-drinking scale. The scores are as follows:
Husbands Wives
16 15
20 18
10 13
15 10
8 12
19 16
14 11
15 12

(a) What would be the hypotheses for this scenario?

(b) Calculate the estimated standard deviation for this scenario.

(c) Compute the standard error of the difference for these samples.

(d) What is the test statistic, and at an alpha level of 0.05, what conclusions would you make about the null hypothesis?

Keywords

\alpha
\alpha is called the level of significance. \alpha = P(rejecting \ H_0 | H_0 \ is \ true) = P(making \ a \ type \ I \ error)
Alpha level
The general approach to hypothesis testing focuses on the type I error: rejecting the null hypothesis when it may be true. The probability of making this error that we are willing to tolerate is the alpha level, also known as the level of significance.
Alternative hypothesis
The hypothesis that is accepted if the null (default) hypothesis is rejected.
\beta
\beta is the probability of making a type II error. \beta = P(not \ rejecting \ H_0|H_0 \ is \ false)=P(making \ a \ type \ II \ error)
Critical region
The values of the test statistic that allow us to reject the null hypothesis.
Critical values
To calculate the critical regions, we must first find the cut-offs, or the critical values, where the critical regions start.
Degrees of freedom
A measure that corresponds to the sample size and determines the shape of Student's t-distribution.
Dependent samples
Two samples of data are dependent when each observation in one sample is paired with a specific observation in the other sample.
Hypothesis testing
A procedure that uses sample data to evaluate a hypothesized value of a population parameter.
Independent samples
When we work with two independent samples, we assume that if the samples are selected at random, the two samples will vary only by chance, and the difference will not be statistically significant.
Level of significance
The strength of the sample evidence needed to reject the null hypothesis.
Null hypothesis (H_0)
The default hypothesis, a hypothesis about a parameter that is tested.
One-tailed test
When the alternative hypothesis H_1 is one-sided, such as \theta > \theta_0 or \theta < \theta_0, the rejection region is taken on only one side of the sampling distribution. This is called a one-tailed test.
P-value
We can also evaluate a hypothesis by asking, “What is the probability of obtaining the value of the test statistic that we did if the null hypothesis is true?” This is called the P-value.
Pooled estimate of variance
Here, n_1 and n_2 are the sizes of the two samples, and s^2, the pooled estimate of variance, is calculated with the formula s^2=\frac{\sum(x_1-\bar{x}_1)^2+\sum(x_2-\bar{x}_2)^2}{n_1+n_2-2}.
Power of a test
The power of a test is defined as the probability of rejecting the null hypothesis when it is false.
Standard error of the difference
When testing a hypothesis about two independent samples, we follow a similar process as when testing one random sample. However, when computing the test statistic, we need to calculate the estimated standard error of the difference between sample means, s_{\bar{x}_1-\bar{x}_2} = \sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}.
Student's t-distributions
Student's t-distributions are a family of distributions that, like the normal distribution, are symmetrical, bell-shaped, and centered on a mean.
Test statistic
Before evaluating our hypotheses by determining the critical region and calculating the test statistic, we need to confirm that the distribution is normal and determine the hypothesized mean, \mu, of the distribution. z=\frac{\bar{x}-\mu}{\frac{\sigma}{\sqrt{n}}}
Two-tailed test
The two-tailed test is a statistical test used in inference, in which a given statistical hypothesis, H_0 (the null hypothesis), will be rejected when the value of the test statistic is either sufficiently small or sufficiently large.
Type I error
A type I error occurs when one rejects the null hypothesis when it is true.
Type II error
A type II error occurs when one rejects the alternative hypothesis (fails to reject the null hypothesis) when the alternative hypothesis is true.
