
8.3: Testing a Mean Hypothesis

Created by: CK-12

Learning Objectives

  • Calculate the sample test statistic to evaluate a hypothesis about a population mean based on large samples.
  • Understand how hypothesis testing differs for small samples, and use the Student’s t-distribution accordingly.
  • Understand the results of the hypothesis test and how the terms ‘statistically significant’ and ‘not statistically significant’ apply to the results.

Introduction

In the previous sections, we have covered:

  • the reasoning behind hypothesis testing.
  • how to conduct single and two-tailed hypothesis tests.
  • the potential errors associated with hypothesis testing.
  • how to test hypotheses associated with population proportions.

In this section, we will take a closer look at some examples that will give us practice in conducting these tests and in understanding what the results really mean. In addition, we will look at how the terms statistically significant and not statistically significant apply to these results.

Also, it is important to look at what happens when we have a small sample size. All of the hypotheses that we have examined thus far have assumed that we have normal distributions. But what happens when we have a small sample size and are unsure if our distribution is normal or not? We use something called the Student’s t-distribution to take small sample size into account.

Evaluating Hypotheses for Population Means using Large Samples

When testing a hypothesis for a normal distribution, we follow a series of four basic steps:

  1. State the null and alternative hypotheses.
  2. Set the criterion (critical values) for rejecting the null hypothesis.
  3. Compute the test statistic.
  4. Decide about the null hypothesis and interpret our results.

In Step 4, we can make one of two decisions regarding the null hypothesis.

  • If the test statistic falls in the regions above or below the critical values (meaning that it is far from the mean), we can reject the null hypothesis.
  • If the test statistic falls in the area between the critical values (meaning that it is close to the mean), we fail to reject the null hypothesis.

When we reject the null hypothesis we are saying that the difference between the observed sample mean and the hypothesized population mean is too great to be attributed to chance. If we reject the null hypothesis, we are also saying that the probability that the observed sample mean would have occurred by chance is less than the \alpha level of .05, .01, or whatever we decide.

When we fail to reject the null hypothesis, we are saying that the difference between the observed sample mean and the hypothesized population mean is probable if the null hypothesis is true. This decision is based on the properties of sampling and the fact that there is not a large difference is reason to not reject the null hypothesis. Essentially, we are willing to attribute this difference to sampling error.
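The four steps above can be sketched in a few lines of Python. This is an illustrative helper of our own (the name z_test_two_tailed and the default critical value are not from the text): steps 1 and 2 are the inputs, and the function carries out steps 3 and 4 for a two-tailed z-test.

```python
import math

def z_test_two_tailed(sample_mean, mu0, sigma, n, z_crit=1.96):
    """Steps 3 and 4 of a two-tailed z-test for a population mean.

    mu0 is the hypothesized population mean (step 1) and z_crit is the
    critical value chosen for the significance level (step 2).
    """
    # Step 3: compute the test statistic (z-score of the sample mean)
    se = sigma / math.sqrt(n)          # standard error of the mean
    z = (sample_mean - mu0) / se
    # Step 4: reject H0 if z falls beyond either critical value
    return z, abs(z) > z_crit
```

With the SAT figures from the first example below (population mean 1500, sample mean 1450, standard deviation 100, n = 125), this returns z ≈ −5.59 and a decision to reject the null hypothesis.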

Let’s perform a hypothesis test for the scenarios we examined in the first lesson.

Example:

College A has an average SAT score of 1500. From a random sample of 125 freshman psychology students we find the average SAT score to be 1450 with a standard deviation of 100. Is the sample of freshman psychology students representative of the overall population?

Solution:

Let’s first develop our null and alternative hypotheses:

H_{0}: \mu=1500
H_{a}: \mu\neq 1500

At a 0.05 significance level, our critical values would be 1.96 standard deviations above and below the mean.

Next, we calculate the standard z-score for the sample of freshman psychology students.

z = \frac{\bar{X} - \mu} {\sigma/\sqrt{n}} = \frac{1450 - 1500} {100/\sqrt{125}} \approx -5.59

Since the calculated z-score of -5.59 falls in the critical region (beyond the critical values of \pm 1.96 set by the 0.05 significance level), we reject the null hypothesis. Therefore, we can conclude that the probability of obtaining a sample mean equal to 1450 if the mean of the population is 1500 is very small, and the sample of freshman psychology students is not representative of the overall population. Furthermore, the probability of this difference occurring by chance is less than 0.05.

Example:

The school nurse was wondering if the average height of 7th graders has been increasing. Over the last 5\;\mathrm{years}, the average height of a 7th grader was 145\;\mathrm{cm} with a standard deviation of 20\;\mathrm{cm.} The school nurse takes a random sample of 200 students and finds that the average height this year is 147\;\mathrm{cm.} Conduct a single-tailed hypothesis test using a 0.05 significance level to evaluate the null and alternative hypotheses.

Solution:

First, we develop our null and alternative hypotheses:

H_{0}:\mu=145
H_{a}:\mu>145

For a single-tailed test at the 0.05 significance level, our critical value would be 1.64 standard deviations above the mean.

Next, we calculate the standard z-score for the sample of 7th graders.

z = \frac{\bar{X} - \mu} {\sigma/\sqrt{n}} = \frac{147 - 145} {20/\sqrt{200}} \approx 1.41

Since the calculated z-score of 1.41 does not fall in the critical region (as defined by a 0.05 significance level, anything with a z-score above 1.64), we fail to reject the null hypothesis. We can conclude that obtaining a sample mean equal to 147 when the mean of the population is 145 is likely to have been due to chance.
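The arithmetic in this example can be checked with a short script (all values are taken from the problem statement):

```python
import math

# Height example: one-tailed z-test at the 0.05 level (critical value 1.64).
mu0, sigma = 145, 20        # historical mean and standard deviation (cm)
sample_mean, n = 147, 200   # this year's sample

z = (sample_mean - mu0) / (sigma / math.sqrt(n))
reject = z > 1.64           # upper-tail test: only a large positive z rejects H0

print(round(z, 2), reject)  # 1.41 False
```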

Hypothesis Testing with Small Populations and Sample Sizes

Back in the early 1900s, a chemist at a brewery in Ireland discovered that when he was working with very small samples, the distributions of the mean differed significantly from the normal distribution. He noticed that as his sample sizes changed, the shape of the distribution changed as well. He published his results under the pseudonym ‘Student,’ and the distributions for small sample sizes are now known as “Student’s t-distributions.”

T-distributions are a family of distributions that, like the normal distribution, are symmetrical and bell-shaped and centered on a mean. However, the distribution shape changes as the sample size changes. Therefore, there is a specific shape or distribution for every sample of a given size (see figure below; each distribution has a different value of k, the number of degrees of freedom, which is 1 less than the size of the sample).

We use the Student's t-distribution in hypothesis testing the same way that we use the normal distribution. Each row in the t-distribution table (see excerpt below) represents a different t-distribution and each distribution is associated with a unique number of degrees of freedom (the number of observations minus one). The column headings in the table represent the portion of the area in the tails of the distribution – we use the numbers in the table just as we used the z-scores. Below is an excerpt from the Student's t-table for one-sided critical values.

DF   Probability of exceeding the critical value
     0.10    0.05    0.025    0.01     0.005    0.001
1    3.078   6.314   12.706   31.821   63.657   318.313
2    1.886   2.920   4.303    6.965    9.925    22.327
3    1.638   2.353   3.182    4.541    5.841    10.215
4    1.533   2.132   2.776    3.747    4.604    7.173
5    1.476   2.015   2.571    3.365    4.032    5.893
6    1.440   1.943   2.447    3.143    3.707    5.208
7    1.415   1.895   2.365    2.998    3.499    4.782
8    1.397   1.860   2.306    2.896    3.355    4.499
9    1.383   1.833   2.262    2.821    3.250    4.296
10   1.372   1.812   2.228    2.764    3.169    4.143
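A table like this excerpt can be encoded directly as a lookup structure. The sketch below (our own illustration, showing only a few of the rows above) returns a one-sided critical value for a given number of degrees of freedom and tail probability:

```python
# One-sided critical t-values from the table excerpt, keyed first by
# degrees of freedom and then by the tail probability (alpha).
T_TABLE = {
    1:  {0.10: 3.078, 0.05: 6.314, 0.025: 12.706, 0.01: 31.821},
    5:  {0.10: 1.476, 0.05: 2.015, 0.025: 2.571,  0.01: 3.365},
    10: {0.10: 1.372, 0.05: 1.812, 0.025: 2.228,  0.01: 2.764},
}

def t_critical(df, alpha):
    """Look up the one-sided critical value for the given df and tail area."""
    return T_TABLE[df][alpha]
```

For example, t_critical(10, 0.05) returns 1.812, matching the table.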

As the number of observations gets larger, the t-distribution approaches the shape of the normal distribution. In general, once the sample size is large enough (usually about 120), we would use the normal distribution or the z-table instead.

In calculating the t-test statistic, we use the formula:

t = \frac{\bar{X} - \mu} {s_{\bar{x}}}

where:

t = test statistic

\bar{X}= sample mean

\mu = hypothesized population mean

s_{\bar{x}} = estimated standard error

To estimate the standard error (s_{\bar{x}}), we use the formula s/\sqrt{n}, where s is the standard deviation of the sample and n is the sample size.
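As a quick check of this formula, the estimated standard error can be computed directly (the function name here is our own):

```python
import math

def estimated_standard_error(s, n):
    """Estimated standard error of the mean: s / sqrt(n).

    s is the sample standard deviation and n is the sample size.
    """
    return s / math.sqrt(n)
```

For the football-player example that follows (s = 0.54, n = 20), this gives approximately 0.121.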

Example:

The high school athletic director is asked if football players are doing as well academically as the other student athletes. We know from a previous study that the average GPA for the student athletes is 3.10 and that the standard deviation of the sample is 0.54. After an initiative to help improve the GPA of student athletes, the athletic director samples 20 football players and finds that their mean GPA is 3.18. Is there a significant difference? Use a .05 significance level.

Solution:

First, we establish our null and alternative hypotheses.

H_{0}: \mu=3.10
H_{a}: \mu\neq 3.10

Next, we use our alpha level (\alpha) of .05 and the t-distribution table to find our critical values. For a two-tailed test with 19\;\mathrm{degrees} of freedom and a .05 level of significance, our critical values are equal to 2.093 standard errors above and below the mean.

In calculating the test statistic, we use the formula:

t = \frac{\bar{X} - \mu} {s_{\bar{x}}} = \frac{3.18 - 3.10} {0.54/ \sqrt{20}} \approx 0.66

This means that the observed sample mean (3.18) of football players is 0.66 standard errors above the hypothesized value of 3.10. Because t = 0.66 does not exceed the critical value of 2.093, the null hypothesis is not rejected.

Therefore, we can conclude that the difference between the sample mean and the hypothesized value is not sufficient to attribute it to anything other than sampling error. Thus, the athletic director can conclude that the mean academic performance of football players does not differ from the mean performance of other student athletes.
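The computation in this example can be verified with a few lines of Python (all values are from the problem statement):

```python
import math

# Football GPA example: two-tailed t-test with df = 19, critical value 2.093.
mu0, s = 3.10, 0.54
sample_mean, n = 3.18, 20

se = s / math.sqrt(n)            # estimated standard error, s / sqrt(n)
t = (sample_mean - mu0) / se
reject = abs(t) > 2.093

print(round(t, 2), reject)       # 0.66 False
```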

How to Interpret the Results of a Hypothesis Test

In the previous section, we discussed how to interpret the results of a hypothesis test. As a reminder, when we reject the null hypothesis we are saying that the difference between the observed sample mean and the hypothesized population mean is too great to be attributed to chance. When we fail to reject the null hypothesis, we are saying that the difference between the observed sample mean and the hypothesized population mean is probable if the null hypothesis is true. Essentially, we are willing to attribute this difference to sampling error.

But what is meant by statistical significance? Technically, the difference between the hypothesized population mean and the sample mean is said to be statistically significant when the probability that the difference occurred by chance is less than the significance (\alpha) level. Therefore, when the calculated test statistic (whether it is the z- or the t-score) falls in the area beyond the critical score, we say that the difference between the sample mean and the hypothesized population mean is statistically significant. When the calculated test statistic falls in the area between the critical scores we say that the difference between the sample mean and the hypothesized population mean is not statistically significant.

Lesson Summary

1. When testing a hypothesis for the mean of a distribution, we follow a series of four basic steps:

  • State the null and alternative hypotheses.
  • Set the criterion (critical values) for rejecting the null hypothesis.
  • Compute the test statistic.
  • Decide about the null hypothesis and interpret our results.

2. When we reject the null hypothesis we are saying that the difference between the observed sample mean and the hypothesized population mean is too great to be attributed to chance.

3. When we fail to reject the null hypothesis, we are saying that the difference between the observed sample mean and the hypothesized population mean is probable if the null hypothesis is true.

4. We use the t-distribution in hypothesis testing the same way that we use the normal distribution. However, the t-distribution is used when the sample size is small (typically less than 120) and the population standard deviation is unknown.

5. When calculating the t-statistic, we use the formula:

t = \frac{\bar{X} - \mu} {s_{\bar{x}}}

where:

t = test statistic

\bar{X} = sample mean

\mu= hypothesized population mean

s_{\bar{x}} = estimated standard error, which is computed by \frac{s} {\sqrt{n}}

6. The difference between the hypothesized population mean and the sample mean is said to be statistically significant when the probability that the difference occurred by chance is less than the significance (\alpha) level.

Review Questions

  1. In hypothesis testing, when we work with large samples (typically samples over 120), we use the ___ distribution. When working with small samples (typically samples under 120), we use the ___ distribution.
  2. True or False: When we fail to reject the null hypothesis, we are saying that the difference between the observed sample mean and the hypothesized population mean is probable if the null hypothesis is true.

The dean from UCLA is concerned that the student’s grade point averages have changed dramatically in recent years. The graduating seniors’ mean GPA over the last five years is 2.75. The dean randomly samples 256 seniors from the last graduating class and finds that their mean GPA is 2.85, with a sample standard deviation of 0.65.

  3. What would the null and alternative hypotheses be for this scenario?
  4. What would the standard error be for this particular scenario?
  5. Describe in your own words how you would set the critical regions and what they would be at an alpha level of .05.
  6. Test the null hypothesis and explain your decision.
  7. Suppose that the dean samples only 30 students. Would a t-distribution now be the appropriate sampling distribution for the mean? Why or why not?
  8. Using the appropriate t-distribution, test the same null hypothesis with a sample of 30.
  9. With a sample size of 30, do you need a larger or smaller difference between the hypothesized population mean and the sample mean to obtain statistical significance? Explain your answer.
  10. For each of the following scenarios, state which one is more likely to lead to the rejection of the null hypothesis.
    1. A one-tailed or two-tailed test
    2. .05 or .01 level of significance
    3. A sample size of n = 144 or n = 444
Review Answers

  1. z, t
  2. True
  3. H_{0}:\mu=2.75,  H_{a}: \mu \neq 2.75
  4. 0.0406 (computed as 0.65/\sqrt{256} = 0.65/16)
  5. When setting the critical regions for this hypothesis, it is important to consider the repercussions of the decision. Since there does not appear to be major financial or health repercussions of this decision, a more conservative alpha level need not be chosen. With an alpha level of .05 and a sample size of 256, we find the area under the curve associated in the z-distribution and set the critical regions accordingly. With this alpha level and sample size, the critical regions are set at 1.96 standard scores above and below the mean.
  6. With a calculated test statistic of 2.463, we reject the null hypothesis since it falls beyond the critical values established with an alpha level of .05. This means that the probability that the observed sample mean would have occurred by chance if the null hypothesis is true is less than 5 \%.
  7. Yes. Because the sample size is below 120 and only the sample standard deviation s is available (not the population standard deviation \sigma), the t-distribution would be the appropriate sampling distribution for the mean.
  8. The critical values for this scenario using the t-distribution are 2.045 standard scores above and below the mean. With a calculated t-test statistic of 0.8425, we do not reject the null hypothesis. Therefore, we can conclude that the probability that the observed sample mean could have occurred by chance if the null hypothesis was true is greater than 5 \%.
  9. You would need a larger difference because the standard error of the mean would be greater with a sample size of 30 than with a sample size of 256.
  10. (a) one-tailed test (b) .05 level of significance (c) n = 144
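The numerical answers for the GPA scenario can be checked with a short script (values from the review questions; the critical values are the ones stated in the answers):

```python
import math

mu0, sample_mean, s = 2.75, 2.85, 0.65

# n = 256: z-test (critical values +/- 1.96)
se_256 = s / math.sqrt(256)          # ~ 0.0406
z = (sample_mean - mu0) / se_256     # ~ 2.46, so reject H0

# n = 30: t-test with 29 degrees of freedom (critical values +/- 2.045)
se_30 = s / math.sqrt(30)
t = (sample_mean - mu0) / se_30      # ~ 0.84, so fail to reject H0
```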
