Student’s t-Distribution | CK-12 Foundation
CK-12 Probability and Statistics - Advanced

Learning Objectives

  • Use Student’s t-distribution to estimate population mean interval for smaller samples.
  • Understand how the shape of Student’s t-distribution depends on the sample size (through a quantity called the “degrees of freedom”).

Introduction

In a previous lesson you learned about the Central Limit Theorem. One of the attributes of this theorem is that the sampling distribution of the sample mean will follow a normal distribution as long as the sample size is large. As the value of n increases, the sampling distribution is more and more likely to follow a normal distribution. You’ve also learned that when the standard deviation of a population is known, a z-score can be calculated and used with the normal distribution to evaluate probabilities involving the sample mean. In real-life situations, the standard deviation of the entire population (\sigma) is rarely known. Also, the sample size is not always large enough to emulate a normal distribution. In fact, there are often times when the sample sizes are quite small. What do you do when either one or both of these events occur?

t-Statistic

People often make decisions from data by comparing the results from a sample to some hypothesized or predetermined parameter. These decisions are referred to as tests of significance, or hypothesis tests, since they are used to determine whether the predetermined parameter is acceptable or should be rejected. We know that if we flip a fair coin, the probability of getting heads is 0.5. In other words, heads and tails are equally likely, so when a coin is spun it should land heads 50\% of the time. Let’s say that a coin of questionable fairness was spun 40\;\mathrm{times} and landed heads 12 \;\mathrm{times}. For these spins the sample proportion of heads is \hat {p} = \frac{12} {40} = 0.3. If technology is used to determine a 95\% confidence interval to support the standard that heads should land 50\% of the time, the reasonably likely sample proportions are in the interval 0.34505 to 0.65495. The observed proportion \hat {p} = 0.3 is not captured within this confidence interval. Therefore, the fairness of this coin should be questioned; in other words, 0.5 should be rejected as a plausible value for the proportion of times this particular coin lands heads when it is spun. This data has provided evidence against the standard.
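The interval 0.34505 to 0.65495 quoted above can be reproduced with the normal approximation for a sample proportion. A minimal Python sketch (the critical value 1.959964 and the standard-error formula are standard results, not taken from this lesson's calculator steps):

```python
import math

# Hypothesized proportion of heads and the number of spins
p0 = 0.5
n = 40

# Standard error of the sample proportion under the hypothesized p0
se = math.sqrt(p0 * (1 - p0) / n)

# 95% bounds using the normal critical value z = 1.959964
z = 1.959964
lower = p0 - z * se
upper = p0 + z * se
print(round(lower, 5), round(upper, 5))  # 0.34505 0.65495
```

Since \hat{p} = 0.3 falls below the lower bound, the interval leads to the same conclusion as the text: 0.5 is not a plausible value for this coin.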

The object is to test the significance of the difference between the sample and the parameter. If the difference is small (as defined by some predetermined amount), then the parameter is acceptable. The statement that the proposed parameter is true is called the null hypothesis. If the difference is large and can’t reasonably be attributed to chance, then the parameter can be rejected.

When the sample size is large, reliable estimates of the mean and variance of the population from which the sample was drawn can be made. Up to this point, we have used the z-score to determine the number of standard deviations a given value lies above or below the mean.

z = \frac{\bar {x} - \mu_0} {\sigma / \sqrt{n}}

where \bar {x} is the sample mean, \mu_0 is the hypothesized mean stated in the null hypothesis H_0: \mu = \mu_0, \sigma is the population standard deviation and n is the sample size.

However, the above formula cannot be used to determine how far a sample mean is from the hypothesized mean when the standard deviation of the population is not known. If the value of \sigma is unknown, s is substituted for \sigma and t for z. The t stands for the “test statistic,” and it is given by the formula:

t = \frac{\bar {x} - \mu_0} {s / \sqrt{n}}

where \bar {x} is the sample mean, \mu_0 is the hypothesized mean, s is the standard deviation of the sample and n is the sample size. The population mean \mu is unknown, so the hypothesized value \mu_0 is used in its place. The t-test will be used to determine the difference between the sample mean and the hypothesized mean. The null hypothesis that is being tested is H_0: \mu = \mu_0.
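The formula can be checked directly in code. A minimal Python sketch, using the figures from the bus-token example later in this lesson (\bar{x} = 3.21, \mu_0 = 3.16, s = 0.067, n = 11):

```python
import math

def t_statistic(xbar, mu0, s, n):
    """t = (xbar - mu0) / (s / sqrt(n)): the distance of the sample mean
    from the hypothesized mean, measured in estimated standard errors."""
    return (xbar - mu0) / (s / math.sqrt(n))

# Bus-token figures from the worked example in this lesson
print(round(t_statistic(3.21, 3.16, 0.067, 11), 2))  # 2.48
```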

So, suppose you want to see if a hypothesized mean passes a 95\% level of confidence. The corresponding confidence interval can be determined by using the graphing calculator: enter the sample information (x = the number of successes in the sample and n = the sample size) and press ENTER. The confidence interval will appear on the next screen. The value for t can now be compared with this interval to tell us whether or not the hypothesized mean can be accepted or rejected for this level of confidence.

Example:

The masses of newly produced bus tokens are estimated to have a mean of 3.16 \;\mathrm{grams}. A random sample of 11 tokens was removed from the production line and the mean weight of the tokens was calculated as 3.21\;\mathrm{grams} with a standard deviation of 0.067. What is the value of the test statistic for a test to determine how the mean differs from the estimated mean?

Solution:

t = \frac{\bar {x} - \mu_0} {s/ \sqrt{n}} = \frac{3.21 - 3.16} {0.067/ \sqrt{11}} \approx 2.48

If the value of t from the sample falls near the middle of the distribution of t constructed by assuming the null hypothesis is true, then there is no evidence against the null hypothesis. On the other hand, if the value of t from the sample is way out in the tail of the t-distribution, then there is evidence to reject the null hypothesis. Since the distribution of t when the null hypothesis is true is known, the location of this value on that distribution can be assessed. The most common method used to do this is to find a P-value (observed significance level). The P-value is a probability that is computed with the assumption that the null hypothesis is true.

The P-value for a two-sided test is the area under the t-distribution with df = 11 -1, or 10, that lies above t = 2.48 and below t = -2.48. This P-value can be calculated by using technology.

Press 2ND [DISTR] and use \downarrow to select 5:tcdf(. The arguments are (lower bound, upper bound, degrees of freedom).

This will be the total area under both tails. To calculate the area under one tail divide by 2.
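tcdf( is a calculator function, but the same two-tail area can be approximated in plain Python by integrating the t density with Simpson's rule. A stdlib-only sketch (the integration bound of 40 and the step count are pragmatic choices, not from the text):

```python
import math

def t_pdf(x, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def upper_tail(t, df, upper=40.0, steps=2000):
    """P(T >= t) for 0 < t < upper, by Simpson's rule on [t, upper]."""
    h = (upper - t) / steps
    total = t_pdf(t, df) + t_pdf(upper, df)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * t_pdf(t + i * h, df)
    return total * h / 3

# Two-sided P-value for the token example: |t| = 2.48 with df = 10
p_two_sided = 2 * upper_tail(2.48, 10)
print(round(p_two_sided, 3))
```

Dividing p_two_sided by 2 gives the area in a single tail, just as the calculator steps describe.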

There is only about a 0.033 chance (roughly 0.016 in each tail) of getting a value of t at least as extreme as the one from this sample (t \le -2.48 or t \ge 2.48). The small P-value tells us that the sample is inconsistent with the null hypothesis. The population mean appears to differ from the estimated mean of 3.16\;\mathrm{grams}.

When the P-value is close to zero, there is strong evidence against the null hypothesis. When the P-value is large, the result from the sample is consistent with the estimated or hypothesized mean and there is no evidence against the null hypothesis.

A visual picture of the P-value can be obtained by using the graphing calculator.

This formula, t = \frac{\bar {x} - \mu_0} {s/ \sqrt{n}}, is the same as that used in computing the z statistic, with the unknown population standard deviation (\sigma) replaced by the sample standard deviation (s).

There are numerous t-distributions and all are determined by a property of a set of data called the number of degrees of freedom. The degrees of freedom refer to the number of independent observations in a set of data. When estimating a mean score from a single sample, the number of independent observations is equal to the sample size minus one. In a single sample, there are n observations but only one parameter that needs to be estimated (the mean). This means that there are n - 1\;\mathrm{degrees} of freedom for estimating variability. In other words df = n - 1, where n is the sample size. The distribution of the t-statistic from samples of size 7 would be described by a t-distribution having 7 - 1 or 6 \;\mathrm{degrees} of freedom. Likewise, a t-distribution with 12\;\mathrm{degrees} of freedom would be used with a sample size of 13.

The t-score produced by this formula is associated with a unique cumulative probability which represents the chance of finding a sample mean less than or equal to \bar {x}, using a random sample of size n. The symbol t_\alpha is used to represent the t-score that has a cumulative probability of (1 - \alpha). If you needed the t-score to have a cumulative probability of 0.95, then \alpha would be equal to (1-0.95) or simply 0.05. This means that the t-score would be written as t_{0.05}. This value depends on the number of degrees of freedom and this value can be determined by using the table of values:

df \ p (upper-tail probability): 0.40 0.25 0.10 0.05 0.025 0.01 0.005 0.0005
1 0.324920 1.000000 3.077684 6.313752 12.70620 31.82052 63.65674 636.6192
2 0.288675 0.816497 1.885618 2.919986 4.30265 6.96456 9.92484 31.5991
3 0.276671 0.764892 1.637744 2.353363 3.18245 4.54070 5.84091 12.9240
4 0.270722 0.740697 1.533206 2.131847 2.77645 3.74695 4.60409 8.6103
5 0.267181 0.726687 1.475884 2.015048 2.57058 3.36493 4.03214 6.8688
6 0.264835 0.717558 1.439756 1.943180 2.44691 3.14267 3.70743 5.9588
7 0.263167 0.711142 1.414924 1.894579 2.36462 2.99795 3.49948 5.4079
8 0.261921 0.706387 1.396815 1.859548 2.30600 2.89646 3.35539 5.0413
9 0.260955 0.702722 1.383029 1.833113 2.26216 2.82144 3.24984 4.7809
10 0.260185 0.699812 1.372184 1.812461 2.22814 2.76377 3.16927 4.5869
11 0.259556 0.697445 1.363430 1.795885 2.20099 2.71808 3.10581 4.4370
12 0.259033 0.695483 1.356217 1.782288 2.17881 2.68100 3.05454 4.3178
13 0.258591 0.693829 1.350171 1.770933 2.16037 2.65031 3.01228 4.2208
14 0.258213 0.692417 1.345030 1.761310 2.14479 2.62449 2.97684 4.1405
15 0.257885 0.691197 1.340606 1.753050 2.13145 2.60248 2.94671 4.0728
16 0.257599 0.690132 1.336757 1.745884 2.11991 2.58349 2.92078 4.0150
17 0.257347 0.689195 1.333379 1.739607 2.10982 2.56693 2.89823 3.9651
18 0.257123 0.688364 1.330391 1.734064 2.10092 2.55238 2.87844 3.9216
19 0.256923 0.687621 1.327728 1.729133 2.09302 2.53948 2.86093 3.8834
20 0.256743 0.686954 1.325341 1.724718 2.08596 2.52798 2.84534 3.8495
21 0.256580 0.686352 1.323188 1.720743 2.07961 2.51765 2.83136 3.8193
22 0.256432 0.685805 1.321237 1.717144 2.07387 2.50832 2.81876 3.7921
23 0.256297 0.685306 1.319460 1.713872 2.06866 2.49987 2.80734 3.7676
24 0.256173 0.684850 1.317836 1.710882 2.06390 2.49216 2.79694 3.7454
25 0.256060 0.684430 1.316345 1.708141 2.05954 2.48511 2.78744 3.7251
26 0.255955 0.684043 1.314972 1.705618 2.05553 2.47863 2.77871 3.7066
27 0.255858 0.683685 1.313703 1.703288 2.05183 2.47266 2.77068 3.6896
28 0.255768 0.683353 1.312527 1.701131 2.04841 2.46714 2.76326 3.6739
29 0.255684 0.683044 1.311434 1.699127 2.04523 2.46202 2.75639 3.6594
30 0.255605 0.682756 1.310415 1.697261 2.04227 2.45726 2.75000 3.6460
inf 0.253347 0.674490 1.281552 1.644854 1.95996 2.32635 2.57583 3.2905

From the table it can be determined that t_{0.05} for 2\;\mathrm{degrees} of freedom is 2.92 while for 20 \;\mathrm{degrees} of freedom the value is 1.72.
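These tabulated critical values can be recovered numerically by inverting the upper-tail area with bisection. A stdlib-only Python sketch (the integration span, step count and bisection bounds are pragmatic choices, not part of the table):

```python
import math

def t_pdf(x, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def upper_tail(t, df, span=200.0, steps=4000):
    """P(T >= t) for t >= 0, by Simpson's rule on [t, t + span]."""
    h = span / steps
    total = t_pdf(t, df) + t_pdf(t + span, df)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * t_pdf(t + i * h, df)
    return total * h / 3

def t_critical(alpha, df):
    """Find t_alpha with P(T >= t_alpha) = alpha, by bisection."""
    lo, hi = 0.0, 700.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if upper_tail(mid, df) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The two values read from the table in the text
print(round(t_critical(0.05, 2), 2))   # close to 2.92
print(round(t_critical(0.05, 20), 2))  # close to 1.72
```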

Since the t-distribution is symmetric about a mean of zero, the following statement is true.

t_\alpha = -t_{1 - \alpha} && \text{and} && t_{1 - \alpha} = -t_\alpha

Therefore, if t_{0.05} = 2.92 then by applying the above statement t_{0.95} = -2.92

A t-distribution is mound shaped, with mean 0 and a spread that depends on the degrees of freedom. The greater the degrees of freedom, the smaller the spread. As the number of degrees of freedom increases, the t-distribution approaches a normal distribution. The spread of any t-distribution is greater than that of a standard normal distribution. This is due to the fact that in the denominator of the formula, \sigma has been replaced with s. Since s is a random quantity changing with various samples, the variability in t is greater, resulting in a larger spread.

Notice in the first graph that the spread of the inner curve is small, but in the second graph the two distributions are basically overlapping, and so are roughly normal. This is due to the increase in the degrees of freedom.

Here are the t-distributions for df = 1 and for df = 12 as graphed on the graphing calculator.

You are now on the Y = screen.

Y = \mathrm{tpdf}( X, 1) [Graph]

Repeat the steps to plot more than one t-distribution on the same screen.

Notice the difference in the two distributions.

The one with df = 12 approximates a normal curve.

The t-distribution can be used with any statistic having a bell-shaped distribution. The Central Limit Theorem states that the sampling distribution of a statistic will be close to normal with a large enough sample size. As a rough guideline, the Central Limit Theorem predicts a roughly normal sampling distribution under any of the following conditions:

  1. The population distribution is normal.
  2. The sampling distribution is symmetric and the sample size is \le 15.
  3. The sampling distribution is moderately skewed and the sample size is 16 \le n \le 30.
  4. The sample size is greater than 30, without outliers.

The t-distribution also has some unique properties. These properties are:

1. The mean of the distribution equals zero.

2. The population standard deviation is unknown.

3. The variance is equal to the degrees of freedom divided by the degrees of freedom minus 2. This means that the degrees of freedom must be greater than two to avoid the expression being undefined.

\text{Variance} = \frac{\text{df}} {\text{df} - 2} \quad \text{for} \quad \text{df} > 2
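The formula can be tabulated for a few values of df to see the variance fall toward 1 as the degrees of freedom grow; a minimal sketch:

```python
# Variance of the t-distribution: df / (df - 2), defined only for df > 2
def t_variance(df):
    if df <= 2:
        raise ValueError("variance is undefined for df <= 2")
    return df / (df - 2)

# As df grows, the variance shrinks toward 1 (the normal distribution)
for df in (3, 5, 10, 30, 100):
    print(df, round(t_variance(df), 3))
```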

4. The variance is always greater than one, although it approaches 1 as the degrees of freedom increase. This is due to the fact that as the degrees of freedom increase, the distribution is becoming more of a normal distribution.

5. Although the Student t-distribution is bell-shaped, the smaller sample sizes produce a flatter curve. The distribution is not as mounded as a normal distribution and the tails are thicker. As the sample size increases and approaches 30, the distribution approaches a normal distribution.

6. The distribution is unimodal and symmetric.

Example:

Duracell manufactures batteries that the CEO claims will last 300\;\mathrm{hours} under normal use. A researcher randomly selected 15 batteries from the production line and tested these batteries. The tested batteries had a mean life span of 290 \;\mathrm{hours} with a standard deviation of 50 \;\mathrm{hours}. If the CEO’s claim were true, what is the probability that 15 randomly selected batteries would have a mean life span of no more than 290\;\mathrm{hours}?

Solution:

t = \frac{\bar {x} - \mu_0} {s/ \sqrt{n}} = \frac{290 - 300} {50 / \sqrt{15}} = \frac{-10} {12.9099} \approx -0.7746

The degrees of freedom are n - 1 = 15 - 1 = 14.

Using the graphing calculator or a table of values, the cumulative probability is about 0.226, which means that if the true life span of a battery were 300 \;\mathrm{hours}, there is about a 22.6\% chance that the mean life span of the 15 tested batteries would be less than or equal to 290\;\mathrm{hours}. This probability is not small enough to reject the null hypothesis and count the discrepancy as significant.

Press 2ND [DISTR], select tcdf(, and enter the lower bound, the observed t and the degrees of freedom:

\text{tcdf}(-1\text{E}99, -0.7746, 14) \approx 0.226
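This cumulative probability can be double-checked without a calculator by integrating the t density from t up to 0 and using the symmetry of the distribution. A stdlib-only Python sketch (note that tpdf( gives only the height of the density curve; the cumulative area comes from tcdf( or from integration):

```python
import math

def t_pdf(x, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def left_tail(t, df, steps=2000):
    """P(T <= t) for t <= 0: 0.5 minus the Simpson integral from t to 0."""
    h = -t / steps
    total = t_pdf(t, df) + t_pdf(0.0, df)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * t_pdf(t + i * h, df)
    return 0.5 - total * h / 3

# Battery example: xbar = 290, mu0 = 300, s = 50, n = 15, df = 14
t = (290 - 300) / (50 / math.sqrt(15))
p = left_tail(t, 14)
print(round(t, 4), round(p, 3))
```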

Example:

You have just taken ownership of a pizza shop. The previous owner told you that you would save money if you bought the mozzarella cheese in a 4.5\;\mathrm{pound} slab. Each time you purchase a slab of cheese, you weigh it to ensure that you are receiving 72 \;\mathrm{ounces} of cheese. The results of 7 random measurements are 70, 69, 73, 68, 71, 69 and 71\;\mathrm{ounces}. Are these differences due to chance or is the distributor giving you less cheese than you deserve?

Solution:

Begin the problem by determining the mean of the sample and the sample standard deviation.

This can be done using the graphing calculator. \bar {x} \approx 70.143 and s \approx 1.676.

t = \frac{\bar {x} - \mu_0} {s/ \sqrt{n}} = \frac{70.143 - 72} {1.676 / \sqrt{7}} \approx -2.9315
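The sample mean, sample standard deviation and t statistic can be verified with Python's statistics module; carrying full precision through the calculation gives t of about -2.93, matching the rounded value above:

```python
import math
import statistics

weights = [70, 69, 73, 68, 71, 69, 71]  # the 7 measurements, in ounces

xbar = statistics.mean(weights)   # sample mean
s = statistics.stdev(weights)     # sample standard deviation (n - 1 divisor)
t = (xbar - 72) / (s / math.sqrt(len(weights)))

print(round(xbar, 3), round(s, 3), round(t, 2))  # 70.143 1.676 -2.93
```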

Example:

In the previous example the test statistic for testing that the mean weight of the cheese wasn’t 72\;\mathrm{ounces} was computed. Find and interpret the P-value.

Solution:

The test statistic computed in the previous example was -2.9315. Using technology, the P-value is 0.0262. If the mean weight of the cheese is really 72\;\mathrm{ounces}, the probability that 7 random measurements would give a value of t greater than 2.9315 or less than -2.9315 is about 0.0262.

Example:

In the previous example, the P-value for testing that the mean weight of cheese wasn’t 72\;\mathrm{ounces} was determined.

a) State the hypotheses.

b) Would the null hypothesis be rejected at the 10\% level? The 5\% level? The 1\% level?

Solution:

a) H_0: The mean weight of the cheese, \mu, is 72\;\mathrm{ounces}\ (\mu = 72).

H_a: \mu \ne 72

b) Because the P-value of 0.0262 is less than both 0.10 and 0.05, the null hypothesis would be rejected at the 10\% level and at the 5\% level. However, because 0.0262 is greater than 0.01, the null hypothesis would not be rejected at the 1\% level.

CK.MAT.ENG.SE.1.Prob-&-Stats-Adv.7.6
