8.5: Testing a Hypothesis for Dependent and Independent Samples
Learning Objectives
- Identify situations that contain dependent or independent samples.
- Calculate the pooled standard deviation for two independent samples.
- Calculate the test statistic to test hypotheses about dependent data pairs.
- Calculate the test statistic to test hypotheses about independent data pairs for both large and small samples.
- Calculate the test statistic to test hypotheses about the difference of proportions between two independent samples.
Introduction
In the previous lessons, we learned about hypothesis testing for proportions and means in large and small samples. However, in the examples in those lessons, only one sample was involved. In this lesson, we will apply the principles of hypothesis testing to situations involving two samples. There are many situations in everyday life where we would perform statistical analysis involving two samples. For example, suppose that we wanted to test a hypothesis about the effect of two medications on curing an illness. Or we may want to test the difference between the means of males and females on the SAT. In both of these cases, we would analyze both samples, and the hypothesis would address the difference between the two sample means.
In this lesson, we will identify situations with different types of samples, learn to calculate the test statistic, calculate the estimate for population variance for both samples, and calculate the test statistic to test hypotheses about the difference of proportions or means between samples.
Dependent and Independent Samples
When we are working with one sample, we know that we have to randomly select the sample from the population, measure that sample's statistics, and then make a hypothesis about the population based on that sample. When we work with two independent samples, we assume that if the samples are selected at random (or, in the case of medical research, the subjects are randomly assigned to a group), the two samples will vary only by chance, and the difference will not be statistically significant. In short, when we have independent samples, we assume that one sample does not affect the other.
Independent samples can occur in two scenarios.
In one, when testing the difference of the means between two fixed populations, we test the differences between samples from each population. When both samples are randomly selected, we can make inferences about the populations.
In the other, when working with subjects (people, pets, etc.), if we select a random sample and then randomly assign half of the subjects to one group and half to another, we can make inferences about the population.
Dependent samples are a bit different. Two samples of data are dependent when each observation in one sample is paired with a specific observation in the other sample. In short, these types of samples are related to each other. Dependent samples can occur in two scenarios. In one, a group may be measured twice, such as in a pre-test/post-test situation (scores on a test before and after the lesson). The other scenario is one in which an observation in one sample is matched with an observation in the second sample.
To distinguish between tests of hypotheses for independent and dependent samples, we use a different symbol for hypotheses with dependent samples. For dependent sample hypotheses, we use the delta symbol, \begin{align*}\delta\end{align*}, to symbolize the difference between the two samples. Therefore, in our null hypothesis, we state that the difference between the means of the two samples is equal to 0, or \begin{align*}\delta=0\end{align*}. This can be summarized as follows:
\begin{align*}H_0: \delta=\mu_1-\mu_2=0\end{align*}
Calculating the Pooled Estimate of Population Variance
When testing a hypothesis about two independent samples, we follow a similar process as when testing one random sample. However, when computing the test statistic, we need to calculate the estimated standard error of the difference between sample means, \begin{align*}s_{\bar{x}_1-\bar{x}_2}=\sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}\end{align*}.
Here, \begin{align*}n_1\end{align*} and \begin{align*}n_2\end{align*} are the sizes of the two samples, and \begin{align*}s^2\end{align*}, the pooled estimate of variance, is calculated with the formula \begin{align*}s^2=\frac{\sum(x_1-\bar{x}_1)^2+\sum(x_2-\bar{x}_2)^2}{n_1+n_2-2}\end{align*}. Often, the top part of the formula for pooled estimate of variance is simplified by substituting the symbol \begin{align*}SS\end{align*} for the sum of the squared deviations. Therefore, the formula is often expressed as \begin{align*}s^2=\frac{SS_1+SS_2}{n_1+n_2-2}\end{align*}.
Example: Suppose we have two independent samples of student reading scores. Calculate \begin{align*}s^2\end{align*}.
The data are as follows:
Sample 1 | Sample 2 |
---|---|
7 | 12 |
8 | 14 |
10 | 18 |
4 | 13 |
6 | 11 |
 | 10 |
From these samples, we can calculate a number of descriptive statistics that will help us solve for the pooled estimate of variance:
Descriptive Statistic | Sample 1 | Sample 2 |
---|---|---|
Number (\begin{align*}n\end{align*}) | 5 | 6 |
Sum of Observations (\begin{align*}\sum x\end{align*}) | 35 | 78 |
Mean of Observations (\begin{align*}\bar{x}\end{align*}) | 7 | 13 |
Sum of Squared Deviations (\begin{align*}\sum^n_{i=1} (x_i-\bar{x})^2\end{align*}) | 20 | 40 |
Using the formula for the pooled estimate of variance, we find that \begin{align*}s^2=6.67\end{align*}.
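As an illustrative check (not part of the original lesson text), the pooled estimate of variance for these two samples can be reproduced in a few lines of Python; the function name is ours:

```python
# Reading-score samples from the example above
sample1 = [7, 8, 10, 4, 6]
sample2 = [12, 14, 18, 13, 11, 10]

def sum_squared_deviations(data):
    """Sum of squared deviations from the sample mean (SS)."""
    mean = sum(data) / len(data)
    return sum((x - mean) ** 2 for x in data)

ss1 = sum_squared_deviations(sample1)  # 20.0
ss2 = sum_squared_deviations(sample2)  # 40.0

# Pooled estimate of variance: s^2 = (SS1 + SS2) / (n1 + n2 - 2)
pooled_variance = (ss1 + ss2) / (len(sample1) + len(sample2) - 2)
print(round(pooled_variance, 2))  # 6.67
```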
We will use this information to calculate the test statistic needed to evaluate the hypotheses.
Testing Hypotheses with Independent Samples
When testing hypotheses with two independent samples, we follow steps similar to those when testing one random sample:
- State the null and alternative hypotheses.
- Choose \begin{align*}\alpha\end{align*}.
- Set the criterion (critical values) for rejecting the null hypothesis.
- Compute the test statistic.
- Make a decision: reject or fail to reject the null hypothesis.
- Interpret the decision within the context of the problem.
When stating the null hypothesis, we assume there is no difference between the means of the two independent samples. Therefore, our null hypothesis in this case would be the following:
\begin{align*}H_0: \mu_1=\mu_2 \ \text{or} \ H_0: \mu_1-\mu_2=0\end{align*}
Similar to the one-sample test, the critical values that we set to evaluate these hypotheses depend on our alpha level, and our decision regarding the null hypothesis is carried out in the same manner. However, since we have two samples, we calculate the test statistic a bit differently and use the formula shown below:
\begin{align*}t=\frac{(\bar{x}_1-\bar{x}_2)-(\mu_1-\mu_2)}{s_{\bar{x}_1-\bar{x}_2}}\end{align*}
where:
\begin{align*}\bar{x}_1-\bar{x}_2\end{align*} is the difference between the sample means.
\begin{align*}\mu_1-\mu_2\end{align*} is the difference between the hypothesized population means.
\begin{align*}s_{\bar{x}_1-\bar{x}_2}\end{align*} is the standard error of the difference between sample means.
Example: The head of the English department is interested in the difference in writing scores between remedial freshman English students who are taught by different teachers. The incoming freshmen needing remedial services are randomly assigned to one of two English teachers and are given a standardized writing test after the first semester. We take a sample of eight students from one class and nine from the other. Is there a difference in achievement on the writing test between the two classes? Use a 0.05 significance level.
First, we would generate our hypotheses based on the two samples as follows:
\begin{align*}H_0: \mu_1 &= \mu_2\\ H_a: \mu_1 & \neq \mu_2\end{align*}
Also, this is a two-tailed test. For this example, we have two independent samples from the population, with a total of 17 students under examination. Since the sample sizes are small, we use the \begin{align*}t\end{align*}-distribution. In this example, we have 15 degrees of freedom, which is the total number of observations minus 2. With a 0.05 significance level and the \begin{align*}t\end{align*}-distribution, we find that our critical values are 2.13 standard scores above and below the mean.
To calculate the test statistic, we first need to find the pooled estimate of variance from our sample. The data from the two groups are as follows:
Sample 1 | Sample 2 |
---|---|
35 | 52 |
51 | 87 |
66 | 76 |
42 | 62 |
37 | 81 |
46 | 71 |
60 | 55 |
55 | 67 |
53 | |
From this sample, we can calculate several descriptive statistics that will help us solve for the pooled estimate of variance:
Descriptive Statistic | Sample 1 | Sample 2 |
---|---|---|
Number (\begin{align*}n\end{align*}) | 9 | 8 |
Sum of Observations (\begin{align*}\sum x\end{align*}) | 445 | 551 |
Mean of Observations (\begin{align*}\bar{x}\end{align*}) | 49.44 | 68.88 |
Sum of Squared Deviations (\begin{align*}\sum^n_{i=1}(x_i-\bar{x})^2\end{align*}) | 862.22 | 1058.88 |
Therefore, the pooled estimate of variance can be calculated as shown:
\begin{align*}s^2=\frac{SS_1+SS_2}{n_1+n_2-2}=128.07\end{align*}
This means that the standard error of the difference of the sample means can be calculated as follows:
\begin{align*}s_{\bar{x}_1-\bar{x}_2}=\sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2} \right)}=\sqrt{128.07 \left(\frac{1}{9}+\frac{1}{8}\right)} \approx 5.50\end{align*}
Using this information, we can finally solve for the test statistic:
\begin{align*}t=\frac{(\bar{x}_1-\bar{x}_2)-(\mu_1-\mu_2)}{s_{\bar{x}_1-\bar{x}_2}}=\frac{(49.44-68.88)-(0)}{5.50} \approx -3.53\end{align*}
Since \begin{align*}-3.53\end{align*} is less than the critical value of \begin{align*}-2.13\end{align*}, we decide to reject the null hypothesis and conclude that there is a significant difference in the achievement of the students assigned to different teachers.
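The whole calculation above can be checked with a short Python sketch (the function name is ours, and the hypothesized difference under the null is 0):

```python
import math

# Writing-test scores for the two classes in the example
sample1 = [35, 51, 66, 42, 37, 46, 60, 55, 53]   # n1 = 9
sample2 = [52, 87, 76, 62, 81, 71, 55, 67]       # n2 = 8

def two_sample_t(x1, x2):
    """Two-sample t statistic using the pooled estimate of variance."""
    n1, n2 = len(x1), len(x2)
    mean1, mean2 = sum(x1) / n1, sum(x2) / n2
    ss1 = sum((x - mean1) ** 2 for x in x1)
    ss2 = sum((x - mean2) ** 2 for x in x2)
    pooled_var = (ss1 + ss2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se   # hypothesized difference is 0

t = two_sample_t(sample1, sample2)
print(round(t, 2))  # -3.53
```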
Testing Hypotheses about the Difference in Proportions between Two Independent Samples
Suppose we want to test if there is a difference between proportions of two independent samples. As discussed in the previous lesson, proportions are used extensively in polling and surveys, especially by people trying to predict election results. It is possible to test a hypothesis about the proportions of two independent samples by using a method similar to that described above. We might perform these hypothesis tests in the following scenarios:
- When examining the proportions of children living in poverty in two different towns.
- When investigating the proportions of freshman and sophomore students who report test anxiety.
- When testing if the proportions of high school boys and girls who smoke cigarettes are equal.
In testing hypotheses about the difference in proportions of two independent samples, we state the hypotheses and set the criterion for rejecting the null hypothesis in similar ways as the other hypothesis tests. In these types of tests, we set the proportions of the samples equal to each other in the null hypothesis, \begin{align*}H_0: p_1=p_2\end{align*}, and use the appropriate standard table to determine the critical values. Remember, for small samples, we generally use the \begin{align*}t\end{align*}-distribution, and for samples over 30, we generally use the \begin{align*}z\end{align*}-distribution.
When solving for the test statistic in large samples, we use the following formula:
\begin{align*}z=\frac{(\hat{p}_1-\hat{p}_2)-(p_1-p_2)}{s_{p_1-p_2}}\end{align*}
where:
\begin{align*}\hat{p}_1\end{align*} and \begin{align*}\hat{p}_2\end{align*} are the observed sample proportions.
\begin{align*}p_1\end{align*} and \begin{align*}p_2\end{align*} are the population proportions under the null hypothesis.
\begin{align*}s_{p_1-p_2}\end{align*} is the standard error of the difference between independent proportions.
Similar to the standard error of the difference between independent means, we need to do a bit of work to calculate the standard error of the difference between independent proportions. To find the standard error under the null hypothesis, we assume that \begin{align*}p_1=p_2=p\end{align*}, and we use all the data to calculate \begin{align*}\hat{p}\end{align*} as an estimate for \begin{align*}p\end{align*} as follows:
\begin{align*}\hat{p}=\frac{n_1 \hat{p}_1+n_2 \hat{p}_2}{n_1+n_2}\end{align*}
Now the standard error of the difference between independent proportions is \begin{align*}\sqrt{\hat{p}(1-\hat{p}) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}\end{align*}.
This means that the test statistic is now \begin{align*}z=\frac{(\hat{p}_1-\hat{p}_2)-(0)}{\sqrt{\hat{p}(1-\hat{p}) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}\end{align*}.
Example: Suppose that we are interested in finding out which of two cities is more satisfied with the services provided by the city government. We take a survey and find the following results:
Number Satisfied | City 1 | City 2 |
---|---|---|
Yes | 122 | 84 |
No | 78 | 66 |
Sample Size | \begin{align*}n_1=200\end{align*} | \begin{align*}n_2=150\end{align*} |
Proportion Who Said Yes | 0.61 | 0.56 |
Is there a statistical difference in the proportions of citizens who are satisfied with the services provided by the city government? Use a 0.05 level of significance.
First, we establish the null and alternative hypotheses:
\begin{align*}H_0: p_1 &= p_2\\ H_a:p_1 &\neq p_2\end{align*}
Since we have large sample sizes, we will use the \begin{align*}z\end{align*}-distribution. At a 0.05 level of significance, our critical values are \begin{align*}\pm 1.96\end{align*}. To solve for the test statistic, we must first solve for the standard error of the difference between proportions:
\begin{align*}\hat{p} &= \frac{(200)(0.61)+(150)(0.56)}{350}=0.589\\ s_{p_1-p_2} &= \sqrt{(0.589)(0.411) \left(\frac{1}{200}+\frac{1}{150}\right)} \approx 0.053\end{align*}
Therefore, the test statistic can be calculated as shown:
\begin{align*}z=\frac{(0.61-0.56)-(0)}{0.053} \approx 0.94\end{align*}
Since 0.94 does not exceed the critical value of 1.96, the null hypothesis is not rejected. Therefore, we conclude that the difference in the proportions could have occurred by chance and that there is no statistically significant difference in the level of satisfaction between citizens of the two cities.
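The two-proportion calculation can be sketched in Python using the counts from the survey table (variable names are ours):

```python
import math

# Survey results from the example: "yes" counts and sample sizes
yes1, n1 = 122, 200   # City 1
yes2, n2 = 84, 150    # City 2

p1_hat = yes1 / n1    # 0.61
p2_hat = yes2 / n2    # 0.56

# Pooled proportion under the null hypothesis p1 = p2
p_hat = (yes1 + yes2) / (n1 + n2)

# Standard error of the difference between independent proportions
se = math.sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))

z = (p1_hat - p2_hat) / se
print(round(z, 2))  # 0.94
```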
Testing Hypotheses with Dependent Samples
When testing a hypothesis about two dependent samples, we follow the same process as when testing one random sample or two independent samples:
- State the null and alternative hypotheses.
- Choose the level of significance.
- Set the criterion (critical values) for rejecting the null hypothesis.
- Compute the test statistic.
- Make a decision: reject or fail to reject the null hypothesis.
- Interpret our results.
As mentioned in the section above, our null hypothesis for two dependent samples states that there is no difference between the means of the two samples. In other words, the null hypothesis is \begin{align*}H_0: \delta=\mu_1-\mu_2=0\end{align*}. We set the criterion for evaluating the hypothesis in the same way that we do with our other examples: by first establishing an alpha level and then finding the critical values using a \begin{align*}t\end{align*}-distribution table. Calculating the test statistic for dependent samples is a bit different, since we are dealing with two sets of paired data. The first quantity we need is \begin{align*}\bar{d}\end{align*}, the mean of the differences between the paired observations, which is also the difference in the means of the two samples: \begin{align*}\bar{d}=\bar{x}_1-\bar{x}_2\end{align*}. We also need to know the standard error of the difference between the two samples. Since our population variance is unknown, we estimate it by first using the following formula for the standard deviation of the differences:
\begin{align*}s^2_d &= \frac{\sum (d-\bar{d})^2}{n-1}\\ s_d &= \sqrt{\frac{\sum d^2-\frac{\left (\sum d \right )^2}{n}}{n-1}}\end{align*}
where:
\begin{align*}s^2_d\end{align*} is the variance of the differences.
\begin{align*}d\end{align*} is the difference between corresponding pairs within the two samples.
\begin{align*}\bar{d}\end{align*} is the mean of the differences (equivalently, the difference between the means of the two samples).
\begin{align*}n\end{align*} is the number of pairs.
\begin{align*}s_d\end{align*} is the standard deviation of the differences.
With the standard deviation of the differences, we can calculate the standard error of the difference between the two samples using the following formula:
\begin{align*}s_{\bar{d}}=\frac{s_d}{\sqrt{n}}\end{align*}
After we calculate the standard error, we can use the general formula for the test statistic as shown below:
\begin{align*}t=\frac{\bar{d}-\delta}{s_{\bar{d}}}\end{align*}
Example: A math teacher wants to determine the effectiveness of her statistics lesson and gives a pre-test and a post-test to 9 students in her class. Our null hypothesis is that there is no difference between the means of the two samples, and our alternative hypothesis is that the two means are not equal. In other words, we are testing whether the lesson changed the mean score, using the following hypotheses:
\begin{align*}H_0: \delta &= \mu_1-\mu_2=0\\ H_a: \delta &= \mu_1-\mu_2 \neq 0\end{align*}
The results for the pre-test and post-test are shown below:
Subject | Pre-test Score | Post-test Score | Difference (\begin{align*}d\end{align*}) | \begin{align*}d^2\end{align*} |
---|---|---|---|---|
1 | 78 | 80 | 2 | 4 |
2 | 67 | 69 | 2 | 4 |
3 | 56 | 70 | 14 | 196 |
4 | 78 | 79 | 1 | 1 |
5 | 96 | 96 | 0 | 0 |
6 | 82 | 84 | 2 | 4 |
7 | 84 | 88 | 4 | 16 |
8 | 90 | 92 | 2 | 4 |
9 | 87 | 92 | 5 | 25 |
Sum | 718 | 750 | 32 | 254 |
Mean | 79.7 | 83.3 | 3.6 | |
Using the information from the table above, we can first solve for the standard deviation of the samples, then the standard error of the difference between the two samples, and finally the test statistic.
Standard Deviation:
\begin{align*}s_d=\sqrt{\frac{\sum d^2-\frac{(\sum d)^2}{n}}{n-1}}=\sqrt{\frac{254-\frac{(32)^2}{9}}{8}} \approx 4.19\end{align*}
Standard Error of the Difference:
\begin{align*}s_{\bar{d}}=\frac{s_d}{\sqrt{n}}=\frac{4.19}{\sqrt{9}}=1.40\end{align*}
Test Statistic (\begin{align*}t\end{align*}-test):
\begin{align*}t=\frac{\bar{d}-\delta}{s_{\bar{d}}}=\frac{3.6-0}{1.40} \approx 2.57\end{align*}
With 8 degrees of freedom (the number of pairs minus 1) and a significance level of 0.05, we find our critical values to be \begin{align*}\pm 2.31\end{align*}. Since our test statistic exceeds 2.31, we can reject the null hypothesis that the means of the two samples are equal and conclude that the lesson had an effect on student achievement.
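As a quick Python sketch (not part of the original lesson), the paired calculation works directly from the two score lists. Note that carrying unrounded intermediate values gives a test statistic of about 2.55, slightly different from the 2.57 above, which uses the rounded values 3.6 and 1.40:

```python
import math

# Pre-test and post-test scores for the 9 students
pre  = [78, 67, 56, 78, 96, 82, 84, 90, 87]
post = [80, 69, 70, 79, 96, 84, 88, 92, 92]

d = [b - a for a, b in zip(pre, post)]   # paired differences
n = len(d)

d_bar = sum(d) / n                       # mean difference, about 3.56
sum_d = sum(d)                           # 32
sum_d2 = sum(x ** 2 for x in d)          # 254

# Standard deviation of the differences (computational formula)
s_d = math.sqrt((sum_d2 - sum_d ** 2 / n) / (n - 1))

# Standard error of the mean difference, then the t statistic
se = s_d / math.sqrt(n)
t = (d_bar - 0) / se
print(round(t, 2))  # 2.55 (the text's 2.57 comes from rounded intermediates)
```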
Lesson Summary
In addition to testing single samples associated with a mean, we can also perform hypothesis tests with two samples. We can test two independent samples, which are samples that do not affect one another, or dependent samples, which are samples that are related to each other.
When testing a hypothesis about two independent samples, we follow a similar process as when testing one random sample. However, when computing the test statistic, we need to calculate the estimated standard error of the difference between sample means, which is found by using the following formula:
\begin{align*}s_{\bar{x}_1-\bar{x}_2} = \sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2}\right)} \ \text{with} \ s^2=\frac{SS_1+SS_2}{n_1+n_2-2}\end{align*}
We carry out the test on the means of two independent samples in a way similar to that of testing one random sample. However, we use the following formula to calculate the test statistic, with the standard error defined above:
\begin{align*}t=\frac{(\bar{x}_1-\bar{x}_2)-(\mu_1-\mu_2)}{s_{\bar{x}_1-\bar{x}_2}}\end{align*}
We can also test the proportions associated with two independent samples. In order to calculate the test statistic associated with two independent samples, we use the formula shown below:
\begin{align*}z=\frac{(\hat{p}_1-\hat{p}_2)-(0)}{\sqrt{\hat{p}(1-\hat{p}) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}} \ \text{with} \ \hat{p}=\frac{n_1 \hat{p}_1+n_2 \hat{p}_2}{n_1+n_2}\end{align*}
We can also test hypotheses about the mean difference between two dependent samples. To calculate the test statistic for two dependent samples, we use the following formula:
\begin{align*}t=\frac{\bar{d}-\delta}{s_{\bar{d}}} \ \text{with} \ s_{\bar{d}}=\frac{s_d}{\sqrt{n}} \ \text{and} \ s_d=\sqrt{\frac{\sum d^2 - \frac{(\sum d)^2}{n}}{n-1}}\end{align*}
Review Questions
- In hypothesis testing, we have scenarios that have both dependent and independent samples. Give an example of an experiment with dependent samples and an experiment with independent samples.
- True or False: When we test the difference between the means of males and females on the SAT, we are using independent samples.
- A study is conducted on the effectiveness of a drug on the hyperactivity of laboratory rats. Two random samples of rats are used for the study. One group is given Drug A, and the other group is given Drug B. The number of times that each rat pushes a lever is recorded. The following results for this test were calculated:
Drug A | Drug B | |
---|---|---|
\begin{align*}\bar{x}\end{align*} | 75.6 | 72.8 |
\begin{align*}n\end{align*} | 18 | 24 |
\begin{align*}s^2\end{align*} | 12.25 | 10.24 |
\begin{align*}s\end{align*} | 3.5 | 3.2 |
(a) Does this scenario involve dependent or independent samples? Explain.
(b) What would the hypotheses be for this scenario?
(c) Compute the pooled estimate for population variance.
(d) Calculate the estimated standard error for this scenario.
(e) What is the test statistic, and at an alpha level of 0.05, what conclusions would you make about the null hypothesis?
- A survey is conducted on attitudes towards drinking. A random sample of eight married couples is selected, and the husbands and wives respond to an attitude-toward-drinking scale. The scores are as follows:
Husbands | Wives |
---|---|
16 | 15 |
20 | 18 |
10 | 13 |
15 | 10 |
8 | 12 |
19 | 16 |
14 | 11 |
15 | 12 |
(a) What would be the hypotheses for this scenario?
(b) Calculate the estimated standard deviation for this scenario.
(c) Compute the standard error of the difference for these samples.
(d) What is the test statistic, and at an alpha level of 0.05, what conclusions would you make about the null hypothesis?
Keywords
- \begin{align*}\alpha\end{align*}
- \begin{align*}\alpha\end{align*} is called the level of significance. \begin{align*}\alpha = P(rejecting \ H_0|H_0 \ is \ true) = P(making \ a \ type \ I \ error)\end{align*}
- Alpha level
- The general approach to hypothesis testing focuses on the type I error: rejecting the null hypothesis when it may be true. The probability of this error is the level of significance, also known as the alpha level.
- Alternative hypothesis
- The hypothesis to be accepted if the null (default) hypothesis is rejected.
- \begin{align*}\beta\end{align*}
- \begin{align*}\beta\end{align*} is the probability of making a type II error. \begin{align*}\beta = P(not \ rejecting \ H_0|H_0 \ is \ false)=P(making \ a \ type \ II \ error)\end{align*}
- Critical region
- The values of the test statistic that allow us to reject the null hypothesis.
- Critical values
- To calculate the critical regions, we must first find the cut-offs, or the critical values, where the critical regions start.
- Degrees of freedom
- A measure corresponding to the sample size that determines the shape of Student’s \begin{align*}t\end{align*}-distribution.
- Dependent samples
- Two samples of data are dependent when each observation in one sample is paired with a specific observation in the other sample.
- Hypothesis testing
- Testing the difference between a hypothesized value of a parameter and the test statistic.
- Independent samples
- When we work with two independent samples, we assume that if the samples are selected at random, the two samples will vary only by chance, and the difference will not be statistically significant.
- Level of significance
- The strength of the sample evidence needed to reject the null hypothesis.
- Null hypothesis \begin{align*}(H_0)\end{align*}
- The default hypothesis, a hypothesis about a parameter that is tested.
- One-tailed test
- When the alternative hypothesis \begin{align*}H_1\end{align*} is one-sided, like \begin{align*}\theta > \theta_0\end{align*} or \begin{align*}\theta < \theta_0\end{align*}, the rejection region is taken only on one side of the sampling distribution. This is called a one-tailed test.
- \begin{align*}P-\end{align*}value
- We can also evaluate a hypothesis by asking, “What is the probability of obtaining the value of the test statistic that we did if the null hypothesis is true?” This is called the \begin{align*}P-\end{align*}value.
- Pooled estimate of variance
- Here, \begin{align*}n_1\end{align*} and \begin{align*}n_2\end{align*} are the sizes of the two samples, and \begin{align*}s^2\end{align*}, the pooled estimate of variance, is calculated with the formula \begin{align*}s^2=\frac{\sum(x_1-\bar{x}_1)^2+\sum(x_2-\bar{x}_2)^2}{n_1+n_2-2}\end{align*}.
- Power of a test
- The power of a test is defined as the probability of rejecting the null hypothesis when it is false.
- Standard error of the difference
- When testing a hypothesis about two independent samples, we follow a similar process as when testing one random sample. However, when computing the test statistic, we need to calculate the estimated standard error of the difference between sample means, \begin{align*}s_{\bar{x}_1-\bar{x}_2} = \sqrt{s^2 \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}\end{align*}.
- Student's \begin{align*}t-\end{align*}distributions
- Student's \begin{align*}t-\end{align*}distributions are a family of distributions that, like the normal distribution, are symmetrical, bell-shaped, and centered on a mean.
- Test statistic
- Before evaluating our hypotheses by determining the critical region and calculating the test statistic, we need to confirm that the distribution is normal and determine the hypothesized mean, \begin{align*}\mu\end{align*}, of the distribution. \begin{align*}z=\frac{\bar{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\end{align*}
- Two-tailed test
- The two-tailed test is a statistical test used in inference, in which a given statistical hypothesis, \begin{align*}H_0\end{align*} (the null hypothesis), will be rejected when the value of the test statistic is either sufficiently small or sufficiently large.
- Type I error
- A type I error occurs when one rejects the null hypothesis when it is true.
- Type II error
- A type II error occurs when one fails to reject the null hypothesis when the alternative hypothesis is true.