
9.3: Inferences about Regression

Created by: CK-12

Learning Objectives

  • Make inferences about the regression models including hypothesis testing for linear relationships.
  • Make inferences about regression and predicted values including the construction of confidence intervals.
  • Check regression assumptions.

Introduction

In the previous section, we learned about the least-squares or the linear regression model. The linear regression model uses the concept of correlation to help us predict a variable based on our knowledge of scores on another variable. As we learned in the previous section, this concept is used quite frequently in statistical analysis to predict variables such as IQ, test performance, etc. In this section, we will investigate several inferences and assumptions that we can make about the linear regression model.

Hypothesis Testing for Linear Relationships

Let’s think for a minute about the relationship between correlation and the linear regression model. As we learned, if there is no correlation between two variables (X and Y), it is nearly impossible to fit a meaningful regression line to the points in a scatterplot. If there were no correlation and our correlation (r) value were 0, we would always come up with the same predicted value, namely the mean of the observed Y scores. The figure below shows an example of what a regression line fit to variables with no relationship (r=0) would look like. As you can see, for any value of X we always get the same predicted value.

Using this knowledge, we can see that if there is no relationship between Y and X, constructing a regression line or model doesn’t help us very much because the predicted score would always be the same. In other words, the regression model would be highly inaccurate. Therefore, when we estimate a linear regression model, we want to ensure that the regression coefficient in the population (\beta) does not equal zero. Furthermore, it is beneficial to test how far from zero the regression coefficient is, to strengthen our confidence in predictions of the Y scores.

In hypothesis testing of linear regression models, the null hypothesis to be tested is that the regression coefficient (\beta) equals zero. Our alternative hypothesis is that our regression coefficient does not equal zero.

H_0: \beta = 0
H_a: \beta \ne 0

We perform this hypothesis test similarly to the hypothesis tests conducted previously, and we next need to establish the critical values for the test. We use the t-distribution with n - 2 degrees of freedom to set these values. The general formula used to calculate the test statistic for testing this null hypothesis is:

t = \frac{\text{observed value} - \text{hypothesized or predicted value}} {\text{Standard Error of the statistic}} = \frac{b - \beta} {s_b}

To calculate the test statistic for the regression coefficient, we also need to estimate its sampling distribution. The statistic from this distribution that we will use is the standard error of the regression coefficient (s_b), defined as:

s_b = \frac{s_{Y*X}}{\sqrt{SS_X}}

where:

s_{Y*X} = the standard error of estimate

SS_X = the sum of squares for the predictor variable (X)

Example:

Let’s say that the football coach is using the results from a short physical fitness test to predict the results of a longer, more comprehensive one. He developed the regression equation \hat{Y} = 0.635X + 1.22, and the standard error of estimate is s_{Y*X} = 0.56. The summary statistics are as follows:

Summary statistics for two football fitness tests:

n = 24 \qquad \sum XY = 591.50
\sum X = 118 \qquad \sum Y = 104.3
\bar{X} = 4.92 \qquad \bar{Y} = 4.35
\sum X^2 = 704 \qquad \sum Y^2 = 510.01
SS_X = 123.83 \qquad SS_Y = 56.74

Using \alpha = .05, test the null hypothesis that, in the population, the regression coefficient is zero (H_0: \beta = 0).

Solution:

We use the t-distribution for this test statistic and find that the critical values in the t-distribution at n - 2 = 22 degrees of freedom are 2.074 standard scores above and below the mean. Therefore,

s_b = \frac{s_{Y*X}}{\sqrt{SS_X}} = \frac{0.56}{\sqrt{123.83}} = 0.05

t = \frac{b - \beta}{s_b} = \frac{0.635 - 0}{0.05} = 12.70

Since the observed value of the test statistic exceeds the critical value, the null hypothesis is rejected. We can conclude that if the null hypothesis were true, we would observe a regression coefficient of 0.635 by chance less than 5\% of the time.
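The computation above can be checked with a few lines of Python. This is a sketch using the summary statistics from the example; the critical value 2.074 is taken from a t-table for 22 degrees of freedom.

```python
import math

# Summary statistics from the fitness-test example above
s_yx = 0.56      # standard error of estimate
ss_x = 123.83    # sum of squares for the predictor X
b = 0.635        # sample regression coefficient
t_crit = 2.074   # two-tailed critical value of t for df = n - 2 = 22

s_b = s_yx / math.sqrt(ss_x)        # standard error of the coefficient
t = (b - 0) / round(s_b, 2)         # test statistic under H0: beta = 0
reject = abs(t) > t_crit
print(round(s_b, 2), round(t, 2), reject)  # 0.05 12.7 True
```

Note that the standard error is rounded to two decimal places before dividing, matching the hand calculation in the text.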

Making Inferences about Predicted Scores

As we have mentioned, the regression line simply makes predictions about variables based on the relationship in the existing data. However, it is important to remember that the regression line only infers or estimates what a value will be. These predictions are never accurate 100\% of the time unless there is a perfect correlation. What this means is that for every predicted value, we have a normal distribution (also known as the conditional distribution, since it is conditional on the X value) that describes the likelihood of obtaining other scores associated with that value of the predictor variable (X).

If we assume that these distributions are normal, we are able to make inferences about each of the predicted scores. One example of making inferences about the predicted scores is identifying probability levels associated with predicted scores. Using this concept, we are able to ask questions such as “If the predictor variable (X value) equals 4.0, what percentage of the distribution of Y scores will be lower than 3?”

The reason that we would ask questions like this depends on the scenario. Say, for example, that we want to know the percentage of students who score a 4 on the short physical fitness test whose scores on the long test are expected to be higher than 5. If the coach is using this score as a cutoff for playing in a varsity match and this percentage is too low, he may want to consider changing the standards of the test.

To find the percentage of students with scores above or below a certain point, we use the concept of standard scores and the standard normal distribution. Remember the general formula for calculating the standard score:

\text{Test Statistic} = \frac{\text{Observed Statistic} - \text{Population Mean}} {\text{Standard error}}

Applying this formula to the regression distribution, we find that the corresponding formula would be:

z = \frac{Y - \hat{Y}}{s_{Y*X}}

Since we have a certain predicted value for every value of X, the Y values take on the shape of a normal distribution. This distribution has a mean (the regression line) and a standard error which we found to be equal to 0.56. In short, the conditional distribution is used to determine the percentage of Y values that are associated with a specific value of X.

Example:

Using our example above, if a student scored a 5 on the short test, what is the probability that they would have a score of 5 or greater on the long physical fitness test?

Solution:

From the regression equation \hat{Y} = 0.635X + 1.22, we find that the predicted score for X=5 is \hat{Y} = 4.40. Consider the conditional distribution of Y scores for X=5. Under our assumption, this distribution is normal, centered on the predicted value (4.40), with a standard error of 0.56.

Therefore, to find the percentage of Y scores of 5 or greater, we use the general formula and find that:

z = \frac{Y - \hat{Y}} {s_{Y * X}} = \frac{5 - 4.40} {0.56} = 1.07

Using the z-distribution table, we find that the area to the right of a z score of 1.07 is .1423. Therefore, we can conclude that the proportion of long-test scores of 5 or greater, given a short-test score of 5, is .1423, or 14.23\%.
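The table lookup can be reproduced without a z-table. This is a minimal sketch using Python's standard library and the values from the worked example above; the tail probability comes from the complementary error function.

```python
import math

def normal_tail(z):
    # P(Z > z) for a standard normal variable, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

y_hat = 4.40                       # predicted long-test score for X = 5, rounded
z = round((5 - y_hat) / 0.56, 2)   # standard score: 1.07
p = normal_tail(z)                 # area to the right of z
print(z, round(p, 4))              # 1.07 0.1423
```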

Confidence Intervals

Similar to hypothesis testing for samples and populations, we can also build a confidence interval around our regression results. This helps us answer questions like “If the predictor value is equal to X, what are the likely values for Y?” It gives us a range of scores that has a certain probability of including the score that we are after.

We know that the standard error of a predicted score is smallest when the value of X is close to the mean \bar{X}, and it increases as X deviates from the mean. It is also larger the weaker a predictor the regression line is. The standard error of a predicted score is calculated using the formula:

s_{\hat{Y}} = s_{Y * X} \sqrt{1 + \frac{1} {n} + \frac{(X - \bar{X})^2} {SS_x}}

The general formula for the confidence interval for predicted scores is found by using the following formula:

CI = \hat{Y} \pm (t_{cv})(s_{\hat{Y}})

where:

\hat{Y} = the predicted score

t_{cv} = the critical value of t for df = n - 2

s_{\hat{Y}} = the standard error of the predicted score

Example:

Develop a 95\% confidence interval for the predicted scores from a student that scores a 4 on the short physical fitness exam (X=4).

Solution:

We calculate the standard error of the predicted value using the formula:

s_{\hat{Y}} = s_{Y * X} \sqrt{1 + \frac{1} {n} + \frac{(X - \bar{X})^2} {SS_x}} =  0.56 \sqrt{1 + \frac{1} {24} + \frac{(4 - 4.92)^2} {123.83}} = 0.57

Using the general formula for the confidence interval, we find that

CI = \hat{Y} \pm (t_{cv})(s_{\hat{Y}})
CI_{95} = 3.76 \pm (2.074)(0.57)
CI_{95} = 3.76 \pm 1.18
CI_{95} = (2.58, 4.94)

Therefore, we can say that we are 95\% confident that, given a student’s short physical fitness test score (X) of 4, the interval from 2.58 to 4.94 will contain the student’s score on the longer physical fitness test.
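The interval above can be verified in a few lines of Python. This is a sketch using the example's summary statistics; as in the text, the standard error of the predicted score is rounded to two decimal places before the margin is formed.

```python
import math

# Values from the fitness-test example; t_cv = 2.074 for df = 22 at alpha = .05
n, x_bar, ss_x, s_yx, t_cv = 24, 4.92, 123.83, 0.56, 2.074

x = 4
y_hat = 0.635 * x + 1.22                      # predicted score: 3.76
s_pred = round(s_yx * math.sqrt(1 + 1/n + (x - x_bar)**2 / ss_x), 2)  # 0.57
lo, hi = y_hat - t_cv * s_pred, y_hat + t_cv * s_pred
print(round(lo, 2), round(hi, 2))             # 2.58 4.94
```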

Regression Assumptions

We make several assumptions under a linear regression model including:

  1. At each value of X, there is a distribution of Y. These distributions have a mean centered on the predicted value and a standard error that is calculated using the sum of squares.
  2. The best regression model is a straight line. Using a regression model to predict scores only works if the regression line is a straight line. If the relationship is nonlinear, we could either transform the data (i.e., a logarithmic transformation) or try one of the other regression equations available with Excel or a graphing calculator.
  3. Homoscedasticity. The standard deviations, or variances, of the distributions of Y are equal for each value of X.
  4. Independence of observations. For each given value of X, the values of Y are independent of each other.
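One informal way to examine assumptions 1–3 is to fit the line and inspect the residuals. The sketch below uses hypothetical data points (not from this chapter) and the standard least-squares formulas; in practice you would plot the residuals against X and look for constant spread and no curved pattern.

```python
# Fit a least-squares line to hypothetical (x, y) data and inspect residuals
xs = [1, 2, 3, 4, 5, 6]
ys = [1.1, 2.0, 2.8, 4.2, 4.9, 6.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
ss_x = sum((x - x_bar) ** 2 for x in xs)
sp_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))

b = sp_xy / ss_x          # least-squares slope
a = y_bar - b * x_bar     # least-squares intercept
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]

# Residuals sum to zero by construction; checking that their spread is
# roughly constant across X is the informal homoscedasticity check.
print(round(sum(residuals), 10))
```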

Lesson Summary

  1. When we estimate a linear regression model, we want to ensure that the regression coefficient in the population (\beta) does not equal zero. To do this, we perform a hypothesis test where we set the regression coefficient equal to zero and test for significance.
  2. For each predicted value, we have a normal distribution (also known as the conditional distribution, since it is conditional on the X value) that describes the likelihood of obtaining other scores associated with that value of the predictor variable (X). We can use these distributions and the concept of standardized scores to make predictions about probability.
  3. We can also build confidence intervals around the predicted values to give us a better idea about the ranges likely to contain a certain score.
  4. We make several assumptions when dealing with a linear regression model including:
  • At each value of X, there is a distribution of Y
  • The regression model is a straight line
  • Homoscedasticity
  • Independence of observations

Review Questions

The college counselor is putting on a presentation about the financial benefits of further education and takes a random sample of 120 parents. Each parent was asked a number of questions including the number of years of education that they have (including college) and their yearly income (recorded in the thousands). The summary data for this survey are as follows:

n = 120, \quad r = 0.67, \quad \sum X = 1,782, \quad \sum Y = 1,854, \quad s_X = 3.6, \quad s_Y = 4.2, \quad SS_X = 1542

  1. What is the predictor variable? What is your reasoning behind this decision?
  2. Do you think that these two variables (income and level of formal education) are correlated? If so, please describe the nature of their relationship.
  3. What would be the regression equation for predicting income (Y) from the level of education (X)?
  4. Using this regression equation, predict the income for a person with 2 years of college (13.5 years of formal education).
  5. Test the null hypothesis that in the population, the regression coefficient for this scenario is zero.
    1. First develop the null and alternative hypotheses.
    2. Set the critical values at \alpha =.05.
    3. Compute the test statistic.
    4. Make a decision regarding the null hypothesis.
  6. For those parents with 15 years of formal education, what is the percentage that will have an annual income greater than \$18,500?
  7. For those parents with 12 years of formal education, what is the percentage that will have an annual income greater than \$18,500?
  8. Develop a 95\% confidence interval for a predicted annual income when a parent indicates that they have a college degree (i.e., 16 years of formal education).
  9. If you were the college counselor, what would you say in the presentation to the parents and students about the relationship between further education and salary? Would you encourage students to further their education based on these analyses? Why or why not?

Review Answers

  1. The predictor variable is the number of years of formal education. The reasoning behind this decision is that we are trying to determine and predict the financial benefits of further education (as measured by annual salary) by using the number of years of formal education (the predictor, or X, variable).
  2. Yes. With an r-value of 0.67, these two variables appear to be moderately to strongly correlated. The nature of the relationship is a relatively strong, positive correlation.
  3. \hat{Y} = 0.782X + 3.842
  4. For X = 13.5, \hat{Y} = 14.39, or \$14,390
  5. (a) H_0: \beta = 0, \; H_a: \beta \ne 0
  (b) The critical values are set at t = \pm 1.98.
  (c) s_b = \frac{s_{Y*X}}{\sqrt{SS_X}} = \frac{3.12}{\sqrt{1542}} = 0.08, \quad t = \frac{b - \beta}{s_b} = \frac{0.782 - 0}{0.08} = 9.8
  (d) Since the calculated test statistic of 9.8 exceeds the critical value of 1.98, we reject the null hypothesis and can conclude that if the null hypothesis were true, we would observe a regression coefficient of 0.782 by chance less than 5\% of the time.
  6. For X = 15, \hat{Y} = 15.57. Therefore, \$18,500 has a z-score of 0.94: z = \frac{Y - \hat{Y}}{s_{Y*X}} = \frac{18.5 - 15.57}{3.12} = 0.94. A z-score of 0.94 has a corresponding tail probability of .1736. This means that of the parents with 15 years of formal education, an estimated 17.36\% will have an income greater than \$18,500.
  7. For X = 12, \hat{Y} = 13.23. Therefore, \$18,500 has a z-score of 1.69: z = \frac{Y - \hat{Y}}{s_{Y*X}} = \frac{18.5 - 13.23}{3.12} = 1.69. A z-score of 1.69 has a corresponding tail probability of .0455. This means that of the parents with 12 years of formal education, an estimated 4.55\% will have an income greater than \$18,500.
  8. s_{\hat{Y}} = s_{Y*X} \sqrt{1 + \frac{1}{n} + \frac{(X - \bar{X})^2}{SS_X}} = 3.12 \sqrt{1 + \frac{1}{120} + \frac{(16 - 14.85)^2}{1542}} = 3.14

Using the general formula for the confidence interval (CI = \hat{Y} \pm (t_{cv})(s_{\hat{Y}})), we find that

CI_{95} = 16.35 \pm (1.98)(3.14) = 16.35 \pm 6.22
CI_{95} = (10.13, 22.57)

  9. Answers will vary; at the discretion of the teacher.
