Multiple Regression

Learning Objectives

  • Understand the multiple regression equation and the coefficient of determination for the correlation of three or more variables.
  • Calculate the multiple regression equation using technological tools.
  • Calculate the standard error of a coefficient, test a coefficient for significance to evaluate a hypothesis, and calculate the confidence interval for a coefficient using technological tools.

Introduction

In the previous sections, we learned how to examine the relationship between two variables by calculating the correlation coefficient and the linear regression line. But, as we all know, we often work with more than two variables. For example, what happens if we want to examine the impact that class size and number of faculty members have on a university ranking? Since we are taking multiple variables into account, the simple linear regression model just won’t work. In multiple linear regression, scores for one variable (in this example, university ranking) are predicted using multiple predictor variables (class size and number of faculty members).

Another common use of the multiple regression model is in estimating the selling price of a home. A number of variables go into determining how much a particular house will cost, including the square footage, the number of bedrooms, the number of bathrooms, the age of the house, the neighborhood, etc. Analysts use multiple regression to estimate the selling price in relation to all of these different variables.

In this section, we will examine the components of the multiple regression equation, calculate the equation using technological tools, and use this equation to test for significance in order to evaluate a hypothesis.

Understanding the Multiple Regression Equation

If we were to try to draw a multiple regression model, it would be a bit more difficult than drawing the model for linear regression. Let’s say that we have two predictor variables (X_1 and X_2) that are predicting the desired variable (Y). The regression equation would be:

\hat{Y} = b_1 X_1 + b_2 X_2 + a

Since there are three variables, each observation has a score on each of them, so the scores would be plotted in three dimensions. When there are more than two predictor variables, we would continue to plot these in higher dimensions. Regardless of how many predictor variables we have, we still use the least squares method to minimize the distance between the actual and predicted values.

When predicting values using multiple regression, we can also use the standard score form of the formula:

z_{\hat{Y}} = \beta_1 z_1 + \beta_2 z_2 + \ldots

where:

z_{\hat{Y}} = the standard score of the predicted (criterion) variable

\beta = the standardized regression coefficient

z = the standard score of the predictor variable
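
For example, suppose (hypothetically) that \beta_1 = 0.60 and \beta_2 = 0.25, and that a particular observation falls 1.2 standard deviations above the mean on the first predictor and 0.4 standard deviations below the mean on the second. The predicted standard score would be:

z_{\hat{Y}} = (0.60)(1.2) + (0.25)(-0.4) = 0.72 - 0.10 = 0.62

so we would predict that observation to score about 0.62 standard deviations above the mean on Y.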

To solve for the regression and constant coefficients, we first need to determine the multiple correlation coefficient (R) and the coefficient of determination, also known as the proportion of shared variance (R^2). In a linear regression model, R^2 was based on the squared deviations between the actual values and those predicted by the regression line. So what does R^2 look like in a multiple regression model? Essentially, as in the linear regression model, the theory behind the computation of the multiple regression equation is to minimize the sum of the squared deviations from the observations to the regression plane.

In most situations, we use the computer to calculate the multiple regression equation and determine the coefficients in this equation. We can also perform multiple regression on a TI-83/84 calculator (a program for this can be downloaded from http://www.wku.edu/~david.neal/manual/ti83.html). However, it is helpful to walk through the calculations that go into the multiple regression equation so that we can get a better understanding of how this formula works.

After we find the correlation values (r) between the variables, we can use the following formulas to determine the regression coefficients for each of the predictor (X) variables:

\beta_1 = \frac{r_{Y1} - (r_{Y2})(r_{12})}{1 - r^2_{12}}

\beta_2 = \frac{r_{Y2} - (r_{Y1})(r_{12})}{1 - r^2_{12}}

where:

\beta_1 = the standardized regression coefficient for the first predictor variable (and similarly for \beta_2)

r_{Y1} = correlation between the criterion variable (Y) and the first predictor variable (X_1)

r_{Y2} = correlation between the criterion variable (Y) and the second predictor variable (X_2)

r_{12} = correlation between the two predictor variables
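
These formulas translate directly into a few lines of code. Here is a minimal Python sketch, using hypothetical correlation values chosen purely for illustration:

# Standardized regression coefficients (betas) from the pairwise
# correlations among Y, X1 and X2 (hypothetical values).
r_y1 = 0.70   # correlation between Y and X1
r_y2 = 0.55   # correlation between Y and X2
r_12 = 0.30   # correlation between the two predictors

beta_1 = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2)
beta_2 = (r_y2 - r_y1 * r_12) / (1 - r_12 ** 2)

print(round(beta_1, 3))   # 0.588
print(round(beta_2, 3))   # 0.374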

After solving for the beta coefficients, we can compute the b coefficients using the following formulas:

b_1 = \beta_1 \left(\frac{s_Y}{s_1}\right)

b_2 = \beta_2 \left(\frac{s_Y}{s_2}\right)

where:

s_Y = the standard deviation of the criterion variable (Y)

s_1 = the standard deviation of the particular predictor variable (1 for the first predictor variable and so forth)

After solving for the regression coefficients, we can finally solve for the regression constant by using the formula:

a = \bar{Y} - \sum^k_{i = 1} b_i \bar{X}_i

Again, since these formulas and calculations are extremely tedious to complete by hand, we use the computer or TI-83 calculator to solve for the coefficients in the multiple regression equation.
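
Continuing the sketch above (again with hypothetical standard deviations and means), the unstandardized coefficients and the constant follow directly:

# Convert the betas to unstandardized b coefficients and solve for
# the regression constant a (all values hypothetical).
beta_1, beta_2 = 0.588, 0.374              # betas from the sketch above
s_y, s_1, s_2 = 8.0, 5.0, 3.0              # standard deviations of Y, X1, X2
y_bar, x1_bar, x2_bar = 50.0, 20.0, 10.0   # means of Y, X1, X2

b_1 = beta_1 * (s_y / s_1)                 # 0.941
b_2 = beta_2 * (s_y / s_2)                 # 0.997
a = y_bar - (b_1 * x1_bar + b_2 * x2_bar)  # 21.211

print(f"Y-hat = {b_1:.3f} X1 + {b_2:.3f} X2 + {a:.3f}")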

Calculating the Multiple Regression Equation using Technological Tools

As mentioned, there are a variety of technological tools for calculating the coefficients in the multiple regression equation. On the computer, several programs can calculate the multiple regression equation, including Microsoft Excel, SAS (the Statistical Analysis System) and SPSS (the Statistical Package for the Social Sciences). Each of these programs allows the user to calculate the multiple regression equation and provides summary statistics for each model.

For the purposes of this lesson, we will use summary tables produced by Microsoft Excel to solve problems with multiple regression equations. While the summary tables produced by the different technological tools differ slightly in format, they all provide the information needed to build a multiple regression model, conduct hypothesis tests and construct confidence intervals. Let’s take a look at an example of a summary statistics table so that we get a better idea of how we can use technological tools to build multiple regression models.

Example:

Let’s say that we want to predict the amount of water consumed by football players during summer practices. The football coach notices that the water consumption tends to be influenced by the time that the players are on the field and the temperature. He measures the average water consumption, temperature and practice time for seven practices and records the following data:

Temperature (°F)   Practice Time (Hrs)   H2O Consumption (in ounces)
75                 1.85                  16
83                 1.25                  20
85                 1.5                   25
85                 1.75                  27
92                 1.15                  32
97                 1.75                  48
99                 1.6                   48

Figure: Water consumption by football players compared to practice time and temperature.

Here is the procedure for performing a multiple regression in Excel using this set of data.

  1. Copy and paste the table into an empty Excel worksheet.
  2. Select Data Analysis from the Tools menu and choose “Regression” from the list that appears.
  3. Place the cursor in the “Input Y range” field and select the third column.
  4. Place the cursor in the “Input X range” field and select the first and second columns.
  5. Place the cursor in the “Output Range” field and click somewhere in a blank cell below and to the left of the table.
  6. Click “Labels” so that the names of the predictor variables will be displayed in the table.
  7. Click OK, and the results shown below will be displayed.

SUMMARY OUTPUT

Regression Statistics

Multiple R          0.996822
R Square            0.993654
Adjusted R Square   0.990481
Standard Error      1.244877
Observations        7

ANOVA
               df   SS         MS         F          Significance F
Regression     2    970.6583   485.3291   313.1723   4.03E-05
Residual       4    6.198878   1.549719
Total          6    976.8571

               Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
Intercept      -121.655       6.540348         -18.6007   4.92E-05   -139.814    -103.496
Temperature    1.512364       0.060771         24.88626   1.55E-05   1.343636    1.681092
Practice Time  12.53168       1.93302          6.482954   0.002918   7.164746    17.89862

Remember, we can also use the TI-83/84 calculator to perform multiple regression analysis. The program for this analysis can be downloaded at http://www.wku.edu/~david.neal/manual/ti83.html.

In this output, we have a number of summary statistics that give us information about the model. As you can see from the printout above, we have information for each variable on the regression coefficient (b), the standard error of the regression coefficient se(b) and the R^2 value.

Using this information, we can take all of the regression coefficients and put them together to make our model. In this example, our regression equation would be \hat{Y} = -121.66 + 1.51X_1 + 12.53X_2, where X_1 is the temperature and X_2 is the practice time. Each of these coefficients tells us something about the relationship between the predictor variable and the predicted outcome. The temperature coefficient of 1.51 tells us that for every 1.0 degree increase in temperature, we predict an increase of about 1.5 ounces of water consumed if we hold the practice time constant. Similarly, we find that with every additional hour of practice time, we predict players to consume about 12.5 additional ounces of water (roughly 2.1 ounces per 10 minutes) if we hold the temperature constant.
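
For example, to predict the water consumption at a hypothetical 90-degree practice lasting 1.5 hours, we substitute these values into the equation:

\hat{Y} = -121.66 + 1.51(90) + 12.53(1.5) = -121.66 + 135.90 + 18.80 \approx 33.0

so we would predict the players to consume roughly 33 ounces of water.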

With an R^2 of 0.99, we can conclude that approximately 99\% of the variance in the outcome variable (Y) can be explained by the variance in the combined predictor variables. Notice that the adjusted R^2 is only slightly different from the unadjusted R^2; this is due to the relatively small number of observations and the small number of predictor variables. With an R^2 of 0.99, we can conclude that almost all of the variance in water consumption is attributable to the variance in temperature and practice time.
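
For readers working outside Excel, here is a minimal Python sketch (assuming the numpy library is installed) that reproduces the same coefficients and R^2 by solving the least squares problem directly:

import numpy as np

# Football practice data from the table above.
temp = np.array([75, 83, 85, 85, 92, 97, 99], dtype=float)
time = np.array([1.85, 1.25, 1.5, 1.75, 1.15, 1.75, 1.6])
water = np.array([16, 20, 25, 27, 32, 48, 48], dtype=float)

# Design matrix: a column of ones for the intercept, then the predictors.
X = np.column_stack([np.ones_like(temp), temp, time])

# Least squares solution gives the intercept and the two b coefficients.
coef, _, _, _ = np.linalg.lstsq(X, water, rcond=None)
print(coef)   # approx [-121.66, 1.512, 12.53]

# R^2: proportion of variance in water consumption explained by the model.
resid = water - X @ coef
r_squared = 1 - (resid @ resid) / np.sum((water - water.mean()) ** 2)
print(r_squared)   # approx 0.9937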

Testing for Significance to Evaluate a Hypothesis, the Standard Error of a Coefficient and Constructing Confidence Intervals

When we perform multiple regression analysis, we are essentially trying to determine whether our predictor variables explain the variation in the outcome variable (Y). When we put together our final model, we look at whether the variables explain most of the variation (R^2) and whether this R^2 value is statistically significant. We can use technological tools to conduct a hypothesis test of the significance of this R^2 value and to construct confidence intervals around these results.

Hypothesis Testing

When we conduct a hypothesis test, we test the null hypothesis that the multiple R value in the population equals zero (H_0\!: R_{\mathrm{pop}} = 0). Under this scenario, the predicted or fitted values would all be very close to the mean, and the deviations (\hat{Y} - \bar{Y}), and therefore the regression sum of squares, would be very small (close to 0). Therefore, we calculate a test statistic (in this case the F statistic) that compares the variance explained by the predictor variables to the unexplained variance. If this test statistic is beyond the critical value and the null hypothesis is rejected, we can conclude that there is a nonzero relationship between the criterion variable (Y) and the predictor variables. When we reject the null hypothesis we can say something to the effect of “The probability that R^2 = XX would have occurred by chance if the null hypothesis were true is less than .05 (or .10, .01, etc.).” As mentioned, we can use computer programs to determine the F statistic and its significance.

Let’s take a look at the example above and interpret the F value. We have a very high R^2 value of 0.99, which means that almost all of the variance in the outcome variable (water consumption) can be explained by the predictor variables (practice time and temperature). Our ANOVA (ANalysis Of VAriance) table tells us that we have a calculated F statistic of 313.17, which has an associated probability value of 4.03E-05 (0.0000403). This means that the probability that 0.99 of the variance would have occurred by chance if the null hypothesis were true (i.e., if none of the variance were actually explained) is 0.0000403. In other words, it is highly unlikely that this large a level of explained variance arose by chance.
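
To see where these numbers come from, the F statistic is the ratio of the mean square for the regression to the mean square for the residuals, and its p-value comes from the F distribution. A short Python sketch (assuming scipy is installed) reproduces the values in the ANOVA table:

from scipy.stats import f

ss_reg, df_reg = 970.6583, 2    # regression sum of squares and df
ss_res, df_res = 6.198878, 4    # residual sum of squares and df

F = (ss_reg / df_reg) / (ss_res / df_res)   # ratio of mean squares
p_value = f.sf(F, df_reg, df_res)           # upper-tail probability
print(F, p_value)   # approx 313.17, 4.03e-05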

Standard Error of a Coefficient and Testing for Significance

In addition to performing a test to assess the probability of the regression line occurring by chance, we can also test the significance of individual coefficients. This is helpful in determining whether or not the variable significantly contributes to the regression. For example, if we find that a variable does not significantly contribute to the regression we may choose not to include it in the final regression equation. Again, we can use computer programs to determine the standard error, the test statistic and its level of significance.

Looking at our example above, we see that Excel has calculated the standard error and the test statistic (in this case, the t statistic) for each of the predictor variables. Temperature has a t statistic of 24.88 with a corresponding p-value of 1.55E-05, and practice time has a t statistic of 6.48 with a corresponding p-value of 0.002918. Depending on the situation, we can set our significance level at 0.10, 0.05, 0.01, etc. For this situation, we will use a significance level of .05. Since both variables have p-values below .05 (equivalently, t statistics that exceed the critical value), we can conclude that both of these variables significantly contribute to the variance of the outcome variable and should be included in the regression equation.
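
Each t statistic is simply the coefficient divided by its standard error, and its two-tailed p-value comes from the t distribution with the residual degrees of freedom. A short Python sketch (again assuming scipy) verifies the temperature row of the table:

from scipy.stats import t

df_res = 4                             # residual df from the ANOVA table
b_temp, se_temp = 1.512364, 0.060771   # coefficient and standard error

t_stat = b_temp / se_temp                  # approx 24.89
p_value = 2 * t.sf(abs(t_stat), df_res)    # two-tailed p-value
print(t_stat, p_value)                     # approx 24.89, 1.55e-05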

Calculating the Confidence Interval for a Coefficient

We can also use technological tools to build a confidence interval around our regression coefficients. Remember that earlier in the lesson we calculated confidence intervals around certain values in linear regression models; this concept is a bit different when we work with multiple regression models.

For the predictor variables in multiple regression, the confidence interval is based on t-tests and is the range around the observed sample regression coefficient, within which we can be 95\% (or any other predetermined level) confident the real regression coefficient for the population lies. In this example, we can say that we are 95\% confident that the population regression coefficient for temperature is between 1.34 (the Lower 95\% entry) and 1.68 (the Upper 95\% entry). In addition, we are 95\% confident that the population regression coefficient for practice time is between 7.16 and 17.90.
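
These intervals have the familiar form b ± t* × se(b), where t* is the critical value from the t distribution with the residual degrees of freedom. A short Python sketch (assuming scipy) reproduces the interval for temperature, up to rounding of the printed standard error:

from scipy.stats import t

df_res = 4
t_crit = t.ppf(0.975, df_res)          # approx 2.776 for a 95% interval

b_temp, se_temp = 1.512364, 0.060771
lower = b_temp - t_crit * se_temp
upper = b_temp + t_crit * se_temp
print(lower, upper)                    # approx 1.344, 1.681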

Lesson Summary

1. In multiple linear regression, scores for one variable are predicted using multiple predictor variables. The regression equation we use is

\hat{Y} = b_1 X_1 + b_2 X_2 + \cdots + a

2. When calculating the different parts of the multiple regression equation we can use a number of computer programs such as Microsoft Excel, SPSS and SAS.

3. These programs calculate the multiple regression coefficients, combined R^2 value and confidence interval for the regression coefficients.

Review Questions

The lead English teacher is trying to determine the relationship between three tests given throughout the semester and the final exam. She decides to conduct a mini-study on this relationship and collects the test data (scores for Test 1, Test 2, Test 3 and the final exam) for 50 students in freshman English. She enters these data into Microsoft Excel and arrives at the following summary statistics:

Multiple R          0.6859
R Square            0.4707
Adjusted R Square   0.4369
Standard Error      7.5718
Observations        50

ANOVA
            df   SS          MS         F        Significance F
Regression  3    2342.7228   780.9076   13.621   .0000
Residual    46   2637.2772   57.3321
Total       49   4980.0000

            Coefficients   Standard Error   t Stat   P-value
Intercept   10.7592        7.6268
Test 1      0.0506         .1720            .2941    .7700
Test 2      .5560          .1431            3.885    .0003
Test 3      .2128          .1782            1.194    .2387
  1. How many predictor variables are there in this scenario? What are the names of these predictor variables?
  2. What does the regression coefficient for Test 2 tell us?
  3. What is the regression model for this analysis?
  4. What is the R^2 value and what does it indicate?
  5. Determine whether the multiple R is statistically significant.
  6. Which of the predictor variables are statistically significant? What is the reasoning behind this decision?
  7. Given this information, would you include all three predictor variables in the multiple regression model? Why or why not?

Review Answers

  1. There are 3 predictor variables – Test 1, Test 2 and Test 3.
  2. The regression coefficient of 0.5560 tells us that every 1-point increase in the Test 2 score is associated with a 0.5560-point increase in the final exam score, when everything else is held constant.
  3. From the data given, the regression equation is \hat{Y} = 0.0506 X_1 + 0.5560 X_2 + 0.2128 X_3 + 10.7592.
  4. The R^2 value is 0.4707 and indicates that 47\% of the variance in the final exam can be attributed to the variance of the combined predictor variables.
  5. Using the printout, we see that the F statistic is 13.621 with a corresponding p-value reported as .0000. This means that the probability that the observed R^2 value would have occurred by chance if the null hypothesis were true is extremely small (less than .0001), so the multiple R is statistically significant.
  6. Test 2. Upon closer examination, we find that only the Test 2 predictor variable is statistically significant, since its t statistic of 3.885 exceeds the critical value (as evidenced by the low p-value of .0003).
  7. No. It is not necessary to include Test 1 and Test 3 in the multiple regression model, since neither variable has a test statistic that exceeds the critical value (their p-values of .7700 and .2387 are well above .05).
