9.2: Least-Squares Regression
Learning Objectives
- Calculate and graph a regression line.
- Predict values using bivariate data plotted on a scatterplot.
- Understand outliers and influential points.
- Perform transformations to achieve linearity.
- Calculate residuals and understand the least-squares property and its relation to the regression equation.
- Plot residuals and test for linearity.
Introduction
In the last section we learned about the concept of correlation, which we defined as the measure of the linear relationship between two variables. As a reminder, when we have a strong positive correlation, we can expect that if the score on one variable is high, the score on the other variable will also most likely be high. With correlation, we are able to roughly predict the score of one variable when we have the other. Prediction is simply the process of estimating scores of one variable based on the scores of another variable.
In the previous section we illustrated the concept of correlation through scatterplot graphs. We saw that when variables were correlated, the points on the graph tended to follow a straight line. In theory, this line represents the change in one variable associated with a change in the other. It is called the least-squares or linear regression line (see figure below).
Calculating and Graphing the Regression Line
Linear regression involves using existing data to calculate a line that best fits the data and then using that line to predict scores. In linear regression, we use one variable (the predictor variable) to predict the outcome of another (the outcome or the criterion variable). To calculate this line, we analyze the patterns between two variables and use a series of calculations to determine the different parts of the line.
To determine this line, we want to find the average change in Y associated with a given change in X. Once we have calculated this average change, we can apply it to any value of X to get an approximation of Y. Since the regression line is used to predict the value of Y for any given value of X, all predicted values will be located on the regression line itself. Therefore, we try to fit the regression line to the data by having the smallest sum of squared distances from each of the data points to the line itself. In the example below, you can see the calculated distance from each of the observations to the regression line, or residual values. This method of fitting the line so that there is minimal difference between the observations and the line is called the method of least squares, which we will discuss further in the following sections.
As you can see, the regression line is a straight line that expresses the relationship between two variables. When predicting one score by using another, we use an equation equivalent to the slope-intercept form of the equation for a straight line:

Ŷ = bX + a

where:

Ŷ = the score that we are trying to predict

b = the slope of the line

a = the Y-intercept (the value of Ŷ when X = 0)

While the linear regression equation is equivalent to the slope-intercept form y = mx + b (with b in place of m and a in place of b), the form Ŷ = bX + a is the one typically used in statistical regression.
To calculate the line itself, we need to find the values for b (the regression coefficient) and a (the regression constant). The regression coefficient is a very important calculation because it explains the nature of the relationship between the two variables. Essentially, the regression coefficient tells us how much change in the outcome (criterion) variable is associated with a given change in the predictor variable: a regression coefficient of b means that a one-unit change in X is associated with a change of b units in Y. To calculate this regression coefficient we can use the formulas:

b = r (s_Y / s_X)

or

b = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²

where:

r = the correlation between the variables X and Y

s_Y = the standard deviation of the Y scores

s_X = the standard deviation of the X scores
In addition to calculating the regression coefficient, we also need to calculate the regression constant. The regression constant is the Y-intercept, the place where the line crosses the Y-axis; a line with regression constant a crosses the Y-axis at the point (0, a). We use the following formula to calculate the regression constant:

a = Ȳ − bX̄
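These formulas are easy to check with a short sketch. The data below are made up purely for illustration (they are not the table from this chapter), and the two slope formulas are computed side by side to show that they agree:

```python
import statistics

# Made-up illustrative data (not the SAT/GPA table from this chapter)
x = [1, 2, 3, 4, 5]          # predictor scores
y = [2, 4, 5, 4, 5]          # outcome scores

n = len(x)
mean_x, mean_y = statistics.mean(x), statistics.mean(y)
s_x, s_y = statistics.stdev(x), statistics.stdev(y)

# Pearson correlation between X and Y
r = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / ((n - 1) * s_x * s_y)

# Regression coefficient (slope), computed both ways
b_from_r = r * (s_y / s_x)
b_direct = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
            / sum((xi - mean_x) ** 2 for xi in x))

# Regression constant (Y-intercept)
a = mean_y - b_direct * mean_x

print(b_from_r, b_direct, a)   # both slopes agree: 0.6; intercept: 2.2
```

For this small data set both formulas give b = 0.6 and a = 2.2, so the fitted line is Ŷ = 0.6X + 2.2.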
Example:
Find the least-squares regression line (also known as the regression line or the line of best fit) for the example measuring verbal SAT score and GPA that was used in the previous section.
Student | SAT Score | GPA | |||
---|---|---|---|---|---|
1 | |||||
2 | |||||
3 | |||||
4 | |||||
5 | |||||
6 | |||||
7 | |||||
Sum |
Using these data, we first calculate the regression coefficient and the regression constant:
Now that we have the equation of this line, it is easy to plot on a scatterplot. To plot this line, we simply substitute two values of X and calculate the corresponding Ŷ values to get two pairs of coordinates. Say that we wanted to plot this example on a scatterplot. We would choose two hypothetical values for X and then solve for Ŷ in order to identify the coordinates (X₁, Ŷ₁) and (X₂, Ŷ₂). From these pairs of coordinates, we can draw the regression line on the scatterplot.
Predicting Values Using Scatterplot Data
One of the uses of the regression line is to predict values. After calculating this line, we are able to predict values by simply substituting a value of the predictor variable X into the regression equation and solving the equation for the outcome variable Ŷ. In our example above, we can predict a student's GPA from his or her SAT score by plugging the desired value into our regression equation.

For example, say that we wanted to predict the GPAs for two additional students from their SAT scores. We would simply plug the two values of the predictor variable into the equation and solve for Ŷ (see below).
Student | SAT Score | GPA | Predicted GPA |
---|---|---|---|
1 | |||
2 | |||
3 | |||
4 | |||
5 | |||
6 | |||
7 | |||
Hypothetical | |||
Hypothetical |
We are able to predict the value of Ŷ for any value of X within a specified range.
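A prediction is then just one substitution into the fitted equation. Since the fitted coefficients from the worked example are not reproduced here, the values below are hypothetical stand-ins:

```python
def predict(x, b, a):
    """Return the predicted score Y-hat = b*X + a."""
    return b * x + a

# Hypothetical fitted coefficients (stand-ins, not the chapter's fitted values)
b, a = 0.0056, 0.097

for sat in (500, 600):
    gpa_hat = predict(sat, b, a)
    print(f"SAT {sat} -> predicted GPA {gpa_hat:.2f}")
```

Each predicted point lies exactly on the regression line, which is why plotting two predictions is enough to draw the whole line.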
Transformations to Achieve Linearity
Sometimes we find that there is a relationship between X and Y, but it is not best summarized by a straight line. When looking at scatterplot graphs of correlation patterns, we called these types of relationships curvilinear. While many relationships are linear, quite a number are not, including learning curves (learning more quickly at the beginning, followed by a leveling out) and exponential growth (doubling in size with each unit of growth). Below is an example of a growth curve describing the growth of complex societies.
Since this is not a linear relationship, one might think that we would not be able to fit a regression line. However, we can perform a transformation to achieve a linear relationship. We commonly use transformations in everyday life. For example, the Richter scale, which measures earthquake intensity, and the practice of describing pay raises in terms of percentages are both examples of transformations applied to non-linear data.
Let’s take a closer look at logarithms so that we can understand how they are used in nonlinear transformations. Notice that we can write the numbers 10, 100, and 1,000 as 10¹, 10², 10³, etc. We can also write the numbers 2, 4, and 8 as 2¹, 2², 2³, etc. All of these equations take the form y = bˣ, where x is the power to which the base b must be raised. We call x the logarithm of y because it is the power to which the base must be raised to yield the number y. Applying this equation, we find that log₁₀(10) = 1, log₁₀(100) = 2, log₁₀(1000) = 3, etc., and log₂(2) = 1, log₂(4) = 2, log₂(8) = 3, etc. Because of these rules, variables that are exponential or multiplicative (in other words, non-linear models) are linear in their logarithmic form.
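A quick sketch makes this concrete: equal ratios in the raw numbers become equal differences after taking logs, which is exactly what "linear in logarithmic form" means.

```python
import math

# Powers of two grow multiplicatively: 1, 2, 4, 8, 16
powers_of_two = [2 ** k for k in range(5)]

# Their base-2 logarithms grow additively: 0, 1, 2, 3, 4 -- a straight line
logs = [math.log2(v) for v in powers_of_two]

print(powers_of_two)
print(logs)
```

The raw sequence doubles at every step, while the logged sequence increases by exactly 1 at every step.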
In order to transform data in the linear regression model, we apply logarithmic transformations to each point in the data set. This is most easily done using either the TI-83 calculator or a computer program such as Microsoft Excel, the Statistical Package for Social Sciences (SPSS) or Statistical Analysis Software (SAS). This transformation produces a linear correlation to which we can fit a linear regression line.
Let’s take a look at an example to help clarify this concept. Say that we were interested in making a case for investing and examining how much return one would get on an initial investment over time. Let’s assume that the money was invested in the year 1900 and accrued interest every year. The table below details how much we would have at each decade:
Year | Investment with Each Year |
---|---|
1900 | |
1910 | |
1920 | |
1930 | |
1940 | |
1950 | |
1960 | |
1970 | |
1980 | |
1990 | |
2000 | |
2010 |
If we graphed these data points, we would see that we have an exponential growth curve.
Say that we wanted to fit a linear regression line to these data. First, we would transform these data using logarithmic transformations.
Year | Investment with Each Year | Log of amount |
---|---|---|
1900 | ||
1910 | ||
1920 | ||
1930 | ||
1940 | ||
1950 | ||
1960 | ||
1970 | ||
1980 | ||
1990 | ||
2000 | ||
2010 |
If we graphed these transformed data, we would see that we have a linear relationship.
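To see why the transformed data support a linear fit, we can compute the least-squares slope of log₁₀(amount) against year. The dollar amounts below are invented, chosen to double every decade, so the slope should come out to log₁₀(2)/10:

```python
import math

# Invented amounts that double each decade (not the chapter's table values)
years = [1900, 1910, 1920, 1930, 1940]
log_amounts = [math.log10(a) for a in [100, 200, 400, 800, 1600]]

mean_x = sum(years) / len(years)
mean_y = sum(log_amounts) / len(log_amounts)

# Least-squares slope on the transformed (year, log-amount) data
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, log_amounts))
     / sum((x - mean_x) ** 2 for x in years))

# Undo the log: the implied growth factor per decade
growth_per_decade = 10 ** (10 * b)
print(b, growth_per_decade)   # slope ~ 0.0301, growth factor ~ 2.0
```

Fitting a straight line to the logged data and then undoing the log recovers the doubling pattern, which is the whole point of the transformation.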
Outliers and Influential Points
An outlier is an extreme observation that does not fit the general correlation or regression pattern (see figure below). Because an outlier is, by definition, an unusual observation, its inclusion may affect the slope and the intercept of the regression line. When examining the scatterplot graph and calculating the regression equation, it is worth considering whether extreme observations should be included or not.
Let’s use our example above to illustrate the effect of a single outlier. Say that we have a student who has a high GPA but suffered from test anxiety the morning of the SAT verbal test and scored far below expectations. Using our original regression equation, we would expect this student to have a low GPA; in reality, the student's GPA is high. Including this observation would change the slope of the regression equation considerably.
There is no set rule for deciding whether or not to include an outlier in regression analysis. The decision depends on the sample size, how extreme the outlier is, and the normality of the distribution. As a general rule of thumb, we should consider values that are more than 1.5 times the interquartile range below the first quartile or above the third quartile to be outliers. Extreme outliers are values that are more than 3 times the interquartile range below the first quartile or above the third quartile.
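This rule of thumb is easy to express in code. The sketch below uses Python's `statistics.quantiles` (which computes quartiles slightly differently from some textbooks) and invented GPA-like data:

```python
import statistics

def outlier_fences(data, k=1.5):
    """Fences k * IQR below Q1 and above Q3.
    k=1.5 flags ordinary outliers; k=3 flags extreme outliers."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Invented GPA-like data; 0.5 is a suspiciously low score
scores = [3.2, 3.4, 3.5, 3.6, 3.8, 3.9, 4.0, 0.5]
low, high = outlier_fences(scores)
flagged = [s for s in scores if s < low or s > high]
print(flagged)   # [0.5]
```

Passing `k=3` instead would flag only the extreme outliers.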
Calculating Residuals and Understanding their Relation to the Regression Equation
As mentioned earlier in the lesson, the linear regression line is the line that best fits the given data. Ideally, we would like to minimize the distances of all data points from the regression line. These distances are called errors, also known as residual values. As mentioned, we fit the regression line to the data points in a scatterplot using the least-squares method. A “good” line will have small residuals. Notice in the figure below that this calculated difference is actually the vertical distance between the observation and the predicted value on the regression line.
To find a residual value, we subtract the predicted value Ŷ from the actual value Y, that is, Y − Ŷ. Theoretically, the sum of all residual values is 0, since we are finding the line of best fit, with the predicted values as close as possible to the actual values. However, because the positive and negative residuals cancel each other out and total zero, this sum does not make much sense as an indicator of fit. Therefore, we instead minimize the sum of the squared residuals, Σ(Y − Ŷ)².
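The bookkeeping is straightforward. Here is a sketch with invented actual values and the predictions from a line fit to them by least squares, showing that the residuals cancel while their squares do not:

```python
# Invented example: actual Y values and the predictions from a least-squares line
actual    = [2.0, 4.0, 5.0, 4.0, 5.0]
predicted = [2.8, 3.4, 4.0, 4.6, 5.2]   # Y-hat = 0.6*X + 2.2 at X = 1..5

# Residual = actual minus predicted (Y - Y-hat)
residuals = [y - y_hat for y, y_hat in zip(actual, predicted)]

print(residuals)                          # mix of positive and negative values
print(sum(residuals))                     # ~0 (up to floating-point error)
print(sum(e ** 2 for e in residuals))     # the quantity least squares minimizes
```

Any other line through these points would make the sum of squared residuals larger, which is the defining property of the least-squares fit.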
Example:
Calculate the residuals for the predicted and the actual GPA scores from our sample above.
Solution:
Student | SAT Score | GPA | Predicted GPA | Residual Value | Residual Value Squared |
---|---|---|---|---|---|
Plotting Residuals and Testing for Linearity
To test for linearity, and when determining whether we should drop extreme observations (or outliers) from the analysis, it is helpful to plot the residuals. To do so, we simply plot the X-value for each observation on the X-axis and the corresponding residual score on the Y-axis. When examining this scatterplot, the data points should show no pattern, with approximately half of the points above 0 and the other half below 0. In addition, the points should be evenly distributed along the X-axis. Below is an example of what a residual scatterplot should look like if there are no outliers and the relationship is linear.
If the plot of the residuals does not show this sort of pattern, we should examine the points a bit more closely. For example, if more of the observations fall below 0, we may have a positive outlying residual score that is skewing the distribution, and vice versa. If the points are clustered close to the Y-axis, we could have an X-value that is an outlier (see below). If this occurs, we may want to consider dropping the observation to see how this impacts the plot of the residuals. If we do decide to drop the observation, we will need to recalculate the original regression line. After this recalculation, we will have a regression line that better fits the majority of the data.
Lesson Summary
- Prediction is simply the process of estimating scores on one variable based on the scores of another variable. We use the least-squares (also known as the linear) regression line to predict the value of one variable from another.
- Using this regression line, with its slope (the regression coefficient b) and its Y-intercept (the regression constant a), we are able to predict the scores Ŷ of the outcome variable.
- When there is a nonlinear relationship, we are able to transform the data using logarithmic and power transformations. Because logarithms turn exponential and multiplicative relationships into linear ones, this allows us to produce a linear relationship to which we can fit a regression line.
- The difference between the actual and the predicted values is called the residual value. We can plot scatterplots of these residual values to examine outliers and test for linearity.
Review Questions
The school nurse is interested in predicting scores on a memory test from the number of times that a student exercises per week. Below are her observations:
Student | Exercise Per Week | Memory Test Score |
---|---|---|
- Please plot these data on a scatterplot (X-axis: exercise per week; Y-axis: memory test score).
- Does this appear to be a linear relationship? Why or why not?
- What regression equation would you use to construct a linear regression model?
- What is the regression coefficient in this linear regression model and what does this mean in words?
- Calculate the regression equation for these data.
- Draw the regression line on the scatterplot.
- What is the predicted memory test score of a student that exercises per week?
- Do you think that a data transformation is necessary in order to build an accurate linear regression model? Why or why not?
- Please calculate the residuals for each of the observations and plot these residuals on a scatterplot.
- Examine this scatterplot of the residuals. Is a transformation of the data necessary? Why or why not?
Review Answers
- Answer to the discretion of the teacher.
- Yes. When plotted, the data appear to be negatively correlated and in a linear pattern.
- The regression coefficient is the slope b of the regression line. In words, it means that each additional exercise session per week is associated with a change of b points on the memory test score.
- Answer to the discretion of the teacher
- If a student exercised per week, we would expect that they would have a memory test score of .
- No. A data transformation is not necessary because the relationship between the two variables is linear.
- See Table Below.
Student | Exercise Per Week | Memory Test Score | Predicted Value | Residual Score |
---|---|---|---|---|