
9.1: Scatterplots and Linear Correlation

Created by: CK-12

Learning Objectives

  • Understand the concept of bivariate data, correlation and the use of scatterplots to display bivariate data.
  • Understand when the terms “positive,” “negative,” “strong,” and “perfect” apply to the correlation between two variables in a scatterplot graph.
  • Calculate the linear correlation coefficient and coefficient of determination using technology tools to assist in the calculations.
  • Understand properties and common errors of correlation.

Introduction

So far we have learned how to describe the distribution of a single variable and how to perform hypothesis tests that determine whether samples are representative of a population. But what if we notice that two variables seem to be related to one another and we want to determine the nature of the relationship? For example, we may notice that scores for two variables – such as verbal SAT score and GPA – are related, and that students who have high scores on one tend to have high scores on the other (see table below).

A table of verbal SAT values and GPAs for seven students.
Student SAT Score GPA
1 595 3.4
2 520 3.2
3 715 3.9
4 405 2.3
5 680 3.9
6 490 2.5
7 565 3.5

These types of studies are quite common and we can use the concept of correlation to describe the relationship between variables.

Bivariate Data, Correlation Between Values and the Use of Scatterplots

Correlation measures the relationship between bivariate data. In general, bivariate data are data sets with two observations that are assigned to the same subject. In our example above, we notice that there are two observations (verbal SAT score and GPA) for each ‘subject’ (in this case, a student). Can you think of other scenarios when we would use bivariate data?

As mentioned, correlation measures the relationship between two variables. If we carefully examine the data in the example above we notice that those students with high SAT scores tend to have high GPAs and those with low SAT scores tend to have low GPAs. In this case, there is a tendency for students to ‘score’ similarly on both variables and the performance between variables appears to be related.

Scatterplots display these bivariate data sets and provide a visual representation of the relationship between the variables. In a scatterplot, each subject is represented by a single point located at the intersection of imaginary lines drawn through that subject's two observations. Each point therefore represents one paired measurement of the two variables (see below).
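As a quick illustration, here is a minimal sketch (assuming Python with the matplotlib library, which the lesson itself does not specify) that plots the paired SAT and GPA values from the table above, one point per student:

import matplotlib.pyplot as plt

# Hypothetical sketch: the (SAT score, GPA) pairs from the table above.
sat = [595, 520, 715, 405, 680, 490, 565]
gpa = [3.4, 3.2, 3.9, 2.3, 3.9, 2.5, 3.5]

plt.scatter(sat, gpa)  # one point per student, i.e. one paired measurement
plt.xlabel("Verbal SAT score")
plt.ylabel("GPA")
plt.title("Verbal SAT score vs. GPA")
plt.show()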

Correlation Patterns in Scatterplot Graphs

Simply examining a scatterplot graph allows us to obtain some idea about the relationship between two variables. Typical patterns include:

  • A positive correlation - When the points on a scatterplot graph produce a lower-left-to-upper-right pattern (see below), we say that there is a positive correlation between the two variables. This pattern means that when the score of one observation is high, we expect the score of the other observation to be high as well and vice versa.

  • A negative correlation – When the points on a scatterplot graph produce an upper-left-to-lower-right pattern (see below), we say that there is a negative correlation between the two variables. This pattern means that when the score of one observation is high, we expect the score of the other observation to be low and vice versa.

  • A perfect correlation – If there is a perfect correlation between the two variables, all of the points in the scatterplot will lie on a straight line (see below).

  • Zero correlation – A scatterplot in which the points do not have a linear trend (either positive or negative) is called a zero or a near-zero correlation (see below).

When examining scatterplots, we also want to look at the magnitude of the relationship. If we drew an imaginary oval around all of the points of the scatterplot, we would be able to see the extent or the magnitude of the relationship. If the points are close to one another and the width of the imaginary oval is small, this means that there is a strong correlation between the variables (see below).

However, if the points are far away from one another and the imaginary oval is very wide, this means that there is a weak correlation between the variables (see below).

Correlation Coefficients

While examining scatterplots gives us some idea about the relationship between two variables, we use a statistic called the correlation coefficient to give us a more precise measurement of the relationship. The correlation coefficient is an index that describes the relationship between two variables and can take on values between -1.0 and +1.0. We can tell a lot from a correlation coefficient, including:

  • A positive correlation coefficient (0.10, 0.56, etc.) indicates a positive correlation.
  • A negative correlation coefficient (-0.32, -0.82, etc.) indicates a negative correlation.
  • The absolute value of the coefficient indicates the magnitude or the strength of the relationship. The closer the absolute value of the coefficient is to 1, the stronger the relationship. For example, a correlation coefficient of 0.20 indicates that there is not much of a relationship between the variables, while a coefficient of -0.90 indicates a strong linear relationship.
  • The value of a perfect positive correlation is 1.0 while the value of a perfect negative correlation is -1.0.
  • When there is no linear relationship between two variables, the correlation coefficient is 0.

The most often used correlation coefficient is the Pearson product-moment correlation coefficient, or the linear correlation, which is symbolized by the letter r. To understand how this coefficient is calculated, let’s suppose that there is a positive relationship between two variables (X and Y). If a subject has a score on X that is above the mean, we expect that subject to have a score on Y that is above the mean as well. Pearson developed his correlation coefficient by computing the sum of cross products: multiplying the two scores (X and Y) for each subject and then adding these cross products across all individuals. He then divided this sum by the number of subjects minus one. In short, this coefficient is the mean of the cross products of scores.

Because the two variables are usually measured on different scales, Pearson based the coefficient on standard scores (z-scores) rather than on the raw scores themselves. Therefore, the formula for this coefficient is:

r_{xy} = \frac{\sum z_x z_y} {n - 1}

In other words, the coefficient is expressed as the sum of the cross products of the standard z-scores divided by the number of degrees of freedom.
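As a minimal sketch of this calculation (assuming Python with NumPy, and reusing the SAT/GPA data from the introduction), the coefficient can be computed directly from the z-scores:

import numpy as np

sat = np.array([595, 520, 715, 405, 680, 490, 565], dtype=float)
gpa = np.array([3.4, 3.2, 3.9, 2.3, 3.9, 2.5, 3.5])
n = len(sat)

# Convert each variable to z-scores using the sample standard deviation
# (ddof=1), since the formula divides the sum of cross products by n - 1.
z_x = (sat - sat.mean()) / sat.std(ddof=1)
z_y = (gpa - gpa.mean()) / gpa.std(ddof=1)

# r is the sum of the cross products of the z-scores divided by n - 1.
r = np.sum(z_x * z_y) / (n - 1)
print(round(r, 2))  # approximately 0.95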

The equivalent formula that uses the raw scores rather than the standard scores is called the raw score formula, which is:

r_{xy} = \frac{n \sum XY - \sum X \sum Y} {\sqrt{[n \sum X^2 - (\sum X)^2] [n \sum Y^2 - (\sum Y)^2]}}

Again, this formula is most often used when calculating correlation coefficients from original data. Note that n is used instead of n-1 because we are using actual data and not z-scores. Let’s use our example from the introduction to demonstrate how to calculate the correlation coefficient using the raw score formula.

Example:

What is the Pearson product-moment correlation coefficient for these two variables?

The table of values for this example.
Student SAT Score GPA
1 595 3.4
2 520 3.2
3 715 3.9
4 405 2.3
5 680 3.9
6 490 2.5
7 565 3.5

In order to calculate the correlation coefficient, we need to calculate several pieces of information including XY, X^2 and Y^2. Therefore:

Values of XY, X^2, and Y^2 for each student.
Student SAT Score (X) GPA (Y) XY X^2 Y^2
1 595 3.4 2023 354025 11.56
2 520 3.2 1664 270400 10.24
3 715 3.9 2789 511225 15.21
4 405 2.3 932 164025 5.29
5 680 3.9 2652 462400 15.21
6 490 2.5 1225 240100 6.25
7 565 3.5 1978 319225 12.25
Sum 3970 22.7 13262 2321400 76.01

Applying the formula to these data we find:

r_{xy} = \frac{n \sum XY - \sum X \sum Y} {\sqrt{[n \sum X^2 - (\sum X)^2] [n \sum Y^2 - (\sum Y)^2]}} = \frac{7 \times 13262 - 3970 \times 22.7} {\sqrt{[7 \times 2321400 - 3970^2] [7 \times 76.01 - 22.7^2]}} = \frac{2715} {2864.22} \approx 0.95
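The same calculation can be checked with a short script. This is a sketch assuming Python with NumPy, using the same seven pairs of raw scores:

import numpy as np

x = np.array([595, 520, 715, 405, 680, 490, 565], dtype=float)  # SAT scores
y = np.array([3.4, 3.2, 3.9, 2.3, 3.9, 2.5, 3.5])               # GPAs
n = len(x)

# Raw score formula for the Pearson product-moment correlation coefficient.
numerator = n * np.sum(x * y) - np.sum(x) * np.sum(y)
denominator = np.sqrt((n * np.sum(x**2) - np.sum(x)**2) *
                      (n * np.sum(y**2) - np.sum(y)**2))
r = numerator / denominator
print(round(r, 2))  # approximately 0.95, matching the hand calculation

# Library routines give the same value directly:
print(round(np.corrcoef(x, y)[0, 1], 2))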

The correlation coefficient not only provides a measure of the relationship between the variables, but also gives us an idea about how much of the total variance of one variable can be associated with the variance of the other. For example, the correlation coefficient of 0.95 that we calculated above tells us that, to a high degree, the variance in the scores on the verbal SAT is associated with the variance in the GPA and vice versa. We could say that factors that influence the verbal SAT, such as health, parents' level of education, and so on, would also contribute to individual differences in the GPA. The higher the correlation between two variables, the larger the portion of the variance that can be explained.

The calculation of this variance is called the coefficient of determination and is calculated by squaring the correlation coefficient (r^2). The result of this calculation indicates the proportion of the variance in one variable that can be associated with the variance in the other variable. We can think about this concept by examining a series of overlapping circles. The varying degrees of overlap in the circles reflect the proportion of the variance in Y that can be associated with the variance in X. We will study this concept more in depth in later sections.
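Continuing the hypothetical sketch above, the coefficient of determination follows directly by squaring the correlation coefficient:

r = 0.95           # correlation between verbal SAT and GPA from the example
r_squared = r ** 2  # coefficient of determination
print(round(r_squared, 2))  # about 0.90: roughly 90% of the variance in GPA
                            # is associated with the variance in SAT scores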

The Properties and Common Errors of Correlation

Again, correlation indicates the linear relationship between two variables – it does not necessarily state that one variable causes the other. For example, a third variable, or a combination of other variables, may be causing the two correlated variables to relate as they do. Therefore, it is important to remember that we interpret the relationship between the variables and their variance as relational, not causal.

When examining correlation, there are three things that could affect our results:

  • Linearity
  • Homogeneity of the group
  • Sample size

As mentioned, the correlation coefficient is the measure of the linear relationship between two variables. However, while many pairs of variables have a linear relationship, some do not. For example, let’s consider performance anxiety. As a person’s anxiety about performing increases, so does their performance up to a point (we sometimes call this ‘good stress’). However, at that point the increase in the anxiety may cause their performance to go down. We call these non-linear relationships curvilinear relationships.

We can identify curvilinear relationships by examining scatterplots (see below). One may ask why curvilinear relationships pose a problem when calculating the correlation coefficient. The answer is that if we use the traditional formula to calculate these relationships, it will not be an accurate index, and we will be underestimating the relationship between the variables. If we graphed performance against anxiety, we would see that anxiety has a strong effect on performance. However, if we calculated the correlation coefficient, we would arrive at a figure around zero. Therefore, the correlation coefficient is not always the best statistic to use.
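A small simulation makes this concrete. The sketch below (assuming Python with NumPy; the anxiety and performance values are invented purely for illustration) builds a clearly curvilinear, inverted-U relationship and shows that Pearson's r is nevertheless close to zero:

import numpy as np

# Hypothetical inverted-U relationship: performance rises with anxiety up to a
# point and then falls.
anxiety = np.linspace(0, 10, 50)
performance = -(anxiety - 5) ** 2 + 25  # peaks at anxiety = 5

r = np.corrcoef(anxiety, performance)[0, 1]
print(round(r, 3))  # essentially 0, even though the variables are strongly related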

Another factor that can distort the correlation coefficient is homogeneity of the group. When a group is homogeneous, or possesses similar characteristics, the range of scores on either or both of the variables is restricted. For example, suppose we are interested in finding out the correlation between IQ and salary. If only members of the Mensa Club (a club for people with IQs over 140) are sampled, we will most likely find a very low correlation between IQ and salary, since most members will have a consistently high IQ while their salaries will still vary. This does not mean that there is no relationship – it simply means that the restriction of the sample limited the magnitude of the correlation coefficient.
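A simulation can also illustrate this restriction of range. The sketch below (assuming Python with NumPy; the IQ-salary relationship and all numbers are invented for illustration) produces a correlation that is fairly strong in the full sample but noticeably weaker when only very high-IQ subjects are kept:

import numpy as np

rng = np.random.default_rng(0)

# Invented population: salary depends linearly on IQ plus noise.
iq = rng.normal(100, 15, 100_000)
salary = 500 * iq + rng.normal(0, 5_000, 100_000)

r_full = np.corrcoef(iq, salary)[0, 1]

# Restrict the sample to a homogeneous, high-IQ group.
high = iq > 140
r_restricted = np.corrcoef(iq[high], salary[high])[0, 1]

# The restricted-range correlation is noticeably smaller than the full-sample one.
print(round(r_full, 2), round(r_restricted, 2))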

Finally, we should consider sample size. One may assume that the number of observations used in the calculation of the coefficient influences the magnitude of the coefficient itself. However, this is not the case. While the sample size does not affect the value of the coefficient, it does affect the accuracy with which the coefficient reflects the relationship. The larger the sample, the more accurately the correlation coefficient estimates the true relationship between the two variables.
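The sketch below (again assuming Python with NumPy; the true correlation of 0.6 and the sample sizes are arbitrary choices for illustration) shows how the sample correlation fluctuates more around the true value when the sample is small:

import numpy as np

rng = np.random.default_rng(1)
true_r = 0.6
cov = [[1, true_r], [true_r, 1]]

def sample_r(n):
    # Draw n pairs from a bivariate normal with correlation true_r.
    x, y = rng.multivariate_normal([0, 0], cov, size=n).T
    return np.corrcoef(x, y)[0, 1]

for n in (10, 100, 1000):
    estimates = [sample_r(n) for _ in range(200)]
    # The spread of the estimates around 0.6 shrinks as the sample grows.
    print(n, round(np.std(estimates), 3))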

Lesson Summary

  1. Bivariate data are data sets with two observations that are assigned to the same subject. Correlation measures the direction and magnitude of the linear relationship between bivariate data.
  2. When examining scatterplot graphs, we can determine if correlations are positive, negative, perfect or zero. A correlation is strong when the points in the scatterplot are close together.
  3. The correlation coefficient is a precise measurement of the relationship between the two variables. This index can take on values between and including -1.0 and +1.0.
  4. To calculate the correlation coefficient, we most often use the raw score formula which allows us to calculate the coefficient by hand. This formula is: r_{xy} = \frac{n \sum XY - \sum X \sum Y} {\sqrt{[n \sum X^2 - (\sum X)^2] [n \sum Y^2 - (\sum Y)^2]}}.
  5. When calculating correlation, there are several things that could affect our computation including curvilinear relationships, homogeneity of the group and the size of the group.

Review Questions

  1. Please give 2 scenarios or research questions where you would use bivariate data sets.
  2. In the space below, please draw and label four scatterplot graphs showing (a) a positive correlation, (b) a negative correlation, (c) a perfect correlation and (d) zero correlation.
  3. In the space below, please draw and label two scatterplot graphs showing (a) a weak correlation and (b) a strong correlation.
  4. What does the correlation coefficient measure?

The following observations were taken for five students measuring grade and reading level.

A table of grade and reading level for five students.
Student Number Grade Reading Level
1 2 6
2 6 14
3 5 12
4 4 10
5 1 4
  5. Draw a scatterplot for these data. What type of relationship does this correlation have?
  6. Use the raw score formula to compute the Pearson correlation coefficient.

A teacher gives two quizzes to his class of 10 students. The following are the scores of the 10 students.

Quiz results for ten students.
Student Quiz 1 Quiz 2
1 15 20
2 12 15
3 10 12
4 14 18
5 10 10
6 8 13
7 6 12
8 15 10
9 16 18
10 13 15
  7. Compute the Pearson correlation coefficient (r) between the scores on the two quizzes.
  8. Find the percentage of the variance (r^2) in the scores of Quiz 2 associated with the variance in the scores of Quiz 1.
  9. Interpret both r and r^2 in words.
  10. What are the three factors that we should be aware of that affect the size and accuracy of the Pearson correlation coefficient?

Review Answers

  1. Various answers are possible. Answers could include scores between two tests, effectiveness of two medications, behavior patterns, etc.
  2. Various answers are possible.
  3. Various answers are possible.
  4. The correlation coefficient measures the nature and the magnitude of the linear relationship between two variables.
  5. The scatterplot should show the 5 points plotted in a line. This is a perfect correlation.
  6. r = 1.00
  7. r = 0.568
  8. r^2 = 0.323
  9. The correlation between the two quizzes is positive and is moderately strong. Only a small proportion of the variance is shared by the two variables (32.3%).
  10. Curvilinear relationships, homogeneity of the group and small group size.
