Regression analysis is the name for a family of techniques that attempts to predict one variable (an outcome or dependent variable) from another variable, or set of variables (the predictor or independent variables).

We will illustrate this first with an example of linear regression, also called (ordinary) least squares (OLS) regression. When people say “regression” without any further description, they are almost always talking about OLS regression. Figure 1 shows a scatterplot of data from a group of British ex-miners, who were claiming compensation for industrial injury. The x-axis shows the age of the claimant, and the y-axis shows the grip strength, as measured by a dynamometer (this measures how hard the person can squeeze two bars together).

Running through the points is the line of best fit, or regression line. This line allows us to predict the conditional mean of the grip strength—that is, the mean value that would be expected for a person of any age.

The line of best fit, or regression line, is calculated using the least squares method. To illustrate the least squares method, consider Figure 2, which is simplified, in that it has only four points on the scatter-plot. For each point, we calculate (or measure) the vertical distance between the point and the regression line—this is the residual, or error, for that point. Each of these errors is squared and these values are summed. The line of best fit is placed where it minimizes this sum of squared errors (or residuals)— hence, it is the least squares line of best fit, which is sometimes known as the ordinary least squares line of best fit (because there are other kinds of least squares lines, such as generalized least squares and weighted least squares). Thus, we can think of the regression line as minimizing the error (note that in statistics, the term error is used to mean deviation or wandering, not mistake).
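
To make the calculation concrete, the following is a minimal sketch in Python, using four invented points (the actual values plotted in Figure 2 are not given in the text). Any candidate line produces a sum of squared residuals; the least squares line is the one that makes that sum as small as possible.

```python
import numpy as np

# Four invented (x, y) points, standing in for the four points of Figure 2.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 2.9, 4.2, 4.8])

def sum_of_squared_errors(intercept, slope):
    """Square the vertical distance from each point to the line, then sum."""
    residuals = y - (intercept + slope * x)
    return np.sum(residuals ** 2)

# An arbitrary candidate line gives some sum of squared errors...
print(sum_of_squared_errors(1.0, 1.0))

# ...and the least squares (OLS) line is the one that minimizes it.
slope, intercept = np.polyfit(x, y, 1)   # degree-1 polynomial fit = OLS line
print(intercept, slope, sum_of_squared_errors(intercept, slope))
```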


Figure 1 Scatterplot Showing Age Against Grip Strength With Line of Best Fit

The position of a line on a graph is given by two values—the height of the line and the gradient of the line. In regression analysis, the gradient may be referred to as b1 or β1 (β is the Greek letter beta). Of course, because the line slopes, the height varies along its length. The height of the line is given at the point where the value of the x-axis (that is, the predictor variable) is equal to zero. The height of the line is called the intercept, or y-intercept, the constant, b0 (or β0), or sometimes α (the Greek letter alpha).

Calculation of the regression line is straightforward, given the correlation between the measures. The slope of the line (b1) is given by

$$b_1 = r\,\frac{s_y}{s_x}$$

Figure 2 Example of Calculation of Residuals

where

r is the correlation between the two measures,

sy is the standard deviation of the outcome variable, and

sx is the standard deviation of the predictor variable.

The intercept is given by

$$i = \bar{Y} - b_1\bar{X}$$

where

i is the intercept,

Y¯ is the mean of the outcome variable, and

X¯ is the mean of the predictor variable.
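
As a check on the two formulas, the following short sketch plugs in illustrative summary statistics (the correlation, standard deviations, and means below are invented; they are chosen so that the results roughly match the intercept and slope reported for the Figure 1 data in the next paragraph).

```python
# Invented summary statistics, for illustration only.
r = -0.50      # correlation between age and grip strength
s_y = 9.8      # standard deviation of the outcome (grip strength)
s_x = 12.0     # standard deviation of the predictor (age)
y_bar = 28.4   # mean of the outcome
x_bar = 55.0   # mean of the predictor

b1 = r * (s_y / s_x)      # slope: approximately -0.41
i = y_bar - b1 * x_bar    # intercept: approximately 50.9
print(b1, i)
```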

In the case of the data shown in Figure 1, the intercept is equal to 50.9, and the slope is −0.41. We can calculate the predicted (conditional mean) grip strength of a person at any age, using the equation

$$\hat{s} = 50.9 - 0.41a$$

where ŝ is the predicted strength and a is the age of the individual. Notice the hat on top of the s, which means that it is predicted, not actual. A very similar way to write the equation would be

$$s = 50.9 - 0.41a + e$$

In this equation, we are saying that s is the person's actual strength, which is equal to the expected value plus a deviation for that individual. Now we no longer have the hat on the s, because the equation is stating that the person's actual score is equal to that calculated, plus e, that is, error.
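
A short sketch of these two equations, using the intercept (50.9) and slope (−0.41) reported above; the measured grip strength of 31 kg is a hypothetical value added for illustration.

```python
def predicted_strength(age):
    """Conditional mean grip strength for a given age (s-hat)."""
    return 50.9 - 0.41 * age

s_hat = predicted_strength(40)   # 50.9 - 0.41 * 40 = 34.5

# If this person's measured grip strength were 31 kg (hypothetical),
# the error term e is the residual: actual minus predicted.
e = 31 - s_hat                   # -3.5, i.e. weaker than expected for that age
print(s_hat, e)
```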

Each of the parameters in the regression analysis can have a standard error associated with it, and hence a confidence interval and p value can be calculated for each parameter.

Regression generalizes to a case with multiple predictor variables, referred to as multiple regression. In this case, the calculations are more complex, but the principle is the same—we try to find values for the parameters for the intercept and slope(s) such that the amount of error is minimized. The great advantage and power of multiple regression is that it enables us to estimate the effect of each variable, controlling for the other variables. That is, it estimates what the slope would be if all other variables were controlled.
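
As an illustration of how such a model might be fitted in practice, the following sketch uses the statsmodels package on simulated data (the variable names and values are invented, not taken from the article). The fitted object reports, for each parameter, the estimate, its standard error, a confidence interval, and a p value, and each slope is estimated controlling for the other predictor.

```python
import numpy as np
import statsmodels.api as sm

# Simulate data: grip strength depends on age and (invented) hours of manual work.
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(30, 75, n)
manual_hours = rng.uniform(0, 40, n)
strength = 51 - 0.4 * age + 0.1 * manual_hours + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([age, manual_hours]))  # intercept + two predictors
model = sm.OLS(strength, X).fit()

print(model.params)      # intercept and slopes, each controlling for the other predictor
print(model.bse)         # standard error of each parameter
print(model.conf_int())  # 95% confidence intervals
print(model.pvalues)     # p value for each parameter
```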

We can think of regression in a more general sense as being an attempt to develop a model that best represents our data. This means that regression can generalize in a number of different ways.

Types of Regression

For linear regression as we have described it to be appropriate, it is necessary for the outcome (dependent) variable to be continuous and the predictor (independent) variable to be continuous or binary. It is frequently the case that the outcome variable, in particular, does not match this assumption, in which case a different type of regression is used.

Categorical Outcomes

Where the outcome is binary—that is, yes or no—logistic or probit regression is used. We cannot estimate the conditional mean of a yes/no response, because the answer must be either yes or no; a predicted outcome score of 0.34 does not make sense. Instead, we say that the probability of the individual saying yes is 0.34 (or whatever it is). Logistic or probit regression can be extended in two ways: where the outcome has more than two categories, multinomial logistic regression is used; where the outcome is ordinal, ordinal logistic regression is used (SPSS refers to this as PLUM, PoLytomous Universal Models), which models the conditional likelihood of a range of events occurring.
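
A minimal sketch of a logistic regression, again with statsmodels and simulated data (the bar-hours example is invented for illustration); the fitted model returns the probability of a "yes" rather than a conditional mean.

```python
import numpy as np
import statsmodels.api as sm

# Simulate a binary outcome (passed the exam: yes/no) from hours spent in bars.
rng = np.random.default_rng(1)
n = 500
hours_in_bar = rng.uniform(0, 20, n)
p_pass = 1 / (1 + np.exp(-(1.0 - 0.2 * hours_in_bar)))
passed = rng.binomial(1, p_pass)

X = sm.add_constant(hours_in_bar)
model = sm.Logit(passed, X).fit()

# Predicted probability of "yes" for a student who spends 5 hours in bars.
print(model.predict([[1, 5]]))
```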

Count Outcomes

Where data are counts of the number of times an event occurred (for example, number of cigarettes smoked, number of times arrested), the data tend to be positively skewed, and additionally, it is only sensible to predict an integer outcome—it is not possible to be arrested 0.3 times, for example. For count outcomes of this type, Poisson regression is used. This is similar to the approaches for categorical data, in that the probability of each potential value is modeled—for example, the probability of having been arrested zero times is 0.70, once is 0.20, twice is 0.08, and three times is 0.02.
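
A sketch of a Poisson regression on simulated count data (the predictor is invented); once the model is fitted, the Poisson distribution with the predicted mean supplies the probability of each possible count.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import poisson

# Simulate a count outcome (number of arrests) from an invented risk score.
rng = np.random.default_rng(2)
n = 500
risk_score = rng.uniform(0, 1, n)
arrests = rng.poisson(np.exp(-1.5 + 2.0 * risk_score))

X = sm.add_constant(risk_score)
model = sm.Poisson(arrests, X).fit()

mu = model.predict([[1, 0.5]])[0]        # predicted mean count for risk score 0.5
print(poisson.pmf([0, 1, 2, 3], mu))     # probability of 0, 1, 2, 3 arrests
```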

Censored Outcomes

Some variables are, in effect, a mixture of a categorical and a continuous variable, and these are called censored variables. Frequently, they are cut off at zero. For example, the income an individual receives from criminal activities is likely to be zero, hence it might be considered binary—it is either zero or not. However, if it is not zero, we would like to model how high it is. In this situation, we use Tobit regression, named for its developer, James Tobin: the name is a contraction of Tobin's probit (Tobin himself did not call it this, but the name stuck). Another type of censoring is common where the outcome is time to an event, for example, how long did the participant take to solve the problem, how long did the patient survive, or how long did the piece of equipment last. Censoring occurs in this case because, for some reason, we didn't observe the event in which we were interested—the participant may have given up on the problem before he or she solved it, the patient may have outlived the investigator, or the piece of equipment may have been destroyed in a fire. In these cases, we use a technique called Cox proportional hazards regression (or often simply Cox regression).
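
As a sketch of the time-to-event case, the following uses the CoxPHFitter from the lifelines package on a tiny invented data set; the event column records whether the event was actually observed (0 marks a censored observation).

```python
import pandas as pd
from lifelines import CoxPHFitter

# Tiny invented survival data set: time until the event, whether the event
# was observed (0 = censored), and one predictor.
df = pd.DataFrame({
    "time": [5.0, 12.0, 30.0, 7.5, 22.0, 18.0, 9.0, 27.0],
    "event": [1, 1, 0, 1, 0, 1, 1, 0],
    "age": [70, 55, 40, 65, 50, 60, 68, 45],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratio, standard error, and p value for age
```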

Uses of Regression

Regression analysis has three main purposes: prediction, explanation, and control.

Prediction

A great deal of controversy arises when people confuse prediction with explanation in regression. A regression equation can be used to predict an individual's score on the outcome variable of interest. For example, it may be the case that students who spend more time drinking in bars perform less well in their exams. If we meet a student and find out that he or she has never set foot inside a bar, we might predict that he or she is likely to do better than average in his or her assessment. This would be an appropriate conclusion to draw.

Explanation

The second use of regression is to explain why certain events occurred, based on the relationships between variables. Explanation requires going beyond the data—we can say that students who spend more time in bars achieve lower grades, but we cannot say that this is because they spend more time in bars. It may be that those students do not like to work, and if they didn't spend time in bars, they would not spend the time working—they would spend it doing something else unproductive. Richard Berk has suggested that we give regression analysis three cheers when we want to use it for description, but only one cheer for causal inference.

Control

The final use of regression is as a control for other variables. In this case, we are particularly interested in the residuals from the regression analysis. When we place the regression line on a graph, everyone above the line is doing better than we would have expected, given his or her levels of predictor variables. Everyone below the line is doing worse than we would have expected, given his or her levels of predictor variables. By comparing people's residuals, we are making a fairer comparison. Figure 3 reproduces Figure 1, but two cases are highlighted. The case on the left is approximately 40 years old, the case on the right approximately 70 years old. The 40-year-old has a higher grip strength than the 70-year-old (approximately 31 vs. approximately 28 kg). However, if we take age into account, we might say that the 40-year-old has a lower grip strength than we would expect for someone of that age, and the 70-year-old has a higher grip strength. Controlling for age, therefore, the 70-year-old has a higher grip strength.
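
In code, the comparison amounts to comparing residuals rather than raw scores; the sketch below uses the equation fitted earlier (intercept 50.9, slope −0.41) and the approximate grip strengths quoted above.

```python
def predicted_strength(age):
    """Conditional mean grip strength from the fitted line."""
    return 50.9 - 0.41 * age

# Approximate values quoted in the text.
residual_40 = 31 - predicted_strength(40)   # 31 - 34.5 = -3.5 (below the line)
residual_70 = 28 - predicted_strength(70)   # 28 - 22.2 = +5.8 (above the line)

# The 40-year-old has the higher raw grip strength, but the 70-year-old has
# the higher residual: controlling for age, the 70-year-old is stronger.
print(residual_40, residual_70)
```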

We will give a concrete example of the way that this is used, from hospitals in the United Kingdom. Dr Foster (an organization, rather than an individual) collates data on the quality of care in different hospitals—one of the most important variables that it uses is the standardized mortality ratio (SMR). The SMR models an individual's chance of dying in each hospital. Of course, it would be unfair simply to look at the proportion of people treated in each hospital who died, because hospitals differ. First, they specialize in different things, so a hospital that specialized in heart surgery would have more patients die than a hospital that specialized in leg surgery. Second, hospitals have different people living near them: a hospital in a town that is popular with retirees will probably have higher mortality than one in a town that is popular with younger working people. Dr Foster attempts to control for each of the factors that is important in predicting hospital mortality and then calculates the standardized mortality ratio, adjusting for those factors. It does this by carrying out a regression and examining the residuals.

Jeremy Miles
10.4135/9781412952644.n379

Further Reading

Berk, R. (2003). Regression analysis: A constructive critique. Thousand Oaks, CA: Sage.
Cohen, J., Cohen, P., Aiken, L., & West, S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Erlbaum.
Robbins, J. M., Webb, D. A., & Sciamanna, C. N. (2004). Cardiovascular comorbidities among public health clinic patients with diabetes: The Urban Diabetics Study. BMC Public Health, 5, 15. Retrieved from http://www.biomedcentral.com/1471-2458/5/15