This volume covers the commonly ignored topic of heteroskedasticity (unequal error variances) in regression analyses and provides a practical guide to testing for and correcting it. Emphasizing how to apply diagnostic tests and corrections for heteroskedasticity in actual data analyses, the book offers three approaches for dealing with heteroskedasticity:

- variance-stabilizing transformations of the dependent variable;
- robust standard errors, also known as heteroskedasticity-consistent standard errors; and
- generalized least squares estimation of coefficients and standard errors.

The detection and correction of heteroskedasticity is illustrated with three examples that vary in terms of sample size and the types of units analyzed (individuals, households, U.S. states). Intended as a supplementary text for graduate-level courses and a primer for quantitative researchers, the book fills the gap between the limited coverage of heteroskedasticity provided in applied regression textbooks and the more theoretical statistical treatment in advanced econometrics textbooks.

### Heteroskedasticity-Consistent (Robust) Standard Errors

As I discussed in Chapter 1, the main problem with using OLS regression when the errors are heteroskedastic is that the sampling variance (standard errors) of the OLS coefficients as calculated by standard OLS software is biased and inconsistent. Moreover, the direction of the bias is in general unknown and could be in opposite directions for the coefficients of different predictors. The “obvious” solution would be to calculate the OLS coefficients' sampling variance correctly according to Equation 1.2 if we know or can estimate σ²Ω, whose diagonal elements are the error variance for each case in the analysis. The resulting corrected OLS standard errors are inefficient (larger) in comparison with using knowledge of σ²Ω for an EGLS regression analysis, ...
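The correction described above is usually implemented as the White "sandwich" estimator, which replaces the unknown error variances with the squared OLS residuals. The following is a minimal sketch (not the book's own code) of the simplest variant, HC0, using simulated data in which the error variance grows with the predictor; variable names and the data-generating process are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 10.0, n)
X = np.column_stack([np.ones(n), x])          # design matrix with intercept
# Heteroskedastic errors: standard deviation proportional to x (assumed DGP)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)

# OLS coefficients and residuals
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Conventional OLS variance, s^2 (X'X)^{-1}: biased under heteroskedasticity
XtX_inv = np.linalg.inv(X.T @ X)
s2 = resid @ resid / (n - X.shape[1])
se_ols = np.sqrt(np.diag(s2 * XtX_inv))

# HC0 "sandwich" variance: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc0 = XtX_inv @ meat @ XtX_inv
se_hc0 = np.sqrt(np.diag(cov_hc0))

print("OLS SEs:   ", se_ols)
print("Robust SEs:", se_hc0)
```

Because the squared residuals only estimate the error variances, the HC0 standard errors are consistent but not efficient, which is the trade-off relative to the (E)GLS approach discussed next. Small-sample refinements (HC1 through HC3) rescale the residuals before forming the "meat" of the sandwich.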