
Systematic errors result from bias in measurement or estimation strategies and are evident in the consistent over- or underestimation of key parameters. Good survey research methodology seeks to minimize systematic error through probability-based sample selection and through conscientious survey design and execution. There are two key sources of systematic error in survey research: sample selection and response bias.

In the context of survey research, systematic errors may be best understood through a comparison of samples in which respondents are randomly selected (i.e., a probability sample) and samples in which respondents are selected because they are easily accessible (i.e., a convenience sample). Consider, for example, a research project in which the analysts wish to assess attitudes about a town's public library. One way to proceed might be to post students in front of the main entrance to the library and to have them survey individuals as they enter and leave the library. Another strategy would be to randomly select households to participate in the survey, in such a way that every household in the town has a nonzero probability of being chosen, and to send students to the selected households to conduct the interview.

How would the results of the survey differ as a consequence of these differences in sample selection? One might reasonably expect that the first research design, in which the sample is composed entirely of current library patrons, yields a sample that uses the library more frequently and, as a consequence, has more favorable views about the library and its importance to the community than does the larger, more representative sample of the entire town. In this case, the selection bias inherent in the convenience sample of library patrons would lead the researchers to systematically overestimate the use of, and support for, the town's public library.
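The overestimation described above can be illustrated with a small simulation. The figures below (a town of 10,000 households, 20% of whom are patrons, and the favorability rates) are hypothetical values chosen for illustration, not data from any actual study:

```python
import random

random.seed(42)

# Hypothetical town of 10,000 households. Assume (for illustration)
# that 20% are regular library patrons who hold favorable views 90%
# of the time, while non-patrons do so only 40% of the time.
town = (
    [{"patron": True, "favorable": random.random() < 0.9} for _ in range(2_000)]
    + [{"patron": False, "favorable": random.random() < 0.4} for _ in range(8_000)]
)

# Convenience sample: interviewers stand at the library entrance,
# so only patrons can be selected.
patrons = [h for h in town if h["patron"]]
convenience = random.sample(patrons, 500)

# Probability sample: every household has the same nonzero
# probability of selection.
probability = random.sample(town, 500)

def pct_favorable(sample):
    """Percentage of a sample reporting a favorable view."""
    return 100 * sum(h["favorable"] for h in sample) / len(sample)

print(f"Convenience sample: {pct_favorable(convenience):.1f}% favorable")
print(f"Probability sample: {pct_favorable(probability):.1f}% favorable")
```

Because the bias is systematic, the gap between the two estimates does not shrink as the sample grows; only changing the selection mechanism, not collecting more convenience interviews, removes it.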

Although in the library example, systematic error resulted from the selection bias in the sampling mechanism, William G. Cochran, Frederick G. Mosteller, and John Tukey state that systematic errors more often result from bias in measurement. In the context of the physical sciences, it is easy to imagine a scale that consistently overestimates the weight of whatever it measures by five units. There are analogies to this scale in survey research: In election surveys, for example, researchers often want to identify the issues or problems most important to voters and assess their effect on voting decisions. Typically, survey items about the most important issue are asked as the open-ended question, "What do you think is the most important problem facing this country today?" An alternative format uses a closed-ended question, in which respondents are presented with a list of issues and asked how important they perceive each issue to be. The proportion of the sample that reports that they consider an issue to be "very important" in the closed-ended format is typically much larger than the proportion of the sample who identify the issue when asked the open-ended question. Social desirability, priming, and framing effects also shift responses in predictable ways and complicate the measurement of attitudes and opinions. Similarly, question structure and order effects can generate spurious patterns in survey responses that undermine the ability to evaluate public opinion.
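The biased-scale analogy can be made concrete with a short sketch. The true weight, the five-unit offset, and the noise level are all hypothetical values chosen to mirror the example in the text:

```python
import random

random.seed(0)

TRUE_WEIGHT = 100.0  # hypothetical true value being measured

def biased_scale(true_weight):
    # Systematic error: a constant +5 offset on every reading.
    # Random error: small zero-mean noise that varies per reading.
    return true_weight + 5.0 + random.gauss(0, 1)

readings = [biased_scale(TRUE_WEIGHT) for _ in range(10_000)]
mean_reading = sum(readings) / len(readings)

# Averaging many readings cancels the random error but not the bias:
# the mean converges toward 105, not the true value of 100.
print(f"Mean of 10,000 readings: {mean_reading:.2f}")
```

This is the defining property of systematic error: unlike random error, it cannot be reduced by taking more measurements, only by correcting the instrument, or in survey terms, the question wording and format.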

...
