
Random error, as with most error in research design, is not "error" in the sense of an obvious mistake. Rather, it is variability that occurs in data simply because of the natural inconsistency that exists in the world. Random error does not alter the measured values of a variable in a consistent or systematic way, but it does make any measure of a variable imperfect.

Effects on Data

Random error contributes variability to all data collection measures, such that according to classical test theory, the observed value of a variable (x0) is not the "true" value of the variable; instead it is the true value (xt) of the variable plus the value of the error (e). This idea can be written as x0 = xt + e. The error term may be made up of both random error (er) and systematic error (es), which is error that alters the values of the data in a consistent manner. Thus, x0 = xt + er + es.
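The decomposition x0 = xt + er + es can be illustrated with a short simulation. The sketch below uses hypothetical values (a true value of 100, a constant bias of 2.5, and normally distributed random error); it is an illustration of the classical test theory model, not a prescribed procedure.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0      # x_t: the true value of the variable (hypothetical)
SYSTEMATIC_ERROR = 2.5  # e_s: a constant bias added to every measurement

# Each observed value x0 = xt + er + es, where the random component er
# is drawn from a normal distribution centered on zero.
observed = [
    TRUE_VALUE + random.gauss(0, 5) + SYSTEMATIC_ERROR
    for _ in range(10_000)
]

mean_observed = statistics.mean(observed)

# The random errors largely cancel across observations, so the mean of
# the observed data is offset from the true value by roughly the
# systematic error alone.
print(round(mean_observed - TRUE_VALUE, 1))
```

Note that only the systematic component survives averaging: the mean deviation from the true value approaches es, while er contributes spread but no net shift.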

Random error is equally likely to make an observed value of a variable higher or lower than its true value. Although random error can lead to erroneous conclusions about individual data points, its impact on any one observed value is generally canceled out by the imperfections in the other observed values. Thus, when a sample of data is collected, it is assumed that the random variability will not consistently push the values of the data in one direction. Even though random error contributes variability to the observed data, the overall mean of the observed data is assumed to equal the mean of the true values of the measured variable. Random error simply creates more "noise" in the data; even so, the noise is centered on the true value of the variable.
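This canceling-out can be demonstrated directly. In the sketch below (hypothetical true value of 50, random error with a standard deviation of 3), individual observations stray well away from the true value, yet the sample mean stays close to it because the positive and negative errors offset one another.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 50.0  # hypothetical true value of the measured variable

# Observed values contain only random error, equally likely to fall
# above or below the true value.
sample = [TRUE_VALUE + random.gauss(0, 3) for _ in range(5_000)]

# Individual observations can be far from the true value...
largest_deviation = max(abs(x - TRUE_VALUE) for x in sample)

# ...but the errors cancel in aggregate, so the sample mean stays close.
mean_deviation = abs(statistics.mean(sample) - TRUE_VALUE)

print(largest_deviation)  # on the order of several standard deviations
print(mean_deviation)     # a small fraction of one standard deviation
```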

Because data measures are never perfect, hypothesis testing consists of conducting statistical analyses to determine whether the variability that exists in data is simply due to random error or whether it is due to some other factor, such as systematic error, or the effect of an independent variable. In hypothesis testing, the variability in the samples is compared with the estimated variability in the population in order to determine whether it is probable that the sample variability is equivalent to the variability expected due to random error. If there is more variability than one would expect simply due to random error, then the variance is said to be “statistically significant.”
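One common way to make this comparison is a two-sample t statistic, which scales the observed difference between group means by the variability expected from random error. The sketch below implements Welch's t statistic in plain Python with hypothetical group parameters; it is one illustration of the general logic described above, not the only test used in practice.

```python
import random
import statistics

random.seed(1)

def t_statistic(a, b):
    """Welch's t statistic: the between-group difference divided by the
    standard error, i.e., the variability expected from random error."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    standard_error = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / standard_error

# Two hypothetical samples whose true means actually differ (100 vs. 105).
control = [random.gauss(100, 10) for _ in range(200)]
treated = [random.gauss(105, 10) for _ in range(200)]

t = t_statistic(treated, control)

# For large samples, |t| > 1.96 would occur less than 5% of the time if
# the difference were due to random error alone, so a value beyond that
# threshold is called statistically significant.
print(abs(t) > 1.96)
```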

Expected Value

When multiple samples are taken from a population, the values of the statistics calculated from those samples are assumed to contain random error. However, the mean of the random error in those samples is assumed to be zero. In addition, because random error is equally likely to increase or decrease the observed value of the true variable, and it is not expected to have a systematic impact, random error is generally assumed to have a normal distribution. If multiple samples are taken from a normal distribution with a mean of zero, then the mean of those samples is also expected to be zero. Thus, given that the mean of the normally distributed random error distribution is zero, the expected value of random error is zero. Consequently, the mean value of the measured data is equal to the mean value of the true data plus the mean random error, which is zero. Thus, even though random error contributes to data measures, the mean of the measured data is unaffected by the random error and can be expected to equal its true value.
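The argument above can be checked by repeated sampling. The sketch below draws many samples from a hypothetical population (true mean of 10, random error with a standard deviation of 2); each sample mean contains random error, but the mean of the sample means converges on the true mean because the expected value of the random error is zero.

```python
import random
import statistics

random.seed(7)

TRUE_MEAN = 10.0  # hypothetical true value of the variable

# Draw many independent samples; each sample mean contains random error.
sample_means = []
for _ in range(1_000):
    sample = [TRUE_MEAN + random.gauss(0, 2) for _ in range(50)]
    sample_means.append(statistics.mean(sample))

# Because the expected value of the random error is zero, the mean of
# the sample means lands very close to the true mean.
grand_mean = statistics.mean(sample_means)
print(grand_mean)
```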
