
Parameter Random Error

When fitting a statistical model and estimating parameters, the variability in parameter estimates due to random sampling is called parameter random error. This is in contrast to systematic bias in parameter estimates, which may arise from model misspecification or convenience sampling. This entry describes parameter random error in the context of educational measurement, specifically as it relates to item response theory (IRT).

Source

In most uses of IRT, item parameters are estimated from a calibration sample prior to operational use. If we denote the population item parameters as Γ and draw samples 1 to m at random from a population of test takers, the resulting item parameter estimates Γ̂1, Γ̂2, … , Γ̂m will differ from sample to sample. The variability among these sets of estimates is the random error for Γ̂.
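The sampling variability described above can be made concrete with a small Monte Carlo sketch. The code below is an illustration, not part of the entry itself: it repeatedly draws calibration samples of test takers under a Rasch model, re-estimates a single item's difficulty in each sample, and reports the spread of the estimates. All function names and numerical settings (true difficulty 0.5, abilities drawn from a standard normal and treated as known) are assumptions made for the demonstration.

```python
import math
import random
import statistics

def p_correct(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_difficulty(thetas, responses):
    """Newton-Raphson MLE of item difficulty b, abilities treated as known."""
    b = 0.0
    for _ in range(50):
        ps = [p_correct(t, b) for t in thetas]
        grad = sum(ps) - sum(responses)           # d(loglik)/db
        info = sum(p * (1.0 - p) for p in ps)     # -d2(loglik)/db2
        if abs(grad) < 1e-10 or info < 1e-12:
            break
        b += grad / info
    return b

def calibration_sds(true_b, sample_sizes, n_reps=200, seed=1):
    """SD of the difficulty estimates across repeated calibration samples."""
    rng = random.Random(seed)
    out = {}
    for n in sample_sizes:
        estimates = []
        for _ in range(n_reps):
            thetas = [rng.gauss(0.0, 1.0) for _ in range(n)]
            resp = [1 if rng.random() < p_correct(t, true_b) else 0
                    for t in thetas]
            estimates.append(estimate_difficulty(thetas, resp))
        out[n] = statistics.stdev(estimates)
    return out

sds = calibration_sds(true_b=0.5, sample_sizes=(250, 2500))
for n, sd in sds.items():
    print(f"n = {n:5d}  SD of b-hat = {sd:.3f}")
```

With the tenfold-larger calibration sample, the spread of the difficulty estimates shrinks by roughly the square root of the sample-size ratio, which is why parameter random error was traditionally treated as negligible in large-sample calibrations.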

Item parameter estimates are often treated as the population values and are used in test construction, ability estimation, linking/equating, or item selection in computerized adaptive testing. Traditionally, IRT was primarily used for the development of large-scale educational achievement and ability tests in which items were calibrated on large samples (e.g., several thousand test takers). In these cases, parameter random error was assumed to be negligible. Recently, interest in and use of IRT have increased dramatically, and the applications of IRT models have extended to settings where large sample sizes may be unavailable. In these cases, parameter random error should not be ignored.

Effects

Whether items are part of a fixed-form assessment or a computerized adaptive testing item pool, they are often selected for use partially on the basis of their item parameters. For example, maximum Fisher information is a commonly used item selection algorithm that tends to select items with large discrimination parameters. Likewise, items with high discrimination parameters are frequently selected in fixed-form assessments in order to build a desirable test information function. Because the standard error of a maximum likelihood ability estimate is approximately the inverse square root of the test information, selection that capitalizes on positively biased parameter estimates overstates test information, and neglecting parameter random error can therefore lead to underestimation of standard errors. Although several methods have been proposed to correct for parameter random error in ability estimation, neglecting this additional error may lead to misinterpretation of ability estimates.
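The capitalization-on-chance effect can be sketched as follows. In this hypothetical illustration (the item pool, noise level, and function names are assumptions, not from the entry), two-parameter logistic (2PL) discrimination estimates are perturbed with Gaussian noise while difficulties are treated as known; the item with the largest *estimated* Fisher information at θ = 0 is selected, and its estimated information is compared with its true information.

```python
import math
import random
import statistics

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def selection_bias(n_items=40, n_reps=500, noise_sd=0.25, seed=7):
    """Mean (estimated - true) information of the selected max-info item."""
    rng = random.Random(seed)
    true_a = [1.2] * n_items                                  # equal true discriminations
    true_b = [rng.uniform(-0.5, 0.5) for _ in range(n_items)]  # known difficulties
    gaps = []
    for _ in range(n_reps):
        est_a = [a + rng.gauss(0.0, noise_sd) for a in true_a]
        # Select the item whose *estimated* information at theta = 0 is largest.
        est_info = [info_2pl(0.0, est_a[i], true_b[i]) for i in range(n_items)]
        k = max(range(n_items), key=lambda i: est_info[i])
        gaps.append(est_info[k] - info_2pl(0.0, true_a[k], true_b[k]))
    return statistics.mean(gaps)

bias = selection_bias()
print(f"average overstatement of the selected item's information: {bias:.3f}")
```

Because every item's true discrimination is identical here, any item that "wins" the selection does so largely on upward estimation noise, so the information attributed to the selected item systematically exceeds its true information, and ability standard errors computed from it are understated.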

Outside of ability estimation and test construction, parameter random error also has implications for linking and equating. Studies have found that random error in parameter estimates can produce poor estimates of linking coefficients under common equating methods. However, other studies have found that certain methods (e.g., the test response function linking/equating method) are fairly robust to item calibration error when there is a sufficiently large number of common items between the forms.
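A minimal illustration of linking-coefficient error, using the mean/sigma method (a common difficulty-based linking method, chosen for simplicity rather than the test response function method discussed above): both forms' common-item difficulty estimates are assumed to carry independent Gaussian calibration error, and the true transformation is the identity (A = 1, B = 0). All names and settings are hypothetical.

```python
import random
import statistics

def mean_sigma_link(b_old, b_new):
    """Mean/sigma linking coefficients placing the new form on the old scale."""
    A = statistics.stdev(b_old) / statistics.stdev(b_new)
    B = statistics.mean(b_old) - A * statistics.mean(b_new)
    return A, B

def linking_error(n_common, n_reps=300, noise_sd=0.15, seed=11):
    """RMSE of the slope A when common-item difficulties carry calibration error."""
    rng = random.Random(seed)
    sq_errs = []
    for _ in range(n_reps):
        # True common-item difficulties; the true transformation is A=1, B=0.
        b_true = [rng.gauss(0.0, 1.0) for _ in range(n_common)]
        b_old = [b + rng.gauss(0.0, noise_sd) for b in b_true]
        b_new = [b + rng.gauss(0.0, noise_sd) for b in b_true]
        A, _ = mean_sigma_link(b_old, b_new)
        sq_errs.append((A - 1.0) ** 2)
    return (sum(sq_errs) / n_reps) ** 0.5

for k in (5, 20, 80):
    print(f"{k:3d} common items: RMSE of A = {linking_error(k):.3f}")
```

The error in the recovered slope shrinks as the number of common items grows, consistent with the finding that linking is more robust to item calibration error when the common-item set is sufficiently large.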

See also Equating; Item Information Function; Item Response Theory; Score Linking; Simple Random Sampling

Alex Brodersen, Can Shao, and Ying Cheng
10.4135/9781506326139.n500

Further Readings

Hambleton, R. K., & Jones, R. W. (1994). Item parameter estimation errors and their influence on test information functions. Applied Measurement in Education, 7(3), 171–186. https://doi.org/10.1207/s15324818ame0703_1
Kaskowitz, G. S., & De Ayala, R. J. (2001). The effect of error in item parameter estimates on …
