
Parameter Mean Squared Error

The parameter mean squared error (MSE), also known as the empirical mean squared error, measures the deviation of an estimated value from the expected value of a given parameter. The lower the MSE, the more accurate the estimate or the estimation method. Mathematically, it is the average of the squared deviations across a number of estimations; the MSE is therefore always nonnegative. Calculating the MSE requires the expected values of the parameters, which are normally unknown in statistical analysis. For this reason, the MSE is commonly used as an evaluation criterion in conjunction with the Markov chain Monte Carlo (MCMC) method, in which data are randomly sampled from probability distributions rather than collected from the real world. This entry introduces the definition and calculation of the MSE and, through an example, discusses its usefulness within MCMC methods.

Calculation of the MSE

If an estimation procedure is repeated a number of times, one can calculate the average squared deviation of an estimator from the expected value of a given parameter across all replications. Let $x$ denote the expected value of a parameter and $\hat{x}_i$ denote the estimate of $x$ from the $i$th replication, $i = 1, \ldots, T$. Then, the MSE for the estimator can be written as

$$\mathrm{MSE} = \frac{1}{T} \sum_{i=1}^{T} \left( \hat{x}_i - x \right)^2. \qquad (1)$$
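The formula above can be sketched in a few lines of code. This is a minimal illustration, not from the entry itself: the assumed setup takes the mean of a normal distribution as the true parameter and the sample mean as its estimator in each replication.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup for illustration: true parameter x is the mean of a
# normal distribution; each replication estimates it by a sample mean.
true_value = 5.0   # expected value x of the parameter
T = 1000           # number of replications
n = 50             # sample size per replication

estimates = np.array(
    [rng.normal(true_value, 2.0, n).mean() for _ in range(T)]
)

# MSE = (1/T) * sum_i (x_hat_i - x)^2
mse = np.mean((estimates - true_value) ** 2)
```

Because the estimator here is unbiased, the MSE should be close to the sampling variance of the sample mean, $\sigma^2/n = 4/50 = 0.08$, for a large number of replications.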

At times, one may need to estimate a set of parameters. For example, in item response theory calibration, the ability parameters for a group of examinees need to be estimated. Let $X$ be the vector of expected values of $N$ parameters and $\hat{X}_i$ be the vector of estimates from the $i$th replication, $i = 1, \ldots, T$. Then, the MSE for the estimators can be written as

$$\mathrm{MSE} = \frac{1}{T} \frac{1}{N} \sum_{i=1}^{T} \sum_{j=1}^{N} \left( \hat{x}_{ij} - x_j \right)^2, \qquad (2)$$

where $\hat{x}_{ij}$ and $x_j$ are the $j$th elements of $\hat{X}_i$ and $X$, respectively.
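The vector case simply pools the squared deviations over all $N$ parameters and all $T$ replications. A minimal sketch, with a hypothetical setup in which each of $N$ parameters is estimated with independent noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: N parameters (e.g., abilities for N examinees),
# each estimated across T replications with independent noise.
N, T = 10, 500
true_params = rng.normal(0.0, 1.0, N)                    # vector X
estimates = true_params + rng.normal(0.0, 0.3, (T, N))   # rows are X_hat_i

# MSE = (1/T)(1/N) * sum_i sum_j (x_hat_ij - x_j)^2
mse = np.mean((estimates - true_params) ** 2)
```

With unbiased noise of standard deviation 0.3, the pooled MSE should be near $0.3^2 = 0.09$.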

There are also occasions in which researchers are interested in the accuracy of the output of a function with respect to its expected value. For example, a psychometrician may wish to examine the accuracy of equating results obtained by using estimated linking coefficients and the related equating functions. Let $f(s)$ be the "true" function of $s$, built upon the expected values of all related coefficients. Let $\hat{f}_i(s)$ denote the estimated function of $s$, in which all coefficients are estimates from the $i$th replication, $i = 1, \ldots, T$. Then, the MSE for the estimated function can be expressed as

$$\mathrm{MSE} = \frac{1}{T} \sum_{i=1}^{T} \left[ \hat{f}_i(s) - f(s) \right]^2. \qquad (3)$$
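As a sketch of the functional case, consider a hypothetical linear equating function $f(s) = a s + b$; the "true" coefficients are known here only because the replications are simulated. Each replication yields estimated coefficients, and the MSE is computed on the function's output at a fixed score $s$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear equating function f(s) = a*s + b.
a_true, b_true = 1.2, -0.5
s = 10.0      # score at which accuracy is evaluated
T = 1000      # number of replications

# Each replication yields estimated coefficients (a_hat_i, b_hat_i);
# the noise levels below are illustrative assumptions.
a_hat = a_true + rng.normal(0.0, 0.05, T)
b_hat = b_true + rng.normal(0.0, 0.10, T)

# MSE = (1/T) * sum_i [f_hat_i(s) - f(s)]^2
f_true = a_true * s + b_true
mse = np.mean((a_hat * s + b_hat - f_true) ** 2)
```

Note that the error in the slope is magnified by $s$: with these noise levels the MSE should be near $(0.05 \cdot 10)^2 + 0.10^2 = 0.26$.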

In a study using MCMC simulation methods, the MSE is typically viewed as an index that summarizes the total error occurring in a given statistical process (e.g., estimation, equating, and scaling). Based on their source, errors are of two types: systematic and random. The former reflects constant inaccuracy and is also known as bias, whereas the latter is unpredictable and occurs only by chance. Correspondingly, the MSE can be decomposed into two components that represent the systematic and the random error, respectively. Equation 1, for instance, can be rewritten as

$$\mathrm{MSE} = \frac{1}{T} \sum_{i=1}^{T} \left( \hat{x}_i - x \right)^2 = \left( \bar{x} - x \right)^2 + \frac{1}{T} \sum_{i=1}^{T} \left( \hat{x}_i - \bar{x} \right)^2, \qquad (4)$$

where

$$\bar{x} = \frac{1}{T} \sum_{i=1}^{T} \hat{x}_i.$$

As shown in Equation 4, the MSE is the sum of the squared bias, $\left( \bar{x} - x \right)^2$, and the sample variance, $\frac{1}{T} \sum_{i=1}^{T} \left( \hat{x}_i - \bar{x} \right)^2$. The square root of the sample variance is the empirical standard error of the estimator, which indicates the consistency of the estimator and the estimation method.
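The decomposition of Equation 4 holds exactly, not just in expectation, and can be verified numerically. In this sketch, a deliberately biased estimator (an assumed constant offset added to the sample mean) makes both components nonzero:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical biased estimator: sample mean plus a constant offset of 0.1,
# so both the bias and the variance components are nonzero.
true_value = 5.0
T, n = 2000, 50
estimates = np.array(
    [rng.normal(true_value, 2.0, n).mean() + 0.1 for _ in range(T)]
)

mse = np.mean((estimates - true_value) ** 2)          # total MSE
bias_sq = (estimates.mean() - true_value) ** 2        # squared bias
variance = np.mean((estimates - estimates.mean()) ** 2)  # sample variance

# MSE = squared bias + sample variance (up to floating-point error)
assert np.isclose(mse, bias_sq + variance)
```

Here the squared bias stays near $0.1^2 = 0.01$ no matter how many replications are run, while the variance component shrinks as the per-replication sample size $n$ grows; this is why bias is called the systematic component.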

...
