
Reliability


According to the Standards for Educational and Psychological Testing, reliability (also referred to as measurement precision) refers to the consistency of assessment results over independent administrations of the testing procedure. The assessment results can be examinees’ scores or raters’ ratings of examinees’ performances on an assessment. Reliability is a central concept in measurement and a necessary condition when building a validity argument. Indeed, if an assessment fails to yield consistent results, it is imprudent to make any inferences about what a score signifies. Reliability is high if the scores or ratings for each examinee are consistent over replications of the testing procedure. Reliability coefficients range from 0 to 1, with 0 being extremely unreliable and 1 representing perfect reliability. There is no absolute critical value for acceptable reliability, as the need for precision depends on the stakes of the assessment. Typically, high-stakes assessments (e.g., college admission tests) necessitate higher reliability standards than low-stakes assessments (e.g., classroom examinations). This entry describes the most popular methods for estimating reliability as well as factors impacting reliability from both the classical and modern test theory perspectives.

Methods to Estimate Reliability

In classical test theory, the consistency of test scores is evaluated mainly in terms of reliability coefficients, which are defined in terms of the correlation between scores derived from replications of the testing procedure on a sample of test takers. There are four broad categories of reliability coefficients: stability coefficients, equivalence coefficients, internal consistency coefficients, and coefficients based on interrater agreement. Each type of coefficient reflects the variability associated with different data-collection designs and with different interpretations or uses of scores.

Stability Coefficients: The Test–Retest Method

The test–retest method, a measure of stability, is used to determine the consistency of the examinees’ scores on a test over time. The test–retest coefficient is obtained by correlating the scores of identical tests administered to the same examinees twice under similar testing conditions. Carry-over effects and the interval of time between the two test administrations can influence the test–retest coefficient, so this method is most appropriate for tests measuring traits that are not susceptible to carry-over effects and that are stable across time intervals. In practice, the longer the time interval between administrations, the lower the estimated reliability.

Equivalence Coefficients: The Alternate Forms Method

The alternate forms method, a measure of equivalence, is used to examine the consistency of two sets of scores on two parallel forms of a test. The alternate form coefficient is obtained by correlating the scores from parallel (or equivalent) forms of a test administered to the same examinees under similar conditions in close succession. That is, one form is administered to a group of examinees, followed shortly thereafter by the administration of the alternate form. The quality or similarity of the parallel forms can influence the alternate form coefficient. In practice, if the forms are not parallel, the alternate form method produces low estimates of reliability.

Internal Consistency Coefficients: Split-Half, KR-20, and Coefficient α Methods

Both measures of stability and equivalence require two administrations of (or parallel forms of) a test, but the administration of two tests can be impractical or unnecessary in reality. Internal consistency coefficients, which require a single test administration, are used to assess the consistency of the examinees’ responses to the items within a test. There are two broad classes of methods for estimating internal consistency coefficients. The first class is generally denoted as split-half procedures. The second class of methods requires an analysis of the variance–covariance structure of the item responses. With respect to the split-half methods, a test is administered to a group of examinees, then the test is split into two parallel halves, and the two sets of scores from the two split halves are correlated. This half-test reliability estimate is then used to calculate the full test reliability using the Spearman-Brown prophecy formula, which is written as follows:

$$\rho_{XX'_n} = \frac{2\rho_{AB}}{1 + \rho_{AB}},$$

where $\rho_{XX'_n}$ is the reliability projected for the full-length test of $n$ items, and $\rho_{AB}$ is the correlation between the half-tests A and B.
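To illustrate, the split-half procedure can be carried out in a few lines of Python. In this minimal sketch, the function name, the odd/even split, and the simulated data are illustrative choices; scores are assumed to be arranged in an examinee-by-item matrix:

```python
import numpy as np

def split_half_reliability(scores):
    """Project full-test reliability from an odd/even split-half correlation."""
    half_a = scores[:, 0::2].sum(axis=1)   # total score on the odd-numbered items
    half_b = scores[:, 1::2].sum(axis=1)   # total score on the even-numbered items
    rho_ab = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * rho_ab / (1 + rho_ab)       # Spearman-Brown step-up to full length

# Illustrative data: 200 examinees answering 20 dichotomous items that all
# depend on a common ability, so the two halves are roughly parallel.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
scores = (rng.random((200, 20)) < 1 / (1 + np.exp(-ability))).astype(int)
print(round(split_half_reliability(scores), 2))
```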

When calculating reliability based on item covariances, the two most widely used procedures are KR-20 (Kuder-Richardson Formula 20) and coefficient α (often referred to as Cronbach’s α). Coefficient α ($\hat{\alpha}$) is computed as

$$\hat{\alpha} = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k}\hat{\sigma}_i^2}{\hat{\sigma}_X^2}\right),$$

where $k$ is the number of items on the test, $\hat{\sigma}_i^2$ is the variance of item $i$, and $\hat{\sigma}_X^2$ is the total test variance. KR-20, a special case of coefficient α for dichotomously scored items only, is also based on the proportion of persons passing each item and the standard deviation of the scores.
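As a sketch of the calculation, coefficient α can be computed directly from an examinee-by-item score matrix; the function name below is illustrative, and for items scored 0/1 the same computation yields KR-20:

```python
import numpy as np

def coefficient_alpha(scores):
    """Coefficient alpha from an (examinees x items) score matrix."""
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total test scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Applied to the same examinee-by-item matrix used in the split-half sketch, this yields a single internal consistency estimate without forming explicit half-tests.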

Coefficients Based on Interrater Agreement: Interrater Method

The interrater method, a measure of consistency of ratings, is used to examine the consistency of observed performances over different raters or observers. It is obtained by having two or more observers rate a performance of any kind and calculating the percentage of agreement between observations. The interrater approach is the preferred method when calculating the reliability of assessments/performances such as constructed responses, speeches, debates, or musical performances. Variation among raters and variability in the interpretation of assessment results are the two potential sources of error influencing interrater reliability.

Factors Affecting Reliability

In this section, the factors that impact the reliability of assessment results are discussed. Although individual characteristics (e.g., motivation, fatigue, health, and ability) as well as the quality of assessment itself (e.g., clarity of instructions and test difficulty) inevitably impact all reliability estimates, here, the focus is on the three most widely cited sources of error with respect to reliability.

Test Length

Generally speaking, the longer the measure is, the more reliable it is. As test length increases, the proportion of a student’s score that can be attributed to error decreases. For example, low-ability students may answer a single item correctly, even if guessing; however, it is much less likely that low-ability students will correctly answer all items on a 20-item test via guessing. The use of longer measures thus reduces the impact of any single chance error. Other test characteristics being equal (e.g., item quality), a measure with 40 items should have higher reliability than one with 20 items. The relationship between reliability and test length can be shown mathematically with the Spearman-Brown prophecy formula mentioned previously. The formula is based on the assumption that, when tests are shortened or lengthened, items of comparable content and statistics to those already in the test are deleted or added. For example, if the reliability of a 20-item test is determined to be 0.75, and the length of the test is doubled by adding items of comparable content and statistics, then the predicted reliability of the new test would be

$$\frac{2 \times 0.75}{1 + 0.75} \approx 0.86.$$
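This arithmetic follows from the general form of the prophecy formula, in which the reliability of a test lengthened by a factor of $n$ is $n\rho/(1 + (n - 1)\rho)$. A brief, illustrative Python check:

```python
def spearman_brown(rho, length_factor):
    """Projected reliability when a test is lengthened by `length_factor`,
    assuming the added items are comparable in content and statistics."""
    return length_factor * rho / (1 + (length_factor - 1) * rho)

print(round(spearman_brown(0.75, 2), 2))  # 0.86, matching the worked example
```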

Spread of Scores

Because reliability is sample dependent, all other factors being equal, the greater the spread of scores, the higher the reliability estimate. Indeed, larger reliability coefficients result when examinees remain in the same relative position in a group across multiple administrations of an assessment. To be sure, errors of measurement have less influence on the relative position of individuals when the differences among group members are large (when there is a large spread of scores). Consequently, anything that reduces the possibility of shifting positions in the group (e.g., a heterogeneous sample of examinees) also contributes to larger reliability coefficients.

Objectivity of Scoring

The objectivity of scoring influences reliability in the sense that the error introduced by the scoring procedure varies with respect to the extent that human judgment is required. With objective items such as multiple-choice or matching items, the scoring presents little opportunity for the introduction of human error. Constructed response items and performance assessments, however, often involve the subjective judgments of human raters or scorers. Consequently, they are subject to different degrees of scoring error, depending on the nature of the question and the scoring procedures. For example, short-answer constructed response items tend to be more objectively scoreable than longer, more complex student responses (e.g., essays) and products (e.g., projects).

Standard Error of Measurement (SEM)

Within a classical test theory framework, an examinee’s observed test score (X) is composed of two parts: the true score (T) and the error score (E):

X=T+E

The true score can be interpreted as the average of the observed scores obtained over an infinite number of repeated administrations with the same test or parallel forms of the test. The error score is the difference between the observed test score and the true score.

The SEM is an estimate of the extent to which an examinee’s scores vary across administrations. For example, for a group of examinees, each individual has a true score and several possible observed scores around the individual’s true score. Theoretically, each examinee’s personal distribution of possible observed scores around the examinee’s true score has a standard deviation. The SEM is the average of these individual error standard deviations for the group.

Another way of thinking about reliability is that it refers to the extent to which students’ scores are free from errors of measurement. Assuming errors are random and independent, the observed score variance ($\sigma_X^2$) can be decomposed into the variance of true scores ($\sigma_T^2$) and the variance of the errors of measurement ($\sigma_E^2$). The reliability coefficient (or the correlation between two measures of the same trait) can also be defined mathematically as the ratio of true score variance to observed score variance. The SEM ($\sigma_E$) is a function of the standard deviation of observed scores ($\sigma_X$) and the reliability coefficient ($\rho_{XX'}$):

$$\sigma_E = \sigma_X\sqrt{1 - \rho_{XX'}}.$$

Note that as the reliability coefficient increases, the SEM decreases.
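A short, illustrative Python sketch of this relationship, using hypothetical values for the observed-score standard deviation and the reliability coefficient:

```python
import math

def sem(sd_x, reliability):
    """Classical SEM: sigma_E = sigma_X * sqrt(1 - rho_XX')."""
    return sd_x * math.sqrt(1 - reliability)

# Hypothetical test with an observed-score standard deviation of 10:
for rho in (0.70, 0.85, 0.95):
    print(rho, round(sem(10.0, rho), 2))   # SEM shrinks as reliability rises
```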

Classification Consistency and Accuracy

Decision consistency (DC) refers to the extent to which classification decisions about examinees agree across two independent administrations of the same exam or two parallel forms of an exam. Decision accuracy (DA) refers to the extent to which the actual classifications based on observed scores agree with the “true” classifications. DC and DA are important for assessments whose purpose is to classify examinees into performance categories (as is often the purpose of criterion-referenced tests). Similar to classical reliability with respect to the consistency of overall assessment results, consistency of students’ classifications is also a necessary condition when building a validity argument for criterion-referenced tests. Without sufficient confidence in the consistency of students’ classifications, any inferences based on the classifications would be dubious.

Methods to Estimate DC and DA

When calculating or determining DC and DA, the two most common indices are the agreement index P and Cohen’s κ. The agreement index P is defined as the proportion of times that the same decision would be made based on two parallel forms of a test. It can be expressed as

$$P = \sum_{j=1}^{J} P_{jj},$$

where $J$ is the number of performance categories, and $P_{jj}$ is the proportion of examinees consistently classified into the $j$th category across the two administrations or forms of a test. If Form 1 is one set of observed scores, and Form 2 is replaced with the true scores or another criterion measure, then $P$ becomes the DA index. To get a more interpretable measure of decision-making consistency, Cohen’s κ can be computed as follows:

$$\kappa = \frac{P_0 - P_C}{1 - P_C}, \quad P_0 = \sum_{j=1}^{J} P_{jj}, \quad P_C = \sum_{j=1}^{J} P_{j\cdot}P_{\cdot j},$$

where $P_0$ is the observed proportion of agreement, $P_C$ is the expected proportion of agreement, $P_{jj}$ is the proportion of examinees consistently classified into the $j$th category, and $P_{j\cdot}$ and $P_{\cdot j}$ are the marginal proportions of examinees falling in the $j$th category on the first and second administrations of the test, respectively. $P_C$ represents the DC expected by chance.

κ can be thought of as the proportion of agreement that exists above and beyond what can be expected by chance alone. κ takes values between −1 and 1. A value of 0 or below indicates that the decisions are no more consistent than decisions based on two tests that are statistically independent; in other words, the decisions are very inconsistent and the reliability of the classifications is extremely low. A value of 1 indicates that the decisions based on the two tests are in perfect agreement.
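Both indices can be computed from a $J \times J$ table that cross-classifies examinees by the category assigned on each form. The following Python sketch uses a hypothetical three-category table; the function name and the counts are illustrative:

```python
import numpy as np

def decision_consistency(table):
    """Agreement index P and Cohen's kappa from a J x J cross-classification
    table whose (i, j) cell holds the number (or proportion) of examinees
    placed in category i on Form 1 and category j on Form 2."""
    table = np.asarray(table, dtype=float)
    table = table / table.sum()                          # convert counts to proportions
    p_0 = np.trace(table)                                # observed agreement, sum of P_jj
    p_c = (table.sum(axis=1) * table.sum(axis=0)).sum()  # chance agreement from the marginals
    kappa = (p_0 - p_c) / (1 - p_c)
    return p_0, kappa

# Hypothetical counts for three performance categories (e.g., basic, proficient, advanced)
counts = [[40, 10,  0],
          [ 8, 55,  7],
          [ 1,  9, 70]]
p_0, kappa = decision_consistency(counts)
print(round(p_0, 3), round(kappa, 3))   # about 0.825 and 0.733
```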

Reliability From Item Response Theory (IRT) Perspective

Unlike classical reliability, which uses a single value to describe a measure’s average reliability, in IRT, reliability is not uniform across the entire range of proficiency levels. Scores at both ends of the proficiency distribution generally have more error associated with them than scores at the center of the distribution. IRT emphasizes the examination of item and test information in lieu of classical reliability. In mathematical statistics, the term (Fisher) information conveys a similar, but more technical, meaning: it describes the precision with which a parameter can be estimated and is defined as the reciprocal of the variance of the parameter estimate. For instance, in IRT, interest lies in estimating the value of the ability parameter (θ) of an examinee, denoted by $\hat{\theta}$. Every ability estimate has a variance, $\sigma^2(\hat{\theta})$, which indicates how precisely a given ability level can be estimated. The amount of information ($I$) at a given ability level is the reciprocal of this variance:

$$I(\theta) = \frac{1}{\sigma^2(\hat{\theta})}.$$

The higher the information at a given ability level, the more precisely that ability level can be estimated.

Under IRT, each item on a test measures the proficiency level or ability of an examinee. Therefore, the amount of information for any single item can be computed at any ability level. The mathematical definition of the amount of item information depends upon the particular IRT model employed. For the one-parameter logistic and Rasch models, the item information is a function of the item difficulty parameter. For the two-parameter logistic model, the item information is a function of the item discrimination and item difficulty parameters, whereas for the three-parameter logistic model, the item information is a function of item discrimination, item difficulty, and pseudo-guessing parameters. Generally speaking, item information functions tend to have a bell shape. Highly discriminating items have tall, narrow information functions that provide considerable information but over a narrow range (Figure 1), whereas less discriminating items provide less information over a wider range (Figure 2). The highest item information of Item 1 is 1, whereas the highest item information of Item 2 is 0.25.

Figure 1 Item information function for Item 1

Note: This item was simulated using the 2PL model with an item discrimination parameter of 2.0 and an item difficulty parameter of 1.0 on the logistic scale.


Figure 2 Item information function for Item 2

Note: This item was simulated using the 2PL model with an item discrimination parameter of 1.0 and an item difficulty parameter of 1.0 on the logistic scale.

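The maximum information values reported for Items 1 and 2 can be verified with a short Python sketch of the 2PL information function on the logistic scale, $I(\theta) = a^2 P(\theta)[1 - P(\theta)]$; the function name is illustrative:

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """2PL item information on the logistic scale: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-4, 4, 161)                       # grid that includes theta = 1.0
info_1 = item_information_2pl(theta, a=2.0, b=1.0)    # Item 1 (Figure 1)
info_2 = item_information_2pl(theta, a=1.0, b=1.0)    # Item 2 (Figure 2)
print(info_1.max(), info_2.max())                     # about 1.0 and 0.25, peaking near theta = 1.0
```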

Because items are conditionally independent of one another given an individual’s trait level, the test information function (TIF) is simply the sum of the information of all items on a test. For a test composed of the two items above, the TIF looks like that shown in Figure 3.

Figure 3 Test information function

Note: The TIF is for a test composed of two items: one with an item discrimination of 2.0 and item difficulty of 1.0, and the other with an item discrimination of 1.0 and item difficulty of 1.0.


The maximum of the TIF is 1.25 (the sum of the maximum item information of Items 1 and 2), and the TIF is modal around 1.0, the common difficulty of the two items.

The conditional error variance is the reciprocal of the test information at a given trait level (θ); the conditional SEM is its square root:

$$\sigma_{E|\theta} = \sqrt{\frac{1}{\mathrm{TIF}(\theta)}}.$$

The aggregate SEM, which is analogous to the SEM from the CTT perspective, is obtained as follows:

$$\sigma_E = \sqrt{\frac{1}{\mathrm{TIF}}}.$$

That is, the measurement error is equal to the square root of the reciprocal of the test information and it is interpreted in the same way as the traditional SEM. With a large item bank, TIFs can be manipulated to control measurement error very precisely because the TIF shows the degree of precision at each individual proficiency level.
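Continuing the two-item example, the following illustrative Python sketch sums the item information functions to obtain the TIF and converts it into a conditional SEM at θ = 1.0:

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """2PL item information on the logistic scale."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def test_information(theta, items):
    """TIF: the sum of the item information functions (local independence assumed)."""
    return sum(item_information_2pl(theta, a, b) for a, b in items)

items = [(2.0, 1.0), (1.0, 1.0)]      # (discrimination, difficulty) for Items 1 and 2
tif_at_peak = test_information(1.0, items)
sem_at_peak = 1.0 / np.sqrt(tif_at_peak)
print(tif_at_peak, round(sem_at_peak, 3))   # 1.25 and about 0.894
```

At ability levels far from 1.0, the TIF drops and the conditional SEM grows, which is the sense in which measurement precision varies across the proficiency range.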

Final Thoughts

Because reliability is a precursor to establishing test score validity, the reliability of a measure is a critical consideration. Reliability and the SEM can be obtained from both classical and IRT perspectives, and they are conceptually the same. The choice of method for establishing an assessment’s reliability should be determined in light of the data-collection design (e.g., two test administrations or a single administration; the same test or parallel forms available) and the intended interpretation and/or use of scores (e.g., stability, equivalence, internal consistency, or classification consistency). The level of precision required depends on both the purpose and the stakes of the assessment. To ensure reliable results when designing assessments, one should encourage test takers to perform their best, make scoring criteria readily available to test takers and raters (when appropriate), allow enough time, and include enough items. Ultimately, the purpose of any assessment is to provide meaningful feedback about what examinees know and are able to do. Well-developed assessments yielding consistent results are key to this goal.

See also Classical Test Theory; Internal Consistency; Item Response Theory; Split-Half Reliability; Test Information Function; Test–Retest Reliability; Validity

Fen Fan Jennifer Randall
10.4135/9781506326139.n584
Further Readings
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Crocker, L. M., & Algina, J. (1986). Introduction to classical and modern test theory. New York, NY: Holt, Rinehart, and Winston.
Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park, CA: SAGE.
Livingston, S. A., & Lewis, C. (1995). Estimating the consistency and accuracy of classifications based on test scores. Journal of Educational Measurement, 32(2), 179–197.
Thorndike, R. M., & Thorndike-Christ, T. M. (2009). Measurement and evaluation in psychology and education (8th ed.). Boston, MA: Pearson.
