Internal Consistency Reliability
Internal consistency reliability estimates how much total test scores would vary if slightly different items were used. Because researchers usually want to measure constructs rather than particular items, they need to know whether the specific items chosen exert a large influence on test scores and research conclusions.
This entry begins with a discussion of classical reliability theory. Next, formulas for estimating internal consistency are presented, along with a discussion of the importance of internal consistency. Last, common misinterpretations and the interaction of all types of reliability are examined.
To examine reliability, classical test score theory divides an observed score on a test into two components, true score and error:

X = T + E

where X = observed score, T = true score, and E = error score.
If Steve's true score on a ...
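The classical model above treats each observed score as a true score plus error, and internal consistency estimates gauge how much of the variance in total scores reflects true scores rather than the particular items used. As a minimal sketch (not drawn from this entry), the Python code below computes coefficient alpha, one widely used internal consistency estimate; the score matrix is hypothetical and included only for illustration.

```python
# Minimal sketch of coefficient (Cronbach's) alpha, a common internal
# consistency estimate. The response data are invented for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate internal consistency for an (examinees x items) score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]                            # number of items
    item_variances = items.var(axis=0, ddof=1)    # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 6 examinees to 4 items scored 1-5.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [2, 2, 1, 2],
    [5, 5, 5, 4],
    [1, 2, 2, 1],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```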