Test–Retest Reliability
Test–retest reliability is one way to assess the consistency of a measure. The reliability of a set of scores is the degree to which the scores result from systematic rather than chance or random factors. Reliability measures the proportion of the variance among scores that is a result of true differences. True differences refer to actual differences, not measured differences. That is, if you are measuring a construct such as depression, some differences in scores will be caused by true differences and some will be caused by error. For example, if 90% of the differences are a result of systematic factors, then the reliability is .90, which indicates that 10% of the variance is based on chance or random factors. Some examples of chance or ...
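In classical test theory, the correlation between scores from two administrations of the same measure estimates this proportion directly. The following is a minimal sketch of that computation; the function name test_retest_reliability and the depression-score data are hypothetical and chosen only for illustration.

```python
import numpy as np

def test_retest_reliability(time1, time2):
    """Estimate test-retest reliability as the Pearson correlation
    between scores from two administrations of the same measure."""
    time1 = np.asarray(time1, dtype=float)
    time2 = np.asarray(time2, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
    # entry is the Pearson r between the two administrations.
    return np.corrcoef(time1, time2)[0, 1]

# Hypothetical depression scores for eight respondents, measured twice.
week1 = [12, 18, 9, 22, 15, 7, 19, 14]
week5 = [13, 17, 10, 21, 16, 8, 18, 15]

r = test_retest_reliability(week1, week5)
print(f"Test-retest reliability: {r:.2f}")
# Under classical test theory, a value of .90 would indicate that 90%
# of the score variance reflects systematic (true) differences and 10%
# reflects chance or random factors.
```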