Cohen's Kappa
Cohen's Kappa coefficient (κ) is a statistical measure of the degree of agreement or concordance between two independent raters that takes into account the possibility that agreement could occur by chance alone.
Like other measures of interrater agreement, κ is used to assess the reliability of different raters or measurement methods by quantifying their consistency in placing individuals or items into two or more mutually exclusive categories. For instance, in a study of developmental delay, two pediatricians may independently assess a group of toddlers and classify each child's language development as either "delayed for age" or "not delayed." One important aspect of the utility of this classification is the presence of good agreement between the two raters. Agreement between two raters could, however, arise partly by chance alone, and it is precisely this chance component that κ corrects for.
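Formally, κ compares the observed proportion of agreement, p_o, with the proportion of agreement expected by chance, p_e, given each rater's marginal classification rates: κ = (p_o − p_e) / (1 − p_e). A minimal sketch of this calculation in Python is shown below; the function name and the ratings for the pediatrician example are illustrative assumptions, not data from the original entry.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Compute Cohen's kappa for two equal-length lists of category labels."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters classify identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: for each category, the product of the two raters'
    # marginal proportions, summed over all categories.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: "D" = delayed for age, "N" = not delayed.
rater1 = ["D", "D", "N", "N", "N", "D", "N", "N", "D", "N"]
rater2 = ["D", "N", "N", "N", "N", "D", "N", "D", "D", "N"]
print(cohens_kappa(rater1, rater2))  # ≈ 0.58 for this sample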