Inter-Rater Reliability

Published: 2018

Inter-rater reliability, sometimes referred to as interobserver reliability (the two terms are interchangeable), is the degree to which different raters or judges make consistent estimates of the same phenomenon. For example, medical diagnoses often require a second or third opinion, and competitions such as art judging or figure skating are decided by the ratings of two or more judges. Researchers might likewise have raters assign scores for the degree of pathology in an individual, or classify the types of verbal response in a study of communication. In psychometrics and statistics, reliability is the overall trustworthiness of a measure; common terms used to describe it include consistency, repeatability, dependability, and generalizability. High reliability is achieved if similar results are produced under consistent ...
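The entry above is truncated, so as a generic illustration of the idea: one common way to quantify agreement between two raters is raw percent agreement, optionally corrected for chance using Cohen's kappa. The rater labels and data below are hypothetical, and this sketch assumes the simple two-rater, categorical-rating case; it is not drawn from the entry itself.

```python
from collections import Counter

def percent_agreement(ratings_1, ratings_2):
    # Fraction of items on which the two raters gave the same rating.
    assert len(ratings_1) == len(ratings_2)
    return sum(a == b for a, b in zip(ratings_1, ratings_2)) / len(ratings_1)

def cohens_kappa(ratings_1, ratings_2):
    # Chance-corrected agreement between two raters (Cohen's kappa):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    # and p_e is the agreement expected if both raters labeled at random
    # according to their own marginal rating frequencies.
    n = len(ratings_1)
    p_o = percent_agreement(ratings_1, ratings_2)
    counts_1, counts_2 = Counter(ratings_1), Counter(ratings_2)
    categories = set(ratings_1) | set(ratings_2)
    p_e = sum(counts_1[c] * counts_2[c] for c in categories) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: two raters classifying eight responses.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]

print(percent_agreement(rater_a, rater_b))  # 0.75
print(cohens_kappa(rater_a, rater_b))       # 0.5
```

Percent agreement alone can be misleading when one category dominates, which is why chance-corrected statistics such as kappa are usually preferred in practice.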

