Inter-Rater Reliability

Inter-rater reliability, sometimes referred to as interobserver reliability, is the degree to which different raters or judges make consistent estimates of the same phenomenon. For example, medical diagnoses often require a second or third opinion, and competitions, such as the judging of art or a figure skating performance, are based on the ratings provided by two or more raters. Researchers might have raters assign scores for the degree of pathology in an individual or the type of verbal response in a study examining communication. In psychometrics and statistics, reliability is the overall trustworthiness of a measure; common terms used to describe it include consistency, repeatability, dependability, and generalizability. High reliability is achieved if similar results are produced under consistent conditions.
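One common way to quantify inter-rater reliability for categorical ratings is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. The sketch below is an illustration only, using hypothetical data from two raters coding verbal responses; it is not drawn from this entry.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters code 10 utterances as
# "q" (question) or "s" (statement).
ratings_a = ["q", "q", "s", "s", "q", "s", "q", "q", "s", "q"]
ratings_b = ["q", "q", "s", "q", "q", "s", "q", "s", "s", "q"]
print(round(cohens_kappa(ratings_a, ratings_b), 2))  # 0.58
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and values in between are conventionally read as slight to substantial agreement.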
