Interrater Reliability
Interrater reliability refers to the consistency of judgments made about the same stimulus by two or more raters. In survey research, interrater reliability relates to observations that in-person interviewers make when they gather observational data about a respondent, a household, or a neighborhood to supplement the data gathered via a questionnaire. It also applies to judgments an interviewer makes about the respondent after the interview is completed, such as recording on a 0-to-10 scale how interested the respondent appeared to be in the survey. Interrater reliability likewise applies whenever a researcher has interviewers complete a refusal report form immediately after a refusal takes ...
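Although the entry does not name a specific statistic, interrater reliability between two raters assigning categorical judgments is commonly quantified with a chance-corrected agreement measure such as Cohen's kappa. The sketch below is illustrative only; the category labels and ratings are hypothetical data, not from the source.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the agreement expected if each rater assigned categories at random
    according to their own marginal frequencies.
    """
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two interviewers independently judge the same
# eight respondents' apparent interest, collapsed into three categories.
a = ["low", "high", "medium", "high", "low", "medium", "high", "low"]
b = ["low", "high", "medium", "medium", "low", "medium", "high", "high"]
print(round(cohens_kappa(a, b), 3))  # prints 0.628
```

A kappa near 1 indicates near-perfect agreement beyond chance, while a value near 0 indicates agreement no better than chance; simple percent agreement (here 6 of 8, or 0.75) overstates reliability because it ignores chance agreement.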