Cohen’s kappa (κ) is a classic statistic for measuring the level of agreement between two raters. This entry discusses measuring intercoder reliability using κ and presents two approaches for characterizing κ.
Suppose a researcher is investigating the extent to which a particular news company produces reports favoring a certain presidential candidate. The researcher would first define a sampling frame (e.g., news aired or published within a certain time period) and then randomly select a predetermined number of news articles for analysis. Finally, the researcher would create a coding protocol whereby each unit of analysis (e.g., word, sentence, paragraph, whole article) can be judged as to whether it contains elements conveying favorable attitudes toward the candidate.
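Once two coders have applied such a protocol independently, κ compares their observed agreement with the agreement expected by chance from each coder's marginal category frequencies: κ = (pₒ − pₑ)/(1 − pₑ). A minimal sketch of this calculation, using hypothetical codes (1 = favorable, 0 = not favorable) invented here for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' category labels of the same units."""
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    # Observed agreement: proportion of units both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's marginal distribution.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical judgments of 10 paragraphs by two coders.
coder1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder2 = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(round(cohens_kappa(coder1, coder2), 4))  # observed 0.80, chance 0.52 → 0.5833
```

Note that two coders who agree on 80% of units can still yield a κ well below .80, because κ discounts the agreement that would arise by chance alone.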