Content analysis is one of the most important but complex research methodologies in the social sciences. In this thoroughly updated Second Edition of The Content Analysis Guidebook, author Kimberly Neuendorf draws on examples from across numerous disciplines to clarify the complicated aspects of content analysis through step-by-step instruction and practical advice. Throughout the book, the author also describes a wide range of innovative content analysis projects from both academia and commercial research that provide readers with a deeper understanding of the research process and its many real-world applications.
Reliability can be defined as the extent to which a measuring procedure yields the same results on repeated trials (Carmines & Zeller, 1979). When human coders are used in content analysis, this typically translates to intercoder reliability, or the amount of agreement or correspondence on a measured variable among two or more coders or raters. Two other types of coder reliability, less studied and less frequently applied, are intracoder reliability, which considers the stability of a given coder’s measurements over time, and (intercoder) unitizing reliability, which assesses whether coders can agree on the delineation of units of data collection when that is part of the coding protocol.
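The idea of agreement among coders can be illustrated with simple percent agreement, the most basic index of intercoder reliability (more robust coefficients that correct for chance agreement are discussed later). A minimal sketch, using hypothetical coder labels:

```python
# Minimal sketch: simple percent agreement between two coders.
# The category labels below are hypothetical illustration data.
def percent_agreement(coder_a, coder_b):
    """Proportion of units on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same number of units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders' judgments on the same five units of analysis
coder_a = ["pos", "neg", "pos", "neu", "pos"]
coder_b = ["pos", "neg", "neu", "neu", "pos"]

print(percent_agreement(coder_a, coder_b))  # agree on 4 of 5 units -> 0.8
```

Note that percent agreement does not account for agreement expected by chance, which is why chance-corrected coefficients are generally preferred for reporting.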
Reliability is sometimes viewed as related to replicability. A unique perspective on this is given by Rourke ...