
Reliability refers to the consistency and stability of research results and is one of two foundational elements (the other being validity) in conducting rigorous research. Reliability assesses the extent to which the results and conclusions drawn from a case study would be reproduced if the research were conducted again. Reliability in case study research is normally addressed through three techniques: (1) interrater reliability, (2) triangulation, and (3) an audit trail.

Conceptual Overview and Discussion

The concept of reliability is associated with positivist research and addresses the reproducibility of results. By contrast, validity assesses the accuracy of results. The goal of reliability is to minimize bias and error in the collection and analysis of data to the point that the same results and conclusions would be reached if the research were conducted again.

A common example of reliability is the task of weighing oneself on a bathroom scale. If repeated attempts indicate the same weight, the scale can be said to be reliable. Note that a reliable scale is not necessarily an accurate one: Even though the scale gives a consistent measure, it may indicate a weight that is consistently higher or lower than your actual weight. Thus, reliability can exist without validity, but not vice versa. Put another way, reliability is a necessary but not sufficient condition for validity.
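The scale analogy can be made concrete with a short numeric sketch (the readings and true weight below are invented for illustration): a small spread across repeated measurements indicates reliability, while a systematic offset from the true value indicates a lack of validity.

```python
# A consistently biased bathroom scale: five repeated weighings of a
# person whose true weight is 70.0 kg (all numbers are illustrative).
true_weight = 70.0
readings = [72.1, 71.9, 72.0, 72.2, 71.8]

# Reliability: the readings cluster tightly together (small spread).
spread = max(readings) - min(readings)

# Validity: despite being consistent, the readings sit about 2 kg
# above the true weight, so the scale is reliable but not valid.
bias = sum(readings) / len(readings) - true_weight

print(f"spread={spread:.1f} kg, bias={bias:.1f} kg")
```

The tight spread shows the measurements would be reproduced on repetition, while the constant offset shows why reliability alone does not guarantee accuracy.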

Consistency and stability are two dimensions of reliability. Consistency refers to the degree to which the results can be independently re-created within an acceptable margin of error and is a form of measurement error. Consistency can be thought of as the level of variability in the method or instrument of measurement. Stability refers to the degree to which the results can be replicated independently at a later point in time and is similar to the replication of an experiment; if the same case were to be re-examined at a later point in time, would the results be the same?

As the use of case studies has gained acceptance within the positivist community, concepts of rigor such as reliability have been increasingly applied to the methodology. However, the importance of reliability in case studies depends to some extent on the researcher's epistemological perspective. Researchers who adhere to a social constructivist or interpretive research philosophy may see case studies as a way to examine a phenomenon embedded within a unique situation at a certain point in time. They may therefore conclude that evaluating reliability is inappropriate, because the research cannot be reproduced.

Application

Reliability in case study research can be assessed by applying three commonly used techniques to address the dimensions of consistency and stability: (1) interrater reliability, (2) triangulation, and (3) an audit trail. These techniques are discussed next in the larger context of consistency and stability.

Consistency

There are two components to consistency: equivalency and internal consistency.

Equivalency

Equivalency is concerned with consistency of observation at a point in time. Case study research is susceptible to error in observation, particularly when a single researcher both performs the observation and analyzes the data. In case study research the researcher can be viewed as part of the measurement process. Just as a physical instrument may introduce measurement error, so too can an individual when observing or when coding and categorizing qualitative data, introducing bias that affects reliability. Addressing equivalency requires that steps be taken to minimize the measurement bias of the researcher.
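One common way to check equivalency is to have two researchers code the same material independently and quantify their agreement. The entry does not name a specific statistic, but interrater agreement is often measured with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal Python sketch (the interview excerpts and theme labels below are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters assigned the same code.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater coded independently at their
    # own marginal rates for each category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two researchers independently assign themes to ten interview excerpts.
codes_a = ["theme1", "theme2", "theme1", "theme3", "theme1",
           "theme2", "theme2", "theme1", "theme3", "theme1"]
codes_b = ["theme1", "theme2", "theme1", "theme1", "theme1",
           "theme2", "theme3", "theme1", "theme3", "theme1"]
print(round(cohens_kappa(codes_a, codes_b), 3))
```

A kappa near 1 indicates the coding scheme is being applied consistently across researchers; a low kappa signals that the codebook or coder training needs refinement before the analysis can be considered reliable.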

...
