Validity refers to the extent to which a concept is actually represented by its indicators. Often confused with reliability, which refers to the consistency of measures, validity extends beyond measurement and the quantitative assessment of particular research questions. Because case studies may comprise quantitative and/or qualitative data and approaches, validity is, at bottom, an issue of research quality. This has implications both for how validity is assessed and for whether such assessment is applicable at all.

Conceptual Overview and Discussion

How is one able to trust the conclusions of case study research and judge the extent to which the findings actually speak to the concepts with which the research and case (or cases) are concerned? This discussion of validity is at the heart of these vexing questions. However, there is no easy, formulaic approach to assessing validity in case studies. This is because there are both a wide range of types of validity as well as a preponderance of different types of data and approaches to case studies. If one believes that it is advantageous to have a broad range of perspectives and approaches within the social sciences, then one must contend with the fact that an equally appreciative and nuanced understanding of validity as it applies to many different research contexts is required.

Often, validity is mistakenly assumed to be concerned only with issues of measurement. This reflects a presupposition of both quantitative data as well as philosophic/ontological presumptions of an independently existing social reality. This narrow appreciation of the underlying quality concerns regarding research offers a quandary whereby we desire assurances about the quality of the research but lack a programmatic checklist of characteristics to assess. Indeed, although fundamental concepts of trustworthiness are found across research traditions, the particular ways of assessing such legitimacy are deeply and historically rooted in each tradition. This being the case, this entry examines a variety of types of validity, but without a prescriptive formula for any general application to case studies. Through the use of some examples, the implications of different types of validity are illustrated.

Face Validity

Face validity is concerned with how well the study, case, measurement, or data acquisition tool (e.g., a survey or interview transcripts) represents an intuitive and commonsense understanding of a phenomenon. In essence, it concerns a sense that individuals would reasonably find the applicability of the data or method to the research problem credible. This issue of credibility is therefore broadly applicable to case studies of all types.

Ecological Validity

This type of validity is fundamentally concerned with whether the findings of the researchers' inquiries actually bear any resemblance to the lived experience of those whom the researchers are studying. The researchers might imagine a situation whereby they create a nuanced version of what they believe to be the central concerns and trials of individuals in a particular case, only to find that their interpretation or analyses are essentially unrecognizable to the people they have studied. Although there is a compelling argument to be made that the results of social science do not need to be recognized by the research subjects, it is also the case that studies without some evidence of ecological validity beg the “So what?” question. Thus, ecological validity interrogates the extent to which research may be rigorous and yet may not be applicable or relevant to the actual experience of those within the case study.

Predictive and Concurrent Validity

As one might expect, predictive validity is focused on the future. The central concern with predictive validity is how well one might expect to be able to consistently and accurately predict the future on the basis of the present. This naturally requires the passage of time if one is to link the two. Concurrent validity, by contrast, refers to a situation in which both criterion and predictor are measured at the same time. Although the two types of validity differ in the temporal relationship between predictor and criterion, Robert Guion identifies that both place an emphasis upon outcomes. Because of the difficulties in achieving predictive validity, concurrent validity is sometimes seen as a reasonable substitute.

In quantitative methodologies, achieving predictive validity requires specific consideration of sampling frames with a particular focus upon randomized selection of participants. This highlights the idea that, in order for findings from one study to be applied to other contexts, let alone future outcomes, the generalizability of the study needs to be critically assessed. In qualitative case studies the idea of predictive validity would relate to how well one might extrapolate the findings to future actions and outcomes. Is the past truly the best predictor of the future? How do we know? These questions must be addressed if a qualitative case study makes predictive inferences.

In general, we may accept claims of predictive validity only as long as no significant changes occur in the context in which the validity has earlier been established; that is, changes that substantially affect the context in which the specific relationship between predictor and criterion is embedded potentially affect one's ability to generalize the established predictive validity to the new situation. Thus, predictive validity is likely to be quite hard to establish. Nevertheless, in research that is intended to inform practice or policy, predictive validity is quite desirable. For example, we might wish to use past job-applicant interview results in a particular organization to predict how effective future successful job applicants would be at their jobs. Theoretically, this would require a random selection of individuals and a large enough sample size to uncover the relationship between the interviewing and the outcomes. In this example, there is also the serious problem of knowing the job performance only of successful applicants (i.e., we might reasonably ask whether unsuccessful applicants could also perform well at the job). For reasons such as those offered in this example, concurrent validity is often substituted for predictive validity in such cases, given the likely constraints faced in terms of resources. In this example, we might accept that our sample will be constrained to only successful job applicants and how well they perform their duties after hiring.
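The interviewing example can be made concrete with a small simulation. The sketch below uses entirely made-up numbers (the 0.6 predictor weight, the hiring cutoff, and the sample size are illustrative assumptions, not findings from any study) to show why observing only successful applicants is a serious problem: restricting the sample to the hired group attenuates the observed predictor-criterion correlation.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

random.seed(42)

# Simulate 1,000 applicants: an interview score (predictor) and a
# later job-performance score (criterion) that is partly driven by
# the interview score plus noise.
interview = [random.gauss(0, 1) for _ in range(1000)]
performance = [0.6 * s + random.gauss(0, 0.8) for s in interview]

full_r = pearson(interview, performance)

# Range restriction: performance is only ever observed for "hired"
# applicants -- say, those in the top quarter of interview scores.
cutoff = sorted(interview)[750]
hired = [(s, p) for s, p in zip(interview, performance) if s >= cutoff]
hired_r = pearson([s for s, _ in hired], [p for _, p in hired])

print(f"predictor-criterion r, all applicants: {full_r:.2f}")
print(f"predictor-criterion r, hired only:     {hired_r:.2f}")
```

The correlation computed on the hired-only sample understates the relationship that holds across all applicants, which is exactly the constraint a researcher accepts when substituting concurrent for predictive validity in this setting.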

Predictive validity may be both prized and difficult to obtain in real-world situations. Practical constraints often cause researchers to substitute concurrent validity and use it in ways that are not necessarily conceptually sound when predicting the future. For these reasons, the debate concerning how well concurrent validity overlaps with predictive validity is both salient and ongoing.

Measurement (Construct) Validity

This aspect of validity is largely concerned with how well researchers have succeeded in actually measuring the particular concept or phenomenon they purport to be investigating. Thus, this type of validity is keenly focused upon quantitative methodologies. When measuring something, the reliability of the measure researchers use is critical. Reliability is a central aspect of measurement (construct) validity, because if the measurements themselves are not consistent and stable across time and contexts then researchers are unable to be confident that they are actually measuring the concept they are studying. As an analogy, if researchers measure the temperature of a room with a thermometer that is not consistently accurate within a specific temperature range, how are they to know whether the reading is not error rather than the actual temperature of the room? In short, they need to be assured that the measurements are reliable and that the relationships they uncover are legitimate (see the next two sections of this entry, which address internal and external validity), hence the often-repeated advice that reliability is a necessary but insufficient condition of measurement (construct) validity.

Does construct validity matter when one is conducting qualitative case studies? Yvonna Lincoln and Egon Guba suggest that in this situation the idea of dependability closely approximates reliability, and they describe dependability as being concerned with how findings of the study are applicable at other times.

Internal Validity

When researchers speak of internal validity they are concerning themselves with the concluded causal relationship between variables. Although this terminology is typically used in quantitative methodologies, the concept can be extrapolated to some qualitative studies as well. Internal validity is an issue of how well the particular relationships described in the research actually can be ascertained to be the primary dynamic at play, rather than an artifact of some other process. For example, are we able to conclude that an employee's job satisfaction is the primary reason that he stays in a particular job, or rather is it the poor alternative employment opportunities available to him during the time of the study?

External Validity (Generalizability)

External validity concerns are related to the idea of generalizability: the ability to take the findings from one study and apply the same relationships and conclusions to other populations and contexts. Quantitative studies attempt to ensure generalizability through the use of representative sampling. Qualitative case studies, on the other hand, often suggest that their very strength is in not achieving significant possibilities of generalizability; that is, the specific context of each situation requires nuanced investigation, and generalizability is unattainable if this context specificity is to be maintained. The very nature of a case study implies some sort of restriction of sample, be it by context, time, or population characteristics. The idea of statistical generalizability does not seem to be appropriately applied to most qualitative case studies. Nevertheless, the concept of external validity is appropriately applied to such cases. For example, Robert K. Yin suggests that case studies can be viewed as generalizing to theory and not to populations. Thus, if a series of findings in a case can be understood in terms of the existing theory or literature, that constitutes a type of external validity. In a similar way, grounded theorists, such as Barney Glaser, who use a particular inductive method to create theory, could argue that their data generalize to theory to the extent that they uncover the very theory to which the data generalize! This discussion highlights the fact that often, one type (or one interpretation) of validity is counterpoised with another type, resulting in the need to balance one's research to intelligently maximize the type of validity seen as most pressing to the researchers.
Thomas Cook and Donald Campbell, although focused on quasi-experimental research design, offer one of the most thorough examinations of competing and overlapping threats to validity, and this examination should form part of any broader study of validity.

Convergent (Divergent) Validity

The degree to which a present criterion or predictor is related logically and empirically to similar (convergent) or dissimilar (divergent) constructs provides improved evidence that the relationship researchers may find is theoretically defensible. The choice of other constructs that may be hypothesized to be convergent or divergent in nature with respect to what the researchers are studying is largely driven by the extant literature and/or theorizing. Thus, in an effort to bolster the arguments for the relationships researchers may uncover in case study research, they can usefully consider how well these findings agree or disagree with other aspects of the case they are examining. They might, for example, expect that people who engage in high levels of positive organizational citizenship behaviors are likely to have higher levels of organizational commitment and lower levels of reported deviant or counterproductive workplace behaviors.
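The organizational citizenship example lends itself to a brief sketch. The simulated data below are hypothetical (the effect sizes and variable names are assumptions for illustration): citizenship behavior is built to correlate positively with commitment (the convergent construct) and negatively with reported workplace deviance (the divergent construct).

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

random.seed(3)

# Hypothetical scores for 500 employees: organizational citizenship
# behavior (OCB), organizational commitment (expected convergent),
# and workplace deviance (expected divergent).
ocb = [random.gauss(0, 1) for _ in range(500)]
commitment = [0.7 * x + random.gauss(0, 0.7) for x in ocb]
deviance = [-0.5 * x + random.gauss(0, 0.9) for x in ocb]

conv = pearson(ocb, commitment)
div = pearson(ocb, deviance)

print(f"OCB vs. commitment (convergent): {conv:+.2f}")
print(f"OCB vs. deviance (divergent):    {div:+.2f}")
```

Finding the theoretically expected pattern of signs, a positive correlation with the similar construct and a negative one with the dissimilar construct, is what bolsters the argument that the focal relationship is theoretically defensible.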

The Validity of Validity?

As evidenced in the preceding discussions, validity is complex in that the application of a particular type of validity may vary across qualitative and quantitative studies. Likewise, the nature of many case studies would be to call into question issues of some types of validity. There is, however, the more difficult situation posed by some research that questions the very applicability of validity as a “quality of research” issue.

It is particularly difficult to broach the discussion about validity issues and qualitative research that does not adhere to a positivist worldview. In these situations, questions of quality remain to be addressed, but perhaps not quite as didactically as when statistical and measurement evaluations are applied to quantitative research. Although some scholars do argue for a wholesale appropriation of validity concepts from the natural sciences, others argue that such standards of assessment are wholly inappropriate for some research traditions. Even the terminology for particular aspects of what one might term validity may differ. For example, Lincoln and Guba offer the concepts of credibility, transferability, dependability, and confirmability for qualitative research, which could be seen as loosely paralleling issues of internal validity, external validity, reliability, and objectivity, respectively. Still others argue for the idea of relevance, which could be seen as related to the particular contribution offered through a piece of research, although this could be seen as an endorsement of using instrumentality of research as a quality assessment tool.

More important, there is more than just semantics at stake when one considers the potential problems found in applying ideas of validity across the different domains and traditions of research. For example, the idea that one can generalize the findings from a select group (i.e., a sample) to a larger population seems benign to the quantitative researcher who uses this premise often. To a feminist researcher, this approach might be regarded as problematic, because it does not address individuals as individuals; neither does it seem to address the different effects of the social context upon such individuals. So, although case studies are often seen as providing limited generalizability, particularly from the statistical perspective, many scholars argue that the rich contextualization offered in qualitative case studies contributes to a greater ecological validity while at the same time not claiming to be statistically generalizable. Likewise, postpositive perspectives that argue for a multivocal representation of research topics (e.g., postmodernism or postcolonialism) could well find offensive the ideas of consistency found in discussions of validity. Critical theorists and sociology-of-knowledge specialists would likely question how it has come to pass that ideas of validity originating in positivist research have migrated to other research domains (and query what this tells us about social power). Clearly, the researcher's perspective shapes rather dramatically how he or she will approach the question and application of validity.

As with all research, case studies are a balancing act in terms of their strengths and their potential liabilities from a research design perspective. Consideration of validity in all its forms is a useful exercise regardless of the particular philosophic or traditional biases researchers may have. Through careful and open-minded examination of the relative strengths and weaknesses of a particular case and data, analytic perspective researchers are able to design and evaluate research with a view toward quality while appreciating the opportunities that research diversity offers them.

Critical Summary

This entry has explored the idea of validity in case studies. Validity is largely concerned with whether the claims, implications, and conclusions found in a piece of research can be justifiably made. In this respect, validity relates to legitimacy, quality control, and, to some extent, trustworthiness. With the enormous breadth of research designs, data, and analytic techniques found in the diverse domain of case study research, validity is difficult to concisely define, let alone prescriptively assess. This, however, does not render the consideration of validity unimportant. Perhaps most effective is to keep in mind both the broad discussion contained in this entry as well as the commonly held quality control standards applicable to any given research tradition. In this manner, one may address concerns about validity within the legitimate boundaries of any given disciplinary practice while simultaneously preventing the inappropriate invocation of nonapplicable standards to a particular piece of research.

Anthony R. Yue

Further Readings

Bryman, A., & Bell, E. (2007). Business research methods (2nd ed.). Oxford, UK: Oxford University Press.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis for field settings. Boston: Houghton Mifflin.
Glaser, B. G., & Strauss, A. L. (2006). The discovery of grounded theory: Strategies for qualitative research. London: Aldine Transaction. (Original work published 1967)
Guion, R. M. (1965). Personnel testing. New York: McGraw-Hill.
Lincoln, Y. S., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Yin, R. K. (2009). Case study research: Design and methods (4th ed.). Thousand Oaks, CA: Sage.