
Metaevaluation is the evaluation of evaluations (and of evaluators). It must not be confused with meta-analysis, which is a particular and relatively recent development in research integration; namely, the synthesis of multiple research studies of the same phenomenon, studies that may or may not be evaluative, into an overall conclusion. Meta-analyses in the social sciences are typically syntheses of merely empirical studies, not evaluative studies. The literature review that is often part of an evaluation project will often conclude with a meta-analysis that has an evaluative conclusion because it is integrating a number of evaluative studies; however, that is not, as such, a metaevaluation but merely a synthesis of evaluations. The key element in metaevaluation (here, MEV) is that it evaluates the evaluations to which it refers; it does not merely summarize them. Of course, a review of prior evaluations might also be evaluative about them as well as about the evaluands to which they refer, in which case it would be both a meta-analysis and a MEV.

The importance of MEV arises particularly from two implications of its present incarnation. On the one hand, it is ethically and symbolically crucial because it shows that evaluation is a reflexive (self-referent) subject and hence that the evaluator is not above being evaluated—what's sauce for the goose is sauce for the gander. This is often reassuring to those being evaluated, but it is not a mere therapeutic gesture: The metaevaluation should be incorporated into serious evaluations because it shows a commitment to self-improvement, perhaps even a touch of humility, nearly always well justified in a youthful discipline such as evaluation. This feature is reminiscent of the requirement in psychoanalytic certification that analysts themselves be psychoanalyzed, a requirement that shows that psychoanalysis is also a self-referent discipline. Although one might argue that the general professional imperative is to ensure that one is regularly evaluated by someone else with the required skills, for evaluators this means there is an obligation to ensure that one's work is, at least from time to time, subject to MEV.

The second key point about MEV stems from the fact that, loosely speaking, MEV is a reflection of the general scientific commitment to independent confirmation of one's conclusions. It has by now been developed to a level where it can provide a very sophisticated check on the validity of the evaluation under examination. There are a number of significantly different ways to do MEV, and doing more than one is rarely redundant. The simplest consists of replication of the original study and its methodology followed by critical comparison of the results from the two efforts. This is the straight confirmation approach to MEV. An approach that is somewhat more powerful involves using a different methodology to evaluate the same evaluand; this reflects part of what is meant by triangulation in the usual discussions of scientific methodology. Both those approaches involve a cost that is likely to be comparable to the cost of the original study. There are other approaches that, although much less expensive, are still powerful and often fruitful: They focus on design critique. The usual genesis of these is from some standard set of requirements that have been proposed as necessary elements in a good evaluation design. One then reviews an evaluation by comparing it to this list. The most common of these approaches to MEV involves applying the Program Evaluation Standards to the evaluation. A detailed guide to using this approach for MEV has been developed by Dan Stufflebeam and is available on the checklist Web site (http://www.wmich.edu/evalctr/checklists/checklistmenu.htm). Another approach uses Scriven's Key Evaluation Checklist as the template for judging an evaluation: This checklist can be found at the same site. A special use of this kind of MEV is its use by the original evaluator as a way to review his or her own work in the draft stage. 
Obviously, evaluators cannot always do a full-scale confirmation or triangulation study within the available time and resources, but they can compare the evaluation against one of these general-purpose checklists. If the checklist used in the MEV is not the one used in the original design, the evaluator gains a degree of triangulation from this procedure. The General Accounting Office has also developed its own checklist for doing MEVs.
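The checklist-based design critique described above can be sketched in code: record which checklist items a draft evaluation design addresses, then report the gaps. This is only a minimal illustration of the comparison procedure; the checklist items and the example design notes below are hypothetical placeholders, not the actual text of the Program Evaluation Standards or the Key Evaluation Checklist.

```python
# Minimal sketch of a checklist-based MEV (design critique).
# CHECKLIST items are illustrative placeholders, not the wording of any
# published standard.

CHECKLIST = [
    "stakeholders identified",
    "criteria of merit stated",
    "methodology matched to questions",
    "costs considered",
    "comparisons with alternatives",
    "conclusions justified by evidence",
]

def metaevaluate(design_notes):
    """Compare an evaluation design against the checklist.

    design_notes maps checklist items to a short note on how the
    design addresses them; items absent or empty count as missing.
    Returns a report with 'addressed' and 'missing' item lists,
    preserving checklist order.
    """
    addressed = [item for item in CHECKLIST if design_notes.get(item)]
    missing = [item for item in CHECKLIST if not design_notes.get(item)]
    return {"addressed": addressed, "missing": missing}

# Hypothetical draft design that omits cost analysis and comparisons.
draft = {
    "stakeholders identified": "teachers, students, district office",
    "criteria of merit stated": "learning gains, equity of access",
    "methodology matched to questions": "pre/post comparison, matched sites",
    "conclusions justified by evidence": "claims tied to outcome data",
}

report = metaevaluate(draft)
print("Missing:", report["missing"])
```

Run against a draft design, the report flags the unmet items ("costs considered" and "comparisons with alternatives" in this example), which is the inexpensive self-review use of MEV the entry mentions: the original evaluator checking his or her own work at the draft stage.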

...
