
Evaluability assessment was thrust onto the evaluation scene in the 1970s and was initially thought to show great promise for improving programs and for saving evaluation resources that would otherwise be wasted on programs not ready to be evaluated. After a short burst of interest and activity, the process appears to have lost much of its appeal among evaluators. This entry provides a definition of evaluability assessment and offers some conjectures as to why a tool with such demonstrated promise seems to have all but disappeared from the practice of evaluation, at least as that practice is described in the published literature.

EVALUABILITY ASSESSMENT: A DEFINITION

Evaluability assessment (EA) is a systematic process for describing the structure of a program and for analyzing the plausibility and feasibility of achieving its objectives, the suitability of those objectives for in-depth evaluation, and their acceptability to program managers, policy makers, and program operators. This is accomplished through the following process:

  • Program intent is clarified from the points of view of key actors in and around the program.
  • Program reality is explored to clarify the plausibility of program objectives and the feasibility of program performance.
  • Opportunities to improve program performance are identified.

Two primary outcomes are expected from an EA:

  1. Definition of a program's theory. This includes the underlying logic (cause-and-effect relationships) and functional aspects (activities and resources), with indications of the types of evidence (performance indicators) for determining when planned activities are implemented and when intended and unintended outcomes are achieved.
  2. Identification of stakeholder awareness of and interest in a program. This means stakeholders' perceptions of what a program is meant to accomplish, their concerns or worries about a program's progress toward goal attainment, their perceptions of the adequacy of program resources, and their interests in or needs for evaluative information on a program.

When an impact evaluation is anticipated, both of these outcomes should be attained before the evaluation is designed. When a program is being planned, or when improvement is the intent, only Outcome 1 may be pursued: Having a defined program framework increases the likelihood that program staff will manage their programs to achieve intended impacts, whether or not those impacts are to be measured. When EA serves as a preparatory step to further evaluation, these outcomes indicate clearly whether an intensive evaluation is warranted and, if so, which components or activities of the program can provide the most useful data. In essence, they prevent evaluators from committing two types of error: Type III, measuring something that does not exist, and Type IV, measuring something that is of no interest to management or policy makers (Scanlon, Horst, Nay, Schmidt, & Waller, 1979).

A Type III error occurs when the program has not been implemented, when the program is not implemented as intended, or when there is no testable relationship between the program activities carried out and the program objectives being measured. A Type IV error occurs when the evaluator brings back information that policy makers and management have no need for or cannot act on. Both types of error are avoidable if an evaluability assessment is conducted: Type III errors may be avoided by defining the program and describing the extent of its implementation; Type IV errors, by determining from the stakeholders what they consider important about the program and the evaluation.

...
