
Responsive evaluation is an orientation, a predisposition, to the formal evaluation of education and social service programs. It is a disposition favoring personal experience. It draws on and disciplines the ordinary ways people perceive quality and worth. More than most other formal approaches, it draws attention to program activity, to program uniqueness, and to the cultural plurality of people. This responsive predisposition can be recognized in all evaluative research, for every composite and executive summary will include at least traces of experiences of goodness and badness.

A responsive evaluation study is a search for and documentation of program quality. The essential intellectual process is responsiveness to key issues or problems, especially those recognized by people at the sites. It is not particularly responsive to program theory or stated goals but more to stakeholder concerns. The design of the study usually develops slowly, with continuing adaptation of design and data gathering paced with the evaluators' growing acquaintance with program activity, stakeholder aspirations, and social and political contexts.

Issues of policy and practice are often taken as the “conceptual organizers” for the inquiry, more so than needs, objectives, hypotheses, group comparisons, and economic equations. Issues are organizational perplexities and social problems, drawing attention especially to unanticipated responsibilities and side effects of program efforts. The term issue draws thinking toward the interactivity, particularity, and subjective valuing already felt by persons associated with the program. (Examples of issue questions: Are the eligibility criteria appropriate? Do these simulation exercises confuse the students about dependable sources of information?) Stakeholders have a variety of concerns. They are proud, protective, indignant, improvement minded, troubled by costs. Responsive evaluators inquire, negotiate, and select a few concerns around which to organize the study.

These evaluators look for attainments and for troubles and coping behavior as well. To become acquainted with a program's issues, evaluators observe its activities, interview those who have some stake in the program, and examine relevant documents. These are not necessarily the data-gathering methods for informing the interpretation of program quality but are needed for the initial planning and evolving focus of the study. Management of the responsive evaluation study usually remains flexible; both quantitative and qualitative data are gathered.

Observations and Judgments

Directed toward discovery of merit and shortcoming in the program, responsive evaluation recognizes multiple sources of valuing as well as multiple grounds. It is respectful of multiple, sometimes even contradictory, standards held by different individuals and groups and is reluctant to push for consensus.

Ultimately, the evaluators describe the program's activity, examine its issues, and make summary interpretations of worth, but first they observe and inquire. They exchange draft descriptions and tentative interpretations with data providers, surrogate readers, and other evaluation specialists to tease out misunderstanding and misrepresentation. In their reports, they provide ample description of activities over time and personal viewing so that, with the reservations and best judgments of the evaluators, the report readers can make up their own minds about program quality.

There is a common misunderstanding that responsive evaluation features collaborative methods. It sometimes fits well with them, but the two are not the same. With help from program staff members, evaluation sponsors, and others, the evaluator considers alternative issues and methods. Often clients want strong emphasis on outcomes, and often evaluators press for more attention to processes. They negotiate. Ultimately, though, the evaluation team usually decides, directly or indirectly, what the study will be, because it is the team that knows most about what different methods can accomplish and what methods its evaluators can do well, and it is this team that will carry them out. Preliminary emphasis, especially for external evaluators, is often on becoming acquainted not only with the program's activity but also with its history and social context. The program's philosophy may be phenomenological, participatory, instrumental, in pursuit of accountability, anything. Method depends partly on the situation. For it to be a good evaluation, the methods should fit the "here and now" and have potential for serving the evaluation needs of the various parties concerned.

...
