
Historically, evaluation reporting has consisted primarily of comprehensive written reports prepared by the evaluator and delivered as one-way communication to evaluation audiences. This approach follows the traditions of social science research reporting, where objectivity, replicability, and comprehensiveness are standard criteria of quality. The primary burden of the evaluation's utility is placed on the content of the report and its use by evaluation clients and audiences. Indeed, early complaints about evaluations focused on their lack of use, citing in particular that reports went unread and findings were not considered. To improve use, evaluators have more recently begun to employ varied and interactive forms of reporting, often (but not always) in addition to traditional, comprehensive written reports. The defining characteristics of evaluation reporting are its purposes, audiences, content, and format.

Purposes

Evaluation reporting typically serves one or both of two main purposes, generally following the original purpose of the evaluation itself. First, reports can help improve a program: formative reports are intended to inform decision making by program staff, management, and others about changes that will increase the program's effectiveness. Second, summative reports demonstrate a program's effectiveness (its merit or worth) and serve as a means of accountability to funders, other primary stakeholders, the public, and others interested in the type of program evaluated. The primary objective of any report is that audience members understand the information being presented. Understanding is a prerequisite for any type of evaluation use, whether that use is conceptual (to further understanding) or instrumental (to stimulate action).

Audiences

Audiences for evaluation reports are often numerous and varied. Generally, audiences are considered to be in one of three categories. Primary audiences usually request the evaluation, are the major decision makers associated with the program being evaluated, or have funded the program. Sometimes, these groups include the program's staff, supervisors, managers, or external constituents.

Secondary audiences are involved with the program in some way. They may have little or no daily contact with the program but may have a strong interest in the results of the evaluation. Secondary audiences may use the evaluation findings in some aspects of their work and decision-making processes. These groups might include some program participants, their supervisors or managers, and individuals whose work will be affected by decisions based on the evaluation results.

Tertiary audiences are yet more distanced from the inner workings of the program but may want to stay informed about the program and would be interested in receiving the evaluation's results. These groups might include future program participants, audiences who represent the general public interest, and members of the program's profession (e.g., educators interested in professional practices related to school reform).

Identifying audiences and understanding their characteristics are critical for ensuring that evaluation reports are used. Audiences vary across a number of characteristics that mediate the type of report that will be most meaningful for them: accessibility, education level, familiarity with the program, familiarity with research and evaluation, role in decision making, and attitude toward or interest level in the program or the evaluation. Where an audience stands on any one of these dimensions plays some role in determining the report's content and/or delivery method. For this reason, evaluators have long been advised to produce as many types of reports (with varied content and using different formats) from an evaluation as are needed to meet audience needs.

For example, program officers of a funding agency or other busy executives may be highly interested in an evaluation's findings but feel they do not have time to read a lengthy report or meet to discuss it. A single-page summary report, which they can read quickly and at the time that best suits them, is probably the best format choice. Audience access can vary in other ways as well. For instance, an evaluation of a development project in a developing country could involve numerous agencies from different countries. A report formatted to be viewed on a Web site may be accessible to all parties, whereas a large document that must be downloaded from the site might not be.

...
