Summary
The Second Edition of Building Evaluation Capacity provides 89 highly structured activities that require minimal instructor preparation and encourage application-based learning of how to design and conduct evaluation studies. Ideal for use in program evaluation courses, professional development workshops, and organization stakeholder trainings, the activities cover the entire evaluation process, including understanding what evaluation is; the politics and ethics of evaluation; the influence of culture; various models, approaches, and designs; data collection and analysis methods; communicating and reporting progress and findings; and building and sustaining support. Each activity includes an overview, instructional objectives, minimum and maximum numbers of participants, the range of time required, materials needed, the primary instructional method, and procedures facilitators can use to guide learners through the most common evaluation practices.
Issues of Validity and Sampling
Background
This section includes activities that address
- Understanding quantitative and qualitative definitions of reliability and validity
- Understanding various approaches and issues related to sampling
The following information is provided as a brief introduction to the topics covered in these activities.
Reliability and Validity
Reliability, or precision, refers to consistency across different forms or different replications of the testing process; it indicates the amount of measurement error present in the instrument or the data. If the data (or the instrument) are unreliable, the scores contain so much error that they cannot be trusted to reflect the phenomenon or concept being measured, and the results are not replicable.
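To make the idea of measurement error more concrete, the sketch below estimates one common form of reliability, internal consistency (Cronbach's alpha), for a small set of hypothetical survey responses. The data, function name, and scale are illustrative assumptions, not drawn from the activities in this book.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 5 participants, 4 items on a 1-5 rating scale.
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

Values closer to 1 indicate that the items vary together consistently (less measurement error); low values suggest the items are not measuring the same construct reliably.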
Error, leading to a lack of reliability, can result from various sources. One ...