An Introduction to Data Collection in Program Evaluation
  • 00:00

    [MUSIC PLAYING]

  • 00:12

    ROB FISCHER: Hi, I'm Rob Fischer. I'm a professor at the Jack, Joseph, and Morton Mandel School of Applied Social Sciences at Case Western Reserve University, and today I'd like to talk about data collection in program evaluation. Now, you may be familiar with data collection in most traditional research. I want to talk about the particulars of data collection as it needs to be addressed within a program evaluation environment.

  • 00:43

    ROB FISCHER [continued]: So two points I'd like to make on this distinction about data collection in program evaluation. In evaluation, we need to be collecting data both on the delivery of the program, its implementation, and the outcomes of participants. So we need to keep our eye on data collection to represent both how the program was delivered and the experiences of those who participated in the program.

  • 01:08

    ROB FISCHER [continued]: Ultimately, though, the data approaches that we have available to us are very similar to what we find in traditional research. So if we think about the three kinds of data that we need to have on hand to conduct the evaluation, firstly, we need data about the participants themselves.

  • 01:30

    ROB FISCHER [continued]: So these are data usually collected at baseline about participants, such as their characteristics, their demographics, their risk levels, anything that they bring to the program experience that could impact how well they accomplish the outcomes that are intended. Secondly is measuring the service itself.

  • 01:53

    ROB FISCHER [continued]: What is the amount and type of services that were received by participants? So we need this at the participant level to know exactly what the dose of program service was. It's not enough to know that someone was in a program or not in a program. We need to know how much of the program they got.

  • 02:14

    ROB FISCHER [continued]: And this requires more than just simple attendance data, but also measures of other, perhaps supplemental, services that might have been offered to program participants. And then thirdly, the main other bucket of data that we need is on the outcomes themselves. And here we're going to be relying on data about outcomes as they play out over time.
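
To make the idea of participant-level service dose concrete, here is a minimal sketch, assuming Python with pandas; the participant IDs, service names, and contact counts are hypothetical and are not data from any program mentioned in the tutorial.

```python
# Hypothetical participant-level service log; column names are illustrative assumptions.
import pandas as pd

service_log = pd.DataFrame([
    {"participant_id": 1, "service": "group_session", "contacts": 7},
    {"participant_id": 1, "service": "home_visit",    "contacts": 2},
    {"participant_id": 2, "service": "group_session", "contacts": 3},
])

# Dose per participant: total contacts across core and supplemental services,
# rather than a simple enrolled/not-enrolled flag.
dose = service_log.groupby("participant_id")["contacts"].sum()
print(dose)
```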

  • 02:38

    ROB FISCHER [continued]: Initial outcomes having to do with changes in attitude and knowledge, and if those outcomes are achieved, following them forward to changes in behavior or status for participants. When we talk about data collection, when it comes right down to it, we have to actually select the specific measures that we want to use in the evaluation.

  • 03:02

    ROB FISCHER [continued]: And here we are going to draw heavily on what we know from traditional research in placing a priority on both the validity and reliability of measures. So in evaluation, if we have the opportunity to use a validated instrument that has been proven in other literature to have the properties of validity and reliability, that's something we would try to do.

  • 03:27

    ROB FISCHER [continued]: But I'll be quite frank that often we don't have that luxury. Often we're evaluating a space where maybe measures have not been developed adequately. So to put a fine point on it, the validity is simply the extent to which the measure measures what we think it should. Does it measure the construct that we're after? Reliability is the extent to which the measure produces the same results repeatedly over time with accuracy.

  • 03:55

    ROB FISCHER [continued]: And then thirdly, we want to select measures that are sensitive to the level of change that we expect from a program. So for a program that is very intensive, we could use a more sensitive measure, because it might produce a larger effect than a very brief intervention, where perhaps the changes are very modest.

  • 04:17

    ROB FISCHER [continued]: So if you think about a weight scale as a type of measurement, validity is the idea that the scale actually reports our correct weight. So if we step on the scale and it measures 205 pounds, I know it's probably pretty close.

  • 04:39

    ROB FISCHER [continued]: It's a valid measure. Reliability is the extent to which I can step on and off the scale repeatedly and get that same number from it. So those are aspects of measurement that we need to pay attention to. If we're in a place where we have to use a measure that has unknown validity and reliability, we just need to report that and say that our use of this measure is more exploratory.
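
As a rough illustration of the weight-scale analogy, here is a minimal sketch, assuming Python with NumPy; the two scales and their readings are hypothetical, meant only to separate closeness to the true value (validity) from repeatability (reliability).

```python
# Hypothetical repeated weighings on two scales; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_weight = 205.0  # pounds, the value used in the spoken example

scale_a = rng.normal(true_weight, 0.3, 10)        # close to the truth and consistent
scale_b = rng.normal(true_weight + 8.0, 0.3, 10)  # consistent, but systematically off

for name, readings in [("Scale A", scale_a), ("Scale B", scale_b)]:
    bias = readings.mean() - true_weight   # distance from the true value -> validity
    spread = readings.std(ddof=1)          # repeatability across readings -> reliability
    print(f"{name}: bias = {bias:+.1f} lb, spread = {spread:.2f} lb")
```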

  • 05:07

    ROB FISCHER [continued]: So a final point on selecting measures is very particular to evaluation reality. And that is that the measures that we choose have to actually be accessible to us and usable in practice. And so this is a place where we may have a wonderful measure that has been shown to have reliability and validity.

  • 05:31

    ROB FISCHER [continued]: But when we go to use that measure, it has a high price tag that's out of the price range for the program partners to use it, or you need special training that's not going to be possible for those who will be implementing the measure. Or it's at a literacy level that's too high for the clientele served by the program.

  • 05:51

    ROB FISCHER [continued]: So these are all realities that we have to confront when we select our measures and then move into the space of evaluating the program. On data collection, we can think about two decisions that we need to make. One is the data source-- the who part.

  • 06:13

    ROB FISCHER [continued]: Who will the data come from? And then secondly, the data method. The how. How will we get the data from the source? So when we think of data sources, the usual suspects are here. Certainly program participants are a first category that we need to consider.

  • 06:35

    ROB FISCHER [continued]: And for authentic program evaluation, participant voice must be included for it to be a legitimate undertaking. We also might have access to existing records, and these could be held within the program itself, the agency, or they could be held by other agencies-- collaborating agencies such as school districts or the court system.

  • 06:58

    ROB FISCHER [continued]: We might also rely on sources such as trained observers whom we train to go in and do observation of programs. And then finally, we might have access to mechanical measures, such as weight or medical measurements, health-related measurements. So these are all sources, but then there's the how part, how we will actually extract the data.

  • 07:22

    ROB FISCHER [continued]: We also draw heavily from traditional research. We can use survey methods, which can take various forms-- paper and pencil, web-based, or handled by an interviewer. Various forms there. We can also use more qualitative methods, such as interviewing and focus groups, which would give us much greater depth on some of the issues that we're studying.

  • 07:47

    ROB FISCHER [continued]: If we're accessing data records of any kind-- hopefully electronic, but they could also be paper-based records-- we would use data extraction techniques, where we are summarizing or receiving extracts from an electronic record system. And then a final method would be observation, which is really restricted to a kind of qualitative assessment of program delivery.

  • 08:16

    ROB FISCHER [continued]: So in talking about the availability of different data sources and methods, we often might be interested in just selecting one of each. The reality is that each method and each data source often brings with it specific limitations or weaknesses. So there may be issues of bias underlying the data that come from a particular source, or there may be missing data that are a significant issue-- particularly in electronic records or paper-based records.

  • 08:47

    ROB FISCHER [continued]: And so those limitations force us to think about using multiple sources and multiple methods as a way to compensate for those issues that occur in any single source. Now, this increases the complexity of the evaluation. It increases the burden both on us as researchers and on the program that we're participating with.

  • 09:10

    ROB FISCHER [continued]: It's going to take more time and more involvement from various stakeholders in order for us to get these additional sources and methods brought to bear on the questions of interest. So now I'd like to illustrate the importance of such multiple sources with an example from an evaluation that we conducted.

  • 09:30

    ROB FISCHER [continued]: This is data from an evaluation of a family-based program focused on children's behavior. These were families that had a middle school child who was having behavioral issues, and they came to an eight-week program that was family focused, where parents and children were involved together.

  • 09:52

    ROB FISCHER [continued]: And what you see here is just summary data from a measure called the Behavior Problems Index. And the first thing I would point out to you is that you have four clusters of bars there. The top two clusters are ratings of the boys in this program by their teachers and their parents. So it's the same boys rated from two perspectives.

  • 10:15

    ROB FISCHER [continued]: And then the lower two clusters are the girls in the program, also rated from those two perspectives. And you have pre- and post-data here. The orange bars are the pre-data, before the eight weeks, and the blue bars are the post-data. Shorter bars mean fewer behavior problems. So the first thing you might conclude by looking at this is that at post-test, all of the ratings agree that there are fewer behavior problems.

  • 10:44

    ROB FISCHER [continued]: But if we look a little further, if we look at the ratings of boys from these two different perspectives, teachers and parents, you can see in the top two clusters that teachers and parents agree on the level of behavior problems among boys before the program, and both groups see declines in behavior problems following the program.

  • 11:07

    ROB FISCHER [continued]: But now when we look at girls in the program, you can see that parents, before the program, see significantly more behavior problems than the teachers of the same girls. And while they both see reductions in behavior problems across the course of the program, among parents it's a much more significant decline.

  • 11:30

    ROB FISCHER [continued]: So we have to ask ourselves, by having multiple data sources, what are we learning? And here we have to investigate: why would teachers and parents disagree about the behavior problems of the girls in the program? And in our discussions with the partners, and also using qualitative methods, we had to ask ourselves, is this accurate or not?

  • 11:54

    ROB FISCHER [continued]: And what we found, in investigating how girls are acting in the classroom versus at home, was that it's entirely plausible that they are having more behavior problems in their home setting with their parents than they are in the classrooms. And secondly, we have to accept that these girls are not in classrooms by themselves; they are with the boys-- in the upper bars-- who everyone agrees are having more behavior problems.

  • 12:24

    ROB FISCHER [continued]: So by comparison, girls in the same classrooms may not look like they're having behavior problems compared to the boys, who may be acting up more. So here what we determined in the study is that it was valuable to have the multiple sources, because we learned a little bit more about the authenticity of behavior across two settings.

  • 12:50

    ROB FISCHER [continued]: Now the last part is we apply statistical testing to these data, and we end up with a somewhat more challenging finding to explain. And that is that teachers find significant improvement among boys but not girls, and parents see significant improvement among girls but not boys.
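
To show what this kind of pre/post testing might look like in practice, here is a minimal sketch, assuming Python with NumPy and SciPy; the sample size, means, and effect sizes are hypothetical, chosen only to loosely echo the pattern the speaker describes, and are not the study's data.

```python
# Hypothetical paired pre/post Behavior Problems Index scores, tested within each
# rater-by-gender subgroup with a paired t-test; all numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # assumed number of children per subgroup

# (assumed pre-test mean, assumed mean change from pre to post; negative = improvement)
subgroups = {
    "teachers rating boys":  (12.0, -3.0),
    "teachers rating girls": ( 8.0, -0.5),
    "parents rating boys":   (13.0, -0.8),
    "parents rating girls":  (14.0, -4.0),
}

for label, (pre_mean, mean_change) in subgroups.items():
    pre = rng.normal(pre_mean, 3.0, n)
    post = pre + rng.normal(mean_change, 3.0, n)  # paired: the same child measured twice
    t_stat, p_value = stats.ttest_rel(pre, post)  # did scores change significantly pre to post?
    print(f"{label}: mean change = {(post - pre).mean():+.1f}, p = {p_value:.3f}")
```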

  • 13:12

    ROB FISCHER [continued]: But I think statistical testing is one application of this; there is also just understanding the pattern and how the consistency of the pattern may inform our action. So here, working with the program partners, we really did determine that this was an authentic assessment of the behavior, and that it differs by gender of the children involved in the program.

  • 13:43

    ROB FISCHER [continued]: So in this tutorial, we have talked about the use of data collection within program evaluation. While we are using the traditional research methods and data collection methods that we may be familiar with, we are applying them in very different settings. So we've talked about how we need data both on participants and the services they receive.

  • 14:05

    ROB FISCHER [continued]: We've focused on how, though we might want valid and reliable measures, we may not have those available to us in particular evaluations that we undertake. We also focused on how we can make decisions about data sources and methods in evaluation, and how crucial it is to make sure that we include multiple sources and multiple methods as a strategy to compensate for the weaknesses of any one method or source.

  • 14:43

    ROB FISCHER [continued]: [MUSIC PLAYING]

Video Info

Publisher: SAGE Publications Ltd.

Publication Year: 2017

Video Type: Tutorial

Methods: Program evaluation, Data collection

Keywords: attitudes and behavior; challenges, issues, and controversies; decision making; families; gender and behavior; program implementation; voice and visibility

Segment Info

Segment Num.: 1


Abstract

Professor Rob Fischer discusses the particular needs and challenges of data collection in program evaluation studies. He emphasizes that researchers must collect multiple types of data from different sources, using different methods.
