An Introduction to Outcome Measurement
  • 00:11

    DR. ROB FISCHER: Hi. I'm Rob Fischer. I'm a professor at the Jack, Joseph and Morton Mandel School of Applied Social Sciences at Case Western Reserve University. And today I want to talk with you about outcome measurement, specifically in the environment of program evaluation. Now outcome measurement is very much focused on the changes that occur for participants in programs.

  • 00:35

    DR. ROB FISCHER [continued]: And when I say participant, keep in mind that this could be individuals, but it might also be couples, families, classrooms, or entire neighborhoods. It depends on the unit of intervention of the program that we are studying. So the outcomes that we are focused on, and this is a crucial definition for us, are changes in the participants themselves.

  • 01:02

    DR. ROB FISCHER [continued]: Changes having to do with things like their knowledge, their attitudes, their behavior, or their status. So there's often confusion about the difference between outcomes and things like outputs, which are really measures of service. It's very important for us to keep our eye on the ball as far as making sure that outcomes meet our definition of changes in participants.

  • 01:28

    DR. ROB FISCHER [continued]: So where do outcomes come from? Well, they have to be derived from the theory of change that underlies the program that we're studying. The theory of change is simply the rationale for why doing this type of program should bring about the changes that we desire for the participants. And that theory of change is often hidden within the understanding of programs by those who design them and deliver them.

  • 01:55

    DR. ROB FISCHER [continued]: So it is something that needs to be brought to the surface for us in this discussion of evaluation. And the way that we bring that theory of change to light is through something called a logic model, which is really just a simple depiction or thumbnail sketch of what the program's theory is.

  • 02:20

    DR. ROB FISCHER [continued]: To actually give form to the logic model, we need to think about what the pieces and parts of this picture are, and to remind ourselves that the logic model doesn't come from the researcher. It doesn't come from the evaluator. It has to be designed collaboratively with the stakeholders of the program.

  • 02:42

    DR. ROB FISCHER [continued]: So here, crucially, we would want to involve the program operator or deliverer and the funder, and certainly the evaluator would be present. But other voices, like participants and other stakeholders, might also be included in this discussion. And so collectively we hope that group, reflecting together, can determine what the consensus view of the theory of change is.

  • 03:08

    DR. ROB FISCHER [continued]: And then to actually articulate this in the picture, we will have four categories of material that we reflect in a logic model: inputs, activities, outputs, and outcomes. Inputs are simply those resources that are used by the program to accomplish its mission, usually staff, materials, and locations.

  • 03:32

    DR. ROB FISCHER [continued]: The activities are the simple statement of what the program is, what it does. It is not a laundry list or to-do list of all the activities of the program. It's something as simple as a transitional housing program for homeless first-time mothers with a child under the age of two, something that tells you exactly what this program is doing.

  • 03:55

    DR. ROB FISCHER [continued]: And then the last two categories have to do with what the program produces. The outputs are those countable units of service that we're very familiar with counting, often in the nonprofit and social sector. How many classes? How many participants? How many sessions were delivered? How many materials were distributed? All those things that show us the effort of the program that can be readily counted.

  • 04:21

    DR. ROB FISCHER [continued]: But we know very well that outputs don't necessarily lead to outcomes. So the next stage is to trace forward, for participants, what changes occur on the outcomes of interest. So I want to share with you an example of what a logic model can look like.

  • 04:41

    DR. ROB FISCHER [continued]: This is an example from a transitional housing program for homeless young women with their first child. And I'm not going to go through all the parts here. In the lower half, what you see are the inputs, activities, and outputs. Now these are things that we're often very comfortable with measuring.

  • 05:04

    DR. ROB FISCHER [continued]: We want to relay what the inputs are as a way to show what the scale and scope of the program is. So it matters whether it's done by full-time staff, part-time staff, or volunteers, to really give you a sense of the bones of the program. The activity is that simple statement of what it's accomplishing or what it's about.

  • 05:25

    DR. ROB FISCHER [continued]: And the outputs are those countable units of service, as I just mentioned. I really want to focus my comments on the upper half of the logic model, which lays out the outcomes. So the first thing you'll notice is that there are really four vertical threads going through this logic model. And these threads were developed working with the program staff themselves.

  • 05:48

    DR. ROB FISCHER [continued]: When we asked them what outcomes they were seeking with the families that come into the facility, the first four things that they mentioned had to do with employment, use of public assistance, parenting, and family planning. Now there were certainly many other outcomes that were mentioned.

  • 06:09

    DR. ROB FISCHER [continued]: But for logic model work, to bring clarity and consensus, a lot of those things dropped away as part of the conversation, because they were not the top priorities or they were not shared by all stakeholders of the program. These were the four areas where there was high agreement about the importance and priority of these outcomes.

  • 06:31

    DR. ROB FISCHER [continued]: The other thing you'll notice about these four threads is that each one of them is sequenced from initial to intermediate outcome. So conceptually, you can think about those initial outcomes as being closer to what the program does, and then the intermediate is a little farther out in time for participants. So on the far left there, you can see that the initial focus in the employment space is for women to obtain jobs or to pursue job preparation work.

  • 07:00

    DR. ROB FISCHER [continued]: And then if that outcome is achieved, the intermediate outcome would be that women obtain employment or obtain better employment. So each of these is a sequence of outcomes. And sometimes the sequence may be surprising as we have that dialogue with our stakeholders. So for the second one, around the use of public assistance, the initial outcome for these families was that they would gain access to public assistance.

  • 07:25

    DR. ROB FISCHER [continued]: Things like food assistance, housing, health care, and child care assistance. So when you show an outcome that says the initial plan is to increase dependence on public assistance, that may be counterintuitive to some stakeholders. But when you see it in relation to the next outcome, which is that they then decrease their reliance on those services, it makes perfect sense.

  • 07:52

    DR. ROB FISCHER [continued]: And in fact, because these were homeless families, the idea of stabilizing them by having them get access to services for which they were eligible was completely consistent with the theory behind helping homeless families achieve stability. The other areas are around becoming a better parent and around family planning.

  • 08:14

    DR. ROB FISCHER [continued]: On parenting, it's learning skills and then using those skills. On family planning, the idea is that quick second pregnancies were threatening to these families in terms of their future stability and a lot of the other outcomes the mothers were seeking, but that, over some period of time, family planning might include additional births for this young mom if it made sense, given how her circumstances had changed.

  • 08:42

    DR. ROB FISCHER [continued]: And then the last point is you can see all of these coming together into a long-term outcome, which is that families continue progress toward economic self-sufficiency. Now I would love for that box to say families become self-sufficient, and initially, that's what that long-term outcome was. But the reality for these homeless families was different.

  • 09:03

    DR. ROB FISCHER [continued]: The idea of actually attaining economic self-sufficiency was very much a challenge for many of them. So it's the idea of, as for any family, continued progress towards that milestone of self-sufficiency.
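
To make that walkthrough concrete, here is a minimal sketch, in Python, of how the four logic model categories and the four outcome threads just described might be recorded as a simple data structure. The field names and entries are hypothetical paraphrases of the discussion, not the program's actual logic model document.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class OutcomeThread:
        initial: str       # closer to what the program does
        intermediate: str  # a little farther out in time for participants

    @dataclass
    class LogicModel:
        inputs: List[str]                     # resources: staff, volunteers, facilities
        activities: List[str]                 # the simple statement of what the program is
        outputs: List[str]                    # countable units of service
        outcome_threads: List[OutcomeThread]  # sequenced changes in participants
        long_term_outcome: str

    example = LogicModel(
        inputs=["full-time and part-time staff", "volunteers", "housing facility"],
        activities=["transitional housing for homeless young women with their first child"],
        outputs=["families housed", "classes delivered", "sessions provided"],
        outcome_threads=[
            OutcomeThread("obtain a job or pursue job preparation",
                          "obtain employment or better employment"),
            OutcomeThread("gain access to public assistance",
                          "decrease reliance on public assistance"),
            OutcomeThread("learn parenting skills",
                          "use parenting skills"),
            OutcomeThread("avoid a quick second pregnancy",
                          "plan any additional births as circumstances change"),
        ],
        long_term_outcome="families continue progress toward economic self-sufficiency",
    )

    for thread in example.outcome_threads:
        print(f"{thread.initial} -> {thread.intermediate}")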

  • 09:25

    DR. ROB FISCHER [continued]: So we've worked very hard on our logic model, and we've framed out what those outcomes of interest are. Now the next step is that we actually have to select the way we will measure each of those outcomes. So this is selecting indicators to represent the outcomes that we have placed into the logic model.

  • 09:45

    DR. ROB FISCHER [continued]: And those indicators would be the specific ways of measuring the outcomes. So for example, if we want to increase knowledge or confidence about parenting skills, how will we do that? What source and method will we use? Will we use a standardized assessment? Will we use observation of these moms? Will we use self-report of moms or significant others about parenting skills?
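
One hedged way to keep track of these source-and-method decisions is a simple mapping from each outcome to its candidate indicators, each tagged with a data source, as in the Python sketch below. The indicator names and sources are illustrative examples only, not the program's actual measures.

    # Illustrative mapping from one outcome to candidate indicators; every
    # entry here is a hypothetical example, not a prescribed measure.
    candidate_indicators = {
        "increased knowledge and confidence about parenting": [
            {"indicator": "score on a standardized parenting assessment", "source": "standardized instrument"},
            {"indicator": "observed use of parenting skills", "source": "observation of moms"},
            {"indicator": "self-rated parenting confidence", "source": "self-report by moms or significant others"},
        ],
    }

    for outcome, options in candidate_indicators.items():
        print(outcome)
        for option in options:
            print(f"  - {option['indicator']} (via {option['source']})")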

  • 10:09

    DR. ROB FISCHER [continued]: So those are decisions that we have to make. So we want to offer some guidance on the general selection of outcome indicators. First, outcomes have to do with participants, so we really have to make sure that we have represented participant experience in our outcomes. We have to start with that.

  • 10:30

    DR. ROB FISCHER [continued]: And whether or not we think participant voice is the most reliable, or valid, or unbiased view of experience, it's important to include participants as a data source in this. Secondly, we're often trying to measure pre- to post-change in outcome measurement. We're measuring something before the program starts, and something after the program ends, and then perhaps over time.

  • 10:58

    DR. ROB FISCHER [continued]: So that pre- to post-change can be a valuable way of assessing differential achievement on an outcome over time. Obviously we want to select outcomes that are closely linked to the program. But many times programs think that they need to select measures that sound good, rather than measures tied to what they are specifically doing, where the action of the program is.
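
As a small worked example of that tactic, the Python sketch below computes pre- to post-change on a single indicator for a few participants. The scores and participant identifiers are invented purely to show the arithmetic.

    # Pre- and post-program scores on one outcome indicator (made-up numbers).
    pre_scores = {"p1": 42, "p2": 55, "p3": 61}    # measured before the program starts
    post_scores = {"p1": 50, "p2": 54, "p3": 70}   # measured after the program ends

    # Per-participant change, and the average change across participants.
    changes = {pid: post_scores[pid] - pre_scores[pid] for pid in pre_scores}
    mean_change = sum(changes.values()) / len(changes)

    print(changes)      # e.g. {'p1': 8, 'p2': -1, 'p3': 9}
    print(mean_change)  # average pre- to post-change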

  • 11:23

    DR. ROB FISCHER [continued]: But we need to make sure, any time we've identified an outcome, that we are able to point back to what the program itself does and to say, here is the activity that links to that outcome. Because otherwise it raises the question, why is this outcome here? And I mentioned the client or participant perspective is crucial.

  • 11:45

    DR. ROB FISCHER [continued]: Self-report data can have certain biases in them, but they have a certain validity. Regardless of whether we agree with what they say, it's important to include that client self-report in some fashion in the evaluation, simply as a way to validate other data that we may get from competing sources.

  • 12:06

    DR. ROB FISCHER [continued]: So this all sounds good. But there are also some concerns, some pitfalls, that we might experience in the selection of outcomes. Because we're working in an applied environment, often with the funder at the table, sometimes programs have gotten resourced or funded in a way that promotes a certain outcome that maybe doesn't match up with the program reality.

  • 12:28

    DR. ROB FISCHER [continued]: Well, those outcomes have to be included in some fashion. If they are the funder's crucial outcome, and the program accepted the funding to produce that outcome, it has to be measured. But there may be opportunity to include additional measures that maybe support the interim program theory. There are also some issues once we've selected outcomes: there are risks in how a program might change to produce more of that outcome.

  • 12:57

    DR. ROB FISCHER [continued]: Once staff and other partners know what we're measuring for the evaluation, it may be that, even unconsciously, they're changing their behavior to produce more of that outcome, and not through mechanisms to produce better services but through how they change the practices of the program.

  • 13:20

    DR. ROB FISCHER [continued]: So we need to make sure that if the outcome changes, it's due to the actual benefits of the program that are being delivered, not because of some change in tactics or measurement. And then thirdly, again, we hope that all of the outcomes we select are linked to what the program does.

  • 13:40

    DR. ROB FISCHER [continued]: But sometimes they also are very much intimately tied to what goes on around the program. So anytime you have outcomes related to markets such as the labor force, or the housing market, or public assistance, where the control of those is outside the purview of the program, we need to make sure we're careful about interpreting those results.

  • 14:05

    DR. ROB FISCHER [continued]: If you run a job training program in a community that is highly distressed and unemployment is high, your results are going to be worse in terms of job placement and retention than in areas that are having thriving economic times. So we need to have a way to understand participant outcome results within those contexts that may impact their success.

  • 14:33

    DR. ROB FISCHER [continued]: So I want to make some key distinctions in our logic model work for outcome measurement. Firstly, one that we just made, and that's the distinction between outcomes and indicators. Outcomes are kind of the conceptual changes that we hope to see in participants: economic sustainability or improved academic performance.

  • 14:58

    DR. ROB FISCHER [continued]: Indicators are those specific measures that we will use to represent the outcome. So if it's about employment, are we talking about retention in employment, earnings, job satisfaction? If it's about academic success for children, are we talking about normal progress in school, proficiency test scores, grade point average?

  • 15:24

    DR. ROB FISCHER [continued]: What are those specific measurement points that we'll use to represent the outcome? And what's crucial here is we might have a consensus view about what the outcome is, but then we might disagree about what indicators are appropriate to represent it. All right. Another key distinction for us in our logic model work has to do with where the outcomes fall in relation to participants' program experience.

  • 15:49

    DR. ROB FISCHER [continued]: Previously in the logic model, we talked about initial, intermediate, and long-term outcomes. One generalized way of thinking about this is whether outcomes are proximal to the program, in proximity or close to it, or distal. So this is a discussion for the partners to think about: what things do we want to be held accountable for, in terms of the program evaluation?

  • 16:11

    DR. ROB FISCHER [continued]: And proximal outcomes are those things that we have the most confidence that the program should produce. The distal outcomes, which are often further out in the future, are those things that we hope for but that are impacted not just by participants' program experience but by many other factors in their community experience. So we need to sort out which are those close program outcomes that we want to be focused on, versus those that are perhaps on our agenda but are not something that we may even be able to measure.

  • 16:45

    DR. ROB FISCHER [continued]: And then the last point is this entire tutorial has been about outcome measurement. It is not about claiming impact or claiming program effect, because all we're doing is measuring whether an outcome changed while a participant was in a program. Which is important.

  • 17:05

    DR. ROB FISCHER [continued]: And it's hopefully consistent with our theory about how change should occur. But we really haven't proved that the change we observe is due to the program experience. We need a much more rigorous evaluation design to actually make a claim about causality. So that would be where we would use comparison or control groups, or matched cohort studies, or other designs that would allow us to actually attribute differences in the outcome to the program that we're evaluating.
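
To illustrate that distinction, here is a minimal Python sketch of why a comparison group matters for attribution: the same pre- to post-change is computed for a program group and a comparison group, and it is the gap between the two average changes that a rigorous design would try to attribute to the program. The numbers are invented.

    # Pre- to post-changes for program participants and a comparison group (made-up data).
    program_changes = [8.0, -1.0, 9.0, 5.0]
    comparison_changes = [3.0, 0.0, 2.0, 1.0]

    def average(values):
        return sum(values) / len(values)

    # Outcome measurement alone shows that the program group changed; only an
    # adequate design (comparison/control groups, matched cohorts) lets this
    # difference begin to support a causal claim about the program.
    estimated_difference = average(program_changes) - average(comparison_changes)
    print(estimated_difference)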

  • 17:39

    DR. ROB FISCHER [continued]: But it's really important that we do this outcome measurement work as a foundation before we apply a more rigorous design. We should really have a good sense of what the outcomes are, and be able to show that they are occurring according to the agenda for the program partners, before we employ a more rigorous study design to test whether the program produced those results.

  • 18:11

    DR. ROB FISCHER [continued]: In this tutorial we've talked specifically about outcome measurement as a set of tactics within program evaluation. We've talked about the definition of outcomes as being changes in participants, in their knowledge, attitudes, and behavior. We've discussed how to unpack the theory of change behind a program, using the logic model as a way to depict what we expect from program experience.

  • 18:40

    DR. ROB FISCHER [continued]: And then we have also talked about the importance of conveying the connection between outcomes, from initial to intermediate outcomes, and perhaps to long-term, to convey what it is that we should expect to see play out over time for participants. Some of our guidance has focused on making sure that we represent the participants themselves in our outcome measurement methods.

  • 19:07

    DR. ROB FISCHER [continued]: That we think about pre- to post-measurement as a tactic to capture change. And finally, making sure that we include measures that are based on client self-report as a way to authentically represent the experiences of clients within the program.

Abstract

Professor Rob Fischer describes outcome measurement in program evaluation. He emphasizes the importance of stakeholder participation in research design, pre- and post-program measurement, and the difference between outcome and impact.
