  • 00:00


  • 00:12

    SUNG-WOO CHO: Hi. My name is Sung-Woo Cho, and I'm a researcher and lecturer, formerly of Columbia University. And today I'll be talking about an introduction to Randomized Controlled Trials, or RCTs. So just a few things about myself. My previous work has mostly been on community college students' progress using quantitative data, and currently I work on evaluating education and workforce development programs.

  • 00:40

    SUNG-WOO CHO [continued]: My focus is primarily on impact evaluation. I'll talk more about what an impact evaluation is and how it's defined. And Randomized Controlled Trials, or RCTs, are usually the best way to answer this impact evaluation question. So, a few objectives for this presentation.

  • 01:02

    SUNG-WOO CHO [continued]: We'll learn more about impact evaluations and why they're important in the social sciences. We'll learn more about RCTs themselves. We'll learn about selection bias, why it's a problem, and how RCTs can help resolve it. And lastly, we'll see an example of an RCT in the field of education.

  • 01:29

    SUNG-WOO CHO [continued]: So, what works? This is a pretty large question in the social sciences, and it's often the first and biggest question for social scientists in any evaluation or research work that they're doing. So, again, does something work or not? An impact evaluation helps answer this question with the right data.

  • 01:52

    SUNG-WOO CHO [continued]: So we'll try to measure the impact of a program or a practice on outcomes that we can measure, and that's basically the deal with impact evaluations. An outcome, to define this in our fields, is a measurable endpoint. So, for example, completing a degree or persisting into the next year is a measurable endpoint for the students or people that we're tracking.

  • 02:17

    SUNG-WOO CHO [continued]: So, again, with an impact evaluation, it's a clear question of whether something is working or not. With impact evaluations, there's a causal element. So, for example, can you find out if a program is causing degree completion to increase? This is a little different from what people sometimes call descriptive analyses, where there is no causal element, but you're running descriptive statistics and figuring out what the averages are for an outcome, for example.

  • 02:47

    SUNG-WOO CHO [continued]: But often, with descriptive analyses, there's no causal element. With an impact evaluation, there is a causal element: you're trying to figure out whether a program is effectively driving outcomes to change. So impact evaluations are different from many other analyses that use quantitative data, such as descriptive analyses. And a rule of thumb is, if there is no causal question to answer, then it's probably not an impact evaluation.

  • 03:21

    SUNG-WOO CHO [continued]: So RCTs are often the best way to answer causal questions in an impact evaluation. In an RCT, the main idea is that people are randomly placed into either the program condition, the treatment group, or not, the control group. The intervention is the program or practice that the treatment group receives but the control group does not.

  • 03:48

    SUNG-WOO CHO [continued]: With this diagram of RCTs, at the very left we have an entire group of people that perhaps we want to study. We go through a consent process where, out of that entire group, we have some people who consent to being in the study, who want to be in the study, and then there are people who don't want to be in the study, who are left out of the study sample.

  • 04:08

    SUNG-WOO CHO [continued]: So the study sample has people from the entire population that you're looking into, and these are the people who have provided consent to be involved in a randomized trial. When randomization occurs, people are randomly placed into either the treatment or the control group.
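    The consent-then-randomize flow just described can be sketched in a few lines of code. This is a minimal illustration with invented names; the `randomize` helper and the student list are not from the talk:

    ```python
    import random

    def randomize(study_sample, seed=42):
        """Split a consented study sample into treatment and control at random."""
        rng = random.Random(seed)                # fixed seed so the split is reproducible
        shuffled = list(study_sample)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return shuffled[:half], shuffled[half:]  # (treatment, control)

    # People who went through the consent process and agreed to participate
    consented = [f"student_{i}" for i in range(100)]
    treatment, control = randomize(consented)
    print(len(treatment), len(control))  # 50 50
    ```

    Every consented person ends up in exactly one group, and chance alone decides which one.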

  • 04:35

    SUNG-WOO CHO [continued]: So let me talk a little bit about selection bias. This is an issue that is pervasive in a lot of social science research, one that researchers often want to either lessen or eliminate altogether. Let's imagine a scenario in which a school decides to start an after-school tutoring program.

  • 04:57

    SUNG-WOO CHO [continued]: So students, in this case, are allowed to choose whether or not to enroll in the program. In that case, if the students are choosing to be in the program, it might be that the people who choose to enroll actually have higher grades or are more motivated.

  • 05:18

    SUNG-WOO CHO [continued]: And as a result, if we're measuring GPA as an outcome, for example, those motivated, higher-grade students might have higher outcomes than students who are not as motivated. So with selection bias, students or people are choosing to go into the condition that they want to be in, and as a result, those choices might be driving outcomes to change instead of the program in and of itself.

  • 05:44

    SUNG-WOO CHO [continued]: So in this case, if researchers are evaluating the impact of the program and the main outcome is GPA, and students are choosing to be in this after-school program, then those students, who might be more motivated or have higher grades already, might be driving these increased GPA outcomes compared to students who choose not to be in the after-school program.

  • 06:08

    SUNG-WOO CHO [continued]: In this diagram that I've created about selection, from the study sample of people, these people are choosing to be in the treatment or the control condition. So in this case, there are some students who choose to be in the after-school program and some students who do not, who are in the control group. When we track these students' outcomes over time, we go over to the right-hand column, where the students who are in the treatment group have much higher GPAs, for example, than the students who were in the control condition, who have lower GPAs.

  • 06:41

    SUNG-WOO CHO [continued]: And it could have been the case, once again, that the students who chose to be in the after-school program might have been more motivated, might have been more excited to be in the program, or had higher grades in the first place, and that's the reason why they had higher GPAs after we tracked them. So with selection bias, it's really hard to tell, or in some cases impossible to tell, whether the program is increasing GPAs, the outcome, for the treatment students, or whether students who already have higher grades and higher motivation simply end up with high GPAs.

  • 07:17

    SUNG-WOO CHO [continued]: So it's really hard to, what we call, disentangle the effect of the program on outcomes, because we have people's characteristics, their motivation, that might also be driving outcomes to change, and not just the program itself. So that was an example of selection bias taking place: people select into the treatment or the control condition, and as a result, they bias the results.
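    To make the problem concrete, here is a small simulation (all numbers invented for illustration) in which a hypothetical tutoring program has zero true effect, yet more motivated students both enroll more often and earn higher GPAs, so the naive treatment-versus-control comparison still shows a positive gap:

    ```python
    import random

    def naive_gap_under_selection(n=20000, seed=0):
        """Simulate self-selection: motivation drives both enrollment and GPA,
        while the program itself has no effect at all."""
        rng = random.Random(seed)
        treat, control = [], []
        for _ in range(n):
            motivation = rng.random()            # hidden characteristic in [0, 1]
            gpa = 2.0 + 2.0 * motivation         # GPA depends only on motivation
            if rng.random() < motivation:        # motivated students enroll more
                treat.append(gpa)
            else:
                control.append(gpa)
        mean = lambda xs: sum(xs) / len(xs)
        return mean(treat) - mean(control)       # biased "impact" estimate

    print(round(naive_gap_under_selection(), 2))  # positive, despite a zero true effect
    ```

    The gap here reflects only who chose to enroll, which is exactly what selection bias means: the group comparison cannot disentangle the program from the people.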

  • 07:42

    SUNG-WOO CHO [continued]: That is, it's hard to tell whether the program or the makeup of the people is causing outcomes to change. You can't tell whether it's the program alone that's driving outcomes to change, or the program together with the makeup of the people.

  • 08:08

    SUNG-WOO CHO [continued]: So how do RCTs remove this sort of selection bias? By using an RCT to measure the impact of a program, an evaluator would randomly assign people to the treatment or the control group. The randomization of being in the treatment or control group removes the element of selection, and therefore selection bias.

  • 08:32

    SUNG-WOO CHO [continued]: So if there are no instances of people choosing to be in the treatment or the control condition, because everything is randomized, then there should be no selection and therefore no selection bias. And by randomly assigning people to the two groups, the makeup of the two groups should be very similar to one another.

  • 08:52

    SUNG-WOO CHO [continued]: So imagine you have hundreds of people, maybe even thousands of people, being randomly assigned to either the treatment or the control group. The larger the samples get, the more similar the makeup of the groups should become. If you look at the averages of racial makeup or average age, they should start to become very, very similar to one another across the treatment and the control conditions.
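    This balance property is easy to check numerically. In this hypothetical sketch (the 18-65 age range and function names are invented), we randomly split samples of increasing size and watch the average gap in mean age between the two groups shrink:

    ```python
    import random

    def age_gap(n, seed):
        """Randomly assign n people to two groups; return the gap in mean age."""
        rng = random.Random(seed)
        ages = [rng.uniform(18, 65) for _ in range(n)]
        rng.shuffle(ages)
        treat, ctrl = ages[: n // 2], ages[n // 2 :]
        return abs(sum(treat) / len(treat) - sum(ctrl) / len(ctrl))

    def average_gap(n, trials=100):
        """Average the gap over many independent randomizations of size n."""
        return sum(age_gap(n, seed) for seed in range(trials)) / trials

    for n in (100, 1000, 10000):
        print(n, round(average_gap(n), 2))  # the average gap shrinks as n grows
    ```

    Averaging over many randomizations shows the trend clearly: any single split can be a little lopsided, but larger samples make the two groups look more and more alike.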

  • 09:22

    SUNG-WOO CHO [continued]: So what if randomization isn't possible? Randomization might not be an option if, for example, the treatment is given to an entire population. If the school that we've talked about wants all the students to enroll in its after-school program, then randomization might not even be an option.

  • 09:44

    SUNG-WOO CHO [continued]: Or if parents in the school don't want their students to be randomly assigned to a treatment or a control condition, then randomization might not be possible, not for research reasons but for other considerations. In those cases, to measure impact, a QED, a Quasi-Experimental Design, is the next best option.

  • 10:07

    SUNG-WOO CHO [continued]: So in a QED, the treatment and control group people are matched together based on their characteristics, and the outcomes are later measured. In this case, people are not being randomly put into a treatment or a control condition; instead, using the data that you've already collected, you are effectively matching the treatment group people to very similar control group people, based on all the data on people's characteristics at your disposal.

  • 10:37

    SUNG-WOO CHO [continued]: And then you track them until you have outcomes, and then look at the differences in outcomes. So in terms of measuring impact, this measurement of differences in outcomes across the treatment and control conditions is very similar for QEDs and for RCTs.
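    One simple way to build the matched comparison group a QED needs is greedy nearest-neighbor matching on observed characteristics. The sketch below matches on a single invented characteristic, age; real matching typically uses many characteristics at once (for example, via propensity scores), so treat this as illustration only:

    ```python
    def match_controls(treated, pool, key=lambda person: person["age"]):
        """Pair each treated person with the closest unused control on `key`."""
        available = list(pool)
        pairs = []
        for t in treated:
            best = min(available, key=lambda c: abs(key(c) - key(t)))
            available.remove(best)             # each control is used at most once
            pairs.append((t, best))
        return pairs

    treated = [{"name": "A", "age": 20}, {"name": "B", "age": 30}]
    pool = [{"name": "X", "age": 31}, {"name": "Y", "age": 21}, {"name": "Z", "age": 50}]
    for t, c in match_controls(treated, pool):
        print(t["name"], "matched with", c["name"])
    ```

    After matching, outcomes are compared across the matched pairs, just as they would be compared across the randomized groups in an RCT.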

  • 10:58

    SUNG-WOO CHO [continued]: So how do we measure the impact of a program using RCTs? In this diagram here, we have the treatment group's GPA outcome on the left-hand side, and we have the control group's GPA on the right-hand side. We found that for the treatment group, the average GPA outcome is 3.8, and for the control group, the average GPA is 3.2.

  • 11:22

    SUNG-WOO CHO [continued]: Measuring the impact of the program using a Randomized Controlled Trial is fairly straightforward: all you have to do is measure the difference in average outcomes between the treatment and the control group. So in this case, 3.8 minus 3.2 equals 0.6, and that is the impact of the after-school program on GPA outcomes.
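    That difference-in-means computation is one line of arithmetic. Here is a small sketch; the individual GPA lists are invented, chosen so the group averages match the 3.8 and 3.2 from the example:

    ```python
    def impact_estimate(treatment_outcomes, control_outcomes):
        """Impact = mean treatment outcome minus mean control outcome."""
        mean = lambda xs: sum(xs) / len(xs)
        return mean(treatment_outcomes) - mean(control_outcomes)

    treatment_gpas = [3.7, 3.8, 3.9]   # averages to 3.8
    control_gpas = [3.1, 3.2, 3.3]     # averages to 3.2
    print(round(impact_estimate(treatment_gpas, control_gpas), 1))  # 0.6
    ```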

  • 11:51

    SUNG-WOO CHO [continued]: So in conclusion: in an RCT, since people are randomly put into a treatment or a control group, and the only difference between the two groups is that the treatment group received the program, you can be very confident that the program, in and of itself, caused the difference in outcomes.

Video Info

Publisher: SAGE Publications Ltd.

Publication Year: 2017

Video Type: Tutorial

Methods: Randomized control trials, Evaluation, Quasi-experimental designs

Keywords: after-school education; choice behavior; grades (scholastic); outcomes; outcomes of education; treatment ...

Segment Info

Segment Num.: 1

Persons Discussed:

Events Discussed:



Researcher Sung-Woo Cho introduces the concept of randomized controlled trials (RCTs). He explains how randomization can eliminate the effects of selection bias, and how quasi-experimental design (QED) can be employed in a study if randomization is not possible.


An Introduction to Randomized Controlled Trials
