  • 00:00

    SPEAKER: In this lesson, we're going to talk about power and size, or more precisely, sample size. But first, let's start out with candy. Let's assume we have two jars of candy. There are 50 red and 50 green candies in each jar. Now, let's assume that the jars are not translucent, meaning that we can't see through them.

  • 00:22

    SPEAKER [continued]: Let's assume that we wanted to see if the distribution of green and red candies was really the same for both jars. So we draw a sample of 10 candies from each jar and count the number of green ones. And we get 2 green ones for jar A and 8 green ones for jar B, so 20% versus 80%. Can we now assume that the distribution of colors

  • 00:44

    SPEAKER [continued]: is different in both jars? Not really. The sample size of 10 is quite small, and a result like that is absolutely possible by chance, even if the true distribution was 50-50. So in order to get a more reliable and accurate result, we'd have to enlarge the sample size. Let's say we drew 50 candies from each jar.
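
A minimal Python sketch of why 10 draws are so unreliable, modeling each draw as an independent 50-50 event (i.e., drawing with replacement, an assumption made here just to keep the arithmetic simple):

from math import comb

def prob_k_green(n, k, p=0.5):
    """Binomial probability of exactly k green candies in n draws."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of a split at least as lopsided as 2 green out of 10,
# in either direction, when the jar is truly 50-50:
p_extreme = sum(prob_k_green(10, k) for k in (0, 1, 2, 8, 9, 10))
print(f"P(<= 2 or >= 8 green out of 10) = {p_extreme:.3f}")  # ~0.109

# With 50 draws, the sample proportion hugs the true 50% much more
# tightly: the count lands between 20 and 30 green almost 9 times in 10.
p_close = sum(prob_k_green(50, k) for k in range(20, 31))
print(f"P(20..30 green out of 50) = {p_close:.3f}")  # ~0.88

So roughly one jar in ten would show a split at least as lopsided as 2 green out of 10 purely by chance, while 50 draws land within 10 percentage points of the truth almost 9 times out of 10.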

  • 01:06

    SPEAKER [continued]: Now, the result will probably look more like this: 27 green candies from jar A versus 24 from jar B, so 54% versus 48%. So you see that as the sample size gets bigger, the accuracy increases. And, of course, if we had drawn 100 candies from each jar, the accuracy would be 100%, since we

  • 01:27

    SPEAKER [continued]: sampled the entire population of candies. And what does that mean for the population of patients? Well, when introducing a new drug, what we really would like to know is, what would happen if the entire population of patients with the disease of interest received this drug? How many patients would we be able to cure or save? In other words, what is the true or real-world effect

  • 01:48

    SPEAKER [continued]: of the drug? Or actually, we'd like to know what would happen if we gave the drug of interest to 50% of the population while the other 50% didn't receive the drug. Let's say that the recovery rate was 80% in patients who received the drug and 40% in patients who didn't. That would be the true effect of the drug. Now, a trial of the entire population

  • 02:09

    SPEAKER [continued]: is usually impossible. So we do the next best thing and draw a sample that's as large and representative as possible. So as with the candy, the recovery rates in the sample population will be an approximation of the true effect. So let's say in the group who received the drug, 73% recovered, whereas in the placebo group,

  • 02:30

    SPEAKER [continued]: 47% of patients recovered. As you can see, these numbers are approximations of the real-world numbers, and as with the candies, they get better as the sample size increases. So we know that ideally, in order to get high accuracy in our approximation of the real-world numbers, we'd like to have a large sample size. However, the more patients we recruit, the more money

  • 02:53

    SPEAKER [continued]: and time we have to invest, and these are usually limited. So let's look at the possible conclusions one can draw from a study. Here's a 2 by 2 table. On the top, you see the actual, real-life facts. Two treatments can actually be no different or different. Our study's results are shown on the left. We can either see no difference in treatment effects in our study, or we can see a difference.

  • 03:16

    SPEAKER [continued]: When we see no difference in effects when there really is none in the real world, we made a true conclusion. When we see a difference when there really is one, we also made a true conclusion. On the other hand, if we see a difference when there's none, we made an error. And when we see no difference when there really is one, we also made an error.
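
Laid out as a 2 by 2 table, with the error names that are introduced next:

                             Real world: no difference    Real world: a difference
Study sees no difference     true conclusion              error (Type II)
Study sees a difference      error (Type I)               true conclusion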

  • 03:36

    SPEAKER [continued]: When we see a difference in our study when there really is none, that's called a Type I error. The probability of making a Type I error is called alpha. When we see no difference in effects when there really is one, that's called a Type II error. The probability of a Type II error is called beta. The acceptable cutoff of alpha is generally

  • 03:56

    SPEAKER [continued]: chosen at 0.05, or 5%. What does it mean if you read the results of a study and the authors state that a drug improved the outcome, say, by 20%, and that alpha or the p-value was below 0.05? Well, this means that the probability of detecting this difference by chance, meaning detecting this difference even

  • 04:17

    SPEAKER [continued]: though in real life there was no difference, is below 5%. In other words, our conclusion in this case might be wrong, but the probability of this being so is below 5%, and this is generally agreed to be acceptable. The ability to correctly conclude that there's a difference between treatments is called power. It's generally agreed that a study should have 80% power.
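
To make the p-value statement above concrete before going further into power, here is a minimal Python sketch of a two-sided two-proportion z-test, applied to the recovery rates from the trial example (73% with the drug, 47% with placebo). The lesson never states how many patients were in each arm, so the 100 per arm used below is purely an illustrative assumption:

from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)   # pooled rate under "no real difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail area of the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical arm sizes (not given in the lesson): 100 patients each.
z, p = two_proportion_z_test(73, 100, 47, 100)
print(f"z = {z:.2f}, p = {p:.5f}")  # z ~ 3.75, p well below 0.05

With these assumed numbers, a split as large as 73% versus 47% would be very unlikely if the drug truly made no difference, so the result would be called statistically significant at the 5% level.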

  • 04:39

    SPEAKER [continued]: In other words, if there really is a difference, our study should have enough power to detect the difference with 80% probability. The more patients there are in a study, the higher its power. Since these two cells have to add up to 100% and since power is generally 80%, beta, or the probability of making a Type II error, is generally around 20%, or 0.2.
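
The link between sample size and power can be sketched with a small simulation, using the true effect from earlier in the lesson (80% recovery with the drug, 40% without). The sample sizes, simulation count, and random seed below are all illustrative assumptions:

import random
from math import sqrt, erf

def p_value(x1, n1, x2, n2):
    """Two-sided two-proportion z-test p-value (same test as sketched above)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def estimated_power(n_per_arm, p_drug=0.8, p_placebo=0.4,
                    alpha=0.05, n_sims=5000):
    """Fraction of simulated trials in which the test detects the difference."""
    hits = 0
    for _ in range(n_sims):
        x_drug = sum(random.random() < p_drug for _ in range(n_per_arm))
        x_plac = sum(random.random() < p_placebo for _ in range(n_per_arm))
        if p_value(x_drug, n_per_arm, x_plac, n_per_arm) < alpha:
            hits += 1
    return hits / n_sims

random.seed(0)  # fixed seed so the sketch is reproducible
for n in (10, 20, 30):
    print(f"n = {n:3d} per arm: estimated power ~ {estimated_power(n):.2f}")
# The estimated power climbs toward (and past) the conventional 80% as n grows.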

  • 05:01

    SPEAKER [continued]: So alpha is the probability of concluding that the treatments differ when they really don't. Beta is the probability of concluding that the treatments don't differ when they really do. Power is 1 minus beta, and that's the probability of concluding that the treatments differ when they really do. [MUSIC PLAYING]
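
Putting the closing definitions side by side, in plain notation:

alpha = P(study sees a difference  | treatments really don't differ)   = Type I error rate
beta  = P(study sees no difference | treatments really do differ)      = Type II error rate
power = 1 - beta
      = P(study sees a difference  | treatments really do differ)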

Video Info

Series Name: Interpreting Randomized Trials

Episode: 3

Publisher: Medmastery GmbH

Publication Year: 2017

Video Type: Tutorial

Methods: Clinical research, Statistical power, Sample size, Type I errors, Type II errors

Keywords: clinical research; probability errors; randomized clinical trials; sample size and power; type I errors; type II errors; validity and reliability

Segment Info

Segment Num.: 1


Abstract

Assessing power and sample size when interpreting randomized clinical trials is explained, including type I and type II errors.
