Overview of Sample Size and Power Calculations
  • 00:00

    [MUSIC PLAYING]

  • 00:14

    RICHARD PARKER: Hello, my name is Richard Parker. I'm a senior statistician at the Edinburgh Clinical Trials Unit at the University of Edinburgh. My background is in applied medical statistics. So in this video, I'll give an overview of sample size calculations and clarify the various components of a sample size calculation.

  • 00:35

    RICHARD PARKER [continued]: I will also give some tips and key points to consider for successfully generating valid sample size calculations. This is based mainly on my own experience in applied medical statistics.

  • 00:57

    RICHARD PARKER [continued]: So sample size determination is one of the most important aspects of the design of any research study, and many of these studies will require a formal sample size calculation. And the main purpose of a sample size calculation, at the study design stage, is to tell us exactly how many participants we need to recruit to our study.

  • 01:18

    RICHARD PARKER [continued]: So it's not very helpful if the study has already finished recruitment. In that case, a sample size calculation is not very useful. The sample size calculation needs to be thought about as early as possible in the research process. And an appropriate and valid sample size calculation

  • 01:39

    RICHARD PARKER [continued]: is necessary for us to achieve an ethical study design, and a wrong sample size may lead to negative ethical consequences. So there is an ethical mandate for a good sample size calculation. So we want to avoid two extremes. One is having a sample size that is too large,

  • 02:01

    RICHARD PARKER [continued]: and therefore this constitutes a waste of resources. In this case, resources would have been better spent elsewhere, and additional participant burden could have been avoided. On the other hand, we don't want a sample size that is too small, such that we're not able to detect clinically important differences in the population, which

  • 02:23

    RICHARD PARKER [continued]: is also unethical. So it's a question of balance: having a sample size that is not too small and not too large. Sample size calculations are usually based on the primary outcome in a study, but they can take many different forms. For example, you can construct a sample size calculation

  • 02:44

    RICHARD PARKER [continued]: based on the width of a 95% confidence interval or, as is more commonly done, a formal power calculation. Here, we are interested in having enough statistical power to detect certain effects, for example, a difference between groups, and concluding that this is a real effect in the population.
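
    As a minimal sketch of the confidence-interval-width approach mentioned above, the following Python snippet (an illustration added here, not from the video; the standard deviation and half-width values are assumed purely for illustration) chooses the number of participants per group so that the 95% confidence interval for a difference in means has no more than a chosen half-width, using the usual normal approximation.

        # Precision-based sample size: choose n per group so that the 95% confidence
        # interval for the difference in means has at most a given half-width.
        # sigma and half_width below are illustrative assumptions, not from the video.
        from math import ceil
        from scipy.stats import norm

        def n_per_group_for_ci_width(sigma, half_width, conf_level=0.95):
            """Normal-approximation sample size per group for a two-group mean difference."""
            z = norm.ppf(1 - (1 - conf_level) / 2)          # e.g. 1.96 for a 95% interval
            return ceil(2 * (z * sigma / half_width) ** 2)  # variance of the difference is 2*sigma^2/n

        print(n_per_group_for_ci_width(sigma=7.5, half_width=2.0))  # about 109 per group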

  • 03:06

    RICHARD PARKER [continued]: Power calculations always correspond to hypothesis testing. So if we are interested in testing a null hypothesis of no difference between groups against an alternative hypothesis that there is a difference between treatment groups in a clinical trial, we will invariably need to specify a power calculation at the study design

  • 03:29

    RICHARD PARKER [continued]: stage. So let's consider an example of a power calculation. Suppose we're interested in designing a trial to assess the effectiveness of an antihypertensive treatment

  • 03:49

    RICHARD PARKER [continued]: in patients with type 2 diabetes. Further suppose that our primary outcome is diastolic blood pressure. Then how do we go about constructing a sample size calculation? So let's look at each of the key ingredients in turn, or more formally, the sample size parameters

  • 04:12

    RICHARD PARKER [continued]: that we will need for this calculation. So first of all, the power. This is the probability of rejecting the null hypothesis given that it is false. It's the probability of concluding that there is a difference between groups given that there really is one. So this is 1 minus the type 2 error.

  • 04:33

    RICHARD PARKER [continued]: So when we say type 2 error, we mean that we fail to detect a difference between groups given that there really is one. So the power, the statistical power, is 1 minus that type 2 error. And the type 1 error, or significance level, this

  • 04:53

    RICHARD PARKER [continued]: is the probability of rejecting the null hypothesis given that it is true. It's the probability of concluding that there is a difference between groups given that there isn't one. Usually, we choose the power to be 90% and the significance level to be 5%,

  • 05:14

    RICHARD PARKER [continued]: but there's no obligation to choose these values.Sometimes it helps when we're constructing sample sizecalculations to be flexible with these values,they don't have to be fixed.The exact values for each study willdepend on the study context and the relative importanceof type 1 and type 2 error.

  • 05:37

    RICHARD PARKER [continued]: For early phase or exploratory studies, we have to be careful that the type 1 error rate is not set to be too small. Sometimes even 20% may be appropriate for an exploratory study. We also specify a two-sided significance level, corresponding to two-sided

  • 06:00

    RICHARD PARKER [continued]: hypothesis testing. Here, we're interested in an effect in either direction. So given that our outcome, diastolic blood pressure, is continuous, we also need an estimate of the expected standard deviation of the outcome within each group. The big question is, where do we get a reliable estimate of this?

  • 06:22

    RICHARD PARKER [continued]: Well, the best place to look is in previous publications of studies recruiting from similar populations. In our case, we need to look at previous study reports involving diabetic patients that include diastolic blood pressure as an outcome. If we are confident we have a reliable estimate of this,

  • 06:43

    RICHARD PARKER [continued]: then we can use this in the calculations. How confident we can be about the reliability of the estimate depends on when the study was conducted, and also the similarity of the patient population and the study procedures. But if there are no similar studies, or if we're doubtful about the validity of the estimate,

  • 07:05

    RICHARD PARKER [continued]: then it is best to design and run a pilot study first to estimate this key quantity. And how big should the pilot study be? Well, there's lots of guidance regarding this, but for continuous outcomes, 35 participants per group has been recommended. If our outcome is binary, for example a yes or no outcome,

  • 07:30

    RICHARD PARKER [continued]: then we need some idea of the expected proportion with the event. But if we don't know this, then one solution may be to assume a proportion close to 0.5, or 50%, in each group, because this conservatively generates the maximum possible sample size in this case.
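
    The following Python sketch (an added illustration; the 10-percentage-point difference, 90% power, and 5% two-sided significance level are assumed values, not from the video) shows why proportions near 0.5 are conservative: the variance term p(1 - p) is largest at 0.5, so the required sample size per group from the usual normal-approximation formula is largest there.

        # Why assuming proportions near 0.5 is conservative for a binary outcome:
        # p*(1-p) peaks at p = 0.5, so the required n per group is largest there.
        # The 10-percentage-point difference, 90% power and 5% alpha are illustrative.
        from math import ceil
        from scipy.stats import norm

        def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.90):
            """Simple normal-approximation sample size per group for comparing two proportions."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

        for p1 in (0.1, 0.3, 0.5):
            print(p1, n_per_group_two_proportions(p1, p1 + 0.10))
        # The required n rises towards p1 = 0.5 (roughly 263, 473 and 515 per group)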

  • 07:50

    RICHARD PARKER [continued]: Now, one of the most important components of the sample size calculation is called the target difference. This is the true difference, or true effect, that we want to detect in the population. So in my experience, this difference is often misunderstood. People often think it means the observed difference, or the difference we have seen in previous studies,

  • 08:13

    RICHARD PARKER [continued]: but it's not. It's the true effect we want to detect in the population. And we're aiming for a true difference that is not just clinically relevant but, ideally, the smallest value of undisputed clinical importance. For the target difference, we want a difference that is small enough to be realistic,

  • 08:34

    RICHARD PARKER [continued]: but also large enough to be clinically relevant. So clinical relevance is really important. It is unethical to power a trial on the basis of a clinically irrelevant difference, and this would be a waste of resources. Sometimes, if we don't know what the target difference is,

  • 08:56

    RICHARD PARKER [continued]: it can be instructive to construct sample size calculations based on a range of differences, excluding those that we know are definitely not clinically relevant. And the target difference requires clinical input. It cannot be derived solely with reference to statistics.

  • 09:17

    RICHARD PARKER [continued]: In the case of our example, after discussion among the investigators, we may settle on a difference in diastolic blood pressure of two, for example. Then that's it, we have all the ingredients we need for the sample size calculation. So in our example, where we're

  • 09:38

    RICHARD PARKER [continued]: designing a clinical trial in diabetic patients, our sample size calculation looks like this. We need a sample size of 300 patients per group, which will give us 90% power to detect a mean difference in diastolic blood pressure of two, assuming a standard deviation of 7.5

  • 10:00

    RICHARD PARKER [continued]: and a two-sided 5% significance level. So it's just like baking a cake. You have all these ingredients that you need to combine in order to produce a cake. So when performing sample size calculations, you need to combine all the right ingredients, for example, power, significance level, target difference,

  • 10:21

    RICHARD PARKER [continued]: to produce the sample size target. And the ingredients may differ depending on the outcome, analysis method, and context of the study. And also, extending this metaphor further, the cake is only as good as the ingredients used to produce it. For sample size calculations, we need reliable estimates

  • 10:43

    RICHARD PARKER [continued]: of the input parameters used to construct it. And if we don't have reliable estimates, then we could either perform a pilot or feasibility study to produce better estimates, or we could perform a sensitivity analysis on our sample size calculation, for example, varying the parameters to see how the resulting sample size changes.
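
    As a rough sketch of this kind of sensitivity analysis, the Python snippet below applies the standard normal-approximation formula for comparing two means to the example's values (standard deviation 7.5, target difference 2, 90% power, two-sided 5% significance level) and then varies the standard deviation and target difference. The formula is the usual textbook approximation rather than necessarily the exact method behind the video's figure; it gives roughly 296 per group, close to the 300 quoted.

        # Normal-approximation sample size per group for comparing two means,
        # followed by a simple sensitivity analysis varying the SD and target difference.
        # Parameter values follow the video's example; the SD and difference grids are illustrative.
        from math import ceil
        from scipy.stats import norm

        def n_per_group(sd, target_diff, alpha=0.05, power=0.90):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return ceil(2 * (z * sd / target_diff) ** 2)

        print(n_per_group(sd=7.5, target_diff=2.0))      # base case, about 296 per group

        for sd in (7.0, 7.5, 8.0, 8.5):                  # sensitivity to the assumed SD
            for diff in (1.5, 2.0, 2.5):                 # sensitivity to the target difference
                print(f"SD={sd}, diff={diff}: n per group = {n_per_group(sd, diff)}")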

  • 11:13

    RICHARD PARKER [continued]: Now, to actually implement the sample size calculation in practice, books with formulae can help us, or there are lots of online resources, or we could use sample size software, such as nQuery or G*Power, or other sample size software packages. So details and references for these are given at the end of this video.

  • 11:36

    RICHARD PARKER [continued]: So sometimes our study design or analysis method is very unusual, and so we might need to use simulation methods to derive an appropriate sample size. So this involves generating realistic artificial data using statistical software packages, and artificially imposing a difference.

  • 11:59

    RICHARD PARKER [continued]: For example, for our clinical trial in diabetic patients, we could generate two groups of artificial diastolic blood pressure values with a standard deviation of 7.5, which differ by 2, then apply the analysis method, for example, a t-test, and then find out what power we

  • 12:21

    RICHARD PARKER [continued]: get from doing this. We can then use methods like this to derive an appropriate sample size. So simulation methods are sometimes really useful if we just can't find the formula in books or if there's no function in sample size software.
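
    A minimal Python sketch of this simulation approach is shown below. The group size of 296, the 5,000 repetitions, the arbitrary mean of 80, and the use of scipy's independent-samples t-test are illustrative assumptions rather than details from the video.

        # Simulation-based power estimate for the diastolic blood pressure example:
        # simulate two normal groups (SD 7.5) differing by 2, apply a t-test,
        # and estimate power as the proportion of p-values below 0.05.
        # n_per_group, n_sims and the group means are illustrative choices.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(seed=1)
        n_per_group, n_sims, alpha = 296, 5000, 0.05
        significant = 0
        for _ in range(n_sims):
            control = rng.normal(loc=80, scale=7.5, size=n_per_group)   # mean of 80 is arbitrary
            treated = rng.normal(loc=78, scale=7.5, size=n_per_group)   # 2 units lower on average
            if ttest_ind(control, treated).pvalue < alpha:
                significant += 1
        print(f"Estimated power: {significant / n_sims:.2f}")  # expect roughly 0.90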

  • 12:49

    RICHARD PARKER [continued]: Sensitivity analyses are very useful to perform after we've done the sample size calculation. So, for example, if only a very tiny increase in our target difference leads to an enormous reduction in our required sample size, then we may need to consider

  • 13:10

    RICHARD PARKER [continued]: whether that tiny extra increase in the target difference is really worth it. Also, if the required sample size increases dramatically with a tiny increase in our assumption about the expected standard deviation of the outcome, then we may need to consider if we need a larger sample

  • 13:30

    RICHARD PARKER [continued]: size for protection if, for example, our standard deviation is higher than expected. And graphical methods are really useful to determine how the sample size changes if we vary the input parameters. For example, this graph shows the sample size per group plotted against the target difference

  • 13:52

    RICHARD PARKER [continued]: for the diastolic blood pressure sample size example. Here, we see that the required sample size is really sensitive to values of the target difference when they approach one. In the same way, we can generate a graph to show how the sample size changes when plotted against our assumption

  • 14:14

    RICHARD PARKER [continued]: about the standard deviation. So I hope this video helps to clarify the key points associated with sample size calculations. For further information on overcoming obstacles in sample size calculations, please see my article in SAGE Research Methods Cases,

  • 14:34

    RICHARD PARKER [continued]: called Overcoming Obstacles to Deriving Sample Size Calculations. There is also a list of further resources and references in this article. And most general statistical textbooks contain sections on sample size calculations. There are also some dedicated books

  • 14:56

    RICHARD PARKER [continued]: on sample size calculations. For example, see books and articles written by Professor Steven Julious. There are also free online power calculators, such as Sealed Envelope, and dedicated sample size software, such as the nQuery sample size software or G*Power.

  • 15:16

    RICHARD PARKER [continued]: So you will see some references at the end of the video. Thanks so much for watching this, and I hope you found it helpful.

  • 15:37

    RICHARD PARKER [continued]: [MUSIC PLAYING]

Abstract

Richard Parker, Senior Statistician at the University of Edinburgh, discusses sample size and power calculations, including sensitivity analysis, the importance of sample size calculations, and how to implement them.
