  • 00:13

    Ipsos MORI is a market research company based in the UK. And we carry out a wide range of research. So we'll do anything from helping companies build their reputation and understand their customers. We'll do political polls. We also do research to help government departments evaluate policies they've implemented. And we'll also carry out large-scale social research

  • 00:34

    surveys in order to understand long-term social trends. We're part of the wider Ipsos Group, and the Ipsos Group's got 16,000 employees, and it's a multinational company. So we've got offices in 84 countries. We're currently carrying out the Wellcome Trust Monitor Wave 3, which is a survey on behalf of the Wellcome Trust.

  • 00:55

    My name is Ethan Greenwood, and I'm the project manager for the Wellcome Trust Monitor. The Wellcome Trust funds biomedical research. It also campaigns on policy issues such as antimicrobial resistance, and it also funds resources for teachers, for science teachers. The Wellcome Trust Monitor itself is a large survey.

  • 01:17

    It's a nationally representative sample of 1,600 adults aged 18-plus, UK-wide, on attitudes towards biomedical research and science. We're working with a number of teams within the Wellcome Trust. And they're all interested in various different aspects of science and biomedical research. So for instance, there's a team that's interested in what influences people's food buying behavior.

  • 01:39

    We're also looking at people's general knowledge of certain terms within biomedicine. So do people understand the concept of antibiotic resistance? What do they understand genetically modified food to be? So there's quite a few different topics that go into the questionnaire. And some of these topics have been asked for the previous two

  • 02:00

    waves, so back since 2009-- whereas some of them are new this time. This allows us both to see how various opinions have shifted over time, but also to keep up to date and keep a handle on emerging topics. There are a wide range of biases that can come into any survey.

  • 02:20

    So you can have sampling bias, whereby the results are off purely because you've interviewed only 1,000 people or so, instead of the whole population. But you can also get other forms of bias. So you can have non-response bias, where those people you do interview happen to be different from those you don't interview.

  • 02:42

    Or you get social desirability bias, whereby people give answers in the interview that they think the interviewer might want to hear. You can even get bias from data processing mistakes, or measurement bias, whereby different people interpret questions in different ways. So our task is to take all of these different forms of bias

  • 03:03

    and try to minimize them at every single stage, so that ultimately the results we get are as accurate as possible. The Wellcome Trust Monitor is done once every three years. And other large social surveys are done more frequently than that. So they might be done annually. Some of the large ones that feed into official labor force statistics, for instance, are continuous.

  • 03:25

    And results are reported every quarter. For something like the Wellcome Trust Monitor, though, it's important to have results that can be looked at as up to date. But also, it's unnecessary to have them as frequently as annually or quarterly, given that many of the things that are looked at in the survey, whilst they'll be subject to long-term trends, they're

  • 03:46

    less likely to be subject to monthly or quarterly trends. And tracking this level of detail is of less interest to the Wellcome Trust. Carrying out more frequent surveys would also, obviously, cost more. The Wellcome Trust Monitor is designed to be a very robust survey, the results of which are shared

  • 04:10

    with academics and policymakers. And ultimately, it's not meant to be a fast turnaround survey that gives an indication of people's thoughts. It's actually meant to be an accurate, robust survey that can be used by many people, and can stand up to scrutiny. The Wellcome Trust Monitor is a survey where the aim is to do it to the highest quality, which

  • 04:32

    means the statistics produced are highly reliable. What that means is we use a random probability sampling methodology, which means we get all the addresses in the country and take a random sample of them. And then we knock on the door, and we take a random selection of adults within the household. Now in an ideal world, what we do

  • 04:52

    is select 1,600 adults living in the United Kingdom completely randomly. And they'd all agree to take part, and our interviewers would go out and interview them. And then we'd have our results. But there are three problems with this approach. So first of all, we don't have, in the United Kingdom, a population register like some Scandinavian countries do, whereby we can just pick people at random.

  • 05:14

    The second problem is that, even if we could do that, there'd be some people that, however much we tried to persuade them, wouldn't want to take part. Or maybe they just wouldn't be around. We wouldn't be able to contact them. And the third problem is that if we were to pick people completely at random, interviewers would spend most of their time traveling between addresses. And it would be a very expensive way to do fieldwork.

  • 05:37

    So to get over these problems, instead of sampling people, we actually sample addresses from the Postcode Address File, the Post Office's list of every residential address in the United Kingdom. And we select just over 3,000 addresses. And then we send interviewers out to interview one adult at each of these addresses.

  • 05:59

    We know that a certain number of people won't want to take part or won't be around. So we take this into account, and by sampling about 3,000 addresses, we know that we'll end up with about 1,600 interviews with actual individuals. So when we're selecting the actual addresses for interviewers to go out to, we do this in two stages.
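The gross-up arithmetic behind these figures is simple to sketch. The response rate below is an assumption, chosen only so the numbers reproduce what's quoted in the video (roughly 1,600 interviews from just over 3,000 addresses); the actual rate isn't stated.

```python
import math

# Assumed response rate: not stated in the video, chosen so the
# arithmetic reproduces the quoted figures (~1,600 interviews from
# just over 3,000 issued addresses).
target_interviews = 1600
assumed_response_rate = 0.53

addresses_to_issue = math.ceil(target_interviews / assumed_response_rate)
print(addresses_to_issue)  # 3019 -- "just over 3,000 addresses"
```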

  • 06:19

    So first of all, we'll select 129 postcode sectors throughout the country. And each of these postcode sectors will contain around 2,000 to 2,500 addresses. But they're relatively small in area. And then, within each of these postcode sectors, we select 25 addresses. And then, these are the addresses where our interviewers go to try to get interviews.
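The two-stage selection just described can be sketched as follows. This is a simplification: real designs typically stratify the sectors and select them with probability proportional to size, details the video doesn't go into, so plain random sampling stands in for stage one here.

```python
import random

def two_stage_sample(sectors, n_sectors=129, n_addresses=25, seed=1):
    """Sketch of the two-stage design: first pick postcode sectors,
    then pick addresses within each chosen sector.

    `sectors` maps a sector name to its list of addresses.
    """
    rng = random.Random(seed)
    # Stage 1: select postcode sectors (simple random sampling here).
    chosen = rng.sample(sorted(sectors), n_sectors)
    issued = []
    # Stage 2: select addresses within each chosen sector.
    for sector in chosen:
        issued.extend(rng.sample(sectors[sector], n_addresses))
    return issued
```

With the parameters described in the video, this issues 129 × 25 = 3,225 addresses, consistent with "just over 3,000".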

  • 06:42

    So each interviewer is actually working in a fairly small geographic area, which makes fieldwork efficient, in that in between addresses they don't have to spend all their time traveling around the country. Another issue we have is that when interviewers actually arrive at a household, there might be one adult living there, or there might be two or more.

  • 07:03

    And it's very important that the adult that's interviewed is randomly selected. If, for instance, we just allowed any adult in the household to take part, we might find a situation where someone that's really interested in science elects to take part. And all of our results would be biased in favor of people who are more interested in science. So on arrival at a household, the interviewer

  • 07:24

    will first enumerate all the adults living in the household. And then, a random adult will be selected. And the interviewer will try to get the interview with that adult, regardless of whether the adult is there or might refuse to take part. So the methodology of using what we're calling a random probability sampling

  • 07:45

    method, where we actually pre-select addresses, send interviewers out, and try to get as high a response rate as we can, that's essentially the gold standard of research in terms of getting robust results. It's much more robust than an alternative strategy, such as quota methodology, where you can just essentially

  • 08:05

    go and interview anyone you find, as long as you interview the right proportion of males, females, those in different regions, et cetera. We also decided on a face-to-face methodology because some of the concepts discussed in the interview are actually quite complicated. So we're asking about people's understanding

  • 08:27

    of various scientific terms. We even ask a question to gauge whether people can understand what might be the best method of testing whether a particular drug works. And in order to administer these questions properly and get accurate responses, it's much better to actually have an interviewer there face-to-face,

  • 08:48

    in order to get a good rapport between the respondent and the interviewer. And they use a method called CAPI, which is Computer-Assisted Personal Interviewing, in which they have touch-screen tablets. And they'll go in, and the computer will essentially guide them through the questions that they need to ask. That basically means you can make

  • 09:08

    valid statistical inferences from the data you collect, which generally you can't from online web panels or quota surveys, strictly speaking. In terms of the sampling, my main focus is ensuring we have actually followed strict random probability principles when drawing the sample. The reason for that is the random probability sampling

  • 09:30

    methodology is the bedrock from which the inferences about the wider population can be drawn. If you get that wrong, then strictly speaking, any inferences you draw are indicative only, and aren't necessarily representative of the population. We're interviewing in all four countries of the United Kingdom

  • 09:50

    in proportion to population. If we do that, we'll find that, naturally, we might only get about 8% of our interviews in Scotland, because proportionately, that's how many people live in Scotland compared to the rest of the country. So this gives us survey estimates that are perfectly representative

  • 10:12

    at the national level. But then, let's say we're specifically interested in what people in Scotland think, and we want to look at them in particular and maybe do some sort of analysis and look at what do people who are older in Scotland think, versus people that are younger. Our sample size simply won't be large enough for us to make any reliable conclusions.

  • 10:33

    So what's sometimes done on surveys is that you'll do a boost sample, whereby you'll interview all around the country, but then you might do a large number of interviews in Scotland. So you might do 200 or 300 just so you can get more accurate results on that level. So all surveys are subject to sampling error.
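When a boost sample is used, region-level results can be reported from the larger subsample, but UK-level estimates then need design weights to undo the over-representation. A minimal sketch: the 8% population share for Scotland is quoted in the video, while the 20% boosted sample share is an assumption for illustration.

```python
# Illustrative design weights for a boosted region. Scotland's ~8%
# population share is quoted in the video; the 20% boosted sample
# share is an assumed figure for illustration only.
population_share = {"Scotland": 0.08, "Rest of UK": 0.92}
sample_share = {"Scotland": 0.20, "Rest of UK": 0.80}

# Weight = population share / sample share: over-represented regions
# get a weight below 1, under-represented regions above 1.
weights = {region: population_share[region] / sample_share[region]
           for region in population_share}
print(weights)  # Scotland respondents each get weight 0.4
```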

  • 10:55

    And sampling error describes the amount that a survey estimate is out from the true population value purely as a result of interviewing just a certain smaller number of people, rather than interviewing absolutely everybody. For instance, if we do a survey of 1,600 people, and we find that 70% of those people say

  • 11:17

    they're interested in science, we can say with 95% confidence that the true proportion of people that are interested in science in the population is plus or minus two percentage points from that survey estimate. So we can say it'll be between 68% and 72%.
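The plus-or-minus-two-points figure follows from the standard normal-approximation formula for a 95% confidence interval on a proportion:

```python
import math

n = 1600       # achieved sample size
p_hat = 0.70   # sample proportion saying they're interested in science
z = 1.96       # multiplier for 95% confidence

# Margin of error = z * standard error of the proportion.
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(round(margin, 3))  # 0.022 -- roughly 2 percentage points
print(round(p_hat - margin, 2), round(p_hat + margin, 2))  # 0.68 0.72
```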

  • 11:37

    You need a relatively small sample in order for the confidence intervals to be just two or three percentage points out. So for instance, a survey of 1,000 or 1,600 people, whether it's done in a small country like Wales or a large country like China, will still

  • 11:58

    produce survey estimates that are equally accurate on the basis of sampling error. Everyone that takes part in the survey is given a 10-pound gift card after they've completed it. And they'll also receive a letter before an interviewer visits, describing the survey, the sorts of things they'll be asked, and telling them that they'll be given those gift cards.
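The point above, that accuracy depends on sample size rather than population size, can be checked with the finite population correction, which is the only way population size enters the margin-of-error formula. The adult-population figures below are rough, illustrative assumptions.

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """95% margin of error for a proportion, including the finite
    population correction (the only term population size affects)."""
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# The same n = 1,600 survey in a small and a very large country.
# Population figures are rough assumptions for illustration.
small = margin_of_error(1600, 2_500_000)      # Wales-scale
large = margin_of_error(1600, 1_100_000_000)  # China-scale
print(small, large)  # virtually identical, about 0.0245 in both cases
```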

  • 12:20

    And the research generally finds that giving an incentive like this increases the overall response rate by several percentage points. And this is important to minimize the bias in the survey. So by offering an incentive, we encourage those people that may be less interested in the topic to take part. And in the absence of that, we might

  • 12:41

    find that we've interviewed predominantly those people that just happened to be interested in science and interested in doing the survey. And that has the risk of actually biasing the results we report. Hey. How are you? Good to see you, how are you? Good. Good, thanks. Great. Thanks for coming. No problem.

  • 13:04

    So I just want to discuss the cognitive interviews-- Yeah. Absolutely. --from the questionnaire. Questions we ask about food and drink, about what are the issues that guide people's choices when they choose food in the supermarket. There were some challenges around that, weren't there? We've just finished the setup phase of the survey, and this has primarily involved agreeing the questionnaire

  • 13:25

    that we're going to use. So there's been a lot of discussions between us and the Wellcome Trust, first of all, to decide what sort of topics they're interested in exploring this wave, and also in deciding how many questions they want to keep from previous waves so that they can track trends over time. And how many questions they want that are completely new so

  • 13:46

    that they can keep up to date. This has involved quite a lot of discussions and testing. We've gone out and done cognitive interviews, which involve asking members of the public to answer questions we come up with, but then also describe the questions in their own words, tell us what they're thinking as they answer them,

  • 14:07

    give us feedback on whether they thought it made sense to them, and we actually look at the answers they give. So that in some instances we can say, well actually, we thought this question might work-- but actually, people aren't really understanding it, and we need to refine it. For us a successful methodology is one that minimizes the amount of bias and produces the most accurate results possible, representative

  • 14:29

    of the United Kingdom, in this case, but in the most efficient manner in terms of cost and fieldwork resource.

Video Info

Publisher: SAGE Publications Ltd.

Publication Year: 2017

Video Type: In Practice

Methods: Survey research, Random sampling, Computer-assisted personal interviews, Sampling error

Keywords: accuracy; attitudes; challenges, issues, and controversies; public opinion; rapport

Segment Info

Segment Num.: 1



Practitioner Tom Huskinson describes the sampling methods and incentives Ipsos MORI uses in conducting the Wellcome Trust Monitor survey. He explains that these methods are used to minimize bias and error.


Survey Research: Nationally Representative Surveys with Ipsos MORI

