Harvey Goldstein Discusses Multilevel Modeling
  • 00:05

    I'm Professor Harvey Goldstein. I'm Professor of Social Statistics at the University of Bristol. Well, I did a degree in pure mathematics, and then decided during the course of that degree that I was really interested in statistics, which was a much more applied discipline than pure mathematics.

  • 00:31

    And so I started off, after doing some time as a research assistant at University College London, in medical statistics. So for about seven years, I worked in a pediatric institute, the Institute of Child Health, working on child growth and development issues. And during that time, I also started working on a very large British cohort study, the National Child Development Study, which many people will know.

  • 01:01

    It's still going. And that introduced me to the social-educational statistical side of those large-scale surveys. It started out life in 1958 as a birth study, so they were looking at births and particularly mortality around the time of birth.

  • 01:26

    And there was never any intention to follow up that study. And then the Plowden committee that was looking at primary schools came along-- this is in the mid-1960s-- and they wanted some information about primary school children, and their achievements, and so on. And there was this group who'd all been surveyed, 17,000 of them, in 1958, so they were then age seven.

  • 01:52

    So the decision was taken to put some money in to go back to that cohort of children. And that work was undertaken at the National Children's Bureau, and I was seconded part-time to be the statistician on that project. Yes, it was.

  • 02:14

    I'd already got an interest in social statistics, but it was quite a jump. So for a time, I was covering medical and social-educational statistics. It was an interesting time for me. But, of course, it was the late 1960s, and we had very, very primitive computing facilities then.

  • 02:37

    So it was quite a different era. The problems were quite different to what they are now. It was literally impossible. Not only could you not do it, but you couldn't even think about how you might do it. Because the technology to support all that had not been invented.

  • 02:59

    And it's one of the interesting things that methodology very often follows technology. The existence of the technology encourages you to think about new ways of doing things that you would never have sat down and done before the techniques were there to do it, before the machinery was there to do it.

  • 03:26

    Well, basically what we were doing was multiple regression. Almost all of it was multiple regression. And then we thought it was a real breakthrough when we started doing logistic regression, which, in those very early days, around about 1970, very few people had heard of. It was a very new technique. And we thought it was great that we were on the frontier of the application of this new technique, which, of course, is now standard.
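
A minimal sketch of the kind of logistic regression described here, fitted with Python's statsmodels on invented pupil data; the variable names and values are illustrative, not taken from the studies mentioned.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pupil-level data: whether each pupil passed an exam,
# plus a standardized social-background score. Illustrative only.
df = pd.DataFrame({
    "passed": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0],
    "ses":    [0.8, -1.2, 0.3, 1.5, -0.7, 0.9, -1.5, 0.2, -0.4, 1.1, 0.5, -0.9],
})

# Logistic regression: model the log-odds of passing as a linear
# function of social background.
model = smf.logit("passed ~ ses", data=df).fit()
print(model.summary())
```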

  • 03:57

    Well, let me say what I think it was we were doing, first of all. We had enough computing power to do multiple regression, to look at relationships, for example, between educational achievement and social background, and to introduce covariates and confounders into the model to see if we could explain that relationship, and how much variation was explained.
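
In the same spirit, a hedged sketch of that workflow: regress attainment on social background, then add a confounder and compare the coefficient and the explained variation. The data are simulated and every variable name is invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated stand-in for cohort data: social background, a correlated
# confounder (parental education), and an attainment outcome.
ses = rng.normal(size=n)
parental_edu = 0.6 * ses + rng.normal(size=n)
attainment = 0.5 * ses + 0.4 * parental_edu + rng.normal(size=n)
df = pd.DataFrame({"attainment": attainment, "ses": ses,
                   "parental_edu": parental_edu})

# Attainment on social background alone...
m0 = smf.ols("attainment ~ ses", data=df).fit()
# ...then with the confounder added, to see how much of the
# relationship it accounts for and how the explained variation changes.
m1 = smf.ols("attainment ~ ses + parental_edu", data=df).fit()

print(f"SES alone:    b = {m0.params['ses']:.2f}, R^2 = {m0.rsquared:.2f}")
print(f"SES adjusted: b = {m1.params['ses']:.2f}, R^2 = {m1.rsquared:.2f}")
```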

  • 04:26

    So we were able to do, as it were in embryo, a lot of things that are still the staple of social data analysis. So we did it a bit, perhaps, quite crudely and, in terms of the time taken, long-windedly, but we were able to do all those things.

  • 04:48

    So we were learning the hard way. It would often take hours to do an analysis which would now take a fraction of a second. But, of course, what we weren't able to do was to do a lot of experimentation, exploring different models and such, simply because of the time it would have taken to do that.

  • 05:09

    So we had to do a lot more thinking about what variables we needed to put in our models, what the important variables were. Exploratory techniques-- deciding out of a list of 3,000 variables which 20 variables should go into the model-- that wasn't an option for us.

  • 05:32

    Yeah, it was a large data set. And it's still a large data set, but it was a gigantic data set in those days. Oh, well, you've got to fast forward a long time. I left the National Children's Bureau, where I eventually finished up after the Institute of Child Health, in 1977, and I went to the Institute of Education to take up a chair in statistics.

  • 05:59

    And it was during that period that I began to get interested in multilevel models. But it wasn't until the mid-1980s when we were able to conceptualize and formally do multilevel modeling, write down the models, and actually generate computer software to do the analysis.

  • 06:20

    Now I, of course, wasn't the only one doing this. There were probably two or three groups around the world in different places that were trying to tackle this issue. And it is to do with what was then called the unit of analysis problem. So you've got data, educational data, and traditionally, you analyze relationships using individual level data.

  • 06:46

    You have a collection of individuals, and you look at the relationship between educational attainment, social background, birth weight, whatever it happens to be. And in education, you've also got higher-level units. You've got schools. So do you work at the level of the school? Do you look at school averages and relate school averages to each other?

  • 07:07

    Or do you in some way use a combination of both? And it was that kind of debate, which had been around in the literature for maybe 10 or 15 years totally unresolved, that quite separately several of us saw a solution to, which eventually became multilevel modeling. And I won't forget the day at a conference when we all got together and presented our different approaches, and discovered we got the same answers.

  • 07:36

    That was an interesting time, and it took off from there. Let me think of a typical research student. You've got data, as I said a little earlier, which consists of data on individual pupils.

  • 07:59

    Let's keep an education example. But these pupils don't behave independently because they're educated together. So the fact that you know two people belong to the same school already tells you something about their likely attainment, for example.

  • 08:21

    Why? Because schools are selective. You're looking at secondary schools. If you've got a selective school system, even if you haven't, some schools will tend to select a high-achieving intake, and others a low-achieving intake, and so on. And that, of course, will affect what happens to them during school and in terms of things like exam results.

  • 08:42

    So the knowledge of which institution they learn in is information which will tell you something about the student. And so what you want to do when you're doing your analysis is to use that information-- or in other words the knowledge that these children are educated in school A, and those children are educated in school B-- to inform your analysis, for two reasons.

  • 09:08

    First reason is if you don't use it, you'll come to the wrong conclusions. In statistical terms, you get the wrong standard errors. Now that's a problem that had been known for a long time, but nobody quite knew how to deal with it. And there were kind of ways of fixing it up, but they weren't very satisfactory. So that's the first problem you come upon.

  • 09:29

    The second issue is really more important. If you incorporate in your statistical model information about the institution they're in, actually your model results will then tell you something about what happens in different institutions, about differences between different institutions.

  • 09:54

    Because the information, the identity of the institution, is there in your model, and that allows you to look, for example, at how much of the variation in achievement is due to being in different schools, as opposed to being different individuals. What proportion of variation is at the school level, as opposed to the individual level?
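
A minimal sketch of that variance partitioning in Python's statsmodels, on simulated data (all names and numbers are illustrative): fit a random-intercept model and compute the share of variance lying between schools, often called the variance partition coefficient or intraclass correlation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, pupils_per_school = 30, 40

# Pupils nested in schools: a school-level effect plus pupil-level noise.
school = np.repeat(np.arange(n_schools), pupils_per_school)
school_effect = rng.normal(0, 1.0, n_schools)[school]
attainment = 50 + school_effect + rng.normal(0, 2.0, len(school))
df = pd.DataFrame({"attainment": attainment, "school": school})

# Two-level model: random intercept for each school.
result = smf.mixedlm("attainment ~ 1", df, groups=df["school"]).fit()

# Proportion of total variance attributable to schools rather than
# to differences between individual pupils.
between = result.cov_re.iloc[0, 0]  # school-level variance
within = result.scale               # pupil-level residual variance
print(f"Share of variance at school level: {between / (between + within):.2f}")
```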

  • 10:15

    So it opens up a new perspective on educational research, on talking about the influences on things like attainment, behavior, and so on. And it's not just education. These models apply to a whole variety of uses, in the social and, indeed, the natural and medical sciences.

  • 10:39

    And we began to realize that during the 1980s as these models began to be applied in quite different contexts. A rather special case of two-level data is longitudinal data, where you have repeated measurements on the same individuals. So if you like, the lowest level is the repetition of the measurement-- you think of, for example, measuring children's heights over time-- and the higher level is the individual child.
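
The repeated-measures case can be written as the same kind of two-level model, with occasions nested in children. A sketch with simulated heights (illustrative values throughout), giving each child a random intercept and a random growth rate:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, n_occasions = 50, 5

# Level 1 = measurement occasion, level 2 = child.
child = np.repeat(np.arange(n_children), n_occasions)
age = np.tile(np.arange(n_occasions), n_children) + 5.0
start = rng.normal(110, 4, n_children)[child]  # child-specific intercept
rate = rng.normal(6, 0.5, n_children)[child]   # child-specific growth rate
height = start + rate * (age - 5) + rng.normal(0, 1, len(child))
df = pd.DataFrame({"height": height, "age": age, "child": child})

# Random intercept and random slope for age, grouped by child.
growth = smf.mixedlm("height ~ age", df, groups=df["child"],
                     re_formula="~age").fit()
print(growth.summary())
```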

  • 11:08

    So as with children within schools, you've got measurements within children, but those children themselves might be grouped within schools or within other units-- if it's a medical example, within hospitals. And so you've added a third level to the hierarchy. But, of course, you may also have what we call cross-classified data.

  • 11:32

    So going back to education, children not only belong to different schools. They belong to different neighborhoods. So if the neighborhoods have an influence on their performance through the social characteristics of the neighborhood, for example, then you want to take account of both the neighborhood that they live in and the school they go to.

  • 11:53

    But these are not hierarchically nested. They're crossed, because any given school will draw children from different neighborhoods, and from a given neighborhood, children go to different schools. So it's a cross-classification of those units. And the realization that we needed to deal with those kinds of data came a little bit later, and so we became interested in developing models for cross-classifications.
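
In notation (a generic sketch, not a formulation quoted from the interview), a cross-classified model gives pupil i a random effect for each of the two crossed classifications:

```latex
% Pupil i attends school s(i) and lives in neighborhood n(i);
% schools and neighborhoods are crossed, not nested.
y_i = \beta_0 + \beta_1 x_i + u_{s(i)} + v_{n(i)} + e_i,
\qquad u_s \sim N(0, \sigma_u^2), \quad
       v_n \sim N(0, \sigma_v^2), \quad
       e_i \sim N(0, \sigma_e^2)
```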

  • 12:19

    And then you have a further complication, because children don't-- as would be very nice for statisticians-- stay in the same school during their careers. They move schools. And how do you cope with that? You need to take all the schools they've been to into account, and that led us to develop what is known as multiple membership models. There's been plenty of other things to do.
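
The multiple membership idea can be sketched, again generically, by replacing the single school effect with a weighted sum over every school the pupil attended, the weights (for example, fractions of time spent in each school) summing to one:

```latex
% S(i) is the set of schools pupil i attended; w_{ij} is the weight
% given to school j, with \sum_{j \in S(i)} w_{ij} = 1.
y_i = \beta_0 + \beta_1 x_i + \sum_{j \in S(i)} w_{ij}\, u_j + e_i,
\qquad u_j \sim N(0, \sigma_u^2), \quad e_i \sim N(0, \sigma_e^2)
```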

  • 12:43

    We thought around the early 1990s that maybe we'd done all that needed to be done, and we looked for an exit strategy from all of this. And then, of course, we went on to elaborate further, and we're still doing that. It proved to be a very fertile field of study that opened up all sorts of other things that were both interesting and important to do to address the complexity of real-life data.

  • 13:16

    They're all predictors. And then most recently we've been looking particularly at ways of not just modeling, as it were, the predicted values of something like attainment, according to the schools you're in, but looking at ways of modeling the amount of variation between children.

  • 13:40

    So, for example, in one particular school, there may be much smaller variation in attainment than in another school. Girls, we know, are less variable than boys in terms of their achievements. And traditionally, statistical models have looked at average differences between boys and girls, for example, but not at the differences in variability.
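
One common way to write such a model, offered here only as a sketch of the general idea, lets the level-1 variance itself depend on covariates, for example on sex, with a log link keeping it positive:

```latex
y_i = \beta_0 + \beta_1 x_i + e_i,
\qquad e_i \sim N(0, \sigma^2_{e,i}),
\qquad \log \sigma^2_{e,i} = \alpha_0 + \alpha_1\, \mathrm{girl}_i
```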

  • 14:07

    But if you start thinking about it, those are actually quite interesting things to do. Is there more homogeneity in some situations than in others? So modeling the variability is actually, in some ways, more important than modeling the means. And it expresses itself in what we call segregation models, which are to do with how much variation there is, for example, in the proportion of ethnic minorities in different schools.

  • 14:37

    That's essentially looking at the variation between schools, and seeing how that might change over time. So we're increasingly interested in exploring in that direction. I guess in the late 1980s, we started getting into league tables, school league tables, which everyone's heard about, and we'd been thinking about this for some time.

  • 15:12

    And we were always concerned about this because, as many people realize, there's a selection factor. If you start comparing schools just in terms, for example, of exam results, then to a large extent, you're reflecting the intake, or the selective nature of the intake, of the schools, so there's no level playing field. So we started talking about so-called value-added analyses, which say, OK, let's adjust for selection factors.

  • 15:38

    Let's adjust for the intake of the schools. And we would expect that to pretty well account for differences between schools. It turns out it doesn't. It reduces the differences between schools, but there's still something going on. Now it may be that our models are misspecified.
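
A minimal sketch of a value-added analysis of this kind, on simulated data with invented names: adjust exam results for intake by conditioning on prior attainment, and read the remaining school effects as value-added estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_schools, pupils = 25, 60

# Exam score depends on prior attainment (the intake/selection factor)
# plus a residual school "value-added" effect.
school = np.repeat(np.arange(n_schools), pupils)
prior = rng.normal(0, 1, len(school))
value_added = rng.normal(0, 0.3, n_schools)[school]
exam = 0.7 * prior + value_added + rng.normal(0, 1, len(school))
df = pd.DataFrame({"exam": exam, "prior": prior, "school": school})

# Adjust for intake: exam on prior attainment, random school intercepts.
va = smf.mixedlm("exam ~ prior", df, groups=df["school"]).fit()

# Estimated school effects after adjustment -- the differences that,
# as Goldstein notes, are reduced but do not disappear.
effects = {g: re.iloc[0] for g, re in va.random_effects.items()}
print(sorted(effects.items(), key=lambda kv: kv[1])[:3])  # lowest three
```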

  • 16:01

    We hadn't taken enough things into account. And I think that's a strong possibility. You can't measure everything necessarily that is relevant. But then we got interested in saying, well, OK, that's a bit crude. Are schools differentially effective? Maybe if you look at differences between schools for high-achieving, as opposed to low-achieving, pupils, then you'll find that that might explain some of the overall variation between schools.

  • 16:40

    The interesting thing is it doesn't. The other interesting thing is that it's there. So some schools perform differently for high-achieving or low-achieving pupils. So, in fact, you could take a pair of schools-- and this occurs-- where for low-achieving pupils, school B does better than school A, but that reverses for high-achieving pupils.

  • 17:05

    And what we were beginning to discover was the complexity of all of this. And that then encouraged us to look for methodology that could address what we began to see as a real-life complexity. So our initial idea was that all we have to do is just adjust for prior achievement-- we put a covariate into our model, and that does it.

  • 17:32

    It doesn't work. Real life is much more complicated. And because of that, you need more complicated statistical models. The level of complexity of your statistical analysis has to approach the real, true complexity that is out there.

  • 17:54

    And that, I think, is the justification-- if you ask for the justification-- of the whole research methods program that a lot of us are currently involved in. And there's a feeling out there among some people that our models are too complicated. People don't understand them. And I think the response is unless you have complex models, you can't begin to understand the complexity of social reality.

  • 18:23

    Now the fact that you have complex models does not mean you can't explain them in simple terms that your median PhD student or your median person on the Clapham omnibus can understand. And that's kind of important, I think, to realize.

  • 18:46

    Well, with some skepticism, I think. A, it's exciting because it means there's all these data out there. When I started my career, it took us a long time to collect data, and there wasn't very much of it, and you had to milk it for everything. Now in the era of big data and gigantic data, we're overwhelmed, as you say, we're overwhelmed with this stuff, and it's nice.

  • 19:13

    So in education in England, we have the National Pupil Record, the National Pupil Data Set, which is a wonderful resource. And to give them real credit, the various administrations that have been running the education department have done a super job, whether they know it or not.

  • 19:38

    And for some reason-- they probably did it because they wanted to produce league tables easily-- but it has been a wonderful resource for researchers and has been well-supported by the education department. So hats off to them. Some are critical, but they've done a great job in supplying the data, making data available to researchers, and that's great.

  • 20:02

    And it's a very nice example, I think, of how big data have been exploited. We found out all sorts of things about the education system that, because of the size and comprehensiveness of the data, we could never have dreamed of finding before. So that, I think, is a very nice example. Now other big data sets are coming on stream-- work and pensions data, and health data particularly.

  • 20:28

    But you do have to be careful, because you do have to understand the provenance of the data and the quality of the data. Lots of these data result, or will result, from the linking of separate administrative data files. And that process is subject to error, and sometimes the errors can be big.

  • 20:49

    And if they're not recognized, they can destroy your analyses. So it's not just that the data are out there to be used. You actually need a deep understanding of where they come from, what their quality is, and how you can deal with the problems that are always going to be inherent in very large data sets.

  • 21:12

    So we need to develop that sensitivity to data, so there's a whole lot of methodology around that. We've just begun to scrape the surface. And the others, of course-- I talked about administrative data sets. There's all the data scraped from social media, which is probably hairier than all the rest of the data.

  • 21:34

    We're not used to dealing with those kinds of data. And people are beginning to discover some of the drawbacks of relying on the data in, what I might call, a naive way. So in all of that, there's a lot of work for methodologists and statisticians, together with computer scientists, to try and unravel some of these problems.

  • 21:57

    So I think the next 5 to 10 years are going to be a period of understanding what this advent of big data all means for us. Well, ask as many questions as you can. Don't take anything for granted.

  • 22:19

    Perhaps I'll give an example from plans that we currently are working on, which is linked data. So the record linkage issue is a big one, and there's been a lot of publicity about this. When it goes wrong, it goes wrong in a big way very often. But the problem very often is when you're linking big data sets, the tendency traditionally is to link what you can.

  • 22:47

    And because you can link most of the records, you often forget about the ones you can't link, or kind of shut your eyes a little bit. And some say, well, some of these links are a bit dubious, but we think they're all right. We think we matched the right people, and that data set is released.

  • 23:08

    But the people who use the data set are not aware of what's gone on in the process of producing that data set. And when you ask-- this is a common occurrence-- when you ask the people who have linked the data sets to give you information about the linking process-- what's the error rate, how many didn't you link, how do you actually do the linking-- it's a little bit of a black box.

  • 23:37

    So one of the things we've been trying to do recently is to say, look, if you're going to link, what you must do is supply, with the linked data set, all the information associated with the linking process. Most of all, what are the drawbacks? What are the problems with it? What's the error rate that we can expect?

  • 23:57

    What are the characteristics of the people you didn't link? Tell us about that. Because if they're a biased group, which they almost always are, you can make bad inferences when you start doing your modeling. And that's the sort of message that we hope will start hitting home.
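
A small sketch of the kind of linkage reporting being asked for, with hypothetical field names: publish the match rate alongside a comparison of linked and unlinked records, so that users of the file can judge the likely bias.

```python
import pandas as pd

# Hypothetical source file with a flag recording whether each record
# was successfully linked to the second data set.
records = pd.DataFrame({
    "linked": [True, True, False, True, False, True, True, False],
    "age":    [34, 51, 78, 45, 82, 29, 60, 75],
    "deprived_area": [0, 0, 1, 0, 1, 0, 1, 1],
})

# The headline linkage rate on its own hides any bias.
print(f"Linked: {records['linked'].mean():.0%} of records")

# Compare linked and unlinked groups: if they differ systematically
# (here the unlinked are older and more deprived), analyses of the
# linked file alone will be biased.
print(records.groupby("linked")[["age", "deprived_area"]].mean())
```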

  • 24:18

    It's a very important message. And I think it's part of the general issue that-- I'd want to say, look, when you do a statistical analysis, you must understand the whole process, from the point at which the data are collected right through to the analysis. There has to be an integration of that process.

  • 24:41

    And I think what we have at the moment is that it's been atomized. It's divided into separate blocks. And each block feeds into the other, but very often the right questions are not asked. So the final block doesn't know what happened in the previous blocks, and it needs to.

  • 25:02

    To do it properly, we need to know. So that's another lesson for big data. It has to be linked up. There has to be consistency across the whole process. It's no good government just saying, look, we want to promote big data, that's fine, and here you are. That's not good enough. It needs to pay attention to this whole process, making sure there's a coherent consistency across the whole process of data acquisition, linking, and analysis.

  • 25:34

    You need to understand how the data are produced before you start talking about comparing them. You need to understand the context in which they're produced. The definitions of terms, the classifications used may differ subtly. The rules may be the same. The protocols may be the same, but the way they're used might subtly differ.

  • 25:56

    So there's quite an area there for what one might call qualitative research-- going and seeing what's happening, actually questioning the people that are collecting the data to try to gain insight or a narrative about the data and their structure. I think the media has a very heavy responsibility here, and it hasn't always discharged that responsibility well, I think, professionally.

  • 26:26

    That's a difficult one. Even the best of the media, the serious media, doesn't do it very well in many cases. And I think there's a kind of-- I don't want to sound arrogant about this-- but I think there's an educational issue. I'm thinking of the professionals, the people that do understand and deal with data analysis-- and there is a scientific community out there that does.

  • 26:52

    Over a period of hundreds of years, we have established protocols and ways of working, and that means something. I think we have a responsibility to try and transmit those values and those understandings to the people that are there to interpret it.

  • 27:13

    And I think the media has a real role. I don't think we should be writing articles for whatever it is, the Mail on Sunday or the Guardian, about the implications of our results. That's actually a journalistic role, but the journalists have a duty to understand what they're doing, and to talk to researchers properly and responsibly.

  • 27:39

    And I do worry about the media, because it is now so easy to pick up, because of open access and everything, what's going on in the professional journals. All the media exposure of the debate about the effect of statins, for example, and prescribing statins-- that's found its way out there.

  • 28:02

    But so much of research is provisional, and you don't want to frighten people. You don't want to scare people into doing this or not doing that on the basis of research. We've seen a lot of that in 2014, particularly around the statins issue, people having to withdraw articles from the British Medical Journal and so on.

  • 28:24

    And that wouldn't be a big deal had it not been headline news. So I think the journalists need to come to terms with this. I'm not saying that these results shouldn't be reported and shouldn't be popularized. They should, but some way has to be found of doing it in a way that is responsible and that recognizes that medical research, scientific research in general, is a long-term affair.

  • 28:52

    And in a sense, all results are provisional, and they will change over time. It's quite a subtle message, a difficult one to get across. And I have to say, researchers are not blameless in this. We are all encouraged to have impact. What the hell does impact mean? This is a silly term, but it's very often interpreted as getting our stuff out there in a public forum, in the press.

  • 29:20

    And I think that that's really rather misguided. We shouldn't be doing that. We shouldn't be expected to do that. We need to have impact in terms of advancing knowledge, but that knowledge is always provisional. And if that knowledge is simply to help another researcher build on the knowledge, enhance it, correct it, whatever, that is tremendous impact.

  • 29:44

    The impact is not to appear on kind of breakfast television or on "The Today Show," saying I have just found this wonderful relationship between taking this drug and doing well at school. That is not what impact is about. And some way of the media responsibly dealing with that sort of thing, I think, needs to be found.

  • 30:11

    It's a balance between not wanting to kind of keep the public at arm's length, but also not misinforming the public because of the provisional nature of what you're doing. It's difficult. It's not an easy job, and we haven't got the balance right at the moment. I'm very clear about that. I think there is a role for the sort of public professional, the person that sits somewhere in between the research community and the sort of popularization of it.

  • 30:49

    And I think the best science journalists do that. I won't name any, but there are some, and there are some very responsible people who do a very good job. And in some of the broadcast media-- so on the radio, the More or Less program is a wonderful example of how to do it.

  • 31:10

    And I could name other specific people, like Laurie Taylor and so on; David Spiegelhalter and such are very good, but somewhat isolated, examples of how you can do that well. We need more of that kind of thing.

  • 31:31

    Well, in terms of people embarking on a research career, it's kind of difficult. Who am I to give you advice? I think I've been terribly lucky. I landed in an area that was a growing area. I stumbled across multilevel models because the people I happened to be talking to and the work I was doing suddenly coincided with some kind of computer developments and software developments, and we kind of fell into it.

  • 32:04

    And it just turned out that it actually was an area that was ripe for exploitation, and we sort of seized the opportunity. And so we were lucky. It took off. So who am I to say what you should do? But do seize the opportunities when they arise, lay the foundations well, educate yourself, look around.

  • 32:31

    See what's going on, listen to people, be skeptical of claims. Develop a nose for bullshit. This is very important. Don't be swayed by the glib presenters. They're entertaining.

  • 32:52

    They do well, but question it. Always kind of think, well, just imagine a different scenario. What are they saying? The only advice I want to give-- and this is very personal advice-- is just question everything. People won't thank you for it, and they certainly won't thank you for it if it drifts off into your personal relationships, so keep it separate.

  • 33:25

    But in your professional life, develop a habit of questioning everything. It won't necessarily make you popular. Seek popularity elsewhere. But even if it's something you don't explicitly do, question it in your own mind.

  • 33:45

    Try and imagine a different scenario, what it would be like. Is what they're saying universally true? Can I imagine a situation in which it wouldn't hold? That's the only thing I want to say to anybody. Keep an open mind by questioning, imagining other things, other worlds, other scenarios, and seeing where the things stand up.

  • 34:14

    Don't take too many things for granted. Try and work out who the people are that are not bullshitting-- there are a few people out there bullshitting-- and go with them. And don't be afraid to change your direction. I changed direction a few times.

  • 34:34

    It's quite a good thing. And you'll find, if you're lucky, that actually something that happened to you 10 or 15 years ago suddenly becomes relevant, like magic. So pick up what you can. Just pick up different things, and try and sort of see whether they work.

  • 34:56

    Is this going to be useful for what I'm doing now? But don't take it from me. This is just my experience. It's very idiosyncratic. Go your own way. You need some luck, but you can kind of prepare yourself to take advantage of the luck as well.

  • 35:20

    Be eclectic. Don't be too pure. And don't get too hung up on grand theories. If someone comes along and says, I've got a theory of everything, be skeptical.

Abstract

Professor Harvey Goldstein discusses his career in large cohort research and the development of multilevel modeling. Multilevel modeling allows researchers to take different factors and different types of factors into account when looking at causal relationships in data.
