Podcast
Despite the current hype around ‘fake news’, the dissemination of biased and misleading information is far from a new phenomenon. In this ‘post-truth’ era you might hope to find a respite from subjectivity in cold, hard numbers. On the contrary, propagandists are becoming increasingly sophisticated in their falsification and misrepresentation of numerical data. More than ever, we seem to live by the maxim “There are three kinds of lies: lies, damned lies, and statistics.”
Some concerned parties have invested themselves in combating this malpractice, in an effort to expose and discourage the flawed or dishonest use of numbers. One such statistics vigilante is David Spiegelhalter, Professor of the Public Understanding of Risk at the University of Cambridge and current president of the Royal Statistical Society.
In this podcast David talks about trust and communication in statistics. “There’s always been the use of statistics and numbers and facts as rhetorical devices to try and get people’s opinion across, and to in a sense manipulate our emotions and feelings on things,” he tells interviewer David Edmonds. “People might still think that statistics and numbers are cold, hard facts but they’re soft, fluffy things. They can be manipulated and changed, made to look big, made to look small, all depending on the story that someone wants to tell.”
David discusses how we should determine which communicators and organizations to trust and what organizations themselves should strive for. This leads on to a discussion about how ordinary citizens can be empowered to critically engage with data by asking key questions such as “What am I not being told?” and “Why am I hearing this?”. Rather than individually tackling every bit of fake news, he aims to inoculate others against its influence.
DE: DAVID EDMONDS
DS: DAVID SPIEGELHALTER
DE:This is Social Science Bites with me, David Edmonds. Social Science Bites is a series of interviews with leading social scientists and is made in association with Sage Publishing. There’s nothing novel about propaganda and the manipulation of news and research. Fake news is not new, but the scale and sophistication of it is. Partly this involves the falsification and misrepresentation of numbers, of statistics. Some people--the good guys--are doing their best to combat this. What’s the probability that they’ll succeed? Well, it’s increased by having on their side David Spiegelhalter. He’s an eminent Cambridge statistician with the title of Professor of the Public Understanding of Risk. David Spiegelhalter, welcome to Social Science Bites.
DS:Lovely to be here.
DE:The topic we’re talking about today is trust and communication in statistics. We live in an era of fake news where people don’t trust figures and stats anymore.
DS:I don’t completely agree with that. Obviously, this is a major topic of discussion in society now about whether we can believe what we hear in the mainstream media or on social media, but there’s always been the use of statistics, and numbers, and, quotes, “facts” as rhetorical devices to try to get people’s opinion across and to, in a sense, manipulate our emotions and our feelings about things. People still might think that statistics and numbers are cold, hard facts, but actually they’re not. They’re soft, fluffy things. They can be manipulated and changed, made to look big, made to look small, depending on the story that someone wants to tell. Let’s take the 350 million pounds a week that was plastered all over the side of the Brexit bus-- that the EU is costing us. Now, the fact that that number is wrong-- let’s forget that for a moment. It looks a very big number, but if you say, oh, well, that’s 50 million pounds a day, and there’s 70 million people in the UK, so that’s about 80p a day each. That’s a packet of cheese and onion crisps. So maybe that’s not such a big amount to pay in the EU, even if it were correct.
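To make the reframing arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. It simply re-runs the round numbers quoted above (the 350 million pounds a week from the bus and Spiegelhalter's figure of 70 million people), so the output is illustrative rather than an official statistic.

```python
# Back-of-the-envelope reframing of the headline figure, using the round
# numbers quoted in the interview (not official statistics).
per_week = 350_000_000            # pounds per week, as claimed on the bus
per_day = per_week / 7            # roughly 50 million pounds a day
population = 70_000_000           # the round figure quoted for the UK

per_person_per_day = per_day / population
print(f"about {per_person_per_day * 100:.0f}p per person per day")
# Prints "about 71p per person per day" -- the same order of magnitude as
# the "packet of crisps" framing, even though the headline number looks huge.
```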
DE:Well, is there not such a thing as a correct number?
DS:I can count things. I could say the number of unemployed went up in January by 3,000, and that’s how it was reported by the BBC. Is that a correct number? Well, there’s two aspects to that. First of all, what do you mean by unemployed? The definition of unemployed has changed time and time again, so it’s very difficult to know what you mean by unemployed. And the other thing is that, if you click down on the website, you find that the margin of error on that 3,000 change was plus or minus 77,000, because they don’t count unemployment claimants-- it’s based on the Labour Force Survey. So actually there’s huge uncertainty about that number. Now, that’s a number that the BBC reports as a fact. It’s not a fact at all.
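A tiny illustration of the margin-of-error point, using only the two figures quoted in the conversation (the reported change of 3,000 and the quoted margin of plus or minus 77,000); this is a sketch of the interpretation, not a reconstruction of the ONS methodology:

```python
# Illustrative only: these are the figures quoted in the interview.
change = 3_000     # reported monthly change in unemployment
margin = 77_000    # quoted margin of error (roughly a 95% interval)

low, high = change - margin, change + margin
print(f"plausible range for the true change: {low:+,} to {high:+,}")
# The range spans zero, so the survey is consistent with unemployment having
# risen, fallen, or stayed flat -- which is why reporting "up 3,000" as a
# hard fact is misleading.
```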
DE:But there are facts. You accept there are facts.
DS:Oh, yes. Yeah, and I’m not going to get into a whole discussion about what is truth, although it’s amazing how quickly you do get down that line. Oh yes, there are facts, and I really value them. And I think it’s terribly important that we do have good information. We know that some things are more reliable than others, but it’s still an issue that it is very difficult to talk about anything without having some sort of line, some sort of emphasis one way or another. And of course this has been explored a lot in the work of Danny Kahneman and others to do with the framing of numbers: whether you talk about a heart surgery having a 2% mortality, as they do in the States, or a 98% survival, as we do in the UK. Sounds a lot better, doesn’t it? So we know that this sort of framing changes people’s emotional response to numbers. And the main thing, if we’re trying to be honest about numbers, I think, is to recognize that and to use multiple frames, for example. You tell the story in different ways. If you’re telling someone about the chance that they’ll survive an operation, you say, well, 2 in 100 people like you won’t survive the operation, but 98 will.
DE:So if an organization presented information in that way, that would give us reason, perhaps, to trust it more. How do we know which figures to trust and which not to trust?
DS:Trust is a really tricky issue. I’ve been hugely influenced by the Cambridge philosopher Onora O’Neill, who’s a Kantian philosopher. She’s a wonderful woman, and she’s had a huge influence on me-- and not only on me, but on the way that trust is discussed in society in Britain, certainly. So her argument is that we all hear organizations saying they want to be trusted. Everyone wants to be trusted, and she said that’s totally the wrong approach. We should not try to be trusted. That is not in our control. What is in our control is to demonstrate trustworthiness. So the language about trust among institutions in the UK has changed from wanting to be trusted to wanting to demonstrate trustworthiness.
DE:And how do you do that? How do you move from wanting to be trusted to being trustworthy?
DS:Well, again, we go back to Onora O’Neill. She’s very good at her rules of three. So she says that someone might be trustworthy if they can demonstrate honesty, competence, and reliability. And in particular, when it comes to communicating information or statistics, she has got a lovely list that I now just chant, almost as some sort of religious mantra, that information should be accessible, usable, and assessable. If you’re going to communicate information to somebody, people should be able to get at it. They should be able to understand it, and they should be able to check it. What this means is that transparency of an institution is not-- I love this term-- fishbowl transparency. Fishbowl transparency is when you just go blah and shove everything up on your website in obscure small-print PDFs and things like that. And you say, oh well, we’re being transparent about our activities, but actually it’s useless because it doesn’t help anyone find the information, or use it, or check its veracity, or whatever. So transparency is more than just shoving everything up on a website. It’s to do with really almost actively working with the community of people who are interested in your work, in order to make sure they can get at the information, they can understand it-- and that’s what I work on professionally all the time, making quite complex statistical ideas comprehensible to people-- and, this is the crucial thing, they can assess how reliable it is. Now, not everyone will want to do that. You can’t check everything you hear, and she’s very clear that it shouldn’t be up to every individual to check the trustworthiness of every message they hear. They should be able to know that someone else is looking at it on their behalf, whether that’s journalists, or regulators, or whatever. But if you are a concerned citizen, or an NGO, or a journalist, or something like that, you should be able to see where that information came from and how good it is. What I think this really leads into-- and I think this is a terribly exciting development in society today-- is empowering ordinary citizens to know how to ask questions about what they’re being told, about what to look out for.
DE:So are these skills of scrutiny, skills of interrogation-- are they things that can be taught? Because for most people, numbers are difficult.
DS:It’s true. Numbers are difficult. People ask me, why does everyone find probability and statistics so unintuitive and difficult? And I just say, well, I’ve been working in this area for 40 years now, and I’ve finally concluded it’s because they really are unintuitive and difficult. And many people now are trying to develop the appropriate tools, the sort of examples to help people do this. Trying to teach people critical thinking and dealing with misinformation is not new. That’s been around forever. I think the difference now is that the misinformation is being done on an industrial scale and in a very scientific way. We’re making this podcast right in the middle of the news about Cambridge Analytica and Facebook. It has received an enormous amount of coverage, and what that’s shown is that misinformation is now not just being done on an industrial scale-- it’s being done in a scientific way. And what this suggests is that the countering of misinformation is also now a scientific activity.
DE:But it means there’s a kind of arms race with those manufacturing misinformation being combated by the good guys like you.
DS:Yeah, isn’t it great? There’s a growing community of what you might call an evidence community. We’ve got a small, philanthropically funded center that’s looking into evidence and risk communication, but then there’s various other charities-- the Science Media Centre, Sense About Science, fact-checking organizations like Full Fact, the Alliance for Useful Evidence, and then bigger, more established groups like the Institute for Fiscal Studies-- all of whom are concerned with good evidence, not with a political agenda. We’re not on one side or another at all. We’re actually desperately trying to be unbiased, but we have a massive bias in favor of good transparent evidence and empowering people to evaluate that evidence. It’s a growing community. I call it the league of factfulness. Evidence superheroes-- that’s what we are.
DE:When you say you have no bias, if an organization--maybe a political organization--puts a fact out there and the fact is true, but it just misses something else-- it manipulates by omission-- pointing that out is political, isn’t it?
DS:No, I don’t think so. We’re trying to scrutinize what people say. And you’ve identified an absolutely crucial issue, which makes this whole topic both fascinating and difficult, in that you cannot critique what you’re being told by just looking at what you’re being told. This is what makes it fascinating, of course. There’s two things you should always ask yourself. What am I not being told, and why am I hearing this? We don’t hear facts at random. We hear facts because people want us to know them. Almost everything that appears in newspapers, on social media-- somebody has planted that by a press release or in some way wants us to know it in order, usually, to change our emotions or our feelings. We can see it all the time when we read many scientific stories in the news-- that we will hear something is associated with something else. Vitamin B2 prevents miscarriages, and things like that. And then you realize that what’s happened is that they’ve searched loads of different things, and they’ve done loads of tests, and they found the one that has the biggest association. And they’re telling us about this, and they’re not telling us about all the things that aren’t associated. And that’s a well-known scientific problem-- it’s called multiple testing. And there are ways of dealing with it, but it’s very difficult to deal with it if you’re not being told what they actually did. And I’m currently making a program for BBC World Service on teaching school kids to detect fake claims. We went to Silicon Valley and sat in on a class of 15-year-olds who are being taught how to search the internet. You think, well, everyone knows how to search the internet. No, no. They were being taught these wonderful skills that US fact checkers use, which is, when presented with a website that’s making a claim-- a historical claim or anything like that-- the old-fashioned way of checking it was to say, oh, look at the domain name, check the name of the organization, things like that. And these kids are being taught that’s rubbish. You cannot tell the veracity of a website by looking at the website, because people are so good at putting up fake websites. They can make them look fantastically reputable. So what you have to do is to check what other people say about the website. You have to search horizontally, not vertically. You have to open up multiple tabs and search for what people are saying about the website that you’re looking at. And that’s the way to find out who is behind this organization, or is it a front for something else? Where does it come from? What do people say about it? And they learn this, and they take it on board. And they go home and teach their parents-- wonderful, simple skills to learn that are very empowering. It’s called inoculation against fake news. There is, again, a whole experimental program on inoculation. You tell people about the fake news and the way people do it before they come across it, so that when they see it, they’re ready for it, and they feel empowered by detecting it. All of us, whether we’re left, or right, or anything like that, don’t like being taken in. Nobody likes being conned, and everyone loves detecting a con.
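The multiple-testing problem described above is easy to demonstrate by simulation. The sketch below uses entirely hypothetical data (it is not a reconstruction of any particular study): it runs many comparisons on pure noise, shows how often a "significant" association appears by chance, and how a simple Bonferroni correction tempers that.

```python
# Minimal simulation of the multiple-testing problem: run many tests on pure
# noise and see how many look "significant" at the usual p < 0.05 threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n_per_group, alpha = 100, 50, 0.05

p_values = []
for _ in range(n_tests):
    # Both groups come from the same distribution, so any "effect" is noise.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p_values.append(stats.ttest_ind(a, b).pvalue)

naive_hits = sum(p < alpha for p in p_values)
corrected_hits = sum(p < alpha / n_tests for p in p_values)   # Bonferroni

print(f"{naive_hits} of {n_tests} null comparisons look significant at p < {alpha}")
print(f"{corrected_hits} survive a Bonferroni-corrected threshold")
# Around 5 spurious "discoveries" are expected by chance alone -- exactly the
# selection effect you cannot spot unless you are told how many tests were run.
```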
DE:All social scientists deal with stats. What are their obligations in terms of being able to master the stats that they’re presenting to the wider world?
DS:Yeah, I’m a statistician. I’m currently president of the Royal Statistical Society, so I love stats. It upsets me when they’re misused either in the sort of political discourse or in science. We know about the problems with the reproducibility of the scientific literature, and much of that is laid at the door of the misuse of statistics. I think one of the problems with statistics is that people tend to think of it as just a bag of tricks, a bag of tools. Can you apply a t-test? Can you interpret a regression coefficient? Blah, blah, blah, et cetera. And in fact, the real statistical skills that I use a lot of the time in my professional life have almost nothing to do with that. It’s to do with, again, what isn’t there, what isn’t being calculated, how much selection has gone on in arriving at that final analysis. Again, when reading a scientific paper, you cannot judge it by reading the conclusions, because how did they get to that conclusion? The crucial thing is, did they decide on that outcome measure before they did the experiment? Did they decide on the factors they’re going to adjust for? Did they decide on who to include right from the beginning? Because otherwise you can’t trust the result. It can suggest something. It might be an exploratory idea, but it cannot be used for confirmation of anything unless they’ve pre-specified what they’re going to do, because we know you can endlessly tweak things, and change things, and select. And in areas of psychology-- social psychology in particular is where this has received the most attention, but also in neuroscience-- the experimental results are just not as reliable as they should be because of the misuse of statistics. Some of the best statisticians I know are social scientists because they’ve done so much empirical statistical work. So I’ve got a huge amount of respect for much of the statistics being done in social science. There are also, partially just because of the massive quantity of stuff being done and the subtlety of some of the ideas, some not very good work, and there is misuse. Everyone wants their p less than 0.05. Everybody wants their discovery, but of course the pressure for that is not just a matter of the scientists. It’s also the journals. It’s the promotions, the fame or whatever. Everyone wants to get their TED talk. There are pressures that lead to some pretty poor science going on.
DE:Do we live in a golden or dark age for statistics?
DS:That’s a great question. I think we live in really quite a golden age, because of the rise of big data-- that’s considered an old-fashioned term now, but certainly data science is a massive, growing industry. People have realized that we can use information and harness it-- quantitative information-- to help us in so many ways, not to mention the developments in machine learning and AI, which harness numerical and observational data to make important judgments. So I think this is a golden age, but there’s still always a danger that people get too concerned with the techniques and lose track of much more fundamental issues, like the generalizability of what they’re trying to do. Generalizability and transparency, the ability to explain what you’ve done and how the algorithm works, are crucial issues.
DE:We’re in an era now where often algorithms are being run by machines, not by humans. What does that mean for trust and trustworthiness?
DS:I think this is terribly important. Now, there’s algorithms that you want to operate completely automatically-- the one that, when you pick up your camera phone, will identify faces, the one that will in the end drive our cars for us, and so on, the one that does all sorts of extraordinary things. So far, the machine translation, the vision-- they’re brilliant. And they all work, and I don’t want to have to ask them every time, well, how do you know that’s a face, or whatever. This is very different from algorithms that will be making decisions about our creditworthiness, about whether we should be given parole or not, or the ones I work on, which estimate how likely you are to live after your breast cancer surgery and how long you might live. And those are the ones we work on and communicate to people. And it seems to me that with those algorithms, one has a real responsibility and, increasingly, a legal obligation to explain where they came from, and there’s multiple reasons for that. The first is that, for example, in the credit and sentencing-type algorithms, there can be implicit bias. Even if it’s illegal to include race, for example, in an algorithm, if you just ask about other factors, you can determine it. For example, when I buy my insurance-- life insurance or whatever-- they’re not allowed to ask me my race, but they can ask me about family history of disease, and they can ask me my postcode, and they ask me all sorts of things. And they can guess my race very accurately indeed from that. So by transparency, we could identify whether there’s implicit bias or not. And the other reason for transparency, of course, is to actually explain what’s going on, to know about its reliability-- what pieces of information are being used, and how much are they being weighted? And if the algorithm is either far too complex to explain, or is proprietary and you can’t work out what’s going on, then actually I think that’s pretty unacceptable.
DE:But is there a danger that these machines in future will be so complicated that even David Spiegelhalter, the Professor of the Public Understanding of Risk, won’t be able to understand how they’ve reached their conclusions?
DS:Well, that’s already the case. If you’ve put things through a multi-level neural network, then-- although people are really trying to understand how to explain how it came to its conclusions-- that’s very difficult indeed. One thing you can always do is, if you can get access to the algorithm, you can always start just fiddling with the inputs and see how the outputs change. And you can do that even with a black box. I don’t know about you, but when I buy online insurance, I always tell massive lies to find out what’s driving the premium. Is it my age, or where I live, or something like that? I’m just curious. I thought everyone did that, but never mind. So you can do that at least, even with a real black box. But even from a legal point of view, the need for transparency, I think, is tremendously important.
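The "tell massive lies to the insurance form" strategy is, in effect, a one-at-a-time sensitivity probe of a black box. Here is a toy sketch of that idea; the quote_premium function is entirely made up for illustration, standing in for whatever opaque model a real insurer runs.

```python
# Toy black-box probing: vary one input at a time and watch the output move.
# quote_premium is a made-up stand-in for an insurer's opaque pricing model.
def quote_premium(age: int, postcode: str, smoker: bool) -> float:
    premium = 200.0
    premium += (age - 30) * 4.5
    premium += 80.0 if smoker else 0.0
    premium += 40.0 if postcode.startswith("E") else 0.0
    return round(premium, 2)

baseline = {"age": 40, "postcode": "CB2", "smoker": False}
print("baseline quote:", quote_premium(**baseline))

# "Tell lies" about one factor at a time and compare against the baseline.
for tweak in ({"age": 60}, {"postcode": "E1"}, {"smoker": True}):
    delta = quote_premium(**{**baseline, **tweak}) - quote_premium(**baseline)
    print(f"changing {tweak} moves the premium by {delta:+.2f}")
```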
DE:You’re a statistician. Does that make you a social scientist?
DS:Good question. Statisticians-- sometimes we’re called mathematicians, and I’m not that, although I do work in the department of maths. I don’t think I’m a social scientist, but I’m not a scientist either. I’m not a physical scientist, but stats is an odd thing. It’s an enabling technology. It’s a system of ideas for using quantitative evidence. So I’m an evidence person. Statisticians are often called evidence policemen because they spend their time going around telling people off for bad behavior. I do my fair share of that myself, but sometimes I do things that could be considered within social science. And of course, my interest now has moved so much more from the techniques and the exact methodology towards the use of statistics in society today and in the discourse about evidence in society today. And that, of course, is social science, if not humanities.
DE:David Spiegelhalter, thank you very much indeed.
DS:Thank you very much for letting me go on and on.
DE:Social Science Bites is made in association with Sage Publishing. For more interviews, go to socialsciencespace.com.