  • 00:05

    [Health Data Science - Using Real World Evidence in Medical Research][Paul Taylor, PhD - Reader, Institute of Health Informatics, University College London]

  • 00:10

    PAUL TAYLOR: I'm going to talk about the use of big data in health research. It's not a phrase that I like, in a way, but it's one that's used a lot. It's a bit of a cliche. It's sort of jargon, really. The other thing I don't like about it is that when people talk about big data, very often they're talking about the kind of data that Google or Walmart have.

  • 00:30

    PAUL TAYLOR [continued]: And in health care very often, we're not using data of that kind of scale. It's perhaps bigger data than people used to have. But it's not really the kind of data where you need specialist computers to store it or to process it. So very often, we are looking at data sets of maybe a million patients or even 10 million patients. But that kind of data you can actually analyze on just your laptop. So one of the things that's interesting about this kind

  • 00:53

    PAUL TAYLOR [continued]: of work is that the data that we're using is the data that's collected to support routine patient care. It's the data that your doctor records about you, your GP, what gets recorded about you in hospital. So it's different in character from the kind of traditional research data that you might have in a randomized controlled trial or an epidemiological study. It's messier and there's a lot of missing data

  • 01:16

    PAUL TAYLOR [continued]: and there's very often data that's ambiguous. It's difficult perhaps to be clear about why it was recorded or whether it really means what it's supposed to mean. So there are a lot of challenges in how we interpret it. And there are also ethical issues, because a lot of this data was elicited from the patient with their consent

  • 01:38

    PAUL TAYLOR [continued]: to support their care. But it wasn't taken in order to support the kind of research we're doing. So in a sense, we're using it without consent. So we have to be very careful that we're using it in a way that's responsible and legitimate. So one good example of this kind of research is the paper that Liam Smeeth and his colleagues at the London School of Hygiene and Tropical Medicine did about 14 years ago now.

  • 02:00

    PAUL TAYLOR [continued]: And the impetus for that was the controversy that Andrew Wakefield started with his suggestion that maybe the MMR vaccine was causing autism. And that led to a collapse in vaccination rates. And so there was an urgent need for somebody to produce evidence that would reassure the public. But it would be very difficult to produce anything quickly. You know, you couldn't do a randomized controlled trial

  • 02:20

    PAUL TAYLOR [continued]: or a traditional study, which required a lengthy period of follow up. But Liam was able to find data on 1,294 children who'd been diagnosed with autism, and over 4,000 patients whose situation was comparable to them, but where there wasn't a diagnosis of autism. And by comparing vaccination rates in the two groups, was able to show that the vaccination rates were pretty

  • 02:43

    PAUL TAYLOR [continued]: comparable in the two groups. And so it was extremely unlikely that the MMR vaccine was causing autism, which was really useful in restoring public confidence in the safety of the MMR vaccine. And they were able to access that amount of data because of a project whereby thousands of GP practices voluntarily submitted anonymized data on their patients

  • 03:04

    PAUL TAYLOR [continued]: to a resource that would be available for researchers. At the time it was called the General Practice Research Database. Now it's called the Clinical Practice Research Datalink and it has data on over 10 million patients. And it contains data on diagnoses, on symptoms, on prescriptions, and vaccinations.
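    The case-control comparison described above can be sketched as a simple odds ratio over a 2x2 table. This is a toy illustration with invented counts (only the group sizes echo the numbers mentioned in the transcript), not the actual study's method, which used matched controls and more careful statistics.

```python
# Toy sketch of a case-control comparison: were cases (children with an
# autism diagnosis) more likely to have been vaccinated than controls?
# All counts are invented for illustration.
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical counts: vaccination is roughly equally common in both groups.
or_estimate = odds_ratio(1000, 294,   # cases:    vaccinated, not vaccinated
                         3200, 900)   # controls: vaccinated, not vaccinated
print(round(or_estimate, 2))  # 0.96 -- close to 1, so no evidence of an association
```

    An odds ratio near 1 is exactly the "pretty comparable rates" result the transcript describes: the exposure is no more common among cases than controls.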

  • 03:25

    PAUL TAYLOR [continued]: So it's a really useful resource for researchers. But, of course, the data is taken really without the patient's consent and that's problematic. But if you asked for consent, A, it would be very difficult and complicated and expensive. But also, there would be a worry that the patients who declined to give consent would be, perhaps, particular

  • 03:46

    PAUL TAYLOR [continued]: in some way, so the data you had would no longer be representative and it wouldn't quite tell you what you need to know. But the ethical basis for it is that the data is anonymized. And legally, in this country at least, if the data is anonymized we say that the patient no longer has a stake in it. It's no longer personal data. It's not subject to the same protections

  • 04:07

    PAUL TAYLOR [continued]: and the same constraints as other forms of sensitive data. [How do researchers access the Clinical Practice Research Datalink?] The Clinical Practice Research Datalink have quite a rigorous process that you have to go through in order to get hold of the data, so that it can only be used by bona fide researchers.

  • 04:28

    PAUL TAYLOR [continued]: And you have to undertake to use it in a responsible way. So that you couldn't, for example, try and identify a patient from it. Commercial companies are allowed to access the data, but they're not allowed to use it for commercial purposes. So if, for example, a pharmaceutical company wants to do some research, they could do research that showed that a drug was useful or effective.

  • 04:49

    PAUL TAYLOR [continued]: But they couldn't do research to say that it was better than a competitor's, or do anything that would count as marketing or that kind of research. [How does data linking function with multiple sources of health data?] One of the exciting things that we do with this kind of research is that if we can link different data sets, we can answer very different kinds of questions

  • 05:11

    PAUL TAYLOR [continued]: to ones that we might not have been able to answer before. So, for example, if we can take somebody's primary care data and combine that with their secondary care data, then we can look, for example, at the way a condition which is managed in primary care, such as blood pressure, has impacts on cardiovascular outcomes, which might be detected in the hospital. And with a very large data set, and we

  • 05:31

    PAUL TAYLOR [continued]: have data on millions of patients through the Clinical Practice Research Datalink, linked to Hospital Episode Statistics, we can answer very specific questions about the differences in the way that blood pressure affects different categories of cardiovascular disease in different age groups. And actually it's a much more heterogeneous picture than people realize. So there's some really exciting work that's being done there.

  • 05:52

    PAUL TAYLOR [continued]: But that is problematic because, obviously, in order to link two data sets, you have to be able to identify all of the individuals in the data sets. So you can't do that with anonymized data. So the way that this is approached is that what's called a trusted third party is engaged. The two data sets are sent to the trusted third party, which in this country would generally be an NHS agency that commands

  • 06:15

    PAUL TAYLOR [continued]: public confidence and respect, and they would do the linkage there. And then the anonymized linked data would be released to the researchers. So the researchers never see the identified data; they're still working with entirely anonymized data. But there is this awkward step whereby at some point the personal data has to be released

  • 06:36

    PAUL TAYLOR [continued]: from the hospital, the GP surgery, where it's held. And that is problematic. One of the things that's been controversial in this area, and it's been something of a problem in recent years, was that the government attempted to create a national database by mandating that GPs would be obliged to return all information on their patients that was stored on computer to a central database.

  • 06:57

    PAUL TAYLOR [continued]: In many ways, that would have been great. It would've been a fantastic resource for researchers and it would've been brilliant for the NHS, and it would have allowed people to do some really significant analysis that would have helped with research, but also helped just with the management of the NHS. But GPs are in a slightly different position to hospitals. That kind of data is routinely being returned, essentially to the government, by hospitals.
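    The trusted-third-party linkage step described a moment ago can be sketched roughly as follows. Everything here is invented for illustration (the field names, the records, and especially the hard-coded salt, which in a real scheme would be a properly managed key held only by the trusted party).

```python
import hashlib

def pseudonym(nhs_number: str, salt: str) -> str:
    # The trusted third party replaces the identifier with a keyed hash,
    # so records can be linked without revealing who they belong to.
    return hashlib.sha256((salt + nhs_number).encode()).hexdigest()

SALT = "known-only-to-the-trusted-third-party"  # toy stand-in for a managed key

gp_data = [{"nhs": "111", "bp": "150/95"}, {"nhs": "222", "bp": "120/80"}]
hospital_data = [{"nhs": "111", "outcome": "stroke"}]

# Link the two data sets on the pseudonym; the real NHS number never
# appears in the linked output released to researchers.
linked = {pseudonym(r["nhs"], SALT): {"bp": r["bp"]} for r in gp_data}
for r in hospital_data:
    key = pseudonym(r["nhs"], SALT)
    if key in linked:
        linked[key]["outcome"] = r["outcome"]

for row in linked.values():
    print(row)  # primary-care and hospital fields together, identifier stripped
```

    The researchers receive only the linked rows keyed by pseudonym, which is why, as the transcript notes, the awkward step is the release of identifiable data to the trusted party, not to the researchers.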

  • 07:18

    PAUL TAYLOR [continued]: And that's the basis on which hospitals are reimbursed. The GPs began a sort of rebellion, almost, where they were working in conjunction with confidentiality campaigners to organize an opt-out campaign. And I think in the end something like 1.5 million patients said that they wanted to opt out of the scheme.

  • 07:38

    PAUL TAYLOR [continued]: And they're allowed to opt out, and Jeremy Hunt set up a mechanism by which people could do that. But it kind of undermined the credibility of the exercise. And in the end, the government abandoned it. And I think there's been a lot of caution now about data collection efforts in this space. And I think we're kind of living with the consequences

  • 07:59

    PAUL TAYLOR [continued]: of that rather ill-thought-out initiative. [What are some of the implications of data ownership and consent in medical research?] People often talk, and we talk very naturally, about the patient's data as though the data belongs to the patient. And in truth, it doesn't. The legal status of data is actually quite complicated.

  • 08:19

    PAUL TAYLOR [continued]: So in one sense, a data item is a fact. It's just a truth and it doesn't belong to anyone. But if you do some work in recording data or collating data, then that's your sort of intellectual property and you do have some rights over that as the person who did the work to create it.

  • 08:40

    PAUL TAYLOR [continued]: But the patient doesn't actually own it at all. The patient has a set of rights, and the clinician has a set of duties towards the patient. And those constrain the way in which the data can be used. So it's not that the patient doesn't have any stake in the data, but they don't have ownership of it. It's very important that the patients are kept on board.

  • 09:04

    PAUL TAYLOR [continued]: I think that, as researchers, we have to be very clear that we're doing what we're doing with the patient's consent, explicit or otherwise, and that we don't do anything which we wouldn't want to be seen on the front page of The Daily Mail. You have to be very clear that what you're doing is the right thing.

  • 09:25

    PAUL TAYLOR [continued]: What you're doing wouldn't excite controversy or disapproval. [What different types of data are used in medical research?] So there are lots of differences between routinely collected data and the kind of research data which has traditionally been used to answer questions. And one thing that you have to bear in mind

  • 09:46

    PAUL TAYLOR [continued]: is that you're using data which wasn't recorded for the purpose for which you're using it. It was recorded maybe for administrative purposes or for some other reason. And that can introduce biases. And a good example of that is the use of mortality to measure the quality of care in hospitals. And this was something that Brian Jarman introduced in the wake of the scandal around the performance

  • 10:11

    PAUL TAYLOR [continued]: of cardiac surgeons at the Bristol Royal Infirmary. And obviously if you're using mortality as a measure of quality, you have to record other information in order to make sure that you're doing a like-for-like comparison between different hospitals, because obviously hospitals with different case mixes will have very different mortality

  • 10:32

    PAUL TAYLOR [continued]: rates. And what they found was that there were things which would substantially affect the metric they were using. Hospitals could, for example, be more diligent about recording whether or not a patient was receiving palliative care, because that would very substantially affect the normalization process, which altered the quality measure.

  • 10:55

    PAUL TAYLOR [continued]: And so this was shown really quite starkly at Mid-Staffs. When these mortality measures showed that the Mid-Staffs hospital was problematic, and this was the first sign that things were going wrong there, the first reaction of managers there wasn't to go and check the quality of nursing or to try and improve what they were doing. Their first reaction was to go down to the coding department

  • 11:15

    PAUL TAYLOR [continued]: where people were recording the hospital episode statistics, the data that was actually used in these metrics, and increase the proportion of patients who were recorded as receiving palliative care. So the proportion of patients who died who were on the record as being palliative care patients went from roughly zero to roughly 40%. And that had a significant impact on the statistics.
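    The coding effect described above can be made concrete with a toy casemix-adjusted mortality ratio. The risk weights here are invented, not the actual hospital mortality model; the point is only that recoding admissions changes the "expected" denominator while the observed deaths stay exactly the same.

```python
# Toy standardised mortality ratio: observed deaths / expected deaths,
# where the expected risk is (hypothetically) much higher for admissions
# coded as receiving palliative care.
def expected_deaths(patients):
    return sum(0.60 if p["palliative"] else 0.05 for p in patients)

def mortality_ratio(observed, patients):
    return observed / expected_deaths(patients)

patients = [{"palliative": False} for _ in range(1000)]
observed = 70  # deaths actually recorded; this never changes

before = mortality_ratio(observed, patients)   # 70 / 50  = 1.40 (looks bad)
for p in patients[:100]:                       # recode 10% as palliative
    p["palliative"] = True
after = mortality_ratio(observed, patients)    # 70 / 105 = 0.67 (looks fine)

print(round(before, 2), round(after, 2))
```

    Same deaths, same care: only the coding changed, yet the adjusted metric moves from alarming to reassuring, which is the bias the transcript describes.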

  • 11:35

    PAUL TAYLOR [continued]: But obviously it didn't improve the patient care in any way at all. And I think that's a really good example of the ways in which management processes can bias what's recorded in the data. Another more recent example comes out of the controversy around the mortality rate for patients admitted at the weekend. Jeremy Hunt wanted to use the suggestion that mortality

  • 11:59

    PAUL TAYLOR [continued]: was much worse for patients admitted at weekends in order to change doctors' contracts. And because that was controversial, that research got a great deal more scrutiny than other research might. And people identified lots of reasons why the effect might actually be an artifact. So there was a suggestion that the information that was recorded and used to assess the severity of patients

  • 12:21

    PAUL TAYLOR [continued]: admitted at the weekend, compared to patients admitted during the week, wasn't really robust or detailed enough to allow a comparison. Now, I have no idea who was right. It was just clear from the controversy that actually there were lots of ways that you could pick at this, and maybe the effect wasn't as strong as it might have seemed.

  • 12:42

    PAUL TAYLOR [continued]: [What are some methodological challenges of working with medical data at a large scale?] And one of the difficulties that we've confronted in a project that I'm working on at the moment is knowing that you don't know what happened to patients. I'm working on an ophthalmology project and we've got data on patients who were

  • 13:03

    PAUL TAYLOR [continued]: treated for a period of time. And we've got a record of when they received treatments. And we've got a record of their quality of vision, their visual acuity, just the results of an eye test, over that period. But then the record maybe stops. And we don't know whether or not that was because the patient decided it wasn't worth continuing with treatment, or whether the patient died,

  • 13:25

    PAUL TAYLOR [continued]: or whether the patient moved away. All we've got is the data that we've got. So it's very hard to look at that and be confident that there's not some artifact in the data set as a result of the missing data. [How do you deal with missing data?]

  • 13:47

    PAUL TAYLOR [continued]: There are lots of statistical techniques that you can apply, with names like imputation and multiple imputation and so on. And of course, sometimes the missing data can be interesting. I mean, there's some work that a colleague of mine did looking at missing data, and actually found that the patients where data was missing were at greater risk than other patients.
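    A minimal sketch of the simplest of those techniques, single (mean) imputation, together with an indicator preserving the fact that a value was missing, since, as just noted, missingness itself can be informative. The values are invented; a real study would usually prefer something like multiple imputation over this.

```python
def impute_mean(values):
    """Fill missing entries (None) with the mean of the observed ones,
    and keep an indicator of which entries were imputed."""
    observed = [v for v in values if v is not None]
    mean = round(sum(observed) / len(observed), 3)
    filled = [v if v is not None else mean for v in values]
    was_missing = [v is None for v in values]
    return filled, was_missing

acuity = [0.8, None, 0.6, 0.7, None]   # hypothetical visual-acuity scores
filled, was_missing = impute_mean(acuity)
print(filled)        # [0.8, 0.7, 0.6, 0.7, 0.7]
print(was_missing)   # [False, True, False, False, True]
```

    Keeping the `was_missing` flags alongside the filled values lets a later analysis test exactly the question raised above: whether patients with missing data differ systematically from the rest.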

  • 14:09

    PAUL TAYLOR [continued]: So sometimes you can learn something simply from the fact that the data is missing. But I think that overall you just have to be really careful that you're not reading too much into the data. And very often, I would say, if you've got a choice: with a study that's done using this kind of data, you've

  • 14:29

    PAUL TAYLOR [continued]: got the advantage of size and scale and richness. But you're in a much weaker methodological position than with a randomized controlled trial. So you'd probably prefer, if you could, to rely on the results of a randomized controlled trial. Except it becomes more complicated again, because if you look at randomized controlled trials, very often they're done in a very circumscribed population.

  • 14:52

    PAUL TAYLOR [continued]: So when we compared our ophthalmic data on patients treated with anti-VEGF, which is the current state-of-the-art treatment for age-related macular degeneration, we found that the trials that were used to demonstrate that the treatment was effective were on patients with quite low vision, where you'd expect to be able to demonstrate

  • 15:13

    PAUL TAYLOR [continued]: an improvement. Whereas from a clinical point of view, you're very often interested in patients with early-stage disease, who haven't suffered the same reduction in their visual acuity as a result of the disease, and in seeing what vision you can save; and you actually get much better outcomes if you treat early. The work that we were able to do with the routinely

  • 15:35

    PAUL TAYLOR [continued]: effective in a much wider populationthan people had been able to studyin randomized controlled trials.[How can big health data be integrated with moretraditional randomized controlled trials?]So people are getting increasinglyinterested in trying to use routinely collected datato do the work that has, up to now,

  • 15:57

    PAUL TAYLOR [continued]: generally been done by randomized controlled trials, and in looking at different ways of doing randomized controlled trials. So that might work in a way which is actually very close to a randomized controlled trial, but automating the recruitment of patients who are just attending their GP surgery. So one thing that's really exciting

  • 16:19

    PAUL TAYLOR [continued]: is that if you can automate the mapping from the inclusion and exclusion criteria of a trial to what gets recorded in routine data in the management of the patient, then you can sort of automatically recruit into a trial. Obviously, there has to be some consent process if you're changing the treatments in some way, but you might not be. And that would hugely expand the potential

  • 16:43

    PAUL TAYLOR [continued]: for carrying out randomized controlled trials. [What are some examples of recent big health data research, and why are they controversial?] So one controversy that blew up relatively recently was the transfer of data from the Royal Free Hospital to Google DeepMind. And that's proved very controversial.

  • 17:03

    PAUL TAYLOR [continued]: And what happened there was that clinicians at the Royal Free Hospital went to Google DeepMind and said, we've got this problem with patients who are dying because we are not detecting early enough that they've got acute kidney injury. And Google DeepMind said, well, this would be a great project for us to get involved with, and we can produce an app which would alert clinicians to this very early on.

  • 17:23

    PAUL TAYLOR [continued]: And they produced something very, very quickly and the clinicians were really happy about it and thought it was really good. But then the data sharing agreement between Google DeepMind and the Royal Free was leaked to the press. And it turned out that a substantial amount of data had been transferred to a third-party server, a trusted third party, where it was then being

  • 17:44

    PAUL TAYLOR [continued]: accessed by Google DeepMind. And that was quite alarming because, A, it was identifiable data, and B, people really didn't quite know what it was being used for, because the app that Google DeepMind created for the Royal Free didn't seem to require the kind of data that was being transferred.

  • 18:05

    PAUL TAYLOR [continued]: And Google DeepMind, of course, is famous as an AI company. And so the thought was that maybe they were using it for something else, because they seemed to have ambitions which went beyond what they said they were doing for the Royal Free. And then as the controversy sort of worked its way through, the Information Commissioner's Office got involved

  • 18:27

    PAUL TAYLOR [continued]: and DeepMind were asked some questions, and their response was that they were using the data to test the algorithm. And the position that the Royal Free adopted was that it was legitimate for them to have identifiable data because they were an IT provider. They were in the same position as any other company providing IT services to the trust, who would naturally, in the course of their usual business, store data.

  • 18:50

    PAUL TAYLOR [continued]: But the Information Commissioner's Office said, well, if you say that you're using the data to test an algorithm, which is what they said they were doing, we would say that's not within that remit, and therefore you should not have done this. And I'm not quite sure whether anyone's been held to account or fined or anything, but certainly the position is that that transfer shouldn't

  • 19:12

    PAUL TAYLOR [continued]: have happened. And it's been, again, I think quite damaging in terms of public confidence. Partly because it's Google. And so this is a big company and there's a degree of anxiety in the general public about what kinds of things Google knows about all of us. And it also plays into this anxiety

  • 19:34

    PAUL TAYLOR [continued]: about the role of commercial interests in research, and people being less willing to volunteer their data if it's being used for a commercial purpose. A lot of the coverage of it, I thought, slightly missed the point, because a lot of the anxiety is about privacy. People are saying, I don't want Google to have my data.

  • 19:56

    PAUL TAYLOR [continued]: And actually, I don't think there's a concern there. I think it's unlikely that Google will be sort of accessing people's information and using it inappropriately or using it for some other commercial purpose. I would trust them that far. The concern that I have really is whether the value

  • 20:20

    PAUL TAYLOR [continued]: of the asset, which is in some sense a public asset, created by a state-sponsored enterprise, the NHS, is being given away to commercial companies who are then going to use it to create artifacts which the NHS will then have to buy. We don't really know what the commercial arrangement is between the Royal Free Hospital and Google DeepMind. I trust, in a way, the integrity

  • 20:45

    PAUL TAYLOR [continued]: of the clinicians involved; that they as individuals were acting in a way that they thought would benefit their patients. Because from their point of view, that data is just sitting there. No one's doing anything with it. So they may as well give it to a company that can then do something useful with it. But on the other hand, look at it from a different perspective. We're seeing a situation where a very, very valuable asset

  • 21:07

    PAUL TAYLOR [continued]: of the UK, of the NHS, is being given to these companies. And we're not quite sure on what commercial basis. But really at the end of it, it's going to create a product which the NHS is going to have to buy. And so we should be doing more, I think, to try and make sure that there are some benefits to the NHS, not the Royal Free Hospital itself

  • 21:27

    PAUL TAYLOR [continued]: alone, but the NHS as a whole, from the use of this data to create these products. It has to be said, we're very bright people and we think we're doing great stuff, but we can't really compete with the likes of Google when it comes to their capacity to attract talent and the resources that they have. So the best initiatives in this space

  • 21:48

    PAUL TAYLOR [continued]: are probably going to come from those kinds of companies. And we have to be realistic about that. But that doesn't mean we should just give in and let them have everything they want for free. I wouldn't take that attitude. [What concerns are there around linking health data and patient privacy?] So one of the concerns that people

  • 22:10

    PAUL TAYLOR [continued]: have with anonymized data is privacy. It used to be the case that things were relatively straightforward. We all knew that identifiable data had to be treated very carefully. And then we were given greater latitude in what we did with anonymized data. But now we have this kind of gray area in between, which is potentially identifiable data.
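    A toy illustration of why "potentially identifiable" is a real category: even with names and other obvious identifiers removed, a combination of ordinary attributes can single out one record when joined against a public source. Every record and field here is invented.

```python
# An 'anonymized' extract: names removed, but quasi-identifiers kept.
hospital_extract = [
    {"dob": "1961-03-02", "sex": "F", "postcode": "N1", "diagnosis": "fracture"},
    {"dob": "1975-07-19", "sex": "M", "postcode": "E8", "diagnosis": "overdose"},
]

# Facts recoverable from a public source, e.g. a news report that names
# someone and mentions their hospitalization.
report = {"name": "J. Doe", "dob": "1975-07-19", "sex": "M", "postcode": "E8"}

matches = [r for r in hospital_extract
           if (r["dob"], r["sex"], r["postcode"])
           == (report["dob"], report["sex"], report["postcode"])]

if len(matches) == 1:
    # A unique match re-identifies the 'anonymous' record.
    print(report["name"], "->", matches[0]["diagnosis"])
```

    When the quasi-identifier combination matches exactly one record, the "anonymous" diagnosis is attached to a named person, which is the mechanism behind the re-identification work discussed next.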

  • 22:30

    PAUL TAYLOR [continued]: Because actually, if you know enough about somebody, even if their name and date of birth and address and the most obvious identifiers are removed, it's still possible to uniquely identify that individual. And Latanya Sweeney, who is a researcher at Harvard, has done some really compelling work in this space. So, for example, in the States, in Washington State,

  • 22:51

    PAUL TAYLOR [continued]: you can buy the anonymized hospital records for a few hundred dollars, I think it is. And so she bought this data set, and then she compared it with a database of printed news articles from the same period of time and searched for newspaper reports

  • 23:12

    PAUL TAYLOR [continued]: containing the word hospitalization. And she was able to identify a number of reports of hospitalization in which the hospitalized individual was named. And then by comparing what was in the newspaper article, with a named individual, with what was recorded in the hospital data, she was actually able to identify a few dozen individuals. And in most of those cases, there wasn't anything controversial,

  • 23:35

    PAUL TAYLOR [continued]: because these were road traffic accidents. But in some of them, there was information recorded in the anonymous hospital records which the individual wouldn't have wanted known: about alcohol, about drugs, or about payment issues. And I think that shows just how easy it is to re-identify a de-identified record. And I think that means it's difficult to give

  • 23:56

    PAUL TAYLOR [continued]: the public an absolute guarantee that they shouldn't worry about someone getting access to the data because it's all been anonymized, because you can never give that absolute guarantee. So what we have at the moment is a guarantee based on the integrity of the researchers. We undertake as professionals that we would never do that kind of thing. It's not that we're not curious, but really what

  • 24:18

    PAUL TAYLOR [continued]: we do doesn't lead us down that kind of avenue. But I think it's quite hard to get that message across to the public, because they don't really understand what we do, and they don't really understand what kind of data patient data, from our perspective, is. We're looking at very large databases, very kind of abstract information. And it's not really like looking over the handwritten notes

  • 24:39

    PAUL TAYLOR [continued]: of a general practitioner. It's much more abstract. And I think, perhaps, much less alarming if people knew that. But it's a difficult message to get across to the general public, I think. [What steps should be taken to anonymize data to ensure it can't be reidentified?] This has been an area that's received

  • 25:00

    PAUL TAYLOR [continued]: quite a lot of attention. Fiona Caldicott, who's written a series of reports on the use of patient data, has thought a lot about this. And I think really it has to be to do with trust in the researcher. So there's this talk about data safe havens, and this idea that there would be controlled environments

  • 25:21

    PAUL TAYLOR [continued]: in which the data could be stored, so it would only be accessible under certain conditions by certain people. And we adhere very, very strictly to that in our work. And I think that gives people some security: it should never be on a USB stick or on a laptop or in something where it could just

  • 25:42

    PAUL TAYLOR [continued]: get lost and get into the wrong hands by accident. So you do have to take these precautions. And we make sure that all of our students who have access to the data do access it in a defined way, and everybody who accesses it gets the relevant information governance training. So it's all to do with simple things and being responsible.

  • 26:05

    PAUL TAYLOR [continued]: But again, I think it's a difficult message to get across to the general public that that's all there and you can have confidence in that, because it's just a world that so few people know about. [What new big data sources are becoming available for medical research?] So far, I've really just talked about the kind of data that gets recorded in looking after patients

  • 26:25

    PAUL TAYLOR [continued]: in the traditional way, where a GP or a hospital doctor would take notes. But the world is changing very quickly. And the kinds of data that are becoming available are changing very quickly. And we do get into big data when we start thinking about genome sequences. There's a project starting at UCLH where we're going to routinely sequence

  • 26:46

    PAUL TAYLOR [continued]: the genome of all patients coming through UCLH who consent, to create a database for research. And the hope there is that patients will get very involved in that research and drive the kinds of questions that get pursued. Medical imaging is being used much more widely. And there are many more different kinds of imaging, which record very different information

  • 27:08

    PAUL TAYLOR [continued]: about the body and the function of the body. And, of course, there's information that's outside of health care that we can also use to answer health questions. So you can get a lot of information from somebody's smartphone, or from analyzing social media data, or other sorts of information. And I think one thing that's happening now is that people are starting to use voice-controlled

  • 27:30

    PAUL TAYLOR [continued]: devices like Alexa and so on. And then you've got a whole other category of data that's being recorded by people. There's a project somebody did a pilot for recently. It's obviously early days for using this kind of technology. But they were seeing whether you could detect, from the voice records that were recorded by Alexa, early signs of dementia.

  • 27:53

    PAUL TAYLOR [continued]: So you can imagine all kinds of ways in which this kind of data could be used to understand the progress of disease, in ways that we haven't really begun to think about up to now. So there's a guy I'm in touch with at the moment who's got a company. And they have a service that they provide to people who are being treated for substance abuse.

  • 28:16

    PAUL TAYLOR [continued]: And that tracks the patient's movements. So these are volunteers, right? These are patients who want to be looked after because they recognize that they've got a difficulty. And the app will identify where they're going and whether they're deviating from their typical patterns of behavior.

  • 28:36

    PAUL TAYLOR [continued]: And then it may be that some kind of AI applied to this data will be able to detect people who are beginning to experience some kind of relapse. And there's a similar approach for patients with schizophrenia, where early signs of a worsening of the condition are disruption of sleep patterns. And you can pick those up from the way people

  • 28:57

    PAUL TAYLOR [continued]: use their electronic devices.
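    A toy version of the sleep-pattern idea just described: compare the latest night against the person's own baseline and flag large deviations. The readings and the threshold are invented for illustration; any real clinical system would be far more careful than a single z-score.

```python
from statistics import mean, stdev

def sleep_disrupted(nightly_hours, threshold=2.0):
    """Flag the most recent night if it deviates from the individual's
    own baseline by more than `threshold` standard deviations."""
    baseline = nightly_hours[:-1]
    z = (nightly_hours[-1] - mean(baseline)) / stdev(baseline)
    return abs(z) > threshold

history = [7.5, 7.0, 7.8, 7.2, 7.4, 7.1, 3.5]  # last night is an outlier
print(sleep_disrupted(history))  # True
```

    The key design point, as in the transcript, is that the comparison is against the patient's own typical pattern rather than a population norm, so gradual individual differences don't trigger alerts but a sudden disruption does.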


Paul Taylor, PhD, Reader at the Institute of Health Informatics University College London, discusses the use of real-world medical data in health research, including how health care data differs from other big data, how researchers can access health data, implications of data ownership and consent, types of data used in medical research, methodological challenges working with large-scale medical data, integration of health data with traditional randomized trials, examples of recent big health data research, patient privacy concerns, and new big data sources for medical research.
