SPEAKER 1: You know, what we are now doing is the Curtis Meinert Lecture. Dr. Meinert was a professor at Johns Hopkins and director of the Center for Clinical Trials from 1990 to 2005. He has been in clinical trial leadership since the 1960s: a founding member of SCT, a founding editor of the Society for Clinical Trials journal, Controlled Clinical Trials, now the Clinical Trials journal, from 1980 to 1993, and the inaugural SCT fellow in 2006. He has a website there. And the reason that you're all here for this opening talk,
SPEAKER 1 [continued]: and I think is a matter of us really getting inspired, is the joint Curtis Meinert Lecture. And it is really my pleasure to introduce Dr. Susan Ellenberg. She'll be presenting on "Clinical Trials in the 21st Century: Ongoing Challenges and Emerging Issues." Dr. Ellenberg has had an absolutely outstanding career in biostatistics and clinical trials, has been the statistical PI of numerous large-scale randomized trials, and is a professor of biostatistics at the University of Pennsylvania. It is truly an honor to be introducing her here. And without further ado, I'm going to hand things over to her, because she has a very compelling talk. And I think you all are going to enjoy it greatly. So join me in welcoming Dr. Ellenberg.
DR. SUSAN ELLENBERG: Well, good morning, everyone. I feel very, very honored to be giving this lecture. Curt was one of the very first people that I met in my journey in the world of clinical trials. I met him back in the mid-1970s, at the meetings that preceded the formation of the Society for Clinical Trials. I think they were called something like the Meetings on Long Term Clinical Trials, and some of you in the audience remember those meetings. He's been a mentor to me over the years, and it's wonderful to be here to give this lecture.
DR. SUSAN ELLENBERG [continued]: So for those of you who haven't seen this bird yet, this is the liver bird. It's an iconic symbol of Liverpool, but I'm told it has nothing to do with the name Liverpool. I've been here since Saturday and have had a chance to go to some of the museums around, including the Museum of Liverpool, where I took this picture. So I hope all of you will have the opportunity to explore this city. It's very interesting. So, the clinical trials timeline. I think we all accept that the start of the modern era of clinical trials was in the middle of the 20th century, with the streptomycin trials at the MRC and the US VA.
DR. SUSAN ELLENBERG [continued]: In 1962, a major happening: the US Food, Drug, and Cosmetic Act was amended to require demonstration of efficacy as well as safety. Many people are often astonished to hear that before 1962, it was not required to show that a drug actually did anything before it was approved. In 1964, the first version of the Declaration of Helsinki, the international statement of bioethical issues in medical research, was published. 1968, the UK Medicines Act. 1979, the Belmont Report in the US, which was the origin of the US IRB rules. In 1996, the Good Clinical Practice document of the International Conference on Harmonisation was published, which I think established a lot of very strong principles for good research around the world. And then the EU directive in 2001. So these are a few of the important timelines for clinical trials.
DR. SUSAN ELLENBERG [continued]: Over the last 70 years or so, a variety of issues have arisen. Some are newly emerging; some are with us all the time, intensifying or not. I listed some of them on this slide. I'm not going to talk about all of them. Some of them, like targeted designs, are somewhat newer; some of them, like subgroup findings, have been talked about for many, many years. So I would like to focus this morning on some of the big-picture issues: getting the right answer faster, getting a real-world answer, reducing the cost of drug development, individualizing treatment, ethical issues in trial design, and maybe at the end a little bit about the political scene. So let's start off with individualization of treatment. Matching drugs to patients is something that has inspired a lot of people. It sounds really terrific. And the idea is not entirely new. I remember back when I started in clinical trials
DR. SUSAN ELLENBERG [continued]: and I was working in cancer research, they had something called the human tumor clonogenic assay, which involved taking samples of tumors from people, putting them in Petri dishes, and testing drugs with them to see whether that particular tumor from that particular patient would be responsive. It didn't work very well, but the concept of individualization has been with us for a long time. The most progress and the most activity in this area has been in cancer, where drugs are now largely designed to target tumors with certain genetic characteristics. And there certainly have been some successes: drugs that lead to great responses in tumors that have certain types of mutations. But there are also uncertainties. We've found that drugs targeted to particular mutations maybe don't work terrifically well, maybe they work modestly, maybe they also work in people with tumors without those mutations. And we've seen a few examples of that. So this is an area where we're still learning, but we're certainly pursuing it very actively. And people are looking into this in other areas as well. And this has been a boon for people who like to figure out new study designs.
DR. SUSAN ELLENBERG [continued]: So we have many different approaches for understanding which drugs work well and in which people. We have, of course, randomizing only those who express the molecular target. There are designs that allow us to randomize everybody, an "all comers" design, but perhaps allocating more alpha to the targeted subgroup. Maybe we'll have a multi-stage design, where we select the responsive subjects at the end of one stage and then continue the trial with more alpha allocated to that subset for the final analysis. We've seen trial designs where you randomize initially to a marker-based strategy versus a non-marker-based strategy, and then to the regimen within those groups. So there are a lot of different approaches. And now we have basket trials and platform trials, again intended to determine which treatments work best in which patient subsets, with the hope that these designs will be more efficient. In cancer research, certainly all the time that I was involved, you would typically focus on the site: we have drugs for lung cancer, or certain types of lung cancer, drugs for breast cancer, drugs for prostate cancer. A basket trial focuses on the particular tumor mutation rather than the site of the tumor. A particular tumor mutation might be seen both in a lung cancer and in a breast cancer. So these trials will have a basket of tumor sites, but everybody will have the same tumor mutation. Platform trials are a little bit like the randomized phase two trials that have been around for a long time, where we can evaluate multiple treatments. Some of these designs will use response-adaptive randomization. They may also consider patient characteristics, looking for the best treatment for patient subtypes. And the Bayesian adaptive designs are the ones that have been most prominent here, the I-SPY 2 type studies. So what are some of the issues with the move to individualization of therapy?
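Before getting to those issues, the response-adaptive randomization just mentioned can be sketched very simply. The following is a minimal, Thompson-sampling-style illustration with made-up interim counts, assuming binary outcomes and Beta-Bernoulli posteriors; it is not the actual I-SPY 2 algorithm.

```python
import random

def rar_weights(successes, failures, n_draws=2000):
    """Estimate, by posterior simulation, the probability that each arm is
    the best one, and use those probabilities as randomization weights.
    Assumes binary outcomes with Beta(1 + s, 1 + f) posteriors per arm."""
    wins = {arm: 0 for arm in successes}
    for _ in range(n_draws):
        draws = {arm: random.betavariate(1 + successes[arm], 1 + failures[arm])
                 for arm in successes}
        # Credit the arm whose sampled response rate is highest this draw.
        wins[max(draws, key=draws.get)] += 1
    return {arm: wins[arm] / n_draws for arm in wins}

random.seed(0)
# Hypothetical interim data after 40 patients: arm B is doing better,
# so it receives a larger share of future allocations.
weights = rar_weights(successes={"A": 6, "B": 12}, failures={"A": 14, "B": 8})
```

The ethical debate described below is precisely about whether shifting allocation weight toward the apparently better arm, as this sketch does, is preferable to fixed one-to-one randomization.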
DR. SUSAN ELLENBERG [continued]: Well, first, we might ask which diseases and conditions will really require individualized treatment. It's going to be a lot more work to study multiple subsets instead of just one, and we certainly do have drugs that work broadly across the population. So sorting that out, I think, is going to be important. And related to that, because of the efficiency issue: will focusing on targets delay identification of broadly effective treatments? What about a drug like platinum, which is broadly effective in some areas of cancer; would we be slowed down in identifying something like that? Are our assays sufficiently sensitive and specific? Finding people with certain targets, people who express certain molecular targets, depends on whether we have an accurate assay to find that out. So that's a challenge. The highly complex approaches that have been developed, which require regular simulations to refine the allocation algorithm: are these really going to be substantially more efficient than simpler approaches? I think that's still a question that remains to be answered. And then there is response-adaptive randomization, which has been around for a while. There are remaining ethical debates, there are always ethical debates, about whether it is, in fact, more ethical to increase the probability of being assigned to what appears to be the effective treatment, rather than continue the one-to-one randomization all the way through. Is that more ethical, or are there problems with that? And finally, there is the issue of cost.
DR. SUSAN ELLENBERG [continued]: Is doing this going to lead to very, very expensive drugs for a very small subset of the population, and are we going to be able to afford it? All right, moving on to pragmatic trials. This is a big topic now, although again, the concept has been around for a long time. Schwartz and Lellouch published their paper in 1967 on explanatory and pragmatic trials. Explanatory trials were defined as trials to answer a very specific scientific question, and the implication there would be to conduct a trial where you controlled heterogeneity as much as possible so you could isolate the treatment effect. Whereas for a pragmatic trial, the purpose was to answer a practical question, which treatment to use under real-world conditions, where the results were intended to be widely generalizable. And the large sample trials that we had starting in the 1980s were good examples of pragmatic trials. So what is this real-world approach?
DR. SUSAN ELLENBERG [continued]: Well, we want minimally restrictive entry criteria to maximize generalizability, and also minimal restrictions on the treatment approach other than the treatment under study. We want to focus on how to make these trials less resource-intensive so that we can afford to do them. You can't do a very large trial unless you figure out ways to reduce the resources required. And so there's a lot of effort now to see how we can integrate trials into clinical care, maybe retrieving data from electronic health records rather than requiring completion of data forms. That's challenging, since these EHRs have been largely developed for cost and payment issues rather than for scientific research. We need to simplify the informed consent process. But there is a lot of effort now. There's an NIH Collaboratory, and the Patient-Centered Outcomes Research Institute in the US, that are fostering the development of such trials. The Clinical Trials Transformation Initiative, which is a public-private partnership of the US FDA and Duke University, is very much focused on pragmatic trials. So here are some examples of some trials that are either ongoing or have recently been completed.
DR. SUSAN ELLENBERG [continued]: Will lengthening the duration of dialysis sessions prolong the lives of people with end-stage renal disease? Will use of regional, rather than general, anesthesia in hip fracture repair result in more rapid recovery? Can a multidisciplinary approach that integrates psychosocial services with medical care improve outcomes in people with chronic pain? So these are very real-world types of questions. They're not issues that a pharmaceutical company is likely to pursue, and it'll be interesting to see how these develop. But there are certainly issues with pragmatic trials that those of us who are involved with them are having to deal with.
DR. SUSAN ELLENBERG [continued]: The increased heterogeneity that we're looking for to get generalizable results means that there's going to be larger variance. We're going to require bigger studies, and cluster randomized trials in particular, which have been very commonly used in pragmatic research, will require many more patients.
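The sample-size inflation for cluster randomization is commonly summarized by the standard design effect, 1 + (m - 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. A small sketch with illustrative numbers (the cluster size, ICC, and baseline sample size here are assumptions, not figures from the talk):

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor for cluster randomization:
    DE = 1 + (m - 1) * ICC, with m the average cluster size."""
    return 1 + (cluster_size - 1) * icc

# Illustrative assumptions: 50 patients per clinic, a modest ICC of 0.02.
de = design_effect(cluster_size=50, icc=0.02)
n_individual = 400                    # hypothetical individually randomized sample size
n_cluster = round(n_individual * de)  # inflated requirement under cluster randomization
```

Even this modest ICC nearly doubles the required sample size, which is the point being made about cluster randomized pragmatic trials.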
DR. SUSAN ELLENBERG [continued]: The informed consent issues have been discussed a lot. The question is whether you even need informed consent in some of these trials, where the things you're comparing are things that, if you weren't doing a trial, the clinic or the physician would just arbitrarily choose between. And so there's been a fair amount of debate about that. And it's even more complicated in cluster randomized trials, where the clinic, or the hospital, or some other site has already been randomized, and so the people coming there may not really have an option of going somewhere else. So there's a lot of debate about when you need informed consent and how much we can simplify informed consent to make it doable in these kinds of studies. Can pragmatic trials have a role in drug development? Well, I think that's particularly challenging. Simplifying and reducing data collection will limit the ability to explore the data if the results are not as hoped. Often, a drug company may do a study and it comes out not positive, but there is a lot of biological plausibility. People want to explore the data and see whether there's some explanation for why it didn't come out that way, some fix that could be made. That would be difficult to do if less data are collected.
DR. SUSAN ELLENBERG [continued]: The regulatory requirements in some cases may block attempts at simplification. There was a recent large trial comparing pain relievers, the PRECISION trial, which was mandated by the FDA after there were concerns that one or more of these standard pain relievers might be more associated with cardiovascular adverse events; the FDA asked the companies to do a big trial comparing naproxen, ibuprofen, and celecoxib. We had a conference a couple of years ago on pragmatic trials, and I invited the statistician for this study to come and talk about it. I thought that surely a trial this large, about 25,000 people, had to have some pragmatic aspects. So he surprised me. He said there was nothing, nothing pragmatic about this trial. There was so much concern in the regulatory arena about the safety issues that everything had to be done every bit as meticulously, as rigorously, and as carefully as in any of the trials they were doing to get a new drug approved. So we had one talk in our conference on pragmatic trials that was about a decidedly non-pragmatic trial. So we'll see whether there really may be a role in drug development for pragmatic trials.
DR. SUSAN ELLENBERG [continued]: So I want to say a little bit about hypothesis testing, because it's been sort of in the news lately. It's been well established in medical research, but there's been pushback. In 1990, Ken Rothman, an epidemiologist then at Harvard, founded the journal Epidemiology and banned p-values. He said, we're not publishing any p-values in our journal; we will publish confidence intervals. So that was an early pushback against p-values. More recently, a psychology journal has banned all statistical inferential procedures and just urged authors to do a better job of presenting good descriptive statistics, feeling that smart people reading these descriptive statistics will be able to figure things out. So it has occurred to me: why is it that there are hypothesis testing haters? Here are some of the arguments that people have made. First, hypothesis tests have been interpreted as absolute arbiters of truth,
DR. SUSAN ELLENBERG [continued]: rather than as what they are: as most of us would use them, they're tools to help us make our best guesses at truth. But it's certainly true that in some areas, and certainly in some journals, p less than 0.05 is the be-all and end-all. Second, people argue that hypothesis tests don't formally account for lots of information external to the experiment that is relevant to decision making. Well, of course that's true. I have a colleague at the FDA who tells me he doesn't care how many studies of homeopathy are done with a p-value less than 0.05; he doesn't believe that homeopathy has any effect, because his understanding of the biology is such that it can't possibly work. So we all take external information into account, and it's true that hypothesis tests don't account for that. Third, and this is a traditional bugaboo, hypothesis tests don't allow us to say how likely something is to be true, even though they're often misinterpreted that way. That's what most people really want to say, and we can't do that in the hypothesis testing framework. Finally, results of hypothesis tests tell us nothing about the importance of the result. And I've heard the arguments that, well, the null hypothesis is never exactly true, and if you do a trial big enough, you'll find a significant difference. That's really not so much of a problem for us in clinical trials, having a study so big that a meaningless difference becomes significant, but it's certainly an argument that can be made. So in light of a lot of this debate, the American Statistical Association convened a group of statisticians to draft a statement about statistical significance and p-values. This document offers six principles that address some of the misconceptions. Now, it's not a consensus statement, and some of the people who were involved say they disagree with most of it, so you have to read it with that in mind. The statement appeared last year in The American Statistician, and there's supplementary material, including comments from other statisticians, so you get a feeling for what the debates are.
DR. SUSAN ELLENBERG [continued]: So these are the six principles that were published. First, p-values can indicate how incompatible the data are with a specified statistical model. Second, p-values don't measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone; that's a very common misinterpretation. Third, scientific conclusions and business or policy decisions shouldn't be based only on whether a p-value passes a specific threshold. Fourth, proper inference requires full reporting and transparency. Fifth, a p-value doesn't measure the size of an effect or the importance of a result. And sixth, by itself, a p-value doesn't provide a good measure of evidence regarding a model or hypothesis. So I think it's important to recognize this debate.
DR. SUSAN ELLENBERG [continued]: I think a question is, can we use statistical tools other than classical hypothesis testing to draw more information from our data and possibly lead to better decision making? Do we really need to choose between hypothesis testing and Bayesian approaches, or likelihood approaches, or some other framework? Maybe we need to use a variety of approaches. But perhaps most importantly, can we improve our communication of the meaning of our analyses to our collaborators, better than we have done in the past? Now I'd like to spend the rest of my time talking about emerging challenges to the randomized controlled trial paradigm. So as most of you know, and as I mentioned
DR. SUSAN ELLENBERG [continued]: at the beginning, Bradford Hill, at the London School of Hygiene and Tropical Medicine, pushed the use of randomization in the trials of streptomycin for treatment of tuberculosis back in the late 1940s. And there's been pushback on the concept of randomization and randomized trials over the years. Early on, there was substantial resistance, particularly among oncologists. A paper by Gehan and Freireich in the New England Journal of Medicine in 1974 reflected their view that if last year's treatment wasn't very satisfactory, it's really not ethical to randomize people between an exciting new treatment that looks likely to be much better and what we know is unsatisfactory. Everybody should get the new treatment; it's not really ethical to do otherwise. But there were others who argued for alternative approaches. Milton Weinstein, that same year in the New England Journal, thought that we could accomplish perhaps even more by careful matching, blocking, and adjustment procedures than we could with randomization. And Hellman and Hellman, in 1991, dismissed the idea that the randomized trial was really the only way to get valid information, or that all information from randomized trials is valid. So there's been regular pushback. But in the late 1980s and early 90s,
DR. SUSAN ELLENBERG [continued]: we faced a different kind of challenge. Those earlier challenges were from researchers, from academics. Now we faced a challenge from the patient community. We had a very vocal community arguing that we were not making treatments for AIDS available quickly enough. We had demonstrations; we had the kinds of things that we'd never seen before from patient groups. And there was a lot of fear that the AIDS activists were going to undermine the whole research paradigm. In fact, that community wanted correct answers as much as anybody, and a group of statisticians worked with them to think about appropriate study designs for HIV. A paper that came out of this effort, published by the late David Byar and many others, including myself, looked at a variety of issues and study designs, including what would be appropriate criteria for a historically controlled trial, a situation where we really would not need randomization. And these are the criteria, and I think they are quite valid. One, there's no treatment to serve as an appropriate control. Second, there's sufficient experience to show that untreated patients have a uniformly poor prognosis.
DR. SUSAN ELLENBERG [continued]: That's a really tough criterion to meet. Third, the therapy is not expected to have substantial side effects that could compromise the potential benefit. Fourth, there is a justifiable expectation of a sufficiently large benefit to make the results interpretable. And finally, there's a strong scientific rationale for the treatment that would support wide acceptance of positive findings. So I'd like you to keep these in mind as I go into some discussion of what happened in the Ebola epidemic of 2014 and 2015. This was the largest Ebola outbreak ever. Almost 30,000 individuals were infected; more than 11,000 people died. There were no known drugs or vaccines for Ebola.
DR. SUSAN ELLENBERG [continued]: It was highly infectious. There was clearly a very high fatality rate. It occurred in areas with limited health care facilities. And there was much debate about the feasibility and ethics of conducting randomized trials of potential treatments. The research organizations that went in supported the initiation of trials. The humanitarian organizations, who were there first and were completely beset trying to deal with this, felt strongly that randomization to a usual-care control would be unethical. They were seeing people dying in droves and felt that they must do something active and not do a comparative trial.
DR. SUSAN ELLENBERG [continued]: So what actually was the Ebola mortality? Well, one paper in 2014, early in the epidemic, reported an overall mortality of 74% in Sierra Leone, one of the countries that was involved. But there was substantial variation: it was 57% in the youngest group, and almost everybody in the oldest group died. These are clearly very high rates, but they did vary. A later report the following year in the New England Journal, on a larger group of patients in Sierra Leone, reported a much lower overall mortality of 31%, and noted that in just three months, the mortality decreased from 48% to 23%. Other papers reported widely varying mortality statistics, and the overall death rate for 2014 was estimated to be 37%. So what are the implications? The diminishing mortality over time was very likely due to the bringing in of improved supportive care measures.
DR. SUSAN ELLENBERG [continued]: Fluid replacement and electrolytes are standard treatments for diseases where people have severe vomiting and diarrhea and are losing fluids. As experience was gained, it would be expected that mortality would continue to decrease. Mortality varied so much by age, and undoubtedly by other factors that were not always able to be measured, that historical comparisons would certainly be complicated. And if you think about the Byar criteria that I mentioned before, I think we can see that a historically controlled trial would really not be able to yield convincing results unless the treatment effect was very, very large, and maybe not even then.
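A back-of-envelope calculation with the Sierra Leone figures cited above shows why: supportive care alone took mortality from 48% to 23% in three months, so even a completely inert drug compared against the earlier historical rate would look dramatically effective.

```python
# Mortality figures from the Sierra Leone report cited above:
historical_mortality = 0.48  # earlier in the epidemic
concurrent_mortality = 0.23  # three months later: better supportive care, no new drug

# Apparent relative risk reduction for an inert drug judged against history:
apparent_rrr = 1 - concurrent_mortality / historical_mortality  # just over 50%
```

An apparent halving of mortality from a drug that does nothing is exactly the kind of artifact the Byar criteria are meant to guard against.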
DR. SUSAN ELLENBERG [continued]: So what happened in the Ebola epidemic? Some randomized trials and many non-randomized trials were implemented. As I'll show you in a minute, the trials did not start until the epidemic was waning; that's not an issue of whether we should do randomized trials or not. The enrollment was too limited to yield definitive results. And so in the wake of all of this, there's been a lot of soul searching and discussion about how to accomplish more in future outbreaks. The United States National Academy of Medicine was charged with developing recommendations for research conduct in future outbreaks. And the report of this committee, on which I had the privilege to serve, was released in the middle of April. It's entitled "Integrating Clinical Research into Epidemic Response: The Ebola Experience," and you can find it on the National Academy of Medicine website. These are the members of the committee. You will know some of these people. It was really a wonderful committee. And we met in Africa, in London, and in the US,
DR. SUSAN ELLENBERG [continued]: and talked to everybody we could find who was involved in the issue. Now, what this shows is the course of the epidemic. Different colors reflect different countries, and these boxes represent when studies were started. Here's where the disease was first confirmed. And as you can see, all of the studies, both the treatment trials and the vaccine trials, were not initiated until the epidemic was well on the wane. And that was due to a variety of reasons, one of which, though, was trying to get things coordinated and get people to agree on what should actually be done. So these are some points that we made in our report. The chaotic clinical and public health
DR. SUSAN ELLENBERG [continued]: needs clashed with research goals, with no consensus on what or how to study. There were strong disagreements about the priority of patient care versus research. And if you think about what the situation was at that time, it's very easy to sympathize with the people who were on the ground, coming in first, and thinking about how they were going to add random allocation and the collection of really good data to everything else they were trying to do. But there was strong disagreement about whether it was ethical and feasible to conduct randomized controlled trials. There was a lack of local capacity or experience with Ebola or with clinical research. There was not enough community engagement, and so there was fear, there were rumors, there was mistrust, there was violence. There was the availability of experimental therapeutics given to international responders: you may remember that there were people from developed countries who became infected and were flown back to their home countries and treated with experimental agents. That led to concerns that there really were good drugs being withheld from the communities at major risk. And there was really inadequate coordination among the research groups. People were competing for trial approvals and for patients and sites, just at the time
DR. SUSAN ELLENBERG [continued]: that the epidemic was waning. The principles that were considered by the committee in regard to the ethics of doing research during outbreaks are listed here. We had to think about the scientific and social value of what we were doing; make sure that we had respect for persons; community engagement was really important; there must be concern for participant welfare and interests; there had to be a favorable risk-benefit balance; justice in the distribution of benefits and burdens; and post-trial access to effective treatments. So, our conclusions. We concluded strongly that the requirements for ethical research do apply to research in an emergency context.
DR. SUSAN ELLENBERG [continued]: But assessment and approval, the process of making sure that everybody agreed that we could go ahead with these studies, could be expedited. We felt strongly that randomization is necessary in almost all cases (I see the slide says "most," but it really should be "almost all") to get interpretable results. A fundamental ethical requirement is to not do studies where you're not going to be able to interpret the results. Trials without randomized controls limit incremental learning: we would not be able to learn about treatments and vaccines with moderate efficacy, which is the reality of most clinical trials. I have a couple of tables; the only point of these tables is to show that almost all the trials produced inconclusive results. There were five treatment trials. The PREVAIL trial that was run by the NIH was suggestive of benefit, but, like I said, it started while the epidemic was waning and didn't have enough patients to really test for sure. Among the vaccine trials, there was one trial that used a ring cluster design, an innovative type of design, that did suggest efficacy. I think we all felt that it was likely protective, but there were some issues with how the study was conducted and analyzed that made us not entirely sure. Others produced good information on immunogenicity and safety but could not really assess efficacy.
DR. SUSAN ELLENBERG [continued]: So the conclusions and the recommendations are that clinical research has to be integrated into response efforts from the very beginning; that clinical care, public health, and research are fundamentally linked; and that community engagement is essential. We found that local communities can understand and accept research concepts like randomization and consent, but it takes time. It takes an understanding of the local beliefs, traditions, and customs. It takes the right message and the right messengers. And what we're hoping to do during this time between epidemics is to develop plans to be able to move ahead more quickly when the next outbreak, whether it's Ebola or something else, emerges. I want to just mention political challenges. The availability of medical products is a popular topic for politicians. I mean, who could disagree with getting effective drugs available faster, giving dying patients more options, and reducing the cost of drug development? These are messages that we hear from politicians all the time.
DR. SUSAN ELLENBERG [continued]: And FDA, at least, is regularly subject to legislative acts. The FDA Modernization Act of 1997, and a whole succession of acts after it, aimed largely at these goals. I worry that some of this is maybe moving to extremes. I think many of us were concerned that an early leading candidate for a new commissioner of the US Food and Drug Administration in this new administration was on record as promoting the elimination of the efficacy requirement. And so there was real concern that someone could even be proposed for a position like commissioner of the FDA who didn't believe it was important to prove that drugs actually worked before putting them on the market. We are certainly living in interesting times. So to conclude, clinical trials have always faced challenges. The challenges today, I think, are pretty exciting and motivating. Can we really be successful at individualizing therapy
DR. SUSAN ELLENBERG [continued]: without drastically increasing costs? Can we reduce costs by embedding research into clinical practice, getting more generalizable results in the process, and still getting interpretable results that we can hang our hats on? Can we improve outcomes with new technologies, imaging, mobile apps? I haven't really talked about that, but it's certainly an exciting area for clinical trials. I would say that advancing the value of, and the need for, rigorous approaches to medical research is more important than ever. Thank you for your attention.
Publisher: Society for Clinical Trials
Publication Year: 2017
Keywords: activism; clinical research; clinical trials; confidence intervals; control groups; data analysis; data collection; descriptive statistics; drug testing; ebola virus; emergency response guidelines and regulations; ethical considerations; generalizability; hypothesis testing; individualized care plans; infectious disease; informed consent to treatment; political climate; pragmatism; public health crisis management; randomized clinical trials; regulation and regulatory agencies; research design; research ethics; statistical power; timeliness of healthcare
Segment Num.: 1
Dr. Susan Ellenberg delivers the Curtis Meinert Lecture keynote address, on the ongoing challenges and emerging issues for clinical trials in the twenty-first century, at the 38th Annual Meeting of the Society of Clinical Trials.