Quantitative assessment of adverse events in clinical trials – comparison of methods at an interim and the final analysis.
In clinical study reports, adverse events (AEs) are commonly summarized using the incidence proportion, even though the cumulative incidence function has been advocated as the most appropriate method to account for differing exposure times and competing events.
In this presentation, we compare different methods to estimate the probability of one selected AE. Besides considering the final analysis at the time of the Clinical Study Report, we especially investigate the capability of the proposed methods to provide a reasonable estimate of the AE probability at an early interim analysis. Robustness of the methods in the presence of a competing event is evaluated using data from a breast cancer study. The potential bias of each method is quantified in a simulation study.
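To make the contrast concrete, the following sketch (an invented toy example, not taken from the study) computes a nonparametric cumulative incidence of an AE in the presence of a competing event and compares it with the naive incidence proportion. Ties are handled one record at a time, which is adequate for this illustration.

```python
import numpy as np

def cumulative_incidence(time, status, horizon):
    """Aalen-Johansen-type cumulative incidence of the AE by `horizon`.
    status: 0 = censored, 1 = AE of interest, 2 = competing event."""
    order = np.argsort(time)
    time, status = np.asarray(time)[order], np.asarray(status)[order]
    surv, cif, at_risk = 1.0, 0.0, len(time)
    for t, s in zip(time, status):
        if t > horizon:
            break
        if s == 1:              # AE: add P(event-free just before t) * hazard
            cif += surv / at_risk
        if s != 0:              # any event reduces the event-free probability
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return cif

time = [1.0, 2.0, 3.0, 4.0]
status = [0, 1, 2, 1]           # early censoring plus a competing event
cif = cumulative_incidence(time, status, horizon=4.0)
naive = sum(s == 1 for s in status) / len(status)
# cif ≈ 0.667 here, while the naive incidence proportion is 0.5
```

With early censoring, the two summaries already diverge in this four-subject example, which is exactly the discrepancy the talk quantifies.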
Valentine Jehl is a senior quantitative safety scientist at Novartis. She received her Master’s degree in applied mathematics at the Louis Pasteur University in Strasbourg.
She started her career as a statistician with a CRO in Brussels. She then joined Novartis in Basel, where she supported major submissions and development programs for the oncology franchise. After 9 years in this role, Valentine joined the quantitative safety group in April 2016, where she now promotes the use of quantitative methods for safety, with a particular focus on Adverse Drug Reactions.
Comparison of time-to-first event and recurrent event methods in multiple sclerosis trials.
Randomized clinical trials in multiple sclerosis (MS) frequently use the time to the first confirmed disability progression (CDP) on the Expanded Disability Status Scale (EDSS) as an endpoint. However, especially in progressive forms of MS where CDP is typically the primary endpoint, a substantial proportion of subjects may experience repeated disability events. Recurrent event analyses could therefore increase study power and improve clinical interpretation of results.
We present results from two simulation studies which compare analyses of the time to the first event with recurrent event analyses (including negative binomial, Andersen-Gill, and Lin-Wei-Yang-Ying (LWYY) models). The first simulation study is generic: recurrent event data are simulated according to a mixed non-homogeneous Poisson process. The second simulation study is MS-specific: we first simulate longitudinal measurements of the ordinal EDSS scale using a multi-state model and then derive recurrent event data from these. Simulation parameters are chosen to mimic typical MS trial populations in relapsing-remitting or primary progressive MS, respectively, and include scenarios with heterogeneity (frailties). Based on the results from the simulation studies, the presentation will conclude with recommendations for the choice of endpoint and analysis method in MS trials with disability progression endpoints.
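For intuition, the sketch below (with illustrative parameter values, not those of the simulation studies) generates recurrent-event times from a mixed non-homogeneous Poisson process: a Weibull-type cumulative intensity multiplied by a gamma frailty, sampled by inverting the cumulative intensity at unit-rate Poisson arrival times.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_recurrent_events(n_subjects=200, followup=2.0,
                              base_rate=1.5, shape=1.3,
                              frailty_var=0.5, hr=0.7):
    """Simulate recurrent-event data from a mixed non-homogeneous
    Poisson process (gamma frailty, mean 1). All values are invented."""
    records = []
    for i in range(n_subjects):
        arm = i % 2                          # 0 = control, 1 = treatment
        z = rng.gamma(1.0 / frailty_var, frailty_var)
        rate_scale = base_rate * (hr if arm else 1.0) * z
        # Cumulative intensity Lambda(t) = rate_scale * t**shape; invert it
        # to map unit-rate Poisson arrivals onto calendar event times.
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0)        # unit-rate arrival gap
            event_time = (t / rate_scale) ** (1.0 / shape)
            if event_time > followup:
                break
            events.append(event_time)
        records.append((i, arm, events))
    return records

data = simulate_recurrent_events()
mean_ctrl = np.mean([len(ev) for _, a, ev in data if a == 0])
mean_trt = np.mean([len(ev) for _, a, ev in data if a == 1])
```

Such simulated histories can then be fed to time-to-first-event and recurrent-event analyses to compare power, as in the first simulation study.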
Qing is a statistician working at Roche Basel. She is currently the project lead statistician for the Ocrevus (ocrelizumab) program, and has supported the program from initial study readouts, filing preparations, and US and EU approvals through to market access and scientific communication over the past years. Before joining Roche in 2014 she worked in HIV research at the Institute for Clinical Epidemiology and Biostatistics at University Hospital Basel. She received her Master's in Mathematics and PhD in Biostatistics from the University of Cambridge (UK).
Filip De Ridder
A time-to-event model for early efficacy signal dose finding in epilepsy clinical trials.
Time-to-event endpoints have been proposed as alternatives for establishing the effect of anti-epileptic drugs in clinical trials. These endpoints may reduce exposure to placebo or ineffective treatments, thereby facilitating trial recruitment and improving safety. The time to baseline seizure count is defined as the number of days until a subject experiences a number of seizures equal to their baseline seizure count. A post hoc analysis of completed Phase III trials with perampanel showed that an analysis of the time to baseline count endpoint is consistent with the classical endpoints (median % seizure rate reduction, percentage of patients achieving a 50% or greater reduction in seizure frequency) [1].
We investigated the performance of the time to baseline seizure count endpoint by (1) a post hoc analysis of topiramate and carisbamate clinical trial data and (2) clinical trial simulation using a longitudinal model for daily seizure counts. This model included key features of daily seizure count data, such as large between-subject variability in baseline seizure rate and drug response, large variability in the number of seizures per day, and clustering of seizures over time.
The re-analysis of topiramate and carisbamate clinical trial data confirmed the relationship between the median time to baseline seizure count and the classical endpoint of median % seizure rate reduction that was observed with perampanel. In addition, the observed relationship agreed with the one that was predicted by the simulation model.
Clinical trial simulations were used to investigate the performance of a proof-of-concept study design using the time to baseline seizure count endpoint. The study consisted of a 4-week prospective baseline, followed by a 4-week double-blind treatment period, after which subjects would exit the study if they had reached or exceeded their baseline seizure count, or would continue for another 8 weeks. These simulations showed that (1) with relatively small sample sizes (~20/arm) the design is able to identify clinically relevant treatment effects (30%-50% seizure rate reduction); (2) a 4-week baseline period provides enough information on the baseline seizure count; and (3) the length of exposure of subjects to placebo or an inactive treatment is strongly reduced compared to a classical design.
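As a hypothetical illustration of how the endpoint is derived from daily diary data (the function and the counts below are invented, not taken from the trials analysed), the endpoint is simply the first day on which the cumulative on-treatment seizure count reaches the baseline total:

```python
import numpy as np

def time_to_baseline_count(baseline_counts, daily_counts):
    """Day (1-based) on which the cumulative on-treatment seizure count
    first reaches the total observed during baseline; None if never
    reached, i.e. censored at end of follow-up. Illustrative sketch."""
    target = sum(baseline_counts)
    cum = np.cumsum(daily_counts)
    hit = np.nonzero(cum >= target)[0]
    return int(hit[0]) + 1 if hit.size else None

# 4-week baseline (4 seizures/week), 12-week on-treatment period
baseline = [1, 0, 2, 0, 1, 0, 0] * 4
on_treatment = [0, 1, 0, 0, 1, 0, 0] * 12
t_exit = time_to_baseline_count(baseline, on_treatment)  # -> day 54
```

A halved seizure rate on treatment thus roughly doubles the time to reach the baseline count, which is what links the endpoint to the classical % seizure rate reduction.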
Filip De Ridder is a Senior Scientific Director in the Statistical Modeling & Methodology group of Janssen R&D. Twenty years ago, he was one of the founders of the Modeling & Simulation group at Janssen, bringing together statisticians and pharmacometricians to apply modeling & simulation techniques in clinical drug development. Since then he has worked on M&S projects in the context of PK/PD modeling, dose-response modeling and clinical trial design, mainly in neuroscience and infectious diseases.
The treatment of recurrent safety events and terminal events requires careful consideration of the estimands in question and of the assumptions underlying the methods used to estimate them. In this talk I shall give a regulatory perspective on these issues, focussing on how and why the EU system summarises data as it does, where the gaps are in the methodology, and how we can progress to ensure that data are summarised appropriately. I will consider whether we need to move beyond the methods currently used, and what questions we truly need to be answering (and how). In particular I shall argue that when no true raised risk exists, the method we use to summarise that risk should provide an unbiased average effect of 0; in time-to-event studies this is not always as straightforward as it seems.
Andrew Thomson is a statistician at the EMA Office of Biostatistics and Methodology Support, which he joined in 2014. He supports the methodological aspects of assessments of Marketing Authorisation Applications and of Scientific Advice, as well as methodological aspects of Paediatric Investigation Plans. He has worked extensively on the methodological aspects of the EMA Reflection Paper on the use of extrapolation of efficacy in paediatric studies.
Prior to the EMA, he worked at the UK regulator, the Medicines and Healthcare products Regulatory Agency. There he worked initially as a statistical assessor in the Licensing Division, assessing Marketing Authorisation Applications and providing Scientific Advice to companies. After rising to Senior Statistical Assessor, he moved to the Vigilance and Risk Management of Medicines Division to be Head of Epidemiology, where he managed a team of statisticians, epidemiologists and data analysts providing support to the assessment of post-licensing observational studies and meta-analyses. He also managed the team's design, conduct and analysis of epidemiology studies using the UK Clinical Practice Research Datalink.
Arno Fritsch & Patrick Schlömer (Bayer)
Estimands for recurrent events in the presence of a terminal event – Considerations and simulations for chronic heart failure trials.
In this presentation, we will discuss potential estimands, framed according to the ICH E9 addendum, that can be addressed for recurrent events when there is a non-negligible risk of a terminal event, typically death.
As an application, we consider trials in chronic heart failure (HF). In the past, the standard (composite) primary endpoint here was the time to either hospitalization for HF or cardiovascular (CV) death. Since many patients experience recurrent HF hospitalizations, there is interest in including these events in the primary endpoint. We consider two estimands, one that focuses only on the total number of recurrent HF hospitalizations and another that includes CV death as an additional composite event.
We present results of an extensive simulation study that investigated which standard methods for analyzing recurrent event data estimate the above-mentioned estimands. In addition, we compared the efficiency of recurrent event estimands with that of time-to-first-event estimands.
Arno Fritsch received his PhD in Statistics from the University of Dortmund, Germany, in 2010. Since then he has been working at Bayer as a clinical statistician, mainly on the design, analysis and submission of cardiovascular trials. Since 2017 he has held the position of Group Leader Europe in the cardiovascular statistics department. His methodological interests include the handling of missing data, analysis of subgroups and recurrent events. He is one of the co-authors of the application for an EMA qualification opinion on the use of recurrent events.
Patrick Schlömer received his PhD in Statistics from the University of Bremen, Germany, in 2014 for his work on group sequential and adaptive designs for three-arm non-inferiority trials. Since then he has been working at Bayer as a clinical statistician in the cardio-renal area with increasing responsibilities, now holding the position of Lead Statistician. His methodological interests include group sequential and adaptive designs, multiple comparison procedures and recurrent events. He is one of the co-authors of the application for an EMA qualification opinion on the use of recurrent events.
John Gregson (London School of Hygiene & Tropical Medicine)
The value of including recurrent events in the analysis of cardiovascular outcomes trials.
Including recurrent events in analyses of clinical trials can increase power and lead to a more complete assessment of treatment benefit. There are several strategies for analysing repeat events, but little practical guidance as to which is best in any given scenario. Several methods for the analysis of repeat events in trials will be compared, including Andersen-Gill, Wei-Lin-Weissfeld, negative binomial regression, and joint frailty models. The assumptions underlying each of these methods, and their various advantages and disadvantages, will be outlined using data from recent large cardiovascular trials.
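As a minimal, purely illustrative sketch of the rate-based contrast that negative binomial and related models target, one can compute the exposure-adjusted event-rate ratio from per-subject event counts and follow-up times. The confidence interval below uses a crude Poisson-type variance; a negative binomial analysis would widen it to allow for overdispersion. All numbers are invented.

```python
import numpy as np

def rate_ratio(events_trt, time_trt, events_ctrl, time_ctrl):
    """Exposure-adjusted event-rate ratio (treatment / control) with a
    crude Poisson-type 95% CI on the log scale. Illustrative only."""
    r1 = sum(events_trt) / sum(time_trt)      # events per unit follow-up
    r0 = sum(events_ctrl) / sum(time_ctrl)
    rr = r1 / r0
    se = np.sqrt(1.0 / sum(events_trt) + 1.0 / sum(events_ctrl))
    return rr, (rr * np.exp(-1.96 * se), rr * np.exp(1.96 * se))

# Per-subject event counts and years of follow-up (invented data)
rr, ci = rate_ratio([2, 0, 1, 3], [1.0, 2.0, 1.5, 2.5],
                    [4, 2, 3, 1], [1.0, 1.8, 2.0, 1.2])
```

Time-to-first-event analyses discard everything after the first event per subject, which is why rate-based summaries like this one can gain power when repeat events are frequent.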
John Gregson is an Assistant Professor in Medical Statistics at the London School of Hygiene and Tropical Medicine. He has a range of experience in the analysis of cardiovascular clinical trials, many of which have been published in high-impact journals (e.g. NEJM, Lancet, JACC). As well as an interest in the applied analysis of randomised clinical trials and epidemiological studies, a major research interest of his is methodological work on statistical issues which commonly arise in such studies. He holds a PhD in Epidemiology from Cambridge University and a Masters in Medical Statistics from Southampton University.
Tobias Bluhmki (University of Ulm)
Resampling complex time-to-event data without individual patient data, with a view toward recurrent events.
In this talk we consider non- and semi-parametric resampling of multistate event histories by simulating individual trajectories from an empirical multivariate hazard measure.
One advantage is that it does not necessarily require individual patient data, but may be based on published information. This is attractive both for study planning and for simulating realistic real-world event history data in general. A special focus is on simulating recurrent events data with associated terminal events. We demonstrate that our proposal gives a more natural interpretation of how such data evolve over the course of time than many of the competing approaches. The multistate perspective avoids any latent failure time structure and sampling spaces impossible in real life, whereas its parsimony follows the principle of Occam's razor. We also suggest empirical simulation as a novel bootstrap procedure to assess estimation uncertainty in the absence of individual patient data, which is not possible with established procedures such as Efron's bootstrap.
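In the spirit of the proposal, though not the authors' implementation, a discrete-time sketch: given published hazard increments on a day grid, individual recurrent-event trajectories with a terminal event can be walked forward one day at a time, with no individual patient data needed. The increments below are invented constants.

```python
import numpy as np

rng = np.random.default_rng(7)

# Published (here: invented, constant) daily hazard increments
days = np.arange(1, 366)
dA_recur = np.full(days.size, 0.010)   # recurrent-event increment
dA_death = np.full(days.size, 0.001)   # terminal-event increment

def simulate_trajectory():
    """Walk the day grid: each day the terminal event occurs with
    probability dA_death (stopping follow-up); otherwise a recurrent
    event occurs with probability dA_recur."""
    events = []
    for i, d in enumerate(days):
        if rng.random() < dA_death[i]:
            return events, int(d)       # day of the terminal event
        if rng.random() < dA_recur[i]:
            events.append(int(d))
    return events, None                 # administratively censored at day 365

sample = [simulate_trajectory() for _ in range(500)]
mean_events = np.mean([len(ev) for ev, _ in sample])
```

Repeating the whole simulation many times yields the empirical-simulation bootstrap mentioned above: the spread of a statistic across replications estimates its uncertainty without individual patient data.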
Tobias Bluhmki studied Mathematical Biometry at Ulm University from 2009 to 2014 and was honored with the "Bernd-Streitberg Award" by the International Biometric Society - German Region for his Master's thesis. Since then, he has been a research assistant at the Institute of Statistics, Ulm University, Germany. He recently defended his PhD thesis, supervised by Jan Beyersmann, at the Faculty of Mathematics and Economics and is now a postdoctoral researcher. His research focuses on statistical methodology in clinical trials and epidemiological studies based on survival and event history techniques.
He has published several articles in biostatistical, epidemiological and medical journals and is the current co-lead of the "Team of Young Statisticians" of the International Biometric Society - German Region.
Rob Hemmings (Consilium)
I am a partner at Consilium. Consilium is my consultancy partnership with Tomas Salmonson, a long-standing member of the EMA’s CHMP and formerly the chair of that committee. Tomas and I support companies in the development, authorisation and life-cycle management of medicines.
Previously I worked at AstraZeneca and for 19 years at the Medicines and Healthcare products Regulatory Agency, heading the group of medical statisticians and pharmacokineticists. I am a statistician by background and whilst working at MHRA I was co-opted as a member of EMA’s CHMP for expertise in medical statistics and epidemiology. At CHMP I was Rapporteur for multiple products and was widely engaged across both scientific and policy aspects of the committee’s work. I was fortunate to chair the CHMP’s Scientific Advice Working Party for 8 years and have also chaired their expert groups on Biostatistics, Modelling and Simulation and Extrapolation. I wrote or co-wrote multiple regulatory guidance documents, including those related to estimands, subgroups, use of conditional marketing authorisation, development of fixed-dose combinations, extrapolation and adaptive designs. I have a particular interest in when and how to use data generated in clinical practice to support drug development.