Transcript: FDA/Industry Statistics Workshop, Adaptive Designs Working Group
Adaptive Clinical Trials Short Course
September 27, 2006, Washington, D.C.

Page 1

FDA/Industry Statistics Workshop Adaptive Designs Working Group

September 27, 2006, Washington, D.C.

Adaptive Clinical Trials

Short Course Presenters:

Michael Krams, M.D., Wyeth Research

Vladimir Dragalin, Ph.D., Wyeth Research

Jeff Maca, Ph.D., Novartis Pharmaceuticals

Keaven Anderson, Ph.D., Merck Research Laboratories

Paul Gallo, Ph.D., Novartis Pharmaceuticals

Page 2: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

2

Adaptive Designs Working Group

Short Course Outline

Topic (Time, Presenter)

General Introduction (10 min, Mike Krams)

Terminology and Classification (40 min, Vlad Dragalin)

Adaptive Seamless Design (40 min, Jeff Maca)

Break (20 min)

Sample Size Reestimation and Group Sequential Design (40 min, Keaven Anderson)

Logistic, Operational and Regulatory Issues (40 min, Paul Gallo)

General Discussion (20 min)

Page 3: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

FDA/Industry Statistics Workshop Adaptive Designs Working Group

September 27, 2006, Washington, D.C.

Adaptive Designs General Introduction

Michael Krams, Wyeth Research

On behalf of the PhRMA Adaptive Designs Working Group

Page 4: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

4

Adaptive Designs Working Group

PhRMA Adaptive Designs Working Group

Co-Chairs: Michael Krams, Brenda Gaydos

Authors:

Keaven Anderson, Suman Bhattacharya, Alun Bedding, Don Berry, Frank Bretz, Christy Chuang-Stein, Vlad Dragalin, Paul Gallo, Brenda Gaydos, Michael Krams, Qing Liu, Jeff Maca, Inna Perevozskaya, Jose Pinheiro, Judith Quinlan

Members: Carl-Fredrik Burman, David DeBrota, Jonathan Denne, Greg Enas, Richard Entsuah, Andy Grieve, David Henry, Tony Ho, Telba Irony, Larry Lesko, Gary Littman, Cyrus Mehta, Allan Pallay, Michael Poole, Rick Sax, Jerry Schindler, Michael D Smith, Marc Walton, Sue-Jane Wang, Gernot Wassmer, Pauline Williams

Page 5: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

5

Adaptive Designs Working Group

Vision

To establish a dialogue between statisticians, clinicians, regulators and other lines within the Pharmaceutical Industry, Health Authorities and Academia, with the goal of contributing to a consensus position on when and how to consider the use of adaptive designs in clinical drug development.

Page 6: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

6

Adaptive Designs Working Group

Mission

To facilitate the implementation of adaptive designs, but only where appropriate

To contribute to standardizing the terminology and classification in the rapidly evolving field of adaptive designs

To contribute to educational and information sharing efforts on adaptive designs

To interact with experts within Health Authorities (FDA, EMEA, and others) and Academia to sharpen our thinking on defining the scope of adaptive designs

To support our colleagues in health authorities in their work towards the formulation of regulatory draft guidance documents on the topic of adaptive designs.

Page 7: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

7

Adaptive Designs Working Group

Executive Summary of White Paper

Page 8: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

8

Adaptive Designs Working Group

Full White Paper - to appear in DIJ in Nov 2006

Page 9: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

9

Adaptive Designs Working Group

PhRMA/FDA conference on Adaptive Design: Opportunities, Challenges and Scope in Drug Development

Nov 13-14, 2006, Marriott Bethesda North Hotel & Conference Center

North Bethesda, MD 20852

Program committee

Dennis Erb Merck

Brenda Gaydos Eli Lilly

Michael Krams Wyeth, Co-Chair

Walt Offen Eli Lilly, Co-Chair

Frank Shen BMS

Luc Truyen J&J

Greg Campbell FDA CDRH

Shirley Murphy FDA CDER

Robert O’Neill FDA CDER

Robert Powell FDA CDER

Marc Walton FDA CDER

Sue Jane Wang FDA CDER

Page 10: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

10

Adaptive Designs Working Group

Page 11: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

FDA/Industry Statistics Workshop Adaptive Designs Working Group

September 27, 2006, Washington, D.C.

Adaptive Designs Terminology and Classification

Vlad Dragalin
Biomedical Data Sciences, GlaxoSmithKline
(currently with Wyeth Research)

Page 12: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

12

Adaptive Designs Working Group

Primary PhRMA references

PhRMA White Paper section:

Dragalin V. Adaptive Designs: Terminology and Classification. Drug Information Journal. 2006; 40(4): 425-436.

Page 13: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

13

Adaptive Designs Working Group

Outline

Definition and general structure of adaptive designs

Classification of adaptive designs in drug development

Achieving the goals

Page 14: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

14

Adaptive Designs Working Group

What are Adaptive Designs?

An adaptive design should be adaptive by "design," not an ad hoc change of the trial conduct and analysis

Adaptation is a prospective design feature, not a remedy for poor planning

Terms used in the literature for such designs include: adaptive, sequential, multi-stage, dynamic, self-designing, flexible, response-driven, novel

Page 15: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

15

Adaptive Designs Working Group

What are Adaptive Designs?

Adaptive Plan …

… not Adaptive Plane

Page 16: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

16

Adaptive Designs Working Group

Definition

Adaptive Design: uses accumulating data to decide how to modify aspects of the study, without undermining the validity and integrity of the trial

Validity means:
providing correct statistical inference (such as adjusted p-values, unbiased estimates and adjusted confidence intervals, etc.)
assuring consistency between different stages of the study
minimizing operational bias

Integrity means:
providing convincing results to a broader scientific community
preplanning, as much as possible, based on intended adaptations
maintaining confidentiality of data

Page 17: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

17

Adaptive Designs Working Group

General Structure

An adaptive design requires the trial to be conducted in several stages with access to the accumulated data

An adaptive design may have one or more rules:

Allocation Rule: how subjects will be allocated to available arms

Sampling Rule: how many subjects will be sampled at the next stage

Stopping Rule: when to stop the trial (for efficacy, harm, futility)

Decision Rule: the terminal decision rule and interim decisions pertaining to design changes not covered by the previous three rules

At any stage, the data may be analyzed and the next stages redesigned taking into account all available data
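To make the four rules concrete, here is a minimal illustrative sketch of how such a design could be represented as a data structure; the field names mirror the slide, while the toy rules (alternating allocation, 50 subjects per stage, a z-boundary of 2.8) are arbitrary placeholders rather than anything recommended in this course.

# Illustrative sketch only: a container for the four rules named above.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class AdaptiveDesign:
    allocation_rule: Callable[[Sequence[float]], int]   # which arm the next subject goes to
    sampling_rule: Callable[[Sequence[float]], int]     # how many subjects in the next stage
    stopping_rule: Callable[[Sequence[float]], bool]    # stop the trial now?
    decision_rule: Callable[[Sequence[float]], str]     # interim/terminal decisions

# Toy instantiation with placeholder rules (not a recommendation).
toy = AdaptiveDesign(
    allocation_rule=lambda data: len(data) % 2,           # alternate between two arms
    sampling_rule=lambda data: 50,                        # fixed group size per stage
    stopping_rule=lambda zstats: abs(zstats[-1]) > 2.8,   # crude boundary crossing
    decision_rule=lambda zstats: "reject H0" if zstats[-1] > 2.8 else "continue",
)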

Page 18: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

18

Adaptive Designs Working Group

Examples

Group Sequential Designs: only Stopping Rule
Response Adaptive Allocation: only Allocation Rule
Sample Size Re-assessment: only Sampling Rule
Flexible Designs:
Adaptive AR: changing the randomization ratio
Adaptive SaR: the timing of the next IA, Stopping Rule
Adaptive DR: changing the target treatment difference; changing the primary endpoint; varying the form of the primary analysis; modifying the patient population; etc.

Page 19: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

19

Adaptive Designs Working Group

Allocation Rules

Fixed (static) AR:
Randomization used to achieve balance in all prognostic factors at baseline
Complete randomization uses equal allocation probabilities
Stratification improves the randomization

Adaptive (dynamic) AR:
Response-adaptive randomization uses interim data to unbalance the allocation probabilities in favor of the "better" treatment(s): urn models, RPW, doubly adaptive biased coin design
Bayesian AR alters the allocation probabilities based on posterior probabilities of each treatment arm being the "best"
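As a concrete illustration of response-adaptive randomization, below is a minimal sketch of a two-arm randomized play-the-winner (RPW) urn of the kind cited above; the starting urn composition, the success probabilities and the sample size are arbitrary values chosen for the example, not figures from the talk.

# Randomized play-the-winner urn for two arms (illustrative sketch).
# Start with one ball per arm; a success on an arm adds a ball for that arm,
# a failure adds a ball for the other arm, so allocation drifts toward the
# "better" treatment as responses accumulate.
import random

def rpw_trial(p_success=(0.4, 0.6), n_subjects=200, seed=0):
    random.seed(seed)
    urn = [1, 1]                      # initial balls for arm 0 and arm 1
    assigned = [0, 0]
    for _ in range(n_subjects):
        arm = random.choices((0, 1), weights=urn)[0]
        assigned[arm] += 1
        success = random.random() < p_success[arm]
        urn[arm if success else 1 - arm] += 1
    return assigned, urn

print(rpw_trial())   # typically more subjects end up on the arm with the higher response rate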

Page 20: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

20

Adaptive Designs Working Group

Sampling Rules

Sample size re-estimation (SSR)
Restricted sampling rule
Blinded SSR or unblinded SSR based on an estimate of a nuisance parameter

Traditional Group Sequential Designs
Fixed sample sizes per stage
Error Spending Approach
Variable sample sizes per stage (but they do not depend on the observations)

Sequentially Planned Decision Procedures
Future stage sample size depends on the current value of the test statistic

Flexible SSR also uses the estimated treatment effect
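A minimal sketch of blinded SSR based on a nuisance parameter: the per-group sample size for a normal endpoint is recomputed from a blinded (pooled) standard deviation estimate at an internal pilot, keeping the originally targeted difference. The one-sided α of 0.025, 90% power, and the numeric values are assumptions for illustration, not values from the talk.

# Blinded SSR sketch: re-estimate n per group from the pooled (blinded) variance.
import math
from scipy.stats import norm

def n_per_group(sigma, delta, alpha=0.025, power=0.90):
    """Normal-approximation sample size per group for a two-arm comparison."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return math.ceil(2 * (z * sigma / delta) ** 2)

# Planning stage: assume sigma = 10 to detect delta = 4.
n_planned = n_per_group(sigma=10, delta=4)

# Internal pilot: pooled (treatment labels ignored) standard deviation looks larger.
blinded_sd_estimate = 12.5
n_revised = n_per_group(sigma=blinded_sd_estimate, delta=4)
print(n_planned, n_revised)   # e.g. 132 -> 206 per group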

Page 21: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

21

Adaptive Designs Working Group

Stopping Rules

Early Stopping based on Boundary Crossing
Superiority
Harm
Futility

Stochastic Curtailment
Conditional power
Predictive power

Bayesian Stopping Rules
Based on posterior probabilities of hypotheses
Complemented by making predictions of the possible consequences of continuing
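For the stochastic curtailment entry, here is a small sketch of conditional power under the current trend for a one-sided test: the probability of crossing the final boundary given the interim z-statistic, assuming the observed drift continues. The interim inputs at the bottom (z = 1.5 at half the information, one-sided α = 0.025) are made up for illustration.

# Conditional power under the current trend (one-sided test, illustrative sketch).
from math import sqrt
from scipy.stats import norm

def conditional_power(z_interim, info_frac, alpha=0.025):
    """P(final Z > z_{1-alpha} | interim Z), assuming the observed drift continues."""
    z_final = norm.ppf(1 - alpha)
    drift = z_interim / sqrt(info_frac)            # estimated drift at full information
    return norm.cdf((sqrt(info_frac) * z_interim - z_final) / sqrt(1 - info_frac)
                    + drift * sqrt(1 - info_frac))

print(conditional_power(z_interim=1.5, info_frac=0.5))   # ~0.59 with these made-up inputs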

Page 22: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

22

Adaptive Designs Working Group

Decision Rules

Changing the test statistic
Adaptive scores in a trend test or under non-proportional hazards
Adaptive weights in a location-scale test
Including a covariate that shows variance reduction

Redesigning multiple endpoints
Changing their pre-assigned hierarchical order in multiple testing
Updating their correlation in a reverse multiplicity situation

Switching from superiority to non-inferiority

Changing the hierarchical order of hypotheses

Changing the patient population
going forward either with the full population or with a pre-specified subpopulation

Page 23: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

23

Adaptive Designs Working Group

Classification

(Chart: the design families below are mapped against the compound progression stages: Disease selection; Target Family selection; Target to tractable hit; hit to candidate; Candidate selection to FTIM; FTIM to Commit to PoC/Phase II; Phase II to Commit to Phase III; Phase III to launch; Lifecycle Management.)

SINGLE ARM TRIALS
Two-stage Designs
Screening Designs

TWO-ARM TRIALS
Group Sequential Designs
Information Based Designs
Adaptive GSD (Flexible Designs)

MULTI-ARM TRIALS
Bayesian Designs
Group Sequential Designs
Flexible Designs

DOSE-FINDING STUDIES
Dose-escalation designs
Dose-finding designs (Flexible Designs)
Adaptive Model-based Dose-finding

SEAMLESS DESIGNS
Dose-escalation based on efficacy/toxicity
Learning/Confirming in Phase II/III

Page 24: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

24

Adaptive Designs Working Group

Two-Stage Designs

Objective: single-arm studies using short-term endpoints; hypothesis testing about some minimal acceptable probability of response

Gehan design: early stopping for futility; sample size of the 2nd stage gives a specified precision for response rate

Group sequential designs: Fleming (1982), Simon (1989)

Adaptive two-stage design: Banerjee&Tsiatis (2006)

Bayesian designs: Thall&Simon (1994)

Page 25: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

25

Adaptive Designs Working Group

Screening Designs

Objective: adaptive design for the entire screening program
Minimize the time to identify the "promising" compound
Subject to the given constraints on type I and type II risks for the entire screening program
type I risk = Pr(screening procedure stops with a FP compound)
type II risk = Pr(any of the rejected compounds is a FN compound)

Two-stage design (Yao & Venkatraman, 1998)
Adaptive screening designs (Stout and Hardwick, 2002)
Bayesian screening designs (Berry, 2001)

Page 26: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

26

Adaptive Designs Working Group

Classification (roadmap chart repeated; see Page 23)

Page 27: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

27

Adaptive Designs Working Group

Fully Sequential Designs

Objective: testing two hypotheses with given significance level and power at the prespecified alternative

AR: fixed randomization
SaR: after each observation
StR: boundary crossing (e.g. SPRT, repeated significance test, triangular test)
DR: final decision: to accept or reject the null hypothesis

References: Siegmund (1985); Jennison & Turnbull (2000)

Page 28: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

28

Adaptive Designs Working Group

Group Sequential Designs

Objective: testing two hypotheses with given significance level and power at the specified alternative, prefixed maximum sample size

AR: fixed randomization
SaR: after a fixed number (a group) of observations, or using an error-spending function, or using the "Christmas-tree" adjustment
StR: boundary crossing
Haybittle, Pocock, O'Brien-Fleming type linear boundaries
error-spending families
conditional power, stochastic curtailment
DR: final decision: to accept or reject the null hypothesis

References: Jennison & Turnbull (2000); Whitehead (1997)
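To illustrate the error-spending approach mentioned above, the sketch below evaluates two commonly used spending functions (a Lan-DeMets O'Brien-Fleming-type and a Pocock-type function) at a few information fractions; the analysis times and the one-sided α of 0.025 are arbitrary illustration choices. Computing the corresponding boundaries would additionally require the usual recursive numerical integration, which is omitted here.

# Cumulative type I error spent at information fraction t (illustrative sketch).
from math import exp, log, sqrt
from scipy.stats import norm

alpha = 0.025   # one-sided, an assumed value for illustration

def spend_obf(t):      # Lan-DeMets O'Brien-Fleming-type spending function
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / sqrt(t)))

def spend_pocock(t):   # Lan-DeMets Pocock-type spending function
    return alpha * log(1 + (exp(1) - 1) * t)

for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}: OBF spends {spend_obf(t):.5f}, Pocock spends {spend_pocock(t):.5f}")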

Page 29: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

29

Adaptive Designs Working Group

Information Based Designs

Objective: testing two hypotheses with given significance level and power at the specified alternative, prefixed maximum information

AR: fixed randomization
SaR: after fixed increments of information
StR: boundary crossing as for Group Sequential Designs
DR: adjust maximum sample size based on interim information about nuisance parameters

References: Mehta & Tsiatis (2001); East (2005)

Page 30: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

30

Adaptive Designs Working Group

Adaptive GSD (Flexible Designs)

Objective: testing two hypotheses with given significance level and power at the specified alternative or adaptively changing the alternative at which a specified power is to be attained

AR: fixed or adaptive randomization
SaR: sample size of the next stage depends on results at the time of the interim analysis
StR: p-value combination, conditional error, variance-spending
DR: adapting the alternative hypothesis, primary endpoint, test statistics, inserting or skipping IAs

References: Bauer; Brannath et al.; Müller & Schäfer; Fisher

Page 31: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

31

Adaptive Designs Working Group

Classification (roadmap chart repeated; see Page 23)

Page 32: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

32

Adaptive Designs Working Group

Bayesian Designs

Objective: to use the posterior probabilities of hypotheses of interest as a basis for interim decisions (Proper Bayesian) or to explicitly assess the losses associated with the consequences of stopping or continuing the study (Decision-theoretic Bayesian)

AR: equal randomization, or play-the-winner (the next patient is allocated to the currently superior treatment), or bandit designs (minimizing the number of patients allocated to the inferior treatment)
SaR: not specified
StR: no formally pre-specified stopping criterion, or using a skeptical prior for stopping for efficacy and an enthusiastic prior for stopping for futility, or using backwards induction
DR: update the posterior distribution; formal incorporation of external evidence; inference not affected by the number and timing of IAs

References: Berry (2001, 2004); Berry et al. (2001); Spiegelhalter et al. (2004)
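As a small, self-contained illustration of the "proper Bayesian" use of posterior probabilities for interim decisions, the sketch below monitors a single-arm binary endpoint with a Beta prior; the flat prior, the reference response rate p0 and the 0.99/0.01 thresholds are arbitrary choices for the example, not recommendations from the talk.

# Posterior-probability stopping sketch for a binary endpoint (illustrative).
from scipy.stats import beta

p0 = 0.30                      # reference (null) response rate, assumed for the example
a_prior, b_prior = 1.0, 1.0    # flat Beta(1, 1) prior

def posterior_prob_better(responders, n):
    """P(response rate > p0 | data) under a Beta-Binomial model."""
    return 1 - beta.cdf(p0, a_prior + responders, b_prior + n - responders)

# Interim look: 14 responders out of 30 patients.
pp = posterior_prob_better(14, 30)
if pp > 0.99:
    decision = "stop for efficacy"
elif pp < 0.01:
    decision = "stop for futility"
else:
    decision = "continue"
print(round(pp, 3), decision)   # posterior probability ~0.97 -> continue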

Page 33: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

33

Adaptive Designs Working Group

Pairwise comparisons with GSD

Objective: compare multiple treatments with a control; focus on type I error rate rather than power

A simple Bonferroni approximation is only slightly conservative
Treatments may be dropped in the course of the trial if they are significantly inferior to others
"Step-down" procedures allow critical values for remaining comparisons to be reduced after some treatments have been discarded

References: Follmann et al. (1994)

Page 34: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

34

Adaptive Designs Working Group

p-value combination tests

Objective: compare multiple treatments with a control in a two-stage design allowing integration of data from both stages in a confirmatory trial

Focus: control of the multiple (familywise) Type I error level

Great flexibility:
General distributional assumptions for the endpoints
General stopping rules and selection criteria
Early termination of the trial
Early elimination of treatments due to lack of efficacy, safety issues, or ethical/economic reasons

References: Bauer & Kieser (1994); Liu & Pledger (2005)
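A minimal sketch of one such p-value combination test, Fisher's product combination for two independent stages; the p-values plugged in are arbitrary illustration values (the inverse normal combination appears later in this course).

# Fisher's combination of two independent stage-wise p-values (illustrative sketch).
# Under H0, -2 * (ln p1 + ln p2) follows a chi-square distribution with 4 df.
from math import log
from scipy.stats import chi2

def fisher_combined_p(p1, p2):
    statistic = -2 * (log(p1) + log(p2))
    return chi2.sf(statistic, df=4)

print(fisher_combined_p(0.08, 0.01))   # ~0.0065 for these made-up stage p-values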

Page 35: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

35

Adaptive Designs Working Group

Classification (roadmap chart repeated; see Page 23)

Page 36: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

36

Adaptive Designs Working Group

Dose-escalation designs

Objective: target the MTD (Phase I) or the best safe dose (Phase I/II) or find the therapeutic window

AR: non-parametric (3+3 rule, up-and-down) or model-based (Continual Reassessment Method) or Escalation With Overdose Control (EWOC) or Bayesian Decision Design or Bayesian Optimal Design or Penalized Adaptive D-optimal Design
SaR: cohorts of fixed size or in two stages (Storer design)
StR: no early stopping or stopping by design (e.g. 3+3 rule)
DR: update model parameters (for model-based AR)

References: O'Quigley et al.; Babb et al.; Edler; O'Quigley

Page 37: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

37

Adaptive Designs Working Group

Adaptive Model-based Dose-finding

Objective: find the optimal dose; working model for the dose-response; dose sequence identified in advance

AR: Bayesian (based on predictive probabilities: smallest average posterior variance) or frequentist (based on optimal experimental design: maximum information per cost)
SaR: cohorts of fixed size or after each observation
StR: stopping for futility or when the optimal dose for the confirmatory stage is sufficiently well known (estimation!)
DR: update model parameters, Bayesian predictions of the long-term endpoint using a longitudinal model

References: Berry et al. (2001); Dragalin & Fedorov; Fedorov & Leonov

Page 38: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

38

Adaptive Designs Working Group

Adaptive Dose-finding (Flexible Designs)

Objective: establishing a dose-response relationship or combining Phase II/III using p-value combination tests

AR: drop or add doses

SaR: sample size reassessment for the next stage

StR: early stopping for futility or early termination of some inferior doses

DR: adapting hypotheses, primary endpoint, test statistics, inserting or skipping IAs

References: Bauer&Kohne; Lehmacher et al

Page 39: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

39

Adaptive Designs Working Group

Classification (roadmap chart repeated; see Page 23)

Page 40: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

40

Adaptive Designs Working Group

Seamless Designs

Two-stage adaptive designs
1st Stage: treatment (dose) selection ("learning")
2nd Stage: comparison with control ("confirming")

Treatment selection may be based on a short-term endpoint (surrogate), while the confirmation stage uses a long-term (clinical) endpoint

2nd Stage data and the relevant groups from 1st Stage data are combined in a way that:
Guarantees the Type I error rate for the comparison with control
Produces efficient unbiased estimates and confidence intervals with correct coverage probability

Page 41: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

41

Adaptive Designs Working Group

Selection and testing

Objective: to select the "best" treatment in the 1st stage and proceed to the 2nd stage to compare with control

Focus:
overall type I error rate is maintained (TSE)
trial power is also achieved (ST)
selection is based on a surrogate (or short-term) endpoint (TS)

Method includes:
early termination of the whole trial
early elimination of inferior treatments

References: Thall, Simon & Ellenberg; Stallard & Todd; Todd & Stallard

Page 42: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

42

Adaptive Designs Working Group

Bayesian model-based designs

Objective: adaptive dose ranging within a confirmatory trial

Focus: efficient learning, effective treatment of patients in the trial

Method includes:
AR: to maximize information about dose response
SaR: frequent analysis of the data as they accumulate
Seamless switch to the confirmatory stage without stopping enrollment, in a double-blind fashion
Use of a longitudinal model for prediction of the clinical endpoint

References: Berry et al.; Inoue et al.

Page 43: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

43

Adaptive Designs Working Group

Classification (roadmap chart repeated; see Page 23)

Page 44: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

44

Adaptive Designs Working Group

Achieving the goals

The objective of a clinical trial may be:
either to target the MTD or MED
or to find the therapeutic range
or to determine the OSD (Optimal Safe Dose) to be recommended for confirmation
or to confirm efficacy over control in a Phase III clinical trial

This clinical goal is usually determined by:
the clinicians from the pharmaceutical industry
practicing physicians
key opinion leaders in the field, and
the regulatory agency

Page 45: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

45

Adaptive Designs Working Group

Achieving the goals

Once agreement has been reached on the objective, it is the statistician's responsibility to provide the appropriate design and statistical inferential structure required to achieve that goal

Page 46: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

46

Adaptive Designs Working Group

Achieving the goals

There are plenty of designs available on the statistician's shelf

The greatest challenge is their implementation

Adaptive designs have much more to offer than the rigid conventional parallel group designs in clinical trials

Page 47: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

47

Adaptive Designs Working Group

References

Babb J, Rogatko A, Zacks S. Cancer phase I clinical trials: efficient dose escalation with overdose control. Stat Med. 1998;17:1103-1120.
Bauer P. Statistical methodology relevant to the overall drug development program. Drug Inf J. 2003;37:81-89.
Bauer P, Kieser M. Combining different phases in the development of medical treatments within a single trial. Statistics in Medicine 1999;18:1833-1848.
Bauer P, Köhne K. Evaluation of experiments with adaptive interim analyses. Biometrics 1994;50:1029-1041.
Bauer P, Röhmel J. An adaptive method for establishing a dose-response relationship. Statistics in Medicine 1995;14:1595-1607.
Berry D. Adaptive trials and Bayesian statistics in drug development (with comments). Biopharmaceutical Report 2001;9:1-11.
Berry D. Bayesian statistics and the efficiency and ethics of clinical trials. Statistical Science 2004;19:175-187.
Berry D, Mueller P, Grieve AP, Smith M, Parke T, Blazek R, Mitchard N, Krams M. Adaptive Bayesian designs for dose-ranging drug trials. In: Gatsonis C, Carlin B, Carriquiry A (eds). Case Studies in Bayesian Statistics V. New York: Springer; 2002;162:99-181.
Brannath W, Posch M, Bauer P. Recursive combination tests. JASA 2002;97:236-244.
Dragalin V, Fedorov V. Adaptive model-based designs for dose-finding studies. Journal of Statistical Planning and Inference 2006;136:1800-1823.
East. Software for the design and interim monitoring of group sequential clinical trials. Cytel Software Corporation; 2005.
Edler L. Overview of Phase I Trials. In: Crowley J (ed). Handbook of Statistics in Clinical Oncology. New York: Marcel Dekker; 2001:1-34.
Fedorov V, Leonov S. Response driven designs in drug development. In: Wong WK, Berger M (eds). Applied Optimal Designs. Wiley; 2005.
Follman DA, Proschan MA, Geller NL. Monitoring pairwise comparisons in multi-armed clinical trials. Biometrics 1994;50:325-336.
Gould L. Sample-size re-estimation: recent developments and practical considerations. Statistics in Medicine 2001;20:2625-2643.

Page 48: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

48

Adaptive Designs Working Group

References

Inoue LYT, Thall PF, Berry D. Seamlessly expanding a randomized Phase II trial to Phase III. Biometrics 2002;58:823-831.
Jennison C, Turnbull BW. Group Sequential Methods with Applications to Clinical Trials. Boca Raton: Chapman & Hall/CRC; 2000.
Lehmacher W, Kieser M, Hothorn L. Sequential and multiple testing for dose-response analysis. Drug Inf J. 2000;34:591-597.
Liu Q, Pledger GW. Phase 2 and 3 combination designs to accelerate drug development. JASA 2005;100:493-502.
Mehta CR, Tsiatis AS. Flexible sample size considerations using information based interim monitoring. Drug Inf J. 2001;35:1095-1112.
Müller HH, Schäfer H. Adaptive group sequential designs for clinical trials: combining the advantages of adaptive and of classical group sequential approaches. Biometrics 2001;57:886-891.
O'Quigley J, Pepe M, Fisher L. Continual reassessment method: a practical design for phase I clinical trials in cancer. Biometrics 1990;46:33-48.
O'Quigley J. Dose-finding designs using continual reassessment method. In: Crowley J (ed). Handbook of Statistics in Clinical Oncology. New York: Marcel Dekker; 2001:35-72.
Proschan M. Two-stage sample size re-estimation based on a nuisance parameter: a review. JBS 2005;15:559-574.
Thall PF, Simon R, Ellenberg S. A two-stage design for choosing among several experimental treatments and a control in clinical trials. Biometrics 1989;45:537-547.
Todd S, Stallard N. A new clinical trial design combining Phase 2 and 3: sequential designs with treatment selection and a change of endpoint. Drug Inf J. 2005;39:109-118.
Rosenberger WF, Lachin JM. Randomization in Clinical Trials: Theory and Practice. Wiley; 2002.
Schwartz TA, Denne JS. Common threads between sample size recalculation and group sequential procedures. Pharmaceut Statist. 2003;2:263-271.
Siegmund D. Sequential Analysis: Tests and Confidence Intervals. New York: Springer; 1985.
Spiegelhalter DJ, Abrams KR, Myles JP. Bayesian Approaches to Clinical Trials and Health-Care Evaluation. Wiley; 2004.
Whitehead J. The Design and Analysis of Sequential Clinical Trials. 2nd ed. New York: Wiley; 1997.

Page 49: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

49

Adaptive Designs Working Group

Page 50: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

FDA/Industry Statistics Workshop Adaptive Designs Working Group

September 27, 2006, Washington, D.C.

Adaptive Seamless Designs for Phase IIb/III Clinical Trials

Jeff Maca, Ph.D., Assoc. Director, Biostatistics

Novartis Pharmaceuticals

Page 51: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

51

Adaptive Designs Working Group

Primary PhRMA references

PhRMA White Paper sections:

Maca J, Bhattacharya S, Dragalin V, Gallo P, and Krams M. Adaptive Seamless Phase II/III Designs – Background, Operational Aspects, and Examples. Drug Information Journal. 2006; 40(4): 463-473.

Gallo P. Confidentiality and trial integrity issues for adaptive designs. Drug Information Journal. 2006; 40(4): 445-450.

Page 52: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

52

Adaptive Designs Working Group

Outline

Introduction and motivation of adaptive seamless designs (ASD)
Statistical methodology for seamless designs
Considerations for adaptive design implementation
Simulations and comparisons of statistical methods

Page 53: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

53

Adaptive Designs Working Group

Introduction and Motivation

Reducing time to market is, has been, and will be a top priority in pharmaceutical development
Brings valuable medicines to patients sooner
Increases the value of the drug to the parent company

Adaptive seamless designs can help reduce this development time

Page 54: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

54

Adaptive Designs Working Group

Definitions

Seamless design
A clinical trial design which combines into a single trial objectives which are traditionally addressed in separate trials

Adaptive Seamless design
A seamless trial in which the final analysis will use data from patients enrolled before and after the adaptation (inferentially seamless)

Page 55: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

55

Adaptive Designs Working Group

Adaptive Seamless Designs

Primary objective – combine “dose selection” and “confirmation” into one trial

Although dose is the most common Phase IIb objective, other choices could be made, e.g. population

After dose selection, only change is to new enrollments (patients are generally not re-randomized)

Patients on terminated treatment groups could be followed

All data from the chosen group and the comparator are used in the final analysis; appropriate statistical methods must be used

Page 56: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

56

Adaptive Designs Working Group

Adaptive Seamless Designs

(Schematic: a traditional program, in which a Phase II trial of Dose A, Dose B, Dose C and Placebo is separated by white space in time from the subsequent Phase III trial, is contrasted with an adaptive seamless design, in which Stage A (learning) with all arms runs directly into Stage B (confirming) along the same time axis.)

Page 57: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

57

Adaptive Designs Working Group

Statistical methodology

Statistical methodology for Adaptive Seamless Designs must account for potential biases and statistical issues:
Selection bias (multiplicity)
Multiple looks at the data (interim analysis)
Combination of data from independent stages

Page 58: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

58

Adaptive Designs Working Group

Statistical methodology - Bonferroni

Simple Bonferroni adjustment

Test the final hypothesis at α / ntrt

Accounts for selection bias: multiplicity adjustment
Multiple looks at the data: not considered
Combination of data from stages by simple pooling

In some sense, ignores that there was an interim analysis at all

Most conservative approach, simple to implement
No other adjustments (i.e., sample size) can be made

Page 59: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

59

Adaptive Designs Working Group

Statistical Methodology – Closed Testing

An alternative and more powerful approach is closed testing, combining p-values with the inverse normal method

Methodology combines:
Closed testing of hypotheses
Simes adjustment of p-values for multiplicity
Combination of data (p-values) from the stages via the inverse normal method (or Fisher's combination)

Page 60: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

60

Adaptive Designs Working Group

Statistical Methodology – Closed Testing

Closed test procedure for n null hypotheses H1, …, Hn

The closed test procedure considers all intersection hypotheses

Hi is rejected at global level α if all hypotheses HI formed by intersection with Hi are rejected at local level α

For example, with two hypotheses, H1 can only be rejected at α = .05 if the intersection H12 is also rejected at α = .05

Page 61: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

61

Adaptive Designs Working Group

Statistical Methodology – Closed Testing

A typical study with 3 doses has 3 pairwise hypotheses. Multiplicity can be handled by adjusting the p-values from each stage using the Simes procedure:

q_S = min_{1 ≤ i ≤ |S|} ( |S| / i ) * p_(i)

where |S| is the number of elementary hypotheses in the intersection hypothesis H_S and p_(1) ≤ … ≤ p_(|S|) are the ordered p-values
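A minimal sketch of the Simes adjustment written out above; the three p-values used to check it are the first-stage values from the example that follows.

# Simes-adjusted p-value for an intersection hypothesis (illustrative sketch).
def simes(pvals):
    p = sorted(pvals)
    m = len(p)
    return min(m * p[i] / (i + 1) for i in range(m))

print(round(simes([0.23, 0.18, 0.08]), 2))   # 0.23, matching q1,123 in the example below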

Page 62: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

62

Adaptive Designs Working Group

Statistical Methodology – Closed Testing

Inverse Normal Method

If p1 and p2 are generated from independent data, then

C(p1, p2) = sqrt(t0) * Φ^{-1}(1 - p1) + sqrt(1 - t0) * Φ^{-1}(1 - p2)

will yield a Z test statistic

Note: For adaptive designs, a typical value for t0 is n1 / (n1 + n2)
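A minimal sketch of the inverse normal combination just defined, with t0 = n1/(n1 + n2) = 0.5 as in the example that follows.

# Inverse normal combination of two independent stage-wise p-values (sketch).
from math import sqrt
from scipy.stats import norm

def inverse_normal(p1, p2, t0=0.5):
    """Combined Z statistic; t0 is the pre-specified first-stage weight."""
    return sqrt(t0) * norm.ppf(1 - p1) + sqrt(1 - t0) * norm.ppf(1 - p2)

z = inverse_normal(0.23, 0.01)
print(round(z, 2), round(norm.sf(z), 3))   # 2.17 and p = 0.015, as in the example below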

Page 63: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

63

Adaptive Designs Working Group

Statistical Methodology – Example

Example: Dose finding with 3 doses + control

Stage sample sizes: n1 = 75, n2 =75 Unadjusted pairwise p-values from the first stage:

p1,1= 0.23, p1,2 = 0.18, p1,3 = 0.08

Dose 3 selected at interim Unadjusted p-value from second stage: p2,3 = .01

Page 64: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

64

Adaptive Designs Working Group

Statistical Methodology – Example

Three-way test:
q1,123 = min(3 * .08, 1.5 * .18, 1 * .23) = .23

q2,123 = p2,3 = .01

C(q1,123, q2,123) = 2.17, p-value = .015

Page 65: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

65

Adaptive Designs Working Group

Statistical Methodology – Example

Two-way tests:
q1,13 = min(2 * .08, 1 * .23) = .16

q1,23 = min(2 * .08, 1 * .18) = .16

q2,13 = q2,23 = p2,3 = .01

C(q1,13, q2,13) = C(q1,23, q2,23) = 2.35, p-value = .0094

Page 66: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

66

Adaptive Designs Working Group

Statistical Methodology – Example

Final test:
q1,3 = p1,3 = .08

q2,3 = p2,3 = .01

C(q1,3, q2,3) = 2.64, p-value = .0042
Conclusion: Dose 3 is effective
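The numbers on the three example slides can be reproduced with the short script below, which walks the closed testing procedure for the selected dose; equal stage weights (t0 = 0.5, i.e. n1 = n2) are assumed, as in the example.

# Reproducing the worked example: closed testing with Simes adjustment and
# inverse normal combination (equal stage weights assumed).
from math import sqrt
from scipy.stats import norm

stage1 = {1: 0.23, 2: 0.18, 3: 0.08}   # unadjusted first-stage p-values
p2 = 0.01                              # second-stage p-value for the selected dose 3

def simes(pvals):
    p = sorted(pvals)
    m = len(p)
    return min(m * p[i] / (i + 1) for i in range(m))

def combine(q1, q2, t0=0.5):
    return sqrt(t0) * norm.ppf(1 - q1) + sqrt(1 - t0) * norm.ppf(1 - q2)

# Every intersection hypothesis containing H3 must be rejected to reject H3.
for hyps in [(1, 2, 3), (1, 3), (2, 3), (3,)]:
    q1 = simes([stage1[h] for h in hyps])
    z = combine(q1, p2)
    print(hyps, round(q1, 2), round(z, 2), round(norm.sf(z), 4))
# Output: q1 = .23/.16/.16/.08, C = 2.17/2.35/2.35/2.64, p ≈ .015/.0094/.0094/.0042,
# so dose 3 is declared effective at the .05 level, as on the slides.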

Page 67: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

67

Adaptive Designs Working Group

Statistical Methodology – Sample sizes

Choosing sample sizes

There are two sample sizes to consider for a seamless design, n1 and n2

If t is the number of treatments, the total sample size N is:

N = t*n1 + 2*n2

The larger n1 is, the better the job of choosing the "right" dose; however, this makes the total much larger.

Power can be determined by simulation, and is also a function of the (unknown) dose response

Page 68: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

68

Adaptive Designs Working Group

Simulation for power comparison

To compare the two methods for analyzing an adaptive seamless design, the following parameters were used:
Sample sizes were n1 = n2 = 75
Primary endpoint is normal, with σ = 12
One dose was selected for continuation
Various dose responses were assumed
20,000 replications were used for the simulations (simulation error = ±0.5%)

Statistical Methodology – Power

Page 69: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

69

Adaptive Designs Working Group

Statistical Methodology – Power

Simulation for power comparison

Selecting 1 treatment group from 2 possible treatments

Dose response (Δ vs placebo)    Power, Bonferroni    Power, Closed Test
0, 4.5                          83.1%                83.2%
4.5, 4.5                        91.0%                92.2%

Page 70: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

70

Adaptive Designs Working Group

Statistical Methodology – Power

Simulation for power comparison

Selecting 1 treatment group from 3 possible treatments

Dose response (Δ vs placebo)    Power, Bonferroni    Power, Closed Test
0, 0, 4.5                       79.4%                78.9%
4.5, 4.5, 4.5                   90.8%                92.7%
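The simulation behind these tables can be sketched as follows for the two-dose case. Because the slides do not state the test level, the selection rule, or how "simple pooling" is implemented, the sketch assumes a one-sided α of 0.025, selection of the dose with the larger observed stage-1 mean, equal stage weights (t0 = 0.5), and pooling of both stages for the Bonferroni analysis; with those assumptions the estimates should land in the neighborhood of the first table above.

# Sketch of the power simulation (selecting 1 dose from 2), under assumptions
# not stated on the slides: one-sided alpha = 0.025, equal stage weights,
# and simple pooling of both stages for the Bonferroni analysis.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n1 = n2 = 75            # per-arm sample sizes in stage 1 and stage 2
sigma = 12.0
alpha = 0.025           # one-sided (assumption)
deltas = [0.0, 4.5]     # true dose effects vs placebo; try [4.5, 4.5] as well

def one_sided_p(x, y):
    """Z-test p-value for mean(x) > mean(y), variance treated as known."""
    z = (x.mean() - y.mean()) / (sigma * np.sqrt(1 / len(x) + 1 / len(y)))
    return 1 - norm.cdf(z)

def simes(pvals):
    p = np.sort(np.asarray(pvals))
    m = len(p)
    return np.min(m * p / np.arange(1, m + 1))

reject_bonf = reject_closed = 0
nsim = 20000
for _ in range(nsim):
    stage1 = [rng.normal(d, sigma, n1) for d in deltas]
    pbo1 = rng.normal(0.0, sigma, n1)
    sel = int(np.argmax([arm.mean() for arm in stage1]))   # pick the best-looking dose
    stage2 = rng.normal(deltas[sel], sigma, n2)
    pbo2 = rng.normal(0.0, sigma, n2)

    # Bonferroni: pool both stages for the selected dose, test at alpha / (number of doses).
    p_pooled = one_sided_p(np.concatenate([stage1[sel], stage2]),
                           np.concatenate([pbo1, pbo2]))
    reject_bonf += p_pooled <= alpha / len(deltas)

    # Closed test: Simes-adjusted stage-1 p-values combined with the stage-2 p-value
    # by the inverse normal method; the selected dose is rejected only if every
    # intersection hypothesis containing it is rejected.
    p1 = [one_sided_p(arm, pbo1) for arm in stage1]
    p2 = one_sided_p(stage2, pbo2)
    ok = True
    for S in [(0, 1), (sel,)]:
        q1 = simes([p1[i] for i in S])
        z = (norm.ppf(1 - q1) + norm.ppf(1 - p2)) / np.sqrt(2)
        ok &= (1 - norm.cdf(z)) <= alpha
    reject_closed += ok

print(f"Bonferroni power ~ {reject_bonf / nsim:.3f}, closed test power ~ {reject_closed / nsim:.3f}")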

Page 71: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

71

Adaptive Designs Working Group

Considerations for Seamless Designs

With the added flexibility of seamless designs comes added complexity.

Careful consideration should be given to the feasibility of a seamless design for the project.

Not all projects can use seamless development
Even if two programs can use seamless development, one might be better suited than the other
Many characteristics add to or subtract from the feasibility

Page 72: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

72

Adaptive Designs Working Group

Considerations for Seamless Designs

Enrollment vs. Endpoint

The length of time needed to make a decision relative to the time of enrollment must be small; otherwise enrollment must be paused

The endpoint must be well known and accepted
If the goal of Phase II is to determine the endpoint for registration, seamless development would be difficult

If a surrogate marker will be used for dose selection, it must be accepted, validated and well understood

Page 73: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

73

Adaptive Designs Working Group

Considerations for Seamless Designs

Clinical Development Time

There will usually be two pivotal trials for registration
The entire program must be completed in shorter timelines, not just the adaptive trial

Page 74: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

74

Adaptive Designs Working Group

Considerations for Seamless Designs

Logistical considerations

Helpful if the final product is available for the adaptive trial (otherwise a bioequivalence study is needed)

The decision process and personnel must be carefully planned and pre-specified

Page 75: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

75

Adaptive Designs Working Group

Considerations for Seamless Designs

Novel drug or indication

A decision process that is overly complicated could be an issue for an external board

If there are many unknown issues with the indication or drug, a separate Phase II trial would be better

However, getting a novel drug to patients sooner increases the benefit of seamless development

Page 76: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

76

Adaptive Designs Working Group

Conclusions

Adaptive seamless designs have an ability to improve the development process by reducing timelines for approval

Statistical methods are available to account for adaptive trial designs

Extra planning is necessary to implement an adaptive seamless design protocol

Benefits should be carefully weighed against the challenges of such designs before implementation

Page 77: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

77

Adaptive Designs Working Group

References

Dragalin V. Adaptive designs: terminology and classification. Drug Inf J. 2006 (to appear).
Quinlan JA, Krams M. Implementing adaptive designs: logistical and operational considerations. Drug Inf J. 2006 (to appear).
Bechhofer RE, Kiefer J, Sobel M. Sequential Identification and Ranking Problems. Chicago: University of Chicago Press; 1968.
Paulson E. A selection procedure for selecting the population with the largest mean from k normal populations. Ann Math Stat. 1964;35:174-180.
Thall PF, Simon R, Ellenberg SS. A two-stage design for choosing among several experimental treatments and a control in clinical trials. Biometrics 1989;45:537-547.
Schaid DJ, Wieand S, Therneau TM. Optimal two stage screening designs for survival comparisons. Biometrika 1990;77:659-663.
Stallard N, Todd S. Sequential designs for phase III clinical trials incorporating treatment selection. Stat Med. 2003;22:689-703.
Follman DA, Proschan MA, Geller NL. Monitoring pairwise comparisons in multi-armed clinical trials. Biometrics 1994;50:325-336.
Hellmich M. Monitoring clinical trials with multiple arms. Biometrics 2001;57:892-898.

Page 78: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

78

Adaptive Designs Working Group

References

Bischoff W, Miller F. Adaptive two-stage test procedures to find the best treatment in clinical trials. Biometrika 2005;92:197-212.
Todd S, Stallard N. A new clinical trial design combining Phases 2 and 3: sequential designs with treatment selection and a change of endpoint. Drug Inf J. 2005;39:109-118.
Bauer P, Köhne K. Evaluation of experiments with adaptive interim analyses. Biometrics 1994;50:1029-1041.
Bauer P, Kieser M. Combining different phases in the development of medical treatments within a single trial. Stat Med. 1999;18:1833-1848.
Brannath W, Posch M, Bauer P. Recursive combination tests. J Am Stat Assoc. 2002;97:236-244.
Müller HH, Schäfer H. Adaptive group sequential designs for clinical trials: combining the advantages of adaptive and classical group sequential approaches. Biometrics 2001;57:886-891.
Liu Q, Pledger GW. Phase 2 and 3 combination designs to accelerate drug development. J Am Stat Assoc. 2005;100:493-502.
Posch M, Koenig F, Brannath W, Dunger-Baldauf C, Bauer P. Testing and estimation in flexible group sequential designs with adaptive treatment selection. Stat Med. 2005;24:3697-3714.
Bauer P, Einfalt J. Application of adaptive designs: a review. Biometrical J. 2006;48:1-14.

Page 79: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

79

Adaptive Designs Working Group

References

Inoue LYT, Thall PF, Berry DA. Seamlessly expanding a randomized phase II trial to phase III. Biometrics 2002;58:823-831.
Berry DA, Müller P, Grieve AP, Smith M, Parke T, Blazek R, Mitchard N, Krams M. Adaptive Bayesian designs for dose-ranging drug trials. In: Case Studies in Bayesian Statistics V. Lecture Notes in Statistics. New York: Springer; 2002;162:99-181.
Coburger S, Wassmer G. Sample size reassessment in adaptive clinical trials using a bias corrected estimate. Biometrical J. 2003;45:812-825.
Brannath W, König F, Bauer P. Improved repeated confidence bounds in trials with a maximal goal. Biometrical J. 2003;45:311-324.
Sampson AR, Sill MW. Drop-the-losers design: normal case. Biometrical J. 2005;47:257-281.
Stallard N, Todd S. Point estimates and confidence regions for sequential trials involving selection. J Statist Plan Inference 2005;135:402-419.
US Food and Drug Administration. Guidance for Clinical Trial Sponsors: Establishment and Operation of Clinical Trial Data Monitoring Committees. Rockville, MD: FDA; 2006. http://www.fda.gov/cber/qdlns/clintrialdmc.htm.
US Food and Drug Administration. Challenge and Opportunity on the Critical Path to New Medicinal Products. 2006. http://www.fda.gov/oc/initiatives/criticalpath/whitepaper.html.

Page 80: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

80

Adaptive Designs Working Group

Page 81: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

FDA/Industry Statistics Workshop Adaptive Designs Working Group

September 27, 2006, Washington, D.C.

Adaptive Designs Sample size re-estimation: A review and recommendations

Keaven M. Anderson
Clinical Biostatistics and Research Decision Sciences

Merck Research Laboratories

Page 82: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

82

Adaptive Designs Working Group

Outline

Introduction/background
Methods
  Fully sequential and group sequential designs
  Adaptive sample size re-estimation
    Background
    Nuisance parameter estimation/internal pilot studies
      Blinded sample size re-estimation
      Unblinded sample size re-estimation
    Conditional power and related methods
Discussion and recommendations
Case studies
References

Page 83: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

83

Adaptive Designs Working Group

Background

Origin: PhRMA Adaptive Design Working Group
Chuang-Stein C, Anderson K, Gallo P, Collins S. Sample size re-estimation: a review and recommendations. Drug Information Journal 2006;40(4):475-484

Focus:
Late-stage (Phase III, IV) sample size re-estimation
Frequentist methods
Control of Type I error
Potential for bias is critical in these 'confirmatory' trials
Implications for logistical issues

Page 84: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

84

Adaptive Designs Working Group

Introduction

Adaptive designs allow design specifications to be changed based on accumulating data (and/or information external to the trial)

Extensive literature exists on adapting through sample size re-estimation, the topic of this talk

Since sample sizes in group sequential and fully sequential trials are data-dependent, we consider these to be included in a broad definition of adaptive design/sample size re-estimation

Page 85: Adaptive Designs Working Group September 27, 2006 Washington, D.C. FDA/Industry Statistics Workshop Adaptive Clinical Trials Short Course Presenters: Michael.

85

Adaptive Designs Working Group

Introduction

Why consider sample size re-estimation?

Minimize the number of patients exposed to an inferior or highly toxic treatment
Right-size the trial to demonstrate efficacy (reduce or increase the sample size)
Stop the trial for futility if there is insufficient benefit
Incorporate new internal or external information into a trial design during the course of the trial


Introduction: Reasons for Unplanned Adaptation

Information that could not have been anticipated prior to trial start has become available.
  Regulators change the primary efficacy endpoint from mean change in viral load to the percentage of HIV patients with viral load below the detectable limit.
  Regulators decide that the new treatment needs to win on more than the one efficacy endpoint used to size the trial.

Adaptation not planned at the design stage is sometimes used to 'bail out' a trial.
  A competing therapy is removed from the market, allowing a lesser treatment benefit to be viable.
  Sponsor 'changes mind' about the minimal treatment effect of interest.

This topic will receive minimal discussion here.


The Problem

In order to appropriately power a trial, you need to know:
  The true effect size you wish to detect
  Nuisance parameters such as
    Variability of a continuous endpoint
    Population event rate for a binary outcome or time to event
  Other ancillary information (e.g., correlation between co-primary endpoints needed to evaluate study-level power)

Inappropriate assumptions about any of these factors can lead to an underpowered trial.


Consequences of incorrect planning for the treatment difference (δ) and/or standard deviation (σ)
(α = 0.05, planned power = 90%)

Planning error (by 50%)                          N planned / N required    Power
Over-estimate δ or under-estimate σ                     0.44                58%
Under-estimate δ or over-estimate σ                     2.25                99.8%
Over-estimate δ AND under-estimate σ                    0.20                30%
Under-estimate δ AND over-estimate σ                    5.06                >99.9%
Under-estimate δ AND under-estimate σ                   1                   90%
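The power column can be reproduced from the ratio of planned to required sample size using the standard normal-approximation relationship; this is my own check, not material from the course, and it assumes the two-sample z-test setting above.

```python
from scipy.stats import norm

alpha, power_planned = 0.05, 0.90
za = norm.ppf(1 - alpha / 2)        # z_{1-alpha/2}
zb = norm.ppf(power_planned)        # z_{1-beta}

def achieved_power(ratio):
    """Power when the actual sample size is `ratio` times the size truly required for 90% power."""
    return norm.cdf(ratio ** 0.5 * (za + zb) - za)

for r in (0.44, 2.25, 0.20, 5.06, 1.0):
    print(f"N planned / N required = {r:4.2f} -> power = {achieved_power(r):.1%}")
# ~58%, ~99.8%, ~30%, >99.9%, 90%, matching the table to rounding
```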


Solutions to the problem

Plan a fixed trial conservatively
  Pro: trial should be well-powered
  Con: can lead to a lengthy, over-powered, expensive trial

Use a group sequential design and plan conservatively
  Pro: can power the trial well and stop at an appropriate, early interim analysis if your assumptions are too conservative
  Con: over-enrollment occurs past the definitive interim analysis because it takes time to collect, clean and analyze data

Use an adaptive design
  Pro: can decide to alter trial size based on partial data or new, external information
  Cons: methods used to adapt must be carefully chosen; regulatory scrutiny over methods and 'partial unblinding'; may not improve efficiency over a group sequential design


Outline
  Introduction/background
  Methods
    Fully sequential and group sequential designs
    Adaptive sample size re-estimation
      Background
      Nuisance parameter estimation/internal pilot studies
        Blinded sample size re-estimation
        Unblinded sample size re-estimation
      Conditional power and related methods
  Discussion and recommendations
  Case studies
  References


Fully sequential design

Not commonly used due to continuous monitoring
May be useful to continuously monitor a rare serious adverse effect
  Intracranial hemorrhage in a thrombolytic/anti-platelet trial
  Intussusception in a rotavirus vaccine trial
Unblinded analysis suggests the need for an independent monitor or monitoring committee

References
  Wald (1947), Sequential Analysis
    Sequential probability ratio test (SPRT)
  Siegmund (1985), Sequential Analysis: Tests and Confidence Intervals
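For reference (a standard result, not quoted from the slides), Wald's SPRT continues sampling while the likelihood ratio stays between two fixed limits set by the target error rates:

```latex
% Wald's SPRT: after each observation compute the likelihood ratio \Lambda_n of
% H_1 versus H_0 and continue sampling while  B < \Lambda_n < A ;
% stop for H_1 when \Lambda_n \ge A and for H_0 when \Lambda_n \le B, with the
% approximate boundaries
A \;\approx\; \frac{1-\beta}{\alpha},
\qquad
B \;\approx\; \frac{\beta}{1-\alpha}.
```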


Group sequential design

Classic
  Fixed sample sizes for interim and final analyses
  Pre-defined cutoffs for superiority and futility/inferiority at each analysis
  Trial stops (adapts) if sufficient evidence is available to decide early
  Independent data monitoring committee often used to review unblinded interim analyses

Variations
  Adjustment of interim analysis times (spending functions)
  Adjustment of total sample size or follow-up based on, for example, number of events (information-based designs)

Properties are well understood and the design is generally well-accepted by regulators
See: Jennison and Turnbull (2000), Group Sequential Methods with Applications to Clinical Trials
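To illustrate the spending-function idea, here is a minimal sketch (my own illustration, not course material) evaluating the cumulative Type I error spent at arbitrary information fractions under two commonly used Lan-DeMets spending functions:

```python
import numpy as np
from scipy.stats import norm

def obrien_fleming_spend(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending: alpha*(t) = 2 - 2*Phi(z_{1-alpha/2}/sqrt(t))."""
    t = np.asarray(t, dtype=float)
    return 2.0 - 2.0 * norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t))

def pocock_spend(t, alpha=0.05):
    """Lan-DeMets Pocock-type spending: alpha*(t) = alpha * ln(1 + (e - 1)*t)."""
    t = np.asarray(t, dtype=float)
    return alpha * np.log(1 + (np.e - 1) * t)

info_fractions = [0.25, 0.50, 0.75, 1.00]   # interim looks need not be equally spaced
print("OBF-type:   ", np.round(obrien_fleming_spend(info_fractions), 4))
print("Pocock-type:", np.round(pocock_spend(info_fractions), 4))
```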


Outline
  Introduction/background
  Methods
    Fully sequential and group sequential designs
    Adaptive sample size re-estimation
      Background
      Nuisance parameter estimation/internal pilot studies
        Blinded sample size re-estimation
        Unblinded sample size re-estimation
      Conditional power and related methods
  Discussion and recommendations
  Case studies
  Evolving issues


The Opportunity

Size the study appropriately to reach study objectives in an efficient manner, based on interim data that offer more accurate information on
  Nuisance parameters
    Within-group variability (continuous data)
    Event rate for the control group (binary data)
    Number of subjects and amount of exposure needed to capture adequate occurrences of a time-to-event endpoint
  Treatment effect
  Other ancillary information (e.g., correlation between co-primary endpoints needed to evaluate study-level power)

Ensure that we will have collected enough exposure data for safety evaluation by the end of the study


SSR Strategies

Update sample size to ensure power as desired based on interim results

Internal pilot studies: adjust for nuisance parameter estimates only
  Blinded estimation
  Unblinded estimation
  Testing strategy: no adjustment from usual test statistics

Adjusting for the interim test statistic/treatment effect
  All methods adjust based on the unblinded treatment difference
  Adjust sample size to retain power based on the interim test statistic
    Assume the observed treatment effect at interim, or
    Assume the original treatment effect
  Testing strategy: adjust the stage 2 critical value based on the interim test statistic


SSR - Issues

Planned vs unplanned (at the design stage)

Control of Type I error rate and power

If we have a choice, do we do it blinded or unblinded?

If we do it unblinded, how do we maintain confidentiality?
  Who will know the exact SSR rule?
  Who will do it, a third party?
  Who will make the recommendation, a DMC?
  How will the results be shared?
  Who will know the results, the sponsors, the investigators?

When is a good time to do SSR?

Regulatory acceptance


SSR reviews

These all concern what might be considered 'internal pilot' studies
  Friede and Kieser, Statistics in Medicine, 2001; 20:3861-73
    Also Biometrical Journal, 2006; 48:537-555
  Gould, Statistics in Medicine, 2001; 20:2625-43
  Jennison and Turnbull, 2000, Chapter 14
  Zucker, Wittes, Schabenberger, Brittain, Statistics in Medicine, 1999; 18:3493-3509


Blinded SSR

When SSR is based on nuisance parameters
  Overall variability (continuous data)
  Overall rate (binary data)

Advantage
  No need to break the blind.
  In-house personnel can do it.
  Minimal implication for the Type I error rate.

Disadvantage
  The estimate of the nuisance parameter could be wrong, leading to incorrect readjustment.


Blinded SSR

Internal pilot studies to estimate a nuisance parameter without adjustment of the final test statistic/critical value

Gould and Shih (1992)
  Uses an EM algorithm to estimate individual group means or event rates
  Estimates variance (continuous case)
  Updates the estimate of the sample size required for adequate power
  Software: Wang, 1999

Friede and Kieser (2001)
  Assume the treatment difference known (no EM algorithm required)
  Adjust the within-group sum of squares using this constant
  Type I error and power appear good

Some controversy over the appropriateness of the EM approach (Friede and Kieser, 2002?; Gould and Shih, 2005?)

Questions to ask:
  How well will this work if the treatment effect is different from what you have assumed for the EM procedure?
  Will it be under- or over-powered?
  A group sequential version (Gould and Shih, 1998) may bail you out of this
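A minimal sketch of a blinded variance adjustment in the spirit of this literature (my own illustration, assuming 1:1 allocation and a z-approximation; the published procedures differ in detail): with equal allocation, the lumped one-sample variance of the blinded data overestimates the within-group variance by roughly Δ²/4 when the true group difference is Δ, so an assumed Δ can be subtracted before re-computing the sample size.

```python
import numpy as np
from scipy.stats import norm

def blinded_ssr(pooled_values, assumed_delta, alpha=0.05, power=0.90):
    """Re-estimate the per-arm sample size from blinded interim data (1:1 allocation assumed).

    assumed_delta is the design-stage treatment difference, NOT estimated from the data.
    """
    x = np.asarray(pooled_values, dtype=float)
    n = len(x)
    s2_lumped = x.var(ddof=1)                      # one-sample ("lumped") variance
    # E[s2_lumped] = sigma^2 + n1*n2/(n*(n-1)) * delta^2  ~  sigma^2 + delta^2/4 for n1 = n2 = n/2
    bias = (n / 2) * (n / 2) / (n * (n - 1)) * assumed_delta ** 2
    sigma2_hat = max(s2_lumped - bias, 1e-12)      # adjusted within-group variance estimate
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * sigma2_hat * z ** 2 / assumed_delta ** 2))   # per-arm n

# Hypothetical example: 100 blinded interim observations, design difference of 5 units
rng = np.random.default_rng(1)
interim = np.concatenate([rng.normal(0, 10, 50), rng.normal(5, 10, 50)])
print(blinded_ssr(interim, assumed_delta=5.0))
```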


Blinded SSR gone wrong?

[Figure: sample size per arm plotted against the observed combined (blinded) event rate from 12.5% to 18.5%, assuming a 20% placebo event rate and a 25% reduction; 90% power, 2-sided Type I error 5%. Annotated points: 20% vs. 12%: N = 436; 17.8% vs. 14.2%: n = 1346.]


Unblinded SSR

Advantage
  Could provide a more accurate sample-size estimate.

Disadvantages
  Re-estimating the sample size in a continuous fashion can reveal the interim difference.
  There could be concerns over bias resulting from knowledge of the interim observed treatment effect.
  Typically requires an external group to conduct SSR for registration trials.
  Interim treatment differences can be misleading
    Due to random variation, or
    If trial conditions change


Internal Pilot Design: Continuous Data

Adjusts the sample size using only the nuisance parameter estimate
  Question to ask: does the updated sample size reveal the observed treatment effect?

Use some fraction of the planned observations to estimate the error variance for continuous data, modify the final sample size, and include the observations used to estimate the variance in the final analysis.
  Plug the new estimate into the sample size formula and obtain a new sample size.

If the sample size re-estimation involves at least 40 patients per group, simulations have shown (Wittes et al, SIM 1999, 18:3481-3491; Zucker et al, SIM 1999, 18:3493-3509):
  The type I error rate of the unadjusted (naïve) test is at about the desired level if we do not allow the sample size to go down
  The unadjusted test could lead to non-trivial bias in the type I error rate if we allow the sample size to go down
  Power OK

Coffey and Muller (Biometrics, 2001, 57:625-631) investigated ways to control the type I error rate (including different ways to do SSR).

Denne and Jennison (Biometrika, 1999) provide a group sequential version.
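A minimal sketch of the internal pilot update described above (my own illustration; 1:1 allocation and a z-approximation assumed), including the option of not allowing the sample size to decrease:

```python
import math
from scipy.stats import norm

def internal_pilot_n(s2_interim, delta, n_planned, alpha=0.05, power=0.90,
                     allow_decrease=False):
    """Recompute the per-arm sample size from an interim pooled variance estimate."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_new = math.ceil(2 * s2_interim * z ** 2 / delta ** 2)
    return n_new if allow_decrease else max(n_new, n_planned)

# Design assumed SD = 20 (variance 400); the interim pooled variance came out at 625
print(internal_pilot_n(s2_interim=625.0, delta=10.0, n_planned=85))
```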


Internal pilot design: binary data

See, e.g., Herson and Wittes (1993), Jennison and Turnbull (2000)

Estimate the control group event rate at interim
  Type I error OK if the interim n is large enough

Options (see Jennison and Turnbull, 2000 for a power study)
  Assume p1 - p2 fixed
    Power appears OK
  Assume p1/p2 fixed
    Can be underpowered
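To make the two options concrete, a small sketch (my own, using the usual normal-approximation sample size for two proportions) re-estimates the per-arm n from an interim control-rate estimate under a fixed risk difference versus a fixed risk ratio:

```python
import math
from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.90):
    """Per-arm sample size for comparing two proportions (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

# Hypothetical design: control 20% vs. treatment 12%; interim suggests control rate is really 15%
p1_interim = 0.15
n_fixed_difference = n_two_proportions(p1_interim, p1_interim - 0.08)   # keep p1 - p2 = 0.08
n_fixed_ratio = n_two_proportions(p1_interim, p1_interim * 0.60)        # keep p2/p1 = 0.60
print(n_fixed_difference, n_fixed_ratio)   # the fixed-ratio assumption asks for many more patients
```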


Combination tests

Methods for controlling Type I error
  The invariance principle: calculate separate standardized test statistics from different stages and combine them in a predefined way to make decisions.
  The weighting of a stage does not increase if the sample size for that stage is increased, meaning that individual observations for that stage are down-weighted in the final test statistic
    Efficiency issue (Tsiatis and Mehta, 2003)

Many methods available, including
  Fisher's combination test (Bauer, 1989)
  Conditional error functions (Proschan and Hunsberger, 1995; Liu and Chi, 2001)
  Inverse normal method (Lehmacher and Wassmer, 1999)
  Variance spending (Fisher, 1998)
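For reference, the two most commonly cited combination rules take the following standard forms (shown for a two-stage design without early stopping; textbook statements, not reproduced from the slides):

```latex
% Inverse normal combination (Lehmacher and Wassmer): with prespecified weights
% w_1, w_2 satisfying w_1^2 + w_2^2 = 1, reject H_0 at the final analysis if
w_1\,\Phi^{-1}(1-p_1) + w_2\,\Phi^{-1}(1-p_2) \;>\; z_{1-\alpha}.
% Fisher's product criterion (Bauer): reject if
p_1\,p_2 \;\le\; c_\alpha = \exp\!\bigl(-\tfrac{1}{2}\,\chi^2_{4,\,1-\alpha}\bigr),
% using that -2\ln(p_1 p_2) has a chi-square distribution with 4 df under H_0.
% Because the weights are fixed in advance, increasing the stage-2 sample size
% does not increase that stage's weight in the combined statistic.
```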


Combination tests

Apply the combination test method to determine the critical value for the second stage based on the observed data from the first stage.

Make an assumption on the treatment effect; options include:
  Observed effect (highly variable)
  External estimate
  Original treatment effect used for sample size planning

Compute the next-stage sample size based on that critical value, setting the conditional power to the originally desired power given the interim test statistic and the assumed second-stage treatment effect.

Generally, this will only raise the sample size, not lower it.
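In symbols, under a normal approximation with known σ and 1:1 allocation (a generic sketch, not a formula quoted from the course): if the second-stage z-statistic must exceed a critical value c₂ determined by the combination test, the conditional power under an assumed effect δ, and the second-stage per-arm sample size giving conditional power 1-β, are

```latex
\mathrm{CP}(n_2) \;=\; \Phi\!\left(\frac{\delta}{\sigma}\sqrt{\frac{n_2}{2}} \;-\; c_2\right),
\qquad
n_2 \;=\; \frac{2\,\sigma^{2}\bigl(c_2 + z_{1-\beta}\bigr)^{2}}{\delta^{2}} .
```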


Outline
  Introduction/background
  Methods
    Fully sequential and group sequential designs
    Adaptive sample size re-estimation
      Background
      Nuisance parameter estimation/internal pilot studies
        Blinded sample size re-estimation
        Unblinded sample size re-estimation
      Conditional power and related methods
  Discussion and recommendations
  Case studies
  References


Blinded vs Unblinded SSR

For SSR due to an improved estimate of the variance (continuous data), Friede and Kieser (Stat in Med, 2001) conclude that there is not much gain in conducting SSR unblinded.
  They only studied a constant treatment effect.

Statistical approaches to control the Type I error rate are particularly important when adjusting the sample size to power for the observed treatment difference.

Decisions related to SSR because of an inaccurate assumption on the nuisance parameters can differ significantly from those due to an inaccurate assumption on the treatment effect.


Relative efficiency of SSR methods

Internal estimates of the treatment effect lead to very inefficient trials (Jennison and Turnbull, 2003) due to the variability of the estimates.

External or pre-determined minimal treatment effect assumptions can yield efficiency comparable to group sequential designs (Liu and Chi, 2001; Anderson et al., 2004).

Adding a maximum sample size adjustment limit can improve over group sequential designs (Posch et al., 2003).

Based on a comparison of optimal group sequential and adaptive designs, the improvement of adaptive designs over group sequential designs is minimal (Jennison and Turnbull, SIM 2006; see also Anderson, 2006).

Use of a sufficient statistic design rather than a weighted combination test improves efficiency (Lokhnygina, 2004).


Group Sequential vs SSR Debate

Efficiency
  Adaptive designs for SSR using combination tests with fixed weights are generally inefficient.
  Efficient adaptive designs for SSR have little to offer over efficient group sequential designs in terms of sample size. However, the latter might require more interim analyses and offer minimal gain. In addition, the comparisons were made as if we knew the truth.

Flexibility and upfront resource commitment
  SSR offers flexibility and reduces upfront resource commitment. The flip side is the need to renegotiate the budget and request additional drug supply when an increase in sample size is necessary.
  SSR addresses uncertainty at the design stage.


Group Sequential vs SSR Debate

SSR is fluid and can respond to a changing environment, both in terms of medical care and the primary endpoint used to assess the treatment effect.
  This is important for trials lasting 3-5 years, when environmental changes are expected.

Need to ascertain the treatment effect in major subgroups even though the subgroups are not the primary analysis populations
  Xigris for disease severity groups
  Cozaar for race groups


Recommendation #1

Before considering adaptive sample-size re-estimation, evaluate whether or not a group sequential design is adequate.

Pros:
  Regulatory acceptance
  Well-understood methods allow substantial flexibility
  Experienced monitoring committee members available

Cons:
  May not work well in some situations when the trial cannot be stopped promptly (long follow-up; slow data collection, cleaning or analysis)


Recommendation #2

Anticipate as much as possible at the planning stage the need to do SSR to incorporate information that will accumulate during the trial
  Treatment effect size
  Nuisance parameters
  The effect of environmental changes on the design assumptions

Do not use SSR
  To avoid up-front decisions about planning
  As a 'bait-and-switch' technique where a low initial budget can be presented with a later upward sample size adjustment.


Recommendation #3

For SSR based on variance, consider using blinded SSR.
  However, when there is much uncertainty about the treatment effect, consider using unblinded SSR.

For a binary outcome, one can either do blinded SSR based on the overall event rate or unblinded SSR based on the event rate of the control group. There is no clear preference; the choice is dependent on several factors.
  If there is much uncertainty about the treatment effect, consider unblinded SSR using conditional power methods (see next slides).
  If SSR is blinded, consider conducting an interim analysis to capture a higher than expected treatment effect early.


Recommendation #4

To help maintain confidentiality of the interim results, we recommend:
  Do not reveal the exact method for adjusting the sample size.
  Make the outcome of SSR discrete, with only 2-3 options.

Under the first approach, details of the SSR methodology will not be described in the protocol, but documented in a stand-alone statistical analysis plan for SSR that is not available to study personnel.

For SSR based on the observed treatment effect (continuous case), it will be beneficial to base SSR on both variability and effect.

We recommend that the protocol include the maximum sample size allowed, to minimize the need to go back to the IRB.


Recommendation #5

For unblinded SSR:
  Invite a third party to do the calculations following a pre-specified rule.
  If possible, combine SSR with a group sequential design, where SSR will be conducted at the same time as an interim analysis.
  Convene a DMC (or preferably an IDMC) to review the SSR recommendation from the third party. If an IDMC is used, the IDMC statistician can carry out the SSR.
  Assuming Recommendation #4 is followed, the new sample size will be communicated to the sponsor. The investigators will be told to continue enrollment.


Recommendation #6

Carefully consider the number of times to do SSR.
  E.g., for variance estimation, is once enough?

The timing of the SSR should be based on multiple considerations, such as
  Available information at the design stage
  Disease
  Logistics
    Delay from enrollment until follow-up is complete and data are available
    Enrollment rate
  Method: whether the SSR will be based on variance or treatment effect

Gould and Shih (1992) recommend an early update, as soon as the variance estimate is stable, due to administrative considerations, while Sandvik et al. (1996) recommend as late as possible to get an accurate variance estimate.


Recommendation #7

Acceptance of SSR by regulators varies, depending on the reasons for SSR. In general, blinded SSR based on a nuisance parameter is acceptable.

When proposing unblinded SSR, the proposal should include
  The objective for SSR
  Statistical methodology, including the control of Type I error
  When to do the SSR
  How to implement it (e.g., DMC, third party)
  How to maintain confidentiality
  How the results will be shared
  Efficiency (power/sample size) considerations

Discuss the plan with regulatory agencies in advance.


Outline
  Introduction/background
  Methods
    Fully sequential and group sequential designs
    Adaptive sample size re-estimation
      Background
      Nuisance parameter estimation/internal pilot studies
        Blinded sample size re-estimation
        Unblinded sample size re-estimation
      Conditional power and related methods
  Discussion and recommendations
  Case studies
  References


Case Study #1: Blinded SSR Based on Variance

Drug X: low-dose, high-dose and placebo

Main efficacy endpoint: percent change in a continuous primary outcome

N = 270 provides 90% power to detect a 10% difference versus placebo
  Estimate of the variability obtained from a study performed in a different setting (SD estimated at 20%)

Seasonal disease: interim analysis performed during a long pause in enrollment

Recruitment was anticipated to be difficult, so it was specified in the protocol that a blinded estimate of the variability of the primary outcome would be computed when the sample size reaches 100
  If the variability is less than anticipated (e.g., SD 15%), the final sample size could be reduced
  If the variability is greater than anticipated (e.g., SD > 25%), the main comparison would be the pooled Drug X groups (low- and high-dose) vs. placebo


Case Study: REST Study Design

Sample size:    Minimum of 60,000 (1V:1P)
Age:            6 to 12 weeks at enrollment
Dose regimen:   3 oral doses of Rotavirus every 4-10 wks
Formulation:    Refrigerated liquid buffer/stabilizer intended for licensure
Potency:        Release range intended for licensure
Study Period:   2001 to 2005


Primary Safety Hypothesis

Oral RotaTeq™ will not increase the risk of intussusception relative to placebo within 42 days after any dose.

To satisfy the primary safety hypothesis, 2 criteria must be met:

1. During the study, the vaccine/placebo case ratio does not reach the predefined unsafe boundaries being monitored by the DSMB
     1 to 42 days following any dose
     1 to 7 days following any dose

2. At the end of the study, the upper bound of the 95% CI estimate of the relative risk of intussusception must be <10


Safety Monitoring for Intussusception (IT)

IT Surveillance at Study Sites
  Active surveillance: contacts on days 7, 14, and 42
  Passive surveillance: parent education
  Intense surveillance during 6 weeks after each dose
    (Potential IT case)

Safety Endpoint Adjudication Committee
  Pediatric surgeon, radiologist, & emergency department specialists
  Use specific case definition
  Individual & collaborative adjudications
    (Positively adjudicated IT case)

Data and Safety Monitoring Board (DSMB)
  Unblind each case as it occurs and make recommendations about continuing
  Review all safety data every 6 months


Safety Monitoring for Intussusception

Trial utilizes two predefined stopping boundary graphs for the 1 to 7 and 1 to 42 day ranges after each dose

Stopping boundaries were developed to ensure that the trial will be stopped if there is an increased risk of intussusception within these day ranges

DSMB plots intussusception cases on graphs and makes recommendations about continuing the study

[Figure: Stopping boundary for the 1 to 42 day period after any dose, plotted as vaccine intussusception cases (0-16) against placebo intussusception cases (0-8). V260-6 PD 1-42, Feb. 26, 2003.]


Intussusception 42 days post-dose

[Figure: vaccine intussusception cases plotted against placebo intussusception cases (each 0-16), showing the unsafe boundary (lower bound of the 95% CI on relative risk > 1.0) and the acceptable safety profile region (upper bound of the 95% CI < 10).]


REST Group Sequential Study Design

Enroll subjects
  Monitor continuously for intussusception (IT)
  Stop the trial early if an increased risk of IT is detected

Evaluate statistical criteria with 60,000 subjects
  Primary hypothesis satisfied: Stop
  Data inconclusive: Enroll 10,000 more infants
    Monitor continuously and stop early if an increased risk of IT is detected
    Evaluate statistical criteria with 70,000 subjects


Comments on REST Study Design

The goal of the REST study design and the extensive safety monitoring was to provide:

i. High probability that a safe vaccine would meet the end of study criteria; and simultaneously

ii. High probability that a vaccine with increased intussusception risk would stop early due to ongoing safety monitoring

The statistical operating characteristics of REST were estimated using Monte Carlo simulation
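As a generic illustration of how such operating characteristics can be estimated by simulation (hypothetical boundary and rates throughout; this is not the REST monitoring algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_hit_boundary(rate_placebo, relative_risk, n_per_arm, margin, n_sims=2000):
    """Probability that cumulative vaccine cases ever exceed cumulative placebo cases
    by more than `margin` (a hypothetical continuous-monitoring 'unsafe' boundary)."""
    hits = 0
    for _ in range(n_sims):
        vacc = rng.random(n_per_arm) < rate_placebo * relative_risk
        plac = rng.random(n_per_arm) < rate_placebo
        excess = np.cumsum(vacc.astype(int) - plac.astype(int))
        if excess.max() > margin:
            hits += 1
    return hits / n_sims

# Hypothetical inputs: background case risk 0.05% per subject; safe (RR = 1) vs harmful (RR = 5)
print(prob_hit_boundary(0.0005, 1.0, 30_000, margin=5))
print(prob_hit_boundary(0.0005, 5.0, 30_000, margin=5))
```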


Statistical Operating Characteristics of REST*

Risk Scenario                        Probability of reaching        Probability of meeting
                                     unsafe monitoring boundary     end-of-study safety criteria
Safe Vaccine (RR=1)                  ~6%                            ~94%
RRV-TV Risk Profile**
  Case-control study                 ~91%                           ~9%
  Case-series study                  ~85%                           ~15%

* Assumes a background intussusception rate of 1/2000 infant-years and 102 days of safety follow-up over three doses.
** RRV-TV = rhesus rotavirus tetravalent vaccine; Murphy et al, New Engl J Med. 344 (2001): 564-572.


References (Blinded SSR)

Gould, Shih (1992) Comm in Stat (A), 21:2833-2853.
Gould (1992) Stat in Medicine, 11:55-66.
Shih (1993) Drug Information Journal, 27:761-764.
Shih, Gould (1995) Stat in Medicine, 14:2239-2248.
Shih, Zhao (1997) Stat in Medicine, 16:1913-1923.
Shih, Long (1998) Comm in Stat (A), 27:395-408.
Gould, Shih (1998) Stat in Medicine, 17:89-100.
Kieser, Friede (2000) Drug Info Journal, 34:455-460.
Friede, Kieser (2002) Stat in Medicine, 21:165-176.
Friede, Kieser (2003) Stat in Medicine, 22:995-1007.
Friede, Kieser (2006) Biometrical Journal, 48:537-555.
Wust, Kieser (2003) Biometrical J, 45:915-930.
Wust, Kieser (2005) Comp Stat & Data Analysis, 49:835-855.


References (Others)

Bauer, P. (1989) Biometrie und Informatik in Medizin und Biologie, 20:130-148.
Wittes, Brittain (1990) Stat in Medicine, 9:65-72.
Bristol (1993) J of Biopharm Stat, 3:159-166.
Herson, Wittes (1993) Drug Info Journal, 27:753-760.
Birkett, Day (1994) Stat in Medicine, 13:2455-2463.
Proschan, Hunsberger (1995) Biometrics, 51:1315-1324.
Shih, Zhao (1997) Stat in Medicine, 16:1913-1923.
Fisher (1998) Stat in Medicine, 17:1551-1562.
Cui, Hung, Wang (1999) Biometrics, 55:853-857.
Wittes et al (1999) Stat in Medicine, 18:3481-3491.
Zucker, Wittes et al (1999) Stat in Medicine, 18:3493-3509.
Lehmacher, Wassmer (1999) Biometrics, 55:1286-1290.
Denne, Jennison (1999) Stat in Medicine, 18:1575-1585.
Denne (2000) J of Biopharm Stat, 10:131-144.
Kieser, Friede (2000) Stat in Medicine, 19:901-911.
Shun, Yuan, Brady, Hsu (2001) Stat in Medicine, 20:497-513.


References (Others)

Muller, Schafer (2001) Biometrics, 57:886-891.
Coffey, Muller (2001) Biometrics, 57:625-631.
Mehta, Tsiatis (2001) Drug Info Journal, 35:1095-1112.
Liu, Chi (2001) Biometrics, 57:172-177.
Jennison, Turnbull (2003) Stat in Medicine, 22:971-993.
Jennison, Turnbull (2006) Biometrika, 93:1-21.
Jennison, Turnbull (2006) Stat in Medicine, 25:917-932.
Tsiatis, Mehta (2003) Biometrika, 90:367-378.
Schwartz, Denne (2003) Pharm Stat, 2:263-27.
Brannath, Konig, Bauer (2003) Biometrical J, 45:311-324.
Coburger, Wassmer (2003) Biometrical J, 45:812-825.
Posch, Bauer, Brannath (2003) Stat in Med, 22:953-969.
Friede, Kieser (2004) Pharm Stat, 3:269-279.
Cheng, Shen (2004) Biometrics, 60:910-918.
Jennison, Turnbull (2004) Technical Reports, U of Bath, UK.
Anderson et al (2004) JSM Proceedings, ASA.


References (Others)

Lokhnygina (2004) NC State statistics dissertation (with Anastasios Tsiatis as the advisor).
Sandvik, Erikssen, Mowinckel, Rodland (1996) Statistics in Medicine, 15:1587-90.
Denne and Jennison (2000) Biometrika, 87:125-134.
Anderson (2006) Biometrical Journal, to appear.


FDA/Industry Statistics Workshop Adaptive Designs Working Group

September 27, 2006, Washington, D.C.

Adaptive Designs: Logistic, Operational and Regulatory Issues

Paul Gallo
Biostatistics and Statistical Reporting

Novartis


Primary PhRMA references

PhRMA White Paper sections:

Quinlan JA and Krams M. Implementing adaptive designs: logistical and operational considerations. Drug Information Journal 2006; 40(4): 437-444.

Gallo P. Confidentiality and trial integrity issues for adaptive designs. Drug Information Journal 2006; 40(4): 445-450.


Outline

Motivations and opportunities

Cautions and challenges

Logistic and feasibility issues

Interim monitoring and confidentiality issues
  review of current conventions
  monitoring processes for adaptive designs
  information conveyed by adaptive designs


General motivation

The greater flexibility offered within the adaptive design framework has the potential to translate into more ethical treatment of patients within trials (possibly including the use of fewer patients), more efficient drug development, and better focusing of available resources.

The potential appeal of adaptive designs is understandable, and motivates the current high level of interest in this topic.


Cautions

But being too eager, and proceeding without all relevant issues being fully thought out, is not advisable either.

The question should be:

“What is the most appropriate (e.g., ethical, efficient) means at hand to address the research questions of importance?”

rather than:

“How can adaptive designs be integrated into our program at all costs?”


Challenges

Clearly, there will be many challenges to be addressed or overcome before adaptive designs become more widely utilized.

Statistical

Logistic

Procedural / regulatory


General considerations

Like any new technology with challenges, some resistance is to be expected.

Closer scrutiny is natural, and constructive.

But we should not let “the perfect be the enemy of the good”.

Can we address the challenges to a sufficient extent so that in particular situations the advantages outweigh the drawbacks?


Planning

Adaptive designs are not a substitute for poor planning, and in fact will generally require more planning.

They are part of a rational strategy to achieve research objectives more efficiently and ethically: by utilizing knowledge gained from the study in a manner which maintains the validity and interpretability of the results.


Feasibility issues

Endpoint follow-up time vs recruitment speed
  Shorter read-out time is generally favorable to adaptive designs.
  Surrogates / early predictors can have a role.

Timely data collection is important, as well as efficient analysis and decision-making processes. Electronic Data Capture should be helpful.


Data quality

All else being equal, cleaner is better
But the usual trade-off exists:
  cleaner takes longer, and results in less data being available for decisions
  lack of data is a source of noise also!
There is no requirement that data must be fully cleaned for adaptive designs.
Details of data quality requirements should be considered on a case-by-case basis.


Opportunities

Early-phase trials may in the short term be the most favorable arena for wider-scale implementation of adaptive designs.
  More uncertainties, and thus more opportunity for considering adaptation
  Lesser regulatory concerns
  Lower-risk opportunities to gain experience with ADs: to learn, solve operational problems, and set the stage for more important applications.


Simulation

Simulation will play an important role in the planning of adaptive trials.
  Detailed simulation scenarios should be of broad interest in evaluating adaptive design proposals (e.g., to health authorities).

Main simulation results may be included in the protocol or analysis plan.


Monitoring / confidentiality issues

Issues relating to

monitoring of accruing data

restriction of knowledge of interim results

and the processes of data review, decision-making and implementation

are likely to be critical in determining the extent and shaping the nature of adaptive design utilization in clinical trials.


Current monitoring conventions

Monitoring of accruing data is of course a common feature in clinical trials. Most frequently for:
  safety monitoring
  formal group sequential plans allowing stopping for efficacy
  lack of effect / futility judgments.

Current procedures and conventions governing monitoring are a sensible starting point for addressing similar issues in trials with adaptive designs.


Current monitoring conventions

As described in the FDA DMC guidance (2006):
  Comparative interim results and unblinded data should not be accessible to trial personnel, the sponsor, or investigators.

Access to interim results diminishes the ability of trial personnel to manage the trial in a manner which is (and which will be seen by interested parties to be) completely objective.


Current monitoring conventions

Knowledge of interim results could introduce subtle, unknown biases into the trial, perhaps causing slight changes in characteristics of patients recruited, administration of the intervention, endpoint assessments, etc.

Changes in “investigator enthusiasm”?

The equipoise argument: knowledge of interim results violates equipoise.


Current conventions - sponsor

FDA (2006): “Sponsor exposure to unblinded interim data . . . can present substantial risk to the integrity of the trial.”

Risks include lack of objectivity in trial management; further unblinding, even if inadvertent; SEC requirements and fiduciary responsibilities, etc.

Sponsor is thus usually not involved in monitoring of confirmatory trials.


Issues for adaptive designs - I

Adaptive designs will certainly require review of accruing data.
  Who will be involved in the analysis, review, and decision-making processes?
  Will operational models differ from those we’ve become familiar with?
  Will sponsor perspective and input be relevant or necessary for some types of adaptations?
  Will sponsors accept and trust decisions made confidentially by external DMCs in long-term trials / projects with important business implications (e.g., seamless Phase II / III)?


Issues for adaptive designs - II

An important distinction versus common monitoring situations: the results will be used to implement adaptation(s) which will govern some aspect of the conduct of the remainder of the trial.

Can observers, by viewing the actions taken, infer an amount of information about the results which might be perceived to rise to an unacceptable level?


Analysis / review / decision process

Concerns about confidentiality to ensure objective trial management, and potential bias from broad knowledge of interim results, should be no less relevant for adaptive designs than in other settings.

The key principles to adhere to would seem to be:
  separation / independence of the DMC from other trial activities
  limitation of knowledge about interim treatment effects.


Analysis / review / decision process

Adaptive design trials may utilize a single monitoring board for adaptations and other responsibilities (e.g., safety); or else a separate board may be considered for the adaptation decisions.

DMCs in adaptive design trials may require additional expertise not traditionally represented on DMCs; perhaps to monitor the adaptation algorithm, or to make the type of decision called for in the adaptation plan (e.g., dose selection).


Sponsor participation

Sponsor participation and knowledge of interim results in confirmatory adaptive trials may be a hard sell.

The “objective trial management” issue - sponsor can have some influence on trial management activities, even for individuals not directly participating in the trial.

There seems to be an assumption that information, once within the sponsor organization, may not be controlled, whether inadvertently or otherwise.


Sponsor participation

Proposal - There should be potential for sponsor involvement in certain types of decisions if:
  a strong rationale can be described whereby these individuals are needed for the best decision
  the individuals are not involved in trial operations
  all involved clearly understand the issues and risks to the trial, and adequate firewalls are in place
  sponsor exposure to results is “minimal” for the needed decision, i.e., only at the adaptation point, only the relevant data (e.g., unlike a DMC with whom they may be working, which may have a broader ongoing role).


Information apparent to observers

Adaptive designs may lead to changes in a trial which will be apparent to some extent - sample size, randomization allocation, population, dosage, treatment arm selection, etc. - and can thus be viewed as providing some information to observers about the results which led to those changes.

Considering the concerns which are the basis for the confidentiality conventions: can we distinguish between types and amounts of information, and how risky they would be in this regard?


Information apparent to observers

Note: conventional monitoring is not immune from this issue.

It has never been the case that no information can be inferred from monitoring; i.e., all monitoring has some potential action thresholds, and lack of action usually implies that such thresholds have not been reached.


Example – Triangular test

Design: Normal data, 2 group comparison

Study designed to detect Δ = 0.15

4 equally-spaced analyses will require about 2276 patients.


Example – Triangular test

[Figure: triangular test stopping boundaries plotted as Z against V (V from 0 to 800, Z from -40 to 80), showing the ‘Christmas tree’ boundary.]


Example – Triangular test

Δ̂ = Z / V

Continuation beyond the 3rd look would imply (barring over-ruling of the boundary) that the point estimate is between 0.076 and 0.106.

Doesn’t that convey quite a bit of information about the interim results?

In conventional GS design practice, this issue seems not to be perceived to compromise trials nor to discourage monitoring.


Information apparent to observers

Presumably, in GS practice it’s viewed that a reasonable balance is struck between the objectives and benefits of the monitoring and any slight potential for risk to the trial, with appropriate and feasible safeguards in place to minimize that risk.

The same type of standard should make sense for adaptive designs.


Information apparent to observers

In some cases we may have opportunities to lessen this concern by withholding certain details of the strategy from the protocol, and placing them in another document of more limited circulation.

For example, if some type of selection is to be made based upon predictive probabilities, do full details and thresholds need to be described in the protocol?


Information apparent to observers

A proposal: Selection decisions (choice of dose, subgroup, etc. for continuation) generally do NOT give away an amount of information that would be considered to compromise or influence the trial, as long as the specific numerical results on which the decisions were based remain confidential.


Information apparent to observers

Consider the alternative -

In a seamless Phase II / III design, we might instead have run a conventional separate-phase program.

Phase II results would be widely known (what about equipoise ??)

In this sense, maybe the adaptive design offers a further advantage relative to the traditional paradigm?


Algorithmic changes

More problematic - changes based in an algorithmic manner on interim treatment effect estimates in effect provide knowledge of those estimates to anyone who knows the algorithm and the change.

Most typical example - certain approaches to sample size re-estimation:

  SS_new = f(interim treatment effect estimate)
  => estimate = f⁻¹(SS_new)


Mitigating the concerns

Perhaps the adaptation can be made based upon a combination of factors in order to mask the observed treatment effect.

e.g., SS re-estimation using the treatment effect, the observed variance, and external information.

If possible, “discretize” the potential actions, i.e., a small number of potential actions correspond to ranges of the treatment effects.

We may at times try to quantify that the knowledge which can be inferred is comparable to that of accepted group sequential plans.
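A toy sketch (my own, with hypothetical numbers) of why a continuous, publicly known re-estimation rule can be inverted to recover the interim estimate, and how discretizing the rule into a few bands blunts that inference:

```python
import math
from scipy.stats import norm

SIGMA, ALPHA, POWER = 20.0, 0.05, 0.90
Z = norm.ppf(1 - ALPHA / 2) + norm.ppf(POWER)

def n_continuous(delta_hat):
    """Continuous SSR rule: per-arm n re-powered at the observed interim effect."""
    return math.ceil(2 * SIGMA ** 2 * Z ** 2 / delta_hat ** 2)

def delta_from_n(n_new):
    """Anyone who knows the rule can invert the announced sample size."""
    return math.sqrt(2 * SIGMA ** 2 * Z ** 2 / n_new)

def n_discretized(delta_hat):
    """Discretized rule: only three possible actions, each covering a range of effects."""
    if delta_hat >= 10:
        return 170      # keep the original plan
    elif delta_hat >= 7:
        return 350      # moderate increase
    else:
        return 500      # capped maximum increase

observed = 8.2                                  # hypothetical interim estimate
n_new = n_continuous(observed)
print(n_new, round(delta_from_n(n_new), 1))     # the announced n reveals an effect of about 8.2
print(n_discretized(observed))                  # 350 only reveals that 7 <= effect < 10
```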


Summary

Adaptive designs suggest real benefits for the clinical development process.

Achieving this promise will require full investigation and understanding of the relevant issues, trade-offs, and challenges.

Advantages should be considered in balance against any perceived risks or complexities.

This should be expected to require more planning, not less.

We can expect that adaptive designs will inevitably be scrutinized closely because of their novelty.


Summary

We should not aim to broadly undo established monitoring conventions, but rather to fine-tune them to achieve their sound underlying principles.

To justify sponsor participation in monitoring, provide convincing rationale and “minimize” this involvement, and enforce strict control of information.

Some types of adaptations convey limited information for which it seems difficult to envision how the trial might be compromised.

Others convey more information, but perhaps we can implement extra steps to mask this.


References

Committee for Medicinal Products for Human Use. Guideline on Data Monitoring Committees. London: EMEA; 2006.

Committee for Medicinal Products for Human Use. Reflection paper on methodological issues in confirmatory clinical trials with flexible design and analysis plans (draft). London: EMEA; 2006.

DeMets DL, Furberg CD, Friedman LM (eds.). Data Monitoring in Clinical Trials: A Case Studies Approach. Springer; 2006.

Ellenberg SE, Fleming TR, DeMets DL. Data Monitoring Committees in Clinical Trials: A Practical Perspective. Chichester: Wiley; 2002.

US Food and Drug Administration. Guidance for Clinical Trial Sponsors on the Establishment and Operation of Data Monitoring Committees. Rockville MD: FDA; 2006.