Considerations for the planning, conduct and reporting of clinical trials with interim analyses: code accompanying the paper
Author
Kaspar Rufibach, on behalf of the author team
Published
October 11, 2024
1 Background
Code for the paper Considerations for the planning, conduct and reporting of clinical trials with interim analyses, Asikanius et al. (2024), written by an author team of statisticians from regulators, academia, and industry.
2 Purpose of this document
This R Quarto file provides easily accessible code to compute all the quantities in the paper. The GitHub repository where this document is hosted is available here.
These are the total number of events needed without an interim analysis and the corresponding minimal detectable difference (MDD).
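As a cross-check, the MDD on the hazard ratio scale can be recomputed from the number of events via Schoenfeld's approximation. The following is a minimal sketch, not the (unshown) chunk producing the numbers above; it assumes 1:1 randomization and uses, for illustration, the 385 events and one-sided 2.5% level of the design summarized further below.

# Schoenfeld approximation: with d events and 1:1 randomization, the standard error of
# the log hazard ratio is roughly 2 / sqrt(d), so the largest hazard ratio that still
# reaches significance (the MDD) is exp(-qnorm(1 - alpha1) * 2 / sqrt(d)).
d      <- 385        # total number of events (illustrative, taken from the design below)
alpha1 <- 0.025      # one-sided significance level
exp(-qnorm(1 - alpha1) * 2 / sqrt(d))
# approximately 0.82, matching the efficacy boundary on the treatment effect scale below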
5 Add interim analyses
We start by adding an interim analysis using an O'Brien-Fleming (OBF) group-sequential boundary. For simplicity, we also add an efficacy interim with an OBF boundary at the first interim. Strictly speaking this would not be needed, but it makes no numerical difference. One could properly design the first interim as futility-only through a user-defined alpha-spending function in rpact; a sketch of this alternative follows after the power summary below.
time <- c(time, grid)

# power loss through the interims

# Futility HR and information at interim
futilityHR <- c(1, 0.9)
eventsInterim <- nevents[1:2]

# Calculate Z-score corresponding to futilityHR
# Note: Minus sign is added to be consistent with directionUpper = FALSE
# in getPowerSurvival call below.
allocationRatioPlanned <- 1

# approximate info according to Schoenfeld's formula
infoInterim <- eventsInterim * allocationRatioPlanned / (1 + allocationRatioPlanned) ^ 2
ZscoreInterim <- -log(futilityHR) / sqrt(1 / infoInterim)

designFutility <- getDesignGroupSequential(sided = 1, alpha = alpha / 2,
                                           informationRates = info,
                                           typeOfDesign = "noEarlyEfficacy",
                                           futilityBounds = ZscoreInterim,
                                           bindingFutility = FALSE)

# Calculate power and timeline under H1 with original sample size if design includes
# the non-binding futility (and the futility boundary is adhered to)
power <- getPowerSurvival(designFutility,
                          allocationRatioPlanned = allocationRatioPlanned,
                          lambda2 = log(2) / m1, hazardRatio = hr,
                          dropoutRate1 = dropoutRate1, dropoutRate2 = dropoutRate2,
                          dropoutTime = dropoutTime,
                          accrualTime = accrualTime, accrualIntensity = accrualIntensity,
                          maxNumberOfEvents = nevents[3], directionUpper = FALSE)
summary(power)
Power calculation for a survival endpoint
Sequential analysis with a maximum of 3 looks (group sequential design), overall significance level 2.5% (one-sided). The results were calculated for a two-sample logrank test, H0: hazard ratio = 1, power directed towards smaller values, H1: hazard ratio = 0.75, control lambda(2) = 0.01, maximum number of events = 385, accrual time = 12, accrual intensity = 100, dropout rate(1) = 0.025, dropout rate(2) = 0.025, dropout time = 12.
Stage                                          1        2        3
Planned information rate                   33.3%    66.7%     100%
Efficacy boundary (z-value scale)            Inf      Inf    1.960
Futility boundary (z-value scale)              0    0.845
Cumulative power                               0        0   0.7800
Number of subjects                        1200.0   1200.0   1200.0
Expected number of subjects under H1                        1200.0
Expected number of events                                    365.7
Cumulative number of events                128.3    256.7    385.0
Analysis time                              19.71    35.69    54.96
Expected study duration                                      52.24
Cumulative alpha spent                         0        0   0.0250
One-sided local significance level             0        0   0.0250
Efficacy boundary (t)                          0        0    0.819
Futility boundary (t)                      1.000    0.900
Overall exit probability (under H0)       0.5000   0.3207
Overall exit probability (under H1)       0.0516   0.0469
Exit probability for efficacy (under H0)       0        0
Exit probability for efficacy (under H1)       0        0
Exit probability for futility (under H0)  0.5000   0.3207
Exit probability for futility (under H1)  0.0516   0.0469

Legend:
(t): treatment effect scale
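As noted above, the first interim could instead be designed as futility-only via a user-defined alpha-spending function. The following is a minimal sketch of that alternative, not code from the paper: the information rates match the planned rates in the summary above, the futility bounds ZscoreInterim are taken from the chunk above, the one-sided level 0.025 corresponds to alpha / 2 there, and the cumulative spending value 0.006 at the second look is an illustrative approximation of O'Brien-Fleming-type spending at 2/3 information.

# Sketch only: spend no alpha at the first interim (futility-only) and an
# illustrative, approximately OBF-type amount at the second interim.
designFutilityOnly <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(1 / 3, 2 / 3, 1),
  typeOfDesign = "asUser",
  userAlphaSpending = c(0, 0.006, 0.025),
  futilityBounds = ZscoreInterim,
  bindingFutility = FALSE)
summary(designFutilityOnly)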
Finally, we compute the actual calendar dates of the analyses, relying on an assumed first-patient-in (FPI) date:
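A minimal sketch of this computation, since the corresponding chunk is not shown here: it assumes an FPI date of 2020-04-23 (the FPI date used for the simulated trial later in this document) and takes the analysis times in months from the power summary above; %m+% is lubridate's month-addition operator.

library(lubridate)

fpi <- as.Date("2020-04-23")              # assumed first-patient-in date
analysisTime <- c(19.71, 35.69, 54.96)    # analysis times in months, from summary(power) above

# convert the fractional part of the months to (approximate) days and add to FPI
d <- round(30 * (analysisTime - floor(analysisTime)))
fpi %m+% months(floor(analysisTime)) + days(d)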
To illustrate trial simulation, we first need to make a few helper functions available:
# --------------------------------------------------------------
# functions
# --------------------------------------------------------------

# Quantile function for a Weibull with a potential cure proportion.
qWeibullCure <- function(p, p0, shape = 1, scale){
  res <- rep(NA, length(p))
  ind1 <- (p <= p0)
  ind2 <- (p > p0)
  res[ind1] <- Inf
  res[ind2] <- qweibull(1 - (p[ind2] - p0) / (1 - p0), shape = shape, scale = scale)
  return(res)
}

# simulate Weibull distributed time-to-event data in one group,
# where administrative censoring is applied after a pre-specified
# number of events. A cure proportion and censoring due to drop-out
# can be specified in addition.
rWeibull1arm <- function(shape = 1, scale, cure = 0, recruit, dropout = 0,
                         start.accrual = 0, cutoff, seed = NA){

  # shape:         Weibull shape parameter.
  # scale:         Weibull scale parameter.
  # cure:          Proportion of patients assumed to be cured, i.e. with an
  #                event at +infty.
  # recruit:       Recruitment.
  # dropout:       Drop-out rate, on same time scale as med.
  # start.accrual: Time unit where accrual should start. Might be useful when
  #                simulating multi-stage trials.
  # cutoff:        Cutoff, #events the final censored data should have
  #                (can be a vector of multiple cutoffs).
  # seed:          If different from NA, seed used to generate random numbers.
  #
  # Kaspar Rufibach, June 2014

  if (is.na(seed) == FALSE){set.seed(seed)}
  n <- sum(recruit)

  # generate arrival times
  arrive <- rep(1:length(recruit), times = recruit)
  arrivetime <- NULL
  for (i in 1:n){arrivetime[i] <- runif(1, min = arrive[i] - 1, max = arrive[i])}
  arrivetime <- start.accrual + sort(arrivetime)

  # generate event times: Exp(lambda) = Weibull(shape = 1, scale = 1 / lambda)
  eventtime <- qWeibullCure(runif(n), p0 = cure, shape = shape, scale = scale)

  # Apply drop-out. Do this before applying the cutoff below,
  # in order to correctly count necessary #events.
  dropouttime <- rep(Inf, n)
  if (dropout > 0){dropouttime <- rexp(n, rate = dropout)}
  event.dropout <- ifelse(eventtime > dropouttime, 0, 1)
  time.dropout <- ifelse(event.dropout == 1, eventtime, dropouttime)

  # observed times, taking into account staggered entry
  tottime <- arrivetime + eventtime

  # find cutoff based on number of targeted events
  # only look among patients that are not considered dropped-out
  time <- data.frame(matrix(NA, ncol = length(cutoff), nrow = n))
  event <- time
  cutoff.time <- rep(NA, length(cutoff))

  for (j in 1:length(cutoff)){
    cutoff.time[j] <- sort(tottime[event.dropout == 1])[cutoff[j]]

    # apply administrative censoring at cutoff
    event[event.dropout == 1, j] <- ifelse(tottime[event.dropout == 1] > cutoff.time[j], 0, 1)
    event[event.dropout == 0, j] <- 0

    # define time to event, taking into account both types of censoring
    time[event.dropout == 1, j] <- ifelse(event[, j] == 1, eventtime,
                                          cutoff.time[j] - arrivetime)[event.dropout == 1]
    time[event.dropout == 0, j] <- pmin(cutoff.time[j] - arrivetime,
                                        time.dropout)[event.dropout == 0]

    # remove times for patients arriving after the cutoff
    rem <- (arrivetime > cutoff.time[j])
    if (TRUE %in% rem){time[rem, j] <- NA}
  }

  # generate output
  tab <- data.frame(cbind(1:n, arrivetime, eventtime, tottime, dropouttime, time, event))
  colnames(tab) <- c("pat", "arrivetime", "eventtime", "tottime", "dropouttime",
                     paste("time cutoff = ", cutoff, sep = ""),
                     paste("event cutoff = ", cutoff, sep = ""))
  res <- list("cutoff.time" = cutoff.time, "tab" = tab)
  return(res)
}

# simulate Weibull distributed time-to-event data in two arms,
# where administrative censoring is applied after a pre-specified
# number of events. A cure proportion and censoring due to drop-out
# can be specified in addition.
rWeibull2arm <- function(shape = c(1, 1), scale, cure = c(0, 0), recruit,
                         dropout = c(0, 0), start.accrual = c(0, 0), cutoff, seed = NA){

  # shape:         2-d vector of Weibull shape parameter.
  # scale:         2-d vector of Weibull scale parameter.
  # cure:          2-d vector with cure proportion assumed in each arm.
  # recruit:       List with two elements, vector of recruitment in each arm.
  # dropout:       2-d vector with drop-out rate for each arm,
  #                on same time scale as med.
  # start.accrual: 2-d vector of time when accrual should start.
  #                Might be useful when simulating multi-stage trials.
  # cutoff:        Cutoff, #events the final censored data should have
  #                (can be a vector of multiple cutoffs).
  # seed:          If different from NA, seed used to generate random numbers.
  #
  # Kaspar Rufibach, June 2014

  if (is.na(seed) == FALSE){set.seed(seed)}

  dat1 <- rWeibull1arm(scale = scale[1], shape = shape[1], recruit = recruit[[1]], cutoff = 1,
                       dropout = dropout[1], cure = cure[1],
                       start.accrual = start.accrual[1], seed = NA)$tab
  dat2 <- rWeibull1arm(scale = scale[2], shape = shape[2], recruit = recruit[[2]], cutoff = 1,
                       dropout = dropout[2], cure = cure[2],
                       start.accrual = start.accrual[2], seed = NA)$tab
  n <- c(nrow(dat1), nrow(dat2))

  # treatment variable
  tmt <- factor(c(rep(0, n[1]), rep(1, n[2])), levels = 0:1, labels = c("A", "B"))
  arrivetime <- c(dat1[, "arrivetime"], dat2[, "arrivetime"])
  eventtime <- c(dat1[, "eventtime"], dat2[, "eventtime"])
  tottime <- c(dat1[, "tottime"], dat2[, "tottime"])
  dropouttime <- c(dat1[, "dropouttime"], dat2[, "dropouttime"])

  # Apply drop-out. Do this before applying the cutoff below,
  # in order to correctly count necessary #events.
  event.dropout <- ifelse(eventtime > dropouttime, 0, 1)
  time.dropout <- ifelse(event.dropout == 1, eventtime, dropouttime)

  # find cutoff based on number of targeted events
  # only look among patients that are not considered dropped-out
  time <- data.frame(matrix(NA, ncol = length(cutoff), nrow = sum(n)))
  event <- time
  cutoff.time <- rep(NA, length(cutoff))

  for (j in 1:length(cutoff)){
    cutoff.time[j] <- sort(tottime[event.dropout == 1])[cutoff[j]]

    # apply administrative censoring at cutoff
    event[event.dropout == 1, j] <- ifelse(tottime[event.dropout == 1] > cutoff.time[j], 0, 1)
    event[event.dropout == 0, j] <- 0

    # define time to event, taking into account both types of censoring
    time[event.dropout == 1, j] <- ifelse(event[, j] == 1, eventtime,
                                          cutoff.time[j] - arrivetime)[event.dropout == 1]
    time[event.dropout == 0, j] <- pmin(cutoff.time[j] - arrivetime,
                                        time.dropout)[event.dropout == 0]

    # remove times for patients arriving after the cutoff
    rem <- (arrivetime > cutoff.time[j])
    if (TRUE %in% rem){time[rem, j] <- NA}
  }

  # generate output
  tab <- data.frame(cbind(1:sum(n), tmt, arrivetime, eventtime, tottime, dropouttime, time, event))
  colnames(tab) <- c("pat", "tmt", "arrivetime", "eventtime", "tottime", "dropouttime",
                     paste("time cutoff = ", cutoff, sep = ""),
                     paste("event cutoff = ", cutoff, sep = ""))
  res <- list("cutoff.time" = cutoff.time, "tab" = tab)
  return(res)
}
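For illustration, a hypothetical call to rWeibull2arm (the parameter values below are made up for this example and are not those used in the paper): simulate a 1:1 trial with 600 patients per arm recruited over 12 months at 50 patients per month, exponential event times with medians 12 (arm A) and 16 (arm B), no cure fraction or drop-out, and a data cutoff after 385 events.

# Illustrative simulation only; medians are converted to Weibull scales via
# scale = median / log(2) for shape = 1 (exponential).
recruit <- list(rep(50, 12), rep(50, 12))   # monthly recruitment per arm
sim <- rWeibull2arm(shape = c(1, 1),
                    scale = c(12, 16) / log(2),
                    recruit = recruit,
                    dropout = c(0, 0),
                    cutoff = 385,
                    seed = 2024)
sim$cutoff.time      # calendar time at which the 385th event occurs
head(sim$tab)        # per-patient simulated data at that cutoff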
Now we generate trial data to reproduce the numbers in Section 6 of the paper, using appropriate seeds:
# median unbiased estimate, assuming the futility interim had happened
# at the prespecified number of events with an observed HR of 0.9
results2 <- getDataset(overallEvents = c(nevents[1], cut2),
                       overallLogRanks = c(log(0.9) / sqrt(4 / nevents[1]), coef(cox2)[1, "z"]),
                       overallAllocationRatio = c(1, 1))
adj_result2 <- getAnalysisResults(design = designUpdate2,
                                  dataInput = results2,
                                  stage = 2, directionUpper = FALSE)
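The resulting adjusted inference, including the median unbiased estimate referred to in the comment above, can be printed with rpact's generic summary method; this line is an inspection step, not part of the paper's code:

summary(adj_result2)   # stage-wise adjusted inference from getAnalysisResults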
d <- round(30 * (trial2$cutoff.time - floor(trial2$cutoff.time)))
c(fpi, fpi %m+% months(floor(trial2$cutoff.time)) + days(d))
[1] "2020-04-23" "2023-03-15"
7 References
Asikanius, Elina, Benjamin Hofner, Lisa V. Hampson, Gernot Wassmer, Christopher Jennison, Tobias Mielke, Cornelia Ursula Kunz, and Kaspar Rufibach. 2024. “Considerations for the Planning, Conduct and Reporting of Clinical Trials with Interim Analyses.” https://arxiv.org/abs/2410.01478.