where $\theta = -\log(p)$, $\lambda > 0$ is a constant hazard rate and $t \ge 0$.
The component $\exp(-\lambda t)$ is an exponential survival distribution; while it could be replaced with an arbitrary survival distribution on $t>0$ for the mixture model, the exponential model is simple, adequately flexible and easy to explain.
This two-parameter model can be specified by the cure rate and the assumed survival rate $S_c(t_1)$ at some time $0 < t_1 < \infty$.
- For this study, the control group cure rate is assumed to be `r cure_rate` and the survival at `r time_unit` is assumed to be `r s1`.
+ For this study, the control group cure rates for the 3 scenarios are assumed to be `r cure_rate`, and the survival rates at `r time_unit` are assumed to be `r s1`, respectively.
We can solve for $\theta$ and $\lambda$ as follows:
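A minimal sketch of one way to do this, assuming the model takes the form $S(t) = \exp(-\theta(1 - e^{-\lambda t}))$ (our reading of the specification above, since $\theta = -\log(p)$ then gives $S(\infty) = p$), with hypothetical stand-ins for `cure_rate`, `s1`, and `time_unit`; setting $S(t_1) = s_1$ and solving yields $\lambda = -\log(1 - \log(s_1)/\log(p))/t_1$:

```{r}
# Sketch only: hypothetical stand-ins for the document's cure_rate, s1, time_unit
p <- 0.5  # assumed control group cure rate
s1 <- 0.7 # assumed survival at time t1
t1 <- 24  # t1 in months
theta <- -log(p)
# From S(t1) = exp(-theta * (1 - exp(-lambda * t1))) = s1:
lambda <- -log(1 - log(s1) / log(p)) / t1
c(theta = theta, lambda = lambda)
```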
@@ -159,7 +159,7 @@ ggplot(survival, aes(x = Time, y = Survival, lty = Treatment, col = Scenario)) +
theme(legend.position = "bottom")
```
- We also evaluate the failure rate over time for scenario 1, which is later used in the design derivation.
+ We also evaluate the failure rate over time for scenario 1, which is used below in the design derivation.
Note that the piecewise intervals used to approximate changing hazard rates can be made arbitrarily small to get more precise approximations of the above.
However, given the uncertainty of the underlying assumptions, it is not clear that this provides any advantage.
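To illustrate this trade-off concretely, a short sketch (hypothetical parameter values throughout) computes the piecewise-constant average hazard over intervals of a chosen width; shrinking the width refines the approximation:

```{r}
# Sketch: piecewise-constant hazard approximation of the cure model;
# p and lambda are hypothetical stand-ins for the scenario 1 parameters
p <- 0.5
lambda <- 0.03
S <- function(t) exp(log(p) * (1 - exp(-lambda * t)))
cut_points <- seq(0, 48, by = 6) # months; reduce `by` for a finer approximation
# Average hazard on (t[i], t[i+1]] is -(log S(t[i+1]) - log S(t[i])) / width
data.frame(
  start = head(cut_points, -1),
  end = tail(cut_points, -1),
  avg_hazard = -diff(log(S(cut_points))) / diff(cut_points)
)
```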
@@ -192,13 +192,13 @@ ggplot() +
## Event Accumulation
- Based on the above model, we predict how events will accumulate for the control group, experimental group under the alternate hypothesis and overall based on either the null hypothesis of no failure rate difference or the alternate hypothesis where events accrue more slowly in the experimental group.
+ Based on the above model, we predict how events will accumulate under either the null hypothesis of no failure rate difference or the alternate hypothesis where events accrue more slowly in the experimental group.
We do this by scenario.
We use as a denominator the final planned events under the alternate hypothesis for scenario 1.
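As a sketch of the computation involved (not the document's own code), the expected event count by calendar time $T$ under uniform enrollment is $\int_0^{\min(T, A)} r\,\{1 - S(T - u)\}\,du$, where $r$ is the enrollment rate, $A$ the enrollment duration, and $S$ the control-arm survival function; all parameter values below are hypothetical and dropout is ignored:

```{r}
# Sketch: expected control-arm events by calendar time under uniform enrollment
# of n subjects over A months and the cure-model survival assumed above
expected_events_by <- function(time, n = 500, A = 24, p = 0.5, lambda = 0.03) {
  S <- function(t) exp(log(p) * (1 - exp(-lambda * t)))
  rate <- n / A # constant enrollment rate per month
  f <- function(u) rate * (1 - S(pmax(time - u, 0))) # event probability for entry at u
  integrate(f, lower = 0, upper = min(time, A))$value
}
expected_events_by(time = 48)
```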
Now we compare event accrual under the null and alternate hypothesis for each scenario, with 100% representing the targeted final events under scenario 1.
The user should not have to update the code here.
- For the 3 scenarios studied, event accrual is quite different, creating difference spending issues.
+ For the 3 scenarios studied, event accrual is quite different, creating different spending issues.
As planned, the expected targeted event fraction reaches 1 for Scenario 1 at 48 months under the alternate hypothesis.
Under the null hypothesis for this scenario, expected targeted events are reached at approximately 36 months.
For Scenario 2 the expectation is that targeted events will be achieved in less than 24 months under both the null and alternative hypotheses.
@@ -262,16 +262,17 @@ ggplot(event_accrual, aes(x = Time, y = EF, color = Scenario, lty = Hypothesis))
We choose calendar-based timing for analyses as well as for spending.
This is not done automatically by the `gsSurv()` function, but is done using the `gsSurvCalendar()` function.
- There are two things `gsSurvCalendar()` takes care of::
+ There are two things `gsSurvCalendar()` takes care of:
- How to get the information fraction levels corresponding to targeted calendar analysis times to plug into the planned design.
- Replacing information fraction levels with calendar fraction levels for $\alpha$- and $\beta$-spending.
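To make the distinction concrete, a small sketch (with hypothetical expected event counts) contrasts the two time scales used for spending:

```{r}
# Sketch: information vs. calendar fractions at each analysis; event counts are
# hypothetical placeholders, not the design's expected events
analysis_time <- c(14, 24, 36, 48)       # calendar time of each analysis (months)
expected_events <- c(110, 300, 460, 565) # hypothetical expected events
data.frame(
  analysis = seq_along(analysis_time),
  information_fraction = expected_events / max(expected_events),
  calendar_fraction = analysis_time / max(analysis_time)
)
```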
We begin by specifying calendar times of analysis and find corresponding fractions of final planned events and calendar time under design assumptions.
+ Having the first interim at 14 months rather than 12 was selected to get the expected events well above 100.
```{r}
# Calendar time from start of randomization until each analysis time
- calendarTime <- c(12, 24, 36, 48)
+ calendarTime <- c(14, 24, 36, 48)
```
Now we move on to other design assumptions.
@@ -292,8 +293,8 @@ test.type <- 6
# 1-sided Type I error used for safety (for asymmetric 2-sided design)
astar <- .2
# Spending functions (sfu, sfl) and parameters (sfupar, sflpar)
- sfu <- sfHSD # O'Brien-Fleming approximation by Lan and DeMets
- sfupar <- -4 # Not needed for sfLDPocock
+ sfu <- sfHSD # Hwang-Shih-DeCani spending function
+ sfupar <- -3 # Spends somewhat more alpha early than -4 (O'Brien-Fleming-like)
sfl <- sfLDPocock # Near-equal Z-values for each analysis
sflpar <- NULL # Not needed for Pocock spending
# Dropout rate (exponential parameter per unit of time)
@@ -304,18 +305,16 @@ ratio <- 1
## Study Design and Event Accumulation
- We now assume a trial is enrolled with a constant enrollment rate over `r enroll_duration` months trial duration of `r study_duration`.
+ We now assume a trial enrolled at a constant rate over `r enroll_duration[1]` months with a total trial duration of `r study_duration[1]` months.
As noted above, the event accumulation pattern is highly sensitive to the assumptions of the design.
That is, deviations from planned accrual, from the overall hazard ratio or its pattern over time, or relatively minor deviations from the cure model assumption could substantially change the calendar timing of event-based analyses.
Thus, calendar-based timing and spending (@LanDeMets1989) may have some appeal to make the timing of analyses more predictable.
The main risk to this would likely be under-accumulation of the final targeted events for the trial.
The targeted 4-year window may be considered clinically important as well as an important limitation for trial duration.
We use the above predicted information fractions at 14, 24, 36, and 48 months to plan a calendar-based design.
- We use the arguments `usTime` and `lsTime` to change to calendar-based spending for the upper and lower bounds, respectively.
- However, the pattern of slowing event accumulation over time after year 1 seems reasonably likely to persist.
- This means that calendar-based spending is likely to give more conservative bounds since the calendar fractions are lower than the information fractions in the text overlay of the plot after the first interim: 10%, 20%, 40%, 60%, 80% and 100%, respectively.
+ Calendar-based spending is likely to give more conservative interim bounds since the calendar fractions are lower than the information fractions in the text overlay of the plot after the first interim: 10%, 20%, 40%, 60%, 80% and 100%, respectively.
- We now use the information fractions from the text overlay to set up a calendar-based design.
+ We now set up a calendar-based design.
```{r}
design_calendar <-
@@ -352,9 +351,8 @@ design_calendar %>%
There are a few things to note for the above design:
- The futility bounds are advisory only. In particular, the late futility bounds may be ignored since the follow-up for the full time period may merit continuing the trial.
+ - The efficacy spending function should be carefully considered to ensure the evidence required to cross any bound is likely to justify early stopping with a definitive demonstration of benefit.
- Substantial deviations in event accumulation would not change timing of analyses from their calendar times. This should be considered for acceptability at the time of design.
- - The first efficacy bound is so extreme that it essentially makes the first analysis futility only. This is likely reasonable based on the minimal follow-up at that time.
- - The Z-values and hazard ratios required for a positive efficacy finding are not terribly extreme starting from the two-year follow-up analysis. This is also probably reasonable as long as the hazard ratio differences at the bounds are clinically meaningful. A more extreme finding at year 1 may likely be required for a positive finding.
- The trial may be continued after crossing an efficacy bound for further follow-up as it is unlikely that control patients doing well would cross over to experimental therapy in the absence of adverse clinical outcomes. Inference at subsequent analyses using repeated p-values (@JTBook) or sequential p-values (@LiuAnderson2008) is well-specified and interpretable as adjusted p-values.