This vignette provides an introduction to the stan_surv modelling function in the rstanarm package. The stan_surv function allows the user to fit survival models (sometimes known as time-to-event models) under a Bayesian framework.
Survival (a.k.a. time-to-event) analysis is generally concerned with the time from some defined baseline (e.g. diagnosis of a disease) until an event of interest occurs (e.g. death or disease progression).
In standard survival analysis, one event time is measured for each observational unit. In practice however that event time may be unobserved due to left, right, or interval censoring, in which case the event time is only known to have occurred within the relevant censoring interval.
There are two common approaches to modelling survival data. The first is to model the rate of the event (known as the hazard) as a function of time – the class of models known as proportional and non-proportional hazards regression models. The second is to model the event time directly – the class of models known as accelerated failure time (AFT) models. In addition, a number of extensions to standard survival analysis have been proposed. These include the handling of multiple (recurrent) events, competing events, clustered survival data, cure models, and more.
The intention is for the stan_surv modelling function in the rstanarm package to provide functionality for fitting a wide range of Bayesian survival models. The current implementation allows for standard parametric (exponential, Weibull, and Gompertz) and flexible parametric (spline-based) hazard models, as well as standard parametric accelerated failure time (AFT) models.
Under each of those model formulations the following are allowed: left, right, or interval censoring; delayed entry (left truncation); time-varying covariate effects; and hierarchical (frailty) terms.
We assume that a true event time, denoted $T_i^*$, exists for each individual $i$ ($i = 1, \dots, N$), but that in practice it may or may not be observed due to left, right, or interval censoring, in which case the event time is only known to have occurred within the relevant censoring interval. Therefore, in practice we observe, for each individual, an event or censoring time (or, under interval censoring, the lower and upper limits of the censoring interval) together with an event indicator that identifies whether the individual's event time was observed or whether it was left, right, or interval censored.
The hazard of the event at time $t$ is the instantaneous rate of occurrence for the event at time $t$. Mathematically, it is defined as:
$$h_i(t) = \lim_{\Delta t \to 0} \frac{P(t \leq T_i^* < t + \Delta t \mid T_i^* \geq t)}{\Delta t}$$
where $\Delta t$ is the width of some small time interval.
The cumulative hazard is defined as:
$$H_i(t) = \int_0^t h_i(s) \, ds$$
and the survival probability is defined as:
$$S_i(t) = \exp\big(-H_i(t)\big) = \exp\left(-\int_0^t h_i(s) \, ds\right)$$
It can be seen here that in the standard survival analysis setting there is a one-to-one relationship between each of the hazard, the cumulative hazard, and the survival probability. These quantities are also used to form the likelihood for the survival model described in the later sections.
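As a minimal numerical illustration of these relationships (plain base R, not part of rstanarm; a Weibull distribution is used purely as an example):

# hazard, cumulative hazard, and survival for a Weibull(shape, scale) distribution
t     <- seq(0.01, 5, by = 0.01)
shape <- 1.5
scale <- 2
haz    <- (shape / scale) * (t / scale)^(shape - 1)  # h(t)
cumhaz <- (t / scale)^shape                          # H(t) = integral of h(s) from 0 to t
surv   <- exp(-cumhaz)                               # S(t) = exp(-H(t))
# check against the built-in distribution function: S(t) = 1 - F(t)
all.equal(surv, 1 - pweibull(t, shape = shape, scale = scale))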
When basehaz is set equal to "exp", "weibull", "gompertz", "ms" (the default), or "bs" then the model is defined on the hazard scale as described by the following parameterisations.
We model the hazard of the event for individual $i$ at time $t$ using the regression model:
$$h_i(t) = h_0(t) \exp\big(\eta_i(t)\big)$$
where $h_0(t)$ denotes the baseline hazard and $\eta_i(t)$ denotes the linear predictor for individual $i$.
Our linear predictor is defined as:
$$\eta_i(t) = \beta_0 + \sum_{p=1}^{P} \beta_p(t) \, x_{ip}$$
where $x_{ip}$ denotes the $p$-th covariate for individual $i$ and $\beta_p(t)$ its (possibly time-varying) coefficient.
The quantity $\exp(\beta_p)$ is referred to as a "hazard ratio". The hazard ratio (HR) quantifies the relative increase in the hazard that is associated with a unit-increase in the relevant covariate, $x_{ip}$; e.g. a hazard ratio of 2 means that a unit-increase in the covariate leads to a doubling in the hazard (i.e. the instantaneous rate) of the event. The hazard ratio can be treated as a time-fixed quantity (i.e. proportional hazards) or a time-varying quantity (i.e. non-proportional hazards), as described in later sections.
Exponential model (basehaz = "exp"): for scale parameter $\lambda_i = \exp(\eta_i(t))$ we have:
$$h_i(t) = \lambda_i$$
Weibull model (basehaz = "weibull"): for scale parameter $\lambda_i = \exp(\eta_i(t))$ and shape parameter $\gamma > 0$ we have:
$$h_i(t) = \gamma t^{\gamma - 1} \lambda_i$$
Gompertz model (basehaz = "gompertz"): for shape parameter $\lambda_i = \exp(\eta_i(t))$ and scale parameter $\gamma > 0$ we have:
$$h_i(t) = \exp(\gamma t) \, \lambda_i$$
M-splines model (basehaz = "ms", the default): letting $M(t; \boldsymbol{\gamma}, \boldsymbol{k}, \delta)$ denote a degree $\delta$ M-spline function (Ramsay 1988) with basis evaluated at a vector of knot locations $\boldsymbol{k} = \{k_1, \dots, k_J\}$ and parameter vector $\boldsymbol{\gamma}$, we have:
$$h_i(t) = M(t; \boldsymbol{\gamma}, \boldsymbol{k}, \delta) \exp\big(\eta_i(t)\big)$$
The M-spline basis is evaluated using the splines2 package (Wang and Yan 2018). The default degree and knot locations can be changed via the basehaz_ops argument. It is worthwhile noting that $\delta = 0$ would correspond to a piecewise constant baseline hazard.
B-splines model (basehaz = "bs", for the log baseline hazard): letting $B(t; \boldsymbol{\gamma}, \boldsymbol{k}, \delta)$ denote a degree $\delta$ B-spline function with basis evaluated at a vector of knot locations $\boldsymbol{k}$ and parameter vector $\boldsymbol{\gamma}$, we have:
$$\log h_i(t) = B(t; \boldsymbol{\gamma}, \boldsymbol{k}, \delta) + \eta_i(t)$$
Note: When the linear predictor is not time-varying (i.e. under proportional hazards) there is a closed form expression for the survival probability (except for the B-splines model); details shown in the appendix. However, when the linear predictor is time-varying (i.e. under non-proportional hazards) there is no closed form expression for the survival probability; instead, quadrature is used to evaluate the survival probability for inclusion in the likelihood. Extended details on the parameterisations are given in the appendix.
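For example, the baseline hazard is chosen via the basehaz argument when fitting the model (a sketch using the bcancer data introduced later in this vignette; the df value shown for basehaz_ops is just an illustrative choice):

m_weib <- stan_surv(Surv(recyrs, status) ~ group, data = bcancer, basehaz = "weibull")
m_ms   <- stan_surv(Surv(recyrs, status) ~ group, data = bcancer,
                    basehaz = "ms", basehaz_ops = list(df = 10))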
When basehaz is set equal to "exp-aft" or "weibull-aft" then the model is defined on the accelerated failure time scale as described by the following parameterisations.
Following Hougaard (1999), we model the survival probability for individual $i$ at time $t$ using the regression model:
$$S_i(t) = S_0 \left( \int_0^t \exp\big(-\eta_i(u)\big) \, du \right)$$
where $S_0(t)$ denotes the baseline survival probability and $\eta_i(t)$ denotes the linear predictor for individual $i$.
Our linear predictor is defined as:
$$\eta_i(t) = \beta_0^* + \sum_{p=1}^{P} \beta_p^*(t) \, x_{ip}$$
The quantity $\exp(-\beta_p^*)$ is referred to as an "acceleration factor" and the quantity $\exp(\beta_p^*)$ is referred to as a "survival time ratio". The acceleration factor (AF) quantifies the acceleration (or deceleration) of the event process that is associated with a unit-increase in the relevant covariate, $x_{ip}$; e.g. an acceleration factor of 0.5 means that a unit-increase in the covariate leads to an individual approaching the event at half the speed. If you find that somewhat confusing, then it may be easier to think about the survival time ratio. The survival time ratio (STR) is interpreted as the increase (or decrease) in the expected survival time that is associated with a unit-increase in the relevant covariate, $x_{ip}$; e.g. a survival time ratio of 2 (which is equivalent to an acceleration factor of 0.5) means that a unit-increase in the covariate leads to a doubling in the expected survival time. The survival time ratio is equal to the inverse of the acceleration factor (i.e. STR = 1 / AF).
Exponential model (basehaz = "exp-aft"): when the linear predictor is time-varying we have:
$$S_i(t) = \exp \left( - \int_0^t \exp\big(-\eta_i(u)\big) \, du \right)$$
Weibull model (basehaz = "weibull-aft"): when the linear predictor is time-varying we have, for shape parameter $\gamma > 0$:
$$S_i(t) = \exp \left( - \left[ \int_0^t \exp\big(-\eta_i(u)\big) \, du \right]^{\gamma} \right)$$
Note: When the linear predictor is not time-varying (i.e. under time-fixed acceleration), there is a closed form expression for both the hazard function and survival function; details shown in the appendix. However, when the linear predictor is time-varying (i.e. under time-varying acceleration) there is no closed form expression for the hazard function or survival probability; instead, quadrature is used to evaluate the cumulative acceleration factor, which in turn is used to evaluate the hazard function and survival probability for inclusion in the likelihood. Extended details on the parameterisations are given in the appendix.
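For example, fitting on the accelerated failure time scale simply requires choosing one of the AFT baseline hazards (a sketch, again using the bcancer data introduced later in this vignette):

m_aft <- stan_surv(Surv(recyrs, status) ~ group, data = bcancer, basehaz = "weibull-aft")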
The coefficient $\beta_p$ (i.e. the log hazard ratio) or $\beta_p^*$ (i.e. the log survival time ratio) can be treated as a time-fixed quantity (e.g. $\beta_p(t) = \beta_p$) or as a time-varying quantity. We refer to the latter as time-varying effects because the effect of the covariate is allowed to change as a function of time. In stan_surv, time-varying effects are specified by using the tve function in the model formula. Note that in the following definitions we only refer to $\beta_p(t)$ (i.e. the log hazard ratio) but the same methodology applies to $\beta_p^*(t)$ (i.e. the log survival time ratio).
Without time-varying effects we have:
$$\beta_p(t) = \beta_{p,0}$$
that is, a time-fixed coefficient.
With time-varying effects modelled using B-splines we have:
$$\beta_p(t) = \beta_{p,0} + \sum_{m=1}^{M} \theta_{p,m} B_m(t; \boldsymbol{k}, \delta)$$
where $B_m(t; \boldsymbol{k}, \delta)$ is the $m$-th basis term for a degree $\delta$ B-spline evaluated at a vector of knot locations $\boldsymbol{k}$ and $\theta_{p,m}$ is the $m$-th B-spline coefficient. The degrees of freedom is equal to the number of additional parameters required to estimate a time-varying coefficient relative to a time-fixed coefficient. When a B-spline function is used to model the time-varying coefficient the degrees of freedom are $M = J + \delta - 2$ where $J$ is the total number of knots (including boundary knots).
With time-varying effects modelled using a piecewise constant function we have:
$$\beta_p(t) = \beta_{p,0} + \sum_{m=1}^{M} \theta_{p,m} I(t > k_{m+1})$$
where $I(\cdot)$ is the indicator function, $\beta_{p,0}$ is the coefficient in the first time interval, and $\theta_{p,m}$ is the deviation in the coefficient in the $(m+1)$-th time interval relative to the first. The degrees of freedom is equal to the number of additional parameters required to estimate a time-varying coefficient relative to a time-fixed coefficient. When a piecewise constant function is used to model the time-varying coefficient the degrees of freedom are $M = J - 2$ where $J$ is the total number of knots (including boundary knots).
Default knot locations: The vector of knot locations includes a lower boundary knot at the earliest entry time (equal to zero if there isn't delayed entry) and an upper boundary knot at the latest event or censoring time. The boundary knots cannot be changed by the user. Internal knot locations can be explicitly specified by the user (see the knots argument to the tve function) or are determined by default. The default is to place the internal knots at equally spaced percentiles of the distribution of uncensored event times. When a B-spline function is specified, the tve function uses default values of 3 degrees of freedom and degree 3 (cubic splines), which in fact corresponds to a cubic B-spline function with no internal knots. When a piecewise constant function is specified, the tve function uses a default of 3 degrees of freedom, which corresponds to internal knots at the 25th, 50th, and 75th percentiles of the distribution of the uncensored event times.
Note on subscripts: We have dropped the subscript $p$ from the knot locations and degree of the B-splines discussed above. This is just for simplicity of notation. In fact, if a model has time-varying effects estimated for more than one covariate, then each of these can be modelled using different knot locations and/or degree if the user desires.
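To make the different specifications concrete, here are some illustrative model formulas (the covariate name trt is just an example; the df, degree, type, and knots argument names follow the usage described above and demonstrated later in this vignette):

Surv(eventtime, status) ~ tve(trt)                              # B-spline, defaults (df = 3, cubic)
Surv(eventtime, status) ~ tve(trt, df = 4, degree = 2)          # quadratic B-spline with df = 4
Surv(eventtime, status) ~ tve(trt, type = "pw", knots = c(2.5)) # piecewise constant, one internal knot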
Allowing for the three forms of censoring and potential delayed entry (i.e. left truncation), the likelihood for the survival model takes the following form: an individual with an observed event time contributes the hazard multiplied by the survival probability at the event time (i.e. the density); a right censored individual contributes the survival probability at the censoring time; a left censored individual contributes one minus the survival probability at the censoring time; and an interval censored individual contributes the difference between the survival probabilities at the lower and upper limits of the censoring interval. Under delayed entry, each individual's contribution is further divided by the survival probability at their entry time.
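As a rough sketch of these likelihood contributions (plain R, not rstanarm internals; a Weibull model with only observed events and right censoring, and no delayed entry, is assumed for simplicity):

loglik_contrib <- function(time, status, shape, scale) {
  haz  <- (shape / scale) * (time / scale)^(shape - 1)   # hazard at the observed time
  surv <- exp(-(time / scale)^shape)                     # survival probability at the observed time
  ifelse(status == 1,
         log(haz) + log(surv),   # observed event: contributes the density h(t) * S(t)
         log(surv))              # right censored: contributes S(t)
}
loglik_contrib(time = c(2.1, 3.4), status = c(1, 0), shape = 1.5, scale = 2)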
The prior distribution for the so-called "auxiliary" parameters (i.e. the shape parameter for the Weibull model, the scale parameter for the Gompertz model, or the spline coefficients for the M-spline and B-spline models) is specified via the prior_aux argument to stan_surv. The available choices of prior distribution are described in greater detail in the stan_surv and priors help files.
The prior distribution for the intercept parameter in the linear predictor is specified via the prior_intercept argument to stan_surv. Choices include the normal, t, or Cauchy distributions. The default is a normal distribution with mean zero and scale 20. Note that – internally (but not in the reported parameter estimates) – the prior is placed on the intercept after centering the predictors at their sample means and after applying a constant shift of $\log(E / T)$, where $E$ is the total number of events and $T$ is the total follow up time. For example, a prior specified by the user as prior_intercept = normal(0,20) is in fact not centered on an intercept of zero when all predictors are at their sample means, but rather, it is centered on the log crude event rate when all predictors are at their sample means. This is intended to help with numerical stability and sampling, but does not impact the reported estimates (i.e. the intercept is back-transformed before being returned to the user).
The choice of prior distribution for the time-fixed coefficients ($\beta_p$) is specified via the prior argument to stan_surv. This can be any of the standard prior distributions allowed for regression coefficients in the rstanarm package; see the priors vignette and the stan_surv help file for details.
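For example, priors for the coefficients and the intercept might be specified as follows (a sketch with illustrative scale values only; the defaults are used whenever these arguments are omitted):

m_priors <- stan_surv(Surv(recyrs, status) ~ group,
                      data            = bcancer,
                      basehaz         = "weibull",
                      prior           = normal(0, 2.5),
                      prior_intercept = normal(0, 20))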
The additional coefficients required for estimating time-varying effects (i.e. the B-spline coefficients or the interval-specific deviations in the piecewise constant function) are given a random walk prior of the form $\theta_{p,1} \sim N(0, 1)$ and $\theta_{p,m} \sim N(\theta_{p,m-1}, \tau_p)$ for $m = 2, \dots, M$, where $M$ is the total number of cubic B-spline basis terms. The prior distribution for the hyperparameter $\tau_p$ is specified via the prior_smooth argument to stan_surv. Lower values of $\tau_p$ lead to a less flexible (i.e. smoother) function. Choices of prior distribution for the hyperparameter $\tau_p$ include an exponential, half-normal, half-t, or half-Cauchy distribution, and these are detailed in the stan_surv help file.
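For example (illustrative only, using the simulated dat data introduced later in this vignette; the exponential prior is one of the choices listed above and the rate value is arbitrary):

m_tve <- stan_surv(Surv(eventtime, status) ~ tve(trt),
                   data         = dat,
                   prior_smooth = exponential(rate = 5))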
We will use the German Breast Cancer Study Group dataset (see ?rstanarm-datasets for details and references). In brief, the data consist of patients with primary node positive breast cancer recruited between 1984 and 1989. The primary response is time to recurrence or death. Median follow-up time was 1084 days. Overall, there were 299 (44%) events and the remaining 387 (56%) individuals were right censored. Our analysis here concerns a 3-category baseline covariate for cancer prognosis (good/medium/poor).
First, let us load the data and fit the proportional hazards model
mod1 <- stan_surv(formula = Surv(recyrs, status) ~ group,
data = bcancer,
chains = CHAINS,
cores = CORES,
seed = SEED,
iter = ITER)
The model here is estimated using the default cubic M-splines (with 5 degrees of freedom) for modelling the baseline hazard. Since there are no time-varying effects in the model (i.e. we did not wrap any covariates in the tve()
function) there is a closed form expression for the cumulative hazard and survival function and so the model is relatively fast to fit. Specifically, the model takes ~3.5 sec for each MCMC chain based on the default 2000 (1000 warm up, 1000 sampling) MCMC iterations.
We can easily obtain the estimated hazard ratios for the 3-category group covariate using the generic print method for stansurv objects, as follows
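print(mod1, digits = 3)   # the digits value shown here is assumed; it only controls rounding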
stan_surv
baseline hazard: M-splines on hazard scale
formula: Surv(recyrs, status) ~ group
observations: 686
events: 299 (43.6%)
right censored: 387 (56.4%)
delayed entry: no
------
Median MAD_SD exp(Median)
(Intercept) -0.671 0.202 NA
groupMedium 0.798 0.200 2.222
groupPoor 1.585 0.169 4.880
m-splines-coef1 0.001 0.001 NA
m-splines-coef2 0.006 0.005 NA
m-splines-coef3 0.147 0.029 NA
m-splines-coef4 0.268 0.055 NA
m-splines-coef5 0.102 0.078 NA
m-splines-coef6 0.214 0.142 NA
m-splines-coef7 0.235 0.164 NA
------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
From this output we see that individuals in the groups with Poor or Medium prognosis have much higher rates of death relative to the group with Good prognosis (as we might expect!). The hazard of death in the Poor prognosis group is approximately 5-fold higher than the hazard of death in the Good prognosis group. Similarly, the hazard of death in the Medium prognosis group is approximately 2.2-fold higher than the hazard of death in the Good prognosis group.
It may also be of interest to compare different types of baseline hazard we could potentially use. Here, we will fit a series of models, each with a different baseline hazard specification
mod1_exp <- update(mod1, basehaz = "exp")
mod1_weibull <- update(mod1, basehaz = "weibull")
mod1_gompertz <- update(mod1, basehaz = "gompertz")
mod1_bspline <- update(mod1, basehaz = "bs")
mod1_mspline1 <- update(mod1, basehaz = "ms")
mod1_mspline2 <- update(mod1, basehaz = "ms", basehaz_ops = list(df = 10))
and then plot the baseline hazards with 95% posterior uncertainty limits using the generic plot method for stansurv objects (note that the default plot for stansurv objects is the estimated baseline hazard). We will write a little helper function to adjust the y-axis limits, add a title, and centre the title on each plot, as follows
library(ggplot2)
plotfun <- function(model, title) {
  plot(model, plotfun = "basehaz") +              # plot baseline hazard
    coord_cartesian(ylim = c(0, 0.4)) +           # adjust y-axis limits
    labs(title = title) +                         # add plot title
    theme(plot.title = element_text(hjust = 0.5)) # centre plot title
}
p_exp <- plotfun(mod1_exp, title = "Exponential")
p_weibull <- plotfun(mod1_weibull, title = "Weibull")
p_gompertz <- plotfun(mod1_gompertz, title = "Gompertz")
p_bspline <- plotfun(mod1_bspline, title = "B-splines with df = 5")
p_mspline1 <- plotfun(mod1_mspline1, title = "M-splines with df = 5")
p_mspline2 <- plotfun(mod1_mspline2, title = "M-splines with df = 10")
bayesplot::bayesplot_grid(p_exp,
p_weibull,
p_gompertz,
p_bspline,
p_mspline1,
p_mspline2,
grid_args = list(ncol = 3))
We can also compare the fit of these models using the loo method for stansurv objects
compare_models(loo(mod1_exp),
loo(mod1_weibull),
loo(mod1_gompertz),
loo(mod1_bspline),
loo(mod1_mspline1),
loo(mod1_mspline2))
Model comparison:
(ordered by highest ELPD)
elpd_diff se_diff
mod1_mspline1 0.0 0.0
mod1_bspline -1.2 1.3
mod1_mspline2 -1.9 1.7
mod1_weibull -18.7 5.5
mod1_gompertz -32.3 6.0
mod1_exp -36.7 5.6
where we see that models with a flexible parametric (spline-based) baseline hazard fit the data best, followed by the standard parametric (Weibull, Gompertz, exponential) models. Roughly speaking, the B-spline and M-spline models seem to fit the data equally well since the differences in elpd or looic between the models are very small relative to their standard errors. Moreover, increasing the degrees of freedom for the M-splines from 5 to 10 doesn't seem to improve the fit (that is, the default degrees of freedom df = 5 seems to provide sufficient flexibility to model the baseline hazard).
After fitting the survival model, we often want to estimate the predicted survival function for individuals with different covariate patterns. Here, let us estimate the predicted survival function between 0 and 5 years for an individual in each of the prognostic groups. To do this, we can use the posterior_survfit method for stansurv objects, and its associated plot method. First let us construct the prediction (covariate) data
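nd <- data.frame(group = c("Good", "Medium", "Poor"))  # construction code assumed; nd matches the output below
nd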
group
1 Good
2 Medium
3 Poor
and then we will generate the posterior predictions
ps <- posterior_survfit(mod1, newdata = nd, times = 0, extrapolate = TRUE,
control = list(edist = 5))
head(ps)
stan_surv predictions
num. individuals: 3
prediction type: event free probability
standardised?: no
conditional?: no
id cond_time time median ci_lb ci_ub
1 1 NA 0.0000 1.0000 1.0000 1.0000
2 1 NA 0.3571 0.9966 0.9937 0.9979
3 1 NA 0.7143 0.9838 0.9769 0.9882
4 1 NA 1.0714 0.9604 0.9463 0.9711
5 1 NA 1.4286 0.9316 0.9085 0.9486
6 1 NA 1.7857 0.9034 0.8750 0.9271
Here we note that the id variable in the data frame of posterior predictions identifies which row of newdata the predictions correspond to. For demonstration purposes we have also shown a couple of other arguments in the posterior_survfit call, namely:
- the times = 0 argument says that we want to predict at time = 0 (i.e. baseline) for each individual in the newdata (this is the default anyway)
- the extrapolate = TRUE argument says that we want to extrapolate forward from time 0 (this is also the default)
- the control = list(edist = 5) argument controls the extrapolation; it says to extrapolate the survival function forward from time 0 for a distance of 5 time units (the default would have been to extrapolate as far as the largest event or censoring time in the estimation dataset, which is 7.28 years in the bcancer data).
Let us now plot the survival predictions. We will relabel the id variable with meaningful labels identifying the covariate profile of each new individual in our prediction data
panel_labels <- c('1' = "Good", '2' = "Medium", '3' = "Poor")
plot(ps) +
ggplot2::facet_wrap(~ id, labeller = ggplot2::labeller(id = panel_labels))
We can see from the plot that predicted survival is worst for patients with a Poor prognosis and best for patients with a Good prognosis, as we would expect based on our previous model estimates.
Alternatively, if we wanted to obtain and plot the predicted hazard function for each individual in our new data (instead of their survival function), then we just need to specify type = "haz" in our posterior_survfit call (the default is type = "surv"), as follows
ph <- posterior_survfit(mod1, newdata = nd, type = "haz")
plot(ph) +
ggplot2::facet_wrap(~ id, labeller = ggplot2::labeller(id = panel_labels))
We can quite clearly see in the plot the assumption of proportional hazards. We can also see that the hazard is highest in the Poor
prognosis group (i.e. worst survival) and the hazard is lowest in the Good
prognosis group (i.e. best survival). This corresponds to what we saw in the plot of the survival functions previously.
To demonstrate the implementation of time-varying effects in stan_surv we will use a simulated dataset, generated using the simsurv package (Brilleman, 2018).
We will simulate a dataset of 200 individuals with event times generated under the following Weibull hazard function with a time-varying (linearly increasing) log hazard ratio for a binary treatment covariate $x_i$:
$$h_i(t) = \gamma \lambda t^{\gamma - 1} \exp\big(\beta(t) \, x_i\big), \quad \beta(t) = -0.5 + 0.2 t$$
with scale parameter $\lambda = 0.1$ and shape parameter $\gamma = 1.5$, and with administrative censoring applied at 5 years.
# load package
library(simsurv)
# set seed for reproducibility
set.seed(999111)
# simulate covariate data
covs <- data.frame(id = 1:200,
trt = rbinom(200, 1L, 0.5))
# simulate event times
dat <- simsurv(lambdas = 0.1,
gammas = 1.5,
betas = c(trt = -0.5),
tde = c(trt = 0.2),
x = covs,
maxt = 5)
# merge covariate data and event times
dat <- merge(dat, covs)
# examine first few rows of data
head(dat)
id eventtime status trt
1 1 3.202809 1 0
2 2 4.907130 1 1
3 3 4.453174 1 1
4 4 2.566302 1 0
5 5 5.000000 0 1
6 6 4.262667 1 1
Now that we have our simulated dataset, let us fit a model with time-varying hazard ratio for trt
mod2 <- stan_surv(formula = Surv(eventtime, status) ~ tve(trt),
data = dat,
chains = CHAINS,
cores = CORES,
seed = SEED,
iter = ITER)
The tve function is used in the model formula to state that we want a time-varying effect (i.e. a time-varying coefficient) to be estimated for the variable trt. By default, a cubic B-spline basis with 3 degrees of freedom (i.e. two boundary knots placed at the limits of the range of event times, but no internal knots) is used for modelling the time-varying log hazard ratio. If we wanted to change the degree, knot locations, or degrees of freedom for the B-spline function we can specify additional arguments to the tve function.
For example, to model the time-varying log hazard ratio using quadratic B-splines with 4 degrees of freedom (i.e. two boundary knots placed at the limits of the range of event times, as well as two internal knots placed – by default – at the 33.3rd and 66.6th percentiles of the distribution of uncensored event times) we could specify the model formula as
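Surv(eventtime, status) ~ tve(trt, df = 4, degree = 2)
(The df and degree argument names used here are assumed, following the description of the tve function above.)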
Let us now plot the estimated time-varying hazard ratio from the fitted model. We can do this using the generic plot method for stansurv objects, for which we can specify the plotfun = "tve" argument. (Note that in this case there is only one covariate in the model with a time-varying effect, but if there were others, we could specify which covariate(s) we want to plot the time-varying effect for by specifying the pars argument to the plot call.)
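plot(mod2, plotfun = "tve")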
From the plot, we can see how the hazard ratio (i.e. the effect of treatment on the hazard of the event) changes as a function of time. The treatment appears to be protective during the first few years following baseline (i.e. HR < 1), and then the treatment appears to become harmful after about 4 years post-baseline. Thankfully, this is a reflection of the model we simulated under!
The plot shows a large amount of uncertainty around the estimated time-varying hazard ratio. This is to be expected, since we only simulated a dataset of 200 individuals of which only around 70% experienced the event before being censored at 5 years. So, there is very little data (i.e. very few events) with which to reliably estimate the time-varying hazard ratio. We can also see this reflected in the differences between our data generating model and the estimates from our fitted model. In our data generating model, the time-varying hazard ratio equals 1 (i.e. the log hazard ratio equals 0) at 2.5 years, but in our fitted model the median estimate for our time-varying hazard ratio equals 1 at around ~3 years. This is a reflection of the large amount of sampling error, due to our simulated dataset being so small.
In the previous example we showed how non-proportional hazards can be modelled by using a smooth B-spline function for the time-varying log hazard ratio. This is the default approach when the tve function is used to estimate a time-varying effect for a covariate in the model formula. However, another approach for modelling a time-varying log hazard ratio is to use a piecewise constant function. If we want to use a piecewise constant for the time-varying log hazard ratio (instead of the smooth B-spline function) then we just have to specify the type argument to the tve function.
We will again simulate some survival data using the simsurv package to show how a piecewise constant hazard ratio can be estimated using stan_surv.
Similar to the previous example, we will simulate a dataset of 500 individuals with event times generated under a Weibull hazard function with scale parameter $\lambda = 0.1$, shape parameter $\gamma = 1.5$, and binary baseline covariate $x_i$ (treatment). However, in this example our time-varying log hazard ratio will be defined as $\beta(t) = -0.5 + 0.7 \, I(t > 2.5)$, where $I(\cdot)$ is the indicator function taking the value 1 if its argument is true and 0 otherwise. This corresponds to a piecewise constant log hazard ratio with just two "pieces" or time intervals. The first time interval is $[0, 2.5]$, during which the true hazard ratio is $\exp(-0.5) \approx 0.61$. The second time interval is $(2.5, 5]$, during which the true log hazard ratio is $-0.5 + 0.7 = 0.2$. Our example uses only two time intervals for simplicity, but in general we could easily have considered more (although it would have required a couple of additional lines of code to simulate the data). We will again enforce administrative censoring at 5 years if an individual's simulated event time is >5 years.
# load package
library(simsurv)
# set seed for reproducibility
set.seed(888222)
# simulate covariate data
covs <- data.frame(id = 1:500,
trt = rbinom(500, 1L, 0.5))
# simulate event times
dat <- simsurv(lambdas = 0.1,
gammas = 1.5,
betas = c(trt = -0.5),
tde = c(trt = 0.7),
tdefun = function(t) (t > 2.5),
x = covs,
maxt = 5)
# merge covariate data and event times
dat <- merge(dat, covs)
# examine first few rows of data
head(dat)
id eventtime status trt
1 1 4.045842 1 0
2 2 5.000000 0 1
3 3 5.000000 0 1
4 4 5.000000 0 1
5 5 2.773858 1 0
6 6 1.007948 1 0
We now estimate a model with a piecewise constant time-varying effect for the covariate trt as follows
mod3 <- stan_surv(formula = Surv(eventtime, status) ~
tve(trt, type = "pw", knots = 2.5),
data = dat,
chains = CHAINS,
cores = CORES,
seed = SEED,
iter = ITER)
This time we specify some additional arguments to the tve function, so that our time-varying effect corresponds to the true data generating model used to simulate our event times. Specifically, we specify type = "pw" to say that we want the time-varying effect (i.e. the time-varying log hazard ratio) to be estimated using a piecewise constant function, and knots = 2.5 says that we only want one internal knot, placed at the time $t = 2.5$.
We can again use the generic plot function with argument plotfun = "tve" to examine our estimated hazard ratio for treatment
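plot(mod3, plotfun = "tve")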
Here we see that the estimated hazard ratio reasonably reflects our true data generating model (i.e. a hazard ratio of approximately exp(-0.5) ≈ 0.61 during the first time interval and approximately exp(0.2) ≈ 1.22 during the second time interval), although there is a slight discrepancy due to the sampling variation in the simulated event times.
To demonstrate the estimation of a hierarchical model for survival data in stan_surv we will use the frail dataset (see help("rstanarm-datasets") for a description). The frail dataset contains simulated event times for 200 patients clustered within 20 hospital sites (10 patients per hospital site). The event times are simulated from a parametric proportional hazards model under the following assumptions: (i) a constant (i.e. exponential) baseline hazard rate of 0.1; (ii) a fixed treatment effect with log hazard ratio of 0.3; and (iii) a site-specific random intercept (specified on the log hazard scale) drawn from a N(0,1) distribution.
Let’s look at the first few rows of the data:
id site trt b eventtime status
1 1 1 0 0.4229517 0.9058188 1
2 2 1 1 0.4229517 5.9190576 1
3 3 1 0 0.4229517 7.8525219 1
4 4 1 0 0.4229517 1.2066141 1
5 5 1 1 0.4229517 1.1703645 1
6 6 1 0 0.4229517 2.6209007 1
To fit a hierarchical model for clustered survival data we use a formula syntax similar to what is used in the lme4 R package (Bates et al. (2015)). Let’s consider the following model (which aligns with the model used to generate the simulated data):
mod_randint <- stan_surv(
formula = Surv(eventtime, status) ~ trt + (1 | site),
data = frail,
basehaz = "exp",
chains = CHAINS,
cores = CORES,
seed = SEED,
iter = ITER)
The model contains a baseline covariate for treatment (0 or 1) as well as a site-specific intercept to allow for correlation in the event times for patients from the same site. We've called the model object mod_randint to denote the fact that it includes a site-specific (random) intercept. Let's examine the parameter estimates from the model:
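print(mod_randint, digits = 2)   # the digits value is assumed; it matches the rounding shown below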
stan_surv
baseline hazard: exponential
formula: Surv(eventtime, status) ~ trt + (1 | site)
observations: 200
events: 152 (76%)
right censored: 48 (24%)
delayed entry: no
------
Median MAD_SD exp(Median)
(Intercept) -2.30 0.31 NA
trt 0.46 0.19 1.59
Error terms:
Groups Name Std.Dev.
site (Intercept) 1.15
Num. levels: site 20
------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
We see that the estimated log hazard ratio for treatment (0.46) is a bit larger than the "true" log hazard ratio used in the data generating model (0.3). The estimated baseline hazard rate is exp(-2.30) ≈ 0.10, which is pretty close to the baseline hazard rate used in the data generating model (0.1). Of course, the differences between the estimated parameters and the true parameters from the data generating model are attributable to sampling noise.
If this were a real analysis, we might wonder whether the site-specific intercepts are necessary! Well, we can assess that by fitting an alternative model that does not include the site-specific intercepts and comparing it to the model we just estimated. We will compare them using the loo function. We first need to fit the model without the site-specific intercept. To do this, we will just use the generic update method for stansurv objects, since all we are changing is the model formula:
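For example (the object name mod_fixed is chosen here to match the model comparison shown later):

mod_fixed <- update(mod_randint, formula. = Surv(eventtime, status) ~ trt)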
Let’s calculate the loo
for both these models and compare them:
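compare_models(loo(mod_fixed), loo(mod_randint))   # exact call assumed; compare_models is used as earlier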
Model comparison:
(negative 'elpd_diff' favors 1st model, positive favors 2nd)
elpd_diff se
56.8 9.7
We see strong evidence in favour of the model with the site-specific intercepts!
But let’s not quite finish there. What about if we want to generalise the random effects structure further. For instance, is the site-specific intercept enough? Perhaps we should consider estimating both a site-specific intercept and a site-specific treatment effect. We have minimal data to estimate such a model (recall that there is only 20 sites and 10 patients per site) but for the sake of demonstration we will forge on nonetheless. Let’s fit a model with both a site-specific intercept and a site-specific coefficient for the covariate trt
(i.e. treatment):
mod_randtrt <- update(mod_randint, formula. =
Surv(eventtime, status) ~ trt + (trt | site))
print(mod_randtrt, digits = 2)
stan_surv
baseline hazard: exponential
formula: Surv(eventtime, status) ~ trt + (trt | site)
observations: 200
events: 152 (76%)
right censored: 48 (24%)
delayed entry: no
------
Median MAD_SD exp(Median)
(Intercept) -2.27 0.26 NA
trt 0.47 0.20 1.60
Error terms:
Groups Name Std.Dev. Corr
site (Intercept) 1.104
trt 0.443 -0.23
Num. levels: site 20
------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
We see that we have an estimated standard deviation for the site-specific intercepts and for the site-specific coefficients for trt, as well as the estimated correlation between those site-specific parameters.
Let’s now compare all three of these models based on loo
:
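compare_models(loo(mod_fixed),     # again, the exact call is assumed
               loo(mod_randint),
               loo(mod_randtrt))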
Model comparison:
(ordered by highest ELPD)
elpd_diff se_diff
mod_randint 0.0 0.0
mod_randtrt -0.4 0.8
mod_fixed -56.8 9.7
It appears that the model with just a site-specific intercept is the best fitting model. It is much better than the model without a site-specific intercept, and slightly better than the model with both a site-specific intercept and a site-specific treatment effect. In other words, including a site-specific intercept appears important, but including a site-specific treatment effect is not. This conclusion is reassuring, because it aligns with the data generating model we used to simulate the data!
Bates D, Maechler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software 2015;67(1):1–48.
Brilleman S. (2018) simsurv: Simulate Survival Data. R package version 0.2.2.
Hougaard P. Fundamentals of Survival Data. Biometrics 1999;55:13–22.
Ramsay JO. Monotone Regression Splines in Action. Statistical Science 1988;3(4):425–461.
Wang W, Yan J. (2018) splines2: Regression Spline Functions and Classes. R package version 0.2.8.
When basehaz is set equal to "exp", "weibull", "gompertz", "ms" (the default), or "bs" then the model is defined on the hazard scale using the following parameterisations.
The exponential model is parameterised with scale parameter $\lambda_i = \exp(\eta_i)$ where $\eta_i$ denotes our linear predictor.
For individual $i$ we have:
$$h_i(t) = \lambda_i, \quad H_i(t) = t \lambda_i, \quad S_i(t) = \exp(-t \lambda_i)$$
or on the log scale:
$$\log h_i(t) = \eta_i, \quad \log H_i(t) = \log(t) + \eta_i$$
The definition of $\lambda$ for the baseline (i.e. with all covariates set to zero) is:
$$\lambda = \exp(\beta_0)$$
The Weibull model is parameterised with scale parameter $\lambda_i = \exp(\eta_i)$ and shape parameter $\gamma > 0$ where $\eta_i$ denotes our linear predictor.
For individual $i$ we have:
$$h_i(t) = \gamma t^{\gamma - 1} \lambda_i, \quad H_i(t) = t^{\gamma} \lambda_i, \quad S_i(t) = \exp(-t^{\gamma} \lambda_i)$$
or on the log scale:
$$\log h_i(t) = \log(\gamma) + (\gamma - 1) \log(t) + \eta_i$$
The definition of $\lambda$ for the baseline is:
$$\lambda = \exp(\beta_0)$$
The Gompertz model is parameterised with shape parameter $\lambda_i = \exp(\eta_i)$ and scale parameter $\gamma > 0$ where $\eta_i$ denotes our linear predictor.
For individual $i$ we have:
$$h_i(t) = \exp(\gamma t) \lambda_i, \quad H_i(t) = \frac{\exp(\gamma t) - 1}{\gamma} \lambda_i, \quad S_i(t) = \exp\left(-\frac{\exp(\gamma t) - 1}{\gamma} \lambda_i\right)$$
or on the log scale:
$$\log h_i(t) = \gamma t + \eta_i$$
The definition of $\lambda$ for the baseline is:
$$\lambda = \exp(\beta_0)$$
The M-spline model is parameterised with a vector of regression coefficients $\boldsymbol{\gamma} > 0$ for the baseline hazard and with covariate effects introduced through a linear predictor $\eta_i$. Note that there is no intercept in the linear predictor since it is absorbed into the baseline hazard spline function.
For individual $i$ we have:
$$h_i(t) = M(t; \boldsymbol{\gamma}, \boldsymbol{k}) \exp(\eta_i), \quad H_i(t) = I(t; \boldsymbol{\gamma}, \boldsymbol{k}) \exp(\eta_i), \quad S_i(t) = \exp\big(-I(t; \boldsymbol{\gamma}, \boldsymbol{k}) \exp(\eta_i)\big)$$
or on the log scale:
$$\log h_i(t) = \log M(t; \boldsymbol{\gamma}, \boldsymbol{k}) + \eta_i$$
where $M(t; \boldsymbol{\gamma}, \boldsymbol{k})$ denotes a cubic M-spline function evaluated at time $t$ with regression coefficients $\boldsymbol{\gamma}$ and basis evaluated using the vector of knot locations $\boldsymbol{k}$. Similarly, $I(t; \boldsymbol{\gamma}, \boldsymbol{k})$ denotes a cubic I-spline function (i.e. the integral of an M-spline) evaluated at time $t$ with regression coefficients $\boldsymbol{\gamma}$ and basis evaluated using the vector of knot locations $\boldsymbol{k}$.
The B-spline model is parameterised with a vector of regression coefficients $\boldsymbol{\gamma}$ and a linear predictor $\eta_i$. Note that there is no intercept in the linear predictor since it is absorbed into the spline function.
For individual $i$ we have:
$$h_i(t) = \exp\big(B(t; \boldsymbol{\gamma}, \boldsymbol{k}) + \eta_i\big)$$
or on the log scale:
$$\log h_i(t) = B(t; \boldsymbol{\gamma}, \boldsymbol{k}) + \eta_i$$
where $B(t; \boldsymbol{\gamma}, \boldsymbol{k})$ denotes a cubic B-spline function evaluated at time $t$ with regression coefficients $\boldsymbol{\gamma}$ and basis evaluated using the vector of knot locations $\boldsymbol{k}$. The cumulative hazard, survival function, and CDF for the B-spline model cannot be calculated analytically. Instead, the model is only defined analytically on the hazard scale and quadrature is used to evaluate the following:
$$H_i(t) = \int_0^t h_i(s) \, ds, \quad S_i(t) = \exp\big(-H_i(t)\big), \quad F_i(t) = 1 - S_i(t)$$
We can extend the previous model formulations to allow for time-varying coefficients (i.e. non-proportional hazards). The time-varying linear predictor is introduced on the hazard scale. That is, in our previous model definitions $\eta_i$ is instead replaced by $\eta_i(t)$. This leads to an analytical form for the hazard and log hazard. However, in general, there is no longer a closed form expression for the cumulative hazard, survival function, or CDF. Therefore, when the linear predictor includes time-varying coefficients, quadrature is used to evaluate the following:
$$H_i(t) = \int_0^t h_i(s) \, ds, \quad S_i(t) = \exp\big(-H_i(t)\big), \quad F_i(t) = 1 - S_i(t)$$
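As a rough illustration of this kind of calculation (plain R, not the internal implementation used by stan_surv; a Gauss-Legendre rule from the statmod package is assumed as the quadrature rule):

library(statmod)
cumhaz_quad <- function(hazfun, t, nodes = 15) {
  gq <- gauss.quad(nodes, kind = "legendre")
  s  <- 0.5 * t * (gq$nodes + 1)   # transform nodes from [-1, 1] to [0, t]
  w  <- 0.5 * t * gq$weights
  sum(w * hazfun(s))               # approximates H(t) = integral of h(s) from 0 to t
}
# example: Weibull hazard with shape 1.5 and scale-type parameter 0.1
hazfun <- function(s) 1.5 * s^(1.5 - 1) * 0.1
c(quadrature = cumhaz_quad(hazfun, t = 2),
  analytic   = 2^1.5 * 0.1)        # closed form H(t) = t^gamma * lambda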
When basehaz is set equal to "exp-aft" or "weibull-aft" then the model is defined on the accelerated failure time scale using the following parameterisations.
The exponential model is parameterised with scale parameter $\lambda_i = \exp(-\eta_i)$ where $\eta_i$ denotes our linear predictor.
For individual $i$ we have:
$$h_i(t) = \lambda_i, \quad H_i(t) = t \lambda_i, \quad S_i(t) = \exp(-t \lambda_i)$$
or on the log scale:
$$\log h_i(t) = -\eta_i, \quad \log H_i(t) = \log(t) - \eta_i$$
The definition of $\lambda$ for the baseline is:
$$\lambda = \exp(-\beta_0^*)$$
The relationship between coefficients under the PH (unstarred) and AFT (starred) parameterisations is as follows:
$$\beta_0 = -\beta_0^*, \quad \beta_p = -\beta_p^*$$
Lastly, the general form for the hazard function and survival function under an AFT model with acceleration factor $\exp(-\eta_i)$ can be used to derive the exponential AFT model defined here by setting $h_0(t) = 1$, $S_0(t) = \exp(-t)$, and $\lambda_i = \exp(-\eta_i)$:
$$h_i(t) = \exp(-\eta_i) \, h_0\big(t \exp(-\eta_i)\big) = \exp(-\eta_i) = \lambda_i$$
$$S_i(t) = S_0\big(t \exp(-\eta_i)\big) = \exp\big(-t \exp(-\eta_i)\big) = \exp(-t \lambda_i)$$
The Weibull model is parameterised with scale parameter $\lambda_i = \exp(-\gamma \eta_i)$ and shape parameter $\gamma > 0$ where $\eta_i$ denotes our linear predictor.
For individual $i$ we have:
$$h_i(t) = \gamma t^{\gamma - 1} \lambda_i, \quad H_i(t) = t^{\gamma} \lambda_i, \quad S_i(t) = \exp(-t^{\gamma} \lambda_i)$$
or on the log scale:
$$\log h_i(t) = \log(\gamma) + (\gamma - 1) \log(t) - \gamma \eta_i$$
The definition of $\lambda$ for the baseline is:
$$\lambda = \exp(-\gamma \beta_0^*)$$
The relationship between coefficients under the PH (unstarred) and AFT (starred) parameterisations is as follows:
$$\beta_0 = -\gamma \beta_0^*, \quad \beta_p = -\gamma \beta_p^*$$
Lastly, the general form for the hazard function and survival function under an AFT model with acceleration factor $\exp(-\eta_i)$ can be used to derive the Weibull AFT model defined here by setting $h_0(t) = \gamma t^{\gamma - 1}$, $S_0(t) = \exp(-t^{\gamma})$, and $\lambda_i = \exp(-\gamma \eta_i)$:
$$h_i(t) = \exp(-\eta_i) \, h_0\big(t \exp(-\eta_i)\big) = \gamma t^{\gamma - 1} \exp(-\gamma \eta_i) = \gamma t^{\gamma - 1} \lambda_i$$
$$S_i(t) = S_0\big(t \exp(-\eta_i)\big) = \exp\big(-t^{\gamma} \exp(-\gamma \eta_i)\big) = \exp(-t^{\gamma} \lambda_i)$$
We can extend the previous model formulations to allow for time-varying coefficients (i.e. time-varying acceleration factors).
The so-called "unmoderated" survival probability for an individual at time $t$ is defined as the baseline survival probability at time $t$, i.e. $S_0(t)$. With a time-fixed acceleration factor, the survival probability for a so-called "moderated" individual is defined as the baseline survival probability but evaluated at "time $t$ multiplied by the acceleration factor $\exp(-\eta_i)$". That is, the survival probability for the moderated individual is $S_i(t) = S_0\big(t \exp(-\eta_i)\big)$.
However, with time-varying acceleration we cannot simply multiply time by a fixed (acceleration) constant. Instead, we must integrate the function for the time-varying acceleration factor over the interval 0 to $t$. In other words, we must evaluate:
$$S_i(t) = S_0 \left( \int_0^t \exp\big(-\eta_i(u)\big) \, du \right)$$
Hougaard also gives a general expression for the hazard function under time-varying acceleration, as follows:
$$h_i(t) = \exp\big(-\eta_i(t)\big) \, h_0 \left( \int_0^t \exp\big(-\eta_i(u)\big) \, du \right)$$
Note: It is interesting to note here that the hazard at time $t$ is in fact a function of the full history of covariates and parameters (i.e. the linear predictor) from time 0 up until time $t$. This is different to the hazard scale formulation of time-varying effects (i.e. non-proportional hazards). Under the hazard scale formulation with time-varying effects, the survival probability is a function of the full history between times 0 and $t$, but the hazard is not; instead, the hazard is only a function of covariates and parameters as defined at the current time. This is particularly important to consider when fitting accelerated failure time models with time-varying effects in the presence of delayed entry (i.e. left truncation).
For the exponential distribution, this leads to:
$$h_i(t) = \exp\big(-\eta_i(t)\big), \quad S_i(t) = \exp \left( - \int_0^t \exp\big(-\eta_i(u)\big) \, du \right)$$
and for the Weibull distribution, this leads to:
$$h_i(t) = \exp\big(-\eta_i(t)\big) \, \gamma \left( \int_0^t \exp\big(-\eta_i(u)\big) \, du \right)^{\gamma - 1}, \quad S_i(t) = \exp \left( - \left[ \int_0^t \exp\big(-\eta_i(u)\big) \, du \right]^{\gamma} \right)$$
The general expressions for the hazard and survival function under an AFT model with a time-varying linear predictor are used to evaluate the likelihood for the accelerated failure time model in stan_surv when time-varying effects are specified in the model formula. Specifically, quadrature is used to evaluate the cumulative acceleration factor, and this is then substituted into the relevant expressions for the hazard and survival.
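As a final illustrative sketch (plain R; the time-varying linear predictor used here is hypothetical), the cumulative acceleration factor can be evaluated numerically and then substituted into the Weibull AFT survival function:

cum_af <- function(eta_fun, t) {
  # cumulative acceleration factor: integral of exp(-eta(u)) from 0 to t
  integrate(function(u) exp(-eta_fun(u)), lower = 0, upper = t)$value
}
eta_fun <- function(u) 0.2 + 0.1 * u   # hypothetical time-varying linear predictor
gamma   <- 1.5                         # Weibull shape parameter
t       <- 3
exp(-cum_af(eta_fun, t)^gamma)         # survival probability S(t) under the Weibull AFT model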