Getting started with non-parametric bootstraps: execution and interpretation

Once you have completed model development, you have a final model that best fits your data and you are ready to validate your results. One way to internally validate your results is to assess the precision of your parameter estimates by performing a non-parametric bootstrap.

What is a bootstrap?

In the figure below I tried to illustrate the steps required to perform a bootstrap:

Figure 1 – Bootstrap introduction and work flow

During model development you used one dataset as input for your model to obtain your parameter estimates. I refer to this modeling dataset as the “Original Dataset” in Figure 1. This dataset contains a representative sample of the whole population of interest (e.g., 20 healthy volunteers out of all the healthy volunteers available). By re-sampling individuals from the original dataset with replacement multiple times, you create M datasets of the same size as the original dataset, each containing a new population sample. Re-sampling with replacement means that the same individual can be sampled more than once from the original dataset, so a given individual may appear several times in a new population sample or not at all. The re-sampling is a means of extrapolating towards the real population, as the generated population samples don't necessarily have the same distribution as the original one.
Each of the newly created M datasets will then serve as a new input dataset for your final model. Hence, the model is run M times, generating M sets of new parameter estimates.
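
To make the re-sampling step concrete, here is a minimal R sketch of building one bootstrap dataset. It assumes the original dataset has been read into a data frame called orig (a hypothetical name) with one row per observation and an ID column identifying each individual:

    set.seed(42)                                          # for reproducibility
    ids <- unique(orig$ID)                                # individuals in the original dataset
    sampled <- sample(ids, length(ids), replace = TRUE)   # same size, with replacement
    # Rebuild the dataset: an individual drawn twice appears twice; each copy
    # gets a new unique ID so the model treats the copies as distinct subjects.
    bs_data <- do.call(rbind, lapply(seq_along(sampled), function(i) {
      rows <- orig[orig$ID == sampled[i], ]
      rows$ID <- i
      rows
    }))

Repeating this M times (with different seeds) yields the M bootstrap datasets of Figure 1; in practice, PsN does all of this for you.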

The results are aggregated and we obtain an empirical distribution for each parameter (often approximately normal or log-normal), which serves to calculate its confidence interval (CI).

Note: The data used for calculating the CI needs to be symmetrical. The normal distribution is symmetrical; a log-normal distribution is symmetrical on the log scale.
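
For example, with the commonly used percentile method the CI is read directly from the quantiles of the bootstrapped estimates. A minimal R sketch, assuming cl_boot is a hypothetical numeric vector holding the CL estimates from all bootstrap runs:

    # 'cl_boot' is assumed to hold the CL estimates from the M bootstrap runs
    quantile(cl_boot, probs = c(0.025, 0.5, 0.975))   # median and 95% percentile CI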

Step-by-step bootstrap with Pirana & Perl-speaks-NONMEM (PsN)

To start your bootstrap, first navigate to your model's location in Pirana. Select the model and right-click to open the drop-down menu shown in Figure 2.

Figure 2 – Bootstrap menu
(right-click model –> PsN –> Model diagnostics –> bootstrap (select & click))

Then the PsN command window will pop up. The default bootstrap options are shown below, in Figure 3.

Figure 3 – PsN default options for bootstrap

When you click the orange play button, your bootstrap will start running with 50 samples of the original dataset, following the steps presented in Figure 1. You can also execute this command directly from the command prompt.
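
For reference, the command-prompt equivalent would look something like the line below, where run1.mod is a placeholder for your control stream (-samples and -dir are the PsN bootstrap options shown in Figure 3):

    bootstrap run1.mod -samples=50 -dir=bs_pk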

The results will be saved in the directory specified under -dir (in this case “-dir=bs_pk”). However, is the default number of samples enough to calculate the CI in your situation? And are all the model runs of the bootstrap going to be successful?

How many samples are enough?

As a rule of thumb, we normally start with 1000 samples to get reliable and stable CIs. The parameter distributions resulting from the bootstrap are sensitive to the number of replicates used; for the more sensitive parameters, larger changes in the upper and lower CI bounds have been observed. If you would like to dive deeper into this matter, I suggest having a look at this ASCPT presentation from 2005, which presents the results of a systematic analysis of the influence of sample size on determining CIs when performing a bootstrap.
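
To get a feel for this with your own model, you can compare the CI bounds obtained from increasing subsets of the bootstrap estimates. A minimal R sketch, re-using the hypothetical cl_boot vector from above (assuming at least 1000 runs were performed):

    # Watch the 95% CI bounds stabilize as the number of replicates grows
    for (m in c(50, 200, 500, 1000)) {
      print(quantile(cl_boot[seq_len(m)], probs = c(0.025, 0.975)))
    }

If the bounds still shift noticeably between 500 and 1000 replicates, that is a sign you need more samples.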

Handling unsuccessful model runs

The number of terminated/unsuccessful model runs has less influence on the calculated CIs than the number of samples. When calculating CIs, the default option in PsN is to ignore terminated runs; however, a minimum number of successful runs is required to obtain percentile confidence intervals. As stated in the PsN documentation, you would need:

  • 19 for interval 5% – 95% (90% CI)
  • 39 for interval 2.5% – 97.5% (95% CI)
  • 199 for interval 0.5% – 99.5% (99% CI)
  • 1999 for interval 0.05% – 99.95% (99.9% CI)

(The pattern: each n is the smallest number of runs for which (n + 1) × the lower percentile equals 1, so the lower CI bound corresponds exactly to the smallest estimate in the sample.)

Note: For more information on how the CIs are calculated within PsN, please see the documentation here, under bootstrap.

Sometimes you may want to include the terminated runs when calculating the bootstrap statistics; this leads to a more conservative reflection of the uncertainty in the model parameter estimates. I haven't tried this myself yet, but you can find more information about it in the same 2005 ASCPT presentation (link).
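
If you want to try it, the PsN bootstrap documentation describes -skip_* options that control which runs are excluded from the statistics; to my understanding, a call along these lines would include runs whose minimization terminated (please verify the exact option names for your PsN version):

    bootstrap run1.mod -samples=1000 -dir=bs_pk -skip_minimization_terminated=0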

Bootstrap results

Once your bootstrap is done, the results are stored in the folder you assigned in the PsN command line, which has the following structure:

  • |— bs_model_name
    • |— m1 –> directory that includes all the sampled datasets, model runs and results
    • |— modelfit_dir1 –>  directory that includes the raw results of each model run
    • |— bootstrap_results.csv –> name says it all
    • |— command.txt –> command line used to run the bootstrap
    • |— [ … ] –> other files with details about the sampling, etc.

The bootstrap_results.csv file includes statistical summaries specific to the bootstrap, which are usually reported together with the model parameter estimates. In a paper that includes a bootstrap as internal validation, we typically see:

  • number of terminated runs and reason for termination
  • median values for all parameters
  • 2.5% – 97.5% interval (or the 95% CI)

Ideally, the model-estimated parameters should always fall within the 95% CI and should not deviate by more than 20% from the median calculated with the bootstrap. A wide CI indicates that a parameter is uncertain.
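
As a quick check you can compute that deviation yourself. A minimal R sketch, assuming orig_est and bs_median are hypothetical named numeric vectors holding the original parameter estimates and the corresponding bootstrap medians (e.g., copied from bootstrap_results.csv):

    pct_dev <- 100 * (orig_est - bs_median) / bs_median
    pct_dev[abs(pct_dev) > 20]   # flag parameters deviating by more than 20%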

Figure 4 – Screenshot of the bootstrap_results.csv file, enlarged to focus on the model parameters. More information is included in the actual output; please see the PsN documentation for details.

Graphical output

By using the option “-rplots” in the PsN command line, we get an R script (bootstrap.R) that can be used to visualize the bootstrap results. Here I show an example for CL:

Figure 5 – Distribution of parameter values obtained with the bootstrap. n – number of successful runs; dashed lines – mean and original parameter estimate; dotted-dashed lines – bounds of the 95% CI
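
If you prefer to build such a plot yourself, here is a minimal R sketch. It assumes the raw bootstrap results have been read in with one row per run and contain a column named CL (the actual file and column names depend on your model and PsN version):

    raw <- read.csv("raw_results.csv")        # hypothetical file name
    cl  <- raw$CL[!is.na(raw$CL)]             # drop runs without an estimate
    ci  <- quantile(cl, probs = c(0.025, 0.975))
    hist(cl, breaks = 30, main = "Bootstrap distribution of CL", xlab = "CL")
    abline(v = median(cl), lty = 2)           # median (dashed)
    abline(v = ci, lty = 4)                   # 95% CI bounds (dot-dashed)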

Recovering an interrupted bootstrap run

It can happen that your bootstrap takes several days to run and your computer is accidentally turned off. You can re-start the bootstrap from the point where it was interrupted by simply re-using the same PsN command with which you started it. You may encounter an error message for the runs that were interrupted; this is normal, just continue with the bootstrap.

Useful reading on bootstrap – background information

If you want to learn more, have a look at the following pages:

  • ASCPT 2005 presentation by Marc Gastonguay (link)
  • Bootstrap & CI presentation by Nick Holford (link)
  • Thai et al. 2014 paper on the evaluation of bootstrap methods for estimating uncertainty of parameters (link)

GUEST AUTHOR

This post was written by Sinziana Cristea from Leiden University.
She works on predictive models to study maturation of renal elimination processes, ultimately to guide dosing of renally excreted drugs in children.