README.Rmd (+5 −6)
@@ -13,7 +13,7 @@ knitr::opts_chunk$set(
 )
 ```
 
-# `apm`: Averaged Prediction Models
+## `apm`: Averaged Prediction Models
 
 <!-- badges: start -->
 <!-- badges: end -->
@@ -23,7 +23,7 @@ library(apm)
 data("ptpdata")
 ```
 
-## Supplying models
+### Supplying models
 
 We can specify the models to test using `apm_mod()`. This creates a full cross of all supplied arguments, which include the model formula, the family, whether the outcome is logged, whether fixed effects are included, whether the outcome is differenced, and whether outcome lags appear as predictors. Below, we create a cross of 9 models.
 
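For concreteness, a call of the kind the paragraph above describes might look like the sketch below. Only `apm_mod()`, `library(apm)`, and `ptpdata` come from this diff; the formula and every argument name and value are assumptions for illustration, not the package's confirmed interface.

```r
library(apm)     # from the README's setup chunk
data("ptpdata")  # dataset used throughout the README

# Hypothetical sketch of building a cross of candidate models.
# The argument names (family, log, lag) are assumed, not confirmed
# apm_mod() parameters.
models <- apm_mod(
  outcome ~ 1,                               # hypothetical outcome formula
  family = list("gaussian", "quasipoisson"), # families to cross
  log    = c(TRUE, FALSE),                   # logged vs. unlogged outcome
  lag    = 0:1                               # with and without an outcome lag
)
```

Because the arguments are fully crossed, each additional option multiplies the number of candidate models.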
@@ -50,7 +50,7 @@ models
 
 This leaves us with 10 models.
 
-## Fitting the models
+### Fitting the models
 
 Next, we fit all 10 models to the data, once for each validation time, to compute the average prediction error used to select the optimal model. All models are fit simultaneously so the simulation can use the full joint distribution of the model parameter estimates. For each validation time, each model is fit to a dataset containing only the data points prior to that time.
 
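The fitting call is truncated in the next hunk's header (`fits <- apm_pre(models,`), so only the function name and its first argument are attested. A hedged sketch of what the full call plausibly looks like, where every argument beyond `models` is an assumed name, not a documented parameter:

```r
# Sketch only: `data` and `val_times` are assumed argument names;
# the diff shows just `apm_pre(models,` before truncation.
fits <- apm_pre(models,
                data      = ptpdata,    # assumed: the dataset loaded above
                val_times = 2004:2007)  # assumed: validation time points

fits  # printing the fits, as shown in the diff's context lines
```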
@@ -67,15 +67,14 @@ fits <- apm_pre(models,
 fits
 ```
 
-## Computing the ATT
+### Computing the ATT
 
 We compute the ATT using `apm_est()`, which uses bootstrapping to account for uncertainty due to sampling along with uncertainty due to model selection.
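The diff attests only the function name `apm_est()` and that it bootstraps; a minimal sketch, assuming hypothetical argument names (`post_time`, `R`) purely for illustration:

```r
# Sketch only: `post_time` and `R` are assumed argument names.
est <- apm_est(fits,
               post_time = 2008,  # assumed: first post-treatment period
               R = 1000)          # assumed: number of bootstrap replicates

est  # assumed: printing shows the ATT estimate with its uncertainty
```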