
Commit a1c474b

Standardize section titles to "Coverage" in multiple documentation files
Parent: df3d01f

File tree

- doc/plm/lplr.qmd
- doc/plm/pliv.qmd
- doc/plm/plr.qmd
- doc/plm/plr_cate.qmd
- doc/plm/plr_gate.qmd

5 files changed: +58, -11 lines


doc/plm/lplr.qmd

Lines changed: 1 addition & 1 deletion
````diff
@@ -22,7 +22,7 @@ from utils.style_tables import generate_and_show_styled_table
 init_notebook_mode(all_interactive=True)
 ```

-## ATE Coverage
+## Coverage

 The simulations are based on the the [make_lplr_LZZ2020](https://docs.doubleml.org/stable/api/generated/doubleml.plm.datasets.make_lplr_LZZ2020.html)-DGP with $500$ observations.

````

doc/plm/pliv.qmd

Lines changed: 2 additions & 2 deletions
````diff
@@ -22,9 +22,9 @@ from utils.style_tables import generate_and_show_styled_table
 init_notebook_mode(all_interactive=True)
 ```

-## LATE Coverage
+## Coverage

-The simulations are based on the the [make_pliv_CHS2015](https://docs.doubleml.org/stable/api/generated/doubleml.datasets.make_pliv_CHS2015.html)-DGP with $500$ observations. Due to the linearity of the DGP, Lasso is a nearly optimal choice for the nuisance estimation.
+The simulations are based on the the [make_pliv_CHS2015](https://docs.doubleml.org/stable/api/generated/doubleml.plm.datasets.make_pliv_CHS2015.html)-DGP with $500$ observations. Due to the linearity of the DGP, Lasso is a nearly optimal choice for the nuisance estimation.

 ::: {.callout-note title="Metadata" collapse="true"}

````
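For orientation (not part of this commit): a minimal sketch of the estimation the pliv.qmd paragraph describes, using Lasso nuisance learners and importing the generator from `doubleml.plm.datasets`, the module path the updated link points to. The DGP dimensions and the seed are illustrative assumptions, not the benchmark configuration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

from doubleml import DoubleMLPLIV
from doubleml.plm.datasets import make_pliv_CHS2015  # new module path referenced by the updated link

np.random.seed(42)
# 500 observations as in the benchmark description; dim_x/dim_z are illustrative assumptions
dml_data = make_pliv_CHS2015(n_obs=500, dim_x=20, dim_z=1)

# Lasso for every nuisance component; nearly optimal here because the DGP is linear
dml_pliv = DoubleMLPLIV(
    dml_data,
    ml_l=LassoCV(),  # E[Y | X]
    ml_m=LassoCV(),  # E[Z | X]
    ml_r=LassoCV(),  # E[D | X]
    n_folds=5,
)
dml_pliv.fit()
print(dml_pliv.summary)
```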

doc/plm/plr.qmd

Lines changed: 51 additions & 4 deletions
````diff
@@ -22,9 +22,9 @@ from utils.style_tables import generate_and_show_styled_table
 init_notebook_mode(all_interactive=True)
 ```

-## ATE Coverage
+## Coverage

-The simulations are based on the the [make_plr_CCDDHNR2018](https://docs.doubleml.org/stable/api/generated/doubleml.datasets.make_plr_CCDDHNR2018.html)-DGP with $500$ observations.
+The simulations are based on the the [make_plr_CCDDHNR2018](https://docs.doubleml.org/stable/api/generated/doubleml.plm.datasets.make_plr_CCDDHNR2018.html)-DGP with $500$ observations.

 ::: {.callout-note title="Metadata" collapse="true"}

@@ -113,9 +113,9 @@ generate_and_show_styled_table(
 )
 ```

-## ATE Sensitivity
+## Sensitivity

-The simulations are based on the the [make_confounded_plr_data](https://docs.doubleml.org/stable/api/generated/doubleml.datasets.make_confounded_plr_data.html)-DGP with $1000$ observations as highlighted in the [Example Gallery](https://docs.doubleml.org/stable/examples/py_double_ml_sensitivity.html#). As the DGP is nonlinear, we will only use corresponding learners. Since the DGP includes unobserved confounders, we would expect a bias in the ATE estimates, leading to low coverage of the true parameter.
+The simulations are based on the the [make_confounded_plr_data](https://docs.doubleml.org/stable/api/generated/doubleml.plm.datasets.make_confounded_plr_data.html)-DGP with $1000$ observations as highlighted in the [Example Gallery](https://docs.doubleml.org/stable/examples/py_double_ml_sensitivity.html#). As the DGP is nonlinear, we will only use corresponding learners. Since the DGP includes unobserved confounders, we would expect a bias in the ATE estimates, leading to low coverage of the true parameter.

 Both sensitivity parameters are set to $cf_y=cf_d=0.1$, such that the robustness value $RV$ should be approximately $10\%$.
 Further, the corresponding confidence intervals are one-sided (since the direction of the bias is unkown), such that only one side should approximate the corresponding coverage level (here only the upper coverage is relevant since the bias is positive). Remark that for the coverage level the value of $\rho$ has to be correctly specified, such that the coverage level will be generally (significantly) larger than the nominal level under the conservative choice of $|\rho|=1$.
@@ -208,3 +208,50 @@ generate_and_show_styled_table(
     coverage_highlight_cols=["Coverage", "Coverage (Upper)"]
 )
 ```
+
+## Tuning
+
+The simulations are based on the the [make_plr_CCDDHNR2018](https://docs.doubleml.org/stable/api/generated/doubleml.plm.datasets.make_plr_CCDDHNR2018.html)-DGP with $500$ observations. This is only an example as the untuned version just relies on the default configuration.
+
+```{python}
+#| echo: false
+
+# set up data
+df_tune_cov = pd.read_csv("../../results/plm/plr_ate_tune_coverage.csv", index_col=None)
+
+assert df_tune_cov["repetition"].nunique() == 1
+n_rep_tune_cov = df_tune_cov["repetition"].unique()[0]
+
+display_columns_tune_cov = ["Learner g", "Learner m", "Tuned", "Bias", "CI Length", "Coverage",]
+```
+
+
+### Partialling out
+
+```{python}
+# | echo: false
+
+generate_and_show_styled_table(
+    main_df=df_tune_cov,
+    filters={"level": 0.95, "Score": "partialling out"},
+    display_cols=display_columns_tune_cov,
+    n_rep=n_rep_tune_cov,
+    level_col="level",
+    rename_map={"Learner g": "Learner l"},
+    coverage_highlight_cols=["Coverage"]
+)
+```
+
+```{python}
+#| echo: false
+
+generate_and_show_styled_table(
+    main_df=df_tune_cov,
+    filters={"level": 0.9, "Score": "partialling out"},
+    display_cols=display_columns_tune_cov,
+    n_rep=n_rep_tune_cov,
+    level_col="level",
+    rename_map={"Learner g": "Learner l"},
+    coverage_highlight_cols=["Coverage"]
+)
+```
````
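As a companion to the plr.qmd diff above (not part of the commit): a minimal sketch of the kind of fit behind the Coverage and Sensitivity tables, assuming a doubleml version where the generator is importable from `doubleml.plm.datasets`, the path the updated links use. The random-forest learners, the seed, and the `cf_y = cf_d = 0.1` call are illustrative; the benchmark's sensitivity results are computed on the confounded DGP (`make_confounded_plr_data`), not on this one.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

from doubleml import DoubleMLPLR
from doubleml.plm.datasets import make_plr_CCDDHNR2018  # path used by the updated links

np.random.seed(3141)
dml_data = make_plr_CCDDHNR2018(alpha=0.5, n_obs=500)  # true ATE equals alpha

dml_plr = DoubleMLPLR(
    dml_data,
    ml_l=RandomForestRegressor(n_estimators=100),  # nuisance E[Y | X]
    ml_m=RandomForestRegressor(n_estimators=100),  # nuisance E[D | X]
    n_folds=5,
    score="partialling out",
)
dml_plr.fit()
print(dml_plr.confint(level=0.95))  # the interval whose coverage of alpha is benchmarked

# sensitivity analysis with the cf_y = cf_d = 0.1 values quoted in the Sensitivity section
dml_plr.sensitivity_analysis(cf_y=0.1, cf_d=0.1, rho=1.0, level=0.95)
print(dml_plr.sensitivity_summary)
```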

doc/plm/plr_cate.qmd

Lines changed: 2 additions & 2 deletions
````diff
@@ -22,9 +22,9 @@ from utils.style_tables import generate_and_show_styled_table
 init_notebook_mode(all_interactive=True)
 ```

-## CATE Coverage
+## Coverage

-The simulations are based on the the [make_heterogeneous_data](https://docs.doubleml.org/stable/api/generated/doubleml.datasets.make_heterogeneous_data.html)-DGP with $2000$ observations. The groups are defined based on the first covariate, analogously to the [CATE PLR Example](https://docs.doubleml.org/stable/examples/py_double_ml_cate_plr.html), but rely on [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) to estimate nuisance elements (due to time constraints).
+The simulations are based on the the [make_heterogeneous_data](https://docs.doubleml.org/stable/api/generated/doubleml.irm.datasets.make_heterogeneous_data.html)-DGP with $2000$ observations. The groups are defined based on the first covariate, analogously to the [CATE PLR Example](https://docs.doubleml.org/stable/examples/py_double_ml_cate_plr.html), but rely on [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) to estimate nuisance elements (due to time constraints).

 The non-uniform results (coverage, ci length and bias) refer to averaged values over all groups (point-wise confidende intervals).

````

doc/plm/plr_gate.qmd

Lines changed: 2 additions & 2 deletions
````diff
@@ -22,9 +22,9 @@ from utils.style_tables import generate_and_show_styled_table
 init_notebook_mode(all_interactive=True)
 ```

-## GATE Coverage
+## Coverage

-The simulations are based on the the [make_heterogeneous_data](https://docs.doubleml.org/stable/api/generated/doubleml.datasets.make_heterogeneous_data.html)-DGP with $500$ observations. The groups are defined based on the first covariate, analogously to the [GATE PLR Example](https://docs.doubleml.org/stable/examples/py_double_ml_gate_plr.html), but rely on [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) to estimate nuisance elements (due to time constraints).
+The simulations are based on the the [make_heterogeneous_data](https://docs.doubleml.org/stable/api/generated/doubleml.irm.datasets.make_heterogeneous_data.html)-DGP with $500$ observations. The groups are defined based on the first covariate, analogously to the [GATE PLR Example](https://docs.doubleml.org/stable/examples/py_double_ml_gate_plr.html), but rely on [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) to estimate nuisance elements (due to time constraints).

 The non-uniform results (coverage, ci length and bias) refer to averaged values over all groups (point-wise confidende intervals).

````
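For the plr_cate.qmd and plr_gate.qmd paragraphs above, a minimal sketch (not part of the commit) of drawing the heterogeneous-effects data via the `doubleml.irm.datasets` path the updated links point to, and of forming groups on the first covariate. The returned dictionary keys, the covariate naming, and the three-bin grouping are assumptions based on the DoubleML documentation, not the benchmark's exact setup.

```python
import pandas as pd

from doubleml.irm.datasets import make_heterogeneous_data  # path used by the updated links

# 500 observations with a single effect-modifying covariate, matching the GATE setup above
res = make_heterogeneous_data(n_obs=500, n_x=1)
df = res["data"]      # simulated outcome, treatment and covariates
ite = res["effects"]  # individual treatment effects (ground truth for the benchmark)

# groups defined on the first covariate; three equal-width bins, purely illustrative
first_x = df.filter(like="X").iloc[:, 0]
groups = pd.get_dummies(pd.cut(first_x, bins=3), prefix="group")
print(groups.sum())
```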
