**README.md**
Books marked with [a] are advanced material.
- [Random Number Generation](https://www.iro.umontreal.ca/~lecuyer/myftp/papers/handstat.pdf) by [Pierre L'Ecuyer](http://www-labs.iro.umontreal.ca/~lecuyer/);
- [Non-Uniform Random Variate Generation](http://www.nrbook.com/devroye/) by the great [Luc Devroye](http://luc.devroye.org/);
- Walker's [Alias method](https://en.wikipedia.org/wiki/Alias_method) is a fast way to generate discrete random variables; see the first sketch after this list;
- [Rejection Control and Sequential Importance Sampling](http://stat.rutgers.edu/home/rongchen/publications/98JASA_rejection-control.pdf) (1998) by Liu et al. discusses how to improve importance sampling by controlling rejections; see the second sketch below.
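
For the alias method mentioned above, here is a minimal sketch of the table-building and O(1) sampling steps (Vose's variant; function names are ours, in plain Python):

```python
import random

def build_alias_table(probs):
    """Build the probability/alias tables for Walker's alias method (Vose's variant)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [1.0] * n, list(range(n))
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l          # slot s borrows l's excess mass
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    return prob, alias

def alias_draw(prob, alias):
    """O(1) draw: pick a slot uniformly, then flip a biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias_table([0.1, 0.2, 0.3, 0.4])
sample = [alias_draw(prob, alias) for _ in range(10_000)]
```
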
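And a sketch of the basic rejection-control step itself (our notation; the threshold `c` is user-chosen): draws with weight below `c` are randomly thinned, and survivors have their weights raised to `max(w, c)`, which preserves self-normalised importance-sampling averages.

```python
import numpy as np

def rejection_control(samples, weights, c, rng):
    """One round of rejection control on a weighted sample.

    Accept draw i with probability min(1, w_i / c); accepted draws
    get the adjusted weight max(w_i, c), so weighted averages are preserved.
    """
    p_accept = np.minimum(1.0, weights / c)
    keep = rng.uniform(size=len(weights)) < p_accept
    return samples[keep], np.maximum(weights[keep], c)

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000)                         # draws from a N(0, 1) proposal
w = np.exp(-0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2)       # weights for a N(1, 1) target
x_rc, w_rc = rejection_control(x, w, c=np.median(w), rng=rng)
print(np.average(x, weights=w), np.average(x_rc, weights=w_rc))  # both estimate E[X] = 1
```
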
### Markov chains
- [These](https://pages.uoregon.edu/dlevin/MARKOV/markovmixing.pdf) notes by David Levin and Yuval Peres are excellent and cover a lot of material one might find interesting on Markov processes; a toy example follows.
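
As a toy companion to those notes, here is a sketch (the transition matrix is made up for illustration) that finds a chain's stationary distribution by iterating πP:

```python
import numpy as np

# A made-up transition matrix for a three-state chain (rows sum to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

pi = np.full(3, 1.0 / 3.0)   # any starting distribution works for this ergodic chain
for _ in range(500):
    pi = pi @ P               # power iteration: pi converges to the stationary law
print(pi, np.allclose(pi, pi @ P))   # pi satisfies pi P = pi
```
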
### Markov chain Monte Carlo
- Charlie Geyer's [website](http://users.stat.umn.edu/~geyer/) is a treasure trove of material on Statistics in general and MCMC methods in particular. See, for instance, [On the Bogosity of MCMC Diagnostics](http://users.stat.umn.edu/~geyer/mcmc/diag.html).
- [Efficient construction of reversible jump Markov chain Monte Carlo proposal distributions](http://www2.stat.duke.edu/~scs/Courses/Stat376/Papers/TransdimMCMC/BrooksRobertsRJ.pdf) is a nice paper on the construction of efficient proposals for reversible jump/transdimensional MCMC.
#### Hamiltonian Monte Carlo
The two definitive texts on HMC are [Neal (2011)](https://arxiv.org/pdf/1206.1901.pdf) and [Betancourt (2017)](https://arxiv.org/pdf/1701.02434.pdf).
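
As both references explain, the engine of HMC is a leapfrog simulation of Hamiltonian dynamics followed by a Metropolis correction. A minimal sketch (the standard-normal target and all tuning values are illustrative choices, not a production implementation):

```python
import numpy as np

def leapfrog(q, p, grad_log_target, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics."""
    p = p + 0.5 * eps * grad_log_target(q)       # half step for momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                          # full step for position
        p = p + eps * grad_log_target(q)         # full step for momentum
    q = q + eps * p
    p = p + 0.5 * eps * grad_log_target(q)       # final half step
    return q, p

def hmc_step(q, log_target, grad_log_target, eps, n_steps, rng):
    p0 = rng.standard_normal(q.shape)
    q_new, p_new = leapfrog(q, p0, grad_log_target, eps, n_steps)
    # Accept/reject on the change in total energy H = -log pi(q) + |p|^2 / 2
    log_alpha = (log_target(q_new) - 0.5 * p_new @ p_new) \
              - (log_target(q) - 0.5 * p0 @ p0)
    return q_new if np.log(rng.uniform()) < log_alpha else q

# Example: sample a standard bivariate normal
rng = np.random.default_rng(0)
q, draws = np.zeros(2), []
for _ in range(5_000):
    q = hmc_step(q, lambda x: -0.5 * x @ x, lambda x: -x, 0.2, 20, rng)
    draws.append(q)
```
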
#### Normalising Constants
[This](https://radfordneal.wordpress.com/2008/08/17/the-harmonic-mean-of-the-likelihood-worst-monte-carlo-method-ever/) post by Radford Neal explains why the Harmonic Mean Estimator (HME) is a _terrible_ estimator of the evidence.
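
The point is easy to reproduce. In the conjugate sketch below (model and numbers made up for illustration), the evidence is available in closed form, yet the HME computed from exact posterior draws is biased and wildly unstable across seeds:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
# Model: theta ~ N(0, 1); y_i | theta ~ N(theta, 1), i = 1..n
n = 50
y = rng.normal(0.5, 1.0, size=n)

# Exact log evidence: marginally, y ~ N(0, I + 11')
exact = multivariate_normal(np.zeros(n), np.eye(n) + np.ones((n, n))).logpdf(y)

# Exact posterior draws: theta | y ~ N(sum(y)/(n+1), 1/(n+1))
theta = rng.normal(y.sum() / (n + 1), np.sqrt(1.0 / (n + 1)), size=50_000)
loglik = -0.5 * n * np.log(2 * np.pi) - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)

# Harmonic mean estimator: hat p(y) = [ (1/N) sum_i 1 / p(y | theta_i) ]^{-1}
log_hme = -(logsumexp(-loglik) - np.log(theta.size))
print(exact, log_hme)   # re-running with another seed moves the HME a lot
```
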
#### Sequential Monte Carlo and Dynamic models
- [This](https://link.springer.com/book/10.1007/978-3-030-47845-2) book by Nicolas Chopin and Omiros Papaspiliopoulos is, as its title says, a great introduction to SMC.
SMC finds application in many areas, but dynamic (linear) models deserve a special mention. The seminal 1997 [book](https://link.springer.com/book/10.1007/b98971) by West and Harrison remains the _de facto_ text on the subject.
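
For a feel of what a dynamic linear model does, here is a sketch of the forward (Kalman) filter for the simplest DLM, the local-level model (notation and default values are our own illustrative choices):

```python
import numpy as np

def local_level_filter(y, obs_var, state_var, m0=0.0, c0=1e6):
    """Kalman filter for y_t = mu_t + v_t,  mu_t = mu_{t-1} + w_t,
    with v_t ~ N(0, obs_var) and w_t ~ N(0, state_var)."""
    m, c = m0, c0
    means, variances = [], []
    for obs in y:
        r = c + state_var                 # predictive variance of mu_t
        k = r / (r + obs_var)             # Kalman gain
        m = m + k * (obs - m)             # filtered mean
        c = (1.0 - k) * r                 # filtered variance
        means.append(m)
        variances.append(c)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(3)
mu = np.cumsum(rng.normal(0.0, 0.1, size=200))   # latent random walk
y = mu + rng.normal(0.0, 0.5, size=200)          # noisy observations
m, c = local_level_filter(y, obs_var=0.25, state_var=0.01)
```
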
## Optimisation
#### The EM algorithm
- This elementary [tutorial](https://zhwa.github.io/tutorial-of-em-algorithm.html) is simple but effective; for a concrete illustration, see the sketch below.
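
To make the idea concrete, here is a bare-bones EM loop for a two-component Gaussian mixture with known unit variances (the model and initialisation are our choices for illustration):

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    w, mu1, mu2 = 0.5, x.min(), x.max()     # crude but serviceable initialisation
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        d1 = w * np.exp(-0.5 * (x - mu1) ** 2)
        d2 = (1.0 - w) * np.exp(-0.5 * (x - mu2) ** 2)
        r = d1 / (d1 + d2)
        # M-step: maximise the expected complete-data log-likelihood
        w = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1.0 - r) * x).sum() / (1.0 - r).sum()
    return w, mu1, mu2

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
print(em_two_gaussians(x))   # roughly (0.3, -2, 3)
```
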
[...]
- In [Markov Chain Monte Carlo Maximum Likelihood](https://www.stat.umn.edu/geyer/f05/8931/c.pdf), Charlie Geyer shows how one can use MCMC to do maximum likelihood estimation when the likelihood cannot be written in closed form. This paper is an example of MCMC methods being used outside of Bayesian statistics.
- [This](https://github.com/maxbiostat/Computational_Statistics/blob/master/supporting_material/1997_Dunbar_CollegeMaths.pdf) paper discusses the solution of Problem A in [assignment 0 (2021)](https://github.com/maxbiostat/Computational_Statistics/blob/master/assignments/warmup_assignment.pdf).
#### Reparametrisation
Sometimes a clever way to make it easier to compute expectations with respect to a target distribution is to _reparametrise_ it. Here are some resources (a short sketch of the non-centred trick follows the list):
- A YouTube video [introducing the concepts with a simple example](https://www.youtube.com/watch?v=gSd1msFFZTw);
- [Hamiltonian Monte Carlo for Hierarchical Models](https://arxiv.org/abs/1312.0906) by M. J. Betancourt and Mark Girolami;
- [A General Framework for the Parametrization of Hierarchical Models](https://projecteuclid.org/journals/statistical-science/volume-22/issue-1/A-General-Framework-for-the-Parametrization-of-Hierarchical-Models/10.1214/088342307000000014.full) by Omiros Papaspiliopoulos, Gareth O. Roberts, and Martin Sköld;
- [Efficient parametrisations for normal linear mixed models](https://www.jstor.org/stable/2337527?seq=1#metadata_info_tab_contents) by Alan E. Gelfand, Sujit K. Sahu and Bradley P. Carlin.
See [#4](https://github.com/maxbiostat/Computational_Statistics/issues/4). Contributed by @lucasmoschen.
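
As promised above, a small sketch of the non-centred trick on Neal's funnel, a standard hard case (the code only illustrates the change of variables, not a full sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

# Neal's funnel: v ~ N(0, 3^2) and x | v ~ N(0, exp(v)).
# Centred coordinates (v, x) have wildly varying scales, which hurts MCMC.
# Non-centred coordinates (v, z), with x = exp(v / 2) * z and z ~ N(0, 1),
# turn the target into two independent Gaussians -- easy for any sampler.
v = rng.normal(0.0, 3.0, size=10_000)
z = rng.normal(0.0, 1.0, size=10_000)
x = np.exp(v / 2.0) * z          # deterministic map back to the original parametrisation
# Any expectation under the funnel can be computed from (v, x) pairs built this way.
```
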
#### Variance reduction
- [Rao-Blackwellisation](http://www.columbia.edu/~im2131/ps/rao-black.pdf) is a popular technique for obtaining estimators with lower variance; a toy illustration follows. I recommend the recent International Statistical Review [article](https://arxiv.org/abs/2101.01011) by Christian Robert and Gareth Roberts on the topic.
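
A toy illustration of the idea (made-up model: Y ~ N(0, 1), X | Y ~ N(Y, 1), and we want E[exp(X)] = e): replacing h(X) by the conditional expectation E[h(X) | Y], known in closed form here, never increases the variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
y = rng.normal(0.0, 1.0, size=n)
x = rng.normal(y, 1.0)                  # X | Y ~ N(Y, 1), so marginally X ~ N(0, 2)

naive = np.exp(x)                       # plain Monte Carlo for h(X) = exp(X)
rao_black = np.exp(y + 0.5)             # E[exp(X) | Y] = exp(Y + 1/2)

print(naive.mean(), rao_black.mean())   # both estimate e = 2.718...
print(naive.var(), rao_black.var())     # the Rao-Blackwellised variance is much smaller
```
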
### Extra (fun) resources
- A [Visualisation](https://chi-feng.github.io/mcmc-demo/app.html) of MCMC for various algorithms and targets.
In these blogs and websites you will often find interesting discussions on computational, numerical and statistical aspects of applied Statistics and Mathematics.
- Christian Robert's [blog](https://xianblog.wordpress.com/);
**annotated_bibliography.md**
In this seminal paper, Dempster, Laird and Rubin introduce the Expectation-Maximisation (EM) algorithm.
13. [The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo (2014)](https://arxiv.org/abs/1111.4246). Matthew Hoffman and Andrew Gelman introduce a novel algorithm that tunes the step size and tree depth of the HMC algorithm automatically. The No-U-Turn Sampler (NUTS), as it came to be christened, is the building block for what would later become the main algorithm implemented in [Stan](https://mc-stan.org/).
14. In [A tutorial on adaptive MCMC](https://people.eecs.berkeley.edu/~jordan/sail/readings/andrieu-thoms.pdf), Christophe Andrieu and Johannes Thoms give a very nice overview of the advantages and pitfalls (!) of adaptive MCMC. Pay special heed to Section 2.