Commit 8d18a92

docs: add all bib entries, fix references

1 parent d0a992c commit 8d18a92

18 files changed: +438 -217 lines changed

docs/make.jl

Lines changed: 2 additions & 1 deletion

````diff
@@ -9,13 +9,14 @@ include("pages.jl")
 mathengine = Documenter.MathJax()

 bib = CitationBibliography(
-    joinpath(@__DIR__, "refs.bib");
+    joinpath(@__DIR__, "src", "refs.bib");
     style = :authoryear
 )

 makedocs(; modules = [ReservoirComputing],
     sitename = "ReservoirComputing.jl",
     clean = true, doctest = false, linkcheck = true,
+    plugins = [bib],
     format = Documenter.HTML(;
         mathengine,
         assets = ["assets/favicon.ico"],
````
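Note: for context, a minimal sketch of the `docs/make.jl` setup this hunk converges on, assuming DocumenterCitations.jl v1+ (where the bibliography object is handed to `makedocs` through the `plugins` keyword, as the added line shows); everything not visible in the hunk is boilerplate, not taken from the commit.

```julia
# Sketch of the post-commit setup (assumes DocumenterCitations.jl v1+).
using Documenter, DocumenterCitations
using ReservoirComputing

bib = CitationBibliography(
    joinpath(@__DIR__, "src", "refs.bib");  # the .bib file now lives under docs/src
    style = :authoryear                     # render citations as (Author, Year)
)

makedocs(; modules = [ReservoirComputing],
    sitename = "ReservoirComputing.jl",
    plugins = [bib],  # registers the bibliography plugin with Documenter
    format = Documenter.HTML())
```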

docs/pages.jl

Lines changed: 2 additions & 1 deletion

````diff
@@ -20,5 +20,6 @@ pages = [
     "ESN Initializers" => "api/inits.md",
     "ESN Drivers" => "api/esn_drivers.md",
     "ESN Variations" => "api/esn_variations.md",
-    "ReCA" => "api/reca.md"]
+    "ReCA" => "api/reca.md"],
+    #"References" => "references.md"
 ]
````
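Note: the trailing entry is left commented out, which suggests a planned `references.md` page that would hold the site-wide canonical `@bibliography` block; the per-page blocks added in the files below are all marked `Canonical = false`, consistent with that reading.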

docs/refs.bib

Lines changed: 0 additions & 42 deletions
This file was deleted.
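Note: the deleted entries presumably move into `docs/src/refs.bib`, which `make.jl` now points at; that file's diff is among the remaining changed files not shown in this view.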

docs/src/api/esn_drivers.md

Lines changed: 7 additions & 0 deletions

````diff
@@ -14,3 +14,10 @@ The `GRU` driver also provides the user with the choice of the possible variants
 ```

 Please refer to the original papers for more detail about these architectures.
+
+## References
+
+```@bibliography
+Pages = ["esn_drivers.md"]
+Canonical = false
+```
````
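Note: this same pattern repeats in the files below. In DocumenterCitations, `Pages = ["esn_drivers.md"]` restricts the rendered list to entries cited on that page, and `Canonical = false` marks the block as non-canonical so its entries do not clash with a canonical bibliography defined elsewhere (here, presumably the commented-out `references.md` above).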

docs/src/api/inits.md

Lines changed: 7 additions & 0 deletions

````diff
@@ -44,3 +44,10 @@
 self_loop!
 add_jumps!
 ```
+
+## References
+
+```@bibliography
+Pages = ["inits.md"]
+Canonical = false
+```
````

docs/src/api/states.md

Lines changed: 7 additions & 0 deletions

````diff
@@ -25,3 +25,10 @@
 ```@docs
 ReservoirComputing.create_states
 ```
+
+## References
+
+```@bibliography
+Pages = ["states.md"]
+Canonical = false
+```
````

docs/src/esn_tutorials/change_layers.md

Lines changed: 6 additions & 8 deletions

````diff
@@ -26,7 +26,7 @@ Custom layers only need to follow these APIs to be compatible with ReservoirComp

 ## Example of minimally complex ESN

-Using [^rodan2012] and [^rodan2010] as references this section will provide an
+Using [Rodan2012](@cite) and [Rodan2011](@cite) as references this section will provide an
 example on how to change both the input layer and the reservoir for ESNs.

 The task for this example will be the one step ahead prediction of the Henon map.
@@ -77,11 +77,9 @@ end
 As it is possible to see, changing layers in ESN models is straightforward.
 Be sure to check the API documentation for a full list of reservoir and layers.

-## Bibliography
+## References

-[^rodan2012]: Rodan, Ali, and Peter Tiňo.
-    “Simple deterministically constructed cycle reservoirs with regular jumps.”
-    Neural computation 24.7 (2012): 1822-1852.
-[^rodan2010]: Rodan, Ali, and Peter Tiňo.
-    “Minimum complexity echo state network.”
-    IEEE transactions on neural networks 22.1 (2010): 131-144.
+```@bibliography
+Pages = ["change_layers.md"]
+Canonical = false
+```
````

docs/src/esn_tutorials/deep_esn.md

Lines changed: 6 additions & 3 deletions

````diff
@@ -2,7 +2,7 @@

 Deep Echo State Network architectures started to gain some traction recently. In this guide, we illustrate how it is possible to use ReservoirComputing.jl to build a deep ESN.

-The network implemented in this library is taken from [^1]. It works by stacking reservoirs on top of each other, feeding the output from one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation, refer to the paper linked above.
+The network implemented in this library is taken from [Gallicchio2017](@cite). It works by stacking reservoirs on top of each other, feeding the output from one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation, refer to the paper linked above.

 ## Lorenz Example

@@ -88,6 +88,9 @@ plot(p1, p2, p3; plot_title="Lorenz System Coordinates",
     legendfontsize=12, titlefontsize=20)
 ```

-## Documentation
+## References

-[^1]: Gallicchio, Claudio, and Alessio Micheli. "_Deep echo state network (deepesn): A brief survey._" arXiv preprint arXiv:1712.04323 (2017).
+```@bibliography
+Pages = ["deep_esn.md"]
+Canonical = false
+```
````
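Note: as a rough illustration of the stacking this tutorial paragraph describes, a hedged sketch follows; the vector-valued `reservoir` keyword and all sizes are assumptions (the `RandSparseReservoir` name appears elsewhere in this commit's docs, but the exact constructor arguments below do not).

```julia
# Hypothetical deep ESN sketch: one reservoir per layer, each feeding the next.
# The vector-of-reservoirs call and all sizes are assumptions, not taken from
# the diff; see the tutorial body for the exact constructor call.
using ReservoirComputing

input_data = rand(3, 500)  # assumed 3-dimensional training series

reservoir_layers = [RandSparseReservoir(100),
                    RandSparseReservoir(100),
                    RandSparseReservoir(100)]
esn = ESN(input_data; reservoir = reservoir_layers)
```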

docs/src/esn_tutorials/different_drivers.md

Lines changed: 11 additions & 12 deletions

````diff
@@ -4,7 +4,7 @@ While the original implementation of the Echo State Network implemented the mode

 ## Multiple Activation Function RNN

-Based on the double activation function ESN (DAFESN) proposed in [^1], the Multiple Activation Function ESN expands the idea and allows a custom number of activation functions to be used in the reservoir dynamics. This can be thought of as a linear combination of multiple activation functions with corresponding parameters.
+Based on the double activation function ESN (DAFESN) proposed in [Lun2015](@cite), the Multiple Activation Function ESN expands the idea and allows a custom number of activation functions to be used in the reservoir dynamics. This can be thought of as a linear combination of multiple activation functions with corresponding parameters.

 ```math
 \mathbf{x}(t+1) = (1-\alpha)\mathbf{x}(t) + \lambda_1 f_1(\mathbf{W}\mathbf{x}(t)+\mathbf{W}_{in}\mathbf{u}(t)) + \dots + \lambda_D f_D(\mathbf{W}\mathbf{x}(t)+\mathbf{W}_{in}\mathbf{u}(t))
@@ -14,7 +14,7 @@ where ``D`` is the number of activation functions and respective parameters chos

 The method to call to use the multiple activation function ESN is `MRNN(activation_function, leaky_coefficient, scaling_factor)`. The arguments can be used as both `args` and `kwargs`. `activation_function` and `scaling_factor` have to be vectors (or tuples) containing the chosen activation functions and respective scaling factors (``f_1,...,f_D`` and ``\lambda_1,...,\lambda_D`` following the nomenclature introduced above). The `leaky_coefficient` represents ``\alpha`` and it is a single value.

-Starting with the example, the data used is based on the following function based on the DAFESN paper [^1].
+Starting with the example, the data used is based on the following function from the DAFESN paper [Lun2015](@cite).

 ```@example mrnn
 u(t) = sin(t) + sin(0.51 * t) + sin(0.22 * t) + sin(0.1002 * t) + sin(0.05343 * t)
@@ -87,7 +87,7 @@ In this example, it is also possible to observe the input of parameters to the m

 ## Gated Recurrent Unit

-Gated Recurrent Units (GRUs) [^2] have been proposed in more recent years with the intent of limiting notable problems of RNNs, like the vanishing gradient. This change in the underlying equations can be easily transported into the Reservoir Computing paradigm, by switching the RNN equations in the reservoir with the GRU equations. This approach has been explored in [^3] and [^4]. Different variations of GRU have been proposed [^5][^6]; this section is subdivided into different sections that go into detail about the governing equations and the implementation of them into ReservoirComputing.jl. Like before, to access the GRU reservoir driver, it suffices to change the `reservoir_diver` keyword argument for `ESN` with `GRU()`. All the variations that will be presented can be used in this package by leveraging the keyword argument `variant` in the method `GRU()` and specifying the chosen variant: `FullyGated()` or `Minimal()`. Other variations are possible by modifying the inner layers and reservoirs. The default is set to the standard version `FullyGated()`. The first section will go into more detail about the default of the `GRU()` method, and the following ones will refer to it to minimize repetitions. This example was run on Julia v1.7.2.
+Gated Recurrent Units (GRUs) [Cho2014](@cite) have been proposed in more recent years with the intent of limiting notable problems of RNNs, like the vanishing gradient. This change in the underlying equations can be easily transported into the Reservoir Computing paradigm, by switching the RNN equations in the reservoir with the GRU equations. This approach has been explored in [Wang2020](@cite) and [Sarli2020](@cite). Different variations of GRU have been proposed [Dey2017](@cite); this section is subdivided into different sections that go into detail about the governing equations and the implementation of them into ReservoirComputing.jl. Like before, to access the GRU reservoir driver, it suffices to change the `reservoir_driver` keyword argument for `ESN` with `GRU()`. All the variations that will be presented can be used in this package by leveraging the keyword argument `variant` in the method `GRU()` and specifying the chosen variant: `FullyGated()` or `Minimal()`. Other variations are possible by modifying the inner layers and reservoirs. The default is set to the standard version `FullyGated()`. The first section will go into more detail about the default of the `GRU()` method, and the following ones will refer to it to minimize repetitions.

 ### Standard GRU

@@ -104,7 +104,7 @@ Going over the `GRU` keyword argument, it will be explained how to feed the desi

 - `activation_function` is a vector with default values `[NNlib.sigmoid, NNlib.sigmoid, tanh]`. This argument controls the activation functions of the GRU, going from top to bottom. Changing the first element corresponds to changing the activation function for ``\mathbf{r}(t)`` and so on.
 - `inner_layer` is a vector with default values `fill(DenseLayer(), 2)`. This keyword argument controls the ``\mathbf{W}_{\text{in}}``s going from top to bottom like before.
-- `reservoir` is a vector with default value `fill(RandSparseReservoir(), 2)`. In a similar fashion to `inner_layer`, this keyword argument controls the reservoir matrix construction in a top to bottom order.
+- `reservoir` is a vector with default value `fill(RandSparseReservoir(), 2)`. Similarly to `inner_layer`, this keyword argument controls the reservoir matrix construction in a top to bottom order.
 - `bias` is again a vector with default value `fill(DenseLayer(), 2)`. It is meant to control the ``\mathbf{b}``s, going as usual from top to bottom.
 - `variant` controls the GRU variant. The default value is set to `FullyGated()`.

@@ -161,7 +161,7 @@ This variation can be obtained by setting `variation=Minimal()`. The `inner_laye

 To showcase the use of the `GRU()` method, this section will only illustrate the standard `FullyGated()` version. The full script for this example with the data can be found [here](https://github.com/MartinuzziFrancesco/reservoir-computing-examples/tree/main/change_drivers/gru).

-The data used for this example is the Santa Fe laser dataset [^7] retrieved from [here](https://web.archive.org/web/20160427182805/http://www-psych.stanford.edu/%7Eandreas/Time-Series/SantaFe.html). The data is split to account for a next step prediction.
+The data used for this example is the Santa Fe laser dataset [Hbner1989](@cite) retrieved from [here](https://web.archive.org/web/20160427182805/http://www-psych.stanford.edu/%7Eandreas/Time-Series/SantaFe.html). The data is split to account for a next step prediction.

 ```@example gru
 using DelimitedFiles
@@ -241,10 +241,9 @@ println(msd(testing_target, output))
 println(msd(testing_target, output_rnn))
 ```

-[^1]: Lun, Shu-Xian, et al. "_A novel model of leaky integrator echo state network for time-series prediction._" Neurocomputing 159 (2015): 58-66.
-[^2]: Cho, Kyunghyun, et al. “_Learning phrase representations using RNN encoder-decoder for statistical machine translation._” arXiv preprint arXiv:1406.1078 (2014).
-[^3]: Wang, Xinjie, Yaochu Jin, and Kuangrong Hao. "_A Gated Recurrent Unit based Echo State Network._" 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.
-[^4]: Di Sarli, Daniele, Claudio Gallicchio, and Alessio Micheli. "_Gated Echo State Networks: a preliminary study._" 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2020.
-[^5]: Dey, Rahul, and Fathi M. Salem. "_Gate-variants of gated recurrent unit (GRU) neural networks._" 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS). IEEE, 2017.
-[^6]: Zhou, Guo-Bing, et al. "_Minimal gated unit for recurrent neural networks._" International Journal of Automation and Computing 13.3 (2016): 226-234.
-[^7]: Hübner, Uwe, Nimmi B. Abraham, and Carlos O. Weiss. "_Dimensions and entropies of chaotic intensity pulsations in a single-mode far-infrared NH 3 laser._" Physical Review A 40.11 (1989): 6354.
+## References
+
+```@bibliography
+Pages = ["different_drivers.md"]
+Canonical = false
+```
````
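Note: a hedged usage sketch of the two drivers this page documents; the keyword names come from the prose in the diff above, while `train_data` and all numeric values are illustrative assumptions.

```julia
# Hypothetical driver selection; only the keyword names are taken from the docs.
using ReservoirComputing, NNlib

train_data = rand(1, 300)  # assumed single-feature input series

# MRNN: x(t+1) = (1-α)x(t) + λ₁f₁(⋅) + λ₂f₂(⋅), per the equation above
mrnn = MRNN(; activation_function = (tanh, NNlib.sigmoid),  # f₁, f₂
    leaky_coefficient = 0.95,                               # α
    scaling_factor = (0.8, 0.2))                            # λ₁, λ₂

# GRU: FullyGated() is the default variant, Minimal() the other built-in
gru = GRU(; variant = FullyGated())

# either driver is selected through the ESN's reservoir_driver keyword
esn = ESN(train_data; reservoir_driver = gru)
```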

docs/src/esn_tutorials/hybrid.md

Lines changed: 6 additions & 3 deletions

````diff
@@ -1,6 +1,6 @@
 # Hybrid Echo State Networks

-Following the idea of giving physical information to machine learning models, the hybrid echo state networks [^1] try to achieve this results by feeding model data into the ESN. In this example, it is explained how to create and leverage such models in ReservoirComputing.jl.
+Following the idea of giving physical information to machine learning models, hybrid echo state networks [Pathak2018](@cite) try to achieve this result by feeding model data into the ESN. In this example, it is explained how to create and leverage such models in ReservoirComputing.jl.

 ## Generating the data

@@ -94,6 +94,9 @@ plot(p1, p2, p3; plot_title="Lorenz System Coordinates",
     legendfontsize=12, titlefontsize=20)
 ```

-## Bibliography
+## References

-[^1]: Pathak, Jaideep, et al. "_Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model._" Chaos: An Interdisciplinary Journal of Nonlinear Science 28.4 (2018): 041101.
+```@bibliography
+Pages = ["hybrid.md"]
+Canonical = false
+```
````
