Commit 5d1640f

update Weaving footer
1 parent 8fadd74 · commit 5d1640f

25 files changed · +43 −34 lines changed

tutorials/DiffEqUncertainty/01-expectation_introduction.jmd

Lines changed: 1 addition & 1 deletion
@@ -346,7 +346,7 @@ p_dist = [truncated(Normal(-.7f0, .1f0), -1f0,0f0)]
 
 The performance gains realized by leveraging batch GPU processing is problem dependent. In this case, the number of batch evaluations required to overcome the overhead of using the GPU exceeds the number of simulations required to converge to the quadrature solution.
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```
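The footer chunk above reads `WEAVE_ARGS[:folder]` and `WEAVE_ARGS[:file]`, which Weave.jl fills in from the `args` keyword of `Weave.weave`. A minimal sketch of how such a document is presumably woven (the paths and values here are illustrative placeholders, not taken from this repository's build script):

```julia
using Weave

# Hypothetical invocation: the `args` Dict is exposed inside the document as
# WEAVE_ARGS, so the footer chunk can look up :folder and :file.
weave("tutorials/DiffEqUncertainty/01-expectation_introduction.jmd";
      doctype = "md2html",
      args = Dict(:folder => "DiffEqUncertainty",
                  :file   => "01-expectation_introduction.jmd"))
```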

tutorials/DiffEqUncertainty/02-AD_and_optimization.jmd

Lines changed: 1 addition & 1 deletion
@@ -292,7 +292,7 @@ begin
 end
 ```
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```

tutorials/Testing/test.jmd

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ author: Chris Rackauckas
 
 This is a test of the builder system.
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```

tutorials/advanced/01-beeler_reuter.jmd

Lines changed: 1 addition & 1 deletion
@@ -655,7 +655,7 @@ heatmap(sol.u[end])
 
 We achieve around a 6x speedup with running the explicit portion of our IMEX solver on a GPU. The major bottleneck of this technique is the communication between CPU and GPU. In its current form, not all of the internals of the method utilize GPU acceleration. In particular, the implicit equations solved by GMRES are performed on the CPU. This partial CPU nature also increases the amount of data transfer that is required between the GPU and CPU (performed every f call). Compiling the full ODE solver to the GPU would solve both of these issues and potentially give a much larger speedup. [JuliaDiffEq developers are currently working on solutions to alleviate these issues](http://www.stochasticlifestyle.com/solving-systems-stochastic-pdes-using-gpus-julia/), but these will only be compatible with native Julia solvers (and not Sundials).
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```

tutorials/advanced/02-advanced_ODE_solving.jmd

Lines changed: 1 addition & 1 deletion
@@ -505,7 +505,7 @@ Note that if your mass matrix is singular, i.e. your system is a DAE, then you
 need to make sure you choose
 [a solver that is compatible with DAEs](https://docs.sciml.ai/latest/solvers/dae_solve/#dae_solve_full-1)
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```
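The hunk above follows the tutorial's note that a singular mass matrix turns the system into a DAE and requires a compatible solver. As a quick illustration of that point, a sketch using a standard Robertson-style example (not code from this commit):

```julia
using DifferentialEquations

# Robertson-style system in mass-matrix form: the third row of M is zero, so the
# third equation is an algebraic constraint and the problem is a DAE.
function rober!(du, u, p, t)
    y1, y2, y3 = u
    du[1] = -0.04y1 + 1e4 * y2 * y3
    du[2] =  0.04y1 - 1e4 * y2 * y3 - 3e7 * y2^2
    du[3] =  y1 + y2 + y3 - 1.0
end

M = [1.0 0.0 0.0
     0.0 1.0 0.0
     0.0 0.0 0.0]                         # singular mass matrix
f = ODEFunction(rober!, mass_matrix = M)
prob = ODEProblem(f, [1.0, 0.0, 0.0], (0.0, 1e5))
sol = solve(prob, Rodas5())               # a Rosenbrock method that handles DAEs
```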

tutorials/exercises/02-workshop_solutions.jmd

Lines changed: 1 addition & 1 deletion
@@ -717,7 +717,7 @@ sim = solve(ensprob, Tsit5(), EnsembleGPUArray(), trajectories=length(z0))
 
 # Information on the Build
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```

tutorials/introduction/01-ode_introduction.jmd

Lines changed: 2 additions & 3 deletions
@@ -340,8 +340,7 @@ sol[3]
 
 These are the basic controls in DifferentialEquations.jl. All equations are defined via a problem type, and the `solve` command is used with an algorithm choice (or the default) to get a solution. Every solution acts the same, like an array `sol[i]` with `sol.t[i]`, and also like a continuous function `sol(t)` with a nice plot command `plot(sol)`. The Common Solver Options can be used to control the solver for any equation type. Lastly, the types used in the numerical solving are determined by the input types, and this can be used to solve with arbitrary precision and add additional optimizations (this can be used to solve via GPUs for example!). While this was shown on ODEs, these techniques generalize to other types of equations as well.
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
-SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
+SciMLTutorials.bench_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```
-
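The paragraph in that hunk summarizes the basic workflow: define a problem type, call `solve`, and use the solution as an array, an interpolating function, and a plot recipe. In the spirit of that tutorial's opening example, a minimal sketch:

```julia
using DifferentialEquations, Plots

# Exponential growth du/dt = 1.01u — the introductory scalar example.
f(u, p, t) = 1.01u
prob = ODEProblem(f, 0.5, (0.0, 1.0))
sol = solve(prob, Tsit5())

sol[3]      # third saved state, paired with sol.t[3]
sol(0.45)   # continuous interpolation at t = 0.45
plot(sol)   # plot recipe for the whole solution
```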

tutorials/introduction/02-choosing_algs.jmd

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ If you are familiar with MATLAB, SciPy, or R's DESolve, here's a quick translati
 - `ode15i` -> `IDA()`, though in many cases `Rodas4()` can handle the DAE and is
 significantly more efficient
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```
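For the `ode15i` -> `IDA()` line in that hunk, the fully implicit equation is written in residual form and handed to Sundials' `IDA`. A hedged sketch under that assumption (the toy system below is illustrative, not from the tutorial):

```julia
using DifferentialEquations, Sundials

# Residual form f(resid, du, u, p, t) = 0, the fully implicit shape ode15i users expect.
function dae_resid!(resid, du, u, p, t)
    resid[1] = du[1] - 1.01u[1]      # differential equation
    resid[2] = u[1] + u[2] - 1.0     # algebraic constraint
end

u0  = [0.5, 0.5]
du0 = [0.505, -0.505]                # consistent initial derivatives
prob = DAEProblem(dae_resid!, du0, u0, (0.0, 1.0),
                  differential_vars = [true, false])
sol = solve(prob, IDA())
```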

tutorials/introduction/03-optimizing_diffeq_code.jmd

Lines changed: 1 addition & 1 deletion
@@ -486,7 +486,7 @@ Why is `CVODE_BDF` doing well? What's happening is that, because the problem is
 
 Julia gives you the tools to optimize the solver "all the way", but you need to make use of it. The main thing to avoid is temporary allocations. For small systems, this is effectively done via static arrays. For large systems, this is done via in-place operations and cache arrays. Either way, the resulting solution can be immensely sped up over vectorized formulations by using these principles.
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```
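The paragraph in that hunk names the two allocation-avoidance strategies. A compact sketch of both, in the style of that tutorial's Lorenz example:

```julia
using DifferentialEquations, StaticArrays

# Small system: a static-array, out-of-place RHS avoids heap allocation entirely.
lorenz_static(u, p, t) = SA[10.0 * (u[2] - u[1]),
                            u[1] * (28.0 - u[3]) - u[2],
                            u[1] * u[2] - (8 / 3) * u[3]]
prob_small = ODEProblem(lorenz_static, SA[1.0, 0.0, 0.0], (0.0, 100.0))

# Large system: an in-place RHS writes into a preallocated du, avoiding temporaries.
function lorenz!(du, u, p, t)
    du[1] = 10.0 * (u[2] - u[1])
    du[2] = u[1] * (28.0 - u[3]) - u[2]
    du[3] = u[1] * u[2] - (8 / 3) * u[3]
end
prob_large = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 100.0))

solve(prob_small, Tsit5());
solve(prob_large, Tsit5());
```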

tutorials/introduction/04-callbacks_and_events.jmd

Lines changed: 1 addition & 1 deletion
@@ -321,7 +321,7 @@ saved_values.saveval
 
 Go back to the Harmonic oscillator. Use the `SavingCallback` to save an array for the energy over time, and do this both with and without the `ManifoldProjection`. Plot the results to see the difference the projection makes.
 
-```{julia; echo=false; skip="notebook"}
+```julia, echo = false, skip="notebook"
 using SciMLTutorials
 SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file])
 ```
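The exercise in that hunk asks for energy tracking with `SavingCallback`. A minimal sketch of the saving half, assuming the unit harmonic oscillator and its obvious energy function (the `ManifoldProjection` comparison is left to the exercise):

```julia
using DifferentialEquations, DiffEqCallbacks

# Harmonic oscillator u'' = -u as a first-order system.
function osc!(du, u, p, t)
    du[1] = u[2]
    du[2] = -u[1]
end

# Record the energy E = (x^2 + v^2)/2 at every accepted step.
saved_values = SavedValues(Float64, Float64)
cb = SavingCallback((u, t, integrator) -> (u[1]^2 + u[2]^2) / 2, saved_values)

prob = ODEProblem(osc!, [1.0, 0.0], (0.0, 10.0))
solve(prob, Tsit5(), callback = cb)

saved_values.t, saved_values.saveval   # times and saved energies
```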
