
Commit b895c20 (parent 9379d23)
Author: Quarto GHA Workflow Runner
Commit message: Built site for gh-pages

File tree

5 files changed

+56
-62
lines changed


.nojekyll

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-8d82f6e0
+01af3137

examples/heat.html

Lines changed: 20 additions & 21 deletions
@@ -484,22 +484,22 @@ <h2 class="anchored" data-anchor-id="data">Data</h2>
484484
<section id="implementation-in-textttdeep_tensor" class="level1">
485485
<h1>Implementation in <span class="math inline">\(\texttt{deep\_tensor}\)</span></h1>
486486
<p>We will now use <span class="math inline">\(\texttt{deep\_tensor}\)</span> to construct a DIRT approximation to the posterior. To accelerate this process, we will use a reduced order model in place of the full model. Then, we will illustrate some debiasing techniques which use the DIRT approximation to the posterior, in combination with the full model, to accelerate the process of drawing exact posterior samples.</p>
487-
<div id="c970d131" class="cell" data-execution_count="1">
487+
<div id="bc0e66d9" class="cell" data-execution_count="1">
488488
<div class="sourceCode cell-code" id="cb1"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="im">from</span> matplotlib <span class="im">import</span> pyplot <span class="im">as</span> plt</span>
489489
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> torch</span>
490490
<span id="cb1-3"><a href="#cb1-3" aria-hidden="true" tabindex="-1"></a></span>
491491
<span id="cb1-4"><a href="#cb1-4" aria-hidden="true" tabindex="-1"></a><span class="im">import</span> deep_tensor <span class="im">as</span> dt</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
492492
</div>
493493
<p>We begin by defining the prior, (full) model and reduced order model.</p>
494494
<p>The full model is implemented in <a href="https://fenicsproject.org/download/archive/">FEniCS</a>, on a <span class="math inline">\(96 \times 32\)</span> grid, using piecewise linear basis functions. Timestepping is done using the backward Euler method. The reduced-order model is constructed using the proper orthogonal decomposition <span class="citation" data-cites="Benner2015">(see, <em>e.g.</em>, <a href="#ref-Benner2015" role="doc-biblioref">Benner, Gugercin, and Willcox 2015</a>)</span>.</p>
495-
<div id="65e73428" class="cell" data-execution_count="3">
495+
<div id="624be0a2" class="cell" data-execution_count="3">
496496
<div class="sourceCode cell-code" id="cb2"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="im">from</span> models.heat <span class="im">import</span> setup_heat_problem</span>
497497
<span id="cb2-2"><a href="#cb2-2" aria-hidden="true" tabindex="-1"></a></span>
498498
<span id="cb2-3"><a href="#cb2-3" aria-hidden="true" tabindex="-1"></a><span class="co"># Construct the prior, full model and reduced order model</span></span>
499499
<span id="cb2-4"><a href="#cb2-4" aria-hidden="true" tabindex="-1"></a>prior, model, rom <span class="op">=</span> setup_heat_problem()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
500500
</div>
501501
<p>Next, we will generate the true log-diffusion coefficient using a sample from the prior. The true log-diffusion coefficient is plotted in <a href="#fig-ktrue" class="quarto-xref">Figure&nbsp;1</a>.</p>
502-
<div id="8baea6f2" class="cell" data-execution_count="4">
502+
<div id="2ab4c3c2" class="cell" data-execution_count="4">
503503
<div class="sourceCode cell-code" id="cb3"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a>xi_true <span class="op">=</span> torch.randn((prior.dim,))</span>
504504
<span id="cb3-2"><a href="#cb3-2" aria-hidden="true" tabindex="-1"></a>logk_true <span class="op">=</span> prior.transform(xi_true)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
505505
</div>
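The body of `setup_heat_problem` is not part of this diff, so the following is only a hedged sketch of what `prior.transform` typically does for a Gaussian process prior: map standard-normal coefficients through a factor of the prior covariance (the function name and signature below are illustrative, not the package's API).

```python
import torch

def gaussian_prior_transform(xi: torch.Tensor,
                             mean: torch.Tensor,
                             cov: torch.Tensor) -> torch.Tensor:
    """Map white-noise coefficients xi ~ N(0, I) to a sample from
    N(mean, cov) via a Cholesky factor of the covariance."""
    L = torch.linalg.cholesky(cov)
    return mean + L @ xi
```

With an identity covariance this reduces to the identity map, which is consistent with the identity preconditioner chosen later in the tutorial.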
@@ -528,7 +528,7 @@ <h1>Implementation in <span class="math inline">\(\texttt{deep\_tensor}\)</span>
528528
</div>
529529
</div>
530530
<p>Next, we will solve the (full) model to obtain the modelled temperatures corresponding to the true diffusion coefficient, and use these to generate some synthetic data. <a href="#fig-utrue" class="quarto-xref">Figure&nbsp;2</a> shows the true temperature field at time <span class="math inline">\(T=10\)</span>, as well as the observation locations.</p>
531-
<div id="f5f017da" class="cell" data-execution_count="6">
531+
<div id="19af3e66" class="cell" data-execution_count="6">
532532
<div class="sourceCode cell-code" id="cb5"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a><span class="co"># Generate true temperature field</span></span>
533533
<span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a>u_true <span class="op">=</span> model.solve(logk_true)</span>
534534
<span id="cb5-3"><a href="#cb5-3" aria-hidden="true" tabindex="-1"></a></span>
@@ -569,7 +569,7 @@ <h1>Implementation in <span class="math inline">\(\texttt{deep\_tensor}\)</span>
569569
<section id="building-the-dirt-object" class="level2">
570570
<h2 class="anchored" data-anchor-id="building-the-dirt-object">Building the DIRT Object</h2>
571571
<p>Now we will build a DIRT object to approximate the posterior density of the log-diffusion coefficient for the reduced-order model. We begin by defining functions which return the potential associated with the likelihood and prior.</p>
572-
<div id="20897a1b" class="cell" data-execution_count="8">
572+
<div id="30a8fa22" class="cell" data-execution_count="8">
573573
<div class="sourceCode cell-code" id="cb7"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a><span class="kw">def</span> neglogpri(xs: torch.Tensor) <span class="op">-&gt;</span> torch.Tensor:</span>
574574
<span id="cb7-2"><a href="#cb7-2" aria-hidden="true" tabindex="-1"></a> <span class="co">"""Returns the negative log prior density evaluated a given set of </span></span>
575575
<span id="cb7-3"><a href="#cb7-3" aria-hidden="true" tabindex="-1"></a><span class="co"> samples.</span></span>
@@ -599,27 +599,26 @@ <h2 class="anchored" data-anchor-id="building-the-dirt-object">Building the DIRT
599599
<span id="cb7-27"><a href="#cb7-27" aria-hidden="true" tabindex="-1"></a> <span class="cf">return</span> _negloglik(rom, xs)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
600600
</div>
601601
<p>Next, we specify a preconditioner. Because the prior of the coefficients <span class="math inline">\(\{\xi^{(i)}\}_{i=1}^{d}\)</span> is the standard Gaussian, the mapping between a Gaussian reference and the prior is simply the identity mapping. This is an appropriate choice of preconditioner in the absence of any other information.</p>
602-
<div id="0e787880" class="cell" data-execution_count="9">
602+
<div id="a61f24e6" class="cell" data-execution_count="9">
603603
<div class="sourceCode cell-code" id="cb8"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a>reference <span class="op">=</span> dt.GaussianReference()</span>
604604
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a>preconditioner <span class="op">=</span> dt.IdentityMapping(prior.dim, reference)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
605605
</div>
606606
<p>Next, we specify a polynomial basis.</p>
607-
<div id="d9f51550" class="cell" data-execution_count="10">
607+
<div id="29243200" class="cell" data-execution_count="10">
608608
<div class="sourceCode cell-code" id="cb9"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a>poly <span class="op">=</span> dt.Legendre(order<span class="op">=</span><span class="dv">20</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
609609
</div>
610610
<p>Finally, we can construct the DIRT object.</p>
611-
<div id="500852ef" class="cell" data-execution_count="11">
612-
<div class="sourceCode cell-code" id="cb10"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a><span class="co"># Reduce the initial and maximum tensor ranks to reduce the cost of </span></span>
613-
<span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a><span class="co"># each layer</span></span>
614-
<span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a>tt_options <span class="op">=</span> dt.TTOptions(init_rank<span class="op">=</span><span class="dv">12</span>, max_rank<span class="op">=</span><span class="dv">12</span>)</span>
615-
<span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a></span>
616-
<span id="cb10-5"><a href="#cb10-5" aria-hidden="true" tabindex="-1"></a>dirt <span class="op">=</span> dt.DIRT(</span>
617-
<span id="cb10-6"><a href="#cb10-6" aria-hidden="true" tabindex="-1"></a> negloglik_rom, </span>
618-
<span id="cb10-7"><a href="#cb10-7" aria-hidden="true" tabindex="-1"></a> neglogpri,</span>
619-
<span id="cb10-8"><a href="#cb10-8" aria-hidden="true" tabindex="-1"></a> preconditioner,</span>
620-
<span id="cb10-9"><a href="#cb10-9" aria-hidden="true" tabindex="-1"></a> poly, </span>
621-
<span id="cb10-10"><a href="#cb10-10" aria-hidden="true" tabindex="-1"></a> tt_options<span class="op">=</span>tt_options</span>
622-
<span id="cb10-11"><a href="#cb10-11" aria-hidden="true" tabindex="-1"></a>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
611+
<div id="1551107c" class="cell" data-execution_count="11">
612+
<div class="sourceCode cell-code" id="cb10"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a><span class="co"># Reduce the initial and maximum tensor ranks to reduce the cost of each layer</span></span>
613+
<span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a>tt_options <span class="op">=</span> dt.TTOptions(init_rank<span class="op">=</span><span class="dv">12</span>, max_rank<span class="op">=</span><span class="dv">12</span>)</span>
614+
<span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a></span>
615+
<span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a>dirt <span class="op">=</span> dt.DIRT(</span>
616+
<span id="cb10-5"><a href="#cb10-5" aria-hidden="true" tabindex="-1"></a> negloglik_rom, </span>
617+
<span id="cb10-6"><a href="#cb10-6" aria-hidden="true" tabindex="-1"></a> neglogpri,</span>
618+
<span id="cb10-7"><a href="#cb10-7" aria-hidden="true" tabindex="-1"></a> preconditioner,</span>
619+
<span id="cb10-8"><a href="#cb10-8" aria-hidden="true" tabindex="-1"></a> poly, </span>
620+
<span id="cb10-9"><a href="#cb10-9" aria-hidden="true" tabindex="-1"></a> tt_options<span class="op">=</span>tt_options</span>
621+
<span id="cb10-10"><a href="#cb10-10" aria-hidden="true" tabindex="-1"></a>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
623622
</div>
624623
</section>
625624
<section id="debiasing" class="level2">
@@ -629,7 +628,7 @@ <h2 class="anchored" data-anchor-id="debiasing">Debiasing</h2>
629628
<section id="mcmc-sampling" class="level3">
630629
<h3 class="anchored" data-anchor-id="mcmc-sampling">MCMC Sampling</h3>
631630
<p>First, we will illustrate how to use the DIRT density as part of an MCMC sampler. The simplest sampler, which we demonstrate here, is an independence sampler using the DIRT density as a proposal density.</p>
632-
<div id="a091d18e" class="cell" data-execution_count="13">
631+
<div id="a456cba8" class="cell" data-execution_count="13">
633632
<div class="sourceCode cell-code" id="cb11"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a><span class="co"># Generate a set of samples from the DIRT density</span></span>
634633
<span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a>rs <span class="op">=</span> dirt.reference.random(d<span class="op">=</span>dirt.dim, n<span class="op">=</span><span class="dv">5000</span>)</span>
635634
<span id="cb11-3"><a href="#cb11-3" aria-hidden="true" tabindex="-1"></a>xs, potentials_dirt <span class="op">=</span> dirt.eval_irt(rs)</span>
@@ -649,7 +648,7 @@ <h3 class="anchored" data-anchor-id="mcmc-sampling">MCMC Sampling</h3>
649648
<section id="importance-sampling" class="level3">
650649
<h3 class="anchored" data-anchor-id="importance-sampling">Importance Sampling</h3>
651650
<p>As an alternative to MCMC, we can also apply importance sampling to reweight samples from the DIRT approximation appropriately.</p>
652-
<div id="a4fbcc75" class="cell" data-execution_count="14">
651+
<div id="3977f219" class="cell" data-execution_count="14">
653652
<div class="sourceCode cell-code" id="cb13"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb13-1"><a href="#cb13-1" aria-hidden="true" tabindex="-1"></a>res <span class="op">=</span> dt.run_importance_sampling(potentials_dirt, potentials_exact)</span>
654653
<span id="cb13-2"><a href="#cb13-2" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(<span class="ss">f"ESS: </span><span class="sc">{</span>res<span class="sc">.</span>ess<span class="sc">:.4f}</span><span class="ss">"</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
655654
<div class="cell-output cell-output-stdout">

reference/DIRT.html

Lines changed: 3 additions & 8 deletions
@@ -854,17 +854,12 @@ <h3 class="anchored" data-anchor-id="deep_tensor.DIRT.eval_cirt">eval_cirt</h3>
854854
<span id="cb6-6"><a href="#cb6-6" aria-hidden="true" tabindex="-1"></a>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
855855
<p>Evaluates the conditional inverse Rosenblatt transport.</p>
856856
<p>Returns the conditional inverse Rosenblatt transport evaluated at a set of samples in the approximation domain.</p>
857-
<p>The conditional inverse Rosenblatt transport takes the form</p>
858-
<p><span class="math display">\[
859-
X|Y = \mathcal{T}(\mathcal{R}_{k}(Y), R),
860-
\]</span></p>
861-
<p>where <span class="math inline">\(Y\)</span> is a <span class="math inline">\(k\)</span>-dimensional random variable, <span class="math inline">\(R\)</span> is a <span class="math inline">\(n-k\)</span>-dimensional reference random variable, <span class="math inline">\(\mathcal{R}(\,\cdot\,)\)</span> denotes the (full) Rosenblatt transport, <span class="math inline">\(\mathcal{T}(\,\cdot\,) := \mathcal{R}^{-1}(\,\cdot\,)\)</span>, denotes its inverse, and <span class="math inline">\(\mathcal{R}_{k}(\,\cdot\,)\)</span> denotes the Rosenblatt transport for the first (or last) <span class="math inline">\(k\)</span> variables.</p>
862857
<section id="parameters-5" class="level4 doc-section doc-section-parameters">
863858
<h4 class="doc-section doc-section-parameters anchored" data-anchor-id="parameters-5">Parameters</h4>
864859
<dl>
865860
<dt><code><span class="parameter-name"><strong>ys</strong></span> <span class="parameter-annotation-sep">:</span> <span class="parameter-annotation"><a href="`torch.Tensor`">Tensor</a></span></code></dt>
866861
<dd>
867-
<p>A <span class="math inline">\(1 \times k\)</span> (if the same realisation of <span class="math inline">\(Y\)</span> is to be used for all samples in <code>rs</code>) or <span class="math inline">\(n \times k\)</span> matrix (if a different realisation of <span class="math inline">\(Y\)</span> is to be used for all samples in <code>rs</code>) containing samples from the approximation domain.</p>
862+
<p>A matrix containing samples from the approximation domain. The matrix should have dimensions <span class="math inline">\(1 \times k\)</span> (if the same realisation of <span class="math inline">\(Y\)</span> is to be used for all samples in <code>rs</code>) or <span class="math inline">\(n \times k\)</span> (if a different realisation of <span class="math inline">\(Y\)</span> is to be used for each sample in <code>rs</code>).</p>
868863
</dd>
869864
<dt><code><span class="parameter-name"><strong>rs</strong></span> <span class="parameter-annotation-sep">:</span> <span class="parameter-annotation"><a href="`torch.Tensor`">Tensor</a></span></code></dt>
870865
<dd>
@@ -876,7 +871,7 @@ <h4 class="doc-section doc-section-parameters anchored" data-anchor-id="paramete
876871
</dd>
877872
<dt><code><span class="parameter-name"><strong>n_layers</strong></span> <span class="parameter-annotation-sep">:</span> <span class="parameter-annotation"><a href="`int`">int</a> | None</span> <span class="parameter-default-sep">=</span> <span class="parameter-default">None</span></code></dt>
878873
<dd>
879-
<p>The number of layers of the deep inverse Rosenblatt transport to push the samples forward under. If not specified, the samples will be pushed forward through all the layers.</p>
874+
<p>The number of layers of the DIRT object to use when evaluating the CIRT. If not specified, all layers will be used.</p>
880875
</dd>
881876
</dl>
882877
</section>
@@ -889,7 +884,7 @@ <h4 class="doc-section doc-section-returns anchored" data-anchor-id="returns-4">
889884
</dd>
890885
<dt><code><span class="parameter-name"><strong>neglogfxs</strong></span> <span class="parameter-annotation-sep">:</span> <span class="parameter-annotation"><a href="`torch.Tensor`">Tensor</a></span></code></dt>
891886
<dd>
892-
<p>An <span class="math inline">\(n\)</span>-dimensional vector containing the potential function of the approximation to the conditional density of <span class="math inline">\(X \textbar Y\)</span> evaluated at each sample in <code>rs</code>.</p>
887+
<p>An <span class="math inline">\(n\)</span>-dimensional vector containing the potential function of the approximation to the conditional density of <span class="math inline">\(X \textbar Y\)</span> evaluated at each sample in <code>xs</code>.</p>
893888
</dd>
894889
</dl>
895890
</section>
