# run_irt_pcn { #deep_tensor.run_irt_pcn }

```python
run_irt_pcn(
    potential: Callable[[Tensor], Tensor],
    dirt: AbstractDIRT,
    n: int,
    dt: float = 2.0,
    r0: Tensor | None = None,
    subset: str = 'first',
    verbose: bool = True,
)
```

Runs a pCN sampler using the DIRT mapping.

Runs a preconditioned Crank-Nicolson sampler (Cotter *et al.*,
2013) to characterise the pullback of the target density under the
DIRT mapping, then pushes the resulting samples forward under the
DIRT mapping to obtain samples distributed according to the target.
This idea was initially outlined by Cui *et al.* (2023).

Note that the pCN proposal is only applicable to problems with a
Gaussian reference density.

## Parameters {.doc-section .doc-section-parameters}

<code>[**potential**]{.parameter-name} [:]{.parameter-annotation-sep} [[Callable](`typing.Callable`)\[\[[Tensor](`torch.Tensor`)\], [Tensor](`torch.Tensor`)\]]{.parameter-annotation}</code>

:   A function that returns the negative logarithm of the (possibly unnormalised) target density at a given sample.

<code>[**dirt**]{.parameter-name} [:]{.parameter-annotation-sep} [[AbstractDIRT](`deep_tensor.irt.AbstractDIRT`)]{.parameter-annotation}</code>

:   A previously constructed DIRT object.

<code>[**n**]{.parameter-name} [:]{.parameter-annotation-sep} [[int](`int`)]{.parameter-annotation}</code>

:   The length of the Markov chain to construct.

<code>[**dt**]{.parameter-name} [:]{.parameter-annotation-sep} [[float](`float`)]{.parameter-annotation} [ = ]{.parameter-default-sep} [2.0]{.parameter-default}</code>

:   The pCN step size, $\Delta t$. If not specified, a value of $\Delta t = 2$ (independence sampler) is used.

<code>[**r0**]{.parameter-name} [:]{.parameter-annotation-sep} [[Tensor](`torch.Tensor`) \| None]{.parameter-annotation} [ = ]{.parameter-default-sep} [None]{.parameter-default}</code>

:   The starting state. This should be a $1 \times k$ matrix containing a sample from the reference domain. If not passed in, the mean of the reference density is used.

<code>[**subset**]{.parameter-name} [:]{.parameter-annotation-sep} [[str](`str`)]{.parameter-annotation} [ = ]{.parameter-default-sep} [\'first\']{.parameter-default}</code>

:   If the samples contain a subset of the variables (*i.e.,* $k < d$), whether they correspond to the first $k$ variables (`subset='first'`) or the last $k$ variables (`subset='last'`).

<code>[**verbose**]{.parameter-name} [:]{.parameter-annotation-sep} [[bool](`bool`)]{.parameter-annotation} [ = ]{.parameter-default-sep} [True]{.parameter-default}</code>

:   Whether to print diagnostic information during the sampling process.

## Returns {.doc-section .doc-section-returns}

<code>[**res**]{.parameter-name} [:]{.parameter-annotation-sep} [[MCMCResult](`deep_tensor.debiasing.mcmc.MCMCResult`)]{.parameter-annotation}</code>

:   An object containing the constructed Markov chain and some diagnostic information.

## Notes {.doc-section .doc-section-notes}

When the reference density is the standard Gaussian density (that
is, $\rho(\theta) = \mathcal{N}(0_{d}, I_{d})$), the pCN proposal
(given current state $\theta^{(i)}$) takes the form
$$
    \theta' = \frac{2-\Delta t}{2+\Delta t} \theta^{(i)}
        + \frac{2\sqrt{2\Delta t}}{2 + \Delta t} \tilde{\theta},
$$
where $\tilde{\theta} \sim \rho(\,\cdot\,)$ and $\Delta t$ denotes
the step size.

When $\Delta t = 2$, the resulting sampler is an independence
sampler. When $\Delta t > 2$, the proposals are negatively
correlated with the current state, and when $\Delta t < 2$, they
are positively correlated.

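As a quick sanity check, the proposal above can be sketched in plain Python (using scalar floats rather than the library's tensors; `pcn_step` is a hypothetical helper, not part of the package). Note that the two coefficients satisfy $a^2 + b^2 = 1$, so the proposal leaves the standard Gaussian reference density invariant:

```python
import math
import random

def pcn_step(theta, dt, rng=random):
    """One pCN proposal from the current state `theta` (a list of
    floats), assuming a standard Gaussian reference density."""
    a = (2.0 - dt) / (2.0 + dt)                  # weight on the current state
    b = 2.0 * math.sqrt(2.0 * dt) / (2.0 + dt)   # weight on the fresh Gaussian draw
    # a**2 + b**2 == 1, so the proposal preserves the reference density.
    return [a * t + b * rng.gauss(0.0, 1.0) for t in theta]

# The sign of the coefficient on the current state determines whether
# proposals are positively correlated (dt < 2), independent (dt = 2),
# or negatively correlated (dt > 2) with the current state.
for dt in (0.5, 2.0, 8.0):
    a = (2.0 - dt) / (2.0 + dt)
    b = 2.0 * math.sqrt(2.0 * dt) / (2.0 + dt)
    print(f"dt={dt}: a={a:+.3f}, a^2+b^2={a*a + b*b:.3f}")
```

This only illustrates the proposal mechanism; `run_irt_pcn` additionally applies the accept/reject step and the DIRT push-forward.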
## References {.doc-section .doc-section-references}

Cotter, SL, Roberts, GO, Stuart, AM and White, D (2013). *[MCMC
methods for functions: Modifying old algorithms to make them
faster](https://doi.org/10.1214/13-STS421).* Statistical Science
**28**, 424--446.

Cui, T, Dolgov, S and Zahm, O (2023). *[Scalable conditional deep
inverse Rosenblatt transports using tensor trains and gradient-based
dimension reduction](https://doi.org/10.1016/j.jcp.2023.112103).*
Journal of Computational Physics **485**, 112103.