added requested docstrings #2692
base: main
Changes from 1 commit
Hunk `@@ -82,7 +82,46 @@ end`:

```julia
DynamicPPL.initialsampler(::Sampler{<:Hamiltonian}) = SampleFromUniform()

# Handle setting `nadapts` and `discard_initial`
"""
    sample(
        rng::AbstractRNG,
        model::DynamicPPL.Model,
        sampler::Sampler{<:AdaptiveHamiltonian},
        N::Integer;
        nadapts=sampler.alg.n_adapts,
        discard_adapt=true,
        discard_initial=-1,
        kwargs...
    )

Sample from `model` using an adaptive Hamiltonian sampler (NUTS or HMCDA).

This method handles adaptation and warm-up for adaptive Hamiltonian samplers.

# Keyword Arguments

- `nadapts::Int`: Number of adaptation steps. During these steps, the sampler adapts its
  step size and mass matrix. Defaults to the sampler's `n_adapts` value. If set to `-1`
  (the default for convenience constructors like `NUTS()`), automatically becomes
  `min(1000, N ÷ 2)`.

- `discard_adapt::Bool`: Whether to discard the adaptation samples from the returned chain.
  Defaults to `true`. When `true`, the adaptation samples are not included in the final
  results, as they may not be from the target distribution whilst the sampler is still
  adapting.

- `discard_initial::Int`: Number of initial samples to discard from the chain. Defaults to
  `-1` (automatic). When `-1`, this becomes `nadapts` if `discard_adapt` is `true`, and
  `0` otherwise. Use this to manually specify how many initial samples to discard.

- `initial_params`: Initial parameter values for sampling. See `DynamicPPL.initialstep`
  for details.

Additional keyword arguments are passed to the underlying sampling implementation.

# Note

When resuming from a previous run using `resume_from`, adaptation is disabled
(`nadapts=0`, `discard_adapt=false`, `discard_initial=0`).
"""
function AbstractMCMC.sample(
    rng::AbstractRNG,
    model::DynamicPPL.Model,
```
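The defaulting logic described in the `nadapts` and `discard_initial` entries above can be sketched as a small standalone function. This is illustrative only: `resolve_warmup` is a hypothetical helper written here to mirror the documented behaviour, not part of Turing.jl.

```julia
# Sketch of the documented defaulting behaviour (not the actual implementation).
function resolve_warmup(N; nadapts=-1, discard_adapt=true, discard_initial=-1)
    # nadapts == -1 (the convenience-constructor default) becomes min(1000, N ÷ 2).
    nadapts = nadapts == -1 ? min(1000, N ÷ 2) : nadapts
    # discard_initial == -1 (automatic) follows nadapts when adaptation is discarded.
    discard_initial = discard_initial == -1 ? (discard_adapt ? nadapts : 0) : discard_initial
    return (; nadapts, discard_initial)
end

resolve_warmup(2000)                        # (nadapts = 1000, discard_initial = 1000)
resolve_warmup(500)                         # (nadapts = 250, discard_initial = 250)
resolve_warmup(2000; discard_adapt=false)   # (nadapts = 1000, discard_initial = 0)
```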
Hunk `@@ -175,6 +214,41 @@ function find_initial_params(`:

```julia
    )
end

"""
    initialstep(
        rng::AbstractRNG,
        model::AbstractModel,
        spl::Sampler{<:Hamiltonian},
        vi::AbstractVarInfo;
        initial_params=nothing,
        nadapts=0,
        verbose::Bool=true,
        kwargs...
    )

Perform the initial step for Hamiltonian Monte Carlo sampling.

This function initialises the Hamiltonian, finds a suitable step size (if not provided),
and performs the first sampling step.

# Keyword Arguments

- `initial_params`: Initial parameter values to use for sampling. If `nothing` (the
  default), parameters are resampled from the prior until valid initial values with
  finite log probability and gradient are found. If provided, these values are used
  directly without validation. Must be in the same format as the model's parameters.

- `nadapts::Int`: Number of adaptation steps to be performed. Used internally to set up
  adaptation. Defaults to `0`.

- `verbose::Bool`: Whether to print informative messages (e.g., the automatically
  determined step size). Defaults to `true`.

# Note

If automatic initial parameter search fails after many attempts, an error is raised with
suggestions for how to proceed. Consider providing explicit `initial_params` if this
occurs.
"""
function DynamicPPL.initialstep(
    rng::AbstractRNG,
    model::AbstractModel,
```
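As a usage sketch of supplying explicit initial parameters rather than relying on the automatic prior-resampling search, something like the following should work (the toy `demo` model is assumed for illustration and is not part of this diff):

```julia
using Turing

@model function demo()
    x ~ Normal(0, 1)
end

# Passing initial_params skips the automatic search for valid starting values:
# per the docstring, the values are used directly without validation, so they
# must match the model's parameter format.
chain = sample(demo(), NUTS(), 1000; initial_params=[0.5])
```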
Hunk `@@ -389,7 +463,7 @@ NUTS() # Use default NUTS configuration.`:

```julia
NUTS(1000, 0.65) # Use 1000 adaption steps, and target accept ratio 0.65.
```

Suggested change:

```diff
-NUTS(1000, 0.65) # Use 1000 adaption steps, and target accept ratio 0.65.
+NUTS(1000, 0.65) # Use 1000 adaptation steps, and target accept ratio 0.65.
```
        
          
              
Copilot (AI) commented on Oct 20, 2025 (marked Outdated):
Correct 'adaption' to 'adaptation' in the field comment.

```diff
-n_adapts::Int # number of samples with adaption for ϵ
+n_adapts::Int # number of samples with adaptation for ϵ
```
Are all the other keyword arguments specific to HMC, or do they include generic ones common to all samplers, like `verbose` or something?

Relatedly, it may be worth linking to this in the docstring: https://turinglang.org/docs/usage/sampling-options. Maybe already above?