Labels: bug (Something isn't working), help wanted (Extra attention is needed)
Description
Take the following regression model for example:
```python
import pymc as pm
import numpy as np
import pymc_extras as pmx

X = np.random.normal(size=(100, 3))

with pm.Model() as m:
    beta = pm.Normal('beta', 0, 1, shape=(3,))
    alpha = pm.Normal('alpha')
    mu = alpha + X @ beta
    sigma = pm.Exponential('sigma', 1)
    y_hat = pm.Normal('y_hat', mu=mu, sigma=sigma)

    data = pm.sample_prior_predictive(draws=1).prior.y_hat.values.squeeze()
```
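Neither the design matrix nor the prior-predictive draw is seeded above, so the exact objective values shown below will vary between runs. A seeded variant would look like the following sketch (the seed values are arbitrary; `random_seed` is a standard `sample_prior_predictive` argument):

```python
# Hypothetical seeded setup so the reported numbers are reproducible
# (seed values are arbitrary):
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# ...same model block as above, then:
with m:
    data = pm.sample_prior_predictive(
        draws=1, random_seed=42
    ).prior.y_hat.values.squeeze()
```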
Fit with just one iteration so we can see the initial loss value:
```python
with pm.observe(m, {'y_hat': data}):
    idata = pmx.find_MAP(maxiter=1)
```

```
Minimizing   Elapsed   Iteration   Objective        ||grad||
             0:00:00   3/3         27638.14142459   88.96384921
```
The same fit compiled with the JAX backend:

```python
with pm.observe(m, {'y_hat': data}):
    idata = pmx.find_MAP(maxiter=1, compile_kwargs={'mode': 'JAX'})
```

```
Minimizing   Elapsed   Iteration   Objective      ||grad||
             0:00:00   3/3         166.24722983   88.96384921
```
Both backends converge to the same answer, and the gradient norms are identical, but the reported loss values are very different, which is weird.
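A minimal check to help localize this, assuming the discrepancy is in the compiled objective itself rather than in the progress-bar reporting: compile the observed model's joint logp under both backends and evaluate it at the same point. `pm.observe`, `Model.initial_point`, and `Model.compile_fn` are standard PyMC APIs; the comparison itself is just a sketch (note `find_MAP`'s objective is the negative logp on the transformed space, so this only probes the underlying graph):

```python
# Sketch: evaluate the same joint logp under both backends at one point,
# to see whether the objective graph disagrees or only the reported value.
m_obs = pm.observe(m, {'y_hat': data})
point = m_obs.initial_point()

logp_c = m_obs.compile_fn(m_obs.logp())                # default (C) backend
logp_jax = m_obs.compile_fn(m_obs.logp(), mode='JAX')  # JAX backend

print(logp_c(point), logp_jax(point))  # expected to agree up to float error
```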