Description
I'm trying to forecast sales values for future periods. Everything previously worked fine on my laptop, but on my PC an error occurs, and even the basic test example fails.
Installed versions: Python 3.13, neuralprophet 0.8.0, pytorch-lightning 1.9.5.
The full output is below.
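For reference, this is the test example that fails (the same notebook cell shown in the traceback below; the CSV path is just a placeholder for my sales data):

```python
import pandas as pd
from neuralprophet import NeuralProphet

# Placeholder for my daily sales history; NeuralProphet expects the
# columns "ds" (datestamp) and "y" (value to forecast).
df = pd.read_csv("sales.csv")

m = NeuralProphet()
m.set_plotting_backend("plotly-static")  # show plots correctly in jupyter notebooks
metrics = m.fit(df)  # raises the UnpicklingError shown below
```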
WARNING - (NP.forecaster.fit) - When Global modeling with local normalization, metrics are displayed in normalized scale.
WARNING - (py.warnings._showwarnmsg) - C:\Users\user\anaconda3\Lib\site-packages\neuralprophet\df_utils.py:1152: FutureWarning: Series.view is deprecated and will be removed in a future version. Use astype as an alternative to change the dtype.
converted_ds = pd.to_datetime(ds_col, utc=True).view(dtype=np.int64)
INFO - (NP.df_utils._infer_frequency) - Major frequency D corresponds to 99.966% of the data.
INFO - (NP.df_utils._infer_frequency) - Dataframe freq automatically defined as D
INFO - (NP.config.init_data_params) - Setting normalization to global as only one dataframe provided for training.
INFO - (NP.utils.set_auto_seasonalities) - Disabling daily seasonality. Run NeuralProphet with daily_seasonality=True to override this.
INFO - (NP.config.set_auto_batch_epoch) - Auto-set batch_size to 64
INFO - (NP.config.set_auto_batch_epoch) - Auto-set epochs to 80
WARNING - (NP.config.set_lr_finder_args) - Learning rate finder: The number of batches (47) is too small than the required number for the learning rate finder (237). The results might not be optimal.
Missing logger folder: C:\Users\user\PycharmProjects\kamtent_forecasting\lightning_logs
Finding best initial lr: 0%| | 0/237 [00:00<?, ?it/s]
---------------------------------------------------------------------------
UnpicklingError                           Traceback (most recent call last)
Cell In[4], line 3
      1 m = NeuralProphet()
      2 m.set_plotting_backend("plotly-static")  # show plots correctly in jupyter notebooks
----> 3 metrics = m.fit(df)

File ~\anaconda3\Lib\site-packages\neuralprophet\forecaster.py:1062, in NeuralProphet.fit(self, df, freq, validation_df, epochs, batch_size, learning_rate, early_stopping, minimal, metrics, progress, checkpointing, continue_training, num_workers)
   1060 # Training
   1061 if validation_df is None:
-> 1062     metrics_df = self._train(
   1063         df,
   1064         progress_bar_enabled=bool(progress),
   1065         metrics_enabled=bool(self.metrics),
   1066         checkpointing_enabled=checkpointing,
   1067         continue_training=continue_training,
   1068         num_workers=num_workers,
   1069     )
   1070 else:
   1071     df_val, _, _, _ = df_utils.prep_or_copy_df(validation_df)

File ~\anaconda3\Lib\site-packages\neuralprophet\forecaster.py:2802, in NeuralProphet._train(self, df, df_val, progress_bar_enabled, metrics_enabled, checkpointing_enabled, continue_training, num_workers)
   2800 self.config_train.set_lr_finder_args(dataset_size=dataset_size, num_batches=len(train_loader))
   2801 # Find suitable learning rate
-> 2802 lr_finder = self.trainer.tuner.lr_find(
   2803     self.model,
   2804     train_dataloaders=train_loader,
   2805     **self.config_train.lr_finder_args,
   2806 )
   2807 assert lr_finder is not None
   2808 # Estimate the optimal learning rate from the loss curve

File ~\anaconda3\Lib\site-packages\pytorch_lightning\tuner\tuning.py:267, in Tuner.lr_find(self, model, train_dataloaders, val_dataloaders, dataloaders, datamodule, method, min_lr, max_lr, num_training, mode, early_stop_threshold, update_attr)
    264 lr_finder_callback._early_exit = True
    265 self.trainer.callbacks = [lr_finder_callback] + self.trainer.callbacks
--> 267 self.trainer.fit(model, train_dataloaders, val_dataloaders, datamodule)
    269 self.trainer.callbacks = [cb for cb in self.trainer.callbacks if cb is not lr_finder_callback]
    271 self.trainer.auto_lr_find = False

File ~\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py:608, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    606 model = self._maybe_unwrap_optimized(model)
    607 self.strategy._lightning_module = model
--> 608 call._call_and_handle_interrupt(
    609     self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
    610 )

File ~\anaconda3\Lib\site-packages\pytorch_lightning\trainer\call.py:38, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     36     return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
     37 else:
---> 38     return trainer_fn(*args, **kwargs)
     40 except _TunerExitException:
     41     trainer._call_teardown_hook()

File ~\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py:650, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
    643 ckpt_path = ckpt_path or self.resume_from_checkpoint
    644 self._ckpt_path = self._checkpoint_connector._set_ckpt_path(
    645     self.state.fn,
    646     ckpt_path,  # type: ignore[arg-type]
    647     model_provided=True,
    648     model_connected=self.lightning_module is not None,
    649 )
--> 650 self._run(model, ckpt_path=self.ckpt_path)
    652 assert self.state.stopped
    653 self.training = False

File ~\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py:1097, in Trainer._run(self, model, ckpt_path)
   1095 # hook
   1096 if self.state.fn == TrainerFn.FITTING:
-> 1097     self._call_callback_hooks("on_fit_start")
   1098     self._call_lightning_module_hook("on_fit_start")
   1100 self._log_hyperparams()

File ~\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py:1394, in Trainer._call_callback_hooks(self, hook_name, *args, **kwargs)
   1392 if callable(fn):
   1393     with self.profiler.profile(f"[Callback]{callback.state_key}.{hook_name}"):
-> 1394         fn(self, self.lightning_module, *args, **kwargs)
   1396 if pl_module:
   1397     # restore current_fx when nested context
   1398     pl_module._current_fx_name = prev_fx_name

File ~\anaconda3\Lib\site-packages\pytorch_lightning\callbacks\lr_finder.py:122, in LearningRateFinder.on_fit_start(self, trainer, pl_module)
    121 def on_fit_start(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None:
--> 122     self.lr_find(trainer, pl_module)

File ~\anaconda3\Lib\site-packages\pytorch_lightning\callbacks\lr_finder.py:107, in LearningRateFinder.lr_find(self, trainer, pl_module)
    105 def lr_find(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None:
    106     with isolate_rng():
--> 107         self.optimal_lr = lr_find(
    108             trainer,
    109             pl_module,
    110             min_lr=self._min_lr,
    111             max_lr=self._max_lr,
    112             num_training=self._num_training_steps,
    113             mode=self._mode,
    114             early_stop_threshold=self._early_stop_threshold,
    115             update_attr=self._update_attr,
    116         )
    118     if self._early_exit:
    119         raise _TunerExitException()

File ~\anaconda3\Lib\site-packages\pytorch_lightning\tuner\lr_finder.py:273, in lr_find(trainer, model, min_lr, max_lr, num_training, mode, early_stop_threshold, update_attr)
    270     log.info(f"Learning rate set to {lr}")
    272 # Restore initial state of model
--> 273 trainer._checkpoint_connector.restore(ckpt_path)
    274 trainer.strategy.remove_checkpoint(ckpt_path)
    275 trainer.fit_loop.restarting = False  # reset restarting flag as checkpoint restoring sets it to True

File ~\anaconda3\Lib\site-packages\pytorch_lightning\trainer\connectors\checkpoint_connector.py:224, in CheckpointConnector.restore(self, checkpoint_path)
    211 def restore(self, checkpoint_path: Optional[_PATH] = None) -> None:
    212     """Attempt to restore everything at once from a 'PyTorch-Lightning checkpoint' file through file-read and
    213     state-restore, in this priority:
    214
    (...)
    222         checkpoint_path: Path to a PyTorch Lightning checkpoint file.
    223     """
--> 224     self.resume_start(checkpoint_path)
    226     # restore module states
    227     self.restore_datamodule()

File ~\anaconda3\Lib\site-packages\pytorch_lightning\trainer\connectors\checkpoint_connector.py:90, in CheckpointConnector.resume_start(self, checkpoint_path)
     88 rank_zero_info(f"Restoring states from the checkpoint path at {checkpoint_path}")
     89 with pl_legacy_patch():
---> 90     loaded_checkpoint = self.trainer.strategy.load_checkpoint(checkpoint_path)
     91 self._loaded_checkpoint = _pl_migrate_checkpoint(loaded_checkpoint, checkpoint_path)

File ~\anaconda3\Lib\site-packages\pytorch_lightning\strategies\strategy.py:359, in Strategy.load_checkpoint(self, checkpoint_path)
    357 def load_checkpoint(self, checkpoint_path: _PATH) -> Dict[str, Any]:
    358     torch.cuda.empty_cache()
--> 359     return self.checkpoint_io.load_checkpoint(checkpoint_path)

File ~\anaconda3\Lib\site-packages\lightning_fabric\plugins\io\torch_io.py:86, in TorchCheckpointIO.load_checkpoint(self, path, map_location)
     83 if not fs.exists(path):
     84     raise FileNotFoundError(f"Checkpoint at {path} not found. Aborting training.")
---> 86 return pl_load(path, map_location=map_location)

File ~\anaconda3\Lib\site-packages\lightning_fabric\utilities\cloud_io.py:51, in _load(path_or_url, map_location)
     49 fs = get_filesystem(path_or_url)
     50 with fs.open(path_or_url, "rb") as f:
---> 51     return torch.load(f, map_location=map_location)

File ~\anaconda3\Lib\site-packages\torch\serialization.py:1470, in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
   1462     return _load(
   1463         opened_zipfile,
   1464         map_location,
   (...)
   1467         **pickle_load_args,
   1468     )
   1469 except pickle.UnpicklingError as e:
-> 1470     raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
   1471 return _load(
   1472     opened_zipfile,
   1473     map_location,
   (...)
   1476     **pickle_load_args,
   1477 )
   1478 if mmap:
UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL neuralprophet.configure.ConfigSeasonality was not an allowed global by default. Please use `torch.serialization.add_safe_globals([ConfigSeasonality])` or the `torch.serialization.safe_globals([ConfigSeasonality])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
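For anyone who hits the same thing: this looks like a version-compatibility issue, not a data problem. Since no learning rate is given, NeuralProphet runs pytorch-lightning's learning-rate finder, which saves a temporary checkpoint and then reloads it via torch.load. With the torch installed here (2.6 or newer, per the error text), torch.load defaults to weights_only=True, and the checkpoint contains NeuralProphet config objects such as neuralprophet.configure.ConfigSeasonality that the weights-only unpickler rejects. Below is a minimal sketch of the allowlist route the error message itself suggests; the exact set of classes to register is an assumption, since PyTorch names each missing global one at a time:

```python
import pandas as pd
import torch
from neuralprophet import NeuralProphet, configure

# Allowlist NeuralProphet's config class so torch.load(weights_only=True)
# can unpickle the lr-finder checkpoint. ConfigSeasonality is the class named
# in the error; if a re-run reports further globals, add them the same way.
torch.serialization.add_safe_globals([configure.ConfigSeasonality])

df = pd.read_csv("sales.csv")  # placeholder for the same data as above
m = NeuralProphet()
metrics = m.fit(df)
```

Two alternatives that avoid the checkpoint round-trip altogether (untested here, so treat them as suggestions): pass an explicit learning rate, e.g. NeuralProphet(learning_rate=0.01), which should skip the learning-rate finder entirely, or pin torch<2.6, where weights_only still defaults to False. The Series.view FutureWarning at the top of the log is unrelated; it is just pandas deprecating .view(dtype=np.int64) in favor of .astype(np.int64) on that df_utils.py line.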