Replies: 3 comments 3 replies
-
6800 XT, same issue.
-
Having the same issue. 6800 XT.
-
UPDATE: Danbooru interrogation works in the current commit (265d626). CLIP interrogation still doesn't.
-
Followed every step and still having this issue with an RX 6600 XT (driver 23.2.2) on Windows 11.
venv "C:\stable-diffusion-webui-directml-master\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash:
Installing requirements for Web UI
Launching Web UI with arguments:
Interrogate will be fallen back to cpu. Because DirectML device does not support it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
No module 'xformers'. Proceeding without it.
Loading weights [fe4efff1e1] from C:\stable-diffusion-webui-directml-master\models\Stable-diffusion\sd-v1-4.ckpt
Creating model from config: C:\stable-diffusion-webui-directml-master\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(0):
Model loaded in 4.5s (load weights from disk: 1.2s, create model: 0.5s, apply weights to model: 0.6s, apply half(): 0.6s, move model to device: 1.6s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch()
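
For reference, the webui exposes that Gradio share=True option as a launch flag rather than a code change. A minimal sketch of a webui-user.bat for this fork, assuming the stock launcher layout (the PYTHON/VENV_DIR variable names and the --share flag come from the upstream webui conventions, not from this thread):

```bat
@echo off
rem webui-user.bat -- user config read by webui.bat before launch
set PYTHON=
set GIT=
set VENV_DIR=
rem --share asks Gradio to create a public link (equivalent to share=True in launch())
set COMMANDLINE_ARGS=--share
call webui.bat
```

Without --share the server stays local at http://127.0.0.1:7860, as in the log above.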