Conversation

@Ian-Blockmans

I have added Intel (XPU) support, starting from the intel/intel-extension-for-pytorch:2.8.10-xpu image. I then added the required dependencies to the pyproject.toml file (I have never used Poetry before, so you might need to do some cleanup there). Some minor code tweaks were necessary, mainly adding an XPU option alongside CUDA. For clearing the cache I added torch.xpu.memory.empty_cache(). I recommend replacing the cache clearing with torch.accelerator.memory.empty_cache() if you ever move to torch version 2.9.0 or above. I also tried to update the documentation to reflect the added Intel support. I have only tested the container with an Intel Arc B580, so I don't know whether CUDA still works. I would also like to ask that the uploaded container tags be latest for CPU, latest-cuda for NVIDIA, and latest-intel for Intel; that is how I changed it in the docs.
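A minimal sketch of the device selection and backend-specific cache clearing described above. This is not the PR's actual code; the function names `pick_device` and `empty_cache` are hypothetical, and it assumes a PyTorch build where `torch.xpu` is present when XPU support is compiled in:

```python
import torch

def pick_device() -> torch.device:
    # Prefer CUDA, then Intel XPU, then fall back to CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")

def empty_cache(device: torch.device) -> None:
    # Backend-specific cache clearing, as in the PR: an XPU branch
    # next to the existing CUDA one. On torch >= 2.9 the author
    # suggests the unified torch.accelerator.memory.empty_cache()
    # could replace both branches.
    if device.type == "cuda":
        torch.cuda.empty_cache()
    elif device.type == "xpu":
        torch.xpu.empty_cache()
```

On a CPU-only host both branches are skipped, so the helper is safe to call unconditionally.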

@Manny8787

Do you have any advice on how to get this to work on Unraid?

@Ian-Blockmans
Author

Ian-Blockmans commented Nov 23, 2025

Do you have any advice on how to get this to work on Unraid?

Someone with Unraid already has this running on their system, as explained in issue #148. Here is a direct link to the comment.

