
Possible unsupported configuration: using Ollama for embedding model #24

@sebastianillges

Description


I'm using Open WebUI with the Confluence search tool and am trying to use an embedding model served by Ollama. However, when I run the tool, it attempts to download the model from Hugging Face instead.

I'm not sure if Ollama is a supported source for embedding models, or if I'm misconfiguring something. I'd appreciate clarification on whether this use case is supported.

Steps to Reproduce

Ollama and Open WebUI are running and work properly (verified with the embedding check sketched below).
I set the valve option to snowflake-arctic-embed2:latest and tried multiple ways of specifying the Ollama model path.
I also set the RAG_EMBEDDING_MODEL environment variable accordingly.
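
For reference, this is the minimal check I used to confirm that Ollama itself serves embeddings for the model locally. It is a sketch that assumes Ollama's default host/port (localhost:11434) and its /api/embeddings endpoint; the prompt text is arbitrary:

```python
import requests

# Ask the locally running Ollama instance for an embedding.
# Host/port are Ollama defaults; adjust if your setup differs.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={
        "model": "snowflake-arctic-embed2:latest",
        "prompt": "confluence search smoke test",
    },
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(f"Got embedding with {len(embedding)} dimensions")
```

If this returns an embedding vector, the model itself is reachable locally, so only the tool's attempt to download from Hugging Face is the problem.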

Expected Behavior

I expected the tool to detect or use the embedding model provided via Ollama, avoiding the need to download anything from Hugging Face.

Actual Behavior

The tool initiates a model download from Hugging Face, which fails every time because the server is maintained locally and has no internet access.

Environment

OS: Arch Linux (kernel 6.14.2-arch1-1)
Python version: 3.13.3
Open WebUI version: 0.6.0
Ollama version: 0.6.6

Additional Context

I searched the documentation and issues but couldn't find a clear statement on Ollama compatibility for embedding models.

If this isn't supported yet, I'd be curious to know if it's planned or how best to contribute toward adding it.

Labels: enhancement (new feature or request)