From 85c83e1939c2e135f022ac978284cac2292a37cc Mon Sep 17 00:00:00 2001
From: scosman
Date: Tue, 30 Sep 2025 15:59:45 -0400
Subject: [PATCH] Add Kiln to integrations section.

Note: I also removed a few mentions of Haystack in links to the framework
integration page. Haystack isn't actually listed there, so I thought this was
best? Happy to revert that part if you like.
---
 content/docs/integrations/_index.md           |  2 +-
 .../docs/integrations/frameworks/_index.md    |  1 +
 content/docs/integrations/frameworks/kiln.md  | 50 +++++++++++++++++++
 content/docs/overview/_index.md               |  2 +-
 4 files changed, 53 insertions(+), 2 deletions(-)
 create mode 100644 content/docs/integrations/frameworks/kiln.md

diff --git a/content/docs/integrations/_index.md b/content/docs/integrations/_index.md
index 8fdad99f..61834850 100644
--- a/content/docs/integrations/_index.md
+++ b/content/docs/integrations/_index.md
@@ -11,6 +11,6 @@ hide_toc: true
 |:----------------|:-------------|
 | [Embedding Models](/docs/integrations/embedding/) | Connect with popular embedding models including OpenAI, Cohere, Hugging Face, and more |
 | [Reranking Models](/docs/integrations/reranking/) | Enhance search results with advanced reranking models and techniques |
-| [AI Frameworks](/docs/integrations/frameworks/) | Integrate with LangChain, LlamaIndex, Haystack, and other AI development frameworks |
+| [AI Frameworks](/docs/integrations/frameworks/) | Integrate with LangChain, LlamaIndex, Kiln, and other AI development frameworks |
 | [Data Platforms](/docs/integrations/platforms/) | Deploy LanceDB on AWS, Google Cloud, Azure, and other cloud platforms |
 
diff --git a/content/docs/integrations/frameworks/_index.md b/content/docs/integrations/frameworks/_index.md
index 7a622751..33b5cbd6 100644
--- a/content/docs/integrations/frameworks/_index.md
+++ b/content/docs/integrations/frameworks/_index.md
@@ -11,6 +11,7 @@ sidebar_collapsed: true
 |:----------|:------------|
 | [LangChain](/docs/integrations/frameworks/langchain) | Build LLM applications with memory, agents, and chat models using LanceDB as a vector store |
 | [LlamaIndex](/docs/integrations/frameworks/llamaindex) | Create data-aware LLM applications with structured data retrieval and RAG pipelines |
+| [Kiln AI](/docs/integrations/frameworks/kiln) | Build and evaluate RAG pipelines with a drag-and-drop UI or Python library, then deploy to LanceDB |
 | [PromptTools](/docs/integrations/frameworks/prompttools) | Evaluate and optimize LLM prompts with LanceDB for storing and retrieving evaluation results |
 | [Pydantic](/docs/integrations/frameworks/pydantic) | Define structured data models and schemas for LanceDB tables with type safety |
 | [GenKit](/docs/integrations/frameworks/genkit) | Build AI applications with Google's GenKit framework using LanceDB for vector storage |
diff --git a/content/docs/integrations/frameworks/kiln.md b/content/docs/integrations/frameworks/kiln.md
new file mode 100644
index 00000000..d8945cd5
--- /dev/null
+++ b/content/docs/integrations/frameworks/kiln.md
@@ -0,0 +1,50 @@
+---
+title: "Kiln AI"
+sidebar_title: "Kiln"
+weight: 1
+---
+
+[**Kiln**](https://kiln.tech) is a free tool for building production-ready AI systems, combining an intuitive desktop application with an open-source Python library. It supports RAG pipelines, evaluations, agents, MCP tool-calling, synthetic data generation, and fine-tuning. Kiln provides deep integration with LanceDB for vector search, full-text search (BM25), and hybrid search.
+
+## Quick Start: Build a RAG Pipeline in 5 Minutes with Kiln & LanceDB
+
+{{< vimeo 1119945690 >}}
+
+<br>
+
+Kiln's [app](https://kiln.tech/download) makes it easy to:
+
+ - Build a RAG pipeline with a simple drag-and-drop interface
+ - [Compare](#find-the-best-rag-pipeline-for-your-use-case) search index options (powered by LanceDB), document extractors, embedding models, and chunking strategies
+ - Create end-to-end [evaluations](https://docs.kiln.tech/docs/evaluations) to determine which search configuration works best for your use case
+ - Load your data from Kiln into LanceDB Cloud for production use
+ - Iterate with confidence by evaluating new content, prompts, models, and embeddings in minutes instead of weeks
+
+## Find the Best RAG Pipeline for Your Use Case
+
+There is no universal best RAG solution, only the best solution for your specific use case. Kiln makes it easy to compare state-of-the-art configurations and find the one that works best for you.
+
+Start with pre-configured templates for state-of-the-art RAG at various performance/quality/cost levels, or experiment with any combination of options:
+
+| Area | Technologies | Description |
+|:----|:----|:----|
+| Search Index | LanceDB | Compare LanceDB's vector search, full-text search (BM25), and hybrid search to find the best approach for your use case (see the sketch below this table). |
+| Content | Kiln Document Library | Collaborate on a document library with your team to find the best content for your RAG system. Track every revision and tag document sets. |
+| Document Extraction | Gemini, OpenAI GPT, Qwen VL, and more | Find the most accurate document extraction models for converting PDFs, images, audio, video, and other formats into textual data for RAG. |
+| Embeddings | Embedding models from Gemini, OpenAI, Nomic, Qwen, and more | Find the embedding model best suited to your use case. |
+| Chunking | LlamaIndex | Find the ideal chunk size and chunking method. |
+
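+As a rough reference for that comparison (this is plain LanceDB usage, not part of the Kiln workflow itself), here is a minimal sketch of the three query types in the LanceDB Python client. The database path, table name, and toy two-dimensional vectors are hypothetical; in a real Kiln project the table is created and populated for you:
+
+```python
+import lancedb
+
+# Hypothetical local database path, purely for illustration.
+db = lancedb.connect("./kiln_rag_demo")
+
+# A toy table with a text column and a tiny vector column.
+tbl = db.create_table(
+    "chunks",
+    data=[
+        {"text": "LanceDB supports hybrid search.", "vector": [0.1, 0.9]},
+        {"text": "Kiln builds and evaluates RAG pipelines.", "vector": [0.8, 0.2]},
+    ],
+    mode="overwrite",
+)
+tbl.create_fts_index("text")  # BM25 full-text index over the text column
+
+# 1. Vector search: nearest neighbors to a query embedding.
+vector_hits = tbl.search([0.1, 0.8]).limit(2).to_list()
+
+# 2. Full-text search (BM25) against the indexed text column.
+fts_hits = tbl.search("hybrid search", query_type="fts").limit(2).to_list()
+
+# 3. Hybrid search: supply a query vector and query text, and LanceDB
+#    fuses both result lists into a single ranking.
+hybrid_hits = (
+    tbl.search(query_type="hybrid")
+    .vector([0.1, 0.8])
+    .text("hybrid search")
+    .limit(2)
+    .to_list()
+)
+```
+
+In Kiln these options are configured and evaluated from the UI; the snippet only shows what each search mode looks like on the LanceDB side.
+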
+## Get Started
+
+To get started, download the [Kiln App](https://kiln.tech/download), create a project, and navigate to "Docs & Search".
+
+See the [Kiln documentation for creating a RAG system](https://docs.kiln.tech/docs/documents-and-search-rag) for details on each step of the process.
+
+## More Information
+
+ - [Kiln Homepage](https://kiln.tech)
+ - [Download the Kiln App](https://kiln.tech/download)
+ - [Kiln GitHub Repository](https://github.com/Kiln-AI/Kiln)
+ - [Building RAG Systems - Kiln Documentation](https://docs.kiln.tech/docs/documents-and-search-rag)
+ - [Python Library](https://pypi.org/project/kiln-ai/) (`pip install kiln_ai`)
+
diff --git a/content/docs/overview/_index.md b/content/docs/overview/_index.md
index 762bb096..67b34318 100644
--- a/content/docs/overview/_index.md
+++ b/content/docs/overview/_index.md
@@ -114,7 +114,7 @@ LanceDB integrates seamlessly with the modern AI ecosystem, providing connectors
 
 | Category | Integrations | Documentation |
 |:---|:---|:---|
-| **[AI Frameworks](/docs/integrations/frameworks/)** | LangChain, LlamaIndex, Haystack | [AI Frameworks](/docs/integrations/frameworks/) |
+| **[AI Frameworks](/docs/integrations/frameworks/)** | LangChain, LlamaIndex, Kiln | [AI Frameworks](/docs/integrations/frameworks/) |
 | **[Embedding Models](/docs/integrations/embedding/)** | OpenAI, Cohere, Hugging Face, Custom Models | [Embedding Models](/docs/integrations/embedding/) |
 | **[Reranking Models](/docs/integrations/reranking/)** | BGE-reranker, Cohere Rerank, Custom Models | [Reranking Models](/docs/integrations/reranking/) |
 | **[Data Platforms](/docs/integrations/platforms/)** | DuckDB, Pandas, Polars | [Data Platforms](/docs/integrations/platforms/) |