A full-stack AI chatbot using local (Ollama) & cloud (OpenRouter) LLMs. Built with a .NET 8 API & Angular 20 UI. Easily run models like PHI-3, Mistral, Gemma, and Llama3 locally or online.

🤖 AIChatBot

AIChatBot is an AI-powered chatbot built on open-source language models. It supports both locally hosted models (via Ollama) and cloud-based models (via OpenRouter), and demonstrates AI integration with a .NET 8 API and an Angular 20 frontend.

✨ Latest Features

🆕 RAG (Retrieval-Augmented Generation) Integration

Transform your chatbot experience with Knowledge-Based AI! Upload your documents and get contextually aware responses grounded in your own content.

Key Highlights:

  • 📄 Multi-format Document Support: Upload .txt, .md, and .pdf files
  • 🔍 Intelligent Document Search: Advanced retrieval algorithms find relevant content
  • 💬 Context-Aware Responses: AI answers based on your uploaded documents
  • 📊 Source Attribution: See exactly which documents informed each response
  • 🗂️ Document Management: Easy upload, view, and delete capabilities
  • 👤 User-Specific Collections: Each user maintains their own document library

Experience AI that truly understands YOUR content!


🎬 Demo Video

Watch the AIChatBot in action on YouTube:
📺 AIChatBot Demo


πŸ› οΈ Prerequisites

To run this project locally, ensure the following:


🧱 Environment Setup (Local Models using Ollama)

1. Install Ubuntu 20.04.6 LTS (WSL)

  • On Windows, go to the Microsoft Store → search for Ubuntu 20.04.6 LTS → Install (this runs Ubuntu under WSL).
  • Open Ubuntu and create your UNIX user account.

2. Install Ollama inside Ubuntu

curl -fsSL https://ollama.com/install.sh | sh

3. Pull and Run Models

To pull and run the desired models:

# Pull models
ollama pull phi3:latest
ollama pull mistral:latest
ollama pull gemma:2b
ollama pull llama3:latest

# Run models
ollama run phi3
ollama run mistral
ollama run gemma:2b
ollama run llama3

4. Manage Models

# List all pulled models
ollama list

# Stop a running model
ollama stop phi3

# View running Ollama processes
ps aux | grep ollama

5. Shutdown Ubuntu

From Ubuntu terminal:

shutdown now

Or simply close the terminal window if you don’t need a full shutdown.


☁️ Online Models via OpenRouter

  1. Go to https://openrouter.ai and sign up.

  2. Navigate to API Keys in your profile and generate an API key.

  3. Set this key as an environment variable in your API project:

    export OPENROUTER_API_KEY=your_key_here
  4. Models used:

    • google/gemma-3-27b-it:free
    • deepseek/deepseek-chat-v3-0324:free

API requests are routed through OpenRouter using this key, enabling chat with the hosted online models.
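For reference, OpenRouter exposes an OpenAI-compatible chat-completions endpoint; the shell sketch below shows the request the API makes on your behalf. The model name comes from the list above; how this project names its own settings is not shown here.

```shell
# Build the JSON payload for an OpenRouter chat-completions call.
PAYLOAD='{"model":"google/gemma-3-27b-it:free","messages":[{"role":"user","content":"Hello"}]}'

# To actually send it (requires network access and a valid key):
# curl https://openrouter.ai/api/v1/chat/completions \
#   -H "Authorization: Bearer $OPENROUTER_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
echo "$PAYLOAD"
```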


🤖 AIChatBot Integration

🔧 Backend (API - .NET 8)

  1. Navigate to AIChatBot.API/
  2. Run the following commands:
dotnet restore
dotnet build
dotnet run
  • Ensure appsettings.json includes your OpenRouter API key.
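A hedged example of what that setting might look like in appsettings.json (the section and key names here are assumptions; match them to the project's actual configuration):

```json
{
  "OpenRouter": {
    "ApiKey": "YOUR_KEY_HERE"
  }
}
```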

💬 Frontend (UI - Angular 20)

  1. Navigate to AIChatBot.UI/
  2. Run:
npm install
ng serve
  • Access the chatbot UI at http://localhost:4200/

📚 RAG (Retrieval-Augmented Generation) Features

The AIChatBot includes advanced RAG capabilities that allow AI models to answer questions based on your uploaded documents. This feature significantly enhances the AI's ability to provide contextually relevant and accurate responses.

πŸ—οΈ RAG Architecture

The RAG system consists of several key components:

  1. Document Processing Pipeline:

    • File upload handling for multiple formats
    • Text extraction from PDF, TXT, and MD files
    • Content chunking for efficient retrieval
    • In-memory indexing with similarity search
  2. Retrieval System:

    • Semantic search across document chunks
    • Top-K retrieval (configurable, default: 3 chunks)
    • Relevance scoring and ranking
    • Source attribution and metadata tracking
  3. Generation Enhancement:

    • Context-aware prompt construction
    • Integration with all supported AI models
    • Source citation in responses
    • Fallback to general knowledge when needed
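The retrieval step above can be sketched roughly as follows. The real backend is C#; this TypeScript sketch uses simple term-overlap scoring, whereas the actual InMemoryRagStore may use different similarity logic, and all names here are illustrative:

```typescript
// Minimal sketch of top-K chunk retrieval by term-overlap scoring.
// Types and function names are illustrative, not the project's real API.
interface Chunk {
  source: string; // originating document name
  text: string;   // chunk content
}

// Score a chunk by how many query terms it contains.
function score(query: string, chunk: Chunk): number {
  const terms = query.toLowerCase().split(/\s+/).filter(t => t.length > 0);
  const body = chunk.text.toLowerCase();
  return terms.filter(t => body.includes(t)).length;
}

// Return the k best-matching chunks, highest score first (default k = 3,
// mirroring the configurable top-K described above).
function retrieveTopK(query: string, chunks: Chunk[], k = 3): Chunk[] {
  return chunks
    .map(c => ({ c, s: score(query, c) }))
    .filter(x => x.s > 0)
    .sort((a, b) => b.s - a.s)
    .slice(0, k)
    .map(x => x.c);
}
```

Keeping the retrieved chunks' source field alongside the text is what makes source attribution in responses possible.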

🎛️ RAG Configuration

The RAG system supports various AI models with different levels of effectiveness:

| Model Category | Models | RAG Performance |
| --- | --- | --- |
| Best RAG Support | GPT-3.5 Turbo, Gemini Flash 2.0 (Unlimited), Gemini Flash 2.0 (Limited) | ⭐⭐⭐⭐⭐ |
| Good RAG Support | DeepSeek v3, Gemma 3 27B, LLaMA 3 | ⭐⭐⭐⭐ |
| Basic RAG Support | PHI-3, Mistral 7B | ⭐⭐⭐ |

🚀 Using RAG Features

  1. Upload Documents:

    # Supported formats
    - Plain text files (.txt)
    - Markdown files (.md) 
    - PDF documents (.pdf)
  2. Document Management:

    • View all uploaded documents
    • Delete individual documents
    • User-specific document collections
    • Automatic indexing upon upload
  3. RAG-Enhanced Chat:

    • Select "Knowledge-Based (RAG)" mode
    • Ask questions about your uploaded content
    • Receive responses with source attribution
    • Contextual answers based on document content

💾 Document Storage

Currently, the RAG system uses in-memory storage (InMemoryRagStore), which provides:

  • Fast retrieval performance
  • Simple deployment setup
  • Automatic cleanup on application restart (uploaded documents do not persist)
  • User-isolated document collections

Note: For production deployments, consider implementing persistent storage solutions.
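The content chunking step mentioned earlier can be sketched like this. This is a minimal TypeScript illustration with made-up sizes; the project's actual chunking logic and parameters are not shown in this README:

```typescript
// Sketch: split a document into overlapping fixed-size chunks.
// Overlap keeps sentences that straddle a boundary retrievable from
// both neighboring chunks. Sizes here are illustrative only.
function chunkText(text: string, size = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```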


🧪 Model & Environment Summary

| Model | Type | Source | Access | RAG Support |
| --- | --- | --- | --- | --- |
| PHI-3:latest | Local | Ollama | ollama run | ⭐⭐⭐ |
| Mistral:latest | Local | Ollama | ollama run | ⭐⭐⭐ |
| Gemma:2b | Local | Ollama | ollama run | ⭐⭐⭐ |
| Llama3:latest | Local | Ollama | ollama run | ⭐⭐⭐⭐ |
| google/gemma-3-27b-it:free | Online | OpenRouter.ai | API Key | ⭐⭐⭐⭐ |
| deepseek/deepseek-chat-v3-0324 | Online | OpenRouter.ai | API Key | ⭐⭐⭐⭐ |
| google/gemini-2.0-flash-exp | Online | OpenRouter.ai | API Key | ⭐⭐⭐⭐⭐ |
| openai/gpt-3.5-turbo-0613 | Online | OpenRouter.ai | API Key | ⭐⭐⭐⭐⭐ |
| google/gemini-2.0-flash-001 | Online | OpenRouter.ai | API Key | ⭐⭐⭐⭐⭐ |

📂 Project Structure

AIChatBot/
│
├── AIChatBot.API/           # .NET 8 API for chatbot
│   ├── Controllers/         # API endpoints
│   │   └── DocumentsController.cs    # RAG document upload/management
│   ├── Services/            # Business logic services
│   │   ├── RagChatService.cs         # RAG-enabled chat functionality
│   │   ├── InMemoryRagStore.cs       # Document indexing and search
│   │   ├── AgentService.cs           # AI tool integration
│   │   └── ChatService.cs            # Standard chat functionality
│   ├── Interfaces/          # Service contracts
│   │   └── IRagStore.cs              # RAG storage interface
│   └── Migrations/          # Database schema updates for RAG
├── AIChatBot.UI/            # Angular 20 UI frontend
│   └── src/app/components/
│       ├── document-upload/          # RAG document upload component
│       ├── chat/                     # Main chat interface
│       └── model-selector/           # AI model and mode selection
└── README.md                # Project documentation

🧠 AI Tools & Agent Integration

The AIChatBot supports three advanced operation modes beyond simple chat:

1. 🛠️ Tool-Enabled AI

In this mode, the AI can recognize specific tasks in user prompts and use internal tools (functions) to perform actions. Integrated tools include:

| Tool | Function Description | Example Prompt |
| --- | --- | --- |
| CreateFile | Creates a text file with given content | "Create a file called report.txt with the text Hello world." |
| FetchWebData | Fetches the HTML/content of a public URL | "Fetch the content of https://example.com" |
| SendEmail | Simulates sending an email (console-logged) | "Send an email to john@example.com with subject Hello." |

These functions are executed server-side in .NET, with input parsed from natural language prompts.
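The tool-recognition idea can be illustrated with a naive sketch. The real AgentService lets the LLM choose the function to execute; this keyword matcher is only a stand-in to show the routing concept, and the matching rules are invented for illustration:

```typescript
// Sketch: naive keyword-based tool selection.
// Tool names match the table above; the matching logic is illustrative
// and much simpler than LLM-driven function selection.
type Tool = "CreateFile" | "FetchWebData" | "SendEmail" | "None";

function selectTool(prompt: string): Tool {
  const p = prompt.toLowerCase();
  if (p.includes("create a file")) return "CreateFile";
  if (p.includes("fetch") && p.includes("http")) return "FetchWebData";
  if (p.includes("send an email")) return "SendEmail";
  return "None"; // fall back to plain chat
}
```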

2. 📚 Knowledge-Based (RAG) Mode

NEW FEATURE: The RAG (Retrieval-Augmented Generation) mode enables AI to answer questions based on your uploaded documents. This powerful feature allows you to:

🎯 Key Capabilities:

  • Upload Documents: Support for .txt, .md, and .pdf files
  • Intelligent Retrieval: Automatically finds relevant content from your documents
  • Source Attribution: AI responses include references to source documents
  • Document Management: View, organize, and delete uploaded documents
  • User-Specific Storage: Each user has their own document collection

📄 Supported File Formats:

  • Text Files: .txt - Plain text documents
  • Markdown Files: .md - Formatted markdown documents
  • PDF Documents: .pdf - Portable document format

🚀 How to Use RAG Mode:

  1. Upload Documents: Use the drag-and-drop interface or browse to upload documents
  2. Select RAG Mode: Choose "Knowledge-Based (RAG)" from the chat mode dropdown
  3. Ask Questions: Query your documents using natural language
  4. Get Contextual Answers: Receive AI responses enriched with your document content

💡 Example RAG Interaction:

User: "What are the key findings in the latest market report?"
AI: Based on the provided context from "market_analysis_2025.pdf", the key findings include:
1. Consumer spending increased by 15% in Q4
2. Digital transformation investments rose by 23%
3. Supply chain disruptions decreased significantly

*Sources: 1 document(s) referenced*
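Under the hood, an interaction like the one above boils down to prepending the retrieved chunks to the user's question. A hedged TypeScript sketch (the actual prompt template used by RagChatService is an assumption, and the function name is illustrative):

```typescript
// Sketch: assemble a context-augmented prompt from retrieved chunks.
// Numbered, source-tagged context lines are what enable source
// attribution in the model's answer.
function buildRagPrompt(
  question: string,
  chunks: { source: string; text: string }[]
): string {
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}`)
    .join("\n");
  return `Answer the question using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}
```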

3. 🤖 AI Agent Mode (Planning + Action)

The AI agent is capable of:

  • Understanding high-level tasks
  • Selecting and invoking appropriate tools
  • Providing intelligent responses based on the outcome

This is powered by an AgentService that works with both local LLMs (via Ollama) and cloud models (via OpenRouter) to determine the right function to execute and handle the response.

You can toggle between AI modes via the UI:

  • Chat-Only Mode
  • AI + Tools Mode
  • Knowledge-Based (RAG) Mode
  • Agent Mode (multi-step planning, coming soon)

🚀 Get Started

Quick Start Guide

  1. Choose your preferred model type (local or online)
  2. Start the backend using .NET 8
  3. Start the frontend using Angular CLI
  4. Access AIChatBot at http://localhost:4200/

Using RAG Features (NEW!)

  1. Upload Documents:

    • Click the "Documents for RAG" section in the sidebar
    • Drag & drop or browse to upload .txt, .md, or .pdf files
    • Wait for successful upload confirmation
  2. Enable Knowledge-Based Mode:

    • Select "Knowledge-Based (RAG)" from the chat mode dropdown
    • Choose an AI model with good RAG support (⭐⭐⭐⭐ or ⭐⭐⭐⭐⭐)
  3. Start Asking Questions:

    Examples:
    "Summarize the key points from my uploaded documents"
    "What does the report say about market trends?"
    "Find information about [specific topic] in my files"
    
  4. Review Source Attribution:

    • AI responses will include "Sources: X document(s) referenced"
    • Responses are enriched with content from your uploaded documents

💖 Contributing

Pull requests and suggestions are welcome! Feel free to fork the repo and enhance it.


📄 License

This project is open-source and available under the MIT License.
