🤖 RAG Q&A Assistant

A smart AI-powered document assistant for contextual Q&A and challenge-based learning. Upload a document, ask questions, and test your understanding, all with justifications and source references. Powered by FastAPI, Streamlit, and Ollama (Gemma3:4b).


### Main Page
*(Screenshot 2025-06-25 at 11 03 24)*

### Ask Me
*(Screenshot 2025-06-25 at 11 04 23)*

### Challenge Mode
*(Screenshots 2025-06-25 at 11 06 21, 11 06 37, and 11 06 51)*

⚡ Quick Setup

Prerequisites

  • Python 3.9+
  • Ollama installed and running
  • Gemma3:4b model downloaded (ollama pull gemma3:4b)

Installation & Run

  1. Clone the repo
    git clone <repository-url>
    cd Rag_1-main
  2. Create and activate a virtual environment
    python -m venv rag_env
    source rag_env/bin/activate  # Windows: rag_env\Scripts\activate
  3. Install dependencies
    pip install -r requirements.txt
  4. Start Ollama
    ollama serve
    ollama pull gemma3:4b
  5. Start the backend
    cd backend
    python main.py
    # Backend: http://localhost:8000
  6. Start the frontend (new terminal)
    streamlit run streamlit_app.py
    # Frontend: http://localhost:8501

🏗️ Architecture & Reasoning Flow

System Overview

  • Frontend: Streamlit (Python)
  • Backend: FastAPI (Python)
  • LLM: Ollama (Gemma3:4b)
  • Document Parsing: PyPDF2, LangChain
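The LLM layer talks to Ollama over its local HTTP API. A minimal stdlib-only sketch of that call, assuming Ollama's default port 11434 and its documented `/api/generate` endpoint (the helper names here are illustrative, not the backend's actual functions):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(prompt: str, model: str = "gemma3:4b") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks Ollama for a single JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str) -> str:
    """Send a prompt to a locally running Ollama and return the answer text."""
    payload = json.dumps(build_ollama_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask_ollama("Summarize this document: ...")` requires `ollama serve` to be running and the model pulled, as described in the setup steps above.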

Reasoning/Data Flow

  1. Document Upload:
    • User uploads PDF/TXT via Streamlit UI
    • Backend extracts text, generates summary
  2. Q&A Chat:
    • User asks a question
    • Backend retrieves context, sends to LLM
    • LLM generates answer, justification, and highlights source text
    • Frontend displays answer, justification, and highlights
  3. Challenge Mode:
    • Backend generates comprehension questions from document
    • User answers; backend evaluates, scores, and provides feedback/reasoning
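The "retrieves context" step above can be illustrated with a deliberately simple keyword-overlap retriever. The real backend uses LangChain for splitting and retrieval; the chunk size, overlap, and scoring below are illustrative assumptions only:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[start:start + size] for start in range(0, max(len(text), 1), step)]

def retrieve_context(question: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank chunks by how many words they share with the question (naive retrieval)."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]
```

The top-ranked chunks are what gets sent to the LLM alongside the question, which is also why answers can cite the source text: the backend knows exactly which chunks it supplied.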

📖 Usage

  1. Upload a PDF/TXT on the Document Upload page and process it
  2. Q&A Chat: Ask questions, get answers with justifications and source text
  3. Challenge Mode: Generate questions, answer, and receive feedback and scores
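Challenge Mode's scoring can be sketched as a similarity check between the user's answer and a reference answer. The actual backend asks the LLM to evaluate and explain; this word-overlap score and its thresholds are a simplified stand-in:

```python
def score_answer(user_answer: str, reference: str) -> float:
    """Return the fraction of reference words present in the user's answer (0.0-1.0)."""
    ref_words = set(reference.lower().split())
    if not ref_words:
        return 0.0
    user_words = set(user_answer.lower().split())
    return len(ref_words & user_words) / len(ref_words)

def feedback(score: float) -> str:
    """Map a similarity score to coarse feedback (thresholds are arbitrary)."""
    if score >= 0.8:
        return "Correct"
    if score >= 0.4:
        return "Partially correct"
    return "Incorrect"
```

An LLM-based evaluator can additionally justify the score with a reference to the source passage, which plain word overlap cannot do.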

🗂️ Project Structure

Rag_1-main/
├── backend/
│   ├── main.py
│   ├── models.py
│   ├── utils.py
│   └── prompt.py
├── streamlit_app.py
├── requirements.txt
├── activate_env.sh
├── rag_env/  # (virtual environment)
└── README.md

🛠️ Troubleshooting

  • Ollama not running: ollama serve
  • Model not found: ollama pull gemma3:4b
  • Virtualenv issues: Delete and recreate rag_env
  • Port conflicts: Change ports in main.py/streamlit_app.py
  • Import errors: pip install -r requirements.txt --force-reinstall

📄 License

MIT License. See LICENSE file.


For questions or issues, please open an issue in the repository.
