Welcome to the repo that documents how I built an intelligent, AI-powered chatbot integrated directly into my personal portfolio – designed to answer questions about my work, projects, and skills in real time.
Instead of making visitors scroll endlessly, I built a conversational assistant using Meta's LLaMA 3.3 model on Amazon Bedrock and a Retrieval-Augmented Generation (RAG) architecture. This chatbot provides:
- 💬 Interactive Q&A on skills, projects, and experience
- 🧠 Document-aware answers using RAG
- 🤖 Smart LLM responses via LLaMA 3.3
- 🪄 Typing animation and responsive UI with Tailwind & Framer Motion
- ⚡ Rate-limiting and efficient inference to manage AWS costs
```
User Question
        ↓
Retrieve relevant context from Knowledge Base (PDFs, markdown, web)
        ↓
Construct RAG-style prompt
        ↓
Send to LLaMA 3.3 via Bedrock
        ↓
Return clean, markdown-formatted response
        ↓
Render via Next.js chat UI
```
- Knowledge Base: Documents stored in an S3 bucket (resume, about, projects, certifications)
- Embedding Model: Titan Embeddings v2
- LLM: Meta LLaMA 3.3 (70B, on-demand)
- Frontend: Next.js App Router + Tailwind + Framer Motion
- Deployment: Vercel
Upload the knowledge-base documents to the S3 bucket:

```
resume.pdf
about-me.md
certifications.md
projects/*.md
```
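The files above can be pushed with the AWS CLI; the bucket name here is a placeholder for your own:

```shell
# Copy local knowledge-base documents into S3
# (my-portfolio-kb is a placeholder bucket name)
aws s3 cp resume.pdf s3://my-portfolio-kb/
aws s3 cp about-me.md s3://my-portfolio-kb/
aws s3 cp certifications.md s3://my-portfolio-kb/
aws s3 sync projects/ s3://my-portfolio-kb/projects/
```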
- Go to Builder Tools > Knowledge Bases
- Connect your S3 bucket as a data source
- Optional: Add crawler links for GitHub, LinkedIn, etc.
- ✅ Titan Embeddings v2
- ✅ Meta LLaMA 3.3 70B (text generation)
- Sync content from S3 → Bedrock
- Select models for embedding and generation
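The S3 → Bedrock sync can also be triggered from the CLI; the knowledge-base and data-source IDs below are placeholders for your own:

```shell
# Start an ingestion job so Bedrock re-embeds the latest S3 content
# (both IDs are placeholders)
aws bedrock-agent start-ingestion-job \
  --knowledge-base-id YOUR_KB_ID \
  --data-source-id YOUR_DATA_SOURCE_ID
```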
The `app/api/query-bedrock/` route:
- Accepts user input
- Retrieves context from Bedrock
- Sends combined context + prompt to LLaMA
- Returns markdown response
Key components:
- `Chatbot.tsx`: Handles toggling and state
- `ChatInput.tsx`: Handles input + submit
- `ChatMessages.tsx`: Renders chat messages
- `SuggestedQuestions.tsx`: Displays sample prompts
- `Framer Motion`: Adds smooth animations
Middleware to prevent LLM abuse and control cost.
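One way to sketch such a limiter (an assumption about the approach, not the exact middleware): a fixed-window in-memory counter keyed by client IP, which is enough per serverless instance:

```typescript
// Fixed-window in-memory rate limiter: at most `limit` requests per `windowMs`.
// Note: in-memory state resets per serverless instance; a shared store
// (e.g. Redis) would be needed for strict global limits.
type RequestWindow = { count: number; resetAt: number };
const windows = new Map<string, RequestWindow>();

export function isRateLimited(
  ip: string,
  limit = 10,
  windowMs = 60_000,
  now = Date.now()
): boolean {
  const w = windows.get(ip);
  if (!w || now >= w.resetAt) {
    // First request in a fresh window: record it and allow.
    windows.set(ip, { count: 1, resetAt: now + windowMs });
    return false;
  }
  w.count += 1;
  return w.count > limit;
}
```

The chat route can call `isRateLimited(ip)` before invoking Bedrock and return a 429 early, so abusive traffic never reaches the paid LLM call.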
```
AWS_REGION=us-east-2
AWS_ACCESS_KEY_ID=XXXX
AWS_SECRET_ACCESS_KEY=XXXX
AWS_BEDROCK_KNOWLEDGE_BASE_ID=your-kb-id
AWS_BEDROCK_MODEL_ID=meta.llama3-3-70b-instruct-v1:0
AWS_BEDROCK_MODEL_FAMILY=llama
```
Push to GitHub → Connect to Vercel → Deploy
- Hosted on: https://rahulsaini.click
- Floating AI chatbot on every page
- Serves content from personal files and markdown
- Beautifully responsive UI
- Deeper understanding of LLM architectures and RAG pipelines
- Real-world use of Amazon Bedrock, OpenSearch, and embeddings
- Improved skills in API design, rate limiting, and frontend animation
- Learned how to design AI assistants that are scalable and cost-conscious
📧 sainirahul0802@gmail.com
📞 +1 205-643-1054
🔗 LinkedIn
📂 Resume
| Folder/File | Purpose |
|---|---|
| `app/api/query-bedrock/` | API handler for LLM + RAG |
| `components/Chatbot/` | All UI components |
| `utils/buildPayload.ts` | Prompt formatter for Bedrock LLM |
| `.env` | AWS + Bedrock config |
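As an illustration of what `utils/buildPayload.ts` might contain (a sketch under assumptions, not the actual file): Bedrock's Meta LLaMA models accept a JSON body with `prompt`, `max_gen_len`, and `temperature` fields, and LLaMA 3 instruct models expect the header-token chat template, so the formatter reduces to:

```typescript
// Format context + question into the LLaMA 3 instruct chat template and
// wrap it in Bedrock's request body for Meta LLaMA models.
// The system-prompt wording here is illustrative.
export function buildPayload(context: string, question: string): string {
  const prompt =
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n" +
    `You are a portfolio assistant. Use only this context:\n${context}` +
    "<|eot_id|><|start_header_id|>user<|end_header_id|>\n" +
    `${question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n`;
  return JSON.stringify({ prompt, max_gen_len: 512, temperature: 0.3 });
}
```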