92 changes: 92 additions & 0 deletions atomic-examples/fastapi-memory/README.md
# FastAPI with Atomic Agents

A simple example demonstrating how to integrate Atomic Agents with FastAPI for building stateful conversational APIs.

## Features

- Session-based conversation management
- RESTful API endpoints for chat interactions
- Automatic session creation and cleanup
- Environment-based configuration

## Setup

1. Install dependencies:
   ```bash
   poetry install
   ```

2. Create a `.env` file with your OpenAI API key:
   ```bash
   cp .env.example .env
   # Edit .env and add your OpenAI API key
   ```
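A minimal `.env` needs only the OpenAI key; the variable name matches the `os.getenv("OPENAI_API_KEY")` call in `fastapi_memory/main.py`, and the value below is a placeholder:

```
# .env — loaded by python-dotenv at startup; never commit this file
OPENAI_API_KEY=sk-your-key-here
```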

## Running the Example

Start the FastAPI server:
```bash
poetry run python fastapi_memory/main.py
```

The API will be available at `http://localhost:8000`.

## API Documentation

Once running, visit:
- Interactive API docs: `http://localhost:8000/docs`
- Alternative docs: `http://localhost:8000/redoc`

## Usage Examples

### Send a message (creates new session automatically):
```bash
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, what can you help me with?"}'
```

### Continue a conversation with session ID:
```bash
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Tell me more about that", "session_id": "user123"}'
```

### List active sessions:
```bash
curl "http://localhost:8000/sessions"
```

### Clear a specific session:
```bash
curl -X DELETE "http://localhost:8000/sessions/user123"
```

## How It Works

The example demonstrates several key patterns:

1. **Session Management**: Each session maintains its own agent instance with independent conversation history.

2. **Lazy Initialization**: Agent instances are created on-demand when a session is first accessed.

3. **Automatic Cleanup**: The lifespan context manager ensures proper cleanup when the application shuts down.

4. **Type Safety**: Uses Pydantic schemas for request/response validation.
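Patterns 1–3 reduce to a dictionary keyed by session id whose values are built lazily and dropped on shutdown. Stripped of the agent specifics, the shape is (a sketch, with `factory` standing in for agent construction):

```python
class SessionStore:
    """In-memory store: one value per session_id, created on first access."""

    def __init__(self, factory):
        self._factory = factory  # called once per new session_id
        self._sessions = {}

    def get_or_create(self, session_id: str):
        # Lazy initialization: build the value only when first requested.
        if session_id not in self._sessions:
            self._sessions[session_id] = self._factory()
        return self._sessions[session_id]

    def clear(self):
        # Mirrors the lifespan hook: drop everything on shutdown.
        self._sessions.clear()


store = SessionStore(factory=list)
first = store.get_or_create("user123")
again = store.get_or_create("user123")
print(first is again)  # → True: the same object is reused per session
```

Because the store holds plain object references, each session's value (here a list, in the example an agent with its history) survives across requests for as long as the process runs.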

## Project Structure

```
fastapi-memory/
├── pyproject.toml        # Project dependencies
├── .env.example          # Environment variable template
├── README.md             # This file
└── fastapi_memory/
    └── main.py           # FastAPI application
```

## Related Examples

For more advanced usage, check out:
- `mcp-agent/example-client/example_client/main_fastapi.py` - Advanced example with MCP protocol integration
94 changes: 94 additions & 0 deletions atomic-examples/fastapi-memory/fastapi_memory/main.py
import os
from contextlib import asynccontextmanager
from typing import Optional

import instructor
import openai
from atomic_agents.agents.atomic_agent import AtomicAgent
from atomic_agents.lib.base.base_io_schema import BaseIOSchema
from atomic_agents.lib.components.agent_config import AgentConfig
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator
from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException
from pydantic import Field

load_dotenv()


class ChatRequest(BaseIOSchema):
    """A user chat message, optionally tied to an existing session."""

    message: str = Field(..., description="User message")
    session_id: Optional[str] = Field(None, description="Session identifier for conversation continuity")


class ChatResponse(BaseIOSchema):
    """The agent's reply, tagged with the session it belongs to."""

    response: str = Field(..., description="Agent response")
    session_id: str = Field(..., description="Session identifier")


# One agent per session id; each agent keeps its own conversation history.
sessions: dict[str, AtomicAgent] = {}


def get_or_create_agent(session_id: str) -> AtomicAgent:
    if session_id not in sessions:
        client = instructor.from_openai(openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY")))

        system_prompt = SystemPromptGenerator(
            background=["You are a helpful AI assistant that maintains conversation context."],
            steps=["Understand the user's message", "Provide a clear and helpful response"],
            output_instructions=["Be concise and friendly", "Reference previous context when relevant"],
        )

        config = AgentConfig(
            client=client,
            model="gpt-4o-mini",
            system_prompt_generator=system_prompt,
        )

        # Bind the input/output schemas so agent.run returns a ChatResponse.
        sessions[session_id] = AtomicAgent[ChatRequest, ChatResponse](config=config)

    return sessions[session_id]


@asynccontextmanager
async def lifespan(app: FastAPI):
    yield
    sessions.clear()


app = FastAPI(
    title="Atomic Agents FastAPI Example",
    description="Simple example showing FastAPI integration with Atomic Agents",
    lifespan=lifespan,
)


@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    try:
        session_id = request.session_id or "default"
        agent = get_or_create_agent(session_id)

        result = agent.run(ChatRequest(message=request.message))
        return ChatResponse(response=result.response, session_id=session_id)

    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@app.delete("/sessions/{session_id}")
async def clear_session(session_id: str):
    if session_id in sessions:
        del sessions[session_id]
        return {"message": f"Session {session_id} cleared"}
    raise HTTPException(status_code=404, detail="Session not found")


@app.get("/sessions")
async def list_sessions():
    return {"active_sessions": list(sessions.keys())}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
20 changes: 20 additions & 0 deletions atomic-examples/fastapi-memory/pyproject.toml
[tool.poetry]
name = "fastapi-memory"
version = "0.1.0"
description = "Simple FastAPI integration example with Atomic Agents"
authors = ["BrainBlend AI"]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.12,<4.0"
atomic-agents = {path = "../..", develop = true}
fastapi = "^0.115.14"
uvicorn = "^0.32.1"
instructor = "==1.9.2"
openai = ">=1.0.0"
pydantic = ">=2.10.3,<3.0.0"
python-dotenv = ">=1.0.1,<2.0.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"