53 changes: 53 additions & 0 deletions .github/workflows/ci.yml
@@ -58,3 +58,56 @@ jobs:

      - name: Run tests
        run: make -C packages/sdk/server-ai test

  server-ai-langchain-linux:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]

    steps:
      - uses: actions/checkout@v4

      - uses: ./.github/actions/ci
        with:
          workspace_path: packages/ai-providers/server-ai-langchain
          python_version: ${{ matrix.python-version }}

      - uses: ./.github/actions/build
        with:
          workspace_path: packages/ai-providers/server-ai-langchain

  server-ai-langchain-windows:
    runs-on: windows-latest
    defaults:
      run:
        shell: powershell

    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install poetry
        uses: abatilo/actions-poetry@7b6d33e44b4f08d7021a1dee3c044e9c253d6439

      - name: Configure poetry for local virtualenvs
        run: poetry config virtualenvs.in-project true

      - name: Install server-ai dependency first
        working-directory: packages/sdk/server-ai
        run: poetry install

      - name: Install requirements
        working-directory: packages/ai-providers/server-ai-langchain
        run: poetry install

      - name: Run tests
        run: make -C packages/ai-providers/server-ai-langchain test
1 change: 1 addition & 0 deletions packages/ai-providers/server-ai-langchain/Makefile
@@ -20,6 +20,7 @@ test: install
lint: #! Run type analysis and linting checks
lint: install
	poetry run mypy src/ldai_langchain
	poetry run isort --check --atomic src/ldai_langchain
	poetry run pycodestyle src/ldai_langchain

.PHONY: build
182 changes: 177 additions & 5 deletions packages/ai-providers/server-ai-langchain/README.md
@@ -1,23 +1,195 @@
# LaunchDarkly AI SDK - LangChain Provider

[![PyPI](https://img.shields.io/pypi/v/launchdarkly-server-sdk-ai-langchain.svg)](https://pypi.org/project/launchdarkly-server-sdk-ai-langchain/)

This package provides LangChain integration for the LaunchDarkly Server-Side AI SDK, allowing you to use LangChain models and chains with LaunchDarkly's tracking and configuration capabilities.

## Installation

```bash
pip install launchdarkly-server-sdk-ai-langchain
```

You'll also need to install the LangChain provider packages for the models you want to use:

```bash
# For OpenAI
pip install langchain-openai

# For Anthropic
pip install langchain-anthropic

# For Google
pip install langchain-google-genai
```

## Quick Start

```python
import asyncio

from ldclient import LDClient, Config, Context
from ldai import init
from ldai.models import LDMessage
from ldai_langchain import LangChainProvider

# Initialize LaunchDarkly client
ld_client = LDClient(Config("your-sdk-key"))
ai_client = init(ld_client)

# Get AI configuration
context = Context.builder("user-123").build()
config = ai_client.config("ai-config-key", context, {})

async def main():
    # Create a LangChain provider from the AI configuration
    provider = await LangChainProvider.create(config)

    # Use the provider to invoke the model
    messages = [
        LDMessage(role="system", content="You are a helpful assistant."),
        LDMessage(role="user", content="Hello, how are you?"),
    ]

    response = await provider.invoke_model(messages)
    print(response.message.content)

asyncio.run(main())
```

## Usage

### Using LangChainProvider with the Create Factory

The simplest way to use the LangChain provider is with the static `create` factory method, which automatically creates the appropriate LangChain model based on your LaunchDarkly AI configuration:

```python
from ldai_langchain import LangChainProvider

# Create provider from AI configuration
provider = await LangChainProvider.create(ai_config)

# Invoke the model
response = await provider.invoke_model(messages)
```

### Using an Existing LangChain Model

If you already have a LangChain model configured, you can use it directly:

```python
from langchain_openai import ChatOpenAI
from ldai_langchain import LangChainProvider

# Create your own LangChain model
llm = ChatOpenAI(model="gpt-4", temperature=0.7)

# Wrap it with LangChainProvider
provider = LangChainProvider(llm)

# Use with LaunchDarkly tracking
response = await provider.invoke_model(messages)
```

### Structured Output

The provider supports structured output using LangChain's `with_structured_output`:

```python
response_structure = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

result = await provider.invoke_structured_model(messages, response_structure)
print(result.data)  # {"sentiment": "positive", "confidence": 0.95}
```
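Because `response_structure` is plain JSON Schema, you can also sanity-check `result.data` on the client side before acting on it. A minimal, dependency-free sketch using the schema above (`check_structured` is an illustrative helper, not part of this package, and is not a full JSON Schema validator):

```python
# Schema from the example above.
response_structure = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

def check_structured(data: dict, schema: dict) -> bool:
    """Check required keys, the sentiment enum, and the confidence type."""
    if not all(key in data for key in schema.get("required", [])):
        return False
    allowed = schema["properties"]["sentiment"]["enum"]
    return data["sentiment"] in allowed and isinstance(data["confidence"], (int, float))

print(check_structured({"sentiment": "positive", "confidence": 0.95}, response_structure))  # True
```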

### Tracking Metrics

Use the provider with LaunchDarkly's tracking capabilities:

```python
# Get the AI config with tracker
config = ai_client.config("ai-config-key", context, {})

# Create provider
provider = await LangChainProvider.create(config)

# Track metrics automatically
async def invoke():
    return await provider.invoke_model(messages)

response = await config.tracker.track_metrics_of(
    invoke,
    lambda r: r.metrics,
)
```
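Conceptually, `track_metrics_of` awaits the coroutine you pass in, applies your extractor to the result, records those metrics against the AI Config, and returns the result unchanged. A simplified, self-contained model of that flow (`FakeTracker` and `FakeResponse` are illustrative stand-ins, not SDK types):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class FakeResponse:
    content: str
    metrics: dict

class FakeTracker:
    """Illustrative stand-in: records whatever metrics the extractor produces."""
    def __init__(self):
        self.recorded = []

    async def track_metrics_of(self, fn, extract):
        response = await fn()                     # run the model call
        self.recorded.append(extract(response))   # record the extracted metrics
        return response                           # hand the result back unchanged

async def main():
    tracker = FakeTracker()

    async def invoke():
        return FakeResponse(content="hi", metrics={"total_tokens": 12})

    response = await tracker.track_metrics_of(invoke, lambda r: r.metrics)
    print(response.content, tracker.recorded)  # hi [{'total_tokens': 12}]

asyncio.run(main())
```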

### Static Utility Methods

The `LangChainProvider` class provides several utility methods:

#### Converting Messages

```python
from ldai.models import LDMessage
from ldai_langchain import LangChainProvider

messages = [
    LDMessage(role="system", content="You are helpful."),
    LDMessage(role="user", content="Hello!"),
]

# Convert to LangChain messages
langchain_messages = LangChainProvider.convert_messages_to_langchain(messages)
```
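The conversion maps LaunchDarkly message roles onto LangChain's message types, presumably `system` → `SystemMessage`, `user` → `HumanMessage`, and `assistant` → `AIMessage` (the standard LangChain roles). A dependency-free sketch of that mapping, using strings in place of the real LangChain message objects:

```python
# Illustrative role mapping; the actual method returns LangChain BaseMessage objects.
ROLE_TO_LANGCHAIN = {
    "system": "SystemMessage",
    "user": "HumanMessage",
    "assistant": "AIMessage",
}

def sketch_convert(messages):
    """Map (role, content) pairs to (langchain_type, content) pairs."""
    return [(ROLE_TO_LANGCHAIN[role], content) for role, content in messages]

print(sketch_convert([("system", "You are helpful."), ("user", "Hello!")]))
# [('SystemMessage', 'You are helpful.'), ('HumanMessage', 'Hello!')]
```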

#### Extracting Metrics

```python
from ldai_langchain import LangChainProvider

# After getting a response from LangChain
metrics = LangChainProvider.get_ai_metrics_from_response(ai_message)
print(f"Success: {metrics.success}")
print(f"Tokens used: {metrics.usage.total if metrics.usage else 'N/A'}")
```

#### Provider Name Mapping

```python
# Map LaunchDarkly provider names to LangChain provider names
langchain_provider = LangChainProvider.map_provider("gemini") # Returns "google-genai"
```
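The mapping is essentially a small lookup table from LaunchDarkly provider identifiers to the names LangChain expects. A sketch of that shape (only the `gemini` → `google-genai` entry is confirmed above; the pass-through fallback for unknown names is an assumption):

```python
# Illustrative lookup; only "gemini" -> "google-genai" is documented above.
PROVIDER_MAP = {
    "gemini": "google-genai",
}

def map_provider(ld_name: str) -> str:
    """Return the LangChain provider name, assuming unknown names pass through unchanged."""
    return PROVIDER_MAP.get(ld_name.lower(), ld_name)

print(map_provider("gemini"))  # google-genai
print(map_provider("openai"))  # openai
```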

## API Reference

### LangChainProvider

#### Constructor

```python
LangChainProvider(llm: BaseChatModel, logger: Optional[Any] = None)
```

#### Static Methods

- `create(ai_config: AIConfigKind, logger: Optional[Any] = None) -> LangChainProvider` - Factory method to create a provider from AI configuration
- `convert_messages_to_langchain(messages: List[LDMessage]) -> List[BaseMessage]` - Convert LaunchDarkly messages to LangChain messages
- `get_ai_metrics_from_response(response: AIMessage) -> LDAIMetrics` - Extract metrics from a LangChain response
- `map_provider(ld_provider_name: str) -> str` - Map LaunchDarkly provider names to LangChain names
- `create_langchain_model(ai_config: AIConfigKind) -> BaseChatModel` - Create a LangChain model from AI configuration

#### Instance Methods

- `invoke_model(messages: List[LDMessage]) -> ChatResponse` - Invoke the model with messages
- `invoke_structured_model(messages: List[LDMessage], response_structure: Dict[str, Any]) -> StructuredResponse` - Invoke with structured output
- `get_chat_model() -> BaseChatModel` - Get the underlying LangChain model

## Documentation

For full documentation, please refer to the [LaunchDarkly AI SDK documentation](https://docs.launchdarkly.com/sdk/ai/python).
14 changes: 11 additions & 3 deletions packages/ai-providers/server-ai-langchain/pyproject.toml
@@ -24,26 +24,34 @@ packages = [{ include = "ldai_langchain", from = "src" }]

[tool.poetry.dependencies]
python = ">=3.9,<4"
launchdarkly-server-sdk-ai = ">=0.11.0"
langchain-core = ">=0.2.0"
langchain = ">=0.2.0"

[tool.poetry.group.dev.dependencies]
pytest = ">=2.8"
pytest-cov = ">=2.4.0"
pytest-asyncio = ">=0.21.0"
mypy = "==1.18.2"
pycodestyle = ">=2.11.0"
isort = ">=5.12.0"

[tool.mypy]
python_version = "3.9"
ignore_missing_imports = true
install_types = true
non_interactive = true

[tool.isort]
profile = "black"
known_third_party = ["langchain", "langchain_core", "ldai"]
sections = ["FUTURE", "STDLIB", "THIRDPARTY", "FIRSTPARTY", "LOCALFOLDER"]


[tool.pytest.ini_options]
addopts = ["-ra"]
testpaths = ["tests"]
asyncio_mode = "auto"


[build-system]
2 changes: 2 additions & 0 deletions packages/ai-providers/server-ai-langchain/setup.cfg
@@ -0,0 +1,2 @@
[pycodestyle]
max-line-length = 120
@@ -1,14 +1,13 @@
"""LaunchDarkly AI SDK - LangChain Provider.

This package provides LangChain integration for the LaunchDarkly Server-Side AI SDK.
"""

from ldai_langchain.langchain_provider import LangChainProvider

__version__ = "0.1.0"

__all__ = [
    '__version__',
    'LangChainProvider',
]