90 changes: 64 additions & 26 deletions libs/tracker/README.md
@@ -1,62 +1,100 @@
# LLMstudio by [TensorOps](http://tensorops.ai "TensorOps")
# LLMstudio Tracker

Prompt Engineering at your fingertips

![LLMstudio logo](https://imgur.com/Xqsj6V2.gif)
LLMstudio Tracker is the module of LLMstudio that allows monitoring and logging your LLM calls.
It supports seamless integration with the LLMstudio environment through configurable tracking servers, allowing for detailed insights into synchronous and asynchronous chat interactions. By leveraging LLMstudio Tracker, users can gain insights on model performance and streamline development workflows with actionable analytics.

## 🌟 Features

![LLMstudio UI](https://imgur.com/wrwiIUs.png)

- **LLM Proxy Access**: Seamless access to all the latest LLMs from OpenAI, Anthropic, and Google.
- **Custom and Local LLM Support**: Use custom or local open-source LLMs through Ollama.
- **Prompt Playground UI**: A user-friendly interface for engineering and fine-tuning your prompts.
- **Python SDK**: Easily integrate LLMstudio into your existing workflows.
- **Monitoring and Logging**: Keep track of your usage and performance for all requests.
- **LangChain Integration**: LLMstudio integrates with your already existing LangChain projects.
- **Batch Calling**: Send multiple requests at once for improved efficiency.
- **Smart Routing and Fallback**: Ensure 24/7 availability by routing your requests to trusted LLMs.
- **Type Casting (soon)**: Convert data types as needed for your specific use case.
- **Logs Persistence with SQLAlchemy**: You can configure the tracker to use a database of your choice (SQLite, Postgres, BigQuery, etc.)

## 🚀 Quickstart

Don't forget to check out our [docs](https://docs.llmstudio.ai) page.

## Installation

Install the latest version of **LLMstudio** using `pip`. We suggest that you create and activate a new environment using `conda`
Install the latest version of **LLMstudio** using `pip`. We suggest that you create and activate a new virtual environment.

```bash
pip install 'llmstudio[tracker]'
```

Create a `.env` file at the same path you'll run **LLMstudio**
## How to run

To configure the tracker host, port, and database URI, create a `.env` file at the same path you'll run **LLMstudio** and set values for:
- `LLMSTUDIO_TRACKING_HOST` (default: `0.0.0.0`)
- `LLMSTUDIO_TRACKING_PORT` (default: `50002`)
- `LLMSTUDIO_TRACKING_URI` (default: `sqlite:///./llmstudio_mgmt.db`)

If you skip this step, LLMstudio will just use the default values.


```bash
# Provider key, picked up from the environment by the LLM client
OPENAI_API_KEY="sk-api_key"

# Tracker settings; these should match the values you use in your code
LLMSTUDIO_TRACKING_HOST=0.0.0.0
LLMSTUDIO_TRACKING_PORT=50002
LLMSTUDIO_TRACKING_URI="your_db_uri"
```
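
`LLMSTUDIO_TRACKING_URI` follows the standard SQLAlchemy connection-string format, so any database SQLAlchemy supports can be used. A few illustrative values (the hostnames, credentials, and project names below are placeholders, and the non-SQLite options require the corresponding SQLAlchemy driver to be installed):

```bash
# SQLite file in the current directory (the default behaviour)
LLMSTUDIO_TRACKING_URI="sqlite:///./llmstudio_mgmt.db"

# Postgres, using the psycopg2 driver
LLMSTUDIO_TRACKING_URI="postgresql+psycopg2://user:password@localhost:5432/llmstudio"

# BigQuery, using the sqlalchemy-bigquery dialect
LLMSTUDIO_TRACKING_URI="bigquery://my-project/my_dataset"
```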

Now you should be able to run **LLMstudio Tracker** using the following command.
### Launching from a terminal

Now you should be able to run **LLMstudio Tracker** using the following command:

```bash
llmstudio server --tracker
```

### Launching directly in your code

Alternatively, you can start the server in your code:
```python
from llmstudio.server import start_servers
start_servers(proxy=False, tracker=True)
```

When the `--tracker` flag is set, the Swagger UI is available at [http://0.0.0.0:50002/docs](http://0.0.0.0:50002/docs) (default port).

If you didn't provide a database URI, LLMstudio will create a SQLite database at the root of your project and write the logs there.
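
As a quick check that the tracker is up (assuming the default host and port), you can request the Swagger page from a terminal; a `200` status code means the server is reachable:

```bash
curl -s -o /dev/null -w "%{http_code}\n" http://0.0.0.0:50002/docs
```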

## Usage

Now you can initialize an LLM, link it to your tracking configuration, and the tracker will log every call you make.

```python
from llmstudio.providers import LLM  # adjust this import if your LLMstudio version exposes LLM elsewhere
from llmstudio_tracker.tracker import TrackingConfig

# Host and port need to match what was set in your .env file (or the defaults)
tracker_config = TrackingConfig(host="0.0.0.0", port="50002")

# OPENAI_API_KEY can be set in your .env file
openai = LLM("openai", tracking_config=tracker_config)

openai.chat("Hey!", model="gpt-4o")
```
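
To avoid hard-coding values that can drift out of sync with your `.env` file, one option is to read the host and port from the same environment variables the server uses. A minimal sketch, assuming the defaults described above:

```python
import os

from llmstudio_tracker.tracker import TrackingConfig

# Reuse the server's environment variables, falling back to the documented defaults
tracker_config = TrackingConfig(
    host=os.getenv("LLMSTUDIO_TRACKING_HOST", "0.0.0.0"),
    port=os.getenv("LLMSTUDIO_TRACKING_PORT", "50002"),
)
```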

### Analysing the logs

```python
from llmstudio_tracker.tracker import Tracker

# Connect to the same tracking server that received your calls
tracker = Tracker(tracking_config=tracker_config)

# Fetch the logged calls; the response body is JSON
logs = tracker.get_logs()
logs.json()
```
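
If you prefer to explore the logs as a table, the JSON payload can be loaded into a pandas DataFrame. A minimal sketch, assuming `get_logs()` returns a JSON array of log records and that pandas is installed:

```python
import pandas as pd

# One row per logged call; the available columns depend on what the tracker stores
df = pd.DataFrame(logs.json())
print(df.head())
```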

## 📖 Documentation

- [Visit our docs to learn how the SDK works](https://docs.LLMstudio.ai) (coming soon)
- Checkout our [notebook examples](https://github.com/TensorOpsAI/LLMstudio/tree/main/examples) to follow along with interactive tutorials
- [Visit our docs to learn how it works](https://docs.LLMstudio.ai)
- Check out our [notebook examples](https://github.com/TensorOpsAI/LLMstudio/tree/main/examples) to follow along with interactive tutorials, especially:
- [Intro to LLMstudio Tracking](https://github.com/TensorOpsAI/LLMstudio/tree/main/examples/01_intro_to_llmstudio_with_tracking.ipynb)
- [BigQuery Integration](https://github.com/TensorOpsAI/LLMstudio/tree/main/examples/04_bigquery_integration.ipynb)

## 👨‍💻 Contributing

- Head over to our [Contribution Guide](https://github.com/TensorOpsAI/LLMstudio/tree/main/CONTRIBUTING.md) to see how you can help LLMstudio.
- Join our [Discord](https://discord.gg/GkAfPZR9wy) to talk with other LLMstudio enthusiasts.

## Training

[![Banner](https://imgur.com/XTRFZ4m.png)](https://www.tensorops.ai/llm-studio-workshop)

---

2 changes: 1 addition & 1 deletion libs/tracker/pyproject.toml
@@ -1,7 +1,7 @@
[tool.poetry]
name = "llmstudio-tracker"
version = "1.0.3a1"
description = ""
description = "LLMstudio Tracker is the module of LLMstudio that allows monitoring and logging your LLM calls. It supports seamless integration with the LLMstudio environment through configurable tracking servers, allowing for detailed insights into synchronous and asynchronous chat interactions. By leveraging LLMstudio Tracker, users can gain insights on model performance and streamline development workflows with actionable analytics."
authors = ["Diogo Goncalves <diogo.goncalves@tensorops.ai>"]
readme = "README.md"
