Matching Engine Microservice

This project is a Django-based matching engine microservice designed for a crypto/FX exchange scenario.
It receives already-funded orders, matches them using a deterministic price–time priority engine, and persists all state in Redis with snapshot + recovery support via PostgreSQL.


Features

  • Price–time priority matching

    • Full and partial fills
    • Deterministic, reproducible results
    • Single-threaded matching via Celery worker (no race conditions, no double spending)
  • Order & Trade Storage

    • Primary state in Redis:
      • Orders stored as order:<id> hashes
      • Order book stored in sorted sets per symbol (bids / asks)
      • Trades stored as JSON in trades:<symbol> lists
    • PostgreSQL used for:
      • RedisEventLog — append-only log of all write operations to Redis
      • RedisSnapshot — periodic snapshots of Redis state for crash recovery
  • Resilience & Recovery

    • Periodic snapshot task (every 10 minutes) persists Redis state into PostgreSQL
    • On crash, the engine can rebuild Redis:
      • Load last snapshot
      • Re-apply logs from RedisEventLog after the snapshot timestamp
    • Ensures Redis state can be reconstructed exactly as it was before the crash
  • API (Django REST Framework)

    • POST /api/orders/ — enqueue a funded order for matching
    • POST /api/orders/{order_id}/cancel/ — cancel an open / partially-filled order
    • GET /api/debug/orders/ — return all orders from Redis (for debugging)
    • GET /api/debug/state/ — return all orders and trades from Redis (for E2E validation)
  • API Documentation (Swagger / OpenAPI)

    • /api/schema/ — raw OpenAPI schema (JSON/YAML, served by drf-spectacular)
    • /api/docs/ — Swagger UI
    • /api/redoc/ — ReDoc UI

Local Settings

You should create a file named local_settings.py in the crypto_matching_engine package. This file is imported by the main settings module and provides your local settings. It should define at least:

  • Secret Key
  • Allowed Hosts
  • Debug Status

in standard Django settings form.
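
For example, a minimal local_settings.py (the values here are placeholders):

# crypto_matching_engine/local_settings.py -- placeholder values, adjust locally
SECRET_KEY = "change-me"
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
DEBUG = True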


Tech Stack

  • Language: Python 3.12
  • Web framework: Django 5 + Django REST Framework
  • Matching engine core: Custom, Redis-backed order book
  • Broker & cache: Redis
  • Database: PostgreSQL
  • Task queue: Celery (worker + beat)
  • API Docs: drf-spectacular (OpenAPI 3 + Swagger UI)
  • Containerization: Docker + docker-compose

Architecture Overview

High-Level Flow

  1. Order ingestion

    • Client calls POST /api/orders/ with a funded order (symbol, side, price, quantity).
    • The view does not touch the order book directly:
      • It pushes a JSON payload into matching:queue in Redis (see the ingestion sketch after this list).
    • Immediate response: { "status": "ENQUEUED" }.
  2. Matching (single-threaded via Celery)

    • A Celery worker runs the task engine.process_matching_queue in a loop:
      • Pops items from matching:queue (Redis list).
      • For each payload, calls create_and_match_new_order(...).
    • create_and_match_new_order:
      • Generates a new order_id (order:id:seq).
      • Stores the order in order:<id> hash.
      • Matches against the opposite book (bids / asks) using price–time priority.
      • Produces one or more trades, updates both taker and maker orders, and maintains the Redis sorted sets.
    • All Redis writes are also logged as RedisEventLog rows in PostgreSQL.
  3. Snapshots & Recovery

    • A periodic Celery beat task, snapshot_redis_state, runs every 10 minutes:
      • Reads the full Redis state relevant to matching (orders, books, trades).
      • Serializes and stores it in RedisSnapshot.
    • Recovery flow:
      • Load last snapshot into Redis.
      • Fetch all RedisEventLog entries after snapshot.created_at.
      • Re-apply the logs in order to reproduce the exact Redis state (see the recovery sketch after this list).
  4. Debug & QA

    • /api/debug/orders/ reads order:* hashes from Redis and returns normalized JSON.
    • /api/debug/state/ returns:
      • All orders (from order:*).
      • All trades (from trades:* lists).
    • External E2E scripts can:
      • Post many concurrent orders.
      • Cancel orders.
      • Pull /api/debug/state/ to verify:
        • No negative balances or remaining quantities.
        • quantity ≈ remaining + traded.
        • Trade prices satisfy price-time priority and no crossed book remains.
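
A rough sketch of the ingestion path from step 1 (view name and Redis client setup are illustrative, not the project's exact code):

import json

import redis
from rest_framework.response import Response
from rest_framework.views import APIView

r = redis.Redis.from_url("redis://redis:6379/0")

class OrderCreateView(APIView):
    """POST /api/orders/ -- enqueue a funded order; matching happens in Celery."""

    def post(self, request):
        payload = {
            "symbol": request.data["symbol"],
            "side": request.data["side"],
            "price": float(request.data["price"]),
            "quantity": float(request.data["quantity"]),
        }
        # The view never touches the book; it only enqueues the order.
        r.rpush("matching:queue", json.dumps(payload))
        return Response({"status": "ENQUEUED"})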
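
And a sketch of the recovery flow from step 3. Only the model names come from this README; the snapshot payload layout and the log-replay helper are assumptions:

from engine.models import RedisEventLog, RedisSnapshot  # app path assumed

def recover_redis(r):
    """Rebuild Redis from the last snapshot plus the event log after it."""
    snapshot = RedisSnapshot.objects.latest("created_at")

    r.flushdb()
    # Assumed snapshot layout: {"hashes": {...}, "zsets": {...}, "lists": {...}}.
    for key, fields in snapshot.data.get("hashes", {}).items():
        r.hset(key, mapping=fields)
    for key, members in snapshot.data.get("zsets", {}).items():
        r.zadd(key, members)  # {member: score}
    for key, items in snapshot.data.get("lists", {}).items():
        if items:
            r.rpush(key, *items)

    # Re-apply every write logged after the snapshot, in insertion order.
    for entry in RedisEventLog.objects.filter(
        created_at__gt=snapshot.created_at
    ).order_by("id"):
        apply_logged_write(r, entry)  # hypothetical: dispatch on HSET/ZADD/LPUSH/...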

Running the Project with Docker

Prerequisites

  • Docker
  • docker-compose (or docker compose with recent Docker versions)

Environment Variables

Create a .env file in the project root (same directory as docker-compose.yml), for example:

POSTGRES_DB=crypto_matching_engine_db
POSTGRES_USER=adminuser
POSTGRES_PASSWORD=adminpassword
POSTGRES_HOST=db
POSTGRES_PORT=5432

REDIS_URL=redis://redis:6379/0

DJANGO_SETTINGS_MODULE=crypto_matching_engine.settings
DJANGO_SECRET_KEY=change-me
DJANGO_DEBUG=True
DJANGO_ALLOWED_HOSTS=*

Adjust usernames/passwords as needed. Make sure the same values are referenced in settings.py (or use dj-database-url if configured).
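
One way to wire these variables into settings.py (a sketch; adjust to the project's actual configuration):

import os

SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "change-me")
DEBUG = os.environ.get("DJANGO_DEBUG", "False") == "True"
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "*").split(",")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ["POSTGRES_DB"],
        "USER": os.environ["POSTGRES_USER"],
        "PASSWORD": os.environ["POSTGRES_PASSWORD"],
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}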

Starting the Stack

docker compose up --build

This will start:

  • web: Django + DRF application
  • db: PostgreSQL database
  • redis: Redis instance
  • celery_worker: Celery worker (single concurrency) consuming from the matching queue
  • celery_beat: Celery beat scheduler (snapshot + matching loop)

Once up, the API is served at http://localhost:8000 and the interactive docs at http://localhost:8000/api/docs/.


Running Tests

Inside the web container, you can run Django tests as usual:

docker compose exec web python manage.py test

You can also run a specific test module, e.g.:

docker compose exec web python manage.py test engine.tests.test_matching_engine_concurrent
docker compose exec web python manage.py test engine.tests.test_recovery

These tests cover:

  • Non-concurrent matching scenarios with various edge cases.
  • Heavy concurrent tests to assert:
    • Absence of deadlocks.
    • No race conditions.
    • No double spending on balances or order quantities.
  • Recovery tests that:
    • Create a snapshot.
    • Simulate a crash by clearing Redis.
    • Rebuild state from RedisSnapshot + RedisEventLog.
    • Assert the final Redis state equals the pre-crash state.
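
In spirit, the recovery test does something like this (the fixture and dump helpers are hypothetical; snapshot_redis_state and recover_redis are the task and replay flow described above):

import redis
from django.test import TestCase

r = redis.Redis.from_url("redis://redis:6379/0")

class RecoveryTest(TestCase):
    def test_redis_state_survives_crash(self):
        seed_orders_and_trades()        # hypothetical fixture helper
        before = dump_redis_state(r)    # hypothetical: orders + books + trades

        snapshot_redis_state()          # the periodic Celery task, run inline
        r.flushdb()                     # simulate the crash
        recover_redis(r)                # snapshot + RedisEventLog replay

        self.assertEqual(dump_redis_state(r), before)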

API Examples

Create Order

Request

POST /api/orders/

{
  "symbol": "BTC_USDT",
  "side": "BUY",
  "price": 100.0,
  "quantity": 1.0
}

Response

{
  "status": "ENQUEUED"
}

The actual order ID and matching details are produced asynchronously by the Celery worker and can be inspected via the debug endpoints.
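
For example, with curl:

curl -X POST http://localhost:8000/api/orders/ \
  -H "Content-Type: application/json" \
  -d '{"symbol": "BTC_USDT", "side": "BUY", "price": 100.0, "quantity": 1.0}'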

Cancel Order

Request

POST /api/orders/{order_id}/cancel/

{
  "reason": "User requested cancel"
}

Response

{
  "status": "CANCELLED"
}

The engine will update the order status in Redis and ensure it is removed from the book.
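
For example:

curl -X POST http://localhost:8000/api/orders/1/cancel/ \
  -H "Content-Type: application/json" \
  -d '{"reason": "User requested cancel"}'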

Debug Endpoints

  • GET /api/debug/orders/
    Returns:

    {
      "orders": [
        {
          "id": 1,
          "symbol": "BTC_USDT",
          "side": "BUY",
          "price": 100.0,
          "quantity": 1.0,
          "remaining": 0.0,
          "status": "FILLED",
          "created_at": 1732880000.123
        },
        ...
      ]
    }
  • GET /api/debug/state/
    Returns:

    {
      "orders": [...],
      "trades": [
        {
          "id": 1,
          "symbol": "BTC_USDT",
          "price": 100.0,
          "quantity": 1.0,
          "maker_order_id": 5,
          "taker_order_id": 12,
          "timestamp": 1732880001.123
        },
        ...
      ]
    }

These are intended only for local debugging / QA and should be protected or disabled in production.


Swagger / OpenAPI Integration

The project uses drf-spectacular for schema generation:

  • Add drf_spectacular to INSTALLED_APPS.
  • Configure DRF:
REST_FRAMEWORK = {
    "DEFAULT_SCHEMA_CLASS": "drf_spectacular.openapi.AutoSchema",
}
  • Configure URLs:
from drf_spectacular.views import (
    SpectacularAPIView,
    SpectacularSwaggerView,
    SpectacularRedocView,
)

urlpatterns = [
    # ...
    path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
    path(
        "api/docs/",
        SpectacularSwaggerView.as_view(url_name="schema"),
        name="swagger-ui",
    ),
    path(
        "api/redoc/",
        SpectacularRedocView.as_view(url_name="schema"),
        name="redoc",
    ),
]

For each POST endpoint, you can use serializers together with @extend_schema to get well-documented request/response bodies in Swagger UI.
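
A sketch (the serializer names are hypothetical):

from drf_spectacular.utils import extend_schema
from rest_framework.views import APIView

class OrderCreateView(APIView):
    @extend_schema(
        request=OrderCreateSerializer,               # hypothetical request serializer
        responses={200: EnqueueResponseSerializer},  # hypothetical response serializer
    )
    def post(self, request):
        ...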


API Tests & Load Testing

Alongside the Django test suite under engine/tests/, this project includes external API-level tests in the api_tests/ folder.
These tests exercise the service from the outside, via HTTP, against a running Docker stack (including Celery, Redis, and PostgreSQL).

1. Load testing with Locust

File: api_tests/locustfile.py

This file defines a MatchingUser load profile that:

  • randomly chooses a symbol from ["BTC_USDT", "ETH_USDT", "XRP_USDT"]
  • randomly chooses BUY/SELL, price, and quantity
  • sends POST /api/orders/ requests under configurable user load
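
A minimal sketch of such a profile (wait times and value ranges are illustrative):

# api_tests/locustfile.py -- sketch of the load profile described above
import random

from locust import HttpUser, between, task

SYMBOLS = ["BTC_USDT", "ETH_USDT", "XRP_USDT"]

class MatchingUser(HttpUser):
    wait_time = between(0.1, 0.5)

    @task
    def create_order(self):
        self.client.post("/api/orders/", json={
            "symbol": random.choice(SYMBOLS),
            "side": random.choice(["BUY", "SELL"]),
            "price": round(random.uniform(90, 110), 2),
            "quantity": round(random.uniform(0.1, 2.0), 4),
        })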

Run Locust locally (outside Docker)

  1. Install Locust:

    pip install locust
  2. Make sure the Docker stack is up:

    docker compose up
  3. Run Locust from the project root:

    locust -f api_tests/locustfile.py --host=http://localhost:8000
  4. Open the web UI at http://localhost:8089 (Locust's default port).

There you can configure:

  • the number of users
  • the spawn rate

and watch response times, failures, and percentiles as the test runs.

Run Locust from a Docker service (optional)

You can also create a locust service in docker-compose.yml that mounts the repo and runs:

locust -f api_tests/locustfile.py --host=http://web:8000

so that Locust runs next to the stack inside Docker.
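
A sketch of such a service (the image tag and mount paths are assumptions):

locust:
  image: locustio/locust
  volumes:
    - .:/mnt/locust
  working_dir: /mnt/locust
  command: -f api_tests/locustfile.py --host=http://web:8000
  ports:
    - "8089:8089"
  depends_on:
    - web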

2. End-to-end sanity check script

File: api_tests/e2e_sanity_check.py

This script performs a full end-to-end check against a running instance of the service:

  1. Fetches current orders via GET /api/debug/orders/ (for context).
  2. Creates:
    • 10 SELL orders as initial liquidity
    • 10 BUY orders that should match against the liquidity
  3. Waits a few seconds for Celery to process the queue.
  4. Fetches the full state via GET /api/debug/state/, which returns:
    • orders: all orders from Redis
    • trades: all trades from Redis
  5. Performs checks (see the sketch after this list):
    • basic invariants on orders:
      • 0 <= remaining <= quantity
      • FILLED → remaining == 0
      • OPEN → remaining == quantity (within tolerance)
    • consistency between orders and trades (if trades are present):
      • each trade has valid maker/taker orders
      • maker/taker have opposite sides (BUY vs SELL)
      • trade price lies between maker and taker prices
      • for each order: quantity ≈ remaining + sum(trade quantities)
  6. Cancels up to 3 cancellable orders (OPEN / PARTIALLY_FILLED) using POST /api/orders/{id}/cancel/.
  7. Fetches state again and re-runs the checks, plus explicit verification that the chosen orders are now CANCELLED.
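
The order-level checks in step 5 boil down to something like this (a sketch using the JSON shapes shown by the debug endpoints):

EPS = 1e-9  # tolerance for float comparisons

def check_invariants(orders, trades):
    # Total traded quantity per order, counting both sides of each trade.
    filled = {}
    for t in trades:
        for oid in (t["maker_order_id"], t["taker_order_id"]):
            filled[oid] = filled.get(oid, 0.0) + t["quantity"]

    for o in orders:
        # 0 <= remaining <= quantity
        assert -EPS <= o["remaining"] <= o["quantity"] + EPS
        # FILLED -> remaining == 0; OPEN -> remaining == quantity
        if o["status"] == "FILLED":
            assert abs(o["remaining"]) <= EPS
        elif o["status"] == "OPEN":
            assert abs(o["remaining"] - o["quantity"]) <= EPS
        # quantity ≈ remaining + sum of this order's trade quantities
        traded = filled.get(o["id"], 0.0)
        assert abs(o["quantity"] - (o["remaining"] + traded)) <= EPS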

Configuration

By default, the script targets:

BASE_URL = "http://localhost:8000"

You can optionally change it to read from an environment variable (e.g. MATCHING_BASE_URL) if you want to run it inside a container.
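
For example:

import os

BASE_URL = os.environ.get("MATCHING_BASE_URL", "http://localhost:8000")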

Running the E2E script

Outside Docker, from the project root:

pip install requests
docker compose up  # if not already running
python api_tests/e2e_sanity_check.py

Inside the web container (for example):

docker compose exec web python api_tests/e2e_sanity_check.py

This script is meant as a practical tool for:

  • verifying that matching + cancel logic behaves correctly end-to-end,
  • checking that Celery processing, Redis state, and debug endpoints are all wired correctly,
  • providing a reproducible scenario other developers can run after cloning the repo.

Concurrency & Safety Guarantees

  • Matching is effectively single-threaded per deployment:
    • Celery worker is run with --concurrency=1.
    • This guarantees deterministic matching and prevents race conditions inside the book.
  • Redis is used as the single source of truth for the live book.
  • PostgreSQL provides durability and recovery via:
    • Snapshots (RedisSnapshot model).
    • Append-only Redis write logs (RedisEventLog model).
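
For reference, the worker and beat processes are started along these lines (the Celery app path is an assumption based on the package name):

celery -A crypto_matching_engine worker --concurrency=1 --loglevel=info
celery -A crypto_matching_engine beat --loglevel=info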

High-load concurrent tests (outside the service, using HTTP) can be used to hammer:

  • POST /api/orders/ with many parallel clients.
  • POST /api/orders/{id}/cancel/ interleaved with takers.
  • Then GET /api/debug/state/ to verify:
    • No order quantity is double-spent.
    • The book has no crossed state (best bid < best ask).
    • The sum of trades per order matches its filled quantity.

Development Notes

  • Local inspection:
    • PostgreSQL: connect via psql to the db service (crypto_matching_engine_db).
    • Redis: use redis-cli on the redis service to inspect order:*, trades:*, and book keys (example commands after this list).
  • Admin / inspection:
    • You can register RedisEventLog and RedisSnapshot in Django admin to inspect them via /admin/.
  • Logging:
    • Each Redis write relevant to the matching engine should have a corresponding RedisEventLog entry.
    • This makes it possible to reconstruct the exact sequence of operations during incident analysis.
  • TODO: build a UI to view the live order book and matches.
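
For example (the book key names depend on the implementation):

docker compose exec redis redis-cli KEYS 'order:*'                # all order hashes
docker compose exec redis redis-cli HGETALL order:1               # one order's fields
docker compose exec redis redis-cli LRANGE trades:BTC_USDT 0 -1   # trades as JSON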
