This project is a Django-based matching engine microservice designed for a crypto/FX exchange scenario.
It receives already-funded orders, matches them using a deterministic price–time priority engine, and persists
all state in Redis with snapshot + recovery support via PostgreSQL.
- **Price–time priority matching**
  - Full and partial fills
  - Deterministic, reproducible results
  - Single-threaded matching via a Celery worker (no race conditions, no double spending)
- **Order & Trade Storage**
  - Primary state in Redis:
    - Orders stored as `order:<id>` hashes
    - Order book stored in sorted sets per symbol (`bids` / `asks`)
    - Trades stored as JSON in `trades:<symbol>` lists
  - PostgreSQL used for:
    - `RedisEventLog` — append-only log of all write operations to Redis
    - `RedisSnapshot` — periodic snapshots of Redis state for crash recovery
- **Resilience & Recovery**
  - A periodic snapshot task (every 10 minutes) persists Redis state into PostgreSQL
  - On crash, the engine can rebuild Redis:
    - Load the last snapshot
    - Re-apply logs from `RedisEventLog` after the snapshot timestamp
  - Ensures Redis state can be reconstructed exactly as it was before the crash
- **API (Django REST Framework)**
  - `POST /api/orders/` — enqueue a funded order for matching
  - `POST /api/orders/{order_id}/cancel/` — cancel an open / partially-filled order
  - `GET /api/debug/orders/` — return all orders from Redis (for debugging)
  - `GET /api/debug/state/` — return all orders and trades from Redis (for E2E validation)
- **API Documentation (Swagger / OpenAPI)**
  - `/api/schema/` — raw OpenAPI schema (JSON/YAML, served by drf-spectacular)
  - `/api/docs/` — Swagger UI
  - `/api/redoc/` — ReDoc UI
Create a file named `local_settings.py` in the `crypto_matching_engine` package. This file is imported by the main settings module and provides your local configuration. It should define at least:

- `SECRET_KEY`
- `ALLOWED_HOSTS`
- `DEBUG`

in standard Django settings form.
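For example, a minimal `local_settings.py` (all values here are local-development placeholders, not the project's real configuration):

```python
# crypto_matching_engine/local_settings.py
# Local-only values; never commit real secrets.

SECRET_KEY = "change-me-to-a-long-random-string"
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
DEBUG = True
```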
- Language: Python 3.12
- Web framework: Django 5 + Django REST Framework
- Matching engine core: Custom, Redis-backed order book
- Broker & cache: Redis
- Database: PostgreSQL
- Task queue: Celery (worker + beat)
- API Docs: drf-spectacular (OpenAPI 3 + Swagger UI)
- Containerization: Docker + docker-compose
- **Order ingestion**
  - Client calls `POST /api/orders/` with a funded order (`symbol`, `side`, `price`, `quantity`).
  - The view does not touch the order book directly: it pushes a JSON payload onto `matching:queue` in Redis (see the view sketch after this list).
  - Immediate response: `{ "status": "ENQUEUED" }`.
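A minimal sketch of what such an ingestion view could look like. Only the endpoint behaviour (enqueue to `matching:queue`, respond `ENQUEUED`) comes from the description above; the class and client names are assumptions:

```python
# Sketch of an enqueue-only ingestion view; names are illustrative.
import json

import redis
from rest_framework.response import Response
from rest_framework.views import APIView

redis_client = redis.Redis.from_url("redis://redis:6379/0")


class OrderCreateView(APIView):
    def post(self, request):
        payload = {
            "symbol": request.data["symbol"],
            "side": request.data["side"],
            "price": float(request.data["price"]),
            "quantity": float(request.data["quantity"]),
        }
        # Never touch the book here; just hand the order to the worker.
        redis_client.rpush("matching:queue", json.dumps(payload))
        return Response({"status": "ENQUEUED"})
```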
- **Matching (single-threaded via Celery)**
  - A Celery worker runs the task `engine.process_matching_queue` in a loop (sketched after this list):
    - Pops items from `matching:queue` (a Redis list).
    - For each payload, calls `create_and_match_new_order(...)`.
  - `create_and_match_new_order`:
    - Generates a new `order_id` (`order:id:seq`).
    - Stores the order in an `order:<id>` hash.
    - Matches against the opposite book (`bids` / `asks`) using price–time priority.
    - Produces one or more trades, updates both taker and maker orders, and maintains the Redis sorted sets.
  - All Redis writes are also logged as `RedisEventLog` rows in PostgreSQL.
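A condensed sketch of this loop. Only `process_matching_queue`, `create_and_match_new_order`, `matching:queue`, and `order:id:seq` come from the description above; the field names and statuses are illustrative assumptions:

```python
# Sketch of the single-threaded matching loop; details are illustrative.
import json

import redis
from celery import shared_task

r = redis.Redis.from_url("redis://redis:6379/0", decode_responses=True)


@shared_task
def process_matching_queue(max_items=100):
    """Drain up to max_items queued orders, strictly one at a time."""
    for _ in range(max_items):
        raw = r.lpop("matching:queue")
        if raw is None:
            break  # queue drained
        create_and_match_new_order(json.loads(raw))


def create_and_match_new_order(payload):
    order_id = r.incr("order:id:seq")  # monotonically increasing id
    r.hset(f"order:{order_id}", mapping={
        **payload, "remaining": payload["quantity"], "status": "OPEN",
    })
    # ... match against the opposite book, write trades, maintain the
    # bids/asks sorted sets, and log each write to RedisEventLog.
```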
- **Snapshots & Recovery**
  - A periodic Celery beat task, `snapshot_redis_state`, runs every 10 minutes:
    - Reads the full Redis state relevant to matching (orders, books, trades).
    - Serializes and stores it in `RedisSnapshot`.
  - Recovery flow (sketched after this list):
    - Load the last snapshot into Redis.
    - Fetch all `RedisEventLog` entries after `snapshot.created_at`.
    - Re-apply the logs in order to reproduce the exact Redis state.
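The recovery flow could look roughly like this. The model fields (`data`, `operation`, `args`), the import path, and the helper `load_snapshot_into_redis` are assumptions about the schema, not the project's actual code:

```python
# Illustrative recovery routine; field and helper names are assumptions.
from engine.models import RedisEventLog, RedisSnapshot  # assumed location


def rebuild_redis_from_postgres(r):
    snapshot = RedisSnapshot.objects.latest("created_at")

    r.flushdb()  # start from a clean slate
    load_snapshot_into_redis(r, snapshot.data)  # hypothetical helper

    # Replay every write logged after the snapshot, in original order.
    logs = (RedisEventLog.objects
            .filter(created_at__gt=snapshot.created_at)
            .order_by("created_at", "id"))
    for entry in logs:
        # e.g. entry.operation == "hset", entry.args == ["order:42", ...]
        getattr(r, entry.operation)(*entry.args)
```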
- **Debug & QA**
  - `/api/debug/orders/` reads `order:*` hashes from Redis and returns normalized JSON.
  - `/api/debug/state/` returns:
    - All orders (from `order:*`).
    - All trades (from `trades:*` lists).
  - External E2E scripts can (see the invariant-check sketch after this list):
    - Post many concurrent orders.
    - Cancel orders.
    - Pull `/api/debug/state/` to verify:
      - No negative balances or remaining quantities.
      - `quantity ≈ remaining + traded`.
      - Trade prices satisfy price–time priority and no crossed book remains.
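For instance, a sketch of an external check of the quantity invariant. The response fields follow the debug-endpoint examples later in this README; the float tolerance is an assumption:

```python
# Sketch: pull debug state and assert the quantity invariant externally.
import requests

state = requests.get("http://localhost:8000/api/debug/state/").json()

traded = {}  # order id -> total traded quantity
for trade in state["trades"]:
    for oid in (trade["maker_order_id"], trade["taker_order_id"]):
        traded[oid] = traded.get(oid, 0.0) + trade["quantity"]

for order in state["orders"]:
    assert 0 <= order["remaining"] <= order["quantity"]
    filled = traded.get(order["id"], 0.0)
    assert abs(order["quantity"] - (order["remaining"] + filled)) < 1e-6
```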
- Docker
- docker-compose (or `docker compose` with recent Docker versions)
Create a `.env` file in the project root (same directory as `docker-compose.yml`), for example:

```env
POSTGRES_DB=crypto_matching_engine_db
POSTGRES_USER=adminuser
POSTGRES_PASSWORD=adminpassword
POSTGRES_HOST=db
POSTGRES_PORT=5432
REDIS_URL=redis://redis:6379/0
DJANGO_SETTINGS_MODULE=crypto_matching_engine.settings
DJANGO_SECRET_KEY=change-me
DJANGO_DEBUG=True
DJANGO_ALLOWED_HOSTS=*
```

Adjust usernames/passwords as needed. Make sure the same values are referenced in `settings.py` (or use `dj-database-url` if configured).
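For reference, a sketch of how `settings.py` might consume these variables via plain `os.environ` (an assumption; the project may wire this differently, e.g. through `dj-database-url`):

```python
# Sketch: reading the .env-provided variables in settings.py.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "crypto_matching_engine_db"),
        "USER": os.environ.get("POSTGRES_USER", ""),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
CELERY_BROKER_URL = os.environ.get("REDIS_URL", "redis://redis:6379/0")
```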
Then build and start the stack:

```bash
docker compose up --build
```

This will start:

- `web`: the Django + DRF application
- `db`: the PostgreSQL database
- `redis`: the Redis instance
- `celery_worker`: a Celery worker (single concurrency) consuming from `matching:queue`
- `celery_beat`: the Celery beat scheduler (snapshot + matching loop)
Once up:
- Django app: http://localhost:8000/
- Swagger UI: http://localhost:8000/api/docs/
- ReDoc: http://localhost:8000/api/redoc/
Inside the web container, you can run Django tests as usual:
```bash
docker compose exec web python manage.py test
```

You can also run a specific test module, e.g.:

```bash
docker compose exec web python manage.py test engine.tests.test_matching_engine_concurrent
docker compose exec web python manage.py test engine.tests.test_recovery
```

These tests cover:
- Non-concurrent matching scenarios with various edge cases.
- Heavy concurrent tests that assert:
  - Absence of deadlocks.
  - No race conditions.
  - No double spending on balances or order quantities.
- Recovery tests that:
  - Create a snapshot.
  - Simulate a crash by clearing Redis.
  - Rebuild state from `RedisSnapshot` + `RedisEventLog`.
  - Assert the final Redis state equals the pre-crash state.
Request: `POST /api/orders/`

```json
{
  "symbol": "BTC_USDT",
  "side": "BUY",
  "price": 100.0,
  "quantity": 1.0
}
```

Response:

```json
{
  "status": "ENQUEUED"
}
```

The actual order ID and matching details are produced asynchronously by the Celery worker and can be inspected via the debug endpoints.
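The same request from Python, for example with `requests` (the URL and payload mirror the example above):

```python
import requests

resp = requests.post(
    "http://localhost:8000/api/orders/",
    json={"symbol": "BTC_USDT", "side": "BUY", "price": 100.0, "quantity": 1.0},
)
print(resp.json())  # expected: {"status": "ENQUEUED"}
```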
Request: `POST /api/orders/{order_id}/cancel/`

```json
{
  "reason": "User requested cancel"
}
```

Response:

```json
{
  "status": "CANCELLED"
}
```

The engine will update the order status in Redis and ensure it is removed from the book.
- `GET /api/debug/orders/` returns:

```json
{
  "orders": [
    {
      "id": 1,
      "symbol": "BTC_USDT",
      "side": "BUY",
      "price": 100.0,
      "quantity": 1.0,
      "remaining": 0.0,
      "status": "FILLED",
      "created_at": 1732880000.123
    },
    ...
  ]
}
```

- `GET /api/debug/state/` returns:

```json
{
  "orders": [...],
  "trades": [
    {
      "id": 1,
      "symbol": "BTC_USDT",
      "price": 100.0,
      "quantity": 1.0,
      "maker_order_id": 5,
      "taker_order_id": 12,
      "timestamp": 1732880001.123
    },
    ...
  ]
}
```
These are intended only for local debugging / QA and should be protected or disabled in production.
The project uses drf-spectacular for schema generation:
- Add `drf_spectacular` to `INSTALLED_APPS`.
- Configure DRF:

```python
REST_FRAMEWORK = {
    "DEFAULT_SCHEMA_CLASS": "drf_spectacular.openapi.AutoSchema",
}
```

- Configure URLs:
```python
from django.urls import path
from drf_spectacular.views import (
    SpectacularAPIView,
    SpectacularSwaggerView,
    SpectacularRedocView,
)

urlpatterns = [
    # ...
    path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
    path(
        "api/docs/",
        SpectacularSwaggerView.as_view(url_name="schema"),
        name="swagger-ui",
    ),
    path(
        "api/redoc/",
        SpectacularRedocView.as_view(url_name="schema"),
        name="redoc",
    ),
]
```

For each POST endpoint you can use serializers and `@extend_schema` to get a nice request/response body in Swagger UI.
Alongside the Django test suite under `engine/tests/`, this project includes external API-level tests in the `api_tests/` folder.
These tests exercise the service from the outside, via HTTP, against a running Docker stack (including Celery, Redis, and PostgreSQL).
File: `api_tests/locustfile.py`
This file defines a `MatchingUser` load profile that:

- randomly chooses a symbol from `["BTC_USDT", "ETH_USDT", "XRP_USDT"]`
- randomly chooses BUY/SELL, price, and quantity
- sends `POST /api/orders/` requests under configurable user load
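A condensed sketch of what such a profile can look like; the wait time and the price/quantity ranges here are assumptions, not the actual values in `locustfile.py`:

```python
# Illustrative Locust profile; numeric ranges are assumptions.
import random

from locust import HttpUser, between, task


class MatchingUser(HttpUser):
    wait_time = between(0.1, 0.5)

    @task
    def place_order(self):
        self.client.post("/api/orders/", json={
            "symbol": random.choice(["BTC_USDT", "ETH_USDT", "XRP_USDT"]),
            "side": random.choice(["BUY", "SELL"]),
            "price": round(random.uniform(90.0, 110.0), 2),
            "quantity": round(random.uniform(0.1, 2.0), 4),
        })
```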
- Install Locust:

```bash
pip install locust
```

- Make sure the Docker stack is up:

```bash
docker compose up
```

- Run Locust from the project root:

```bash
locust -f api_tests/locustfile.py --host=http://localhost:8000
```

- Open the web UI at http://localhost:8089/ (Locust's default port). There you can configure:
  - the number of users
  - the spawn rate

  and watch response times, failures, and percentiles.
You can also create a `locust` service in `docker-compose.yml` that mounts the repo and runs:

```bash
locust -f api_tests/locustfile.py --host=http://web:8000
```

so that Locust runs next to the stack inside Docker.
File: `api_tests/e2e_sanity_check.py`
This script performs a full end-to-end check against a running instance of the service:
- Fetches current orders via `GET /api/debug/orders/` (for context).
- Creates:
  - 10 SELL orders as initial liquidity
  - 10 BUY orders that should match against the liquidity
- Waits a few seconds for Celery to process the queue.
- Fetches the full state via `GET /api/debug/state/`, which returns:
  - `orders`: all orders from Redis
  - `trades`: all trades from Redis
- Performs checks:
  - basic invariants on orders:
    - `0 <= remaining <= quantity`
    - `FILLED` → `remaining == 0`
    - `OPEN` → `remaining == quantity` (within tolerance)
  - consistency between orders and trades (if trades are present):
    - each trade has valid maker/taker orders
    - maker/taker have opposite sides (BUY vs SELL)
    - trade price lies between maker and taker prices
    - for each order: `quantity ≈ remaining + sum(trade quantities)`
- Cancels up to 3 cancellable orders (`OPEN` / `PARTIALLY_FILLED`) using `POST /api/orders/{id}/cancel/`.
- Fetches state again and re-runs the checks, plus explicit verification that the chosen orders are now `CANCELLED`.
By default, the script targets:

```python
BASE_URL = "http://localhost:8000"
```

You can optionally change it to read from an environment variable (e.g. `MATCHING_BASE_URL`) if you want to run it inside a container.
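A one-line way to do that, using the variable name suggested above:

```python
import os

BASE_URL = os.environ.get("MATCHING_BASE_URL", "http://localhost:8000")
```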
Outside Docker, from the project root:
```bash
pip install requests
docker compose up   # if not already running
python api_tests/e2e_sanity_check.py
```

Inside the web container (for example):

```bash
docker compose exec web python api_tests/e2e_sanity_check.py
```

This script is meant as a practical tool for:
- verifying that matching + cancel logic behaves correctly end-to-end,
- checking that Celery processing, Redis state, and debug endpoints are all wired correctly,
- providing a reproducible scenario other developers can run after cloning the repo.
- Matching is effectively single-threaded per deployment:
  - The Celery worker runs with `--concurrency=1`.
  - This guarantees deterministic matching and prevents race conditions inside the book.
- Redis is used as the single source of truth for the live book.
- PostgreSQL provides durability and recovery via:
  - Snapshots (the `RedisSnapshot` model).
  - Append-only Redis write logs (the `RedisEventLog` model).
High-load concurrent tests (outside the service, over HTTP) can be used to hammer:

- `POST /api/orders/` with many parallel clients.
- `POST /api/orders/{id}/cancel/` interleaved with takers.
- Then `GET /api/debug/state/` to verify:
  - No order quantity is double-spent.
  - The book is never left crossed (best bid stays below best ask).
  - The sum of trades per order matches its filled quantity.
- Local inspection:
  - PostgreSQL: connect via `psql` to the `db` service (`crypto_matching_engine_db`).
  - Redis: use `redis-cli` on the `redis` service to inspect `order:*`, `trades:*`, and the book keys (see the sketch after this list).
- Admin / inspection:
  - You can register `RedisEventLog` and `RedisSnapshot` in the Django admin to inspect them via `/admin/`.
- Logging:
  - Each Redis write relevant to the matching engine should have a corresponding `RedisEventLog` entry.
  - This makes it possible to reconstruct the exact sequence of operations during incident analysis.
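The same inspection from Python with redis-py, assuming the `redis` service port is exposed on localhost; the book key names (`bids:<symbol>` / `asks:<symbol>`) are assumptions, only `order:*` and `trades:*` are confirmed above:

```python
# Sketch: inspect the live Redis state; book key names are assumptions.
import redis

r = redis.Redis.from_url("redis://localhost:6379/0", decode_responses=True)

print(r.hgetall("order:1"))                                # one order hash
print(r.zrange("bids:BTC_USDT", 0, -1, withscores=True))   # assumed bid-side key
print(r.lrange("trades:BTC_USDT", 0, 9))                   # ten most recent trades
```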
- Create a UI to view the live order book and matches.