The AI4REALNET Campaign Hub Orchestrator integrates with the Validation Campaign Hub (a.k.a. FAB).
This repo contains the domain-specific orchestrator and test runner implementations.
It uses the Python library fab-clientlib to upload results to FAB.
- The campaign benchmarks are set up in the Validation Campaign Hub by the domain-specific project managers (TU Delft, RTE, Flatland) together with the FLATLAND IT administrator.
- The domain-specific orchestrators are configured and deployed by the domain-specific IT administrators: see `orchestrator.py` in the blueprint.
- Experiments (Test Runners, Test Evaluators) are implemented by KPI Owners: see `test_runner_evaluator.py` in the blueprint (a minimal sketch follows this list).
- Experiments are carried out by Algorithmic Researchers and Human Factors Researchers, and results are uploaded as a submission to FAB.
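For orientation, here is a minimal, hypothetical sketch of a test runner/evaluator. The function name `run_test` and the score fields are illustrative assumptions and not the actual API of `test_runner_evaluator.py` in the blueprint; only the result file naming `<TestId>_<SubmissionId>.json` is taken from the sequence diagram below.

```python
import json
from pathlib import Path


def run_test(benchmark_id: str, test_id: str, submission_id: str,
             submission_data_url: str, output_dir: Path = Path(".")) -> Path:
    """Hypothetical test runner/evaluator: run one test and write the scores
    to <TestId>_<SubmissionId>.json (naming as in the sequence diagram)."""
    # Placeholder for the domain-specific evaluation logic, e.g. fetch the
    # submitted agent from submission_data_url and run it on the test scenario.
    scores = {
        "benchmark_id": benchmark_id,
        "test_id": test_id,
        "submission_id": submission_id,
        "kpis": {"example_kpi": 0.0},  # illustrative fields, not the FAB result schema
    }
    result_file = output_dir / f"{test_id}_{submission_id}.json"
    result_file.write_text(json.dumps(scores, indent=2))
    return result_file
```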
- offline-loop: manually upload your test results (JSON) via
  - FAB UI
  - FAB REST API using the Python FAB Client Lib (see the upload sketch after this list)
- closed-loop:
  - Algorithmic Researcher starts the experiment from the hub
  - Orchestrator uploads the results (JSON) to the hub and closes the submission
- interactive-loop:
  - Human Factors Researcher starts the experiment from the hub
  - Orchestrator uploads the results (JSON) to the hub
  - Human Factors Researcher complements the submission manually via FAB UI or Python CLI
  - Human Factors Researcher closes the submission manually
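For the offline-loop, the sketch below shows one way to push a result file to FAB over HTTP. The endpoint path and the bearer-token authentication are assumptions for illustration only; consult the FAB REST API documentation or use fab-clientlib for the actual calls.

```python
import requests

# Placeholders; see the configuration section below for where these values come from.
FAB_API_URL = "https://ai4realnet-int.flatland.cloud:8000"
SUBMISSION_ID = "<SubmissionId>"
TEST_ID = "<TestId>"
ACCESS_TOKEN = "<token obtained with your CLIENT_ID/CLIENT_SECRET>"  # assumed OAuth2-style auth

# Assumed endpoint path for illustration; the real route is defined by the FAB REST API.
url = f"{FAB_API_URL}/submissions/{SUBMISSION_ID}/results"

with open(f"{TEST_ID}_{SUBMISSION_ID}.json", "rb") as result_file:
    response = requests.post(
        url,
        files={"file": result_file},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
response.raise_for_status()
```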
Arrows indicate information flow and not control flow.
sequenceDiagram
participant FAB
participant Orchestrator
participant TestRunner_TestEvaluator
participant HumanFactorsResearcher
alt closed-loop
FAB ->> Orchestrator: BenchmarkId, SubmissionId, List[TestId], SubmissionDataUrl
Orchestrator ->> TestRunner_TestEvaluator: BenchmarkId, TestId, SubmissionId, SubmissionDataUrl
TestRunner_TestEvaluator ->> Orchestrator: <TestId>_<SubmissionId>.json
Orchestrator ->> FAB: <TestId>_<SubmissionId>.json
Orchestrator ->> FAB: close submission
else interactive-loop
FAB ->> Orchestrator: BenchmarkId, SubmissionId, List[TestId], SubmissionDataUrl
Orchestrator ->> TestRunner_TestEvaluator: BenchmarkId, TestId, SubmissionId, SubmissionDataUrl
opt automatic partial scoring
TestRunner_TestEvaluator ->> Orchestrator: <TestId>_<SubmissionId>.json
Orchestrator ->> FAB: upload <TestId>_<SubmissionId>.json
end
TestRunner_TestEvaluator ->> HumanFactorsResearcher: Any
HumanFactorsResearcher ->> FAB: upload/complement/edit <TestId>_<SubmissionId>.json
HumanFactorsResearcher ->> FAB: close submission
else offline-loop
HumanFactorsResearcher ->> TestRunner_TestEvaluator: Any
TestRunner_TestEvaluator ->> HumanFactorsResearcher: Any
HumanFactorsResearcher ->> FAB: create new submission SubmissionId
HumanFactorsResearcher ->> FAB: upload/complement/edit <TestId>_<SubmissionId>.json
HumanFactorsResearcher ->> FAB: close submission
end
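The closed-loop branch above can be read as a simple loop over the benchmark's tests. The sketch below is a simplified, framework-free rendering of that flow; in this repo the orchestrator runs as a Celery worker, and `run_test`, `upload_result`, and `close_submission` are stand-ins for the actual test runner and FAB calls.

```python
from pathlib import Path
from typing import Callable, List


def closed_loop(benchmark_id: str, submission_id: str, test_ids: List[str],
                submission_data_url: str,
                run_test: Callable[..., Path],           # test runner/evaluator, writes <TestId>_<SubmissionId>.json
                upload_result: Callable[[Path], None],   # uploads the JSON result to FAB
                close_submission: Callable[[str], None]) -> None:
    """Closed-loop: run every test of the benchmark, upload each result, then close the submission."""
    for test_id in test_ids:
        result_file = run_test(
            benchmark_id=benchmark_id,
            test_id=test_id,
            submission_id=submission_id,
            submission_data_url=submission_data_url,
        )
        upload_result(result_file)  # <TestId>_<SubmissionId>.json
    close_submission(submission_id)
```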
In your domain-specific infrastructure:
- Clone this repo.
- Run the orchestrator: the following commands configure and start the railway orchestrator in the background:
export BENCHMARK_ID=<get it from Flatland>
export BACKEND_URL=rpc://
export BROKER_URL=amqps://<USER - get it from Flatland>:<PW - get it from Flatland>@rabbitmq-int.flatland.cloud:5671//
export CLIENT_ID=<get it from Flatland>
export CLIENT_SECRET=<get it from Flatland>
export FAB_API_URL=https://ai4realnet-int.flatland.cloud:8000
export RABBITMQ_KEYFILE=.../certs/tls.key # get it from Flatland
export RABBITMQ_CERTFILE=.../certs/tls.crt # get it from Flatland
export RABBITMQ_CA_CERTS=.../certs/ca.crt # get it from Flatland
...
conda create -n railway-orchestrator python=3.13
conda activate railway-orchestrator
python -m pip install -r requirements.txt -r ai4realnet_orchestrators/railway/requirements.txt
python -m celery -A ai4realnet_orchestrators.railway.orchestrator worker -l info -n orchestrator@%n -Q ${BENCHMARK_ID} --logfile=$PWD/railway-orchestrator.log --pidfile=$PWD/railway-orchestrator.pid --detach
See https://docs.celeryq.dev/en/stable/reference/cli.html#celery-worker for the available options to start a Celery worker.
In particular, use the `--concurrency` option to set the worker pool size (e.g. `--concurrency=4` for four worker processes).
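As a rough illustration of how the environment variables above are consumed, the sketch below builds a Celery application from them using standard Celery settings (`broker_use_ssl`); it is an assumption-laden outline, not a copy of `ai4realnet_orchestrators/railway/orchestrator.py`, and the task name is purely illustrative.

```python
import os
import ssl

from celery import Celery

# Broker, result backend and TLS material come from the environment variables set above.
app = Celery(
    "railway-orchestrator",
    broker=os.environ["BROKER_URL"],    # amqps://... RabbitMQ broker
    backend=os.environ["BACKEND_URL"],  # rpc://
)
app.conf.broker_use_ssl = {
    "keyfile": os.environ["RABBITMQ_KEYFILE"],
    "certfile": os.environ["RABBITMQ_CERTFILE"],
    "ca_certs": os.environ["RABBITMQ_CA_CERTS"],
    "cert_reqs": ssl.CERT_REQUIRED,
}


@app.task(name="run_benchmark")  # illustrative task name, not the repo's actual task
def run_benchmark(benchmark_id: str, submission_id: str,
                  test_ids: list, submission_data_url: str) -> None:
    """Entry point that would be invoked by FAB via the BENCHMARK_ID queue."""
    ...
```

The worker started with the command above consumes from the queue named after `BENCHMARK_ID` (`-Q ${BENCHMARK_ID}`), which is how requests originating from FAB reach this orchestrator.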