
Commit 72048c1

Merge pull request flatland-association#25 from flatland-association/20-step-by-step-guide-implementing-orchestrator
docs: Add some documentation on how to use docker images for submissions.
2 parents 00efb3c + 40a9ac2 commit 72048c1

File tree: 3 files changed (+95, -55 lines)

README.md

Lines changed: 74 additions & 34 deletions
@@ -21,16 +21,16 @@ It uses the Python library [fab-clientlib](https://pypi.org/project/fab-clientli
 ## Experiment Workflows
 
 * **offline-loop**: manually upload your test results (JSON) via
-    * FAB UI to initiate a submission
-    * FAB REST API using Python FAB Client Lib
+  * FAB UI to initiate a submission
+  * FAB REST API using Python FAB Client Lib
 * **closed-loop**:
-    * Algorithmic Researcher starts experiment from hub
-    * Orchestrator uploads results (JSON) to hub and closes submission
+  * Algorithmic Researcher starts experiment from hub
+  * Orchestrator uploads results (JSON) to hub and closes submission
 * **interactive-loop**:
-    * Human Factors Researcher starts experiment from hub
+  * Human Factors Researcher starts experiment from hub
   * Orchestrator uploads results (JSON) to hub
-    * Human Factors Researcher complements submission manually via FAB UI or Python CLI
-    * Human Factors Researcher closes submission manually
+  * Human Factors Researcher complements submission manually via FAB UI or Python CLI
+  * Human Factors Researcher closes submission manually
 
 > [!TIP]
 > Beware that interactive-loop is meant here from a technical perspective:
@@ -85,33 +85,33 @@ Arrows indicate information flow and not control flow.
 
 ```mermaid
 sequenceDiagram
-    participant FAB
-    participant Orchestrator
-    participant TestRunner_TestEvaluator
-    participant HumanFactorsResearcher
-    alt closed-loop
-        FAB ->> Orchestrator: BenchmarkId, SubmissionId, List[TestId], SubmissionDataUrl
-        Orchestrator ->> TestRunner_TestEvaluator: BenchmarkId,TestId,SubmissionId,SubmissionDataUrl
-        TestRunner_TestEvaluator ->> Orchestrator: <TestId>_<SubmissionId>.json
-        Orchestrator ->> FAB: <TestId>_<SubmissionId>.json
-        Orchestrator ->> FAB: close submission
-    else interactive-loop
-        FAB ->> Orchestrator: BenchmarkId, SubmissionId, List[TestId], SubmissionDataUrl
-        Orchestrator ->> TestRunner_TestEvaluator: BenchmarkId,TestId,SubmissionId,SubmissionDataUrl
-        opt automatic partial scoring
-            TestRunner_TestEvaluator ->> Orchestrator: <TestId>_<SubmissionId>.json
-            Orchestrator ->> FAB: upload <TestId>_<SubmissionId>.json
-        end
-        TestRunner_TestEvaluator ->> HumanFactorsResearcher: Any
-        HumanFactorsResearcher ->> FAB: upload/complement/edit <TestId>_<SubmissionId>.json
-        HumanFactorsResearcher ->> FAB: close submission
-    else offline-loop
-        HumanFactorsResearcher ->> TestRunner_TestEvaluator: Any
-        TestRunner_TestEvaluator ->> HumanFactorsResearcher: Any
-        HumanFactorsResearcher ->> FAB: create new submission SubmissionId
-        HumanFactorsResearcher ->> FAB: upload/complement/edit <TestId>_<SubmissionId>.json
-        HumanFactorsResearcher ->> FAB: close submission
+  participant FAB
+  participant Orchestrator
+  participant TestRunner_TestEvaluator
+  participant HumanFactorsResearcher
+  alt closed-loop
+    FAB ->> Orchestrator: BenchmarkId, SubmissionId, List[TestId], SubmissionDataUrl
+    Orchestrator ->> TestRunner_TestEvaluator: BenchmarkId,TestId,SubmissionId,SubmissionDataUrl
+    TestRunner_TestEvaluator ->> Orchestrator: <TestId>_<SubmissionId>.json
+    Orchestrator ->> FAB: <TestId>_<SubmissionId>.json
+    Orchestrator ->> FAB: close submission
+  else interactive-loop
+    FAB ->> Orchestrator: BenchmarkId, SubmissionId, List[TestId], SubmissionDataUrl
+    Orchestrator ->> TestRunner_TestEvaluator: BenchmarkId,TestId,SubmissionId,SubmissionDataUrl
+    opt automatic partial scoring
+      TestRunner_TestEvaluator ->> Orchestrator: <TestId>_<SubmissionId>.json
+      Orchestrator ->> FAB: upload <TestId>_<SubmissionId>.json
     end
+    TestRunner_TestEvaluator ->> HumanFactorsResearcher: Any
+    HumanFactorsResearcher ->> FAB: upload/complement/edit <TestId>_<SubmissionId>.json
+    HumanFactorsResearcher ->> FAB: close submission
+  else offline-loop
+    HumanFactorsResearcher ->> TestRunner_TestEvaluator: Any
+    TestRunner_TestEvaluator ->> HumanFactorsResearcher: Any
+    HumanFactorsResearcher ->> FAB: create new submission SubmissionId
+    HumanFactorsResearcher ->> FAB: upload/complement/edit <TestId>_<SubmissionId>.json
+    HumanFactorsResearcher ->> FAB: close submission
+  end
 ```
 
 ## TL;DR;
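
The `<TestId>_<SubmissionId>.json` passed around in the diagram is a plain JSON scores file. A minimal sketch of how a test runner might materialize it (the IDs and the score value below are placeholders, not values from this repo):

```python
import json


def write_result(test_id: str, submission_id: str, scores: dict) -> str:
    # Materialize the <TestId>_<SubmissionId>.json artifact from the diagram.
    path = f"{test_id}_{submission_id}.json"
    with open(path, "w") as f:
        json.dump(scores, f)
    return path


# Placeholder IDs; in the closed loop, the orchestrator receives the real ones from FAB.
write_result("test-1", "sub-42", {"primary": 0.87})
```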
@@ -121,7 +121,47 @@ sequenceDiagram
 In your domain-specific infrastructure:
 
 1. Clone this repo.
-2. Run orchestrator: The following command loads the railway orchestrator in the background:
+2. Implement orchestrator:
+
+   - In your `<domain>/orchestrator.py`, uncomment the test runner for the KPI you want to implement
+   - Implement the test runner:
+
+     ```python
+     from ai4realnet_orchestrators.test_runner import TestRunner
+
+
+     def load_submission_data(submission_data_url: str):
+         raise NotImplementedError()
+
+
+     def load_model(submission_data):
+         raise NotImplementedError()
+
+
+     def load_scenario_data(scenario_id: str):
+         raise NotImplementedError()
+
+
+     class YourTestRunner(TestRunner):
+         def __init__(self, submission_data_url: str):
+             super().__init__(submission_data_url=submission_data_url)
+             submission_data = load_submission_data(submission_data_url)
+             self.model = load_model(submission_data)
+
+         def run_scenario(self, scenario_id: str, submission_id: str):
+             # here you would implement the logic to run the test for the scenario:
+             scenario_data = load_scenario_data(scenario_id)
+             model = self.model
+
+             # data and other stuff initialized in the __init__ method can be used here
+             # for demonstration, we return a dummy result
+             return {
+                 "primary": -999,
+             }
+     ```
+
+   * `load_scenario_data`: map the `scenario_id` to your test configuration, e.g. the path to the data you want to load for the KPI
+   * `load_submission_data`: get the submission data from the string, e.g. parse the opaque `submission_data_url` string or fetch data from the URL
+   * `load_model`: load the model from the submission data, e.g. pull a Docker image from a remote registry
+   * `return` a dict containing the scenario scores (floats); by default, there is one field called `primary`
+     * please contact the Flatland Association if you need to change this or add secondary fields as well
+   * the full configuration can be found [here (json)](https://github.com/flatland-association/flatland-benchmarks/blob/main/definitions/ai4realnet/ai4realnet_definitions.json) and [here (sql)](https://github.com/flatland-association/flatland-benchmarks/blob/main/definitions/ai4realnet/ai4realnet_definitions.sql)
+
+3. Run orchestrator: The following command loads the railway orchestrator in the background:
 
    ```shell
   export DOMAIN="Railway"
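
Since this commit documents how to use Docker images for submissions, here is a hedged sketch of what the `load_model` hook from the bullets above might do — assuming the opaque `submission_data_url` carries a JSON payload naming an image. The payload shape, registry, and image name are illustrative; only the `docker pull`/`docker run` CLI calls are standard:

```python
import json
import subprocess


def load_submission_data(submission_data_url: str) -> dict:
    # Assumption: the opaque string is a JSON payload, e.g.
    # '{"docker_image": "registry.example.org/team/solver:v1"}'
    return json.loads(submission_data_url)


def load_model(submission_data: dict) -> str:
    # Pull the submitted image from the remote registry and hand back its tag.
    image = submission_data["docker_image"]
    subprocess.run(["docker", "pull", image], check=True)
    return image


def run_model(image: str, scenario_dir: str) -> str:
    # Run the container on one scenario, mounting the data read-only,
    # and capture whatever the container prints as its result.
    result = subprocess.run(
        ["docker", "run", "--rm", "-v", f"{scenario_dir}:/data:ro", image],
        check=True, capture_output=True, text=True,
    )
    return result.stdout
```

A `run_scenario` implementation would then call `run_model` with the path produced by `load_scenario_data` and map the container output to the `primary` score.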

ai4realnet_orchestrators/atm/test_runner.py

Lines changed: 3 additions & 2 deletions
@@ -1,5 +1,6 @@
 import os
 import pathlib
+
 import pandas as pd
 
 from ai4realnet_orchestrators.fab_exec_utils import exec_with_logging
@@ -13,11 +14,11 @@ def run_scenario(self, scenario_id: str, submission_id: str):
         # here you would implement the logic to run the test for the scenario:
         args = ["bluesky", "--detached", "--workdir", WORKDIR, "--scenfile", scenario_id]
         exec_with_logging(args)
-
+
         # Read the generated data file
         files = list(pathlib.Path(WORKDIR).glob("*.csv"))
         latest = max(files, key=os.path.getctime)
         data = pd.read_csv(latest, comment="#")
         return {
-            "output": data,
+            "primary": data,
         }
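
Note the tension with the README bullets above: `primary` is documented as a float score, while this runner returns the whole pandas DataFrame. If a scalar were wanted instead, a reduction along these lines might fit — the `score` column name is an assumption about the BlueSky CSV layout, not taken from this repo:

```python
import pandas as pd


def primary_score(data: pd.DataFrame) -> float:
    # Hypothetical reduction of the per-run CSV to a single scalar KPI;
    # the "score" column is assumed, adjust to the actual CSV schema.
    return float(data["score"].mean())
```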

ai4realnet_orchestrators/power_grid/test_runner.py

Lines changed: 18 additions & 19 deletions
@@ -2,31 +2,30 @@
 
 
 def load_submission_data(submission_data_url: str):
-  raise NotImplementedError()
+    raise NotImplementedError()
 
 
 def load_model(submission_data):
-  raise NotImplementedError()
+    raise NotImplementedError()
 
 
 def load_scenario_data(scenario_id: str):
-  raise NotImplementedError()
+    raise NotImplementedError()
 
 
 class YourTestRunner(TestRunner):
-  def __init__(self, submission_data_url: str):
-    super().__init__(submission_data_url=submission_data_url)
-    submission_data = load_submission_data(submission_data_url)
-    self.model = load_model(submission_data)
-
-  def run_scenario(self, scenario_id: str, submission_id: str):
-    # here you would implement the logic to run the test for the scenario:
-    scenario_data = load_scenario_data(scenario_id)
-    model = self.model
-
-    # data and other stuff initialized in the init method can be used here
-    # for demonstration, we return a dummy result
-    return {
-      "key_1": "value_1",
-      "key_2": "value_2",
-    }
+    def __init__(self, submission_data_url: str):
+        super().__init__(submission_data_url=submission_data_url)
+        submission_data = load_submission_data(submission_data_url)
+        self.model = load_model(submission_data)
+
+    def run_scenario(self, scenario_id: str, submission_id: str):
+        # here you would implement the logic to run the test for the scenario:
+        scenario_data = load_scenario_data(scenario_id)
+        model = self.model
+
+        # data and other stuff initialized in the __init__ method can be used here
+        # for demonstration, we return a dummy result
+        return {
+            "primary": -999,
+        }
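
A hedged usage sketch of the skeleton above, once the three loaders are implemented. The URL and IDs are placeholders; in production the orchestrator, not user code, drives these calls:

```python
# Placeholders throughout; real values arrive from FAB via the orchestrator.
runner = YourTestRunner(submission_data_url="https://example.org/submission.json")
scores = runner.run_scenario(scenario_id="scenario-001", submission_id="sub-42")
print(scores["primary"])  # the score reported back to FAB
```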
