Hybrid mapping #419
Conversation
@Shaobo-Zhou Just fyi: you can also run mypy locally to debug this on your machine without relying on CI by following: https://mqt.readthedocs.io/projects/predictor/en/latest/development_guide.html#code-formatting-and-linting
@Shaobo-Zhou Is there any reason you keep closing your issues?
@burgholzer Hi, sorry for the confusion. I had been closing PRs because I noticed that each push triggered notifications, and I didn’t want to create unnecessary noise while I was still figuring things out.
Don't worry about the notifications. That's fine and it's exactly what draft PRs are for 😌
or
and commit the resulting changes. Once the PR is really ready for review, simply give us a ping here.
By the way: if you are annoyed that you always have to wait for someone from the team to trigger the CI, the easiest way around that is to submit another PR with a small change (a typo fix or similar) that we can quickly merge. That makes you a returning contributor, for whom CI runs automatically without requiring approval.
@burgholzer Thanks for the info!
Codecov Report❌ Patch coverage is
@burgholzer Could you also add @flowerthrower as a reviewer for this PR so we can clarify any potential confusion during the review process?
Done ✅
Hi Shaobo, and thank you for setting up this PR.
Great to see you working with our setup — this should make future PRs easier to manage.
A few general points (in addition to the inline comments):
- Why is it no longer possible to work with algorithm-level circuits (but was before)?
- This PR should focus as much as possible on adding the new actions.
- Other improvements not strictly required for these passes should be left out of this single-purpose PR; perhaps we can make it a bit leaner.
`determine_valid_actions_for_state`: I agree, better documentation would help, but the function generally works as intended.
No need to merge layout and routing.
If a circuit becomes unmapped, `determine_valid_actions_for_state` already enforces:
- re-layout if `self.layout` is unset
- re-routing if the circuit is unrouted

This depends on `self.layout` being correctly updated after optimizations. If that is currently not the case, I would suggest keeping the current structure but ensuring `self.layout` always matches the circuit state after each optimization (roughly as sketched below).
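For reference, the intended behavior reads roughly like this (a minimal sketch; the attribute and helper names here are assumptions about the environment, not the actual implementation):

```python
def determine_valid_actions_for_state(env) -> list[str]:
    """Sketch of the intended action-masking logic."""
    if not env.is_synthesized():    # high-level gates remain -> synthesize first
        return env.synthesis_actions
    if env.layout is None:          # no (valid) layout stored -> re-layout
        return env.layout_actions
    if not env.is_routed():         # laid out but not yet routed -> route
        return env.routing_actions
    # fully mapped circuit: optimizations (and termination) become valid
    return env.optimization_actions + env.terminal_actions
```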
Next steps:
I haven’t yet reviewed the new passes in detail, but you can already address this feedback (may affect the combined actions) and request another review once done.
Overall, I am happy to see this progressing; this brings us much closer to integrating your work into the MQT Predictor.
@@ -29,7 +30,7 @@
# TODO(denialhaag): Add 3.14 when all dependencies support it
# https://github.com/munich-quantum-toolkit/predictor/issues/420
PYTHON_ALL_VERSIONS = ["3.10", "3.11", "3.12", "3.13"]
It seems rather unfortunate to lose Python 3.13 support to enable a single pass; it would be preferable to make the pass optional based on the available Python version (ideally with a short comment that mentions the current limitation and can be revisited in the future).
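One way to express that, sketched here under the assumption that the problematic dependency is `qiskit-ibm-ai-local-transpiler` and that its module name matches the package name:

```python
import importlib.util
import sys

# Offer the AI-routing action only when the optional dependency is importable
# and the interpreter version is one it supports; on other Python versions the
# action is simply absent and everything else keeps working.
AI_ROUTING_AVAILABLE = (
    sys.version_info < (3, 13)
    and importlib.util.find_spec("qiskit_ibm_ai_local_transpiler") is not None
)
```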
@@ -66,7 +67,9 @@ def _run_tests(
        "test",
        *install_args,
        "pytest",
        "-v",
No expert on this, but these flags seem like artefacts from debugging and apparently were not necessary before (are they now?).
@@ -82,6 +85,9 @@ def tests(session: nox.Session) -> None:
@nox.session(reuse_venv=True, venv_backend="uv", python=PYTHON_ALL_VERSIONS)
def minimums(session: nox.Session) -> None:
    """Test the minimum versions of dependencies."""
    if platform.system() == "Windows":
Do we know why this is suddenly too slow? Seems unfortunate to miss this test if it was running fine before (note, there have always been longer test durations with the Predictor, so this is potentially fine).
Actually, the Windows test runs fine and is not skipped... I will delete this part.
@@ -133,5 +139,7 @@ def docs(session: nox.Session) -> None:
        "--frozen",
        "sphinx-autobuild" if serve else "sphinx-build",
        *shared_args,
        "-v",
Same as above.
@@ -44,8 +44,10 @@ dependencies = [
    "numpy>=1.24; python_version >= '3.11'",
    "numpy>=1.22",
    "numpy>=1.22,<2; sys_platform == 'darwin' and 'x86_64' in platform_machine and python_version < '3.13'",  # Restrict numpy v2 for macOS x86 since it is not supported anymore since torch v2.3.0
    "torch>=2.7.1,<2.8.0; sys_platform == 'darwin' and 'x86_64' in platform_machine and python_version < '3.13'",  # Restrict torch v2.3.0 for macOS x86 since it is not supported anymore.
What's the reason for deleting this?
        layouted_qc = layout_pm.run(qc)
        layout_props = dict(layout_pm.property_set)
    except Exception:
        return qc, {}
Perhaps an error message as below would be nice.
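For example, something along these lines would make the failure visible instead of silently returning the unmapped circuit (a sketch; the wrapper name and log message are placeholders, not the code in this PR):

```python
import logging

logger = logging.getLogger(__name__)


def run_layout_safely(layout_pm, qc):
    """Run the layout pass manager; fall back to the unchanged circuit on failure."""
    try:
        layouted_qc = layout_pm.run(qc)
        return layouted_qc, dict(layout_pm.property_set)
    except Exception:
        logger.exception("Layout pass failed for circuit '%s'; returning it unmapped.", qc.name)
        return qc, {}
```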
        qc: The input quantum circuit.
        max_iteration: A tuple (layout_trials, routing_trials) specifying
            how many times to try.
        metric_fn: Optional function to score circuits; defaults to circuit depth.
We could use the already existing `figure_of_merit`s here?
That would be an option; I tried to use the standard SWAP count metric, as in Sabre.
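For context, the multi-trial evaluation being discussed works roughly as follows (a minimal sketch; the helper name and exact signature are assumptions, and `metric_fn` stands for whatever scoring function is chosen, be it depth, SWAP count, or a `figure_of_merit`):

```python
from collections.abc import Callable

from qiskit import QuantumCircuit
from qiskit.transpiler import PassManager


def run_best_of_n(
    pm: PassManager,
    qc: QuantumCircuit,
    trials: int,
    metric_fn: Callable[[QuantumCircuit], float],
) -> QuantumCircuit:
    """Run a stochastic pass manager several times and keep the best-scoring result."""
    best_qc, best_score = qc, float("inf")
    for _ in range(trials):
        candidate = pm.run(qc)
        score = metric_fn(candidate)
        if score < best_score:
            best_qc, best_score = candidate, score
    return best_qc
```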
    if getattr(action, "stochastic", False):

        def metric_fn(circ: QuantumCircuit) -> float:
            return float(circ.count_ops().get("swap", 0))
In the docstring above, it was mentioned that this defaults to circuit depth?
@@ -37,7 +38,7 @@ def test_predictor_env_reset_from_string() -> None:
     device = get_device("ibm_eagle_127")
     predictor = Predictor(figure_of_merit="expected_fidelity", device=device)
     qasm_path = Path("test.qasm")
-    qc = get_benchmark("dj", BenchmarkLevel.ALG, 3)
+    qc = get_benchmark("dj", BenchmarkLevel.INDEP, 3)
If this restriction is really unavoidable (see general comment), then the examples and documentation should be updated accordingly.
    ml_predictor.compile_training_circuits(
        timeout=600, path_compiled_circuits=target_path, path_uncompiled_circuits=source_path, num_workers=1
    )
    ml_predictor.compile_training_circuits(
This should not be an issue, as the `timeout_watcher` still executes the function (just without the timeout).
Thanks for the feedback! I will work on those points in detail, but first some comments on the general points you mentioned:
Currently,
This structure already ensures re-layout whenever necessary and enforces the flow described in the predictor paper, i.e., starting with synthesis before mapping. That is, of course, as long as the layout attribute is correctly set after it has changed, e.g., in an optimization pass. EDIT: I can see how this order is broken in the case of not synthesized + laid out, where a routing pass is selected, but I don't see how that would break the re-layout whenever necessary.
I think the real issue is exactly that optimization passes change the circuit structure but don’t reset the layout (only layout actions set the layout), so the layout attribute never changes. Since a layout exists, the only viable option becomes routing, even though a re-layout might actually be needed.
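To make this concrete, the fix being suggested amounts to something like the following (a sketch; the attribute names and the structural-change check are assumptions, not the environment's actual code):

```python
def apply_optimization_action(env, pass_manager, qc):
    """Run an optimization pass and invalidate the stored layout if the circuit changed."""
    new_qc = pass_manager.run(qc)
    if new_qc.count_ops() != qc.count_ops():  # rough structural-change heuristic
        # Without this reset, a stale env.layout keeps routing as the only valid
        # option even though a re-layout might actually be needed.
        env.layout = None
    return new_qc
```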
Just throwing a random comment in here because I was roughly following the discussion.
Description
This PR introduces significant improvements to the action space and handling of stochastic mapping passes in the reinforcement learning environment:
🚀 Major Changes
**Expanded Action Space**
- Added `AIRouting` as a routing option and wrapped it in `SafeAIRouting` for robust integration into the RL pipeline.
- Introduced `mapping` actions that combine layout and routing stages.

**Motivation for Mapping Redesign**
- A combined `mapping` action ensures layout and routing are always performed together when remapping is needed, preventing mismatches between layout and circuit state and improving both optimality and reliability (see the sketch at the end of this description).

**Support for Stochastic Passes**
- Wrapped stochastic passes (`AIRouting`, `SabreLayout`) in a multi-trial evaluation loop, similar to the Qiskit O3 pipeline.
- Added `(layout_trials, routing_trials)` as parameters to control trial counts, enabling improved predictor performance and stability.
- `SabreLayout` and `VF2Layout`.

**Fixes and Enhancements**
- Fixed `OptimizeCliffords` by ensuring `CollectCliffords` runs beforehand.
- Used the basis gate set `['id', 'u1', 'u2', 'u3', 'cx']` (see https://quantum.cloud.ibm.com/docs/en/api/qiskit/0.24/transpiler) during:
  - `decompose`
  - `tk_to_qiskit` conversion
  - `Optimize1qGatesDecomposition`

  This ensures correct native gate usage, which is essential if individual gate counts are included in the feature space; otherwise RL cannot meaningfully learn from a generic gate basis.
- `GatesInBasis` in `rl/predictorenv.py`.
- Changed the benchmark level to `INDEP` in `test_predictor_rl.py`, since the current action space does not guarantee support for high-level gates.

**Dependency Update**
- Added `qiskit-ibm-ai-local-transpiler` to the dependencies.
- Pinned `networkx==2.8.5` to ensure compatibility with `qiskit-ibm-ai-local-transpiler`.
- `pytket_qiskit>=0.71.0`
- `torch>=2.7.1,<2.8.0`
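A sketch of what such a combined mapping action corresponds to, built from standard Qiskit passes (the actual wrapper in this PR, e.g. with `SafeAIRouting`, may differ):

```python
from qiskit.transpiler import CouplingMap, PassManager
from qiskit.transpiler.passes import (
    ApplyLayout,
    EnlargeWithAncilla,
    FullAncillaAllocation,
    SabreLayout,
    SabreSwap,
)


def build_mapping_pass(coupling_map: CouplingMap) -> PassManager:
    """A single 'mapping' action that always performs layout and routing together."""
    return PassManager(
        [
            SabreLayout(coupling_map, skip_routing=True),  # choose an initial layout only
            FullAncillaAllocation(coupling_map),           # pad the layout to the device size
            EnlargeWithAncilla(),
            ApplyLayout(),                                 # rewrite onto physical qubits
            SabreSwap(coupling_map),                       # route with SWAP insertion
        ]
    )
```

Running layout and routing as one action keeps the stored layout and the routed circuit consistent by construction, which is exactly the mismatch the redesign aims to prevent.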
Checklist: