
Conversation


@CIGbalance CIGbalance commented Oct 16, 2025

Added a preliminary script for checking consistency, along with a README detailing the envisioned GitHub workflow.

Main question for review: Should I be checking for anything else? Is there anything that is too strict?

Closes #123

"textual description",
]

UNIQUE_FIELDS = ["name", "reference", "implementation"]
Collaborator:

This triggers an error when:

  • reference is '' (empty), which is probably not what we want. Thinking about it: a reference can also introduce multiple problems, so it does not need to be unique in any case? What do you think?
  • Similarly, I expect the same happens for the implementation field, which may also not need to be unique, because a single package may implement multiple problems/benchmarks. (I guess it might depend a bit on how specific we want the reference to the implementation to be, but it probably cannot always be specific enough to be unique.)

Collaborator Author:

Yeah, I added it since I was hoping this could weed out instances where the same problem was added multiple times just with a different name. But you are right, there are several valid reasons for why the reference/implementation could be the same. I am going to change it to a warning instead.
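
For context, a minimal sketch of the split described here, using the UNIQUE_WARNING_FIELDS name that appears later in this PR (the exact assignment of fields to the two lists is an assumption):

```python
# Hard constraint: a duplicate value here rejects the PR.
UNIQUE_FIELDS = ["name"]
# Soft constraint: a duplicate here only produces a warning, since one
# reference or implementation can legitimately cover several problems.
UNIQUE_WARNING_FIELDS = ["reference", "implementation"]
```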

```python
import sys

# Define the required fields your YAML must have
REQUIRED_FIELDS = [
```
Collaborator:

We have the same list of fields in yaml_to_html.py (called default_columns). We should probably maintain it in a single place, and/or let one inherit from the other?

Collaborator Author:

Great catch, I am going to import it from that file.
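
A minimal sketch of the deduplication, assuming yaml_to_html.py sits on the import path and exposes default_columns at module level:

```python
# validate_yaml.py -- reuse the column list maintained in yaml_to_html.py
from yaml_to_html import default_columns

# The required fields are exactly the columns the HTML table renders.
REQUIRED_FIELDS = list(default_columns)
```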



```python
def check_fields(data):
    if len(data) != len(REQUIRED_FIELDS):
```
Collaborator:

I think this should test that there are at least this many fields. I would explicitly want people to add new fields for interesting properties we do not collect yet. Then:

  • Properties not in the REQUIRED_FIELDS should be checked against a (to be created) NOT_REQUIRED_FIELDS list, which would contain all other fields (might be empty for now).
  • A message should be returned listing the new fields (found in neither the required nor the not-required lists). An OPL maintainer should verify they are actually new (not just a new name for an existing property), and then either add them to the not-required list or fix them (or request a change on the PR) to use the correct existing name.
  • Ideally all other checks are still done before such a list is returned, so we know everything else already passes, and verifying new fields (or maybe other similar things) is all that remains.

Collaborator Author:

Ok, I removed the length check because it doesn't really add anything in this case.
I am now adding warnings for unknown fields that the reviewers can check.
Adding to the optional fields variable will be done on merge in the merging script, so this is not part of this PR. For now there are just no optional fields.
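
A minimal sketch of the relaxed check described above, assuming an OPTIONAL_FIELDS list that is empty for now and the ::error::/::warning:: workflow-command syntax used elsewhere in this PR:

```python
OPTIONAL_FIELDS = []  # to be populated by the merging script; empty for now

def check_fields(entry):
    """Error on missing required fields; warn on unrecognised ones."""
    ok = True
    for field in REQUIRED_FIELDS:
        if field not in entry:
            print(f"::error::Required field '{field}' is missing.")
            ok = False
    for field in entry:
        if field not in REQUIRED_FIELDS and field not in OPTIONAL_FIELDS:
            print(f"::warning::Unknown field '{field}'. Please verify it is "
                  "genuinely new and not a renamed existing field.")
    return ok
```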


kvdblom commented Nov 20, 2025

Something else that came to mind that I did not think about or try yesterday:

  • What happens if multiple new problems/benchmarks are added at the same time (in a single yaml file)?

@CIGbalance (Collaborator Author):

> Something else that came to mind that I did not think about or try yesterday:
>
> * What happens if multiple new problems/benchmarks are added at the same time (in a single yaml file)?

Yes, very good point. I changed this now to test for each entry and not assume it is only one.

Note: I also changed the format of the prints. With the new syntax, they should be automatically picked up by the GitHub Action. This is to be tested in PR #127.
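
A minimal sketch of both changes, assuming the file parses to a list of problem entries; validate_entry is an illustrative stand-in for the real per-entry checks, and the ::error:: prefix is GitHub Actions workflow-command syntax that the runner turns into annotations:

```python
import sys
import yaml

def validate_entry(entry):
    """Stand-in for the real per-entry checks (fields, uniqueness, ...)."""
    return isinstance(entry, dict) and bool(entry.get("name"))

def validate_file(path):
    """Validate every entry in the YAML file, not just the first."""
    with open(path) as f:
        entries = yaml.safe_load(f)
    if isinstance(entries, dict):  # a single problem not wrapped in a list
        entries = [entries]
    ok = True
    for i, entry in enumerate(entries):
        if not validate_entry(entry):
            print(f"::error::Entry {i} failed validation.")
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if validate_file("new_problem.yaml") else 1)
```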

@CIGbalance CIGbalance requested a review from kvdblom November 27, 2025 10:25
@kvdblom kvdblom mentioned this pull request Nov 27, 2025
```python
# Load existing problems
read_status, existing_data = read_data(PROBLEMS_FILE)
if read_status != 0:
    print("::eror::Could not read existing problems for novelty check.")
```
Collaborator:

typo: eror -> error

print("::eror::Could not read existing problems for novelty check.")
return False
assert existing_data is not None
for field in UNIQUE_FIELDS or UNIQUE_WARNING_FIELDS:
Collaborator:

This does not seem to work, and ends up only checking the UNIQUE_FIELDS: a non-empty UNIQUE_FIELDS is truthy, so the `or` short-circuits and the second list is never reached.
Change to: UNIQUE_FIELDS + UNIQUE_WARNING_FIELDS

An edge case I'm not entirely sure what to do with: fields that are left empty also trigger this warning, e.g.: ::warning::Field 'reference' with value '' already exists. Consider choosing a unique value.
Maybe that is fine, but maybe it would be nicer to ignore (not give a warning for) those cases.

This also made me realise/check: if UNIQUE_FIELDS are left empty (i.e., the field exists but has no value), they currently don't raise an error. It would probably be good if they did, because currently I can add a problem without a name.
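
A minimal sketch combining the three fixes raised here (list concatenation instead of `or`, no duplicate warning for empty values, and an error when a hard-unique field is empty); the helper name check_unique and the shape of existing_data (a list of dicts) are assumptions:

```python
def check_unique(entry, existing_data):
    """Check the new entry's unique fields against the existing problems."""
    ok = True
    # `+` concatenates the two lists; `or` would short-circuit on the
    # non-empty UNIQUE_FIELDS and never look at UNIQUE_WARNING_FIELDS.
    for field in UNIQUE_FIELDS + UNIQUE_WARNING_FIELDS:
        value = entry.get(field)
        if not value:
            if field in UNIQUE_FIELDS:
                # Hard-unique fields such as name must not be empty.
                print(f"::error::Field '{field}' must not be empty.")
                ok = False
            continue  # no duplicate warning for empty values
        if any(old.get(field) == value for old in existing_data):
            if field in UNIQUE_FIELDS:
                print(f"::error::Field '{field}' with value '{value}' already exists.")
                ok = False
            else:
                print(f"::warning::Field '{field}' with value '{value}' already "
                      "exists. Consider choosing a unique value.")
    return ok
```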

Collaborator:

This is probably the only key thing, the rest is minor and/or future problems.

The intended way of adding a new problem to the repository is thus as follows:

* Change the [new_problem.yaml](new_problem.yaml) template file to fit the new problem.
* Create a PR which modifies with the changes (for example with a fork).
Collaborator:

Incomplete sentence; maybe remove 'modifies'?


* On PR creation and commits to the PR, the [validate_yaml.py](validate_yaml.py) script is run to check that the YAML file is valid and consistent. It is expecting the changes to be in the [new_problem.yaml](new_problem.yaml) file.
* Then the PR should be reviewed manually.
* When the PR is merged into the main branch, a second script runs (which doesn't exist yet), that adds the content of [new_problem.yaml](new_problem.yaml) to the [problems.yaml](../problems.yaml) file, and returns it to its previous version.
Collaborator:

I don't understand what you mean by "and returns it to its previous version".

Collaborator:

Maybe not a now issue, but when the new template is ready, there may be REQUIRED_FIELDS that are subfields of other fields. E.g., something like this:

```yaml
variables:
  dimensionality: scalable
  variable type: continuous
```

These would not pass the current checks.
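
A minimal sketch of one way to handle such nested requirements, assuming required fields could be spelled as dotted paths; the dotted-path convention and names below are hypothetical, not part of the current script:

```python
NESTED_REQUIRED_FIELDS = [
    "variables.dimensionality",  # hypothetical nested requirements
    "variables.variable type",
]

def has_field(entry, dotted_path):
    """Walk nested dicts along a dotted path such as 'variables.dimensionality'."""
    node = entry
    for key in dotted_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return True
```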
