Fix: Unit test CTE failures not being captured #5081
Conversation
failed_subtest = ""

if subtest := getattr(self, "_subtest", None):
    if cte := subtest.params.get("cte"):
        failed_subtest = f" (CTE {cte})"
This is to differentiate the CTE mismatch from the output one, e.g. if both subtests fail in a single test, this would be the output:
FF
======================================================================
----------------------------------------------------------------------
FAIL: test_example_full_model (/Users/vaggelisd/Desktop/tobiko/test_dir/tests/test.yaml)
----------------------------------------------------------------------
Data mismatch (CTE "filtered_orders_cte")
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Row ┃ id: Expected ┃ id: Actual ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ 0 │ 5 │ 1 │
└──────────┴─────────────────────────┴─────────────────────┘
----------------------------------------------------------------------
FAIL: test_example_full_model (/Users/vaggelisd/Desktop/tobiko/test_dir/tests/test.yaml)
----------------------------------------------------------------------
Data mismatch
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Row ┃ id: Expected ┃ id: Actual ┃
┡━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│ 0 │ 2 │ 1 │
└──────────┴─────────────────────────┴─────────────────────┘
----------------------------------------------------------------------
Test Failure Summary
======================================================================
Ran 1 tests against duckdb in 0.14 seconds.
Failed tests (1):
• /Users/vaggelisd/Desktop/tobiko/test_dir/tests/test.yaml::test_example_full_model
======================================================================
def get_fail_and_error_tests(self) -> t.List[ModelTest]:
    # If tests contain failed subtests (e.g. testing CTE outputs), we don't
    # want to report them as separate test failures
    test_name_to_test = {
        test.test_name: test
        for test, _ in self.failures + self.errors
        if isinstance(test, ModelTest)
    }
    return list(test_name_to_test.values())
Simple abstraction to simplify console.py
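The effect of that dedup can be sketched in isolation (the tuples below are stand-ins for the real `(test, message)` pairs, not sqlmesh objects):

```python
# A test that fails both in a CTE subtest and in its main body appears
# twice in the combined failures list; keying by test name collapses the
# duplicates while preserving insertion order (Python dicts are ordered).
failures = [
    ("test_example_full_model", 'Data mismatch (CTE "filtered_orders_cte")'),
    ("test_example_full_model", "Data mismatch"),
    ("test_other_model", "Data mismatch"),
]
test_name_to_test = {name: name for name, _ in failures}
print(list(test_name_to_test.values()))
```

So the summary reports one failed test per test name, no matter how many of its subtests failed.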
copy_test_file(original_test_file, tmp_path / "tests" / f"test_success_{i}.yaml", i)
copy_test_file(new_test_file, tmp_path / "tests" / f"test_failure_{i}.yaml", i)
Previously we copied these files directly, but unit tests don't work well if the name is duplicated; I'll follow up with a UX fix.
errors = result.errors
failures = result.failures
skipped = result.skipped

infos = []
if failures:
    infos.append(f"failures={len(failures)}")
if errors:
    infos.append(f"errors={len(errors)}")
if skipped:
    infos.append(f"skipped={skipped}")
Dead code, `infos` is no longer used.
a726d55 to 681d328
failed_subtest = ""

if subtest := getattr(self, "_subtest", None):
I wonder if it even makes sense at this point to rely on subtests rather than write our own context manager which sets the current CTE and then resets it.
Are there any other benefits that subtests offer us here?
I had the same thought as I was working on this. I think its merit is that it triggers the `addSubTest` callback when it finishes; otherwise we'd have to schedule that ourselves.
Not really a strong argument for it, but I'm not sure there's a strong argument against it either. Happy to reconsider if you have a better thought on the custom CM.
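For reference, the custom context manager suggested above could look something like this (a hypothetical sketch; `CTETracker` and `checking` are invented names, not sqlmesh code):

```python
from contextlib import contextmanager

class CTETracker:
    """Records which CTE is currently under comparison."""

    def __init__(self):
        self.current_cte = None

    @contextmanager
    def checking(self, cte):
        # Set the current CTE for the duration of the comparison and
        # always reset it, even if the comparison raises.
        self.current_cte = cte
        try:
            yield
        finally:
            self.current_cte = None

tracker = CTETracker()
with tracker.checking("filtered_orders_cte"):
    assert tracker.current_cte == "filtered_orders_cte"
print(tracker.current_cte)  # reset to None after the block
```

The trade-off noted in the reply below is that, unlike `subTest`, this would not fire the `addSubTest` callback for us when the block exits.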
Fixes #5067

In `unittest` internals, `TestCase::addSubTest` appends the result of that subtest directly to `self.errors` or `self.failures`:

However, our custom classes expected that this'd happen through `addError(...)` or `addFailure(...)` so that we can intercept these exceptions (since they get turned into strings before being stored in their parent classes).

To solve this regression, `ModelTestTextResult::addSubTest` now follows the parent implementation and instead calls the respective APIs, which do fill our own `original_failures` & `original_error` structures.

The output for that looks like:
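An illustrative sketch of that routing (simplified, not the actual `ModelTestTextResult` code): an `addSubTest` override that delegates to `addFailure` / `addError` instead of appending to the lists directly, so subclass bookkeeping such as an `original_failures` structure still runs:

```python
import unittest

class CapturingResult(unittest.TestResult):
    """Routes subtest outcomes through addFailure / addError."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Raw exc_info tuples, captured before the parent stringifies them.
        self.original_failures = []

    def addFailure(self, test, err):
        self.original_failures.append((test, err))
        super().addFailure(test, err)

    def addSubTest(self, test, subtest, err):
        # Instead of appending directly to self.failures / self.errors
        # (the default TestResult behaviour), delegate to the APIs that
        # subclasses can intercept.
        if err is not None:
            if issubclass(err[0], test.failureException):
                self.addFailure(subtest, err)
            else:
                self.addError(subtest, err)

class _Case(unittest.TestCase):
    def test_cte(self):
        with self.subTest(cte="filtered_orders_cte"):
            self.assertEqual(5, 1)

result = CapturingResult()
unittest.defaultTestLoader.loadTestsFromTestCase(_Case).run(result)
print(len(result.original_failures))  # the subtest failure was intercepted
```

With the default `addSubTest`, `original_failures` would stay empty because the failure bypasses `addFailure` entirely, which is the regression described above.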
Docs: unittest.TestResult | unittest.TestCase | unittest.runner