Run tests on TPU #21425
Conversation
… hosted TPU based runner
updated build file path
updated build file path
Updated tpu_build job of actions.yml with specific runner label
Added container section
Codecov Report: ✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##           master   #21425      +/-   ##
==========================================
- Coverage   82.66%   82.48%   -0.19%
==========================================
  Files         577      577
  Lines       59419    59506      +87
  Branches     9313     9330      +17
==========================================
- Hits        49121    49084      -37
- Misses       7898     8010     +112
- Partials    2400     2412      +12

View full report in Codecov by Sentry.
The progress bar would always report the starting batch + 1 at the end of the batch. Now it takes `steps_per_execution` into account for the last batch reported. Fixes keras-team#20861
Using `keras.ops.math.logsumexp` with an int for `axis` in a functional model would throw an error.
…eras-team#21429) Arbitrary functions and classes are not allowed.
- Made `Operation` extend `KerasSaveable`; this required moving imports to avoid circular imports
- `Layer` no longer needs to extend `KerasSaveable` directly
- Made feature space `Cross` and `Feature` extend `KerasSaveable`
- Also disallowed the public function `enable_unsafe_deserialization`
…developed dtypes_new_test.py to use requires_tpu marker
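For context, a `requires_tpu` marker of this kind typically skips a test when no TPU is visible to the runtime. A minimal sketch, assuming the JAX backend and pytest; the helper and test names here are illustrative, not the PR's exact implementation:

```python
import jax
import pytest


def tpu_available() -> bool:
    # jax.devices() raises RuntimeError if no backend can initialize.
    try:
        return any(d.platform == "tpu" for d in jax.devices())
    except RuntimeError:
        return False


# Skip explicitly instead of letting the test silently fall back to CPU.
requires_tpu = pytest.mark.skipif(not tpu_available(), reason="requires a TPU")


@requires_tpu
def test_bfloat16_dtype_promotion():
    # Placeholder body; the real tests exercise TPU-specific dtype behavior.
    ...
```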
hertschuh
left a comment
Thanks for the updates!
It was observed that tests were silently falling back to CPU during execution, even when a TPU was available, while running them on a TPU VM. This led to the suspicion that the same could be happening with the currently passing tests. It's a good idea to log the device type on which tests are executed (cpu/cuda/tpu).
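As a minimal sketch of such logging, assuming the JAX backend and a pytest session hook (the hook body and its placement in conftest.py are illustrative, not part of this PR):

```python
# conftest.py
import logging

import jax


def pytest_sessionstart(session):
    # Log the platform of every visible device once per test session,
    # e.g. {'tpu'} on a healthy TPU VM vs. {'cpu'} on a silent fallback.
    platforms = {d.platform for d in jax.devices()}
    logging.warning("Tests will execute on device type(s): %s", platforms)
```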
hertschuh
left a comment
Oh, one last thing.
We want these tests to run only after the PR is approved, like the kokoro GPU tests. Do you know how to do that?
Hi @hertschuh, I'm not sure how to do that exactly. Should I add the TPU-specific workflow in …?
It looks like this would be a way to do it: https://github.com/orgs/community/discussions/25372#discussioncomment-3247688
But it means you need a separate yml file.
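Following that discussion, the gate could look roughly like this; a hedged sketch in which the file name, job name, and runner labels are hypothetical:

```yaml
# .github/workflows/tpu-tests.yml (hypothetical file name)
name: TPU tests
on:
  pull_request_review:
    types: [submitted]

jobs:
  tpu_build:
    # Run only once a review approving the PR has been submitted.
    if: github.event.review.state == 'approved'
    runs-on: [self-hosted, tpu]  # hypothetical self-hosted TPU runner labels
    steps:
      - uses: actions/checkout@v4
      - run: pytest -m requires_tpu  # assumes the marker added in this PR
```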
…the PR review is approved
Hello @hertschuh, I've added the condition to check that the PR is reviewed and approved before running the TPU tests job. Can you please let me know if the change is valid?
hertschuh
left a comment
Thanks!
hertschuh
left a comment
Last 2 nitpicks
Running tests on V6 TPU runner.