🌱 Bump CAPI to v1.11 and k8s to v1.33 #5720
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
Welcome @clebs!
Hi @clebs. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@richardcase This PR is based on @bryan-cox's #5720. Changes:
Current state:
/ok-to-test
Should we be bumping KUBERNETES_VERSION_MANAGEMENT and KUBERNETES_VERSION_UPGRADE_FROM to target 1.33 in this file?
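For reference, here is a minimal sketch (in Go) of how one could print those two variables before and after such a bump. The file path and the assumption that the variables sit under a top-level variables: map in the e2e config are hypothetical, not a confirmed layout of this repository.

```go
// Hypothetical helper: dump the Kubernetes version variables from an e2e
// config file so a reviewer can compare them against the intended 1.33 target.
// The path and the "variables" map layout below are assumptions.
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type e2eConfig struct {
	Variables map[string]string `yaml:"variables"`
}

func main() {
	raw, err := os.ReadFile("test/e2e/data/e2e_conf.yaml") // assumed path
	if err != nil {
		log.Fatal(err)
	}

	var cfg e2eConfig
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}

	for _, key := range []string{"KUBERNETES_VERSION_MANAGEMENT", "KUBERNETES_VERSION_UPGRADE_FROM"} {
		fmt.Printf("%s = %q\n", key, cfg.Variables[key])
	}
}
```

Running something like this against the branch would show at a glance whether those two variables still point at an older release.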
cnmcavoy left a comment
See my comment on the subnet filtering regression
We can ignore it; it is also permafailing on periodics: https://storage.googleapis.com/k8s-triage/index.html?text=No%20Control%20Plane%20machines%20came%20into%20existence.%20&job=cluster-api-provider-aws-.*ci-artifacts&xjob=-release-
/test pull-cluster-api-provider-aws-e2e-clusterclass
I agree on the clusterctl one; it already fails on the still-running job: pull-cluster-api-provider-aws-e2e
/test pull-cluster-api-provider-aws-e2e
/test pull-cluster-api-provider-aws-test
/lgtm
Let's see what the tests say.
LGTM label has been added. Git tree hash: 63ace2d4e0e410d0924b4df57a2df01d636dbcef
New changes are detected. LGTM label has been removed.
/test pull-cluster-api-provider-aws-e2e
/retest
2 similar comments
/retest
/retest
@chrischdi we can wait until the janitor cleans things up; it may also help to start them in small batches (e2e / e2e-eks first, then the others).
/test pull-cluster-api-provider-aws-e2e-blocking
/test pull-cluster-api-provider-aws-e2e-blocking
/test pull-cluster-api-provider-aws-e2e-clusterclass
/test pull-cluster-api-provider-aws-e2e-conformance
* rosa: deflake unit test
* fixup
/test pull-cluster-api-provider-aws-e2e
/retest
/test pull-cluster-api-provider-aws-e2e-clusterclass
Note:
/test pull-cluster-api-provider-aws-e2e
@clebs: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
/kind support
What this PR does / why we need it:
This PR bumps CAPI to v1.11.0 and k8s to v1.33.3.
Which issue(s) this PR fixes (optional, in fixes #<issue_number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #5593
Replaces #5624
Special notes for your reviewer:
Checklist:
Release note: