Bump versions of infrastructure providers (CAPI and CAPO) #616
base: main
Conversation
Force-pushed from 3e9c97c to 7abf6b7
Nice work! Just a few minor suggestions from me.
Force-pushed from 60192c4 to 67e4d26
Force-pushed from e0af6ca to 7d9a7ee
```yaml
{{- if .issuerUrl }}
clusterConfiguration:
  apiServer:
    extraArgs:
```
oh my... we override this at quite a few places in the templates :/
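For context on why this bites in several places: my understanding is that the v1beta2 kubeadm types changed `extraArgs` from a plain map to a list of name/value pairs, so every template block that sets API server arguments needs the same rework. A minimal sketch of the shape change, assuming that reading of the v1beta2 types and using a hypothetical `oidc-issuer-url` flag rather than the chart's real keys:

```yaml
# Old v1beta1 shape: extraArgs as a map (illustrative flag only)
clusterConfiguration:
  apiServer:
    extraArgs:
      oidc-issuer-url: "{{ .issuerUrl }}"

# New v1beta2 shape: extraArgs as a list of name/value pairs (assumed field names)
clusterConfiguration:
  apiServer:
    extraArgs:
      - name: oidc-issuer-url
        value: "{{ .issuerUrl }}"
```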
```yaml
infrastructureRef:
  kind: OpenStackMachineTemplate
  apiGroup: infrastructure.cluster.x-k8s.io
  name: {{ include "openstack-cluster.controlplane.mt.name" . }}
```
Did we have to drop namespace here? Do we need namespace in the outer KubeadmControlPlane CRD?
We do have to drop namespace, yes. I'll check whether it can live on the outer KubeadmControlPlane CRD.
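For reference, the v1beta2 contract reference format drops `apiVersion` and `namespace` in favour of bare `apiGroup`/`kind`/`name`, with the reference always resolved in the owning object's namespace, which would explain why the field has to go here. A rough before/after sketch; the old form and the namespace value are reconstructed for illustration, not taken from the chart:

```yaml
# Reconstructed v1beta1-style reference (illustrative only)
infrastructureRef:
  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
  kind: OpenStackMachineTemplate
  name: {{ include "openstack-cluster.controlplane.mt.name" . }}
  namespace: {{ .Release.Namespace }}

# v1beta2-style reference: apiGroup instead of apiVersion, no namespace field
infrastructureRef:
  apiGroup: infrastructure.cluster.x-k8s.io
  kind: OpenStackMachineTemplate
  name: {{ include "openstack-cluster.controlplane.mt.name" . }}
```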
```diff
  {{- $nodeGroup := deepCopy $.Values.nodeGroupDefaults | mustMerge $nodeGroupOverrides }}
  ---
- apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
+ apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
```
Was v1beta1 removed? I think we probably need some notes in the release notes about what this capi-helm-charts version depends on, i.e. the minimum operator versions, so that upgrades are smooth. I am hoping we can bump all the CAPI/CAPO operators, nothing breaks while still using the older version of the Helm chart, then we move to this version of the Helm chart and we are good - but I am assuming we are too late for that? (Sorry for being slow getting my head around the upgrade path here.)
v1beta1 wasn't/isn't removed in this version, so clusters defined with that version can still be worked on by the updated CAPI - we test this by deploying a cluster at the latest tag in this repo, then upgrading to the tip of the feature branch (here: https://github.com/azimuth-cloud/capi-helm-charts/actions/runs/18321667523/job/52195259366?pr=616).
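The linked CI job is the authoritative check, but as a rough sketch of the upgrade test described there (step names, chart locations and the values file are assumptions, not the actual workflow):

```yaml
# Hypothetical outline of the upgrade test: deploy at the last release,
# then upgrade the same Helm release to the chart built from this branch.
steps:
  - name: Deploy a workload cluster from the latest released chart
    run: |
      helm upgrade demo capi-helm-charts/openstack-cluster \
        --install --version "${LATEST_TAG}" --values ci-values.yaml
  - name: Upgrade in place to the chart from this branch
    run: |
      helm upgrade demo ./charts/openstack-cluster --values ci-values.yaml
```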
| "cluster-api": "v1.11.2", | ||
| "cluster-api-janitor-openstack": "0.9.1", | ||
| "cluster-api-provider-openstack": "v0.11.3", | ||
| "cluster-api-provider-openstack": "v0.13.0", |
Do we need to move to v0.12.x first?
I don't think so, or at least there is no requirement that I can find for that. The biggest issue would be deprecation of the v1alpha6 and v1alpha7 APIs, which we have already moved away from in #423 (https://github.com/kubernetes-sigs/cluster-api-provider-openstack/releases/tag/v0.12.0).
FWIW CAPI explicitly supports N+3 upgrades so I'd guess that CAPO follows something similar (or at least doesn't strictly require N+1).
```diff
@@ -1,4 +1,5 @@
  clusterNetworking:
    internalNetwork:
+     networkMTU: 1450
```
It's scary that we need this now? Does it not just get the default MTU correctly?
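Possibly relevant: 1450 is the usual MTU for a VXLAN/Geneve tenant network (1500 minus encapsulation overhead), so pinning it in the test values may just make explicit what the cloud previously advertised; whether the newer CAPO stops inferring it is worth confirming. The values involved, assuming the chart passes this straight through to the managed internal network:

```yaml
# Test values sketch -- 1450 = 1500 minus typical VXLAN/Geneve overhead.
# Assumes the chart forwards this to the managed internal network's MTU.
clusterNetworking:
  internalNetwork:
    networkMTU: 1450
```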
OK, awesome work getting this green.
In my head, the upgrade path here for Azimuth and Magnum is not clear, in terms of bumping all the operators and the Helm chart version. I am guessing that when we install the new operators, we must stop using the old Helm chart version and start using this one? And the operators will do the version upgrade on the CRDs that are already in the cluster "for us"?
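On the mechanics of that last point: if the providers are managed via the upstream cluster-api-operator (which may not be exactly how Azimuth wires this up), bumping the version on the provider resources is what rolls out the new controllers and conversion webhooks, and existing v1beta1 objects keep being served through conversion until the chart moves to the new apiVersions. A hedged sketch, with resource names and namespaces assumed:

```yaml
# Illustration only -- assumes the upstream cluster-api-operator is in use;
# names and namespaces here are assumptions, not Azimuth's actual layout.
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
spec:
  version: v1.11.2
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: openstack
  namespace: capo-system
spec:
  version: v0.13.0
```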
Great work, hope this PR will be merged soon.