(CA) leaner go.mod configuration #8238
base: master
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jackfrancis. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
/assign @BigDarkClown
/test pull-cluster-autoscaler-e2e-azure-master
/assign @towca @gjtempleton @x13n
cluster-autoscaler/go.mod (Outdated)
replace k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.34.0-alpha.0
replace k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.34.0-alpha.0
replace k8s.io/kms => k8s.io/kms v0.34.0-alpha.0
Isn't this one also a no-op?
Good catch, thanks!
Thanks for cleaning this up! Judging from the file contents, I think one more line might be dropped, so adding a hold. LGTM otherwise. /lgtm
(force-pushed from 37e4890 to 65682d5)
New changes are detected. LGTM label has been removed.
@x13n this is ready for another review round, thank you!
replace k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.34.0-alpha.1
replace k8s.io/apimachinery => k8s.io/apimachinery v0.34.0-alpha.1
Hmm, I'm pretty sure the replaces for k/k staging modules were here intentionally, to cover a corner case where CA starts depending on a new k/k staging module.
More specifically, this is my understanding:
1. We want to keep CA's k/k dependencies (k8s.io/kubernetes + the k/k staging modules) in sync, all pinned to one Kubernetes version.
2. The replaces guarantee that we always use the pinned Kubernetes version. Requires only specify the minimum version, and a higher one could be picked when building, so we need both a require and a replace for every k/k dependency.
3. All of the k/k dependencies are updated via a script which adds both a require and a replace for each of them.
4. What happens if we add a new k/k dependency to CA? We need to either remember/mandate adding the replace as well, or remember/mandate running the dependency-upgrading script which adds it. We can avoid the remembering/mandating entirely if the script adds replaces for all possible k/k dependencies.
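As an illustrative sketch of that require+replace pairing (module names and versions here are examples, not the PR's actual go.mod contents):

```go.mod
require (
	k8s.io/apimachinery v0.34.0-alpha.1
	k8s.io/apiserver v0.34.0-alpha.1
)

// Without the replace, module resolution may select a higher version
// pulled in transitively by another dependency; the replace forces
// the pinned one regardless of what else is in the graph.
replace k8s.io/apimachinery => k8s.io/apimachinery v0.34.0-alpha.1

replace k8s.io/apiserver => k8s.io/apiserver v0.34.0-alpha.1
```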
What were the criteria by which you removed the k/k replaces? Maybe points 1 or 2 are not actually true / no longer apply?
The main criterion is making the go.mod easier to reason about (arguably the replaces aren't that much overhead), and setting it up for automated dependabot maintenance going forward.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This PR slims down the go.mod configuration for Cluster Autoscaler, specifically by removing a number of unnecessary replace directives. Some of the deleted directives reference packages that aren't actually present in the dependency graph; others are redundant because they pin the exact version already declared in the require set.
These changes produce no difference in the resulting go.sum.
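As a hedged sketch of how such redundant directives could be flagged mechanically (a hypothetical helper, not part of the PR; it only handles the parenthesized require-block syntax, not single-line requires):

```python
import re

def noop_replaces(gomod_text: str) -> list[str]:
    """Return module paths whose self-referential replace pins the exact
    version already listed in the require block (i.e. a no-op replace)."""
    required = {}
    in_require = False
    lines = [line.strip() for line in gomod_text.splitlines()]
    # First pass: collect module -> version from the require block.
    for line in lines:
        if line.startswith("require ("):
            in_require = True
        elif in_require and line == ")":
            in_require = False
        elif in_require:
            parts = line.split()
            if len(parts) >= 2:
                required[parts[0]] = parts[1]
    # Second pass: find replaces that restate the required version.
    noops = []
    for line in lines:
        m = re.match(r"replace\s+(\S+)\s+=>\s+(\S+)\s+(\S+)", line)
        if m and m.group(1) == m.group(2) and required.get(m.group(1)) == m.group(3):
            noops.append(m.group(1))
    return noops
```

Running something like this over the module file would leave only the replaces that actually change resolution, which is the criterion the PR description states.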
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: