OCM-19812: Update development docs #559
base: master
Conversation
@bhushanthakur93: This pull request references OCM-19812, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch of this PR: expected the story to target the "4.21.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: bhushanthakur93, rawsyntax. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
@clcollins could you please review this MR? Thanks!
docs/development.md
```diff
-### Run using cluster routes
+#### Run using cluster routes

 Run locally using standard namespace and cluster routes.
```
The command below should be changed to `ROUTES=true make run`; the `run-standard-routes` target no longer exists.
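For context, a minimal sketch of the intended invocation (assuming the `run` target in the repo's Makefile reads the `ROUTES` environment variable, as the comment suggests):

```sh
# Run the operator locally against the cluster's routes; replaces the removed
# run-standard-routes target.
ROUTES=true make run
```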
done!
docs/development.md (Outdated)
- Scale down existing MUO deployment
```
oc scale deployment managed-upgrade-operator -n managed-upgrade-operator --replicas=0
```
When I did this, the namespace for the existing MUO deployment was `openshift-managed-upgrade-operator`:

```console
lmizell@compu-p1:~/Development/ocm/managed-upgrade-operator$ oc get deployment -A | grep managed-upgrade-operator
openshift-managed-upgrade-operator   managed-upgrade-operator   1/1   1   1   80m
lmizell@compu-p1:~/Development/ocm/managed-upgrade-operator$ oc scale deployment managed-upgrade-operator -n managed-upgrade-operator --replicas=0
error: no objects passed to scale
namespaces "managed-upgrade-operator" not found
lmizell@compu-p1:~/Development/ocm/managed-upgrade-operator$ oc scale deployment managed-upgrade-operator -n openshift-managed-upgrade-operator --replicas=0
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: node-role.kubernetes.io/master is use "node-role.kubernetes.io/control-plane" instead
deployment.apps/managed-upgrade-operator scaled
```
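For reference, the working command from the transcript above uses the `openshift-` prefixed namespace, which is presumably what the fix below changed in the docs:

```sh
# Scale down the in-cluster operator so a locally-run copy does not conflict with it;
# the deployment lives in openshift-managed-upgrade-operator, not managed-upgrade-operator.
oc scale deployment managed-upgrade-operator -n openshift-managed-upgrade-operator --replicas=0
```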
good catch! fixed it now.
docs/development.md (Outdated)
- Once the cluster installs, create a user with `cluster-admin` role and log in using `oc` client.
- You will need to be logged in with an account that meets the [RBAC requirements](https://github.com/openshift/managed-upgrade-operator/blob/master/deploy/cluster_role.yaml) for the MUO service account. To do that run
```
oc login $(oc get infrastructures cluster -o json | jq -r '.status.apiServerURL') --token=$(oc create token managed-upgrade-operator -n openshift-managed-upgrade-operator)
```
I think the documentation is a bit misleading, because if you follow these instructions and log in with the service account, you run into permission errors when trying to either create a project or scale down the existing MUO deployment.
I think it should be separated into running for local development vs. production replication. AIUI, the service account is only for testing RBAC restrictions to verify the operator works with production permissions.
The setup steps should read something like:
1. Log in as a user with cluster-admin privileges:
   ```
   oc login --token=<your-admin-token> --server=https://api.your-cluster.example.com:6443
   ```
2. Scale down the existing MUO deployment to avoid conflicts:
   ```
   oc scale deployment managed-upgrade-operator -n openshift-managed-upgrade-operator --replicas=0
   ```
3. Choose how to run the operator locally:
   - Option A: Run as your admin user (simpler for development). You're already logged in; just proceed to run the operator.
   - Option B: Run as the MUO service account (production-like). Switch to the service account context:
     ```
     oc login $(oc get infrastructures cluster -o json | jq -r '.status.apiServerURL') --token=$(oc create token managed-upgrade-operator -n openshift-managed-upgrade-operator)
     ```
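As a quick sanity check after step 3, something like the following could confirm which identity is active and whether it carries a permission the operator needs (a sketch; the `upgradeconfigs` resource name is an assumption based on the linked cluster_role.yaml):

```sh
# Show the identity oc is currently acting as (admin user vs. the MUO service account).
oc whoami

# Spot-check one permission from the operator's cluster role; the resource name is assumed.
oc auth can-i get upgradeconfigs -n openshift-managed-upgrade-operator
```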
Good point.
I'd like to avoid running as administrator, as that can let potential bugs silently creep in later if we end up needing more permissions. So I prefer explicit permissions, i.e. a production-like setup, as the only option for local development.
I updated the docs to clarify where the cluster-admin privilege is needed. Hope that helps avoid confusion.
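One way to keep that permission check explicit without switching logins is impersonation, roughly along these lines (a sketch; assumes the service account name and namespace used elsewhere in this thread):

```sh
# Exercise a request as the MUO service account while still logged in as admin,
# so any missing permission surfaces immediately instead of creeping in unnoticed.
oc auth can-i get upgradeconfigs -n openshift-managed-upgrade-operator \
  --as=system:serviceaccount:openshift-managed-upgrade-operator:managed-upgrade-operator
```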
Hi @bhushanthakur93, could you please look at the latest comments?
New changes are detected. LGTM label has been removed.
Yep. @Alcamech PTAL now.
@bhushanthakur93: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
(documentation)
What this PR does / why we need it?
Updated development docs for better onboarding experience.
Which Jira/Github issue(s) this PR fixes?
OCM-19812
Special notes for your reviewer:
Pre-checks (if applicable):