Replies: 1 comment

hi @meln5674 .... you are the user, so I'd encourage you to decide what is most important and write issues, especially if you are interested in contributing. Things that might help you make that decision are your own good judgment and whether it is a "breaking" change. At this point, almost all things going into main are going to be for Podman 6, so this is the time to break things. It is quite possible that, based on your issue/feature, we may ask for a lightweight design document so we can all agree before you jump in.
I am currently evaluating `podman kube play` for a use case in an environment that is almost entirely Kubernetes, but requires a few components to run outside of clusters. I'm finding it tantalizingly close to what I need, but rather unintuitive, or just unfinished, in a few ways that I'd like to provide some suggestions for; unfortunately these render it unusable for me in its current state, and honestly leave me wondering what the intended use cases actually are.

For some context on these issues: my main use case is translating third-party configurations such as Helm charts, jsonnet, etc., into single-node pod sets, ideally managed remotely over something like Ansible or the API socket. My secondary use case is as a replacement for docker/podman-compose, so that development teams which deploy to Kubernetes (i.e. all of the teams I support) do not need to maintain a separate configuration for localhost testing, and do not need static dev/test/staging environments per user. While the answer to many of my issues is a simple "Well, don't do that, then", that would require manually updating files each time a change is required, which is not only a pain for me as an administrator, but a tough sell for development teams considering a move.
These are only the things I have found so far within a single hour of evaluation, so I will likely be updating this post as I continue finding things to talk about. I'd be happy to create individual issues and PRs to implement the proposed ideas if there is support for them.
### `kube play` is not idempotent

This is the big one. If I run `podman kube play foo.yaml` and it works, I expect to be able to run the same command again without issue, and not have it complain that the resources already exist despite the configuration being identical. `kube play --replace` is not an improvement, as it will delete volumes, and the data in them. These are the same issues that podman-compose has (though, notably, not docker compose), and while I chalked that up to podman-compose being basically an afterthought, the same issue being present in the main podman repo suggests to me that this is intentional, which is worrying. Even worse, if `kube play` fails halfway through, `kube down` will also fail, because the resources it didn't create don't exist.

I realize that podman isn't Kubernetes, but if you offer a feature that makes it look like Kubernetes, I expect it to at least try to act like it. I'm at a loss for what the intended purpose of this feature could possibly be in its current state. After a day of experimentation, I'm certain it would be easier to just run a single-node cluster than to bother with this.
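To make the failure mode concrete, here is the sequence described above (error output paraphrased, not verbatim):

```console
$ podman kube play foo.yaml            # first run: succeeds
$ podman kube play foo.yaml            # second run: fails because the pod/volumes already exist
$ podman kube play --replace foo.yaml  # "succeeds", but tears down the pod AND its volumes,
                                       # destroying any data stored in them
```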
### StatefulSets (and DaemonSets) are not implemented

I can't seem to find any documentation or discussions about why these are not implemented. I see that even in Deployments, `replicas` is hard-coded to 1, so I cannot imagine there would be any sort of ordering issues, meaning a StatefulSet could just be treated as a pod with an accompanying set of volumes. A similar argument can be made for DaemonSets, and possibly other base resources I haven't considered yet.
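For instance, with replicas fixed at 1, a StatefulSet like this (names illustrative) has no ordering concerns and could be flattened into a single pod plus one named volume per entry in `volumeClaimTemplates`:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                       # illustrative
spec:
  replicas: 1                    # matches the hard-coded Deployment behavior noted above
  serviceName: db
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
        - name: db
          image: docker.io/library/postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # each template could map to a podman named volume, e.g. "db-data"
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources: {requests: {storage: 1Gi}}
```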
### livenessProbe assumes tools inside the image

As far as I can tell, the implementation of livenessProbe works by translating the different k8s probes into local commands to be run within the container itself. This relies on the assumption that there is a shell, and various common tools like curl and nc, in the image. This renders containers that use the common "scratch" base with static binaries unusable without manually removing their probes.

I suggest providing these health checks in podman itself, either as a hidden subcommand or as a separate statically-linked binary, which takes a single argument containing the JSON representation of the probe and performs the same check. This binary can then be bind-mounted into the container and set as the health check target. The Go standard library should make `exec` (command), `httpGet`, and `tcpSocket` trivial, and `grpc` probes can be implemented if desired using the official library. I've used a similar approach in other container-based applications, and it works well.
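A minimal sketch of what such a binary could look like (the name `podman-probe` and the trimmed-down probe schema here are assumptions, not anything podman ships today):

```go
// podman-probe (hypothetical): reads a JSON-encoded probe spec as its single
// argument and exits 0 if the check passes, 1 if it fails, 2 on bad input.
package main

import (
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"os"
	"os/exec"
	"time"
)

// probe mirrors the subset of the Kubernetes probe spec handled here.
type probe struct {
	Exec *struct {
		Command []string `json:"command"`
	} `json:"exec,omitempty"`
	HTTPGet *struct {
		Path string `json:"path"`
		Port int    `json:"port"`
	} `json:"httpGet,omitempty"`
	TCPSocket *struct {
		Port int `json:"port"`
	} `json:"tcpSocket,omitempty"`
	TimeoutSeconds int `json:"timeoutSeconds,omitempty"`
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: podman-probe '<probe-json>'")
		os.Exit(2)
	}
	var p probe
	if err := json.Unmarshal([]byte(os.Args[1]), &p); err != nil {
		fmt.Fprintln(os.Stderr, "bad probe spec:", err)
		os.Exit(2)
	}
	timeout := time.Second
	if p.TimeoutSeconds > 0 {
		timeout = time.Duration(p.TimeoutSeconds) * time.Second
	}
	switch {
	case p.Exec != nil && len(p.Exec.Command) > 0:
		// exec probe: run the command; any non-zero exit means unhealthy.
		if err := exec.Command(p.Exec.Command[0], p.Exec.Command[1:]...).Run(); err != nil {
			os.Exit(1)
		}
	case p.HTTPGet != nil:
		// httpGet probe: 2xx/3xx is healthy, mirroring the kubelet's rule.
		client := &http.Client{Timeout: timeout}
		resp, err := client.Get(fmt.Sprintf("http://localhost:%d%s", p.HTTPGet.Port, p.HTTPGet.Path))
		if err != nil || resp.StatusCode < 200 || resp.StatusCode >= 400 {
			os.Exit(1)
		}
		resp.Body.Close()
	case p.TCPSocket != nil:
		// tcpSocket probe: healthy if the port accepts a connection.
		conn, err := net.DialTimeout("tcp", fmt.Sprintf("localhost:%d", p.TCPSocket.Port), timeout)
		if err != nil {
			os.Exit(1)
		}
		conn.Close()
	default:
		fmt.Fprintln(os.Stderr, "no supported probe type specified")
		os.Exit(2)
	}
}
```

It could then be wired up roughly as described above: bind-mount the static binary into the container and register it as the health check command (e.g. via the existing `--health-cmd` mechanism).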
### Unclear error messages

A clear example: before I realized that StatefulSets were not implemented, I received an error message complaining that a configmap was not consumed by anything (the StatefulSet that referenced it had been silently skipped). Not only is this not the actual problem (that I was using an unsupported resource), it doesn't even list the name of the configmap in question, nor does it explain what is expected of the user. This should very obviously be formatted as something like `Configmaps X, Y, ..., Z are not used as volumes or environment variables by any pods`. I think any unsupported resources should be logged as warnings at the default log level, and there should probably be a flag to fail if unsupported resources are provided at all.

As an aside, I cannot see any reason why an unused configmap wouldn't just become a local directory volume populated with files. Dubiously useful, but it seems like a better approach than refusing the play outright.
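For illustration, the directory-volume translation suggested above might behave like this (volume name hypothetical; rootful storage paths shown):

```console
# A ConfigMap "app-config" with keys config.yaml and extra.conf becomes a
# named volume whose backing directory holds one file per key:
$ podman volume inspect app-config --format '{{.Mountpoint}}'
/var/lib/containers/storage/volumes/app-config/_data
$ ls /var/lib/containers/storage/volumes/app-config/_data
config.yaml  extra.conf
```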
### No notion of a project or release

If you perform a `kube play` on a set of resources in a file, then remove a resource from that file, running `kube play` again does not remove it. This makes perfect sense based on how it appears to be implemented, but it makes updating configurations an error-prone chore. Helm fixes this by having the notion of a "release", a named set of resources tracked as metadata, and ArgoCD fixes it by labeling every resource it creates with a unique identifier. Even docker/podman compose has the notion of a "project" for this purpose.

I think it would be straightforward to add a new flag `-p, --project-name=<name>` to mirror compose, which would add a label (maybe `podman.io/kube-play-project`?) to all resources created, and perhaps another flag to use the current directory as the project name. `kube play` would find these resources and remove them if they are not present in the new file. This also facilitates easily adding an option to `kube down` to remove all such resources by simply providing the project name, instead of having to provide the exact YAML that was given to `kube play`, as well as a new command `kube ps`, which would list such resources by project name. This could even be added as an option to systemd `foo.kube` files, to automatically label resources based on the unit file path. This seems like another odd omission, but a straightforward addition.
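A sketch of how the proposed flags might fit together (every flag and command here is hypothetical):

```console
$ podman kube play -p myapp foo.yaml   # labels everything podman.io/kube-play-project=myapp
$ vim foo.yaml                         # delete one resource from the file
$ podman kube play -p myapp foo.yaml   # prunes the labeled resource missing from the new file
$ podman kube ps                       # lists resources grouped by project
$ podman kube down -p myapp            # removes everything carrying the label, no YAML needed
```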
### No `--secret` flag, but a `--configmap` flag exists

See above. Even if another flag isn't added, I don't see why this flag shouldn't support Secrets.
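That is, given the existing flag, the extension might look like this (the `--secret` spelling is purely illustrative):

```console
$ podman kube play --configmap extra-configmaps.yaml pod.yaml  # exists today
$ podman kube play --secret extra-secrets.yaml pod.yaml        # hypothetical counterpart
```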
### No `kube log` command

I'd like to be able to run `podman kube log nginx` instead of `podman ps | grep nginx` followed by `podman logs nginx-pod-0-nginx`. Unsure of how this could be implemented yet; maybe more labels.

### Namespaces appear to be ignored

Admittedly, I haven't tried this yet, but Ctrl+F-ing the man page makes no mention of namespaces, so I'm going to assume they are ignored; disregard this if that isn't the case. I think it's reasonable to at least use the namespace in the full name of a resource's containers, volumes, etc., if one is explicitly provided, to allow replicating multi-namespace setups.
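If namespaces were folded into names as suggested, the behavior might look like this (naming scheme hypothetical):

```console
# Two manifests each define a pod "web", in metadata.namespace "staging" and "prod":
$ podman kube play staging.yaml   # creates pod "staging-web"
$ podman kube play prod.yaml      # creates pod "prod-web" instead of colliding
```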