
Conversation

Contributor
@graz-dev graz-dev commented Dec 1, 2025

Description

Kubernetes v1.35 announcement blog post

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 1, 2025
@k8s-ci-robot k8s-ci-robot added the area/blog Issues or PRs related to the Kubernetes Blog subproject label Dec 1, 2025
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. language/en Issues or PRs related to English language size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Dec 1, 2025
netlify bot commented Dec 1, 2025

Pull request preview available for checking

Built without sensitive environment variables

| Name | Link |
| --- | --- |
| 🔨 Latest commit | 5da8a5d |
| 🔍 Latest deploy log | https://app.netlify.com/projects/kubernetes-io-main-staging/deploys/6938a29f6ed50900088fb6eb |
| 😎 Deploy Preview | https://deploy-preview-53498--kubernetes-io-main-staging.netlify.app |

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Dec 8, 2025
@graz-dev graz-dev changed the title [WIP] Kubernetes v1.35 Announcement Blog Post Kubernetes v1.35 Announcement Blog Post Dec 8, 2025
@graz-dev graz-dev marked this pull request as ready for review December 8, 2025 18:04
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 8, 2025
@graz-dev
Copy link
Contributor Author

graz-dev commented Dec 8, 2025

The announcement blog is ready for review.

Still missing:

  • Release logo and theme
  • Some event links
  • Release webinar link

cc
@drewhagen @rytswd @dipesh-rawat @SwathiR03 @chadmcrowell @arujjval @aakankshabhende


This work was done as part of [KEP #4368](https://kep.k8s.io/4368) led by SIG Apps.

### DRA: structured parameters
Member

the DRA is in GA starting 1.34: https://kubernetes.io/blog/2025/09/01/kubernetes-v1-34-dra-updates/

What am I missing?

Contributor Author

Not really sure how to handle it.
It is listed as "stable" in the v1.35 milestone, but you are saying it was "GA" in v1.34, and I can't tell what the change is here in v1.35.

Reworded the paragraph a bit to make it clearer.

Member

@SergeyKanzhelev @graz-dev
Routing back to the KEP and its documentation
kubernetes/enhancements#4381
#52881

My understanding is that this feature did go GA in v1.34 with the feature gate on by default, and now in v1.35 it is locked. It has already been around, but the lock seems like a noteworthy change.

Do we want to keep this section with it?

Member

The title for this section may need a change. Maybe something like:

Suggested change
### DRA: structured parameters
### Continued innovation in DRA

@johnbelamaric @pohly which KEPs do you want to highlight?

Member

This whole paragraph is likely not needed in the release blog.


This work was done as part of [KEP #4639](https://kep.k8s.io/4639) led by SIG Node.

### Expose node topology labels via Downward API
Member

This change may need a bit more attention, though I am not sure where. It will add all topology labels to all Pods, a very noticeable change that we want to warn users about.

Contributor Author

What do you suggest we do?
Moved it to the top of the list, for now.
I'm open to any suggestions on how to draw more attention to it :)

Member

Maybe in the release notes it needs to be in the highlights.

Member
@lmktfy lmktfy left a comment

This could merge as-is. Most impressive.

Editorial team, please feel free to take a deep bow / pour yourselves a celebratory drink.

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Dec 8, 2025
Comment on lines 79 to 82
This KEP defines the core Dynamic Resource Allocation (DRA) API, replacing earlier “[classic DRA](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)” work and enabling pods to request modern hardware resources beyond CPU and RAM. As Kubernetes expands into batch and edge use cases, workloads increasingly need devices such as GPUs, accelerators, and network-attached hardware that the existing device plugin API cannot fully support.
DRA introduces a structured way to describe and allocate such devices, including support for sharing, reuse of expensive-to-initialize hardware, and custom configuration parameters. Vendors provide DRA drivers that publish available devices as `ResourceSlice` objects. Users request them through `ResourceClaims`, and the scheduler matches claims to slices to allocate devices and select nodes. DRA drivers then inject the allocated resources into pods.

This work was done as part of [KEP #4381](https://kep.k8s.io/4381) led by SIG Node.
Member

This is the wrong description. We should be describing https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/4381-dra-structured-parameters#summary

(found by visiting https://www.kubernetes.dev/resources/keps/ and searching for "Structured Parameters").


Member
@drewhagen drewhagen Dec 9, 2025

@graz-dev

  • description update for the correct KEP


Similar to previous releases, the release of Kubernetes v1.35 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community.

This release consists of 59 enhancements. Of those enhancements, 16 have graduated to Stable, 19 have entered Beta, and 22 have entered Alpha.
Contributor Author

Removed the DRA KEP from the list below and from the counter here since it was GA in v1.34 too.
Do we need to keep it, or is it OK to remove it?

cc @drewhagen

Member
@drewhagen drewhagen Dec 9, 2025

I would keep it in the count of KEPs at least. There were changes and we processed them.

Regarding keeping the section,
I understand this feature already went GA in 1.34. Since the DRA: Structured Params
kubernetes/enhancements#4381
#52881
is being locked in v1.35, that feels notable for users, but I'm unsure if that warrants a full section on it again in the announcement blog. I'd like to get advice from @kubernetes/sig-node-leads on this.

Contributor Author

Ok updated back to 60 enhancements and 17 stable.
Do I need to add it to the "### Graduations to stable" list?

Member

No, I think we can keep that the same since it's not graduated, but rather was stable and is staying there. The math might not add up, but it's still technically correct this way


### Consider terminating Pods in Deployments

Updating or scaling Deployments has historically relied on an eager strategy that creates replacement Pods immediately, often ignoring those that are still terminating. This behavior can lead to resource contention and scheduling deadlocks, particularly in clusters with limited capacity, as terminating Pods continue to occupy quota while new replicas compete for the same resources.
Contributor

This entire paragraph is not correct. As part of the overall work towards the functionality you're writing about, we ONLY exposed the information about terminating replicas in the Deployment. That functionality is being promoted to beta; the rest of the work under the KEP-3973 umbrella is still WIP.

Contributor
@danwinship danwinship Dec 9, 2025

The title of the section is also confusing. I parsed it as "You should consider terminating the Pods in your Deployments".

Contributor Author

Rephrased the title and the content, PTAL.
Let me know if it looks better; I'm still struggling to figure out why it's listed as alpha in the KEP :(

Member

@soltysh is this graduating to beta? I see it marked as alpha in release team tracking, kep issue description and the kep.yaml in the enhancements repo:

https://github.com/kubernetes/enhancements/pull/5568/files

Member
@drewhagen drewhagen Dec 10, 2025

Scratch that, we see @dipesh-rawat's comment below

I remember that this kubernetes/enhancements#3973 could only merge some of its changes before the code freeze. Only DeploymentReplicaSetTerminatingReplicas feature gate is moved to beta for this release, although the main feature DeploymentPodReplacementPolicy is still in alpha.

There is some additional context in the latest post-code-freeze comment: kubernetes/enhancements#3973 (comment).

@atiratree Please feel free to jump in and help us move this forward.


Kubernetes v1.35 introduces the `podReplacementPolicy` field for Deployments, enabling users to enforce a TerminationComplete strategy. This new mechanism ensures that the controller waits for terminating Pods to fully exit before creating replacements, while also providing visibility into terminating replicas via the Deployment status.

The primary benefit is safer and more predictable application rollouts. By serializing the termination and creation steps, it prevents transient resource overcommitment, making it easier to manage workloads with complex shutdown sequences in high-density environments.
Contributor

All three paragraphs are not correct. As part of the overall work towards the functionality you're writing about, we ONLY exposed the information about terminating replicas in the Deployment. That functionality is being promoted to beta; the rest of the work under the KEP-3973 umbrella is still WIP.

Contributor

Maybe something like:

Kubernetes v1.35 introduces the `terminatingReplicas` field for Deployments, informing users about the total number of terminating Pods targeted by the Deployment. This field is part of an ongoing major initiative to improve the Pod replacement policy inside a Deployment.

/cc @atiratree
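
For readers skimming the thread, a minimal sketch of where the new field surfaces, assuming it lands in the Deployment status as the suggestion above describes (all values illustrative):

```yaml
# Deployment status fragment with the beta DeploymentReplicaSetTerminatingReplicas
# feature enabled; terminatingReplicas is the only new field shown here.
status:
  replicas: 3
  updatedReplicas: 3
  readyReplicas: 3
  availableReplicas: 3
  terminatingReplicas: 1  # Pods deleted but still shutting down
```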

Contributor

Also, since this field is beta, it might need to move to the previous section, right?

Contributor Author

I see the KEP #3973 listed as "Alpha" in our tracking board.
@drewhagen, what do you suggest we do?

Content and title rephrased, does it look better now?


Member

I remember that this KEP-3973 could only merge some of its changes before the code freeze. Only DeploymentReplicaSetTerminatingReplicas feature gate is moved to beta for this release, although the main feature DeploymentPodReplacementPolicy is still in alpha.

There is some additional context in the latest post-code-freeze comment: kubernetes/enhancements#3973 (comment).

@atiratree Please feel free to jump in and help us move this forward.


This work was done as part of [KEP #4381](https://kep.k8s.io/4381) led by SIG Node.

### Reliable Pod update tracking with `Generation`
Member
@drewhagen drewhagen Dec 9, 2025

@kubernetes/sig-node-leads Please let us know if you have any feedback on this and any other sections with SIG Node KEPs. I appreciate it, thanks :)
kubernetes/enhancements#5067



This work was done as part of [KEP #4742](https://kep.k8s.io/4742) led by SIG Node.

### Move storage version migrator in-tree
Member

Hello @kubernetes/sig-api-machinery-leads @kerthcet! Our release comms team put this section together for https://kep.k8s.io/4912 for the v1.35 announcement blog, which we're sharing with the media for an early peek, and of course, we release next week.
How does this look?

Appreciate any contribution, advice or feedback 😄


This work was done as part of [KEP #4912](https://kep.k8s.io/4912) led by SIG API Machinery.

### Mutable CSINode allocatable property
Member

Hello @kubernetes/sig-storage-leads @torredil! Our release comms team put this section together for https://kep.k8s.io/4876 for the v1.35 announcement blog, which we're sharing with the media for an early peek, and of course, we release next week.
How does this look?

Appreciate any contribution, advice or feedback 😄


This work was done as part of [KEP #4876](https://kep.k8s.io/4876) led by SIG Storage.

### Opportunistic batching
Member

Hello @kubernetes/sig-scheduling-leads @bwsalmon! Our release comms team put this section together for https://kep.k8s.io/5598 for the v1.35 announcement blog, which we're sharing with the media for an early peek, and of course, we release next week.
How does this look?

Appreciate any contribution, advice or feedback 😄


This work was done as part of [KEP #5295](https://kep.k8s.io/5295) led by SIG CLI.

### Configurable tolerance for Horizontal Pod Autoscalers
Member

Hello @kubernetes/sig-autoscaling-leads @jm-franc! Our release comms team put this section together for https://kep.k8s.io/4951 for the v1.35 announcement blog, which we're sharing with the media for an early peek, and of course, we release next week.
How does this look?

Appreciate any contribution, advice or feedback 😄


This work was done as part of [KEP #4951](https://kep.k8s.io/4951) led by SIG Autoscaling.

### Support user namespaces in Pods
Member

Hello @kubernetes/sig-node-leads! Our release comms team put this section together for https://kep.k8s.io/127 for the v1.35 announcement blog, which we're sharing with the media for an early peek, and of course, we release next week.
How does this look?

Appreciate any contribution, advice or feedback 😄


*This is a selection of some of the improvements that are now stable following the v1.35 release.*

### Comparable resource version
Member
@drewhagen drewhagen Dec 9, 2025

Hello @kubernetes/sig-api-machinery-leads @michaelasp! Our release comms team put this section together for https://kep.k8s.io/5504 for the v1.35 announcement blog, which will be shared with the media for an early peek by tomorrow, and of course, we release next week.

Appreciate any contribution, advice or feedback 😄


Looks good to me! Thanks for the write up :)

Member

Thank you! :) cc: @graz-dev


This work was done as part of [KEP #1287](https://kep.k8s.io/1287) led by SIG Node.

### Beta: Pod certificates for workload identity and security
Member

Hello @kubernetes/sig-auth-leads @ahmedtd! Our release comms team put this section together for https://kep.k8s.io/4317 for the v1.35 announcement blog, which we're sharing with the media for an early peek, and of course, we release next week.
How does this look?

Appreciate any contribution, advice or feedback 😄

Member

@enj I see your contribution to the docs on this, so if you have any feedback, let us know :)


Contributor Author
@graz-dev graz-dev left a comment

@soltysh @danwinship
I implemented most of your comments, can you please take a look?



[We honor the memory of Han Kang](https://github.com/cncf/memorials/blob/main/han-kang.md), a long-time contributor and respected engineer whose technical excellence and infectious enthusiasm left a lasting impact on the Kubernetes community. Han was a significant force within SIG Instrumentation and SIG API Machinery, earning a [2021 Kubernetes Contributor Award](https://www.kubernetes.dev/community/awards/2021/) for his critical work and sustained commitment to the project's core stability. Beyond his technical contributions, Han was deeply admired for his generosity as a mentor and his passion for building connections among people. He was known for "opening doors" for others, whether guiding new contributors through their first pull requests or supporting colleagues with patience and kindness. Han’s legacy lives on through the engineers he inspired, the robust systems he helped build, and the warm, collaborative spirit he fostered within the cloud native ecosystem.

We would like to thank the entire [Release Team](https://github.com/kubernetes/sig-release/blob/master/releases/release-1.35/release-team.md) for the hours spent hard at work to deliver the Kubernetes v1.35 release to our community. The Release Team's membership ranges from first-time shadows to returning team leads with experience forged over several release cycles. A very special thanks goes out to our release lead, [Drew Hagen](https://github.com/drewhagen), for guiding us through a successful release cycle, for his hands-on approach to solving challenges, and for bringing the energy and care that drives our community forward.
Member

I'm not sure who is supposed to write nice words about me (I swear I'm not trying to fish for compliments here, lol), but I noticed that this is a copy/paste of what was written up for Vyom (1.34),
while Nina (1.33) and Fred (1.32) had different ones:

https://kubernetes.io/blog/2024/12/11/kubernetes-v1-32-release/
https://kubernetes.io/blog/2025/04/23/kubernetes-v1-33-release/
https://kubernetes.io/blog/2025/08/27/kubernetes-v1-34-release/

@graz-dev @rytswd


### Opportunistic batching

Gang Scheduling is a scheduling algorithm that schedules related threads or processes to run on distributed systems. Currently, the Kubernetes scheduling algorithm has a time complexity of O(number of Pods x number of Nodes). This KEP introduces an opportunistic batching mechanism that aims to improve the performance of scheduling compatible Pods at once by introducing a Pod scheduling signature and batching mechanism, eventually paving way for the implementation of the full-fledged Gang Scheduling algorithm within Kubernetes.
Member

Opportunistic batching is different from gang scheduling.
@kubernetes/sig-scheduling-leads @bwsalmon

Suggested change
Gang Scheduling is a scheduling algorithm that schedules related threads or processes to run on distributed systems. Currently, the Kubernetes scheduling algorithm has a time complexity of O(number of Pods x number of Nodes). This KEP introduces an opportunistic batching mechanism that aims to improve the performance of scheduling compatible Pods at once by introducing a Pod scheduling signature and batching mechanism, eventually paving way for the implementation of the full-fledged Gang Scheduling algorithm within Kubernetes.
Currently, the Kubernetes scheduler processes pods sequentially with time complexity O(num pods x num nodes), which can result in redundant computation for compatible pods. This KEP introduces an opportunistic batching mechanism that aims to improve performance by identifying such compatible Pods via a `Pod scheduling signature` and batching them together, allowing filtering and scoring results to be shared across them.

Member
@helayoty helayoty left a comment

Updated a few statements related to features I worked on.


### Extended toleration operators for threshold-based placement

Managing workload prioritization in Kubernetes has typically depended on PriorityClass, which orders Pods but does not account for waiting time. This static approach can lead to resource starvation for lower-priority workloads where they are perpetually preempted by higher-priority tasks making it difficult to guarantee strict Service Level Agreements (SLAs).
Member

There is no relation between this feature and PriorityClass.

Suggested change
Managing workload prioritization in Kubernetes has typically depended on PriorityClass, which orders Pods but does not account for waiting time. This static approach can lead to resource starvation for lower-priority workloads where they are perpetually preempted by higher-priority tasks making it difficult to guarantee strict Service Level Agreements (SLAs).


Managing workload prioritization in Kubernetes has typically depended on PriorityClass, which orders Pods but does not account for waiting time. This static approach can lead to resource starvation for lower-priority workloads where they are perpetually preempted by higher-priority tasks making it difficult to guarantee strict Service Level Agreements (SLAs).

Kubernetes v1.35 introduces SLA-aware scheduling capabilities within the Scheduling Framework. This enhancement updates the `QueueSort` plugin to optionally consider the waiting time of Pods, effectively elevating the priority of long-pending workloads to ensure they eventually get scheduled.
Member

The feature doesn't have anything to do with the QueueSort plugin.

Suggested change
Kubernetes v1.35 introduces SLA-aware scheduling capabilities within the Scheduling Framework. This enhancement updates the `QueueSort` plugin to optionally consider the waiting time of Pods, effectively elevating the priority of long-pending workloads to ensure they eventually get scheduled.
Kubernetes v1.35 introduces SLA-aware scheduling by enabling workloads to express reliability requirements. The feature adds numeric comparison operators to tolerations, allowing pods to match or avoid nodes based on SLA-oriented taints such as service guarantees or fault-domain quality.


Kubernetes v1.35 introduces SLA-aware scheduling capabilities within the Scheduling Framework. This enhancement updates the `QueueSort` plugin to optionally consider the waiting time of Pods, effectively elevating the priority of long-pending workloads to ensure they eventually get scheduled.

The primary benefit is a fairer and more deterministic scheduling lifecycle. By mitigating resource starvation, operators can enforce stricter time-based SLAs and ensure that lower-priority batch jobs or background tasks make steady progress even in highly utilized clusters.
Member

Suggested change
The primary benefit is a fairer and more deterministic scheduling lifecycle. By mitigating resource starvation, operators can enforce stricter time-based SLAs and ensure that lower-priority batch jobs or background tasks make steady progress even in highly utilized clusters.
The primary benefit is enhancing the scheduler with more precise placement: critical workloads can demand higher-SLA nodes, while lower-priority workloads can opt into lower-SLA ones. This improves utilization and reduces cost without compromising reliability.
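
To make the suggested wording concrete, here is a hypothetical sketch of a threshold-based toleration. The numeric operator name (`Gt`, mirroring the node-affinity operators) and the taint key are assumptions for illustration, not the final API:

```yaml
# Hypothetical Pod tolerations using a numeric comparison operator.
tolerations:
- key: example.com/availability-sla   # assumed SLA-oriented taint key
  operator: Gt                        # assumed numeric operator name
  value: "99"                         # tolerate nodes whose taint value exceeds 99
  effect: NoSchedule
```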

Member
@rytswd rytswd left a comment

Adding some minor grammar fixes

Comment on lines +32 to +33
Kubernetes is graduating in-place updates for Pod resources to General Availability (GA).
This feature allows users to adjust cpu and memory resources without restarting Pods or Containers. Previously, such modifications required recreating Pods, which could disrupt workloads, particularly for stateful or batch applications. Previous Kubernetes releases already allowed you to change infrastructure resources settings (requests and limits) for existing Pods. This allows for smoother vertical scaling, improves efficiency, and can also simplify development.
Member

As this is about the release being made available, the correct tense would be present perfect.

Suggested change
Kubernetes is graduating in-place updates for Pod resources to General Availability (GA).
This feature allows users to adjust cpu and memory resources without restarting Pods or Containers. Previously, such modifications required recreating Pods, which could disrupt workloads, particularly for stateful or batch applications. Previous Kubernetes releases already allowed you to change infrastructure resources settings (requests and limits) for existing Pods. This allows for smoother vertical scaling, improves efficiency, and can also simplify development.
Kubernetes has graduated in-place updates for Pod resources to General Availability (GA).
This feature allows users to adjust CPU and memory resources without restarting Pods or Containers. Previously, such modifications required recreating Pods, which could disrupt workloads, particularly for stateful or batch applications. Previous Kubernetes releases already allowed you to change infrastructure resources settings (requests and limits) for existing Pods. This feature allows for smoother vertical scaling, improves efficiency, and can also simplify development.
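
As a quick illustration of the GA behavior, a resize is just a patch against the Pod's resources, applied through the Pod's `resize` subresource; a minimal sketch (the Pod and container names are placeholders):

```yaml
# Patch body for an in-place resize, applied without recreating the Pod, e.g.:
#   kubectl patch pod my-app --subresource resize --type merge --patch-file resize.yaml
spec:
  containers:
  - name: app           # must match the container being resized
    resources:
      requests:
        cpu: "800m"     # new CPU request takes effect in place
      limits:
        cpu: "1"
```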

### Beta: Pod certificates for workload identity and security

Previously, delivering certificates to pods required external controllers (cert-manager, SPIFFE/SPIRE), CRD orchestration, and Secret management, with rotation handled by sidecars or init containers. KEP-4317 enables native workload identity with automated certificate rotation, drastically simplifying service mesh and zero-trust architectures.
Now, the `kubelet` generates keys, requests certificates via PodCertificateRequest, and writes credential bundles directly to the pod's filesystem. `kube-apiserver` enforces node restriction at admission time, eliminating the single biggest footgun for third-party signers: accidentally violating node isolation boundaries. Pure mTLS flows, no bearer tokens in the issuance path.
Member

Suggested change
Now, the `kubelet` generates keys, requests certificates via PodCertificateRequest, and writes credential bundles directly to the pod's filesystem. `kube-apiserver` enforces node restriction at admission time, eliminating the single biggest footgun for third-party signers: accidentally violating node isolation boundaries. Pure mTLS flows, no bearer tokens in the issuance path.
Now, the `kubelet` generates keys, requests certificates via PodCertificateRequest, and writes credential bundles directly to the Pod's filesystem. The `kube-apiserver` enforces node restriction at admission time, eliminating the single biggest footgun for third-party signers: accidentally violating node isolation boundaries. This enables pure mTLS flows, no bearer tokens in the issuance path.
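
For context, a sketch of the projected volume shape this enables; the field names follow KEP-4317 as of this writing, and the signer name is a placeholder:

```yaml
# Pod volume requesting a certificate that the kubelet obtains via a
# PodCertificateRequest and writes into the Pod's filesystem.
volumes:
- name: workload-identity
  projected:
    sources:
    - podCertificate:
        signerName: example.com/my-signer        # hypothetical signer
        keyType: ED25519
        credentialBundlePath: credentialbundle.pem
```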

### Alpha: Node declared features before scheduling

When control planes enable new features, but nodes lag behind (permitted by Kubernetes skew policy), the scheduler can place pods requiring those features onto incompatible older nodes.
The node-declaration features framework: nodes declare their supported Kubernetes features. With the new alpha feature enabled, a Node reports the features it supports, publishing this information to the control plane via a new `.status.declaredFeatures` field. Then, the `kube-scheduler`, admission controllers and third-party components can use these declarations. For example, you can enforce scheduling and API validation constraints to ensure that Pods run only on compatible nodes.
Member

Suggested change
The node-declaration features framework: nodes declare their supported Kubernetes features. With the new alpha feature enabled, a Node reports the features it supports, publishing this information to the control plane via a new `.status.declaredFeatures` field. Then, the `kube-scheduler`, admission controllers and third-party components can use these declarations. For example, you can enforce scheduling and API validation constraints to ensure that Pods run only on compatible nodes.
The node-declaration features framework allows nodes to declare their supported Kubernetes features. With the new alpha feature enabled, a Node reports the features it supports, publishing this information to the control plane via a new `.status.declaredFeatures` field. Then, the `kube-scheduler`, admission controllers, and third-party components can use these declarations. For example, you can enforce scheduling and API validation constraints to ensure that Pods run only on compatible nodes.
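
A rough sketch of what a node might report, assuming the field is a simple list of feature names (the entries shown are placeholders, not a real feature list):

```yaml
# Illustrative Node status fragment with the alpha field populated.
status:
  declaredFeatures:
  - ExampleFeatureA
  - ExampleFeatureB
```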


### Comparable resource version

`ResourceVersion` comparability is being strengthened to give clients more than simple equality checks. Today, clients treat the resource version as an opaque string, while the apiserver uses it internally as a monotonically increasing integer. The new proposal aligns client-side semantics with apiserver behavior, allowing clients to interpret resource versions as integers and compare their ordering.
Member

Suggested change
`ResourceVersion` comparability is being strengthened to give clients more than simple equality checks. Today, clients treat the resource version as an opaque string, while the apiserver uses it internally as a monotonically increasing integer. The new proposal aligns client-side semantics with apiserver behavior, allowing clients to interpret resource versions as integers and compare their ordering.
The `ResourceVersion` comparability has been strengthened to give clients more than simple equality checks. Today, clients treat the resource version as an opaque string, while the apiserver uses it internally as a monotonically increasing integer. The new proposal aligns client-side semantics with apiserver behavior, allowing clients to interpret resource versions as integers and compare their ordering.

### Comparable resource version

`ResourceVersion` comparability is being strengthened to give clients more than simple equality checks. Today, clients treat the resource version as an opaque string, while the apiserver uses it internally as a monotonically increasing integer. The new proposal aligns client-side semantics with apiserver behavior, allowing clients to interpret resource versions as integers and compare their ordering.
This enables key use cases like storage version migration, informer performance improvements, and more reliable controller behavior, all of which require knowing whether one resource version is newer than another.
Member

Suggested change
This enables key use cases like storage version migration, informer performance improvements, and more reliable controller behavior, all of which require knowing whether one resource version is newer than another.
This enables key use cases such as storage version migration, informer performance improvements, and more reliable controller behavior, all of which require knowing whether one resource version is newer than another.


Historically, the Pod API lacked the `metadata.Generation` field found in other Kubernetes objects like Deployments. A drawback of this omission was that controllers and users had no reliable way to verify if the `kubelet` had actually processed the latest changes to a Pod's specification. This ambiguity was particularly problematic for features like [In-Place Pod Vertical Scaling](#stable-in-place-update-of-pod-resources), where knowing exactly when a resource resize request had been enacted was difficult.

This KEP introduces standard generation tracking to the Pod API. Every time a Pod's `spec` is updated, the `metadata.Generation` sequence is incremented. Crucially, the Pod status now includes an `observedGeneration` field, which reports the generation that the `kubelet` has successfully seen and processed.
Member

Suggested change
This KEP introduces standard generation tracking to the Pod API. Every time a Pod's `spec` is updated, the `metadata.Generation` sequence is incremented. Crucially, the Pod status now includes an `observedGeneration` field, which reports the generation that the `kubelet` has successfully seen and processed.
This KEP introduces standard generation tracking for the Pod API. Every time a Pod's `spec` is updated, the `metadata.Generation` sequence is incremented. Crucially, the Pod status now includes an `observedGeneration` field, which reports the generation that the `kubelet` has successfully seen and processed.
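
Concretely, the two fields line up like this (a minimal sketch; the numbers are illustrative):

```yaml
# Pod fragment after a spec update: generation has advanced to 3, while the
# kubelet reports that it has only processed generation 2 so far.
metadata:
  generation: 3
status:
  observedGeneration: 2
```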


### Configurable NUMA Node limit for Topology Manager

The `TopologyManager` has historically used a hardcoded limit of 8 for the maximum number of NUMA nodes it supports to prevent state explosion during affinity calculation. A drawback of this fixed limit is that it prevents Kubernetes from fully utilizing modern high-end servers which increasingly feature CPU architectures with more than 8 NUMA nodes.
Member

Suggested change
The `TopologyManager` has historically used a hardcoded limit of 8 for the maximum number of NUMA nodes it supports to prevent state explosion during affinity calculation. A drawback of this fixed limit is that it prevents Kubernetes from fully utilizing modern high-end servers which increasingly feature CPU architectures with more than 8 NUMA nodes.
The `TopologyManager` has historically used a hard-coded limit of 8 for the maximum number of NUMA nodes it supports, preventing state explosion during affinity calculation. A drawback of this fixed limit is that it prevents Kubernetes from fully utilizing modern high-end servers, which increasingly feature CPU architectures with more than 8 NUMA nodes.


Accessing node topology information, such as region and zone, from within a Pod has typically required querying the Kubernetes API server. While functional, this approach creates complexity and security risks by necessitating broad RBAC permissions or sidecar containers just to retrieve infrastructure metadata. Kubernetes v1.35 promotes the capability to expose node topology labels directly via the Downward API to beta.

The kubelet can now inject standard topology labels, such as `topology.kubernetes.io/zone` and `topology.kubernetes.io/region`, into Pods as environment variables or projected volume files. The primary benefit is a safer and more efficient way for workloads to be topology-aware. It allows applications to natively adapt to their availability zone or region without dependencies on the API server, strengthening security by upholding the principle of least privilege and simplifying cluster configuration.
Member

Suggested change
The kubelet can now inject standard topology labels, such as `topology.kubernetes.io/zone` and `topology.kubernetes.io/region`, into Pods as environment variables or projected volume files. The primary benefit is a safer and more efficient way for workloads to be topology-aware. It allows applications to natively adapt to their availability zone or region without dependencies on the API server, strengthening security by upholding the principle of least privilege and simplifying cluster configuration.
The `kubelet` can now inject standard topology labels, such as `topology.kubernetes.io/zone` and `topology.kubernetes.io/region`, into Pods as environment variables or projected volume files. The primary benefit is a safer and more efficient way for workloads to be topology-aware. This allows applications to natively adapt to their availability zone or region without dependencies on the API server, strengthening security by upholding the principle of least privilege and simplifying cluster configuration.
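
A brief sketch of the consumption side, assuming the topology labels become readable on the Pod object itself:

```yaml
# Container env reading topology labels through the Downward API.
env:
- name: NODE_ZONE
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['topology.kubernetes.io/zone']
- name: NODE_REGION
  valueFrom:
    fieldRef:
      fieldPath: metadata.labels['topology.kubernetes.io/region']
```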


### Mutable CSINode allocatable property

A CSI (Container Storage Interface) driver is a Kubernetes plugin that provides a consistent way for storage systems to be exposed to containerized workloads. The `CSINode` object records details about all CSI drivers installed on a node. However, a mismatch can arise between the reported and actual attachment capacity on nodes. When volume slots are consumed after a CSI driver starts up, `kube-scheduler` may assign stateful pods to nodes without sufficient capacity, ultimately getting stuck in a `ContainerCreating` state.
Member

Suggested change
A CSI (Container Storage Interface) driver is a Kubernetes plugin that provides a consistent way for storage systems to be exposed to containerized workloads. The `CSINode` object records details about all CSI drivers installed on a node. However, a mismatch can arise between the reported and actual attachment capacity on nodes. When volume slots are consumed after a CSI driver starts up, `kube-scheduler` may assign stateful pods to nodes without sufficient capacity, ultimately getting stuck in a `ContainerCreating` state.
A CSI (Container Storage Interface) driver is a Kubernetes plugin that provides a consistent way for storage systems to be exposed to containerized workloads. The `CSINode` object records details about all CSI drivers installed on a node. However, a mismatch can arise between the reported and actual attachment capacity on nodes. When volume slots are consumed after a CSI driver starts up, the `kube-scheduler` may assign stateful pods to nodes without sufficient capacity, ultimately getting stuck in a `ContainerCreating` state.


A CSI (Container Storage Interface) driver is a Kubernetes plugin that provides a consistent way for storage systems to be exposed to containerized workloads. The `CSINode` object records details about all CSI drivers installed on a node. However, a mismatch can arise between the reported and actual attachment capacity on nodes. When volume slots are consumed after a CSI driver starts up, `kube-scheduler` may assign stateful pods to nodes without sufficient capacity, ultimately getting stuck in a `ContainerCreating` state.

This KEP makes `CSINode.spec.drivers[*].allocatable.count` mutable so that a node’s available volume attachment capacity can be updated dynamically. It also allows CSI drivers to control how frequently the `allocatable.count` value is updated on all nodes by introducing a configurable refresh interval, defined through the `CSIDriver` object. Additionally, it automatically updates `CSINode.spec.drivers[*].allocatable.count` on detecting a failure in volume attachment due to insufficient capacity. Although this feature graduated to Beta in v1.34, it continues to be in Beta for v1.35 to allow time for feedback.
Member

Suggested change
This KEP makes `CSINode.spec.drivers[*].allocatable.count` mutable so that a node’s available volume attachment capacity can be updated dynamically. It also allows CSI drivers to control how frequently the `allocatable.count` value is updated on all nodes by introducing a configurable refresh interval, defined through the `CSIDriver` object. Additionally, it automatically updates `CSINode.spec.drivers[*].allocatable.count` on detecting a failure in volume attachment due to insufficient capacity. Although this feature graduated to Beta in v1.34, it continues to be in Beta for v1.35 to allow time for feedback.
This KEP makes `CSINode.spec.drivers[*].allocatable.count` mutable so that a node’s available volume attachment capacity can be updated dynamically. It also allows CSI drivers to control how frequently the `allocatable.count` value is updated on all nodes by introducing a configurable refresh interval, defined through the `CSIDriver` object. Additionally, it automatically updates `CSINode.spec.drivers[*].allocatable.count` on detecting a failure in volume attachment due to insufficient capacity. Although this feature graduated to Beta in v1.34, it remains in Beta for v1.35 to allow time for feedback.
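
For reference, a sketch of the driver-side opt-in; the `nodeAllocatableUpdatePeriodSeconds` field name is my reading of KEP-4876, so treat it as an assumption:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.example.com                      # hypothetical driver name
spec:
  nodeAllocatableUpdatePeriodSeconds: 60     # refresh allocatable.count every 60s
```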


### DRA: structured parameters

This KEP defines the core Dynamic Resource Allocation (DRA) API, establishing the "structured parameters" model as the standard for requesting hardware resources beyond CPU and RAM. Replacing the deprecated opaque parameter model, this API enables workloads to request devices like GPUs and network accelerators using a schema that the Kubernetes scheduler can understand and act upon directly.
Member

I wouldn't mention DRA by the KEP; it was just locked, and all changes were part of other KEPs. This paragraph seems to be out of place. Perhaps it can be something like:

Suggested change
This KEP defines the core Dynamic Resource Allocation (DRA) API, establishing the "structured parameters" model as the standard for requesting hardware resources beyond CPU and RAM. Replacing the deprecated opaque parameter model, this API enables workloads to request devices like GPUs and network accelerators using a schema that the Kubernetes scheduler can understand and act upon directly.
Dynamic Resource Allocation (DRA), which entered GA in v1.34, is a modern, expressive way to describe advanced hardware and the Pods utilizing this hardware.


This KEP defines the core Dynamic Resource Allocation (DRA) API, establishing the "structured parameters" model as the standard for requesting hardware resources beyond CPU and RAM. Replacing the deprecated opaque parameter model, this API enables workloads to request devices like GPUs and network accelerators using a schema that the Kubernetes scheduler can understand and act upon directly.

As the foundational DRA API graduated to General Availability in v1.34, v1.35 focuses on stability and ecosystem growth rather than breaking changes to this core specification. The API continues to rely on `ResourceSlice` objects for publishing device inventories and `ResourceClaims` for user requests. In v1.35, this stable foundation supports the continued evolution of advanced Alpha features such as *Device Taints* and *Consumable Capacity*, ensuring consistent and reliable resource allocation for batch and edge use cases.
Member

Suggested change
As the foundational DRA API graduated to General Availability in v1.34, v1.35 focuses on stability and ecosystem growth rather than breaking changes to this core specification. The API continues to rely on `ResourceSlice` objects for publishing device inventories and `ResourceClaims` for user requests. In v1.35, this stable foundation supports the continued evolution of advanced Alpha features such as *Device Taints* and *Consumable Capacity*, ensuring consistent and reliable resource allocation for batch and edge use cases.
As the foundational DRA API graduated to General Availability in v1.34, v1.35 focuses on stability and ecosystem growth by delivering exciting new capabilities. The API relies on `ResourceSlice` objects for publishing device inventories and `ResourceClaims` for user requests. In v1.35, this stable foundation supports the continued evolution of advanced Alpha features such as *Device Taints* and *Consumable Capacity*, ensuring consistent and reliable resource allocation for batch and edge use cases.

However, we rarely include Alpha features in the release blog.


This work was done as part of [KEP #5067](https://kep.k8s.io/5067) led by SIG Node.

### Configurable NUMA Node limit for Topology Manager
Member

@ffromani can you help review this?


This work was done as part of [KEP #127](https://kep.k8s.io/127) led by SIG Node.

### VolumeSource: OCI artifact and/or image
Member

@saschagrunert pls help review text

### VolumeSource: OCI artifact and/or image

When creating a Pod, you often need to provide data, binaries, or configuration files for your containers. This meant building the content into the main container image or using a custom init container to download and unpack files into an `emptyDir`. Both these approaches are still valid. Kubernetes v1.31 added support for the `image` volume type allowing Pods to declaratively pull and unpack OCI container image artifacts into a volume. This lets you package and deliver data-only artifacts such as configs, binaries, or machine learning models using standard OCI registry tools.
With this feature, you can fully separate your data from your container image and remove the need for extra init containers or startup scripts. The image volume type has been in beta since v1.33 and will be enabled by default in v1.35.
Member

please mention the Containerd version that has full support for the feature.
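
For readers new to the feature, a minimal Pod using the `image` volume type; the artifact reference is a placeholder, while `reference` and `pullPolicy` are the fields defined by KEP-4639:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo
spec:
  containers:
  - name: app
    image: busybox:1.36                # illustrative application image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: model
      mountPath: /data
      readOnly: true
  volumes:
  - name: model
    image:
      reference: registry.example.com/models/demo:v1   # hypothetical OCI artifact
      pullPolicy: IfNotPresent
```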


This KEP introduces a mechanism where the `kubelet` enforces credential verification for cached images. Before allowing a Pod to use a locally cached image, the `kubelet` checks if the Pod has the valid credentials to pull it. This ensures that only authorized workloads can use private images, regardless of whether they are already present on the node, significantly hardening the security posture for shared clusters.

As this feature graduates to Beta in v1.35, it has been enabled by default. However, users can disable it by setting the `KubeletEnsureSecretPulledImages` feature gate to false. The feature works by tracking the credentials used to pull images and verifying that subsequent requests for the same image provide matching or valid credentials before launching the container.
Member

Please mention the configuration that can be done on the kubelet: `imagePullCredentialsVerificationPolicy`. This is practically an action item when upgrading to 1.35.

Suggested change
As this feature graduates to Beta in v1.35, it has been enabled by default. However, users can disable it by setting the `KubeletEnsureSecretPulledImages` feature gate to false. The feature works by tracking the credentials used to pull images and verifying that subsequent requests for the same image provide matching or valid credentials before launching the container.
As this feature graduates to Beta in v1.35, it has been enabled by default. However, users can disable it by setting the `KubeletEnsureSecretPulledImages` feature gate to false, or adjust the kubelet configuration field `imagePullCredentialsVerificationPolicy`. It is also possible to configure the desired security level through this field, ranging from the least secure but maximally backward-compatible option to the most secure but potentially breaking one.
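
A KubeletConfiguration fragment for the suggested knob might look like this; the policy values in the comment are the ones I believe KEP-2535 defines, so double-check against the documentation:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Assumed options, from most compatible to most strict: NeverVerify,
# NeverVerifyPreloadedImages, NeverVerifyAllowlistedImages, AlwaysVerify.
imagePullCredentialsVerificationPolicy: NeverVerifyPreloadedImages
```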


This work was done as part of [KEP #2535](https://kep.k8s.io/2535) led by SIG Node.

### Fine-grained Container restart rules
Member

@yuanwang04 please review

### Expose node topology labels via Downward API

Accessing node topology information, such as region and zone, from within a Pod has typically required querying the Kubernetes API server. While functional, this approach creates complexity and security risks by necessitating broad RBAC permissions or sidecar containers just to retrieve infrastructure metadata. Kubernetes v1.35 promotes the capability to expose node topology labels directly via the Downward API to beta.

Member

Suggested change
Note: this KEP injects topology labels into every Pod so they can be consumed through the Downward API. After upgrading to 1.35, you will see a lot of new labels on each Pod; this is part of the design.


*This is a selection of some of the improvements that are now beta following the v1.35 release.*

### Expose node topology labels via Downward API
Member

@andrewsykim for text review


This work was done as part of [KEP #5237](https://kep.k8s.io/5237) led by SIG Cloud Provider.

### CSI driver opt-in for service account tokens via secrets field
Member

This feature started in beta in v1.35


The primary benefit is the prevention of accidental credential exposure in logs and error messages. This change ensures that sensitive workload identities are handled via the appropriate secure channels, aligning with best practices for secret management while maintaining backward compatibility for existing drivers.

This work was done as part of [KEP #5538](https://kep.k8s.io/5538) led by SIG Storage.
Member

This work was done by SIG Auth in cooperation with SIG Storage.

* [Invariant Testing](https://kep.k8s.io/5468)
* [In-Place Update of Pod Resources](https://kep.k8s.io/1287)
* [Fine-grained SupplementalGroups control](https://kep.k8s.io/3619)
* [Structured Authentication Config](https://kep.k8s.io/3331)
Member

This graduated to stable in v1.34. In v1.35 we only added some metrics.


This work was done as part of [KEP #5307](https://kep.k8s.io/5307) led by SIG Node.

## New features in Alpha
Member

Constrained impersonation from SIG Auth is missing in the alpha section.
