
When scaling up workload replicas, karmada-scheduler distributes new replicas to unhealthy member clusters #6861

@LivingCcj

Description

What happened:
A workload has already been distributed to member cluster A and member cluster B, while member cluster A has been unhealthy for a period of time.
When we scale up the workload replicas, the newly added replicas are distributed to the unhealthy cluster A. However, cluster A is down and cannot run these replicas.
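
For reference, the workload being scaled is the Deployment referenced by the policy's resourceSelectors below. A minimal sketch of it, assuming the image and initial replica count (only the name and namespace come from the policy):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: default
spec:
  replicas: 2               # assumed initial replica count before the scale-up
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: app
        image: nginx:1.25   # assumed image; any workload reproduces the issue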

The propagation policy that reproduces the issue, using the dynamic replica division strategy:

spec:
  conflictResolution: Abort
  placement:
    clusterAffinity:
      labelSelector:
        matchExpressions:
        - key: zone
          operator: In
          values:
          - cloud
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        dynamicWeight: AvailableReplicas
  preemption: Never
  priority: 0
  propagateDeps: true
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: test
    namespace: default
  schedulerName: default-scheduler
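
Both member clusters satisfy the clusterAffinity label selector above. A minimal sketch of the relevant label on a member Cluster object (the cluster name is an assumption and the spec is omitted; only the zone: cloud label comes from the policy):

apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member-a            # assumed name; member-b carries the same label
  labels:
    zone: cloud             # matches the clusterAffinity labelSelector in the policy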

What you expected to happen:
The karmada-scheduler should stop distributing new replicas to unhealthy member clusters.
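
Concretely, with the dynamic AvailableReplicas weighting, one would expect the scheduling result on the workload's ResourceBinding to keep the unhealthy cluster at its previous count and place all newly added replicas on the healthy cluster. A sketch of the expected spec.clusters, assuming cluster names member-a/member-b, an initial 1/1 split, and a scale-up from 2 to 5 replicas:

spec:
  clusters:
  - name: member-a          # unhealthy; should keep at most its existing replicas
    replicas: 1
  - name: member-b          # healthy; should receive all newly added replicas
    replicas: 4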

Environment:

  • Karmada version: v1.9.1
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version): 1.9.1
  • Others:
