
Conversation

@rohan-019 (Contributor) commented Sep 24, 2025

What type of PR is this?
/kind bug

What this PR does / why we need it:
This PR fixes an issue in the Karmada scheduler where historical replica assignments (AssignedReplicasForCluster) were unconditionally added to a cluster’s available replicas, even when the cluster no longer satisfied the workload’s updated scheduling constraints (e.g., nodeAffinity, resourceRequests).

With the previous logic, clusters that became unsuitable after constraint changes could appear more attractive due to inflated available replicas, leading to suboptimal scheduling decisions.

This PR ensures that historical assignments are only counted for clusters that still meet the current constraints, so new replicas are scheduled to clusters that can actually accommodate them.
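
A minimal sketch of the intended logic, using hypothetical type and helper names based on the description above (the actual change lives in pkg/scheduler/core/spreadconstraint/group_clusters.go and uses the scheduler's real structures):

```go
package spreadconstraint

// Hypothetical, simplified types for illustration only; the real scheduler
// works with its own cluster and ResourceBinding structures.
type clusterSnapshot struct {
	Name              string
	AvailableReplicas int32 // replicas the cluster can host under the CURRENT constraints
}

type bindingHistory struct {
	assigned map[string]int32 // cluster name -> replicas assigned in the previous scheduling round
}

func (b bindingHistory) AssignedReplicasForCluster(name string) int32 {
	return b.assigned[name]
}

// availableWithHistory shows the gist of the fix: previously assigned replicas
// are credited back only when the cluster still reports capacity under the
// updated constraints, so clusters that no longer qualify are not inflated.
func availableWithHistory(c clusterSnapshot, history bindingHistory) int32 {
	available := c.AvailableReplicas
	if available > 0 { // the cluster still satisfies the current constraints
		available += history.AssignedReplicasForCluster(c.Name)
	}
	return available
}
```

With this check, a cluster that reports zero available replicas after a constraint change no longer looks artificially attractive just because it previously hosted replicas of the workload.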
Which issue(s) this PR fixes:

Fixes #6771

Special notes for your reviewer:

Modified pkg/scheduler/core/spreadconstraint/group_clusters.go to conditionally include historical assignments.

Verified behavior with unit tests and scaling scenarios under constraint changes.

Does this PR introduce a user-facing change?: NO

karmada-scheduler: Fixed a scheduling bug where historically assigned replicas were incorrectly counted for clusters that no longer meet updated scheduling constraints, which could lead to suboptimal replica placement.

@karmada-bot added the kind/bug label on Sep 24, 2025

Summary of Changes

Hello @rohan-019, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug within the Karmada scheduler's replica assignment logic. Previously, historical replica assignments were unconditionally added to a cluster's available capacity, even if the cluster no longer satisfied the workload's updated scheduling constraints. This flaw could lead to inefficient and incorrect replica placement. The implemented fix introduces a conditional check, ensuring that historical assignments are only factored in for clusters that genuinely meet the current constraints, thereby promoting more accurate and optimal scheduling decisions.

Highlights

  • Scheduling Logic Correction: Modified the generateClustersInfo function in the scheduler to conditionally include historical assigned replicas (AssignedReplicasForCluster) in a cluster's AvailableReplicas. This ensures that historical assignments are only considered if the cluster still has available capacity and meets current scheduling constraints.
  • Bug Fix: Addresses a bug where clusters that no longer met updated scheduling constraints could still appear more attractive due to inflated available replica counts, leading to suboptimal scheduling decisions.

@karmada-bot (Collaborator)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign garrybest for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot added the size/XS label (PR changes 0-9 lines, ignoring generated files) on Sep 24, 2025
@gemini-code-assist (bot) left a comment


Code Review

This pull request fixes a bug in the Karmada scheduler where historical replica counts could lead to poor scheduling decisions for clusters that no longer meet constraints. The proposed logic change correctly addresses this by only considering historical replicas if a cluster can accommodate new ones. My review includes a suggestion to improve a comment for clarity and strongly recommends adding a regression test to prevent this issue from recurring.
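
A regression test along those lines could look like the following sketch. It reuses the hypothetical clusterSnapshot / bindingHistory helpers from the sketch in the PR description above rather than the scheduler's real types; an actual test would exercise generateClustersInfo directly:

```go
package spreadconstraint

import "testing"

// TestHistoricalReplicasIgnoredForUnfitClusters checks that historical
// assignments are only credited to clusters that still report capacity
// under the current constraints.
func TestHistoricalReplicasIgnoredForUnfitClusters(t *testing.T) {
	history := bindingHistory{assigned: map[string]int32{"member1": 3, "member2": 2}}

	tests := []struct {
		name    string
		cluster clusterSnapshot
		want    int32
	}{
		{
			name:    "cluster still fits: history is credited",
			cluster: clusterSnapshot{Name: "member1", AvailableReplicas: 5},
			want:    8,
		},
		{
			name:    "cluster no longer fits: history must not inflate availability",
			cluster: clusterSnapshot{Name: "member2", AvailableReplicas: 0},
			want:    0,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := availableWithHistory(tt.cluster, history); got != tt.want {
				t.Errorf("availableWithHistory() = %d, want %d", got, tt.want)
			}
		})
	}
}
```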

@rohan-019 force-pushed the fix-ignore-historical-replicas branch from 78fa8ee to 45c5120 on September 24, 2025 05:41
@codecov-commenter

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 45.75%. Comparing base (70411ab) to head (45c5120).
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #6779   +/-   ##
=======================================
  Coverage   45.74%   45.75%           
=======================================
  Files         689      689           
  Lines       57161    57163    +2     
=======================================
+ Hits        26149    26154    +5     
+ Misses      29383    29379    -4     
- Partials     1629     1630    +1     
Flag Coverage Δ
unittests 45.75% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@rohan-019 (Contributor, Author) commented

Hey, @GitHubxsy @zhzhuang-zju
please review!

@GitHubxsy (Contributor) commented Sep 29, 2025

> Hey, @GitHubxsy @zhzhuang-zju please review!

@rohan-019 I apologize for the delay in getting to your PR. Your PR only covers part of the scenario, e.g. when affinity rules are modified and the cluster has no nodes that meet the new affinity requirements. However, if the user modifies the resource requests (e.g., CPU/memory requests for the Pods), the calculated maximum available replicas will still be incorrect.
My idea is to remove the existing logic from the karmada-scheduler and delegate the responsibility of providing an accurate maximum available replica count entirely to the karmada-scheduler-estimator.
How the karmada-scheduler-estimator can accurately calculate the maximum available replicas:
The resource view snapshot maintained by the karmada-scheduler-estimator should subtract the resources already consumed by the Pods of this workload that are currently deployed. Perhaps we need to add a new selector parameter that matches the Pods belonging to this specific workload, allowing us to accurately account for the resources they already occupy within the resource view snapshot. This adjusted resource view would then be used to calculate the true maximum available replicas.
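
A rough sketch of that estimator-side idea using standard Kubernetes types; the function name and the selector parameter are hypothetical, and the real karmada-scheduler-estimator keeps its own node snapshot and replica arithmetic:

```go
package estimator

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/labels"
)

// maxAvailableReplicasIgnoringOwnPods sketches the proposal above: before
// computing how many replicas fit on a node, credit back the resources
// already consumed by Pods of this same workload (matched by a caller-supplied
// selector), so the workload's own existing Pods do not count against it.
func maxAvailableReplicasIgnoringOwnPods(
	nodeAllocatable corev1.ResourceList, // node's allocatable resources
	nodeUsed corev1.ResourceList, // resources requested by all pods on the node
	podsOnNode []corev1.Pod, // pods currently scheduled to the node
	workloadSelector labels.Selector, // NEW: matches this workload's own pods
	perReplicaRequest corev1.ResourceList, // request of a single replica
) int32 {
	// Start from the node's free view: allocatable minus what is in use.
	free := map[corev1.ResourceName]*resource.Quantity{}
	for name, alloc := range nodeAllocatable {
		q := alloc.DeepCopy()
		if used, ok := nodeUsed[name]; ok {
			q.Sub(used)
		}
		free[name] = &q
	}

	// Add back what this workload's existing pods already occupy, so they do
	// not make the node look fuller than it effectively is for this workload.
	for _, pod := range podsOnNode {
		if !workloadSelector.Matches(labels.Set(pod.Labels)) {
			continue
		}
		for _, c := range pod.Spec.Containers {
			for name, req := range c.Resources.Requests {
				if q, ok := free[name]; ok {
					q.Add(req)
				}
			}
		}
	}

	// The node fits as many replicas as the scarcest resource allows.
	replicas := int32(-1)
	for name, req := range perReplicaRequest {
		if req.IsZero() {
			continue
		}
		q, ok := free[name]
		if !ok || q.MilliValue() <= 0 {
			return 0
		}
		n := int32(q.MilliValue() / req.MilliValue())
		if replicas < 0 || n < replicas {
			replicas = n
		}
	}
	if replicas < 0 {
		return 0
	}
	return replicas
}
```

Summing this per-node result over the cluster's nodes would give a maximum available replica count that already accounts for the resources the workload's current Pods hold, which is what lets the scheduler drop its own historical-replica bookkeeping.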



Development

Successfully merging this pull request may close these issues.

Inaccurate Cluster MaxAvailableReplicas Calculation When Scheduling Constraints Change Due to Inclusion of Historically Assigned Replicas
