Implement default resource interpreter in third-party resourcecustomizations for SparkApplication #6818
base: master
Conversation
Pull Request Overview
This PR introduces a built-in resource interpreter for SparkApplication resources from the Spark Operator (v1beta2) to support better resource management in Karmada. The implementation provides customized interpretation logic for health status, resource requirements, replica counting, and status aggregation specific to Spark applications.
- Adds comprehensive Lua scripts for interpreting SparkApplication health, replicas, and component resources
- Implements status aggregation and reflection logic for SparkApplication status fields
- Provides test data and test configuration to validate the interpreter functionality
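The replica-counting logic summarized above can be sketched in Lua. This is a minimal illustration, not the PR's actual script: the `GetReplicas` entry point follows Karmada's resource-interpreter convention, and the `spec.executor.instances` field follows the v1beta2 SparkApplication API — treat both as assumptions here.

```lua
-- Sketch only (assumption: one driver pod plus
-- spec.executor.instances executor pods).
function GetReplicas(desiredObj)
  local replicas = 1 -- the driver always contributes one pod
  if desiredObj ~= nil and desiredObj.spec ~= nil and
     desiredObj.spec.executor ~= nil and
     desiredObj.spec.executor.instances ~= nil then
    replicas = replicas + desiredObj.spec.executor.instances
  end
  return replicas
end
```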
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| customizations.yaml | Core interpreter configuration with Lua scripts for health, replicas, components, and status handling |
| customizations_tests.yaml | Test configuration defining operations to validate the interpreter |
| desired-sparkapplication.yaml | Test data representing the desired SparkApplication spec |
| observed-sparkapplication.yaml | Test data representing an observed SparkApplication with status |
| status-file.yaml | Test data containing status information for aggregation testing |
Code Review
This pull request introduces a default resource interpreter for SparkApplication resources, which is a great addition. The implementation covers health interpretation, component resource discovery, replica counting, status aggregation, and status reflection using Lua scripts. The overall approach is solid, but I've found several issues in the Lua scripts related to correctness and robustness, such as incorrect field names (camelCase vs. PascalCase), unsafe access to object properties, and use of global variables. I've left detailed comments with suggestions to address these points. Additionally, one of the test data files contains a field with an incorrect name, which should also be corrected.
Code Review
This pull request introduces a default resource interpreter for SparkApplication resources. The implementation is comprehensive, covering health interpretation, resource discovery for components, replica counting, status aggregation, and status reflection using Lua scripts. The changes are well-structured and include the necessary test files. My review focuses on improving the Lua scripts for correctness and maintainability, as well as enhancing the test coverage and fixing minor issues in the test data. The most critical feedback concerns the AggregateStatus function, which has flawed logic and uses global variables.
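To illustrate the kind of aggregation being discussed, here is a hedged Lua sketch that keeps all state in locals rather than globals. The `AggregateStatus(desiredObj, statusItems)` signature and the per-item `.status` field follow Karmada's interpreter convention, and the last-writer-wins merge is a simplifying assumption for illustration, not the PR's actual behavior.

```lua
-- Illustrative only: fold per-cluster statuses into the desired object
-- without leaking any global variables.
function AggregateStatus(desiredObj, statusItems)
  if desiredObj == nil or statusItems == nil then
    return desiredObj
  end
  for i = 1, #statusItems do
    local item = statusItems[i]
    if item ~= nil and item.status ~= nil then
      -- Simplifying assumption: a SparkApplication is scheduled to a
      -- single cluster, so the observed status is taken as-is.
      desiredObj.status = item.status
    end
  end
  return desiredObj
end
```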
Code Review
This pull request introduces a resource interpreter for SparkApplication CRD. The implementation contains several bugs in the Lua scripts for resource and status interpretation, including incorrect field names, improper use of helper functions, lack of nil-safety checks, and flawed status aggregation logic. I've provided detailed comments and suggestions to address these critical issues.
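The nil-safety and global-variable problems called out in this review are common Lua pitfalls. The snippet below is a generic illustration of the pattern (the field names are assumptions), not code taken from this PR.

```lua
-- Unsafe: indexing a nil 'status' raises a runtime error, and the
-- missing 'local' makes 'state' a global that leaks between calls.
function UnsafeState(observedObj)
  state = observedObj.status.applicationState.state
  return state
end

-- Safer: 'local' plus short-circuit 'and' guards every level, so a
-- missing field yields nil instead of a runtime error.
function SafeState(observedObj)
  local status = observedObj and observedObj.status
  local appState = status and status.applicationState
  return appState and appState.state
end
```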
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@ Coverage Diff @@
##           master    #6818      +/-   ##
==========================================
+ Coverage   45.84%   45.88%   +0.03%
==========================================
  Files         690      690
  Lines       57300    57392      +92
==========================================
+ Hits        26271    26333      +62
- Misses      29399    29423      +24
- Partials     1630     1636       +6
```

View full report in Codecov by Sentry.
Force-pushed from 4f95e73 to 9402ef8.
/assign
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: (no approvers yet). The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing a comment.
Force-pushed from ac4c55d to 857206a.
I just looked at the InterpretHealth function. Please cc me once it's ready for review, as I see you are updating this PR right now. Please add a test report on how to test it.
The implementation under discussion:

```lua
function InterpretHealth(observedObj)
  if observedObj and observedObj.status and observedObj.status.applicationState and observedObj.status.applicationState.state then
    local state = observedObj.status.applicationState.state
    return state ~= 'PENDING' and state ~= 'UNKNOWN'
  end
  return false
end
```

Suggested change:

```lua
function InterpretHealth(observedObj)
  if not observedObj or
     not observedObj.status or
     not observedObj.status.applicationState or
     not observedObj.status.applicationState.state then
    return false
  end
  -- Only the 'FAILED' state is considered unhealthy. All other states are treated
  -- as healthy or recoverable.
  local state = observedObj.status.applicationState.state
  if state == 'FAILED' then
    return false
  end
  return true
end
```
Force-pushed from 13ad9f5 to 7593da0.
@RainbowMango I've revised the code, and it's now ready for review. I've been quite busy with other matters recently, but I'll make sure to provide the test report (detailing how to test it) within the next couple of days. Thank you for your understanding and patience!
…zations for SparkApplication Signed-off-by: liaolecheng <[email protected]>
What type of PR is this?
/kind feature
What this PR does / why we need it:
Which issue(s) this PR fixes:
Part of #6587
Special notes for your reviewer:
Does this PR introduce a user-facing change?: