Added tests for pkg/version #5418

Conversation

anujagrawal699
Contributor

Description:
This PR introduces tests for pkg/version/sharedcommand/sharedcommand.go and improves the test coverage of the other files in the version package, raising its coverage to 100%.

Test Coverage:
The test coverage of the version package has been improved from 41.67% to 100%. This can be verified from the pkg/version directory by running go test ./... -coverprofile=coverage.out.

What type of PR is this?
/kind failing-test
/kind feature

What this PR does / why we need it:
This PR adds comprehensive tests to the version package, improving the overall test coverage and ensuring the reliability and maintainability of the codebase.

Which issue(s) this PR fixes:
Fixes a part of #5235

Does this PR introduce a user-facing change?:

NONE

Signed-off-by: Anuj Agrawal <[email protected]>
@karmada-bot karmada-bot added the kind/failing-test (Categorizes issue or PR as related to a consistently or frequently failing test.) and kind/feature (Categorizes issue or PR as related to a new feature.) labels on Aug 23, 2024
@karmada-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign kevin-wangzefeng for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.) label on Aug 23, 2024
@codecov-commenter

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 30.15%. Comparing base (bcf68fa) to head (56515e1).

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5418      +/-   ##
==========================================
+ Coverage   30.12%   30.15%   +0.03%     
==========================================
  Files         632      632              
  Lines       43936    43936              
==========================================
+ Hits        13236    13251      +15     
+ Misses      29742    29726      -16     
- Partials      958      959       +1     
Flag         Coverage Δ
unittests    30.15% <ø> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@NishantBansal2003
Contributor

I think almost all tests are covered. /lgtm

@anujagrawal699
Contributor Author

> I think almost all tests are covered. /lgtm

Thanks @NishantBansal2003.

@yashpandey06
Contributor

yashpandey06 commented Aug 23, 2024

@anujagrawal699 It's good practice to run make verify and make test before pushing the changes; it would help you spot the breaking points in the tests (if any).

@anujagrawal699
Contributor Author

> @anujagrawal699 It's good practice to run make verify and make test before pushing the changes; it would help you spot the breaking points in the tests (if any).

@yashpandey06 I think CI sometimes fails due to different test environments and some other factors; after retesting, it will pass. If the lint or UT CI workflows fail, then that is an indication of real faults.

@yashpandey06
Contributor

> @anujagrawal699 It's good practice to run make verify and make test before pushing the changes; it would help you spot the breaking points in the tests (if any).

> @yashpandey06 I think CI sometimes fails due to different test environments and some other factors; after retesting, it will pass. If the lint or UT CI workflows fail, then that is an indication of real faults.

@anujagrawal699 Yeah, exactly, but just to be more sure it would be good to adopt this practice before pushing changes.

@XiShanYongYe-Chang
Member

> I think almost all tests are covered. /lgtm

The /lgtm command can be moved to a separate line so that the label gets applied.

@XiShanYongYe-Chang
Member

/retest

Comment on lines +58 to +62
if tc.gitVersion != "" {
	rv, err = ParseGitVersion(tc.gitVersion)
} else {
	rv = &ReleaseVersion{Version: nil}
}
Member


This test case may not be appropriate here, because an empty string is not a valid input for the ParseGitVersion method. In addition, the expectError parameter is not taken into account for this case.

Contributor Author


Thank you for your review. I added that check for a new test case where r.Version is nil, and I overlooked the expectError parameter in that case.
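
For illustration, a minimal sketch of how each case could carry its own expectError flag, assuming the ParseGitVersion(string) (*ReleaseVersion, error) signature used in the snippet above; the case names and inputs are examples only:

package version

import "testing"

// TestParseGitVersion drives both the success and the error path through the
// per-case expectError flag, so invalid inputs such as "" stay in one place.
func TestParseGitVersion(t *testing.T) {
	testCases := []struct {
		name        string
		gitVersion  string
		expectError bool
	}{
		{name: "valid semver tag", gitVersion: "v1.9.0", expectError: false},
		{name: "empty string is rejected", gitVersion: "", expectError: true},
	}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			rv, err := ParseGitVersion(tc.gitVersion)
			if tc.expectError {
				if err == nil {
					t.Fatalf("expected an error for %q, got none", tc.gitVersion)
				}
				return
			}
			if err != nil {
				t.Fatalf("unexpected error for %q: %v", tc.gitVersion, err)
			}
			if rv == nil {
				t.Fatalf("expected a non-nil ReleaseVersion for %q", tc.gitVersion)
			}
		})
	}
}

Handling expectError inside each subtest keeps the success and error paths in a single dimension, which is the concern raised above.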

limitations under the License.
*/

package sharedcommand
Member


Hi @anujagrawal699, some of the tests added to this file are pointless. The tested method itself only creates a struct; most of the functionality is actually provided by the cobra package, and the new tests seem to be testing those features. For karmada, this may be overkill and is not easy to maintain.
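
For example, a focused test could assert only on the struct that this package builds rather than re-testing cobra itself. A minimal sketch follows; the NewCmdVersion name, its parentCommand parameter, and the expected Use value "version" are assumptions for illustration:

package sharedcommand

import "testing"

// TestNewCmdVersion checks only the fields this package sets on the command;
// cobra's own flag parsing and execution are intentionally not re-tested.
func TestNewCmdVersion(t *testing.T) {
	cmd := NewCmdVersion("karmadactl")
	if cmd == nil {
		t.Fatal("expected a non-nil *cobra.Command")
	}
	if cmd.Use != "version" {
		t.Errorf("unexpected Use: got %q, want %q", cmd.Use, "version")
	}
	if cmd.Run == nil && cmd.RunE == nil {
		t.Error("expected the command to define a Run or RunE function")
	}
}

Anything beyond that, such as flag parsing or help output, would mostly be exercising the cobra package rather than karmada's own logic.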

@XiShanYongYe-Chang
Member

New tests should aim to cover the functional logic of the code. However, some methods are not suitable for adding tests; doing so not only fails to improve code quality, but also increases the code maintenance cost.

We can try to improve coverage of the code that actually contains logic, which requires some judgment to identify.

You've done so well before, keep it up. 👍

For the current PR, I suggest closing it first.

@anujagrawal699
Contributor Author

> New tests should aim to cover the functional logic of the code. However, some methods are not suitable for adding tests; doing so not only fails to improve code quality, but also increases the code maintenance cost.
>
> We can try to improve coverage of the code that actually contains logic, which requires some judgment to identify.
>
> You've done so well before, keep it up. 👍
>
> For the current PR, I suggest closing it first.

Thank you for your guidance; I understand your point about focusing on meaningful tests. I'll align future test contributions with your feedback.

Labels
kind/failing-test: Categorizes issue or PR as related to a consistently or frequently failing test.
kind/feature: Categorizes issue or PR as related to a new feature.
size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.
6 participants