Syncing latest changes from upstream master for rook #570
Conversation
This update modifies the rook-ceph-cluster Helm chart to allow defining cephStorageClass as part of the block pool. The documentation for the cephECBlockPool definition was also updated. Signed-off-by: Javier <[email protected]>
Bumps the github-dependencies group with 1 update: [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go).

Updates `github.com/aws/aws-sdk-go` from 1.50.5 to 1.50.12
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](aws/aws-sdk-go@v1.50.5...v1.50.12)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: github-dependencies
...

Signed-off-by: dependabot[bot] <[email protected]>
The validation logic for checking the number of devices is wrong when `metadataDevice` is set and `osdsPerDevice` > 1. `len(cvReports)` is the expected number of OSDs, which is the number of specified data devices multiplied by `osdsPerDevice`. Closes: rook#13637 Signed-off-by: Satoru Takeuchi <[email protected]>
…ices-when-metadatadevice-is-set osd: correct counting the devices when metadataDevice is set
Currently, when Rook provisions OSDs (in the OSD prepare job), Rook effectively runs a ceph-volume command such as the following:

```console
ceph-volume lvm batch --prepare <deviceA> <deviceB> <deviceC> --db-devices <metadataDevice>
```

However, `ceph-volume lvm batch` only supports whole disks and LVM logical volumes, not disk partitions. We can resort to `ceph-volume lvm prepare` to implement this. Signed-off-by: Liang Zheng <[email protected]>
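A sketch of the fallback path described above, assuming one invocation per OSD (the device paths are placeholders):

```console
# Prepare a single OSD from a data partition, with its RocksDB metadata
# on a separate partition; unlike `lvm batch`, `lvm prepare` accepts partitions.
ceph-volume lvm prepare --data /dev/sdb1 --block.db /dev/sdc1
```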
…dependencies-6731b9abb2 build(deps): bump the github-dependencies group with 1 update
Signed-off-by: Joshua Hoblitt <[email protected]>
…ltiple-cephECStorageClass helm: allow defining multiple cephEcStorageClass
…st-item docs: fix CephCluster.spec.cephVersion.imagePullPolicy list item
When CPU requests and limits are assigned to a pod, the pod is guaranteed its requests but capped at its limits. Even if spare CPU cycles are available, the pod cannot use them. Thus, pods with limits set can be unnecessarily denied compute when they need to burst. Setting CPU limits is therefore not recommended: the CPU requests already guarantee that no pod will be starved of at least its requested share. Signed-off-by: travisn <[email protected]>
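A minimal illustration of the resulting resource pattern in Helm values (the values themselves are hypothetical, not the chart's defaults):

```yaml
# Sketch: request CPU without capping it; memory keeps both bounds.
resources:
  requests:
    cpu: "500m"
    memory: "1Gi"
  limits:
    memory: "1Gi"   # no cpu limit, so the pod may burst into spare cycles
```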
Since NetworkFence is a cluster-scoped resource, we don't need the namespace and ownerReferences; setting them causes garbage-collector errors. Signed-off-by: subhamkrai <[email protected]>
helm: Remove cpu limits from all pods
core: remove ownerRef from networkFence
osd: support create osd with metadata partition
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: df-build-team, travisn.
PR containing the latest commits from upstream master branch