
[server] Validate block-cache config against system memory #1508

Open
wants to merge 5 commits into main
Conversation

majisourav99
Contributor

Validate block-cache config against system memory

If an operator deploys a config that exceeds system resource limits, it puts the cluster into an inoperable state and eventually destroys it. This PR fails the rollout of such a block-cache config.

How was this PR tested?

GHCI

Does this PR introduce any user-facing changes?

  • No. You can skip the rest of this section.
  • Yes. Make sure to explain your proposed changes and call out the behavior change.

Contributor

@m-nagarajan left a comment

Thanks @majisourav99. Left a couple of questions.

rocksDBServerConfig.getRocksDBBlockCacheSizeInBytes() + (rocksDBServerConfig.isUseSeparateRMDCacheEnabled()
? rocksDBServerConfig.getRocksDBRMDBlockCacheSizeInBytes()
: 0);
if (Runtime.getRuntime().maxMemory() * 0.8 < cacheBytesNeeded) {
Contributor

Can we make this threshold configurable?
Also, can multiple RocksDB instances be instantiated, resulting in combined usage of more than the allowed memory?

Contributor

I don't think this will work. The Runtime will tell you information about the JVM, but the RocksDB data usage won't be billed against that, will it?

Contributor Author

The block cache is shared among all RocksDB engines.

Contributor

Yeah, Zac is correct.
One way in the Oracle JVM (found this on Stack Overflow):

        long memorySize = ((com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean()).getTotalPhysicalMemorySize();

Not sure whether it works with OpenJDK or not.
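For reference, the bean-based probe being discussed can be sketched as below. `getOSMemorySize` is the helper name used later in this thread; the `com.sun.management` cast works on both Oracle JDK and OpenJDK (the interface lives in the `jdk.management` module), though `getTotalPhysicalMemorySize()` is deprecated in newer JDKs in favor of `getTotalMemorySize()`:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class OsMemoryProbe {
    // Returns total physical memory in bytes, or -1 when the platform
    // bean does not expose it (e.g. a non-HotSpot JVM).
    static long getOSMemorySize() {
        OperatingSystemMXBean osBean = ManagementFactory.getOperatingSystemMXBean();
        if (osBean instanceof com.sun.management.OperatingSystemMXBean) {
            // Deprecated since JDK 14 (use getTotalMemorySize() there),
            // but still present and functional on Oracle JDK and OpenJDK.
            return ((com.sun.management.OperatingSystemMXBean) osBean).getTotalPhysicalMemorySize();
        }
        return -1; // caller should skip the validation when unknown
    }

    public static void main(String[] args) {
        System.out.println(getOSMemorySize() > 0);
    }
}
```

Returning -1 (rather than throwing) when the bean is unavailable lets the caller degrade gracefully instead of blocking deployment on exotic JVMs.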

@@ -338,9 +338,9 @@ public RocksDBServerConfig(VeniceProperties props) {
}

this.rocksDBBlockCacheSizeInBytes =
-    props.getSizeInBytes(ROCKSDB_BLOCK_CACHE_SIZE_IN_BYTES, 16 * 1024 * 1024 * 1024L); // 16GB
+    props.getSizeInBytes(ROCKSDB_BLOCK_CACHE_SIZE_IN_BYTES, 2 * 1024 * 1024 * 1024L); // 2GB
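An aside on the default-value expressions in this hunk: the trailing `L` on the last operand matters, because it promotes the final multiply to long arithmetic. Without it the whole product is evaluated as 32-bit int math and 2^34 silently wraps to 0. A quick self-contained sketch (not from the PR):

```java
public class SizeLiterals {
    public static void main(String[] args) {
        // The 1024L operand makes the final multiply long arithmetic:
        long sixteenGb = 16 * 1024 * 1024 * 1024L;
        // All-int arithmetic: 2^34 wraps modulo 2^32 to 0 before widening to long:
        long wrapped = 16 * 1024 * 1024 * 1024;
        System.out.println(sixteenGb); // 17179869184
        System.out.println(wrapped);   // 0
    }
}
```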
Contributor

why the jump from 16 -> 2?

Contributor

Glancing at our config repo, we're already explicitly setting these in all colos, so the only users this might impact are Da Vinci users? We should check that. I don't think shrinking it will be a problem for server deployments, though I too am curious about the reasoning for shrinking these defaults in this PR.

Contributor Author

To make all the tests pass, since the Mac JVM allocates 4G of memory.

Contributor Author

DVC users pick it up from the global config or can override it; that is also set to 16G.

Contributor

Can we please verify how this config gets defaulted for Da Vinci users? If we're covered there, I think this is fine.

Contributor

Emm.. why not just override it in VeniceServerWrapper instead of here to let the tests pass on Mac?

@@ -338,9 +338,9 @@ public RocksDBServerConfig(VeniceProperties props) {
}

this.rocksDBBlockCacheSizeInBytes =
-    props.getSizeInBytes(ROCKSDB_BLOCK_CACHE_SIZE_IN_BYTES, 16 * 1024 * 1024 * 1024L); // 16GB
+    props.getSizeInBytes(ROCKSDB_BLOCK_CACHE_SIZE_IN_BYTES, 2 * 1024 * 1024 * 1024L); // 2GB

this.rocksDBRMDBlockCacheSizeInBytes =
-    props.getSizeInBytes(ROCKSDB_RMD_BLOCK_CACHE_SIZE_IN_BYTES, 2 * 1024 * 1024 * 1024L); // 2GB
+    props.getSizeInBytes(ROCKSDB_RMD_BLOCK_CACHE_SIZE_IN_BYTES, 1 * 1024 * 1024 * 1024L); // 1GB
Contributor

Same question as above?

rocksDBServerConfig.getRocksDBBlockCacheSizeInBytes() + (rocksDBServerConfig.isUseSeparateRMDCacheEnabled()
? rocksDBServerConfig.getRocksDBRMDBlockCacheSizeInBytes()
: 0);
if (Runtime.getRuntime().maxMemory() * 0.8 < cacheBytesNeeded) {

: 0);

long systemMemorySize = getOSMemorySize();
if (systemMemorySize > 0 && (systemMemorySize * 0.8 < cacheBytesNeeded)) {
Contributor

As @m-nagarajan mentioned, it is better to make the ratio configurable.

OperatingSystemMXBean osBean = ManagementFactory.getOperatingSystemMXBean();

if (osBean instanceof com.sun.management.OperatingSystemMXBean) {
com.sun.management.OperatingSystemMXBean extendedOsBean = (com.sun.management.OperatingSystemMXBean) osBean;
Contributor

Have you tested this function on Mac or Linux with OpenJDK?
If it doesn't work in those environments, this feature will be useless...

Contributor Author

Yes, this works on Mac, and the Linux container/Helix code uses the same mechanism to fetch system memory.
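Putting the thread together, the check under review can be sketched roughly as below. The helper names, the 0.8 default, and the configurable `memoryLimitRatio` parameter (the reviewers' suggestion) are illustrative, not necessarily what the final PR ships; the memory size is passed in so the validation itself stays deterministic and testable:

```java
public class BlockCacheValidator {
    // Combined cache demand: block cache plus the separate RMD cache when enabled.
    static long cacheBytesNeeded(long blockCacheBytes, long rmdCacheBytes, boolean separateRmdCache) {
        return blockCacheBytes + (separateRmdCache ? rmdCacheBytes : 0);
    }

    // Fail fast when the configured caches exceed the allowed share of
    // system memory; skip the check when memory could not be determined (<= 0).
    static void validate(long cacheBytesNeeded, long systemMemoryBytes, double memoryLimitRatio) {
        if (systemMemoryBytes > 0 && systemMemoryBytes * memoryLimitRatio < cacheBytesNeeded) {
            throw new IllegalStateException(
                "Block cache config of " + cacheBytesNeeded + " bytes exceeds " + memoryLimitRatio
                    + " of system memory (" + systemMemoryBytes + " bytes)");
        }
    }

    public static void main(String[] args) {
        long eightGb = 8L * 1024 * 1024 * 1024;
        // 2 GB block cache + 1 GB separate RMD cache on an 8 GB host: passes (3 GB < 6.4 GB).
        validate(cacheBytesNeeded(2L << 30, 1L << 30, true), eightGb, 0.8);
        // 7 GB block cache on an 8 GB host: rejected (7 GB > 6.4 GB).
        boolean rejected = false;
        try {
            validate(cacheBytesNeeded(7L << 30, 0, false), eightGb, 0.8);
        } catch (IllegalStateException e) {
            rejected = true;
        }
        System.out.println(rejected); // true
    }
}
```

Skipping the check on an unknown memory size (rather than rejecting the rollout) matches the `systemMemorySize > 0` guard visible in the diff above.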

4 participants