[DVC] Add RequestBasedMetaRepository to enable metadata retrieval directly from server #4285
VeniceCI-StaticAnalysisAndUnitTests.yml
on: pull_request
Matrix: Clients / UT & CodeCov
Matrix: Controller / UT & CodeCov
Matrix: Integrations / UT & CodeCov
Matrix: Internal / UT & CodeCov
Matrix: Router / UT & CodeCov
Matrix: Server / UT & CodeCov
Matrix: StaticAnalysis
ValidateGradleWrapper (11s)
StaticAnalysisAndUnitTestsCompletionCheck (0s)
Annotations
40 errors
LeakedPushStatusCleanUpServiceTest.testLeakedZKNodeShouldBeDeleted:
services/venice-controller/src/test/java/com/linkedin/venice/pushmonitor/LeakedPushStatusCleanUpServiceTest.java#L95
org.mockito.exceptions.verification.ArgumentsAreDifferent: Argument(s) are different! Wanted:
offlinePushAccessor.deleteOfflinePushStatusAndItsPartitionStatuses(
"test_store_v2"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpServiceTest.testLeakedZKNodeShouldBeDeleted(LeakedPushStatusCleanUpServiceTest.java:95)
Actual invocations have different arguments:
offlinePushAccessor.loadOfflinePushStatusPaths(
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:129)
offlinePushAccessor.getOfflinePushStatusCreationTime(
"test_store_v2"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:152)
offlinePushAccessor.deleteOfflinePushStatusAndItsPartitionStatuses(
"test_store_v1"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.lambda$run$1(LeakedPushStatusCleanUpService.java:183)
offlinePushAccessor.loadOfflinePushStatusPaths(
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:129)
offlinePushAccessor.getOfflinePushStatusCreationTime(
"test_store_v2"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:152)
offlinePushAccessor.deleteOfflinePushStatusAndItsPartitionStatuses(
"test_store_v1"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.lambda$run$1(LeakedPushStatusCleanUpService.java:183)
offlinePushAccessor.loadOfflinePushStatusPaths(
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:129)
offlinePushAccessor.getOfflinePushStatusCreationTime(
"test_store_v2"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:152)
offlinePushAccessor.loadOfflinePushStatusPaths(
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:129)
offlinePushAccessor.getOfflinePushStatusCreationTime(
"test_store_v2"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:152)
offlinePushAccessor.deleteOfflinePushStatusAndItsPartitionStatuses(
"test_store_v1"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.lambda$run$1(LeakedPushStatusCleanUpService.java:183)
offlinePushAccessor.loadOfflinePushStatusPaths(
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:129)
offlinePushAccessor.getOfflinePushStatusCreationTime(
"test_store_v2"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:152)
offlinePushAccessor.deleteOfflinePushStatusAndItsPartitionStatuses(
"test_store_v1"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.lambda$run$1(LeakedPushStatusCleanUpService.java:183)
offlinePushAccessor.loadOfflinePushStatusPaths(
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:129)
offlinePushAccessor.getOfflinePushStatusCreationTime(
"test_store_v2"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.run(LeakedPushStatusCleanUpService.java:152)
offlinePushAccessor.deleteOfflinePushStatusAndItsPartitionStatuses(
"test_store_v1"
);
-> at com.linkedin.venice.pushmonitor.LeakedPushStatusCleanUpService$PushStatusCleanUpTask.lambda$run$1(LeakedPushStatusCleanUpService.java:183)
offlinePushAccessor.loadOfflinePushStatusPaths(
);
-> at com.linkedin.venice.pushmonitor.LeakedPushSta
|
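The ArgumentsAreDifferent failure above means the test's `verify(...)` expected `deleteOfflinePushStatusAndItsPartitionStatuses("test_store_v2")` but the service only ever deleted `test_store_v1`. A minimal, hypothetical sketch of diagnosing such a mismatch with an `ArgumentCaptor` (the interface and names below are illustrative stand-ins, not the actual Venice test code):

```java
import static org.mockito.Mockito.*;

import org.mockito.ArgumentCaptor;

// Illustrative stand-in for the real OfflinePushAccessor interface.
interface OfflinePushAccessor {
  void deleteOfflinePushStatusAndItsPartitionStatuses(String pushStatus);
}

class CaptorSketch {
  public static void main(String[] args) {
    OfflinePushAccessor accessor = mock(OfflinePushAccessor.class);

    // Simulate what the service under test actually did.
    accessor.deleteOfflinePushStatusAndItsPartitionStatuses("test_store_v1");

    // Capture the actual argument rather than asserting a fixed literal,
    // so the assertion failure reports the real value that was deleted.
    ArgumentCaptor<String> deleted = ArgumentCaptor.forClass(String.class);
    verify(accessor, atLeastOnce())
        .deleteOfflinePushStatusAndItsPartitionStatuses(deleted.capture());
    System.out.println(deleted.getAllValues());
  }
}
```

Capturing the argument turns "Wanted ... but invocations have different arguments" into a direct view of what the cleanup task deleted, which is often faster to read than the full interaction dump above.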
Server / UT & CodeCov (8)
Process completed with exit code 1.
|
SITWithTWiseAndBufferAfterLeaderTest.testOffsetSyncBeforeGracefulShutDown[1](AA_OFF):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseAndBufferAfterLeaderTest.java#L1
java.lang.AssertionError: pcs.getLatestProcessedLocalVersionTopicOffset() for PARTITION_FOO is expected to be zero! expected [0] but found [2]
|
SITWithTWiseWithoutBufferAfterLeaderTest.testRecordLevelMetricForCurrentVersion[0](false):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseWithoutBufferAfterLeaderTest.java#L1
Wanted but not invoked:
hostLevelIngestionStats.recordTotalBytesConsumed(
<any long>
);
-> at com.linkedin.davinci.stats.HostLevelIngestionStats.recordTotalBytesConsumed(HostLevelIngestionStats.java:499)
However, there were exactly 39 interactions with this mock:
hostLevelIngestionStats.recordProcessConsumerActionLatency(
4.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.002285d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.001372d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.11361299999999999d,
1738800520484L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.070762d,
1738800520486L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.001372d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2621)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.001152d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.053149d,
1738800520497L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.co
|
NativeMetadataRepositoryTest.testNativeMetadataRepositoryStats:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/NativeMetadataRepositoryTest.java#L141
java.lang.AssertionError: expected [2000.0] but found [1000.0]
|
RequestBasedMetaRepositoryTest.testRequestBasedMetaRepositoryGetSchemaData:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/RequestBasedMetaRepositoryTest.java#L130
java.lang.AssertionError: expected object to not be null
|
SITWithTWiseAndBufferAfterLeaderTest.testRecordLevelMetricForCurrentVersion[0](false):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseAndBufferAfterLeaderTest.java#L1
org.mockito.exceptions.verification.TooFewActualInvocations:
hostLevelIngestionStats.recordTotalBytesConsumed(
<any long>
);
Wanted 2 times:
-> at com.linkedin.davinci.stats.HostLevelIngestionStats.recordTotalBytesConsumed(HostLevelIngestionStats.java:499)
But was 1 time:
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2629)
|
NativeMetadataRepositoryTest.testNativeMetadataRepositoryStats:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/NativeMetadataRepositoryTest.java#L131
java.lang.AssertionError: expected [1000.0] but found [0.0]
|
Server / UT & CodeCov (17)
Process completed with exit code 1.
|
SITWithPWiseWithoutBufferAfterLeaderTest.testProcessConsumerActionsError:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithPWiseWithoutBufferAfterLeaderTest.java#L1
Wanted but not invoked:
leaderFollowerStoreIngestionTask.reportError(
<any string>,
1,
<Capturing argument>
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.reportError(StoreIngestionTask.java:4353)
However, there were exactly 56 interactions with this mock:
leaderFollowerStoreIngestionTask.subscribePartition(
TestTopic_5d48b9b506_d01db033_v1-1
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTaskTest.runTest(StoreIngestionTaskTest.java:887)
leaderFollowerStoreIngestionTask.subscribePartition(
TestTopic_5d48b9b506_d01db033_v1-1,
true
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.subscribePartition(StoreIngestionTask.java:644)
leaderFollowerStoreIngestionTask.throwIfNotRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.subscribePartition(StoreIngestionTask.java:658)
leaderFollowerStoreIngestionTask.isRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.throwIfNotRunning(StoreIngestionTask.java:605)
leaderFollowerStoreIngestionTask.getIsRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.isRunning(StoreIngestionTask.java:4186)
leaderFollowerStoreIngestionTask.nextSeqNum();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.subscribePartition(StoreIngestionTask.java:667)
leaderFollowerStoreIngestionTask.run();
-> at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
leaderFollowerStoreIngestionTask.isRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.run(StoreIngestionTask.java:1657)
leaderFollowerStoreIngestionTask.getIsRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.isRunning(StoreIngestionTask.java:4186)
leaderFollowerStoreIngestionTask.updateIngestionRoleIfStoreChanged(
Mock for Store, hashCode: 1569760686
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.run(StoreIngestionTask.java:1659)
leaderFollowerStoreIngestionTask.isHybridMode();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.updateIngestionRoleIfStoreChanged(StoreIngestionTask.java:1575)
leaderFollowerStoreIngestionTask.processConsumerActions(
Mock for Store, hashCode: 1569760686
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.run(StoreIngestionTask.java:1660)
leaderFollowerStoreIngestionTask.processConsumerAction(
KafkaTaskMessage{type=SUBSCRIBE, topicPartition=TestTopic_5d48b9b506_d01db033_v1-1, attempts=4, sequenceNumber=1, createdTimestampInMs=1738800567676},
Mock for Store, hashCode: 1569760686
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1949)
leaderFollowerStoreIngestionTask.processCommonConsumerAction(
KafkaTaskMessage{type=SUBSCRIBE, topicPartition=TestTopic_5d48b9b506_d01db033_v1-1, attempts=4, sequenceNumber=1, createdTimestampInMs=1738800567676}
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.processConsumerAction(LeaderFollowerStoreIngestionTask.java:572)
leaderFollowerStoreIngestionTask.reportIfCatchUpVersionTopicOffset(
PCS{replicaId=TestTopic_5d48b9b506_d01db033_v1-1, hybrid=false, latestProcessedLocalVersionTopicOffset=-1, latestProcessedUpstreamVersionTopicOffset=-1, latestProcessedUpstreamRTOffsetMap={}, latestIgnoredUpstreamRTOffsetMap={}, latestRTOffsetTriedToProduceToVTMap{}, offsetRecord=OffsetRecord{localVersionTopicOffset=-1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=-1, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}, errorReported=false, started=false, lagCaughtUp=false, processedRecordSizeSinceLastSync=0, leaderFollowerState=STANDBY}
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processCommonConsumerAction(StoreIngestionTask.java:2199)
leaderFollowerStoreIngestionTask.updateLeaderTopicOnFollower(
PCS{replicaId=TestTopic_5d48b9b506_d01db033_v1-1, hybrid=false, latestProcessedLocalVersionTopicOffse
|
SITWithTWiseAndBufferAfterLeaderTest.testNotifier[1](AA_OFF):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseAndBufferAfterLeaderTest.java#L1
org.mockito.exceptions.verification.ArgumentsAreDifferent: Argument(s) are different! Wanted:
storageMetadataService.put(
"TestTopic_584bf51113_90b73d1d_v1",
2,
OffsetRecord{localVersionTopicOffset=3, upstreamOffset=-1, leaderTopic=null, offsetLag=0, eventTimeEpochMs=-1, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=true, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTaskTest.lambda$testNotifier$35(StoreIngestionTaskTest.java:1621)
Actual invocations have different arguments:
storageMetadataService.getLastOffset(
"TestTopic_584bf51113_90b73d1d_v1",
1
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.getLastOffset(DeepCopyOffsetManager.java:49)
storageMetadataService.getLastOffset(
"TestTopic_584bf51113_90b73d1d_v1",
2
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.getLastOffset(DeepCopyOffsetManager.java:49)
storageMetadataService.computeStoreVersionState(
"TestTopic_584bf51113_90b73d1d_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$1028/0x000000080088d440@796250a
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_584bf51113_90b73d1d_v1",
1,
OffsetRecord{localVersionTopicOffset=1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738800546211, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.put(
"TestTopic_584bf51113_90b73d1d_v1",
1,
OffsetRecord{localVersionTopicOffset=2, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738800546211, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.computeStoreVersionState(
"TestTopic_584bf51113_90b73d1d_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$1028/0x000000080088d440@37d10d1b
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_584bf51113_90b73d1d_v1",
1,
OffsetRecord{localVersionTopicOffset=3, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738800546212, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=true, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.computeStoreVersionState(
"TestTopic_584bf51113_90b73d1d_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$1028/0x000000080088d440@3e31c7f7
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_584bf51113_90b73d1d_v1",
2,
OffsetRecord{localVersionTopicOffset=1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738800546211, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.put(
"TestTopic_584bf51113_90b73d1d_v1",
2,
OffsetRecord{localVersionTopicOffset=2, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738800546212, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerCl
|
SITWithTWiseAndBufferAfterLeaderTest.testVeniceMessagesProcessingWithSortedInputWithBlobMode[3](true, true):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseAndBufferAfterLeaderTest.java#L1
org.mockito.exceptions.misusing.CannotStubVoidMethodWithReturnValue:
'put' is a *void method* and it *cannot* be stubbed with a *return value*!
Voids are usually stubbed with Throwables:
doThrow(exception).when(mock).someVoidMethod();
If you need to set the void method to do nothing you can use:
doNothing().when(mock).someVoidMethod();
For more information, check out the javadocs for Mockito.doNothing().
***
If you're unsure why you're getting above error read on.
Due to the nature of the syntax above problem might occur because:
1. The method you are trying to stub is *overloaded*. Make sure you are calling the right overloaded version.
2. Somewhere in your test you are stubbing *final methods*. Sorry, Mockito does not verify/stub final methods.
3. A spy is stubbed using when(spy.foo()).then() syntax. It is safer to stub spies -
- with doReturn|Throw() family of methods. More in javadocs for Mockito.spy() method.
4. Mocking methods declared on non-public parent classes is not supported.
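The stubbing rule Mockito describes above can be sketched with a minimal, self-contained example. This is illustrative only and uses a plain `List` mock, not the Venice `put` mock from the failing test: a void method cannot go inside `when(...)`, so the `do…().when(mock)` family is used instead.

```java
import static org.mockito.Mockito.*;

import java.util.List;

public class VoidStubExample {
  public static void main(String[] args) {
    @SuppressWarnings("unchecked")
    List<String> mockList = mock(List.class);

    // 'clear' is a void method, so when(mockList.clear()).thenReturn(...) cannot
    // be expressed; attempting the equivalent via the when(...) API is what raises
    // CannotStubVoidMethodWithReturnValue. Stub voids with the do-family instead:
    doNothing().when(mockList).clear();
    mockList.clear(); // no-op, as stubbed

    // Re-stubbing the same void method to throw:
    doThrow(new IllegalStateException("boom")).when(mockList).clear();
    try {
      mockList.clear();
    } catch (IllegalStateException e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
```

The same `doReturn|Throw()` style is also the safer choice when stubbing spies, as point 3 in the error message notes.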
|
SITWithTWiseAndBufferAfterLeaderTest.methodSetUp:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseAndBufferAfterLeaderTest.java#L1
org.mockito.exceptions.misusing.UnfinishedVerificationException:
Missing method call for verify(mock) here:
-> at com.linkedin.davinci.stats.AggKafkaConsumerServiceStats.recordTotalUpdateCurrentAssignmentLatency(AggKafkaConsumerServiceStats.java:68)
Example of correct verification:
verify(mock).doSomething()
Also, this error might show up because you verify either of: final/private/equals()/hashCode() methods.
Those methods *cannot* be stubbed/verified.
Mocking methods declared on non-public parent classes is not supported.
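The "missing method call for verify(mock)" failure above happens when a `verify(mock)` is not immediately chained with the method being verified; the exception then surfaces on the *next* Mockito call, which is why it points at an unrelated line (`recordTotalUpdateCurrentAssignmentLatency`). A minimal sketch of the correct and incorrect shapes, using an illustrative `List` mock rather than the Venice stats mock:

```java
import static org.mockito.Mockito.*;

import java.util.List;

public class VerifyExample {
  public static void main(String[] args) {
    @SuppressWarnings("unchecked")
    List<String> mockList = mock(List.class);
    mockList.add("x");

    // Correct: verify(mock) is immediately followed by the call to check.
    verify(mockList).add("x");

    // Incorrect (left as a comment): a bare verify(mockList); with no chained
    // call leaves the verification unfinished, and Mockito reports
    // UnfinishedVerificationException at the NEXT interaction with the framework.
    // verify(mockList);

    System.out.println("verified");
  }
}
```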
|
SITWithTWiseWithoutBufferAfterLeaderTest.testProcessConsumerActionsError:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseWithoutBufferAfterLeaderTest.java#L1
Wanted but not invoked:
leaderFollowerStoreIngestionTask.reportError(
<any string>,
1,
<Capturing argument>
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.reportError(StoreIngestionTask.java:4353)
However, there were exactly 58 interactions with this mock:
leaderFollowerStoreIngestionTask.subscribePartition(
TestTopic_58cab41202_899a4d90_v1-1
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTaskTest.runTest(StoreIngestionTaskTest.java:887)
leaderFollowerStoreIngestionTask.subscribePartition(
TestTopic_58cab41202_899a4d90_v1-1,
true
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.subscribePartition(StoreIngestionTask.java:644)
leaderFollowerStoreIngestionTask.throwIfNotRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.subscribePartition(StoreIngestionTask.java:658)
leaderFollowerStoreIngestionTask.isRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.throwIfNotRunning(StoreIngestionTask.java:605)
leaderFollowerStoreIngestionTask.getIsRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.isRunning(StoreIngestionTask.java:4186)
leaderFollowerStoreIngestionTask.nextSeqNum();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.subscribePartition(StoreIngestionTask.java:667)
leaderFollowerStoreIngestionTask.run();
-> at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
leaderFollowerStoreIngestionTask.isRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.run(StoreIngestionTask.java:1657)
leaderFollowerStoreIngestionTask.getIsRunning();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.isRunning(StoreIngestionTask.java:4186)
leaderFollowerStoreIngestionTask.updateIngestionRoleIfStoreChanged(
Mock for Store, hashCode: 1357698805
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.run(StoreIngestionTask.java:1659)
leaderFollowerStoreIngestionTask.isHybridMode();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.updateIngestionRoleIfStoreChanged(StoreIngestionTask.java:1575)
leaderFollowerStoreIngestionTask.processConsumerActions(
Mock for Store, hashCode: 1357698805
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.run(StoreIngestionTask.java:1660)
leaderFollowerStoreIngestionTask.processConsumerAction(
KafkaTaskMessage{type=SUBSCRIBE, topicPartition=TestTopic_58cab41202_899a4d90_v1-1, attempts=5, sequenceNumber=1, createdTimestampInMs=1738800548377},
Mock for Store, hashCode: 1357698805
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1949)
leaderFollowerStoreIngestionTask.processCommonConsumerAction(
KafkaTaskMessage{type=SUBSCRIBE, topicPartition=TestTopic_58cab41202_899a4d90_v1-1, attempts=5, sequenceNumber=1, createdTimestampInMs=1738800548377}
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.processConsumerAction(LeaderFollowerStoreIngestionTask.java:572)
leaderFollowerStoreIngestionTask.reportIfCatchUpVersionTopicOffset(
PCS{replicaId=TestTopic_58cab41202_899a4d90_v1-1, hybrid=false, latestProcessedLocalVersionTopicOffset=-1, latestProcessedUpstreamVersionTopicOffset=-1, latestProcessedUpstreamRTOffsetMap={}, latestIgnoredUpstreamRTOffsetMap={}, latestRTOffsetTriedToProduceToVTMap{}, offsetRecord=OffsetRecord{localVersionTopicOffset=-1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=-1, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}, errorReported=false, started=false, lagCaughtUp=false, processedRecordSizeSinceLastSync=0, leaderFollowerState=STANDBY}
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processCommonConsumerAction(StoreIngestionTask.java:2199)
leaderFollowerStoreIngestionTask.updateLeaderTopicOnFollower(
PCS{replicaId=TestTopic_58cab41202_899a4d90_v1-1, hybrid=false, latestProcessedLocalVersionTopicOffse
|
NativeMetadataRepositoryTest.testNativeMetadataRepositoryStats:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/NativeMetadataRepositoryTest.java#L141
java.lang.AssertionError: expected [2000.0] but found [1000.0]
|
RequestBasedMetaRepositoryTest.testRequestBasedMetaRepositoryGetSchemaData:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/RequestBasedMetaRepositoryTest.java#L130
java.lang.AssertionError: expected object to not be null
|
RequestBasedMetaRepositoryTest.testRequestBasedMetaRepositoryGetSchemaData:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/RequestBasedMetaRepositoryTest.java#L130
java.lang.AssertionError: expected object to not be null
|
RequestBasedMetaRepositoryTest.testRequestBasedMetaRepositoryGetSchemaData:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/RequestBasedMetaRepositoryTest.java#L130
java.lang.AssertionError: expected object to not be null
|
RequestBasedMetaRepositoryTest.testRequestBasedMetaRepositoryGetSchemaData:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/RequestBasedMetaRepositoryTest.java#L130
java.lang.AssertionError: expected object to not be null
|
Server / UT & CodeCov (11)
Process completed with exit code 1.
|
DispatchingAvroGenericStoreClientTest.testBatchGet:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L654
org.testng.internal.thread.ThreadTimeoutException: Method com.linkedin.venice.fastclient.DispatchingAvroGenericStoreClientTest.testBatchGet() didn't finish within the time-out 10000
|
DispatchingAvroGenericStoreClientTest.testBatchGetToUnreachableClient:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L1021
java.lang.AssertionError: Cannot invoke "com.linkedin.venice.fastclient.StatsAvroGenericStoreClient.batchGet(com.linkedin.venice.fastclient.BatchGetRequestContext, java.util.Set)" because "this.statsAvroGenericStoreClient" is null expected [true] but found [false]
|
DispatchingAvroGenericStoreClientTest.testBatchGet:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L654
org.testng.internal.thread.ThreadTimeoutException: Method com.linkedin.venice.fastclient.DispatchingAvroGenericStoreClientTest.testBatchGet() didn't finish within the time-out 10000
|
DispatchingAvroGenericStoreClientTest.testBatchGetToUnreachableClient:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L1021
java.lang.AssertionError: Cannot invoke "com.linkedin.venice.fastclient.StatsAvroGenericStoreClient.batchGet(com.linkedin.venice.fastclient.BatchGetRequestContext, java.util.Set)" because "this.statsAvroGenericStoreClient" is null expected [true] but found [false]
|
StaticAnalysisAndUnitTestsCompletionCheck
Process completed with exit code 1.
|
Artifacts
Produced during runtime

Name | Size
---|---
StaticAnalysis | 666 KB
clients-jdk11 | 2.6 MB
clients-jdk17 | 2.64 MB
clients-jdk8 | 2.56 MB
controller-jdk11 | 1.65 MB
controller-jdk17 | 1.64 MB
controller-jdk8 | 1.63 MB
integrations-jdk11 | 561 KB
integrations-jdk17 | 566 KB
integrations-jdk8 | 548 KB
internal-jdk11 | 3.51 MB
internal-jdk17 | 3.53 MB
internal-jdk8 | 3.5 MB
router-jdk11 | 1.04 MB
router-jdk17 | 1.04 MB
router-jdk8 | 1.03 MB
server-jdk11 | 8.87 MB
server-jdk17 | 7.56 MB
server-jdk8 | 7.55 MB