[server][common][vpj] Introduce VeniceComplexPartitioner to materialized view #4311
VeniceCI-StaticAnalysisAndUnitTests.yml
on: pull_request
Matrix: Clients / UT & CodeCov
Matrix: Controller / UT & CodeCov
Matrix: Integrations / UT & CodeCov
Matrix: Internal / UT & CodeCov
Matrix: Router / UT & CodeCov
Matrix: Server / UT & CodeCov
Matrix: StaticAnalysis
ValidateGradleWrapper: 11s
StaticAnalysisAndUnitTestsCompletionCheck: 0s
Annotations: 27 errors
Internal / UT & CodeCov (8)
Process completed with exit code 1.
|
Internal / UT & CodeCov (17)
Process completed with exit code 1.
|
Internal / UT & CodeCov (11)
Process completed with exit code 1.
|
Server / UT & CodeCov (8)
Process completed with exit code 1.
|
SITWithTWiseWithoutBufferAfterLeaderTest.testNotifier[0](AA_ON):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseWithoutBufferAfterLeaderTest.java#L1
org.mockito.exceptions.verification.ArgumentsAreDifferent: Argument(s) are different! Wanted:
storageMetadataService.put(
"TestTopic_6491053337_3d20456b_v1",
2,
OffsetRecord{localVersionTopicOffset=3, upstreamOffset=-1, leaderTopic=null, offsetLag=0, eventTimeEpochMs=-1, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=true, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTaskTest.lambda$testNotifier$35(StoreIngestionTaskTest.java:1621)
Actual invocations have different arguments:
storageMetadataService.getLastOffset(
"TestTopic_6491053337_3d20456b_v1",
1
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.getLastOffset(DeepCopyOffsetManager.java:49)
storageMetadataService.getLastOffset(
"TestTopic_6491053337_3d20456b_v1",
2
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.getLastOffset(DeepCopyOffsetManager.java:49)
storageMetadataService.computeStoreVersionState(
"TestTopic_6491053337_3d20456b_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$1066/0x00007f774073f870@7d5e5e71
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_6491053337_3d20456b_v1",
1,
OffsetRecord{localVersionTopicOffset=1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909962107, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.put(
"TestTopic_6491053337_3d20456b_v1",
1,
OffsetRecord{localVersionTopicOffset=2, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909962107, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.computeStoreVersionState(
"TestTopic_6491053337_3d20456b_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$1066/0x00007f774073f870@5c6e1bf8
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.computeStoreVersionState(
"TestTopic_6491053337_3d20456b_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$1066/0x00007f774073f870@16926137
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_6491053337_3d20456b_v1",
1,
OffsetRecord{localVersionTopicOffset=3, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909962107, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=true, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.put(
"TestTopic_6491053337_3d20456b_v1",
2,
OffsetRecord{localVersionTopicOffset=1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909962107, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.put(
"TestTopic_6491053337_3d20456b_v1",
2,
OffsetRecord{localVersionTopicOffset=2, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909962107, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerC
|
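For context, Mockito's ArgumentsAreDifferent is raised when a verify(...) specifies concrete argument values and the mock was only ever called with different ones; Mockito then dumps every actual invocation, which is the wall of storageMetadataService calls above. A minimal, self-contained sketch of the pattern (the OffsetStore interface and offset values are hypothetical stand-ins, not the actual StoreIngestionTaskTest code):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

// Hypothetical stand-in for the mocked storage metadata service.
interface OffsetStore {
  void put(String topic, int partition, long offset);
}

public class ArgumentsAreDifferentSketch {
  public static void main(String[] args) {
    OffsetStore store = mock(OffsetStore.class);

    // The production-like code path records offset 2 for partition 2...
    store.put("TestTopic_v1", 2, 2L);

    // ...but the verification expects offset 3, so Mockito throws
    // ArgumentsAreDifferent ("Argument(s) are different!") and lists
    // the actual invocations it observed instead.
    verify(store).put("TestTopic_v1", 2, 3L);
  }
}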
SITWithTWiseWithoutBufferAfterLeaderTest.testRecordLevelMetricForCurrentVersion[0](false):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseWithoutBufferAfterLeaderTest.java#L1
Wanted but not invoked:
hostLevelIngestionStats.recordTotalBytesConsumed(
<any long>
);
-> at com.linkedin.davinci.stats.HostLevelIngestionStats.recordTotalBytesConsumed(HostLevelIngestionStats.java:499)
However, there were exactly 27 interactions with this mock:
hostLevelIngestionStats.recordProcessConsumerActionLatency(
1.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.009578d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.14588099999999998d,
1738909967924L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.044372999999999996d,
1738909967925L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2621)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.006071d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.006592d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.049252000000000004d,
1738909967937L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2621)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2621)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.004438d
);
-> at com.linkedin.davinci.kafka.consumer.Lea
|
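"Wanted but not invoked" is the related case where the verified method was never called on the mock at all, even with argument matchers; Mockito then lists every interaction the mock did see. A hedged sketch of that verification style (IngestionStats below is a hypothetical stand-in for HostLevelIngestionStats):

import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

// Hypothetical metrics interface used only for illustration.
interface IngestionStats {
  void recordTotalBytesConsumed(long bytes);
  void recordTotalRecordsConsumed();
}

public class WantedButNotInvokedSketch {
  public static void main(String[] args) {
    IngestionStats stats = mock(IngestionStats.class);

    // Only the record-count metric is emitted; the byte-count metric never is.
    stats.recordTotalRecordsConsumed();

    // Fails with "Wanted but not invoked", followed by the list of
    // interactions the mock actually observed.
    verify(stats).recordTotalBytesConsumed(anyLong());
  }
}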
Server / UT & CodeCov (17)
Process completed with exit code 1.
|
SITWithPWiseAndBufferAfterLeaderTest.testOffsetSyncBeforeGracefulShutDown[1](AA_OFF):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithPWiseAndBufferAfterLeaderTest.java#L1
java.lang.AssertionError: pcs.getLatestProcessedLocalVersionTopicOffset() for PARTITION_FOO is expected to be zero! expected [0] but found [2]
|
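The "expected [0] but found [2]" wording is TestNG's assertEquals format, where the first argument is the observed value, the second the expected one, and the message is prepended on failure. A tiny sketch reproducing that message shape (the variable name is illustrative, not the actual test code):

import org.testng.Assert;

public class AssertEqualsMessageSketch {
  public static void main(String[] args) {
    long actualOffset = 2L; // value the test observed
    // Fails with: "... is expected to be zero! expected [0] but found [2]"
    Assert.assertEquals(
        actualOffset,
        0L,
        "pcs.getLatestProcessedLocalVersionTopicOffset() for PARTITION_FOO is expected to be zero!");
  }
}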
SITWithPWiseWithoutBufferAfterLeaderTest.testNotifier[1](AA_OFF):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithPWiseWithoutBufferAfterLeaderTest.java#L1
org.mockito.exceptions.verification.ArgumentsAreDifferent: Argument(s) are different! Wanted:
storageMetadataService.put(
"TestTopic_5e2a62d9c0_a519f4b_v1",
2,
OffsetRecord{localVersionTopicOffset=3, upstreamOffset=-1, leaderTopic=null, offsetLag=0, eventTimeEpochMs=-1, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=true, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTaskTest.lambda$testNotifier$35(StoreIngestionTaskTest.java:1621)
Actual invocations have different arguments:
storageMetadataService.getLastOffset(
"TestTopic_5e2a62d9c0_a519f4b_v1",
1
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.getLastOffset(DeepCopyOffsetManager.java:49)
storageMetadataService.getLastOffset(
"TestTopic_5e2a62d9c0_a519f4b_v1",
2
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.getLastOffset(DeepCopyOffsetManager.java:49)
storageMetadataService.computeStoreVersionState(
"TestTopic_5e2a62d9c0_a519f4b_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$786/0x00000008007ae040@33c2075b
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_5e2a62d9c0_a519f4b_v1",
1,
OffsetRecord{localVersionTopicOffset=1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909965213, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.put(
"TestTopic_5e2a62d9c0_a519f4b_v1",
1,
OffsetRecord{localVersionTopicOffset=2, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909965213, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.computeStoreVersionState(
"TestTopic_5e2a62d9c0_a519f4b_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$786/0x00000008007ae040@7c9b1c4e
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_5e2a62d9c0_a519f4b_v1",
1,
OffsetRecord{localVersionTopicOffset=3, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909965213, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=true, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.computeStoreVersionState(
"TestTopic_5e2a62d9c0_a519f4b_v1",
com.linkedin.venice.offsets.DeepCopyStorageMetadataService$$Lambda$786/0x00000008007ae040@497fff84
);
-> at com.linkedin.venice.offsets.DeepCopyStorageMetadataService.computeStoreVersionState(DeepCopyStorageMetadataService.java:34)
storageMetadataService.put(
"TestTopic_5e2a62d9c0_a519f4b_v1",
2,
OffsetRecord{localVersionTopicOffset=1, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909965213, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
);
-> at com.linkedin.venice.offsets.DeepCopyOffsetManager.put(DeepCopyOffsetManager.java:38)
storageMetadataService.put(
"TestTopic_5e2a62d9c0_a519f4b_v1",
2,
OffsetRecord{localVersionTopicOffset=2, upstreamOffset=-1, leaderTopic=null, offsetLag=9223372036854775807, eventTimeEpochMs=1738909965213, latestProducerProcessingTimeInMs=0, isEndOfPushReceived=false, databaseInfo={}, realTimeProducerState={}, recordTransformerClassHash=null}
|
SITWithPWiseWithoutBufferAfterLeaderTest.testRecordLevelMetricForCurrentVersion[0](false):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithPWiseWithoutBufferAfterLeaderTest.java#L1
Wanted but not invoked:
hostLevelIngestionStats.recordTotalBytesConsumed(
<any long>
);
-> at com.linkedin.davinci.stats.HostLevelIngestionStats.recordTotalBytesConsumed(HostLevelIngestionStats.java:499)
However, there were exactly 21 interactions with this mock:
hostLevelIngestionStats.recordProcessConsumerActionLatency(
7.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.004949d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.09792399999999998d,
1738909970779L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.002384d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.071935d,
1738909970781L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2621)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.002905d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerStoreIngestionTask.checkLongRunningTaskState(LeaderFollowerStoreIngestionTask.java:756)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.recordQuotaMetrics(StoreIngestionTask.java:1635)
hostLevelIngestionStats.recordConsumerRecordsQueuePutLatency(
0.045936000000000005d,
1738909970791L
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1302)
hostLevelIngestionStats.recordStorageQuotaUsed(
NaNd
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.produceToStoreBufferServiceOrKafka(StoreIngestionTask.java:1307)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2621)
hostLevelIngestionStats.recordTotalRecordsConsumed();
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2621)
hostLevelIngestionStats.recordProcessConsumerActionLatency(
0.0d
);
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerActions(StoreIngestionTask.java:1995)
hostLevelIngestionStats.recordCheckLongRunningTasksLatency(
0.011201d
);
-> at com.linkedin.davinci.kafka.consumer.LeaderFollowerS
|
SITWithTWiseWithoutBufferAfterLeaderTest.testRecordLevelMetricForCurrentVersion[0](false):
clients/da-vinci-client/src/test/java/com/linkedin/davinci/kafka/consumer/SITWithTWiseWithoutBufferAfterLeaderTest.java#L1
org.mockito.exceptions.verification.TooFewActualInvocations:
hostLevelIngestionStats.recordTotalBytesConsumed(
<any long>
);
Wanted 2 times:
-> at com.linkedin.davinci.stats.HostLevelIngestionStats.recordTotalBytesConsumed(HostLevelIngestionStats.java:499)
But was 1 time:
-> at com.linkedin.davinci.kafka.consumer.StoreIngestionTask.processConsumerRecord(StoreIngestionTask.java:2629)
|
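TooFewActualInvocations is the variant where the call did happen, just fewer times than demanded, here times(2) against a single observed call. A minimal sketch of that pattern (the interface is again a hypothetical stand-in):

import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

// Hypothetical metrics interface used only for illustration.
interface ByteStats {
  void recordTotalBytesConsumed(long bytes);
}

public class TooFewInvocationsSketch {
  public static void main(String[] args) {
    ByteStats stats = mock(ByteStats.class);

    // Only one record is consumed before the verification runs.
    stats.recordTotalBytesConsumed(128L);

    // Fails with "Wanted 2 times ... But was 1 time".
    verify(stats, times(2)).recordTotalBytesConsumed(anyLong());
  }
}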
NativeMetadataRepositoryTest.testNativeMetadataRepositoryStats:
clients/da-vinci-client/src/test/java/com/linkedin/davinci/repository/NativeMetadataRepositoryTest.java#L133
java.lang.AssertionError: expected [2000.0] but found [1000.0]
|
Server / UT & CodeCov (11)
Process completed with exit code 1.
|
DispatchingAvroGenericStoreClientTest.testBatchGet:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L664
org.testng.internal.thread.ThreadTimeoutException: Method com.linkedin.venice.fastclient.DispatchingAvroGenericStoreClientTest.testBatchGet() didn't finish within the time-out 10000
|
DispatchingAvroGenericStoreClientTest.testBatchGet:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L664
org.testng.internal.thread.ThreadTimeoutException: Method com.linkedin.venice.fastclient.DispatchingAvroGenericStoreClientTest.testBatchGet() didn't finish within the time-out 10000
|
DispatchingAvroGenericStoreClientTest.testBatchGet:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L1
org.testng.internal.thread.ThreadTimeoutException: Method com.linkedin.venice.fastclient.DispatchingAvroGenericStoreClientTest.testBatchGet() didn't finish within the time-out 10000
|
DispatchingAvroGenericStoreClientTest.testBatchGetToUnreachableClient:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L1010
java.lang.AssertionError: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@67c03e43[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@2d8c5291[Wrapped task = com.linkedin.venice.fastclient.meta.RequestBasedMetadata$$Lambda$466/0x00007fe5c052ae38@749b3fc5]] rejected from java.util.concurrent.ScheduledThreadPoolExecutor@1f0396ff[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] expected [true] but found [false]
|
DispatchingAvroGenericStoreClientTest.testBatchGet:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L1
org.testng.internal.thread.ThreadTimeoutException: Method com.linkedin.venice.fastclient.DispatchingAvroGenericStoreClientTest.testBatchGet() didn't finish within the time-out 10000
|
DispatchingAvroGenericStoreClientTest.testBatchGetToUnreachableClient:
clients/venice-client/src/test/java/com/linkedin/venice/fastclient/DispatchingAvroGenericStoreClientTest.java#L1010
java.lang.AssertionError: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@67c03e43[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@2d8c5291[Wrapped task = com.linkedin.venice.fastclient.meta.RequestBasedMetadata$$Lambda$466/0x00007fe5c052ae38@749b3fc5]] rejected from java.util.concurrent.ScheduledThreadPoolExecutor@1f0396ff[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] expected [true] but found [false]
|
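The fast-client failures above are a different mechanism: TestNG's ThreadTimeoutException fires when a test annotated with a timeOut exceeds its budget, independent of any assertion. A minimal sketch of how that 10-second limit is expressed (the sleeping body is just a placeholder for a hung batch-get):

import org.testng.annotations.Test;

public class TimeoutSketch {
  // TestNG interrupts the test thread and reports
  // "didn't finish within the time-out 10000" once the budget is exceeded.
  @Test(timeOut = 10_000)
  public void slowBatchGetLikeTest() throws InterruptedException {
    Thread.sleep(20_000); // placeholder for work that stalls, e.g. a blocked future
  }
}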
StaticAnalysisAndUnitTestsCompletionCheck
Process completed with exit code 1.
|
Artifacts
Produced during runtime
Name | Size
---|---
StaticAnalysis | 666 KB
clients-jdk11 | 2.63 MB
clients-jdk17 | 2.65 MB
clients-jdk8 | 2.58 MB
controller-jdk11 | 1.64 MB
controller-jdk17 | 1.63 MB
controller-jdk8 | 1.62 MB
integrations-jdk11 | 557 KB
integrations-jdk17 | 561 KB
integrations-jdk8 | 544 KB
internal-jdk11 | 3.53 MB
internal-jdk17 | 3.54 MB
internal-jdk8 | 3.52 MB
router-jdk11 | 1.04 MB
router-jdk17 | 1.04 MB
router-jdk8 | 1.03 MB
server-jdk11 | 9.3 MB
server-jdk17 | 9.34 MB
server-jdk8 | 9.24 MB