- 1. [DONE] Virtual Nanosecond Clock
- 2. [DONE] External Config via JSON File
- 3. [DONE] Runtime Configuration
- 4. [DONE] HttpClient vs OkHttp
- 5. Tulip Documentation
- 6. [DROPPED] Pkl Config Support
- 7. GraalVM native application
- 8. Docker Support
- 9. [DONE] Tulip Runtime Library - local Maven
- 10. Tulip Runtime Library - Maven Central
- 11. [DONE] Java Benchmark Support
- 12. [DROPPED] Reimplement JSON input (config.json)
- 13. [DROPPED] Reimplement JSON output
- 14. [DONE] Micrometer Support
- 15. [DONE] Amper Support
- 16. [DONE] Remove Glowroot support
- 17. [DONE] Add user_actions and user_class to config.json
- 18. [DONE] Add user_params to config.json
- 19. [DONE] Re-write tulip_user.py in Kotlin Script
- 20. [DROPPED] Re-write json_print_asciidoc.py in Kotlin Script
- 21. [DONE] Benchmark Summary
- 22. [DROPPED] Performance Requirements
- 23. [DROPPED] HdrHistogram
- 24. [DONE] Summary Statistics
- 25. JCTools Concurrent Queues
- 26. [DONE] kscript
- 27. [DONE] Total Time Blocked
- 28. [DONE] Add histogram_rt data to results file
- 29. [DONE] Coordinated Omission Problem
- 30. Virtual Threads
- 31. Glances
- 32. Grazie Pro
- 33. AMPER-851
- 34. GitHub Benchmark Actions
- 35. w3m vs lynx
- 36. Dokka and Mermaid
Rework the rate governor logic to use a virtual nanosecond clock. Each virtual clock tick should advance the virtual clock by
(1,000,000,000.0 / tps_rate)
nanoseconds.
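A minimal Kotlin sketch of this logic; the class and property names (RateGovernor, tickNanos) are illustrative, not the actual Tulip API:
import java.util.concurrent.locks.LockSupport

// Rate governor driven by a virtual nanosecond clock: each tick advances the
// virtual clock by 1,000,000,000.0 / tps_rate nanoseconds.
class RateGovernor(tpsRate: Double) {
    private val tickNanos: Double = 1_000_000_000.0 / tpsRate
    private var virtualClockNanos: Double = System.nanoTime().toDouble()

    // Advance the virtual clock by one tick, then wait until the real clock catches up.
    fun awaitNextTick() {
        virtualClockNanos += tickNanos
        var remaining = virtualClockNanos - System.nanoTime()
        while (remaining > 0) {
            LockSupport.parkNanos(remaining.toLong())
            remaining = virtualClockNanos - System.nanoTime()
        }
    }
}
A worker loop would call awaitNextTick() before each transaction, so the achieved rate converges on tps_rate even when individual transactions run long.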
{
"json_filename": "json_results.txt",
"contexts": [
{
"name": "Scenario-1",
"num_users": 16,
"num_threads": 2
},
{
"name": "Scenario-2",
"num_users": 32,
"num_threads": 4
}
],
"benchmarks": [
{
"name": "Test0 (Initialize)",
"enabled": true,
"time": {
"startup_duration": 0,
"warmup_duration": 0,
"main_duration": 0,
"main_duration_repeat_count": 1
},
"throughput_rate": 0.0,
"work_in_progress": 1,
"actions": [
{
"id": 0
},
{
"id": 7
}
]
},
{
"name": "Test1 (Throughput Test - Max)",
"enabled": false,
"time": {
"startup_duration": 60,
"warmup_duration": 60,
"main_duration": 60,
"main_duration_repeat_count": 1
},
"throughput_rate": 0.0,
"work_in_progress": -1,
"actions": [
{
"id": 8
}
]
},
{
"name": "Test2 (Throughput Test - Fixed)",
"enabled": true,
"time": {
"startup_duration": 15,
"warmup_duration": 15,
"main_duration": 60,
"main_duration_repeat_count": 4
},
"throughput_rate": 100.0,
"work_in_progress": 0,
"actions": [
{
"id": 1,
"weight": 25
},
{
"id": 2,
"weight": 75
}
]
}
]
}
Add a --config parameter to specify which config.jsonc file to use.
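For example, a hypothetical invocation (the JAR name is illustrative):
$ java -jar benchmark_app.jar --config ./config.jsonc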
- Use HttpClient from Java 21, and remove support for OkHttp (see the Kotlin sketch after this list)
- Remove unused and optional JAR dependencies:
  - http4k
  - ….
- Create a user guide for Tulip with Antora
- https://www.baeldung.com/java-httpclient-connection-management
- -Djdk.httpclient.connectionPoolSize=1
- -Djdk.httpclient.keepalive.timeout=2
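A minimal Kotlin sketch of issuing a request with the Java 21 HttpClient, assuming the -Djdk.httpclient.* properties above are passed to the JVM (the endpoint is illustrative):
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.time.Duration

fun main() {
    // Connection pooling is controlled by the jdk.httpclient.* system properties above.
    val client = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(5))
        .build()

    val request = HttpRequest.newBuilder(URI.create("https://jsonplaceholder.typicode.com/posts/1"))
        .GET()
        .build()

    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println("${response.statusCode()} ${response.body().length} bytes")
}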
Write a config.pkl file to generate config.json
Build a native executable (exe) of a Tulip benchmark application using GraalVM:
$ ./gradlew nativeCompile
$ ./build/native/nativeCompile/tulip -c ./config.jsonc
Create a Docker container of a Tulip benchmark application using Docker Compose
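A minimal docker-compose.yml sketch, assuming an image named tulip-benchmark is built from the benchmark application and that it accepts a --config argument (both are assumptions):
services:
  tulip-benchmark:
    image: tulip-benchmark:latest
    volumes:
      - ./config.jsonc:/app/config.jsonc:ro
    command: ["--config", "/app/config.jsonc"]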
Create a tulip-runtime-jvm.jar library and publish it to Maven local.
Create a Maven Central-hosted tulip-core.jar runtime library that can be imported by benchmark applications:
<dependency>
    <groupId>io.github.wfouche</groupId>
    <artifactId>tulip-core</artifactId>
    <version>0.8.1</version>
</dependency>
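For Gradle-based benchmark applications, the equivalent dependency in the Kotlin DSL would presumably be:
dependencies {
    implementation("io.github.wfouche:tulip-core:0.8.1")
}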
Allow the benchmark user class to be written in Java or other JVM-compatible languages. Add support for:
- Kotlin
- Java
Use Kotlin Serialization instead of GSON:
- Support JSON5 format
- Support GraalVM
Re-implement how the json_results.txt file is created. Only use a hierarchy of data classes and GSON (or kotlinx.serialization) to create the JSON output:
import kotlin.test.assertEquals
import kotlinx.serialization.*
import kotlinx.serialization.json.Json

@OptIn(ExperimentalSerializationApi::class)
@Serializable
data class Car(val type: String, @EncodeDefault val color: String = "Blue")

val car = Car("Ford")
val jsonString = Json.encodeToString(car)
assertEquals("{\"type\":\"Ford\",\"color\":\"Blue\"}", jsonString)
Instrument the benchmark application using Micrometer (http://micrometer.io) and support performance data extraction via Prometheus and Grafana.
- https://grafana.com/docs/grafana/latest/getting-started/get-started-grafana-prometheus/
- $ docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus
- https://wfouche.grafana.net/a/cloud-home-app/onboarding-flow/start
Also see docker compose scripts at:
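A minimal Kotlin sketch of what the Micrometer instrumentation could look like with a Prometheus registry; the metric name and tag are illustrative, not existing Tulip metrics:
import io.micrometer.core.instrument.Timer
import io.micrometer.prometheus.PrometheusConfig
import io.micrometer.prometheus.PrometheusMeterRegistry
import java.time.Duration

fun main() {
    // One registry for the whole benchmark; Prometheus scrapes it over HTTP.
    val registry = PrometheusMeterRegistry(PrometheusConfig.DEFAULT)

    // Hypothetical per-action timer.
    val timer = Timer.builder("tulip.action.duration")
        .tag("action", "REST-posts")
        .register(registry)

    timer.record(Duration.ofMillis(17))

    // Text exposition format that a Prometheus server would scrape.
    println(registry.scrape())
}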
Change the Tulip project to build using Amper/Gradle.
Remove folder tulip/runtime/glowroot.
{
"user_class": "user.UserHttp",
"user_actions": {
"0": "start",
"1": "DELAY-6ms",
"2": "DELAY-14ms",
"3": "REST-posts",
"4": "REST-comments",
"5": "REST-albums",
"6": "REST-photos",
"7": "REST-todos",
"8": "login",
"99": "stop"
}
}
{
....
"user_params": {
"url": "https://jsonplaceholder.typicode.com",
....
},
....
}
- json_print_asciidoc.py
- json_print_asciidoc.kts
Display a summary of benchmark results at the end of the benchmark:
- Benchmark1
  - Name
  - Average TPS
  - Average response time
  - 90th percentile
  - Max response time
  - Num-failed nnn (%xyz)
- Benchmark2
  - Name
  - Average TPS
  - Average response time
  - 90th percentile
  - Max response time
  - Num-failed nnn (%xyz)
- Benchmark…
  - Name
  - Average TPS
  - Average response time
  - 90th percentile
  - Max response time
  - Num-failed nnn (%xyz)
{
"performance_requirements": {
"avg-tps": "12 tps",
"avg-tps-variance": "10 percent",
...
}
}
Use HdrHistogram to replace Tulip’s own log-linear quantization logic.
HdrHistogram is a standard used by several load testing tools.
- implementation("org.hdrhistogram:HdrHistogram:2.2.2")
- https://github.com/Hyperfoil/Hyperfoil uses HdrHistogram
///usr/bin/env jbang "$0" "$@" ; exit $?
//DEPS org.hdrhistogram:HdrHistogram:2.2.2

import org.HdrHistogram.Histogram;
import java.util.concurrent.ThreadLocalRandom;

public class test_hdrhistogram {
    public static void main(String[] args) {
        //Histogram histogram = new Histogram(3600*1000*1000L, 3);
        Histogram histogram = new Histogram(3);

        // 6 ms delay (average) with 25% of values
        for (int i = 0; i != 250000; i++) {
            histogram.recordValue(ThreadLocalRandom.current().nextLong(12 + 1));
        }

        // 14 ms delay (average) with 75% of values
        for (int i = 0; i != 750000; i++) {
            histogram.recordValue(ThreadLocalRandom.current().nextLong(28 + 1));
        }

        // histogram.getMean() = 12.0
        System.out.println(histogram.getTotalCount());
        histogram.outputPercentileDistribution(System.out, 1.0);
        System.out.println(histogram.getMean());
        System.out.println(histogram.getStdDeviation());
        System.out.println(histogram.getMaxValue());
        System.out.println(histogram.getValueAtPercentile(50.0));
        System.out.println(histogram.getValueAtPercentile(90.0));
        System.out.println(histogram.getValueAtPercentile(95.0));
        System.out.println(histogram.getValueAtPercentile(99.0));
        System.out.println(histogram.getValueAtPercentile(99.9));
    }
}
Implemented HTML reports: full and summary. See the reports folder.
PHASE    METRIC  THROUGHPUT   ACTIONS  MEAN      STD_DEV  p50       p90       p99       p99.9     MAX       SUCCESS  FAILED
example  test    29,41 req/s  1        17,37 ms  0 ms     17,43 ms  17,43 ms  17,43 ms  17,43 ms  17,43 ms  1        0
Replace the existing queues with JCTools concurrent queues.
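A minimal Kotlin sketch using a JCTools queue; the queue type (MPSC) and capacity are assumptions:
import org.jctools.queues.MpscArrayQueue

fun main() {
    // Bounded multi-producer, single-consumer queue from JCTools.
    val tasks = MpscArrayQueue<Runnable>(1024)

    // Producer side: offer() returns false instead of blocking when the queue is full.
    val accepted = tasks.offer(Runnable { println("action") })

    // Consumer side: poll() returns null when the queue is empty.
    if (accepted) {
        tasks.poll()?.run()
    }
}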
Add a counter that records the total time that the main thread is blocked waiting to assign tasks to worker threads.
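A minimal Kotlin sketch of how such a counter could be maintained (names are illustrative):
import java.util.concurrent.atomic.AtomicLong

// Total nanoseconds the main thread spent blocked while handing tasks to worker threads.
val totalTimeBlockedNanos = AtomicLong(0)

// Wraps a blocking hand-off (e.g. BlockingQueue.put) and accumulates the time spent in it.
inline fun <T> recordBlockedTime(handOff: () -> T): T {
    val t0 = System.nanoTime()
    try {
        return handOff()
    } finally {
        totalTimeBlockedNanos.addAndGet(System.nanoTime() - t0)
    }
}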
{
"histogram_rt": {
"10000": 839,
"20000": 745,
"3000": 284,
"30000": 175,
"15000": 792,
"10": 95,
"5000": 247,
"25000": 815,
"2000": 242,
"15": 121,
"8000": 259,
"9000": 277,
"7000": 247,
"4000": 273,
"6000": 261,
"1000": 267,
"5": 3,
"50": 1,
"20": 33,
"6500": 1,
"9": 12,
"1500": 1,
"8": 4,
"6": 1,
"7": 3,
"9500": 3,
"3500": 1,
"25": 3,
"2500": 1,
"8500": 1,
"4500": 1,
"5500": 1,
"7500": 1,
"4": 1,
"30": 1
}
}
Measure the wait time for each action, and add it to the service time.
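A minimal Kotlin sketch of the correction, with illustrative parameter names:
// Response time corrected for coordinated omission: the time the action waited to
// start (relative to its scheduled start on the virtual clock) plus its service time.
fun correctedResponseTimeNanos(
    scheduledStartNanos: Long,   // when the virtual clock said the action should start
    actualStartNanos: Long,      // when the action actually started executing
    endNanos: Long               // when the action completed
): Long {
    val waitTime = actualStartNanos - scheduledStartNanos
    val serviceTime = endNanos - actualStartNanos
    return waitTime + serviceTime
}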
Experiment with assigning one virtual thread to each user object when using Java 21 or above:
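A minimal Kotlin sketch of the experiment (user count and loop body are illustrative):
import java.util.concurrent.Executors

fun main() {
    // Requires Java 21+: one virtual thread per submitted task, i.e. per user object.
    Executors.newVirtualThreadPerTaskExecutor().use { executor ->
        repeat(16) { userId ->
            executor.execute {
                // Placeholder for a Tulip user's action loop.
                println("user $userId on ${Thread.currentThread()}")
            }
        }
    } // use {} closes the executor, which waits for the submitted tasks to finish
}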
Measure resource utilization via Glances:
Use Grazie Pro to write better documentation:
Currently using w3m to display the HTML report in a text console. Consider using lynx instead.
Enable the Mermaid diagram plugin for Dokka.