# Ptrack benchmarks

## Runtime overhead

The first target was to measure the `ptrack` overhead on TPS caused by marking modified pages in the in-memory map. We used a PostgreSQL 12 cluster of approximately 1 GB in size, initialized with `pgbench` on a `tmpfs` partition:

```sh
pgbench -i -s 133
```

The default `pgbench` transaction script [was modified](pgb.sql) to exclude the `pgbench_tellers` and `pgbench_branches` updates in order to lower lock contention and make the `ptrack` overhead more visible (see the sketch after the command below). `pgbench` was then invoked as follows:

```sh
pgbench -s133 -c40 -j1 -n -P15 -T300 -f pgb.sql
```
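
The exact script is the linked [pgb.sql](pgb.sql); for illustration, it is assumed here to be the built-in "tpcb-like" transaction with the two hot-row updates dropped:

```sql
-- Sketch of such a script (see pgb.sql for the real one): the standard tpcb-like
-- transaction without the pgbench_tellers / pgbench_branches updates
\set aid random(1, 100000 * :scale)
\set tid random(1, 10 * :scale)
\set bid random(1, 1 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
```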

Results:

| ptrack.map_size, MB | 0 (turned off) | 32 | 64 | 256 | 512 | 1024 |
|---------------------|----------------|----|----|-----|-----|------|
| TPS | 16900 | 16890 | 16855 | 16468 | 16490 | 16220 |

TPS fluctuates within a few percent of 16500 on the test machine, but on average the `ptrack` overhead does not exceed 1-3% for any reasonable `ptrack.map_size`. It only becomes noticeable (~3-4%) closer to a 1 GB `ptrack.map_size`, which is enough to track changes in a database of up to 1 TB without false positives.
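
For reference, the map size in these runs is controlled by the `ptrack.map_size` setting; a minimal configuration sketch, assuming the extension is loaded via `shared_preload_libraries` as its README describes (the value is in MB, and 0 disables tracking, matching the first column above):

```conf
# postgresql.conf (sketch): enable ptrack and size its change-tracking map
shared_preload_libraries = 'ptrack'
ptrack.map_size = 64    # MB; varied from 32 to 1024 in the table above, 0 = off
```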


<!-- ## Checkpoint overhead

Since the `ptrack` map is completely flushed to disk during checkpoints, the same test was performed on an HDD, but with a slightly different configuration:
```conf
synchronous_commit = off
shared_buffers = 1GB
```
and `pg_prewarm` run prior to the test. -->

## Backups speedup

To test incremental backup speed, a fresh cluster was initialized with the following DDL:

```sql
CREATE TABLE large_test (num1 bigint, num2 double precision, num3 double precision);
CREATE TABLE large_test2 (num1 bigint, num2 double precision, num3 double precision);
```

These relations were populated with approximately 2 GB of data as follows:

```sql
INSERT INTO large_test (num1, num2, num3)
SELECT s, random(), random()*142
FROM generate_series(1, 20000000) s;
```
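
Only the `large_test` statement is shown; presumably `large_test2` was populated the same way (an assumption here, since `large_test2` is the table updated below):

```sql
-- Assumed analogous population of large_test2 (not shown in the original)
INSERT INTO large_test2 (num1, num2, num3)
SELECT s, random(), random()*142
FROM generate_series(1, 20000000) s;
```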

Then a part of one relation was touched with the following query:

```sql
UPDATE large_test2 SET num3 = num3 + 1 WHERE num1 < 20000000 / 5;
```
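
The backups below were taken with `pg_probackup`. For context, here is a sketch of the one-time setup that has to precede a `PTRACK` backup; the backup directory, port, and instance name `node` mirror the log further below, while the exact paths are illustrative:

```sh
# One-time setup (sketch): create the backup catalog and register the instance
pg_probackup init -B $(pwd)/backup
pg_probackup add-instance -B $(pwd)/backup -D $PGDATA --instance=node
# ptrack must be enabled in the cluster (shared_preload_libraries = 'ptrack',
# ptrack.map_size > 0) and the extension created in the target database
psql -p 5432 -c "CREATE EXTENSION ptrack;"
# a PTRACK backup needs a parent backup, so take a FULL one first
pg_probackup backup -B $(pwd)/backup --instance=node -p 5432 -b full --no-sync --stream
```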

After modifying the data, incremental `ptrack` backups were taken with `pg_probackup`, followed by full backups for comparison. Tests show that `ptrack_backup_time / full_backup_time ~= ptrack_backup_size / full_backup_size`, i.e. if only 20% of the data was modified, then the `ptrack` backup will be roughly 5 times faster than the full backup. Thus, the overhead of building the `ptrack` map during backup is minimal. Example:
| 59 | + |
| 60 | +```log |
| 61 | +21:02:43 postgres:~/dev/ptrack_test$ time pg_probackup backup -B $(pwd)/backup --instance=node -p5432 -b ptrack --no-sync --stream |
| 62 | +INFO: Backup start, pg_probackup version: 2.3.1, instance: node, backup ID: QAA89O, backup mode: PTRACK, wal mode: STREAM, remote: false, compress-algorithm: none, compress-level: 1 |
| 63 | +INFO: Parent backup: QAA7FL |
| 64 | +INFO: PGDATA size: 2619MB |
| 65 | +INFO: Extracting pagemap of changed blocks |
| 66 | +INFO: Pagemap successfully extracted, time elapsed: 0 sec |
| 67 | +INFO: Start transferring data files |
| 68 | +INFO: Data files are transferred, time elapsed: 3s |
| 69 | +INFO: wait for pg_stop_backup() |
| 70 | +INFO: pg_stop backup() successfully executed |
| 71 | +WARNING: Backup files are not synced to disk |
| 72 | +INFO: Validating backup QAA89O |
| 73 | +INFO: Backup QAA89O data files are valid |
| 74 | +INFO: Backup QAA89O resident size: 632MB |
| 75 | +INFO: Backup QAA89O completed |
| 76 | +
|
| 77 | +real 0m11.574s |
| 78 | +user 0m1.924s |
| 79 | +sys 0m1.100s |
| 80 | +
|
| 81 | +21:20:23 postgres:~/dev/ptrack_test$ time pg_probackup backup -B $(pwd)/backup --instance=node -p5432 -b full --no-sync --stream |
| 82 | +INFO: Backup start, pg_probackup version: 2.3.1, instance: node, backup ID: QAA8A6, backup mode: FULL, wal mode: STREAM, remote: false, compress-algorithm: none, compress-level: 1 |
| 83 | +INFO: PGDATA size: 2619MB |
| 84 | +INFO: Start transferring data files |
| 85 | +INFO: Data files are transferred, time elapsed: 32s |
| 86 | +INFO: wait for pg_stop_backup() |
| 87 | +INFO: pg_stop backup() successfully executed |
| 88 | +WARNING: Backup files are not synced to disk |
| 89 | +INFO: Validating backup QAA8A6 |
| 90 | +INFO: Backup QAA8A6 data files are valid |
| 91 | +INFO: Backup QAA8A6 resident size: 2653MB |
| 92 | +INFO: Backup QAA8A6 completed |
| 93 | +
|
| 94 | +real 0m42.629s |
| 95 | +user 0m8.904s |
| 96 | +sys 0m11.960s |
| 97 | +``` |
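
In the run above, the `ptrack` backup transferred 632 MB in about 11.6 s, while the full backup transferred 2653 MB in about 42.6 s: the size ratio (~0.24) and the time ratio (~0.27) are indeed close to each other, in line with the relationship stated above.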