Fix performance issue with many duplicate ids #40

Merged
merged 1 commit on Oct 15, 2024
10 changes: 3 additions & 7 deletions crates/core/src/sync_local.rs
@@ -55,21 +55,17 @@ pub fn sync_local(db: *mut sqlite::sqlite3, _data: &str) -> Result<i64, SQLiteEr

// Query for updated objects

// QUERY PLAN
// |--SCAN buckets
// |--SEARCH b USING INDEX ps_oplog_by_opid (bucket=? AND op_id>?)
// |--SEARCH r USING INDEX ps_oplog_by_row (row_type=? AND row_id=?)
// `--USE TEMP B-TREE FOR GROUP BY
// language=SQLite
let statement = db
.prepare_v2(
"\
 -- 1. Filter oplog by the ops added but not applied yet (oplog b).
+-- SELECT DISTINCT / UNION is important for cases with many duplicate ids.
 WITH updated_rows AS (
-  SELECT b.row_type, b.row_id FROM ps_buckets AS buckets
+  SELECT DISTINCT b.row_type, b.row_id FROM ps_buckets AS buckets
     CROSS JOIN ps_oplog AS b ON b.bucket = buckets.id
       AND (b.op_id > buckets.last_applied_op)
-  UNION ALL SELECT row_type, row_id FROM ps_updated_rows
+  UNION SELECT row_type, row_id FROM ps_updated_rows
 )

-- 3. Group the objects from different buckets together into a single one (ops).
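The effect of the change can be sketched in isolation. The miniature schema below is hypothetical (not PowerSync's actual `ps_oplog`/`ps_updated_rows` tables); it only shows why `SELECT DISTINCT` plus `UNION` matters when the oplog contains many entries for the same `(row_type, row_id)` pair, compared with the old plain `SELECT` plus `UNION ALL`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE oplog(row_type TEXT, row_id TEXT);
    CREATE TABLE updated_rows(row_type TEXT, row_id TEXT);
""")

# 10,000 oplog entries that all touch the same logical row.
con.executemany("INSERT INTO oplog VALUES (?, ?)",
                [("todos", "t1")] * 10_000)
con.execute("INSERT INTO updated_rows VALUES ('todos', 't1')")

# Old shape: plain SELECT + UNION ALL keeps every duplicate, so any
# downstream GROUP BY must process all 10,001 rows.
old = con.execute("""
    SELECT row_type, row_id FROM oplog
    UNION ALL SELECT row_type, row_id FROM updated_rows
""").fetchall()

# New shape: SELECT DISTINCT + UNION collapses duplicates up front,
# leaving a single row for later stages to work on.
new = con.execute("""
    SELECT DISTINCT row_type, row_id FROM oplog
    UNION SELECT row_type, row_id FROM updated_rows
""").fetchall()

print(len(old), len(new))  # 10001 1
```

Note that `UNION` (unlike `UNION ALL`) deduplicates across both arms of the compound select, so the `SELECT DISTINCT` and the `UNION` work together: duplicates are removed both within the oplog scan and between the oplog and the updated-rows table.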