Optimize slow query that uses a high amount of temporary disk space to find relations

Resolves https://github.com/dbt-labs/dbt-postgres/issues/189

The macro postgres_get_relations in relations.sql was extremely slow and used an extremely high amount of temporary disk space on a system with a high number of schemas, tables, and dependencies between database objects (rows in pg_depend). Slow to the point of not completing within 50 minutes and using more than 160GB of disk space (at which point PostgreSQL ran out of disk space and aborted the query).

The solution here optimises the query so it runs in just under 1 second on my system. It does this by being heavily inspired by the definition of information_schema.view_table_usage, and specifically:

- Stripping out CTEs that can be optimisation blockers, often by causing CTEs to be materialised to disk (especially in older PostgreSQL, but I suspect in recent versions too in some cases).
- Removing unnecessary filtering on relkind: going via pg_rewrite (or rather, the equivalent row in pg_depend) is equivalent to that.
- Avoiding sequential scans on any table by structuring the joins/where clause to leverage indexes, especially on pg_depend.
- Removing unnecessary filtering out of system catalog tables from dependents (they are excluded by the remaining filters on referenced tables).
- Not having `select distinct ... from pg_depend` in the innards of the query, and instead having a single top-level `select distinct` - on my system this saved over 45 seconds.
- Excluding relations that depend on themselves by comparing oids rather than the names of tables and schemas. I suspect this is also more robust, because I think oids _can_ be repeated between system tables, so when querying pg_depend, filtering on classid and refclassid is required (and I think this also means indexes are better leveraged).

Comparing calls to `explain`, it reduces the largest "rows" value from 5,284,141,410,595,979 (over five quadrillion) to 219, and the actual run time from never completing within 50 minutes (because it used all of the 160GB available) to completing in ~500ms.

It also has some style/naming changes:

- Using a `distinct` at the top level rather than a group by, for clarity (performance seemed the same in my case).
- Flips the definitions of "referenced" and "dependent" in the query to match both the definitions in pg_depend and the code at https://github.com/dbt-labs/dbt-postgres/blob/05f0337d6b05c9c68617e41c0b5bca9c2a733783/dbt/adapters/postgres/impl.py#L113
- Re-orders the joins into what I think is a slightly clearer order that "flows" from views -> the linking table (pg_depend) -> the tables referenced in the views (a sketch of this shape is below).
- Lowers the abstraction/indirection levels in naming/aliases, using names closer to the PostgreSQL catalog tables - this made it easier to write and understand, and so I suspect easier to change in future (I found I had to keep in mind the PostgreSQL definitions more than the output of the query when making changes).
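For illustration, a minimal sketch of the general shape described above - not the exact macro in relations.sql, and with illustrative aliases and output column names:

```sql
-- Flows from views (dependents) -> pg_rewrite -> pg_depend (the linking
-- table) -> the tables referenced in the views, with a single top-level
-- select distinct and oid-based exclusion of self-references.
select distinct
    dependent_namespace.nspname  as dependent_schema,
    dependent_class.relname      as dependent_name,
    referenced_namespace.nspname as referenced_schema,
    referenced_class.relname     as referenced_name
from pg_class dependent_class
join pg_namespace dependent_namespace
    on dependent_namespace.oid = dependent_class.relnamespace
join pg_rewrite
    on pg_rewrite.ev_class = dependent_class.oid
join pg_depend
    -- filtering on classid/refclassid keeps the oids unambiguous across
    -- system catalogs and lets PostgreSQL use the indexes on pg_depend
    on pg_depend.classid = 'pg_rewrite'::regclass
    and pg_depend.objid = pg_rewrite.oid
    and pg_depend.refclassid = 'pg_class'::regclass
join pg_class referenced_class
    on referenced_class.oid = pg_depend.refobjid
join pg_namespace referenced_namespace
    on referenced_namespace.oid = referenced_class.relnamespace
-- exclude each rewrite rule's dependency on its own view by comparing oids
where dependent_class.oid != referenced_class.oid
-- the real macro would additionally restrict to the relevant schemas
```

The single top-level `select distinct` collapses the duplicate rows that arise because pg_depend records a dependency per referenced column, which is what makes a nested `select distinct ... from pg_depend` unnecessary.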