This isn't an error caused by this backend, but it is one that people may come across when using Django and TimescaleDB with hypertable compression, and it may also affect your roadmap plans for managing compression through this engine.
We've enabled compression on one of our hypertables via a Django migration with `RunSQL` operations. We did this to ease deployment and to make it easier to test with compression.
An unexpected consequence of this was failures in several of our pytest tests. The root cause is `django.db.backends.postgresql.operations.sql_flush()` truncating every table at the end of some tests, called by `django.test.testcases.TransactionTestCase._fixture_teardown()`.
The problem is that our (now compressed) hypertable has a foreign key relation back to one of our other tables. With compression enabled there now always seems to be at least one chunk in the hypertable, and that chunk carries the same foreign key (chunks are tables, after all). Because the chunk is not in the list of tables to truncate, Postgres refuses to run the truncate.
Whereas before, at least during our minimal unit tests, the hypertable itself seems to have contained all the data we insert in each test.
This presented as:
```
ERROR: cannot truncate a table referenced in a foreign key constraint
DETAIL: Table "_compressed_hypertable_4" references "other_table".
HINT: Truncate table "_compressed_hypertable_4" at the same time, or use TRUNCATE ... CASCADE.
```
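In SQL terms, the flush effectively does something like the following (table names other than the chunk are hypothetical). The plain list-based truncate fails because the internal chunk table isn't in the list, while a cascading truncate succeeds:

```sql
-- What sql_flush effectively runs: only tables Django knows about.
TRUNCATE "other_table", "metrics";
-- Fails: "_compressed_hypertable_4" references "other_table"
-- but is not in the TRUNCATE list.

-- What would succeed, per the HINT:
TRUNCATE "other_table", "metrics" CASCADE;
```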
Short-term, our workaround has been to:

- Stop using `django_db(transaction=True)` on some of our tests (it turns out they didn't need it anyway)
- Skip some of our older migration tests which made use of `django_test_migrations`, which also uses `TransactionTestCase` internally
Obviously, though, we want to get back to covering those migrations, and to writing new migration tests in future.
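The first workaround amounts to something like this (the test name and body are hypothetical). Without `transaction=True`, pytest-django wraps the test in a transaction that is rolled back, so no `sql_flush()`/`TRUNCATE` happens at teardown:

```python
import pytest

# Plain django_db: the test runs inside a transaction that is rolled
# back afterwards, so teardown never issues TRUNCATE statements and
# the compressed chunk's foreign key is never a problem.
@pytest.mark.django_db
def test_metric_insert():
    # ... create and query model instances here ...
    pass
```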
I think the correct long-term solution is for us to submit a PR to Django adding a way to always set `allow_cascade=True` in `_fixture_teardown()`; that would mean no changes to either the postgres backend or this backend.