downgraded schema does not always comply with the actual schema version #1172
Comments
An issue closely related to this is that switching from a branch that knows about certain tables to one that doesn't will cause a schema downgrade to happen without removing those tables. If you then switch back to the branch with the new tables, attempts to upgrade will always fail, because the tables already exist. To make the app usable again, the database file has to be deleted. Perhaps we should do that by default when downgrading.
Yeah, that'd more fully accomplish what that code is aimed at doing. Given the context of this code — we've already opened the database when we learn what the schema version is — it might be annoying to arrange that smoothly, though. It might be easier to instead ask the database what all the tables are, and delete those.
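The "ask the database what all the tables are" approach boils down to enumerating `sqlite_master` and dropping each entry. The app itself is Dart/drift, so this is just a sketch of the underlying SQLite queries using Python's stdlib `sqlite3` module; the table names and the `drop_all_tables` helper are illustrative, not the project's actual code:

```python
import sqlite3

def drop_all_tables(conn: sqlite3.Connection) -> None:
    # Enumerate every user table the database actually contains,
    # rather than only the tables the current schema knows about.
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' "
        "AND name NOT LIKE 'sqlite_%'")]
    for name in tables:
        # Quote the identifier; the names come from sqlite_master itself.
        conn.execute(f'DROP TABLE "{name}"')

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY)")
# A table only a newer schema knows about; a schema-driven drop would miss it.
conn.execute("CREATE TABLE legacy_extra (id INTEGER PRIMARY KEY)")
drop_all_tables(conn)
remaining = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
print(remaining)  # []
```

Because the list comes from the database rather than from the compiled-in schema, tables added by a newer schema version get cleaned up too.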
We previously missed tables that are not known to the schema. This becomes an issue if a new table is added at a newer schema level. When we go back to an earlier schema, that new table remains in the database, so subsequent attempts to upgrade to the later schema level that adds the table will fail, because it already exists. Testing for this is blocked until zulip#1172. Signed-off-by: Zixuan James Li <[email protected]>
Here's the implementation of `createAll`:

```dart
Future<void> createAll() async {
  for (final entity in _allSchemaEntities) {
    await create(entity);
  }
}
```

After #1248, we have historical schema objects like:

```dart
final schema = // …
for (final entity in schema.entities) {
  await m.create(entity);
}
```

It looks like the initializer for …
For this to handle v? -> v1 downgrades, we need to have access to …
Conveniently I think there's actually no use case for downgrading to schema v1 🙂 — the only downgrades that are useful to support are (a) downgrades to the latest schema we have today, or more specifically to the latest schema as of whenever we merge something to better support downgrades, or (b) to any later schema after that. That's because the downgrades (and upgrades) that happen in practice, outside of tests, are always to whatever is the latest schema version as of the code you're running. So the only reason you'd downgrade to schema v1 is if you're running an old version of the code from when that was the latest… and in that case any changes we might merge now in order to better support downgrades won't be running anyway.
We previously missed tables that are not known to the schema. This becomes an issue if a new table is added at a newer schema level. When we go back to an earlier schema, that new table remains in the database, so subsequent attempts to upgrade to the later schema level that adds the table will fail, because it already exists. Fixes: zulip#1172 Signed-off-by: Zixuan James Li <[email protected]>
We previously missed tables that are not known to the schema. This becomes an issue if a new table is added at a newer schema level. When we go back to an earlier schema, that new table remains in the database, so subsequent attempts to upgrade to the later schema level that adds the table will fail, because it already exists.

The test for this relies on some undocumented drift internals to modify the schema version it sees when running the migration.

References:
https://github.com/simolus3/drift/blob/18cede15/drift/lib/src/runtime/executor/helpers/engines.dart#L459-L495
https://github.com/simolus3/drift/blob/18cede15/drift/lib/src/sqlite3/database.dart#L198-L211
https://github.com/simolus3/sqlite3.dart/blob/4de46afd/sqlite3/lib/src/implementation/database.dart#L69-L85

Fixes: zulip#1172
Signed-off-by: Zixuan James Li <[email protected]>
We try to handle the case when a schema downgrade happens:

zulip-flutter/lib/model/database.dart, lines 104 to 115 in 28b3536

This will work in dev because if you check out and build code with an older schema version, the schema shipped with the build matches the version we are migrating to.

But we can't easily test this. That's because `m.createAll()` only has access to the latest schema, so while the version is supposedly v1, additional calls to drop the extra columns are needed.

A good step forward would be moving to step-by-step migrations. That won't fix this issue, but it offers a helpful way to access older schemas at specific versions. As this only affects a dev-only scenario, fixing it will be a low priority.
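To make the failure mode concrete: SQLite records the schema version as a bare number (drift uses the `user_version` pragma for this), so a downgrade that only rewrites that number leaves behind any tables the older schema doesn't know about. A minimal reproduction against raw SQLite, using Python's stdlib `sqlite3` purely for illustration (the table name `new_feature` is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# v2 of the app adds a new table and bumps the schema version.
conn.execute("CREATE TABLE new_feature (id INTEGER PRIMARY KEY)")
conn.execute("PRAGMA user_version = 2")

# "Downgrade": older code only rewrites the version number; it doesn't
# know about new_feature, so the table silently stays behind.
conn.execute("PRAGMA user_version = 1")

# Upgrading again now fails: the v1 -> v2 migration tries to create
# a table that already exists.
try:
    conn.execute("CREATE TABLE new_feature (id INTEGER PRIMARY KEY)")
    outcome = "ok"
except sqlite3.OperationalError as e:
    outcome = str(e)
print(outcome)  # table new_feature already exists
```

Deleting the stale tables at downgrade time (or dropping everything the database reports, as discussed above) is what prevents the later upgrade from hitting this error.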
This feature was added in https://github.com/simolus3/drift/releases/tag/drift-2.10.0, long after we initially added `drift` as a dependency. It is among the many helpful things added since 2.5.2 (as of writing, we are at 2.19.1+1, and the latest is 2.22.0).