Upgrade Azure Blob Storage SDK to v12 #2573
base: master
Conversation
There's an admin test failing in CI that isn't failing locally, so I'm going to add some debugging in the next commits to try to figure out what's happening.
FYI, it is failing for me locally too 👀 Not that it helps much... but I can possibly investigate a bit myself and see if I find anything.
If it helps:
Thanks! Might be a statefulness thing; I'll start a fresh env.
...Wouldn't it be better to include actual information in that error? The check would work the same, and the error could be logged properly before the redirect (...as it probably should be, anyway).
@samuelhwilliams It was an error due to the tests using an older Azure storage emulator; I've updated them to the official Microsoft-hosted emulator now, and they're passing fine.
@@ -0,0 +1,14 @@
// For format details, see https://aka.ms/devcontainer.json.
{
  "name": "flask-admin (Python + Azurite)",
In the future, I could add Postgres and Mongo to this dev container too, to have a single container that can run all the tests. It should be fairly easy, given tests.yaml already has the services set up; it's mostly a matter of copying that into docker-compose.yaml.
Hm, it appears I still don't have an approach for handling unimportable Azure modules that makes mypy happy. Let me know if you have any thoughts about best practice for importing optional modules in tests (and skipping tests when they aren't installed); otherwise I'll take another look Monday.
I don't think we need to support tests running without azure installed - I'd probably be fine with you removing the try/except around that test import.
Okay, I've made it so that the tests assume you've got azure-blob-storage installed. I've also made the tests devcontainer bring in Postgres and Mongo too, so I was able to get all the tests passing in my dev container environment, without additional setup.
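For reference, if optional test dependencies ever come back into the picture, pytest's built-in `importorskip` is the usual pattern for skipping a module's tests when an import is missing. A minimal sketch (not what this PR does):

```python
import pytest

# Skip every test in this module if azure-storage-blob isn't installed.
# importorskip returns the imported module, so tests can use it directly.
azure_blob = pytest.importorskip("azure.storage.blob")


def test_blob_service_client_is_exposed():
    # A trivial check that the v12 client surface is available.
    assert hasattr(azure_blob, "BlobServiceClient")
```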
self._container_name = container_name
self._connection_string = connection_string
One of the changes I made on the S3 admin side when bringing it up to date was to have `__init__` take a client instance rather than parameters that get passed to the client. Do you think we should do something similar here and accept an instance of `BlobServiceClient`, or is it still fine to just use the connection string?
Oh, I think that's nice; I personally don't typically use connection strings (this was my first time using `from_connection_string`), so that gives developers more flexibility in how they connect. I can make that change. That'd be breaking, right?
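For illustration, the constructor change could look roughly like this; the class name and attribute here are assumptions, not the final implementation:

```python
from azure.storage.blob import BlobServiceClient


class AzureFileAdmin:  # hypothetical class name, for illustration only
    def __init__(self, blob_service_client: BlobServiceClient, container_name: str):
        # Accepting a pre-built client lets callers authenticate however
        # they like: connection string, account key, OAuth token, etc.
        self._container_client = blob_service_client.get_container_client(
            container_name
        )
```

A caller who prefers connection strings can still build the client themselves with `BlobServiceClient.from_connection_string(conn_str)` and pass it in.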
It would be, but this is scheduled to go out for the v2 release where we're making a bunch of breaking changes, so I'm ok with it.
If you'd be happy to, feel free :) 🙏
I tried a larger file, no luck, but can you attach your large file and also tell me what you tried renaming it to? In theory, I should be able to replicate, given we're in a containerized environment. If I still fail, then I'll DM you next week on Discord and see if we can hop on a screen share or something.
Following up on bug bash issues:
I am still working through an issue with "rename" when connecting to a prod account using the connection string (versus using OAuth token-based connection).
Unfortunately all of the files I've ended up reproducing it with are personal ones that I'd rather not share (e.g. a passport scan) 😂 I'm willing to accept that this might be something about my setup...
@samuelhwilliams I worked a bunch on this today, with help from an Azure Blob SDK engineer, and ended up changing the rename method so that it uses an asynchronous copy behind the scenes (versus synchronous). Now the code works for me for:
I tested with files up to 5 GB in size on my prod Azure account and was able to rename them all. I do not currently have any other planned changes, so I've marked this as ready for review again.
@pamelafox really nice work - sorry it's taken a bit of time to get around to testing this out locally.
When I upload files in a sub-directory, then go back to the root, the directory shows up multiple times (seemingly once for itself and once for each item in the directory). This is with local Azurite:
Just spent a bit of time trying to test with a real Azure account, but I'm getting errors currently when trying to sign up for a storage account, so that's great 🙃
This feels close though - only spotted that one issue, really. A few other comments but nothing major.
if __name__ == "__main__":
    app.run(debug=True)
Thanks again for adding the example - useful 👍
src_blob_client = self._container_client.get_blob_client(src)
dst_blob_client = self._container_client.get_blob_client(dst)
copy_result = dst_blob_client.start_copy_from_url(src_blob_client.url)
if copy_result.get("copy_status") == "success":
Is it well known in what conditions blob storage will decide to do this synchronously vs asynchronously? Is it just based on file size, or some other information?
I'm fairly uncomfortable with the `sleep(10)` below in the async journey. It was previously `sleep(1)`, which is better, but I guess I'd prefer to avoid sleeping in a request at all.

I suppose if we're keeping the rename operation as a copy+delete we probably can't just move on immediately if the copy hasn't finished, so maybe this is an OK workaround if async is only going to happen for really large files. It might still be nice to poll for updates more frequently than every 10 seconds, though?
Good question! I was never able to get it to go down this path, even with large files. I tested up to 20 GB, and it still didn't go down the polling path. The engineer said the Blob service decides behind the scenes when to use async vs sync, and that perhaps it's always using sync for a same-account copy. I've asked if we can find that out from the Blob service team for sure.
I'm also asking if we can decrease the sleep timeout.
Update: The SDK engineer said `sleep(1)` is good, so I've changed to that.
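Putting the pieces together, here's a hedged sketch of the copy-then-delete rename with one-second polling (the helper name, timeout, and error handling are assumptions; the PR's actual code may differ):

```python
import time


def rename_blob(container_client, src: str, dst: str, timeout: float = 300) -> None:
    """Rename a blob via server-side copy + delete, polling while pending."""
    src_blob_client = container_client.get_blob_client(src)
    dst_blob_client = container_client.get_blob_client(dst)

    copy_result = dst_blob_client.start_copy_from_url(src_blob_client.url)
    if copy_result.get("copy_status") != "success":
        # The service may run the copy asynchronously; poll until it settles.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            copy_props = dst_blob_client.get_blob_properties().copy
            if copy_props.status == "success":
                break
            if copy_props.status in ("failed", "aborted"):
                raise RuntimeError(f"Copy failed: {copy_props.status_description}")
            time.sleep(1)  # per the SDK engineer's guidance, 1s is fine
        else:
            raise TimeoutError(f"Copy {src} -> {dst} timed out after {timeout}s")

    src_blob_client.delete_blob()
```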
…There seems to have been no activity here for a few weeks now. Has work on this pull request paused? 🤔
Sorry, just had some busy weeks! I will hopefully have time this week; if not, certainly next week.
Really appreciate your hard work and persistence on this @pamelafox - thank you 🙇
Co-authored-by: Samuel Williams <[email protected]>
@samuelhwilliams I'm pushing a fix for the double-directory issue. It was introduced when I tried to remedy the issue of the Date field displaying a timestamp of 0. However, from what I can see in the S3 code, it might have the same issue, and I think this has always been an issue for Azure. Removing that 1970 date would be a larger change involving BaseFileAdmin and possibly the templates, depending on how sorting works. For the record, Azure shows blank dates for directories, since directories aren't real entities. I assume I should just leave that be for now? Let me know what you think.
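For context on the double-directory fix: listing with a delimiter makes the service return each virtual directory exactly once, as a BlobPrefix, instead of once per blob under it. A sketch of the idea (not the PR's exact code; the helper name is hypothetical):

```python
from azure.storage.blob import BlobPrefix


def list_entries(container_client, path: str = ""):
    """Yield (name, is_dir) pairs; each virtual directory appears exactly once."""
    prefix = f"{path}/" if path else ""
    for item in container_client.walk_blobs(name_starts_with=prefix, delimiter="/"):
        if isinstance(item, BlobPrefix):
            yield item.name.rstrip("/"), True  # virtual directory, listed once
        else:
            yield item.name, False  # regular blob
```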
I think it's fine to leave the timestamp issue for directories for now, agreed. That can be addressed separately 👍
Okay, then the only pending question is whether we can get clarity on when/whether the async copy path gets used. I've sent an inquiry to the Blob storage team, but I don't know if I'll get a response. I've filed the date-for-directories issue here:
Fixes #2566
This PR upgrades the azure-storage-blob SDK from v2 to v12, which involved a lot of interface changes. I followed the migration guide at https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/migration_guide.md and was able to get all the previous functionality working, at least according to the tests.
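To give a flavor of the interface change, the basic client surface moved roughly as below; `conn_str` is a placeholder, and each half assumes the corresponding package version is installed (the two can't coexist in one environment):

```python
conn_str = "..."  # placeholder: a real storage or Azurite connection string

# azure-storage-blob v2 (before): service object, container passed per call
from azure.storage.blob import BlockBlobService

service = BlockBlobService(connection_string=conn_str)
service.create_blob_from_bytes("mycontainer", "hello.txt", b"hi")

# azure-storage-blob v12 (after): hierarchy of service/container/blob clients
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("mycontainer")
container.upload_blob("hello.txt", b"hi", overwrite=True)
```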
For ease of testing, I added a simple example app and a devcontainer.azure.json which brings in the Azurite local emulator. That means you can open this repo inside a Codespace or Dev Container with that configuration, and Azure Blob Storage will be running for you.
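The example app boils down to something like the following sketch; the import path, constructor signature, and names are assumptions based on the discussion above, not the merged API:

```python
from flask import Flask
from flask_admin import Admin
from flask_admin.contrib.fileadmin.azure import AzureFileAdmin  # assumed path

app = Flask(__name__)
app.config["SECRET_KEY"] = "dev"  # Flask-Admin's forms need a secret key

admin = Admin(app, name="Azure Blob Storage example")
admin.add_view(
    AzureFileAdmin(
        container_name="fileadmin-tests",  # hypothetical container name
        connection_string="...",  # placeholder: your Azurite connection string
        name="Files",
    )
)

if __name__ == "__main__":
    app.run(debug=True)
```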