Auto-backup to coco (feature) #72
Comments
Currently the best option is to use docker to back up the volume. I like the idea of having an option through the web client where an admin user can control backup intervals, restores, etc. An uber-coco file could have many issues: if users have large datasets, the file can become too large to read into memory. Lazy loading could be implemented, but I feel that would be too much work when simpler alternatives exist. Before a method can be determined, we first need to decide which features are key, since this will be a big feature update.
Any insight from the community would be nice.
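A rough sketch of the docker workaround mentioned above, archiving the MongoDB volume to a tar file on the host. The volume name ("mongodb_data") and the use of an alpine helper container are assumptions about the deployment, not something this project defines:

```python
# Sketch: archive a named docker volume into backup.tar on the host.
import os
import subprocess

def backup_mongo_volume(volume_name="mongodb_data", out_dir=None):
    """Copy the contents of a named docker volume into backup.tar under out_dir."""
    out_dir = os.path.abspath(out_dir or os.getcwd())  # docker needs absolute host paths
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{volume_name}:/data",    # the volume holding mongo's /data/db
            "-v", f"{out_dir}:/backup",      # host directory that receives the archive
            "alpine",
            "tar", "cf", "/backup/backup.tar", "-C", "/data", ".",
        ],
        check=True,
    )

if __name__ == "__main__":
    backup_mongo_volume()
```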
If made available via an API method, it can be used in the UI and via automation tools. A toggle option makes sense for including images. I think a zip file containing a coco.json per dataset (and optionally, any included images) makes sense. It should compress well (at least for the coco json files), and the more it looks like a standard format (COCO), the easier it is to use outside of this tool (as in training networks).

Also, if the backup/restore happens via COCO format files, it means that backup/restore and import/export are very close to the same process. This could also get rid of the need for the "INITIALIZE_FROM_FILE" option, if a user could just construct their own "backup" zip file with images and one or more coco.json files to bootstrap the server.

Note: my perspective is subjective, based on a workflow where I don't want to maintain a long-running mongo instance and I plan to bring up / tear down the annotator on an as-needed basis on my desktop. Once I'm done annotating, I want to export the annotated datasets to a cloud bucket in COCO format and stop the annotator tool.
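As a minimal sketch of that backup artifact: a zip with one coco.json per dataset and, optionally, that dataset's images. The folder layout ("&lt;dataset&gt;/coco.json", "&lt;dataset&gt;/images/") and the write_backup() helper are hypothetical, not the annotator's actual API:

```python
import json
import zipfile
from pathlib import Path

def write_backup(datasets, out_path="backup.zip", include_images=False):
    """datasets: iterable of (name, coco_dict, image_dir) tuples."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, coco_dict, image_dir in datasets:
            # One standard COCO file per dataset keeps the archive usable outside the tool.
            zf.writestr(f"{name}/coco.json", json.dumps(coco_dict))
            if include_images and image_dir:
                for img in Path(image_dir).iterdir():
                    if img.is_file():
                        zf.write(img, arcname=f"{name}/images/{img.name}")
```

Restoring would then just iterate over the archive and import each coco.json the same way a normal COCO import does, which is why backup/restore and import/export end up being nearly the same code path.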
For your use case you can disable user authentication. I say "can" because I haven't tested it extensively.
I will be your guinea pig for that feature :)
Seems to work as expected; thanks.
Feature Request
It would be useful if the app could perform an automated backup on an env-configurable interval.
Ideally, the format could just be coco.json, one per dataset. They could possibly all be combined into a single uber-coco.json (array) so that all the important data could be restored from a single file, or alternatively just allow multiple coco.json files to be imported together from a folder or something.
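A hedged sketch of what that behaviour might look like: dump every dataset to coco.json on a timer configured through the environment. BACKUP_INTERVAL_SECONDS, BACKUP_DIR, and export_dataset_to_coco() are hypothetical names, not existing settings of this project:

```python
import json
import os
import time
from pathlib import Path

BACKUP_INTERVAL = int(os.environ.get("BACKUP_INTERVAL_SECONDS", "3600"))
BACKUP_DIR = Path(os.environ.get("BACKUP_DIR", "/backups"))

def export_dataset_to_coco(dataset):
    """Placeholder: return the dataset's annotations as a COCO-style dict."""
    raise NotImplementedError

def run_backup_loop(list_datasets):
    """list_datasets: callable returning the datasets to back up on each pass."""
    while True:
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        for ds in list_datasets():
            coco = export_dataset_to_coco(ds)
            # One coco.json per dataset; these could instead be merged into a
            # single uber-coco.json array if that turns out to be preferable.
            (BACKUP_DIR / f"{ds.name}.coco.json").write_text(json.dumps(coco))
        time.sleep(BACKUP_INTERVAL)
```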