[Enhancement] Implement Concurrency to Optimize Processing Time for Large File Operations #33
Hello, devs! I frequently use LST-AI to extract WMH for large-scale cohort studies, processing 500+ files every 3 months. Currently, LST-AI lacks built-in support for batch processing. I've developed a custom script using `ProcessPoolExecutor` to enable parallel CPU processing. It would be fantastic to integrate this feature into LST-AI. Thoughts?

Also, a `compile-stats` feature could be added to LST-AI to create a single `.csv` file compiling all the segmented volumes, subject-wise.
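For context, a minimal sketch of what such a `ProcessPoolExecutor` batch wrapper could look like. This is not the author's actual script; the `lst` command name, its `--t1`/`--flair`/`--output` flags, and the `cohort/` file layout are assumptions to be adapted to the installed LST-AI CLI.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path


def run_lst_ai(subject_dir: Path, out_root: Path) -> str:
    """Segment one subject; expects t1.nii.gz and flair.nii.gz in subject_dir (assumed layout)."""
    out_dir = out_root / subject_dir.name
    out_dir.mkdir(parents=True, exist_ok=True)
    cmd = [
        "lst",                                        # assumed CLI entry point; adjust to your install
        "--t1", str(subject_dir / "t1.nii.gz"),
        "--flair", str(subject_dir / "flair.nii.gz"),
        "--output", str(out_dir),
    ]
    subprocess.run(cmd, check=True)
    return subject_dir.name


if __name__ == "__main__":
    subjects = sorted(p for p in Path("cohort").iterdir() if p.is_dir())
    out_root = Path("derivatives/lst-ai")
    # One segmentation per worker process; tune max_workers to available CPU cores and RAM.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(run_lst_ai, s, out_root): s for s in subjects}
        for fut in as_completed(futures):
            print(f"finished {fut.result()}")
```

Because each segmentation is an independent external process, process-based parallelism scales across subjects without any changes to LST-AI itself.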
Comments
Hi @pradhanhitesh, it's great to hear that you're using LST-AI frequently! 😊 You're absolutely right: LST-AI was designed to work with individual images and doesn't currently support batch processing. However, you can specify the number of threads to use for registration. If you'd like, feel free to create a new branch with the integrated batch-processing update and the stats file.

Since we also handle large datasets, I created a repository for processing BIDS-structured databases (https://github.com/twiltgen/LST-AI_BIDS). It assumes a BIDS-compliant structure, so it won't work if the database isn't set up that way. If you're interested, feel free to check it out; it might overlap with what you've done.

As for the segmented volumes, the latest LST-AI updates introduced some new features, including generation of image-wise CSV files with lesion statistics. It's possible this overlaps with what you've compiled. We really appreciate your contributions and would love to see your proposed updates for LST-AI. 🙌
Thanks for the reply, @twiltgen! I noticed similar functionality in the […]
Sounds great, we'll have a look at the changes when you've created the branch. In my LST-AI_BIDS repo there is also a script called "collect_volumes.py" which gathers the lesion data and generates a single CSV file. If you want (and if it is easier for you), you can simply integrate your updates regarding the stats in there.
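To illustrate the idea behind such a collection step, here is a hedged sketch. It is not the actual collect_volumes.py from LST-AI_BIDS; the `derivatives/lst-ai/<subject>/lesion_stats.csv` paths and column layout are assumptions.

```python
from pathlib import Path

import pandas as pd

frames = []
for stats_file in sorted(Path("derivatives/lst-ai").glob("*/lesion_stats.csv")):
    df = pd.read_csv(stats_file)
    df.insert(0, "subject", stats_file.parent.name)  # tag each row with its subject ID
    frames.append(df)

# Stack all per-subject tables into one cohort-level CSV.
pd.concat(frames, ignore_index=True).to_csv("lesion_stats_all_subjects.csv", index=False)
```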