
Add CUDA support when loading local ONNX model #249

Merged: 1 commit into onnx:main on Dec 13, 2024

Conversation

@jiafatom (Contributor) commented on Dec 13, 2024:

Tested on a clean conda environment with the llm-oga-cuda dependency; this works for evaluating a local ONNX model:

lemonade -i cuda-fpmixed_14 oga-load --dtype fp16 --device cuda oga-bench
lemonade -i cuda-fpmixed_14 oga-load --dtype fp16 --device cuda accuracy-mmlu --tests management
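For anyone reproducing the test, a minimal sketch of the full workflow follows. The environment setup steps are assumptions inferred from the llm-oga-cuda dependency named above (the exact package extra and install command may differ); the two lemonade invocations are copied verbatim from the comment, with cuda-fpmixed_14 being the local ONNX model checkpoint under test.

```bash
# Assumed setup: a fresh conda environment, as described in the comment above.
conda create -n oga-cuda python=3.10 -y
conda activate oga-cuda

# Assumed install command; the PR only names the dependency group llm-oga-cuda,
# so the exact extra name here is a guess.
pip install "turnkeyml[llm-oga-cuda]"

# Load the local ONNX model on CUDA in fp16 and run the OGA benchmark
# (copied from the tested commands above).
lemonade -i cuda-fpmixed_14 oga-load --dtype fp16 --device cuda oga-bench

# Same load, followed by the MMLU accuracy test on the "management" subject.
lemonade -i cuda-fpmixed_14 oga-load --dtype fp16 --device cuda accuracy-mmlu --tests management
```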

@jiafatom force-pushed the add_cuda branch 2 times, most recently from c8e1eaa to f368f01, on December 13, 2024 at 18:45.
@ramkrishna2910 (Collaborator) commented:

This currently supports local model execution. Documentation will be added in a separate PR once the end-to-end OGA pipeline for int4 and fp16 is also tested.

@ramkrishna2910 merged commit 8c46f6b into onnx:main on Dec 13, 2024; 8 checks passed.
@jiafatom deleted the add_cuda branch on December 13, 2024 at 20:52.