
devops: add intel oneapi dockerfile #5068

Merged

merged 2 commits into ggml-org:master on Jan 23, 2024

Conversation

@ngxson (Collaborator) commented on Jan 22, 2024

After some testing, I found that this configuration gives the best performance on Intel CPUs.

See this issue for more details: #5067
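
For reference, here is a minimal sketch of what a oneAPI-based build of llama.cpp can look like in a Dockerfile. The base image tag, package list, and flag values below are illustrative assumptions, not the exact contents of this PR:

```Dockerfile
# Hypothetical sketch of an Intel oneAPI build of llama.cpp (not the exact file from this PR).
FROM intel/oneapi-basekit:2024.0.1-devel-ubuntu22.04 AS build

# setvars.sh expects bash, so switch the build shell.
SHELL ["/bin/bash", "-c"]

RUN apt-get update && apt-get install -y git cmake

WORKDIR /app
COPY . .

# Build with the Intel compilers and MKL as the BLAS backend.
# Intel10_64lp selects MKL (LP64, threaded) via CMake's FindBLAS.
RUN source /opt/intel/oneapi/setvars.sh && \
    cmake -B build -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx \
          -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=Intel10_64lp && \
    cmake --build build --config Release

ENTRYPOINT ["/app/build/bin/main"]
```

Pinning a specific basekit tag keeps the toolchain reproducible, and `setvars.sh` has to be sourced in the same `RUN` step as the `cmake` invocation so the Intel compilers and MKL libraries are on the path when CMake probes for them.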

@ggerganov merged commit 2bed4aa into ggml-org:master on Jan 23, 2024
jordankanter pushed a commit to jordankanter/llama.cpp that referenced this pull request Feb 3, 2024
hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024
@winstxnhdw

Hey, just out of interest, have you benchmarked against CTranslate2, which also uses Intel MKL? Do you happen to know if llama.cpp can finally beat CTranslate2 at Intel CPU inference?
