Name: Bug (model use)
About: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
Labels: bug-unconfirmed, model evaluation
Assignees:

Thanks for taking the time to fill out this bug report! This issue template is intended for bug reports where the model evaluation results (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation. If you encountered the issue while using an external UI (e.g. ollama), please reproduce your issue using one of the examples/binaries in this repository. The llama-completion binary can be used for simple and reproducible model inference.
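As a sketch, a minimal reproduction with the llama-completion binary might look like the following; the model path and prompt are placeholders to be replaced with the ones that trigger your issue:

```shell
# Hypothetical reproduction command; adjust model path and prompt to your case.
# -m selects the GGUF model file, -p supplies the prompt.
./llama-completion -m ./models/my-model-Q4_K_M.gguf -p "The capital of France is"
```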

Which version of our software are you running? (use --version to get a version string)

Which operating systems do you know to be affected?

Which GGML backends do you know to be affected?

Which CPUs/GPUs are you using?

Which model(s) at which quantization were you using when encountering the bug? If you downloaded a GGUF file from Hugging Face, please provide a link.

Please give us a summary of the problem and tell us how to reproduce it. If you can narrow the bug down to specific hardware, compile flags, or command line arguments, we would very much appreciate that information.
If possible, please try to reproduce the issue using llama-completion with -fit off. If you can only reproduce the issue with -fit on, please provide logs both with and without --verbose.
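The two runs described above could look like this (model path and prompt are placeholders):

```shell
# First, try to reproduce with fitting disabled:
./llama-completion -m ./models/my-model.gguf -p "..." -fit off

# If the bug only appears with fitting enabled, capture logs both
# with and without --verbose:
./llama-completion -m ./models/my-model.gguf -p "..." -fit on
./llama-completion -m ./models/my-model.gguf -p "..." -fit on --verbose
```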

If the bug was not present on an earlier version: when did it start appearing? If possible, please do a git bisect and identify the exact commit that introduced the bug.
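A bisect session, run from a llama.cpp checkout, follows this pattern (the commit hash is a placeholder for your last known-good version):

```shell
git bisect start
git bisect bad              # the current commit exhibits the bug
git bisect good 1234abcd    # last commit known to work
# For each commit git checks out: rebuild, test, then mark it with
#   git bisect good    or    git bisect bad
# Repeat until git reports the first bad commit, then clean up:
git bisect reset
```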

Please copy and paste any relevant log output, including the command that you entered and any generated text. For very long logs (thousands of lines), preferably upload them as files instead. On Linux you can redirect console output into a file by appending > llama.log 2>&1 to your command.
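The redirection pattern works for any command; here a printf group stands in for the actual llama-completion invocation:

```shell
# Both stdout and stderr end up in llama.log; substitute your real
# llama-completion command for the braced group.
{ printf 'stdout line\n'; printf 'stderr line\n' >&2; } > llama.log 2>&1
cat llama.log
```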