Name and Version
❯ ./bin/llama-cli --version
version: 4978 (5dec47dc)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
Other (Please specify in the next section)
Command line
./build/bin/llama-gguf ~/projects/ggify/models/smol.gguf r
Problem description & steps to reproduce
The problem I'm facing is with one of the examples: when I try to read a GGUF model with llama-gguf, the data-check step fails while verifying the first element of the first tensor in the model. The interesting part is that this line https://github.com/ggml-org/llama.cpp/blob/master/examples/gguf/gguf.cpp#L219 expects each element of the tensor to equal 100 plus the tensor index, which for the first tensor in the model means it should be filled entirely with the value 100.
gguf_ex_read_1: reading tensor 0 data
gguf_ex_read_1: tensor[0]: n_dims = 2, ne = (2048, 49152, 1, 1), name = token_embd.weight, data = 0x7e2026c001b0
token_embd.weight data[:10] : 0.000000 -0.000000 -0.000000 -0.000000 0.000000 0.000000 -0.000000 0.000000 0.000000 0.000000
gguf_ex_read_1: tensor[0], data[0]: found 0.000000, expected 100.000000
/home/tomdol/projects/llama.cpp/examples/gguf/gguf.cpp:261: GGML_ASSERT(gguf_ex_read_1(fname, check_data) && "failed to read gguf file") failed
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
[1] 55546 IOT instruction (core dumped) ./bin/llama-gguf ~/projects/ggify/models/smol.gguf r
I've been trying to figure out the reasoning behind this check, but it seems it has been there since this file was first added in #2398.
The error can be avoided by passing the n param to the llama-gguf binary; this feature was added in #6582, although that PR does not mention this particular issue as the reason for the new param.
First Bad Commit
No response
Relevant log output