Misc. bug: Data check in examples/gguf #12617

Open
tomdol opened this issue Mar 27, 2025 · 0 comments
tomdol commented Mar 27, 2025

Name and Version

❯ ./bin/llama-cli --version                             
version: 4978 (5dec47dc)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

Other (Please specify in the next section)

Command line

./build/bin/llama-gguf ~/projects/ggify/models/smol.gguf r

Problem description & steps to reproduce

The problem I'm facing is with one of the examples: when I try to read a GGUF model with llama-gguf, the data check step fails with an error while verifying the first element of the first tensor in the model.

gguf_ex_read_1: reading tensor 0 data
gguf_ex_read_1: tensor[0]: n_dims = 2, ne = (2048, 49152, 1, 1), name = token_embd.weight, data = 0x7e2026c001b0
token_embd.weight data[:10] : 0.000000 -0.000000 -0.000000 -0.000000 0.000000 0.000000 -0.000000 0.000000 0.000000 0.000000 

gguf_ex_read_1: tensor[0], data[0]: found 0.000000, expected 100.000000
/home/tomdol/projects/llama.cpp/examples/gguf/gguf.cpp:261: GGML_ASSERT(gguf_ex_read_1(fname, check_data) && "failed to read gguf file") failed
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
[1]    55546 IOT instruction (core dumped)  ./bin/llama-gguf ~/projects/ggify/models/smol.gguf r

The interesting part is that this line https://github.com/ggml-org/llama.cpp/blob/master/examples/gguf/gguf.cpp#L219 expects each element of the tensor to be equal to 100 plus the tensor index, which for the first tensor in the model (index 0) means it should be filled entirely with the value 100.
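To illustrate the mismatch, here is a minimal sketch of that kind of round-trip check. The function names and structure are illustrative, not the actual gguf.cpp code; the only assumption taken from the source is that element j of tensor i is expected to equal 100 + i, which real model weights (such as token_embd.weight above) will never satisfy.

```cpp
#include <cstdio>
#include <vector>

// Illustrative: the expected pattern the example's writer would produce,
// where tensor i is filled entirely with the value 100 + i.
static std::vector<float> make_expected_tensor(int tensor_idx, size_t n) {
    return std::vector<float>(n, 100.0f + (float) tensor_idx);
}

// Illustrative: the kind of verification described in the issue. For a
// GGUF file containing real weights, the very first element already fails.
static bool check_tensor_data(const float * data, int tensor_idx, size_t n) {
    for (size_t j = 0; j < n; ++j) {
        const float expected = 100.0f + (float) tensor_idx;
        if (data[j] != expected) {
            fprintf(stderr, "tensor[%d], data[%zu]: found %f, expected %f\n",
                    tensor_idx, j, data[j], expected);
            return false;
        }
    }
    return true;
}
```

This check only makes sense for files produced by the example's own writer, not for arbitrary models passed on the command line.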

I've been trying to figure out the reasoning behind this check, but it seems it has been there since the file was first added in #2398.

The error can be avoided by passing the n param to the llama-gguf binary, a feature added in #6582, although that PR does not mention this particular issue as the reason for the new param.

First Bad Commit

No response

Relevant log output

@nickhuang99 nickhuang99 marked this as a duplicate of #12647 Mar 30, 2025