Name and Version
$ ./llama-cli --version
version: 4959 (53af4db)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
Test code
Command line
$ ./llama-gguf /data/models/./QwQ-32B/QwQ-32B-BF16.gguf r
Problem description & steps to reproduce
$ ./llama-gguf /data/models/./QwQ-32B/QwQ-32B-BF16.gguf r
...
gguf_ex_read_1: reading tensor 0 data
gguf_ex_read_1: tensor[0]: n_dims = 2, ne = (5120, 152064, 1, 1), name = token_embd.weight, data = 0x7f414ef641b0
token_embd.weight data[:10] : 0.016814 -0.043757 0.002697 -0.007346 0.018828 -0.040095 0.018827 -0.010635 0.018583 0.022978
gguf_ex_read_1: tensor[0], data[0]: found 0.016814, expected 100.000000
/home/nick/Downloads/llama.cpp/examples/gguf/gguf.cpp:261: GGML_ASSERT(gguf_ex_read_1(fname, check_data) && "failed to read gguf file") failed
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
Aborted (core dumped)
Checking the source code, I don't understand the logic of the data check below. Why does if (data[j] != 100 + i) indicate invalid data?
examples/gguf/gguf.cpp:261:
// check data
if (check_data) {
    const float * data = (const float *) cur->data;
    for (int j = 0; j < ggml_nelements(cur); ++j) {
        if (data[j] != 100 + i) {
            fprintf(stderr, "%s: tensor[%d], data[%d]: found %f, expected %f\n", __func__, i, j, data[j], float(100 + i));
            gguf_free(ctx);
            return false;
        }
    }
}
First Bad Commit
No response
Relevant log output