Support importing GGUF files #1187
If GGUF contains the model graph information, then we can use burn-import's ONNX facility. In burn-import, we convert the ONNX graph to an IR (intermediate representation) (see this doc). So it would be possible to convert the model graph to IR and generate source code + weights. If GGUF contains only weights, we can go the burn-import PyTorch route, where we only load the weights.
From my brief research, the GGUF format contains metadata + tensor weights. This aligns with the burn-import PyTorch route and not burn-import/ONNX. It means the model needs to be constructed in Burn first, and the weights are then loaded into it. Here is one Rust lib for parsing GGUF files: https://github.com/Jimexist/gguf
GGUF spec: ggml-org/ggml#302
Parser in Rust: https://github.com/Jimexist/gguf
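For orientation, here is a rough Rust sketch of the on-disk layout the spec above describes. The struct and field names are illustrative only and do not come from any existing crate:

```rust
// Illustrative sketch of the GGUF on-disk layout per the spec linked above.
// Names are made up for readability; this is not an existing parser's API.

/// Fixed header at the start of every GGUF file.
struct GgufHeader {
    magic: u32,             // the bytes "GGUF"
    version: u32,           // format version
    tensor_count: u64,      // number of tensor-info entries
    metadata_kv_count: u64, // number of metadata key/value pairs
}

/// One metadata key/value pair, e.g. "llama.embedding_length" -> 4096.
struct GgufMetadataKv {
    key: String,     // length-prefixed UTF-8 string on disk
    value_type: u32, // discriminant: u8/i8/.../f32/bool/string/array
    value: Vec<u8>,  // raw bytes, interpreted according to value_type
}

/// One tensor descriptor; the tensor data itself follows all descriptors,
/// padded to the file's alignment (general.alignment, 32 bytes by default).
struct GgufTensorInfo {
    name: String,         // one of the standardized tensor names from the spec
    n_dimensions: u32,
    dimensions: Vec<u64>,
    ggml_type: u32,       // per-tensor type id: F32, F16, Q4_0, Q8_0, ...
    offset: u64,          // byte offset into the tensor-data section
}
```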
Hi, it has been about a year since this was last updated. Since then, pre-existing models on HF usually come in GGUF format when quantised, or Safetensors format when not. I think it would be useful for people new to the space to understand how Burn can be leveraged with these formats, as they seem to be the most common starting points.

Specifically, I am interested in importing quantised GGUF models, as I couldn't see much in the docs. Candle is okay for this, but its support for quantised models is a little spotty, and those are the models most accessible to people with fewer resources. I saw in #1323 that some pieces were added for reconstructing config files, but I am wondering about simply ingesting a GGUF model and using it with Burn directly, similar to the import options for ONNX or PyTorch, without people needing to reverse engineer what GGUF is doing under the hood with little guidance. GGUF's single-file format seems like an ideal target for Burn's use case to me, and the format is much more universally accessible, similar to ONNX on paper.

I am happy to contribute docs; I just need a bit of direction to start testing with the current capabilities, or an indication that it is even possible.

Edit: ref to the Candle issue I am seeing with Mistral-Nemo quantizations:
I'll be happy to assist if you decide to submit a PR. We can leverage Candle's reader, similar to the PyTorch pt reader, and use the existing burn-import infrastructure. It should be somewhat easier now that PyTorch pt import works.
I actually made a start last night. Example names from the GGUF spec that could be mapped: tok_embd, etc.

I am very new to Rust, so it's taking me a bit of time to figure out how to transform the format that Content creates: rather than treating things directly as a u32 or String, everything is stored wrapped, e.g. as U32(VALUE). Being able to transform those and then map them to the right places to create Burn modules etc. is taking a bit of time and effort.
When I say "stored", this is an example of the K, V structure it uses:

Rather than, say:
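The examples themselves did not survive in this thread, but here is a minimal sketch of the kind of tagged representation being described; the names are hypothetical, not the actual crate's API:

```rust
// Hypothetical sketch: GGUF metadata values are self-describing, so a parser
// typically returns a tagged enum that you unwrap, rather than a bare u32 or String.

#[derive(Debug)]
enum MetadataValue {
    U32(u32),
    F32(f32),
    Str(String),
    // ...the other GGUF value types (u8, i64, bool, arrays, ...)
}

fn as_u32(value: &MetadataValue) -> Option<u32> {
    match value {
        MetadataValue::U32(v) => Some(*v),
        _ => None,
    }
}

fn main() {
    // e.g. "llama.embedding_length" comes back as U32(4096), not a plain 4096
    let dim = MetadataValue::U32(4096);
    assert_eq!(as_u32(&dim), Some(4096));
}
```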
I actually haven't used Burn at all until now, I only learnt the details of the transformer architecture after posting my original comment two days ago, and I started with Rust like 3-4 weeks ago, so I will try my best, but I apologise in advance if I can't see it through.

It's partly my motivation for commenting: as someone new to the whole space, all I really see is GGUF, and I would love to make it more accessible to those of us who want to get started. From what I can tell, Burn is well placed for doing that. I also love that you have built-in WGPU support. My ambition for learning Rust to do this comes from years ago when I did a lot of embedded work; I have a load of RPi Picos and various other devices lying around, so I love that you have the demo for them, and I think your approach is fantastic for my goals.

Most of my career until now has been more DevOps oriented, and even then more on the infrastructure and networking side than development, so I am out of my depth but trying. I can figure out most things on my own, but any general pointers are always welcome; I will try and figure it out.
I have been making a lot more progress than I anticipated, working this out slowly. I have one quick question @antimora: in the recent Burn release notes it says there is now mixed precision support in matmul. It could still be my naivety, but does that mean matmul now supports int-type tensors? I am currently looking at this: https://github.com/tracel-ai/burn/blob/main/crates/burn-import/src/burn/node/matmul.rs
Probably something else in Burn's core and not related to ONNX import.
@leflambeur Nice progress. Let us know when you start dealing with the weights.
I am ignoring the ONNX import for now; right now I am testing against the LlamaConfig in https://github.com/tracel-ai/models/blob/main/llama-burn/src/llama.rs. E.g. in my code:
where Candle's quantized reader:
gets me the metadata:
The reason I ask about matmul is that, with quantization support, you could match the quantization scheme ahead of time from the metadata and ensure you have the right type of tensor (float/int).

When you load the weights completely, rather than just the metadata I am reading above (which only gets a description of the weights), you get a convenient map of each layer, including the matmuls to use. I figured that if I had those, I could use the modules described in the burn part of burn-import to map directly without going through llama-burn, but it's baby steps atm, so I am starting with llama-burn and seeing how I go.
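The snippets above did not survive in the thread, so here is a rough sketch, under my own assumptions, of the kind of metadata read being described, using Candle's GGUF reader (candle_core::quantized::gguf_file) and the standard llama.* keys from the spec. The mapping onto llama-burn's LlamaConfig is left as a comment, since those field names may differ:

```rust
// Sketch only: read GGUF metadata with Candle and pull out the hyperparameters
// needed to build a model skeleton. Indexing panics if a key is missing; real
// code should handle that.

use candle_core::quantized::gguf_file;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut file = std::fs::File::open("model.gguf")?;
    let content = gguf_file::Content::read(&mut file)?;

    // Standard llama.* metadata keys from the GGUF spec.
    let n_layers = content.metadata["llama.block_count"].to_u32()?;
    let d_model = content.metadata["llama.embedding_length"].to_u32()?;
    let n_heads = content.metadata["llama.attention.head_count"].to_u32()?;
    let rms_eps = content.metadata["llama.attention.layer_norm_rms_epsilon"].to_f32()?;

    println!("layers={n_layers} d_model={d_model} heads={n_heads} rms_eps={rms_eps}");

    // These values would then populate the corresponding fields on llama-burn's
    // LlamaConfig before the tensor weights themselves are loaded.
    Ok(())
}
```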
Example output from loading the full weights using candle:
If you just use the metadata, you get a lot of the tensor/weight layouts without needing the whole thing, but it's not as granular - which may not be an issue with a consistent format:
Loading the metadata is way quicker than loading the full weights; you just have to abstract/make assumptions about the weights and their behaviours. I am not sure whether it would be more future-proof to use the more granular weight loading, or to just use the metadata and make assumptions, as it's quicker. So for now I am testing that llama-burn works - slowly getting there.
Regarding the mixed precision matmul, that is only for floating-point types. Quantization is still very much a WIP. I've only added simple per-tensor schemes for affine and symmetric (scale only) quantization for int8, so we can load quantized tensors, but the operations all perform dequantize -> float op -> quantize. I am not entirely familiar with the GGUF format, but Q8_0 is int8 symmetric quantization with quantization parameters (scale only) per block of weights (not per-tensor, so not the same).
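To make that difference concrete, here is a minimal sketch of block-wise Q8_0 dequantization, assuming ggml's layout of one scale per 32-weight block (the scale is stored as f16 on disk and widened to f32 here for simplicity):

```rust
// Sketch of Q8_0: symmetric (scale-only) int8 quantization where each block of
// 32 weights carries its own scale, unlike a single per-tensor scale.

const QK8_0: usize = 32;

#[allow(non_camel_case_types)]
struct BlockQ8_0 {
    d: f32,          // per-block scale (f16 on disk)
    qs: [i8; QK8_0], // 32 quantized weights
}

fn dequantize_q8_0(blocks: &[BlockQ8_0]) -> Vec<f32> {
    let mut out = Vec::with_capacity(blocks.len() * QK8_0);
    for block in blocks {
        for &q in &block.qs {
            out.push(block.d * q as f32); // x = d * q (no zero-point)
        }
    }
    out
}

fn main() {
    let block = BlockQ8_0 { d: 0.05, qs: [10; QK8_0] };
    let weights = dequantize_q8_0(&[block]);
    assert!((weights[0] - 0.5).abs() < 1e-6);
    println!("first dequantized weight = {}", weights[0]);
}
```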
@laggui I am probably wrong but if I am reading correctly: https://github.com/ggerganov/ggml/blob/master/docs/gguf.md
I think this means that the majority, but not all, of the tensors are described by this scheme (i.e. Q4). But then each tensor is expected to have its own marked scheme:
Or
Like you see above. So loading the general file_type metadata lets you make an assumption about the majority of tensors; however, the specific scheme for each tensor is described in the tensor info of the metadata, or by loading the full weights (again, not an expert and making some educated guesses).

I haven't had the chance to look at this since last Thursday, as I have had other things going on, but I am making some progress today. I have been working out of another project, but I will isolate this work out of it and try to get it somewhere more public.
You can visualize this on HF hub. It really depends on the model; some have a lot of different precisions mixed in. For example, this Q4_0 model has some parameters in F16, F32, Q8_0 and Q4_0. I guess the majority of the tensors will have the lowest precision described by the file type, but there could be other higher-precision tensors in the file.
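A quick way to see that mix for a given file is to walk the per-tensor type info. A rough sketch using Candle's reader (field names as of current candle_core, so treat them as an assumption):

```rust
// Sketch: count how many tensors use each ggml type in a GGUF file. The
// file-level file_type only names the dominant scheme; each tensor carries
// its own type in its tensor info.

use candle_core::quantized::gguf_file;
use std::collections::HashMap;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut file = std::fs::File::open("model.gguf")?;
    let content = gguf_file::Content::read(&mut file)?;

    let mut counts: HashMap<String, usize> = HashMap::new();
    for info in content.tensor_infos.values() {
        *counts.entry(format!("{:?}", info.ggml_dtype)).or_default() += 1;
    }

    // e.g. tensors split across Q4_0, Q8_0, F16 and F32 for a "Q4_0" file
    println!("{counts:?}");
    Ok(())
}
```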
Yeah, I saw it on HF hub as well. The main thing I want to validate is: is my approach of using the metadata to generate/build a model in Burn (i.e. use the standardisation of GGUF), using standard building blocks and importing the pretrained weights at the same time, a sensible approach that avoids needing to define the whole model in advance?

My idea was that you could use the generics of GGUF to infer a consistent skeleton and fill in the details from the model metadata on specific weights. The user experience I am aiming for is roughly:
Output:
where the model file is built by burn-import with no extra input needed, really.

I think this is a sensible-ish approach, and I am working to test the basic idea with the llama-burn model you made, doing the top level without touching individual tensors. But I am sure I will get some details wrong, as I think I am trying to fit a squarish peg into a round hole in this early stage of testing with llama-burn, which seems to be tuned to a few specific models. The next step would be doing it procedurally from just the model file and tensor info, without using llama-burn.
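For what it's worth, here is a small sketch of that "skeleton from metadata" idea using plain Burn building blocks. The dims and the helper are my own assumptions for illustration (e.g. llama.embedding_length driving a bias-free projection), not an existing burn-import code path:

```rust
// Sketch: build a standard Burn module from dimensions read out of GGUF
// metadata, instead of hard-coding them in a model definition.
// Assumes the burn crate with the "ndarray" feature enabled.

use burn::nn::{Linear, LinearConfig};
use burn::tensor::backend::Backend;

/// Hypothetical helper: one attention projection sized from metadata.
fn projection_from_metadata<B: Backend>(d_model: usize, device: &B::Device) -> Linear<B> {
    // llama-style projections are typically bias-free.
    LinearConfig::new(d_model, d_model).with_bias(false).init(device)
}

fn main() {
    type B = burn::backend::NdArray;
    let device = Default::default();

    // e.g. the value of "llama.embedding_length" read from the GGUF metadata
    let d_model = 4096;
    let _wq: Linear<B> = projection_from_metadata(d_model, &device);
    println!("built a {d_model}x{d_model} projection from GGUF metadata");
}
```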
GGUF contains hyperparameter information to build a model. However, someone still needs to do the initial work of defining the model structure. You might be able to infer a model structure from the hyperparameter names, but there is no standard, so whatever logic you write will be specific to the exporter.
Yeah, that makes sense. I will keep testing and feeding back. I think it's a challenge at the moment because it's not easy for people to figure this out without diving really deep, and that makes the space less accessible despite the consumer awareness of AI atm. I think there are some big opportunities here to make things more usable, so it's worth the effort to try and lower the barrier to entry, even if I am stumbling around in the dark a bit.
@leflambeur yes, I agree. I suspect it would be a common operation for many. If you have something working, please share your knowledge (in a comment or a book section); that's how many people discover what's possible. I personally did not encounter a use case for this myself, which is why I didn't contribute; mine is currently limited to dealing with PyTorch files (pt). So I made the tools and docs generic and submitted them to Burn's repo, and it became very useful for others.
I apologize if this seems too far-fetched, but it seemed in line with how ONNX generation works.