33 comments

  • Philpax 2 hours ago

    I regret that the projection models ended up separate, and I too would have preferred for them to be in a single file. I'm not entirely sure why that ended up happening, but it very much runs counter to the single-file ethos I had in mind when I designed GGUF.

    Hoping that someone will shepherd the cause of merging the two; I think I'm too out of the loop to do it this time around :-)

    • intothemild 26 minutes ago

      Well, considering that MTP support is being developed right now, there was a conversation there that floated the idea of splitting the MTP model out of the main GGUF, like with Mmproj. That was rejected.

      Which I'm happy for. So given that decision, I don't think it's unreasonable to think that they might be open to including Mmproj files in the GGUF.

      Only issue I can think of is: which one? BF16, F16? etc.

      • Philpax 14 minutes ago

        Quantiser's choice, IMO. They're best-placed to decide what compromise to make for their particular model.

  • uyzstvqs an hour ago

    GGML & GGUF have been extremely important to the open-source ML/AI space. Projects like llama.cpp, whisper.cpp, and stable-diffusion.cpp tend to just work perfectly, across a whole bunch of different platforms and hardware backends.

    • doublerabbit an hour ago

      While llama.cpp is a Meta creation, and much as I loathe Meta with a passion, I do admit it's the easiest of the bunch. Compile this, give it a brain - run. And you get a webui and API.

      • packetlost an hour ago

        llama.cpp doesn't really have much to do with Meta, other than being originally developed for the first Llama model that Meta released. The creator doesn't work for Meta and didn't when it was written.

        • doublerabbit 14 minutes ago

          well, that solves all my problems. thanks.

  • Sharlin 3 hours ago

    > The really neat thing about GGUF is that it's just one file. Compare this to a typical safetensors repo on huggingface, where there's a pile of necessary JSON files scattered around [...]

    Funny, to me AI models have "always" been single files, as that's been the norm in the local image gen world. Safetensors files let you stuff all kinds of things inside them too, no GGUF needed for that. Though given that the text encoders of modern models are multi-gigabyte language models themselves, nobody includes redundant copies of those in every checkpoint.
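
    For reference, that "stuffing" lives in the safetensors JSON header at the front of the file; a rough sketch of pulling it out with nothing but the standard library (the path is just a placeholder):

      import json
      import struct

      # A safetensors file starts with an 8-byte little-endian length,
      # then that many bytes of JSON: per-tensor dtype/shape/offsets plus
      # an optional "__metadata__" dict of free-form string key/values.
      def read_safetensors_header(path):
          with open(path, "rb") as f:
              (header_len,) = struct.unpack("<Q", f.read(8))
              header = json.loads(f.read(header_len))
          metadata = header.pop("__metadata__", {})
          return metadata, header  # header maps tensor name -> info

      # meta, tensors = read_safetensors_header("model.safetensors")  # placeholder path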

    • Philpax 2 hours ago

      Single-file deployments were an intentional design goal on my part. While most image models were/are single-file, LLM safetensors (at least at the time) were not, and I wanted to ensure that we enforced that at a structural level. I also didn't want to mandate a JSON reader for executors (e.g. llama.cpp), which the ST approach would have required. The bigger issue at the time, if I recall, was that ST couldn't support the new-and-upcoming quants that GGML had, and having our own file format offered us flexibility that ST couldn't.
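
      To make the "no JSON reader" point concrete: per the spec, the GGUF preamble is just a few fixed-width fields (magic, version, tensor count, metadata KV count), so a sketch of peeking at one needs nothing but struct (the path is a placeholder):

        import struct

        # GGUF is a flat binary format: fixed header, then typed key/value
        # metadata and tensor info -- no JSON parsing required of executors.
        def read_gguf_preamble(path):
            with open(path, "rb") as f:
                magic = f.read(4)
                assert magic == b"GGUF", "not a GGUF file"
                (version,) = struct.unpack("<I", f.read(4))
                (n_tensors,) = struct.unpack("<Q", f.read(8))
                (n_kv,) = struct.unpack("<Q", f.read(8))
            return version, n_tensors, n_kv

        # print(read_gguf_preamble("model.gguf"))  # placeholder path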

  • theapadayo 3 hours ago

    IMO the biggest thing still missing is an actual way to define the model architecture outside of it being hard-coded into the current build. It doesn't need 1:1 performance parity with the fully supported models. Having proper, vendor-validated support on day 1 is the difference between people thinking a model is amazing vs horrible. See the recent Gemma vs Qwen releases.

    Not sure what the solution is, other than writing a DSL to describe the model graphs, which you then embed in the GGUF. The other fallback is to just read the PyTorch modules from the official model releases and convert them to GGML ops somehow.
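
    Purely to illustrate the idea (nothing like this exists in the spec today; the keys and op names below are made up), the DSL could be as modest as a few metadata strings that an executor lowers to GGML ops:

      # Hypothetical sketch only: a per-layer graph stored as ordinary
      # GGUF metadata strings. Keys and op names are invented here.
      hypothetical_graph_kv = {
          "graph.version": "1",
          "graph.layer": "\n".join([
              "h = rms_norm(x)             # pre-attention norm",
              "h = attention(h, kv_cache)  # uses attn q/k/v/o tensors",
              "x = add(x, h)               # residual",
              "h = rms_norm(x)",
              "h = ffn_silu(h)             # gate/up/down tensors",
              "x = add(x, h)",
          ]),
      }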

    • Philpax 2 hours ago

      Yeah, I intentionally left space for the computation graph to be included in the GGUF spec in the hopes that this would be picked up by someone. I would have loved to have it in the first version, but I was prioritising getting the MVP spec out and implemented.

      I'd still love to see this, but it would need a cheerleader very familiar with the current state of the GGML IR.

    • LoganDark 2 hours ago

      I feel like the computation graph could be embedded alongside the weights, similarly to how ONNX works. Then you expose some common interfaces that accept some common parameters, and additional custom ones can practically be extensions, sort of like how Wayland works. So you can support not only transformer-ish models like LLaMa, but also RNN-ish models like RWKV, and also multimodal models and more. Not sure how this would be implemented in practice, but it sounds like a cool idea. I just worry that if the computation graph is baked into the model file, then improvements to the architecture or optimizations that don't require changes to the weights won't be applied to existing files without a conversion.

  • badsectoracula 4 hours ago

    > not to be confused with the somewhat baffling llama_chat_apply_template exposed in the libllama API, which hardcodes a handful of chat formats directly in C++

    As someone who is tinkering with a desktop-based inference app in FLTK[0], i wish this used the actual Jinja2 template parser llama.cpp uses (or there was another C function that did that since AFAICT for "proper" parsing you need to be able to pass a bunch of data to the template so it knows if you, e.g., do tool calling). Currently i'm using this adhocky function, but i guess i'll either write a Jinja2 interpreter or copy/paste the one from llama.cpp's code (depending on how i feel at the time :-P).
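
    For comparison, in a language that already has a Jinja2 library the rendering side is only a few lines; the template below is a generic ChatML-style stand-in, not any particular model's real one (those also need extras like raise_exception and tojson):

      from jinja2 import Environment  # pip install jinja2

      # A stand-in ChatML-style template; real ones ship in the GGUF
      # metadata under tokenizer.chat_template.
      TEMPLATE = (
          "{% for m in messages %}"
          "<|im_start|>{{ m['role'] }}\n{{ m['content'] }}<|im_end|>\n"
          "{% endfor %}"
          "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
      )

      prompt = Environment().from_string(TEMPLATE).render(
          messages=[{"role": "user", "content": "Hi there!"}],
          add_generation_prompt=True,
      )
      print(prompt)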

    But yeah, GGUF's "all-in-one" approach is very convenient. And i agree that it feels odd to have the projection models as separate files - i remember when i first downloaded a vision-capable model, i just grabbed whatever GGUF looked appropriate, then llama.cpp told me it couldn't do vision with that model and it took me a bit to realize that i had to download an extra file. Literally my thought once i did was "wasn't GGUF supposed to contain everything?" :-P

    [0] https://i.imgur.com/GiTBE1j.png

    • bitwize 3 hours ago

      Oh my God I freaking love your app. The 90s Linux desktop vibes hit like a hammer. FLTK FTW!

  • amelius 3 hours ago

    > <|turn>user Hi there!<turn|><|turn>model Hi there, how can I help you today <turn|>

    Good lord, they managed to invent a format that is even less readable than XML.

    • aktuel 2 hours ago

      It is not supposed to be readable by humans; you rarely have to look at it. It is designed not to get confused with the actual content, which can be any random text from the internet. For that, you have to use markers that won't show up in ordinary text.

      • stavros 2 hours ago

        Are these markers actual text? Or does the model "see" one token per marker?

        • badsectoracula 2 hours ago

          AFAIK[0] they are (usually) so-called "special" tokens - e.g. <|turn> is token id 105 in the vocabulary Gemma4 uses. When you are tokenizing text, you can either tokenize "<|turn>" as a single token (105) or as a series of other tokens (236820, 236909, 887 and 236813 for the "<", "|", "turn" and ">" pieces), with the idea being that the model treats 105 as the actual separator but can still see "<|turn>" as part of the content.

          Though using text-based templates makes this a bit tricky regardless. AFAIK llama.cpp tries to avoid this confusion by having their Jinja2 implementation use a custom string type that carries metadata about where characters "come from", so that it can distinguish between special tokens (which would be part of the Jinja2 template) and content (which would be either generated text or text given by the user) - i.e. even if a string is "<|turn>", the metadata tells whether it should be tokenized as a special token or as a series of non-special tokens.

          [0] i might be wrong, this is based on my understanding by messing around with the llama.cpp code, but i never implemented an LLM inference or training engine
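
          A toy sketch of the two tokenization paths (the vocabulary here is made up, just reusing the ids above for illustration):

            # Made-up vocabulary: one special token plus ordinary pieces.
            SPECIAL = {"<|turn>": 105}
            ORDINARY = {"<": 236820, "|": 236909, "turn": 887, ">": 236813}

            def tokenize(text, parse_special):
                ids = []
                while text:
                    if parse_special and text.startswith("<|turn>"):
                        ids.append(SPECIAL["<|turn>"])
                        text = text[len("<|turn>"):]
                        continue
                    piece = next(p for p in ORDINARY if text.startswith(p))
                    ids.append(ORDINARY[piece])
                    text = text[len(piece):]
                return ids

            print(tokenize("<|turn>", parse_special=True))   # [105]
            print(tokenize("<|turn>", parse_special=False))  # [236820, 236909, 887, 236813]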

        • bashbjorn 2 hours ago

          The model sees one token per marker - but the overlap with ingested text is still relevant, because the tokenizer also ingests regular text, where it will turn a literal "<|turn>" into the same token.

          For this reason, it can be tricky to use a model to help you work on that same model's runtime. This really feels like an accidental problem, but I'm not sure it's solvable without abandoning the text representation altogether (and the jinja abstraction along with it).

          • lifis an hour ago

            Surely one can just escape the input, no? Seems astonishing if someone isn't doing that

            • maxbond an hour ago

              The escape algorithm here is very simple: you remove special tokens from the runtime tokenizer's vocabulary so that it's forced to encode them as multiple non-special tokens. (That doesn't actually mean the LLM won't treat them as special tokens though, so this isn't sufficient on its own.)
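
              As a sketch of the assembly side (tokenize() and its parse_special flag are stand-ins for whatever the runtime actually exposes), it looks roughly like this:

                # Template text is trusted, so its markers may become special
                # ids; user content is encoded with special parsing disabled,
                # so a pasted "<|turn>" stays a run of ordinary tokens.
                def build_turn(tokenize, role, user_text):
                    ids = []
                    ids += tokenize(f"<|turn>{role} ", parse_special=True)
                    ids += tokenize(user_text, parse_special=False)  # "escaped"
                    ids += tokenize("<turn|>", parse_special=True)
                    return ids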

  • ge96 5 hours ago

    Nice, I recently pulled down TheBloke's Mistral 7B to try out. I have a 4070.

    • bashbjorn 4 hours ago

      I love Mistral, but that model is... not the best. Maybe try out Gemma 4 e4b; it's a similar size to Mistral 7B and should run great on your 4070 ("E4B" is slightly misleading naming).

      • ge96 4 hours ago

        Thanks for the tip, what do you use Gemma 4 e4b for?

        • redanddead 4 hours ago

          some say it’s a miniaturized gemini model

          it’s good at writing, coding, decently intelligent

          you can try it on nvidia nim

    • mixtureoftakes 4 hours ago

      7B Mistral is quite outdated. On a 12GB 4070 you can run qwen 3.5 9b q4km or qwen 3.6 35b; the latter will be a lot smarter but also a lot slower due to RAM offload.

      Try both in LM Studio; they really are surprisingly capable.

      • ge96 4 hours ago

        I have 80GB of RAM but it's slow - capped by the i9 CPU, or maybe my specific Asus mobo just sucks. I think it only runs at 2400MHz despite being DDR4.

        Tried all the stuff: BIOS settings, voltages.

    • ganelonhb 5 hours ago

      I have a 2070 and can confirm it works amazingly fast.

      I love TheBloke; I wish he still made stuff.

      • bashbjorn 4 hours ago

        Yeah, the TheBloke era of local LLMs was good times. TBF Unsloth are doing a fantastic job of publishing quants of the major models quickly - they just don't have nearly the volume of "weird" models that TheBloke did.

      • ge96 5 hours ago

        What do you use it for? I'm still trying to get into agents; I barely use Copilot, only at work when I have to.

        I didn't want to get personal with an LLM unless it was local, so that's why I was setting this up, but yeah. So far, research is mainly what I was looking at.

  • kenreidwilson 5 hours ago

    >Published May 18, 2026

    hmmm...

    • bashbjorn 5 hours ago

      whoops, my bad. Just a typo in the markdown. Fixed :)

      • 1024bits a few seconds ago

        What're you using to render this blog? Any chance there could be an RSS feed?