VibeThinker-1.5B

(github.com)

62 points | by tamnd 4 days ago

15 comments

  • Lapel2742 3 days ago

    I'm pretty sure this is some kind of scientific achievement that I do not fully understand, but the real-world use cases for this model seem to be very limited.

    I gave it two tasks: "Create a new and original story in 500 words" and "Write a Python console game". Both of those resulted in an endless loop with the model repeating itself.

    Honestly, given that a 1B Granite nano model has only minor problems (word count) with such tasks, and given that VibeThinker is announced as a programming model, it's disappointing to see a 1.6B model fail multiple times.

    • impossiblefork 3 days ago

      It's specifically trained on maths. I don't think they care at all about general instruction following or stories.

      • Lapel2742 2 days ago

        >> [...] achieving state-of-the-art performance in mathematical and *coding tasks* [...]

        And it fails at one of the simplest coding tasks where a Granite model at nearly half the size has no problems.

        It's probably an important discovery but seemingly only usable in an academic context.

        • impossiblefork 2 days ago

          The way I read the paper is that they've only tuned it on that maths dataset, so it's not made to have any coding ability.

  • reaslonik 3 days ago

    While it's impressive that the output isn't completely undecipherable, my real-world queries for a Spring Boot project with the most popular libraries don't compare so favorably to their benchmarks against Qwen3 32B, which I also run regularly (a 4-bit quantized version of it). Explanation tasks break completely, and often.

    I used their recommended temperature, top_k, top_p, and other settings.

    • viraptor 3 days ago

      Breaks as in the think block contains nonsense, or the output just finishes? I've had some thinking weirdness which doesn't seem to affect the final answer much.

      Overall it still seems extremely good for its size and I wouldn't expect anything below 30B to behave like that. I mean, it flies with 100 tok/sec even on a 1650 :D

      • reaslonik 3 days ago

        Breaks as in it contains words that grammatically work but don't make sense, mistakes the symbol | for a person, points back to things that didn't exist in the request, etc. I use templates like this for explanation questions:

        from

        ```
        excerpt of text or code from some application or site
        ```

        What is the meaning of excerpt?

        It just doesn't seem to work at a usable level. Coding questions get code that runs, but it almost always misses so many things that finding out what it missed and fixing it takes a lot more time than writing the code by hand.

        >Overall it still seems extremely good for its size and I wouldn't expect anything below 30B to behave like that. I mean, it flies with 100 tok/sec even on a 1650 :D

        For its size, absolutely. I've not seen 1.5B models that even form sentences right most of the time, so this is miles ahead of most small models, just not at the levels the benchmarks would have you believe.

        • viraptor 3 days ago

          Interesting, I haven't seen it actually return nonsense yet. (Some incorrect things and thinking loops, but always coherent.) I'm running it on the latest llama.cpp with the bf16 GGUF. What are you using?

          • reaslonik 3 days ago

            I'm running Hugging Face's .safetensors with vLLM with as few startup parameters as possible. I thought it must not be sending the temperature right, but after setting temp to something else I got Chinese, so it should be sending it.

            Overall, if you're memory-constrained, it's probably still worth trying to fiddle around with it if you can get it to work. Speed-wise, if you've got the memory, a 5090 can get ~50-100 tok/s for a single query with 32B-AWQ, and way more if you run something parallel like open-webui.
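
            For anyone wanting to poke at the same setup, a minimal sketch of the vLLM side (the model ID and sampling values here are assumptions based on the repo's recommendations, double-check the model card):

            ```
            # Minimal sketch: VibeThinker via vLLM's offline Python API.
            # Model ID and sampling values are assumptions, not confirmed
            # recommended settings.
            from vllm import LLM, SamplingParams

            llm = LLM(model="WeiboAI/VibeThinker-1.5B")  # assumed HF repo id
            params = SamplingParams(
                temperature=0.6,  # assumed recommended value
                top_p=0.95,       # assumed recommended value
                max_tokens=4096,
            )
            outputs = llm.generate(["Write a Python console game."], params)
            print(outputs[0].outputs[0].text)
            ```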

  • DeathArrow 3 days ago

    Many interesting open weights models are coming from China.

  • Alifatisk 3 days ago

    Those benchmarks look incredible. Like, almost too good to be true; what am I missing?

    Is this hosted online somewhere so I can try it out?

    • Balinares 2 days ago

      I don't know how the coding benchmarks are computed, but this model, on its own and outside of agentic loops, definitely doesn't compare to e.g. Qwen3 Coder. I might still try that for fun, just to see how it performs given a feedback loop.

      On math questions, though, besides a marked tendency toward rambling thinking, it's just plain implausibly good for a 1.5B model. This is probably just rote learning, though. Otherwise this might well be a breakthrough.

    • viraptor 3 days ago

      It's so tiny you can download and run it locally on CPU with llama.cpp. It seems weirdly good at some simple python questions. Definitely better than I'd expect from any model of that size.
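
      If you want to try that locally, a minimal sketch via the llama-cpp-python bindings (the GGUF filename is hypothetical; download or convert one first):

      ```
      # Minimal sketch: running a local GGUF on CPU with llama-cpp-python.
      # The model path is hypothetical.
      from llama_cpp import Llama

      llm = Llama(model_path="VibeThinker-1.5B-bf16.gguf", n_ctx=8192)
      out = llm(
          "Explain what a Python generator is.",
          max_tokens=512,
          temperature=0.6,  # assumed; use the model card's recommended settings
      )
      print(out["choices"][0]["text"])
      ```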

  • impossiblefork 2 days ago

    I don't quite understand the MGPO.

    So during the final stage they try to ensure the model doesn't get the right answer every time, but only 50% of the time, so as to avoid killing all variability -- very sensible. Then they compute a measure of this, take the negative exponential of that measure, and scale the advantage by it.

    So a question matters in proportion to the variability of the answers. Isn't this more curriculum learning stuff than actually suppressing things that don't vary enough?

    Basically focusing on questions that are still hard instead of trying to push the probability of problems it's often already able to solve to 99.99%?

    Also very reasonable, but this isn't how they describe it. Instead, from their description I would think they're sort of forcing entropy to be high somehow.
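
    To make that concrete, a minimal sketch of the reweighting as I read it (the exact distance measure and the sharpness constant are my assumptions, not the paper's formula):

    ```
    import math

    def entropy_guided_weight(pass_rate: float, k: float = 4.0) -> float:
        """Weight a question by how close its pass rate is to 50%,
        the maximum-entropy point. k is a hypothetical sharpness knob."""
        deviation = abs(pass_rate - 0.5)  # the "measure" above
        return math.exp(-k * deviation)   # negative exponential of it

    def reweight_advantages(advantages, pass_rate):
        """Scale every rollout's advantage for a question by its weight, so
        questions the model already solves ~100% of the time barely move
        the update, while ~50/50 questions dominate it."""
        w = entropy_guided_weight(pass_rate)
        return [w * a for a in advantages]

    # A question solved 98% of the time gets ~15% of the weight of a 50/50 one:
    print(entropy_guided_weight(0.98))  # ~0.147
    print(entropy_guided_weight(0.50))  # 1.0
    ```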

    I think the way I'd have titled it would be something like "Dynamic curricula to preserve model entropy".