Hmm.. this might make it feasible to build something like a command line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the extra computation for "parsing", and it could get pretty bad if everyone started doing that.
But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.
E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`
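A very rough sketch of what that could look like, assuming a hypothetical `toolcli` wrapper; `query_tiny_model` and the `toolcli-core` binary are made-up stand-ins for whatever runtime actually serves the bundled model:

```python
#!/usr/bin/env python3
"""Sketch: a CLI that falls back to a tiny local model for natural-language
arguments. Nothing here is Needle's real API; all names are placeholders."""
import shlex
import subprocess
import sys

KNOWN_FLAGS = {"--help", "--gadd", "--gdel", "--list"}  # hypothetical toolcli flags

def query_tiny_model(request: str) -> str:
    """Placeholder: ask the bundled model to translate `request` into flags,
    e.g. "add tom to teamfutz group" -> "--gadd teamfutz tom"."""
    raise NotImplementedError("wire this up to the on-device model runtime")

def main() -> None:
    args = sys.argv[1:]
    if args and args[0] in KNOWN_FLAGS:
        # Looks like a normal invocation; skip the model entirely.
        subprocess.run(["toolcli-core", *args], check=True)
        return
    # Otherwise treat the whole argv as natural language and translate it.
    translated = query_tiny_model(" ".join(args))
    print(f"running: toolcli {translated}", file=sys.stderr)
    subprocess.run(["toolcli-core", *shlex.split(translated)], check=True)

if __name__ == "__main__":
    main()
```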
So Needle is trained for INT4; what you see in the playground is INT4, only 14 MB. Same challenge though.
Oh gotcha. Fixed my comment.
Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run this on a little VPS somewhere!
Should be quick and easy with WebGPU, too.
That's an even better idea, I bet this could run in Transformers.js.
Good idea. Could you make that?
Thanks, yeah, the problem is just handling scale; we don't have the infra ready to go, but anyone can do that. It's easy for people to run on their laptops straight up. Will try the VPS route.
Deployed it to a huggingface space: https://huggingface.co/spaces/benoitfavre/needle-playground
You can check the very simple docker file there.
Here's the Dockerfile, it's delightfully simple https://huggingface.co/spaces/benoitfavre/needle-playground/...
Thanks!
Alternatively, record a video that showcases it.
Ok, will do that now!
I know we all think of bad things when we hear "short form video", but short demos can do a LOT for any project: they show the user how it's used, what it looks like, what it solves, etc., all in anywhere from 15 seconds to a couple of minutes. It doesn't need to be ultra fancy, a screen recording is fine. :)
Since there is no GUI here, I feel like a simple plaintext chat transcript would be both 100x smaller and 100x easier to read. (Not to mention accessible.)
Sure, and we've seen those terminal screen recorders that give you back a replayable demo; that could work too.
Yes, a demo might be a good idea.
Lovely to see the push for tiny models.
I have been building for small (20B or less) models for quite a while. Highly focused/constrained agents, many of them running together in some kind of task orchestration mode to achieve what feels like one "agent".
I build (privacy first) desktop apps this way and I want to get into mobile apps with similar ideas but tiny models.
Give it a go and let us know!
That M versus B is way too subtle. 0.026B is my suggestion
Not noticing the difference between an M and B (as a software engineer, no less) seems more like a you problem
Pardon me, do I know you?
Why are you attacking me?
I don't think they're attacking you, but suggesting you read more carefully. The information provided is correct and clear, but you need to let go of your own biases when consuming it.
I personally prefer the M to the B. I guess as an engineer, noticing the units comes pretty naturally.
I read it as 26B as well.
Haha, we were trying not to be too hand-wavy :)
Awesome! I just tried to set an alarm and add some groceries to the shopping list, and it outperformed Siri.
Music to our ears!
Dumb questions, from someone not in the field...
What is a distilled model?
Why doesn't Google do this (to make their models smaller)?
Seems like you could make a competitor to Gemini?
No question is stupid!
1. Distilled means taking the intelligence of a big model and compacting it into a tiny model.
2. Google already does so with FunctionGemma, but Needle argues that better performance can be achieved with a 10x smaller model using our technologies.
Model distillation is lossy compression of a big model to produce a smaller model.
The smaller model requires less space on disk, less video memory, and less compute (cheaper hardware).
The downside is that the distilled model performs worse on the same benchmarks than the original model.
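For intuition, the textbook version (Hinton-style soft-target distillation) boils down to a loss like the sketch below. Note this is illustrative only: a closed API like Gemini doesn't expose its logits, so in practice you'd instead train the small model on text the big model generates.

```python
# Minimal sketch of classic logit distillation (soft targets + hard labels).
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft part: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard part: ordinary cross-entropy against the ground-truth tokens.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```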
Can this be converted to onnx or otherwise be used in a browser?
Of all the models that do tool calls, the one thing I'm confused about is why you picked the worst? Or maybe it's only bad at agentic work but fine for one-shot tool calls?
Gemini is pretty solid for 1-shot tool call and affordable as well.
Hi, I'd love to know where you got that impression on one-shot tool calling; was there a concrete evaluation carried out? I'm pretty new to this and was a bit lost when trying to compare models on different capabilities.
Looks like you need to open up access to https://huggingface.co/Cactus-Compute/datasets/needle-tokeni... - I get this error when trying to run the steps in your README:
> Repository Not Found for url: https://huggingface.co/api/datasets/Cactus-Compute/needle-tokenizer/revision/main.
Fixed now, apologies!
Thanks, works now: https://gisthost.github.io/?4ff455792651fe755265b467800f47f3
Can it summarize text it fetches?
Come to think of it, this could be a nice model to have as the first pass in a more complex agent system, where Needle hands off the results of a tool call to a larger model.
I will defiantly play around with this!
> I will defiantly play around with this!
Are you Calvin or Hobbes?
The codebase is fully open, feel free to play around!
I don't really understand what this is for... there is a lot of ML-researcher talk on the GH page about the model architecture, but how should I use it?
Is it a replacement for Kimi 2.7, Claude Haiku, or Gemini Flash 3.1 lite? A conversational LLM for situations that are mostly tool-calling, like coding and conversational AI?
It is for building agentic capabilities into very small devices like phones, glasses, watches and more. Does that make sense?
I'm having trouble understanding why someone would want that? Like, what are the product use-cases of such a thing? I understand why people want that for coding agents--although the jury is still very much out on whether those are terribly useful--but I cannot fathom what someone might want an agent to do on a cell phone? Is there some user-facing activity on a phone that's similar to coding with a tight, objectively measurable feedback loop (analogous to dev/compile/test)?
EDIT: more of you cretins have downvoted than have replied.. so.. show your cards.
You can think of “phone use” for instance, what Siri is supposed to be.
I mean.. Siri basically works? When I'm driving I say "Hey Siri, find me a gas station along my route", and it does. Or I say "Hey Siri, call Joe Bob mobile" and it does. Or I say "Hey Siri, play me a podcast". This is kind of a solved problem already? When I'm driving this is literally as complicated of a distraction as I want--I'm not going to be dictating emails or texts. When I'm not driving, the touchscreen keyboard (as shitty an interface as that is) is 100x better than voiced natural language commands.
It does just barely work now after they spent billions, and they may still fall back to cloud LLMs for a significant number of things. This is a way that everyone can get that on the actual Apple Watch or local phone for any application they build.
I get that, but I still can't imagine what it might be for. TBH I don't have a smart watch, because I can't think of anything I'd want one for--my mechanical watch keeps time to within a few seconds per month and the lume lasts all night. I don't know what making it "smarter" would do for me, it does an A+ job of being a watch. What are the things that "everyone" can build with this that actually matter? Like, what is the differentiator?
EDIT: To be clear, the monoculture of phone operating systems sucks. If this somehow enables more entrants into that space then I'm all for it. However, I don't see this in particular being the deciding factor... For example, the reason I don't run a 3rd party operating system on my phone isn't because it's lacking Siri or "OK Google" (if these things went away tomorrow I'd barely notice), it's because it would be a pain in the ass to make it be a phone.
This is pretty much exactly what I want for Home Assistant. I yell out, "Computer! Lights!" and it toggles the lamp in the room on or off. (I mean I can do that now, I think, but probably with a much larger model.)
I haven't played with it yet, but does it ever return anything other than a tool call? What are the failure modes? What if it doesn't understand the request? Does it ever say it can't find a tool? Does it get confused if there are two similar (but different) tools? Can it chain tools together (e.g. one tool to look up an address and another to get directions to the address)?
I mean, I plan on downloading the model later tonight and finding out for myself, but since I'm stuck at work right now, I figured I'd ask anyway...
How many lights are there?
… four. There are four lights.
Let me know what you think!
Can this be a Siri-like core? "Set me a timer", "tell me what's the weather", etc. Here's the transcribed text and an available list of tools for the model to call, then voice the output.
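Roughly the loop being described, as a sketch; every function below is a hypothetical placeholder for a real STT engine, the tiny tool-calling model, and a TTS engine:

```python
# Hypothetical voice-assistant loop: STT -> tiny tool-calling model -> TTS.
TOOLS = [
    {"name": "set_timer", "parameters": {"minutes": "int"}},
    {"name": "get_weather", "parameters": {"location": "str"}},
]

def speech_to_text(audio: bytes) -> str:
    raise NotImplementedError("e.g. an on-device Whisper variant")

def tool_call_from_text(text: str, tools: list) -> dict:
    raise NotImplementedError("the 26M model: text + tool schemas in, one call out")

def text_to_speech(text: str) -> None:
    raise NotImplementedError("any local TTS engine")

def handle_utterance(audio: bytes) -> None:
    text = speech_to_text(audio)             # "set me a timer for ten minutes"
    call = tool_call_from_text(text, TOOLS)  # {"name": "set_timer", "arguments": {"minutes": 10}}
    if call["name"] == "set_timer":
        text_to_speech(f"Timer set for {call['arguments']['minutes']} minutes.")
    elif call["name"] == "get_weather":
        text_to_speech("Checking the weather...")
```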
That was the goal!
Is the idea here to add function calling to models that don't have it, or even improve function calling (qwen quirks)?
So it’s a tiny model capable of function calling that could run locally on cheap devices.
I find this stuff super fascinating and been thinking about it myself. Maybe one could bootstrap tiny models on a rather 'pure' procedural data set. Neglecting [0] of course...
[0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Sounds interesting, would love to see it too!
This would be amazing for home assistant.
On my list to check out tomorrow :D
Thanks, keep me posted!
Sounds interesting.
Got a bunch of errors trying to run it on CPU though. Very likely connected to me running this in a container (unpriv LXC), but I figured that for 26M, CPU would suffice.
https://pastebin.com/PYZJKTNk
It better, considering its purpose is to run on devices with no GPU.
hey nice work, is it possible to release the datasets?
We have so far released the dataset generation code
Why pick Gemini? It's probably the worst tool calling model of the major labs.
Cheaper APIs
This is some excellent work Henry! Very excited to try it out.
Thanks, let me know how it goes!
Does the model have capacity for in-context learning? If we give it examples of patterns, can it follow them?
Not yet, but it's in the works!
This is really cool. Any plans to release the dataset?
We include the dataset pipeline in the codebase so far; we might release the dataset.
This is very cool. I'm going to try to carve out some time to try building this into my MOO system ( https://codeberg.org/timbran/moor / https://timbran.org/moor.html ) as an alternative command-parser front end.
Man, I love that there are still people writing new MOO servers in 2026. Any game out there already running on mooR?
Many people tease that they will, and start... but then kinda stop. But mostly just been building my own bespoke thing on my own bespoke platform, and kinda running out of steam because I need to make $$ instead.
Thanks, let us know how it goes!
What is the use case for this?
Deploying AI on tiny devices like watches, earphones, glasses etc.
Ok, but why? What is the use case?
I don't think this is limited to tiny devices. It can also be used in apps on generic computers, because it's so small anything can run it reasonably quickly.
For example, I'm thinking this could be helpful if you have a complicated build and test infrastructure: fine-tune this model on that infrastructure, and then people can say more generic things like "build and run this library's tests" rather than issuing the exact commands or going to Claude, GHCP, etc.
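The fine-tuning data for something like that could be as simple as natural-language-to-command pairs; the format and commands below are purely illustrative, not Needle's actual schema:

```python
# Illustrative only: the kind of NL -> command pairs you might fine-tune on
# for a build/test assistant in a hypothetical Bazel-based repo.
EXAMPLES = [
    {"input": "build and run this library's tests",
     "output": "bazel test //libs/foo/..."},
    {"input": "do a clean release build",
     "output": "bazel clean && bazel build -c opt //..."},
    {"input": "run only the integration tests",
     "output": "bazel test //tests/integration/..."},
]
```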
I source old, defective high-end radios with timeless designs from brands like Grundig or Braun, and replace the original hardware with a Raspberry Pi while using the original audio parts to build custom smart speakers. Reliable hotword detection and voice command recognition have been a persistent challenge over the years, but whisper and other small models have helped enormously. At the moment I have ollama running on my server with qwen 9b which works fine but a 26M that could be deployed on the pi itself would be amazing.
Sounds cool, play with it and let us know what you think!
FYI, distilling Gemini is explicitly against the ToS:
"You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."
Yeah I think Google should shove that somewhere. They effectively distilled all the internet's knowledge into these models...without asking & without permission
Thanks, Needle doesn’t compete with those tools though and the distillation process did not access the weights.
I think GLM 5.1 or Kimi 2.6 could substitute for this type of purpose.
FYI, Gemini was developed using stolen copyrighted works without author consent. The double standard is striking.
So is copying all the books in the world.
Oh no! They stole the model weights! Distillation "attacks" is such bullshit
This is being downvoted but it's worth noting if only for the "be careful" aspect.
That said, we need more people distilling models IMO, just be ready for a C&D and a ban