The first general computer action model

(si.inc)

24 points | by nee1r 6 hours ago

20 comments

  • clemvonstengel 5 hours ago

    I really liked the point about ctrl-c only being labellable retrocausally. I do think that with enough past context you should be able to know what was copied - in some sense the past does encode the future - but also an agentic decision is precisely the kind where the future is more informative than the past for reconstructing that decision.

    It does make me wonder if you should have the inverse dynamics model split into specifically retrocausal and causal. You kind of do this already with the inverse and forward dynamics model, but the idea of a model that knows only about the future training in a feedback loop with a model that knows only about the past is kind of interesting.

    I think you could just do a clever masking regime in your diffusion model to achieve the same effect without a whole architecture change.

    • g413n 5 hours ago

      yeah we actually had some wacky ideas with ctc + a reverse-causal mask, but diffusion just makes it all a bit simpler
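One way to picture the causal / reverse-causal split discussed above is as attention masks. This is an illustrative sketch only, not the authors' code; the function names are made up for the example:

```python
# Illustrative sketch: a "causal" branch sees only past frames, while a
# "retrocausal" branch sees only future frames (useful for actions like
# ctrl-c, where the future is what reveals what was copied).

def causal_mask(t):
    """mask[i][j] is True where step i may attend to step j <= i (past only)."""
    return [[j <= i for j in range(t)] for i in range(t)]

def retrocausal_mask(t):
    """mask[i][j] is True where step i may attend to step j >= i (future only)."""
    return [[j >= i for j in range(t)] for i in range(t)]
```

In a masked-diffusion IDM, the same effect could plausibly come from restricting which frames each denoising pass is allowed to condition on; that would be a training-time masking schedule rather than an architecture change.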

  • nee1r 5 hours ago

    Hey guys! I’m Neel, been holed up in our south park office for the past year working on model training. Excited to share our research!

    This is a preview of a very different type of computer-use model: we train on the internet. Specifically, we have 11 million hours of computer video stored on our storage cluster (previously shared https://news.ycombinator.com/item?id=45438496 !) and the model can run at 30 FPS. Since we match the fundamental form factor of computer use, we can get our model to do CAD, browse websites, and even drive a car using arrow keys. I’m super excited to see what our model can do as we scale more, it's a fun frontier to work on (not language models :) ).

    The team and I will be online responding to the comments, so drop any questions.

  • aakashks 4 hours ago

    The video compression is very cool. And the small tricks like binning the mouse movements.

    Wonder how much of the data is generalizable across different UIs? i.e., how good will the model be at using Figma if it’s never seen it before but has seen a lot of Photoshop?

    • nee1r 4 hours ago

      this is honestly an issue for the inverse dynamics model (for app-specific shortcuts etc.), but for general UI learning we still see promising eval trends

  • rio_popper 5 hours ago

    Curious about the masked diffusion IDM choice. They mention CTC loss and cross-entropy both underperformed — I'd love to see ablations on that. The claim that typos were "extremely common" with non-causal cross-entropy is interesting but hand-wavy without numbers.

    • nee1r 5 hours ago

      the main chain of experiments was trying causal => non-causal => non-causal with ctc and CE. i think a good intuition here is that you fundamentally need a generative approach, because there definitely are multiple correct IDM labels.

  • ennucore 5 hours ago

    The car thing is very impressive. By the way, do you have plans to handle the computer’s audio output?

    • g413n 5 hours ago

      yeah we've done audio work in the past so we'll def merge the recipes at some point; long term it should have the full io a human has (except maybe not generating video for video calls, that seems a bit much)

  • snowhale 4 hours ago

    curious about distribution in the training data. 11M hours of internet computer use is probably heavily skewed toward browser, email, and productivity apps -- the long tail of specialized tools (CAD, financial software, lab instruments) is thin. the car demo is impressive but driving is actually well-represented in internet video. how much fine-tune data did you need for blender vs the car task?

    • nee1r 4 hours ago

      no finetuning data for the blender task! we actually think it's the opposite: there are a lot of video tutorials for complex tasks like onshape/blender/fusion360, but not as much footage of people idly browsing.

      but also at the 11M-hour scale it still sees a substantial amount of data

  • ClaireBookworm 5 hours ago

    What sort of fine tuning data was needed to allow the model to self-drive? One hour of video of someone driving, or extra labeling?

    • nee1r 5 hours ago

      i actually drove the car (with arrow keys) around south park for ~45 minutes as finetuning data, no extra labelling other than that. think the car line graph is super cool because you actually see the videogame prior working

    • g413n 5 hours ago

      relevant note is that we finetuned by having the human also use arrow keys, which keeps it in-distribution but is also slower to collect

  • kdrag0n 5 hours ago

    what tasks can the model do out of the box? was each of the examples a different fine-tuned model?

    • g413n 5 hours ago

      it's a pretty general policy, but this is all super early. it's great at exploring websites so fuzzing was easy; for CAD it has good enough base rates with the few-shot prompt when we do the repetitive stuff, and we gave it checkpoints on each step. the other stuff in the mosaic is just some of our favorite clips from internal evals

  • ennucore 5 hours ago

    How do you tokenize the mouse inputs?

    • nee1r 5 hours ago

      good question! we use exponential binning (map the mouse movements onto a plane with exponentially increasing tick marks https://si.inc/fdm1/exponential_binning.webp) but tried a bunch of other methods (linear creates too many tokens for the model to learn well). Polar coordinates seemed like a better solution but empirically didn't work well because the tokens got too coarse too fast.

    • g413n 5 hours ago

      we do exponential binning, but fwiw I think we can do way better, it just hasn't been the main research area initially
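The exponential binning described above can be sketched roughly as follows. This is a minimal illustration assuming geometrically spaced bin edges per axis; the names, bin counts, and clamping behavior are made up for the example and are not from si.inc's implementation:

```python
# Sketch of exponential binning for 1-D mouse deltas: small movements get
# fine resolution, large movements share coarse bins, keeping the token
# vocabulary small (2 * n_bins + 1 tokens per axis in this sketch).

def make_edges(max_delta=1024, n_bins=16):
    """Geometrically spaced positive bin edges: ~1, 2, 3, ... up to max_delta."""
    ratio = max_delta ** (1 / (n_bins - 1))
    return [round(ratio ** i) for i in range(n_bins)]

def bin_delta(delta, edges):
    """Map a signed 1-D mouse delta to a token id.

    Token 0 is 'no movement'; positive deltas map to 1..n,
    negative deltas mirror to n+1..2n.
    """
    if delta == 0:
        return 0
    magnitude = abs(delta)
    for i, edge in enumerate(edges):
        if magnitude <= edge:
            idx = i + 1
            break
    else:
        idx = len(edges)  # clamp out-of-range deltas to the coarsest bin
    return idx if delta > 0 else idx + len(edges)
```

With linear bins you would need roughly `max_delta` tokens per axis to get the same fine resolution near zero, which matches the comment that linear "creates too many tokens for the model to learn well".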
