3 comments

  • magicalhippo 4 hours ago

    A local model is an open model you run locally, so I'm not entirely sure the distinction in the question makes sense.

    That said, if you're talking about models you can actually use on a single regular computer that costs less than a new home, the current crop of open models is very capable but also has noticeable limitations.

    Small models will always have limitations in terms of capability and especially knowledge. Improved training data and training regimen can squeeze more out of the same number of weights, but there is a limit.

    So with that in mind, I think such a question only makes sense when talking about specific tasks, like creative writing, data extraction from text, answering knowledge questions, refactoring code, writing greenfield code, etc.

    In some of these areas the smaller open models are very good and not that far behind. In others they lag much further behind, due to their inherent limitations.

  • hasperdi 3 hours ago

    Well, it depends on the hardware you have. If you have local hardware that can run the best open models, then your local models are as capable as the open models.

    That said, open models are not far behind SOTA; the gap is less than nine months.

    If what you're asking about is the models you can run on retail GPUs, then they're a couple of years behind. They're "hobby" grade.
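
    As a rough illustration of why retail GPUs are the bottleneck, here's a back-of-envelope VRAM estimate (a minimal sketch; the flat overhead figure and the model sizes are assumptions for illustration, not measurements):

      # Rough VRAM estimate for running a quantized model locally.
      # Assumption: weights dominate memory, plus a flat overhead
      # for KV cache and activations (hypothetical 2 GB figure).
      def vram_gb(params_billions, bits_per_weight, overhead_gb=2.0):
          weight_gb = params_billions * bits_per_weight / 8
          return weight_gb + overhead_gb

      # A 7B model at 4-bit (~5.5 GB) fits a retail GPU easily;
      # a 70B model at 4-bit (~37 GB) exceeds a single 24 GB card.
      for params, bits in [(7, 4), (13, 4), (70, 4), (70, 16)]:
          print(f"{params}B @ {bits}-bit: ~{vram_gb(params, bits):.1f} GB")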

  • softwaredoug 4 hours ago

    A local model is a smaller open model, so I’d expect it to be 9 months behind a small (i.e. nano) closed model as a base assumption.