82 comments

  • simonw 10 hours ago

    I got a WebAssembly build of this working and fired up a web playground for trying it out: https://simonw.github.io/research/monty-wasm-pyodide/demo.ht...

    It doesn't have class support yet!

    But it doesn't matter, because LLMs that try to use a class will get an error message and rewrite their code to not use classes instead.

    Notes on how I got the WASM build working here: https://simonwillison.net/2026/Feb/6/pydantic-monty/

  • avaer 10 hours ago

    This feels like the time I was a Mercurial user before I moved to Git.

    Everyone was using Git for reasons that seemed bandwagon-y to me, when Mercurial just had a much better UX and mental model.

    Now, everyone is writing agent `exec`s in Python, when I think TypeScript/JS is far better suited for the job (it was always fast + secure, not to mention more reliable and information dense b/c of typing).

    But I think I'm gonna lose this one too.

    • miki123211 an hour ago

      3 reasons why Python is much better than JS for this IMO.

      1. Large built-in standard library (CSV, sqlite3, xml/json, zipfile); see the sketch after this list.

      2. In Python, whatever the LLM is likely to do will probably work. In JS, you have the Node / Deno split, far too many libraries that do the same thing (XMLHttpRequest / Axios / fetch), many mutually-incompatible import syntaxes (e.g. compare tsx versus Node's native TS execution), and features like top-level await (very important for small scripts, and something that an LLM is likely to use!), which only work if you pray three times on the day of the full moon.

      3. Much better ecosystem for data processing (particularly csv/pandas), partially resulting from operator overloading being a thing.
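
      For instance, points 1 and 3 in action, with zero dependencies (a minimal sketch; the file and column names are made up):

          import csv, json, sqlite3, zipfile

          # Pull a CSV out of a zip, load it into an in-memory DB, emit JSON --
          # all stdlib, no package manager involved.
          with zipfile.ZipFile("report.zip") as zf:
              with zf.open("sales.csv") as f:
                  rows = list(csv.DictReader(line.decode() for line in f))

          db = sqlite3.connect(":memory:")
          db.execute("CREATE TABLE sales (region TEXT, amount REAL)")
          db.executemany(
              "INSERT INTO sales VALUES (:region, :amount)",
              [{"region": r["region"], "amount": float(r["amount"])} for r in rows],
          )
          totals = db.execute(
              "SELECT region, SUM(amount) FROM sales GROUP BY region"
          ).fetchall()
          print(json.dumps(dict(totals)))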

    • giancarlostoro 6 hours ago

      Having been doing Python and JavaScript for over a decade, I would pick Python any day of the week. JavaScript is beautiful and also the most horrific programming language all at once. It still feels incomplete; there are too many oddities I've run into over the years, like checking for null, empty, or undefined values being inconsistent all around because different libraries behave differently.

      • whilenot-dev 31 minutes ago

        TBF is the Python ecosystem any different? None and dict everywhere, requirements.txt without pinned versions... I'm not complaining either, as I wouldn't expect a unified typed experience in ecosystems where multiple competing type checkers and package managers have been introduced gradually. How could any library from the python3.4 era foresee dataclasses or the typing module?

        Such changes take time, and I favor an "evolution trumps revolution" approach for such features. The JS/TS ecosystem has the advantage here, as it already went through its roughest times starting with es2015. In hindsight, that was a very healthy choice. The type system TS offers is something many other programming languages leave to be desired.

        If it weren't for its rich standard library and uv, I would still clearly favor TS and a runtime like bun or deno. Python still suffers from spread-out global state and a multi-paradigm approach when it comes to concurrency (if concurrency has even been considered by the library author). Python being the first programming language for many scientists takes its toll too: rich libraries of dubious quality in various domains. Whereas JS's origins in browser scripting contributed to the convention that global state is something to be frowned upon.

        I wish both ecosystems had good object schema validation built into the standard library. Python has the upper hand here with dataclasses, but it still follows a "take it or throw" approach, rather than supporting customization for validation.
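
        To be concrete about that last point: you can bolt checks onto a dataclass via __post_init__, but it runs once at construction and either passes or throws (a minimal sketch):

            from dataclasses import dataclass

            @dataclass
            class Order:
                quantity: int
                price: float

                def __post_init__(self):
                    # Runs once at construction time; there's no built-in hook
                    # for customizing or relaxing individual checks later.
                    if self.quantity <= 0:
                        raise ValueError("quantity must be positive")
                    if self.price < 0:
                        raise ValueError("price must be non-negative")

            Order(quantity=3, price=9.99)   # fine
            Order(quantity=0, price=9.99)   # raises ValueError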

    • nine_k 7 hours ago

      For historical reasons (FFI), Python has access to excellent vector / tensor mathematics (numpy / scipy / pandas / polars) and ML / AI libraries, from OpenCV to PyTorch. Hence the prevalence of Python in science and research. "Everybody knows Python".

      I do like Typescript (not JS) better, because of its highly advanced type system, compared to Python's.

      TS/JS is not inherently fast; it just has a good JIT compiler, while Python still ships without one. Regarding security, each interpreter is about as permissive as the other, and both can be sealed off from the environment pretty securely.

    • shoeb00m 9 hours ago

      A big benefit of letting agents run code is they can process data without bloating their context.

      LLMs are really good at writing Python for data processing. I would suspect it's due to Python having a really good ecosystem around this niche.

      And the type safety/security issues can hopefully be mitigated by ty and pyodide (already used by cf’s python workers)

      https://pyodide.org/en/stable/

      https://github.com/astral-sh/ty

      • DouweM 8 hours ago

        (Pydantic AI lead here) That’s exactly what we built this for: we’re implementing Code Mode in https://github.com/pydantic/pydantic-ai/pull/4153 which will use Monty by default, with abstractions to use other runtimes / sandboxes.

        Monty’s overhead is so low that, assuming we get the security / capabilities tradeoff right (Samuel can comment on this more), you could always have it enabled on your agents with basically no downsides, which can’t be said for many other code execution sandboxes, which are often overkill for the code mode use case anyway.

        For those not familiar with the concept, the idea is that in “traditional” LLM tool calling, the entire (MCP) tool result is sent back to the LLM, even if it just needs a few fields, or is going to pass the return value into another tool without needing to see (all of) the intermediate value. Every step that depends on results from an earlier step requires a new LLM turn, limiting parallelism and adding a lot of overhead, expensive token usage, and context window bloat.

        With code mode, the LLM can chain tool calls, pull out specific fields, and run entire algorithms using tools with only the necessary parts of the result (or errors) going back to the LLM.

        These posts by Cloudflare: https://blog.cloudflare.com/code-mode/ and Anthropic: https://platform.claude.com/docs/en/agents-and-tools/tool-us... explain the concept and its advantages in more detail.
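
        Concretely, instead of several round trips, the model can emit one script along these lines (a hypothetical sketch; the tool names are made up, and the real wiring lives in the PR above):

            # Each function below is a tool the host exposes inside the sandbox;
            # only `summary` ever goes back into the model's context.
            issues = search_issues(label="bug", state="open")
            details = [get_issue(i["id"]) for i in issues[:10]]
            summary = {
                "count": len(issues),
                "oldest": min(d["created_at"] for d in details),
                "assignees": sorted({d["assignee"] for d in details if d["assignee"]}),
            }
            print(summary)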

        • shoeb00m an hour ago

          Oh, I did not mean to imply it (Monty) wasn't secure; just that pyodide used the same sandboxing tech that JS uses.

          You guys and Astral are my favorite groups in the Python ecosystem.

        • solidasparagus 5 hours ago

          Why do you think python without access to the library ecosystem is a good approach? I think you will end up with small tool call subgraphs (i.e. more round trips) or having to generate substantially more utility code.

        • 4b11b4 7 hours ago

          "But MCP is still useful, because it is uniform"

          Yes, I was also thinking... why MCP then?

          But even my simple class project reveals this. You actually do want a simple tool wrapper layer (abstraction) over every API. It doesn't even need to be an API. It can be a calculator that doesn't reach out anywhere.

          as the article puts it: "MCP makes tools uniform"

        • 4b11b4 8 hours ago

          lol "agents are better at writing code that calls MCP, then using mcp itself"

          In hindsight, it's pretty funny and obvious

    • rzerowan 9 hours ago

      Tangentially, I wonder if the recent changes to the GIL will percolate through to Mercurial as any improvements.

      Yep, still using good old hg for personal repos - interop for outside projects defaults to git, since almost all the hg hosts withered.

    • trenchgun 4 hours ago

      Python has uv, ruff, ty

    • piskov 10 hours ago

      Can we please write as little JS as possible?

      Why one would drag this god-forsaken abomination onto the server side is beyond me.

      Even effing C# nowadays can be run in a script-like manner from a single file.

      Even the latest Codex UI app is Electron - the one that is supposed to write itself with AI wonders, but couldn't manage native SwiftUI, WinUI, and Qt or whatever is on Linux these days.

      • aryonoco 9 hours ago

        My favourite languages are F# and OCaml, and from my perspective, TypeScript is a far better language than C#.

        TypeScript's types are far more adaptable and malleable, even compared with the latest C# 15, which is belatedly adding sum types. If I set TypeScript to its most strict settings, I can even make it mimic a poor man's Haskell and write existential types or monoids.

        And JS/TS have by far the best libraries and utilities for JSON and xml parsing and string manipulation this side of Perl (the difference being that the TypeScript version is actually readable), and maybe Nushell but I’ve never used Nushell in production.

        Recently I wrote a Linux CLI tool for managing podman/Quadlet containers, and I wrote it in TypeScript and it was a joy to use. The Effect library gave me proper Error types and immutable data types, and the Bun Shell makes writing shell commands in TS nearly as easy as Bash. And I got it to compile to a single self-contained binary which I can run on any server, with a lower memory footprint and faster startup time than any equivalent .NET code I've ever written.

        And yes, had I written it in Rust it would have been faster and probably even safer, but for a quick and dirty tool development speed matters, and I can tell you that I really appreciated not having to think about ownership and fight the borrow checker the whole time.

        TypeScript might not be perfect, but it is a surprisingly good language for many domains and is still undervalued IMO given what it provides.

      • IshKebab 9 hours ago

        I would say the same about Python, a language that has clearly got far too big for its boots.

    • bee_rider 4 hours ago

      Python has the advantage that everybody sort of knows it is bad and slow, which is an important trait for a glue language. This increases the incentive to do the right thing: call a library written in C or Fortran or something.

      • wiseowise an hour ago

        It might be slow, but it is definitely not bad. On the contrary, it is a great language. The closest to pseudocode you can get in a mainstream language.

  • imfing 8 hours ago

    This is a really interesting take on the sandboxing problem. It reminds me of an experiment I worked on a while back (https://github.com/imfing/jsrun), which embedded V8 into Python to allow running JavaScript with tightly controlled access to the host environment. Similar goal: running untrusted code from a Python host.

    I'm especially curious where the Pydantic team wants to take Monty. The minimal-interpreter approach feels like a good starting point for AI workloads, but the long tail of Python semantics is brutal. There is a trade-off between keeping the surface area small (for security and predictability) and providing sufficient language capabilities to handle the non-trivial snippets that LLMs generate for complex tasks.

    • scolvin 8 hours ago

      Can't be sure where this might end, but the primary goal is to enable codemode/programmatic tool calling, using the external function call mechanism for anything more complicated.

      I think in the near term we'll add support for classes, dataclasses, datetime, json. I think that should be enough for many use cases.
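
      For the curious, the external function mechanism is roughly this shape (a pseudo-API for illustration only, not Monty's actual interface; see the README for the real one): execution pauses when the script calls a function the interpreter doesn't define, the host runs it, and the interpreter resumes with the result.

          # Hypothetical host-side dispatch loop; all names are illustrative.
          run = sandbox.start("rows = fetch_rows('users'); result = len(rows)")
          while run.paused_on_call():
              call = run.pending_call()            # e.g. ("fetch_rows", ("users",))
              value = HOST_TOOLS[call.name](*call.args)  # runs outside the sandbox
              run = run.resume(return_value=value)
          print(run.result)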

    • ushakov 8 hours ago

      there's no way around VMs for secure, untrusted workloads. everything else, like Monty, has too many tradeoffs that make it non-viable for any real workloads

      disclaimer: i work at E2B, opinions my own

      • scolvin 8 hours ago

        As discussed on twitter, v8 shows that's not true.

        But to be clear, we're not even targeting the same "computer use" use case I think e2b, daytona, cloudflare, modal, fly.io, deno, google, aws are going after - we're aiming to support programmatic tool calling with minimal latency and complexity - it's a fundamentally different offering.

        Chill, e2b has its use case, at least for now.

        • fulafel 4 hours ago

          There's been a constant stream of V8 sandbox escape discoveries since its dawn, of course. Considering those have mostly existed for a long time before publication, it's very porous most of the time.

          And Python VM had/has its sandboxing features too, previously rexec and still https://github.com/zopefoundation/RestrictedPython - in the same category I'd argue.

          Then there's of course hypervisor based virtualization and the vulnerabilities and VM escapes there.

          Browsers use belt-and-suspenders approaches of employing both language runtime VMs and hardware memory protection as layers to some effect, but still are the star act at pwn2own etc.

          It's all layers of porous defenses. There'd definitely be room in the world for performant dynamic language implementations with provably secure foundations.

          • eichin 3 hours ago

            Part of why rexec is "historical" is that Guido was looking at some lockdown work and asked the community (on Twitter, probably?) to come up with attack ideas against a specific more-locked-down-than-default proposed version. After a couple of hours, it was clear that "patching the problems" was entirely doomed given how flexible Python is, and it was better to do something else entirely and stop pretending...

        • ushakov 8 hours ago

          we're not disagreeing here - i meant that for the general use case VMs are better; for some application-specific calls Monty might suffice.

          although you’d still need another boundary to run your app in to prevent breaking out to other tenants.

  • theanonymousone 36 minutes ago

    I wish someone commanded their agent to write a Python "compiler" targeting WASM. I'm quite surprised there is still no such thing in this day and age...

  • JoshPurtell 8 hours ago

    Monty is the missing link that's made me ship my Rust-based RLM implementation - and I'm certain it'll come in handy in plenty of other contexts.

    Just beware of panics!

    • JoshPurtell 8 hours ago
      • scolvin 7 hours ago

        Please report any panics, we'll fix them!

        • IhateAI 6 hours ago

          Why do SWEs build tools in the open that are openly hostile to their own trade? I can understand someone selfishly building tools for themselves, but by contributing to these efforts you're basically donating free software tools to companies, tools that will only be used to shrink their own engineering teams by making LLMs more capable/efficient.

          While I think all LLMs are shit, they probably eventually will not be shit, and it will be because people like you contributed to their progress. Nothing good will come of it for you or your peers. The billionaires who own everything will kick you to the curb as soon as you train your replacement that doesn't sleep, eat, or complain. Have some class solidarity.

          • simonw 4 hours ago

            How do you feel about software engineers who build open source libraries?

            Open source has been responsible for enormous productivity boosts in our industry, because we don't all have to build duplicates of exactly the same thing time and time again.

            But think of all of the jobs that were lost by people who would otherwise have been employed building the 500th version of a CSS design system, or a template engine, or code to handle website logins!

            What makes AI tools different? (And I actually do agree that they feel different, but I'm interested in hearing arguments stronger than "it feels different".)

            • achierius 4 hours ago

              Because beforehand engineers could be reasonably confident that their work would simply accelerate the growth of a growing pie; today, most expect that further development will be used, first and foremost, to replace labor. Most sectors do not grow indefinitely, so there's no reason to assume software has to.

              To put it gently, yes it feels different: for people who haven't already saved a lifetime of SWE wages, this is the first credible threat to the sector in which they're employed since the dot com bubble. People need to work to eat.

            • IhateAI 4 hours ago

              Previously, open source software didn't contribute to automating away jobs, at least not at scale. Open Source libraries weren't potentially maintaining themselves (I know we aren't there yet, but that seems to be the goal).

              You cannot compare any open source software, even as a whole, to the impact that LLMs have had on labor and are projected to have. However, I might now argue it would have been better to not have so much open source, as it's clearly being processed through these plagiarism-laundering training regimes.

              I don't really think LLMs, robotics, and ML in general are going to increase GDP globally; they will instead just replace the inputs that were maintaining the status quo (the workers). If they can't successfully replace human labor, they will at minimum greatly reduce its value, which is extremely dangerous.

              Jobs grew greatly during the last 30 years of open source development, but we've had 350-400k SWE layoffs in the last 16 months in the USA. Many of these layoffs have been directly correlated with AI-enhanced productivity. 25% of recent college graduates are unemployed. Jobs data is super unreliable at the moment, but we will also see large swaths of the lower-skilled sectors, customer service for example, take huge layoffs in the coming 24 months.

              Despite what C-suites say about AI giving them more free time for their hobbies or whatever, they've yet to answer how people are going to afford those hobbies. Working as a barista, lol? These same mouthpieces will say that LLMs are going to allow the same number of engineers to get 10x more done, but they're not reflecting that in their business decisions. They are laying people off in swaths while equities are at all-time highs; it's abnormal.

              I think it's more likely the ruling classes will give us something to do by making us so poor that young men will beg to go fight wars. Put us to use on behalf of their conquest for more resources; that certainly did the trick in the 20s, 30s, and 40s :/

          • JoshPurtell 5 hours ago

            Every AI advancement liberates real humans from drudgery and allows them to create what they want more easily.

            The invention of the digital calculator turned human calculators into accountants, and that's great! We're contributing to the same process now

            • IhateAI 4 hours ago

              It liberates those who have massive resources to run gigantic models at whatever scale they want.

              Corporations and billionaires will get TI-Nspires; we get TI-83s.

              I do not agree that inference will get more affordable in time to prevent harm. It will cause way more problems with the devaluation of labor before it starts to solve those problems, and in that period they will solidify their control over society.

              We already see it in how ML is being used on a vast scale to build advanced surveillance infrastructure. Let's not build the advanced calculators for them for free in open source, please; they'd like nothing better. I wrote a lot more in the comments above also.

              If anyone has time, this is required reading imho: https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...

              • JoshPurtell 2 hours ago

                Billionaires and corporations can hire teams of people to work for them full-time. You, likely, can hire one or two (or zero!). Not to make it personal.

                These inequalities already exist

          • rcv 5 hours ago

            Staying true to your username at least. While I hear you in principle, I don’t think shaming people into not building things is going to work out. Even if you could convince some people, you’ll never reach them all. Someone will build it. IMO energy is better spent figuring out how to best structure our society to handle the seemingly inevitable end state where superhuman AI is commonplace.

            • IhateAI 4 hours ago

              Sorry if I'm shaming. I suppose you're right, someone will probably build them. But in order to prevent bad outcomes for the average joe/worker, we can't just hand optimizations over to corporations for free in the form of open source. We know all too well how open source is exploited.

              I don't know how to stop people from building this without shaming them. I think more shaming might be required, as uncomfortable as that may be. It's a society-wide prisoner's dilemma ("well, if I don't build it, someone else will"), except this isn't really a prisoner's dilemma and we can coordinate, sort of.

              It would be one thing if GPUs and tokens were cheap and everyone could take these implementations and outcompete the corporations, but those aren't the game-theoretical terms we're on here. They have the resources, and I promise they are not going to let the average joe be able to afford to outcompete them. They are the ones that are going to get the most advantage from these tools. Why give them the extra leverage? It will be used to displace you. The ruling class, or those with the resources, have zero intention of letting the tide raise all boats. And if there are any in the ruling class who do have good intentions, they will be rooted out.

              We see this evidence all across literature, history, and their own actions. This year in Telluride, Colorado, the ski patrol union went on strike over wages. The billionaire owner, Chuck Horning, who lives in California, did not want to concede $66k spread out over 3 years, like 22k a year over the contract length. He shut down the ski resort during the Christmas holidays and brought the town to its knees. This is just one example, but there are many. It is ideological to these people; it's about maintaining their control over the working class. We are at the beginning of a class struggle that Earth has never witnessed before, with way more lives at stake.

              I do not think LLMs are going to lead to superintelligence, btw. I do believe they will get decent enough to uproot many lives when they're used as a weapon against the value of labor and to accelerate the concentration of resources into the few(er). We are up against people like Chuck Horning, who'd rather destroy an entire town of workers over 22k a year than concede any power. They have zero interest in building an equitable society, or we wouldn't see this type of behavior. This will 100% get used to replace you, and then what will they do with us? They aren't going to just let everyone chill, I promise you that.

              I believe the devaluation (and surveillance) of labor because of LLMs, robotics, and machine learning in general is the most pressing issue of our time.

              I get the draw of building cool tools with these things, but please don't do it in the open. Let someone else do it, and then we can call them out too. The slower these developments happen, the better.

  • c2xlZXB5 9 hours ago

    Maybe a dumb question, but couldn't you use seccomp to limit/deny the syscalls the Python interpreter has access to? For example, if you don't want it messing with your host filesystem, you could just deny it any filesystem-related system calls. What is the benefit of using a completely separate interpreter?
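
    For reference, the crudest version of this is only a few lines from inside CPython itself (a sketch of seccomp strict mode via ctypes; the prctl constants are the real Linux values):

        import ctypes

        PR_SET_SECCOMP = 22      # from <linux/prctl.h>
        SECCOMP_MODE_STRICT = 1  # only read/write/_exit/sigreturn allowed afterwards

        libc = ctypes.CDLL(None, use_errno=True)
        if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
            raise OSError(ctypes.get_errno(), "prctl(PR_SET_SECCOMP) failed")

        # Any other syscall now kills the process with SIGKILL -- including the
        # mmap/brk calls CPython's allocator makes, which is why practical
        # sandboxes use seccomp filter mode (BPF) with a tuned allowlist instead.

    The catch is that the interpreter itself needs far more syscalls than the logic you want to confine, so the filter gets big and fragile.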

    • thundergolfer 4 hours ago

      https://github.com/butter-dot-dev/bvisor is pushing in that direction

    • oofbey 8 hours ago

      Yours is a valid approach, but you always gotta wonder if there's some way around it. Starting with a runtime that has ways of accessing every aspect of your system, there are a lot of ways an attacker might try to defeat the blocks you put in place. The point of starting with something super minimal is that the attack surface is tiny. Really hard to see how anything could break out.

      • ushakov 8 hours ago

        agree. you still need a secure boundary like a VM to isolate the tenants in case the model breaks out of the sandbox.

        everything that you don’t want your agent to access should live outside of the sandbox.

  • SafeDusk 8 hours ago

    Sandboxing is going to be of growing interest as more agents go “code mode”.

    Will explore this for https://toolkami.com/, which allows plug-and-play advanced “code mode” for AI agents.

  • bigcat12345678 5 hours ago

    It seems that AI finally gives true pure-blood systems software the space to unleash its potential.

    Pretty much all modern software tooling, once you remove the parts that aim to appeal to humans, becomes a much more reliable tool. But it's not clear if the performance will be better or not.

  • geysersam 8 hours ago

    Is AI running regular Python really a problem? I see that in principle there is an issue, but in practice I don't know anyone who's had security issues from this. Have you?

    • scolvin 8 hours ago

      No one is going to let an LLM that's prompted by end users write Python code that just runs on my server; there's no real debate on that.

      • ushakov 8 hours ago

        i think there's confusion around what use case Monty is solving (i was confused as well). it seems to isolate a scope of execution, like function calls, not entire Python applications

  • _joel 10 hours ago

    Well I love the name, so definitely trying this out later, but first...

    And now for something completely different.

  • krick 9 hours ago

    I don't quite understand the purpose. Yes, it's clearly stated, but what do you mean by "a reasonable subset of Python code" that "cannot use the standard library"? 99.9% of the Python I write for anything ever uses the standard library and then some (requests?). What do you expect your LLM agent to write without that? A pseudo-code sorting algorithm sketch? Why would you even want to run that?

    • impulser_ 8 hours ago

      They plan to use it for "Code Mode", which means the LLM will use this to run Python code that it writes to call tools, instead of having to load the tools up front into the LLM context window.

      • DouweM 8 hours ago

        (Pydantic AI lead here) We’re implementing Code Mode in https://github.com/pydantic/pydantic-ai/pull/4153 with support for Monty and abstractions to use other runtimes / sandboxes.

        The idea is that in “traditional” LLM tool calling, the entire (MCP) tool result is sent back to the LLM, even if it just needs a few fields, or is going to pass the return value into another tool without needing to see the intermediate value. Every step that depends on results from an earlier step also requires a new LLM turn, limiting parallelism and adding a lot of overhead.

        With code mode, the LLM can chain tool calls, pull out specific fields, and run entire algorithms using tools with only the necessary parts of the result (or errors) going back to the LLM.

        These posts by Cloudflare: https://blog.cloudflare.com/code-mode/ and Anthropic: https://platform.claude.com/docs/en/agents-and-tools/tool-us... explain the concept and its advantages in more detail.

    • notepad0x90 8 hours ago

      It's Pydantic; they're verifying types and syntax, and those don't require the stdlib. Type hints, syntax checks, likely logical issues, etc. Static type checking is good at that, but LLMs can take it to the next level, where they analyze the intended data flow and find logical bugs, or code with good syntax and typing that isn't what was intended.

      For example, incorrect levels of indentation:

          for key, val in mydict.items():
              if key == "operation":
                  logging.info("Executing operation %s", val)
              if val == "drop_table":
                  self.drop_table()

      This uses valid syntax, but the logging call comes from the stdlib, which isn't supported here, so I assume it would be ignored or replaced with dummy code? That shouldn't prevent it from analyzing that loop and determining that the second if-block was intended to be under the first, and that the way it is written now, the key check isn't applied to it.

      In other words, if you don't want to validate proper stdlib/module usage, but proper __Python__ usage, this makes sense. Although I'm speculating on exactly what they're trying to do.

      EDIT: I think my speculation was wrong; it looks like they might have developed this to write code for pydantic-ai: https://github.com/pydantic/pydantic-ai . I'll leave the comment above as-is though, since I think it would still be cool to have that capability in Pydantic.

  • dmpetrov 12 hours ago

    I like the idea a lot, but it's still unclear from the docs what the hard security boundary is once you start calling LLMs - can it avoid "breaking out" into the host env in practice?

  • wewewedxfgdf 6 hours ago

    If I say my code is secure, does that make it secure?

    Or is all Rust code unquestionably secure?

    • maxbond 3 hours ago

      Of course not, especially when the security model is about access to resources like file systems that are outside the scope of what the Rust compiler can verify. While you won't have a data race in safe Rust you absolutely can have data races accessing the file system in any language.

      Their security model, as explained in the README, is in not including the standard library and limiting all access to the environment to functions you write & control. Does that make it secure? I'll leave it to you to evaluate that in the context of your use case/threat model.

      It would appear to me that they used Rust primarily because a.) they want to deliver very fast startup times and b.) they want it to be accessible from a variety of host languages (like Python and JavaScript). Those are things Rust does well, though not to the exclusion of C or other GC-free compiled languages. They certainly do not claim that Rust is pixie dust you sprinkle on a project to make it secure. That would clearly be cargo culting.

      I find this language war tiring. Don't you? Let's make 2026 the year we all agree to build cool stuff in whatever language we want without this pointless quarreling. (I've personally been saying this for three years at this point.)

  • Retr0id 8 hours ago

    I'm enjoying watching the battle for where to draw the sandbox boundaries (and I don't have any answers, either!)

    • ushakov 8 hours ago

      best answer is probably to have a layered approach - use this to limit what the generated code can do, wrap it in a secure VM to prevent leaking out to other tenants.

  • globular-toast 2 hours ago

    I don't get what "the complexity of a sandbox" is. You don't have to use Docker. I've been running agents in bubblewrap sandboxes since they first came out.[0]

    If the agent can only use the Python interpreter you choose, then you could just sandbox regular Python, assuming you trust the agent. But I don't trust any of them, because they've probably been vibe coded, so I'll continue to just sandbox the agent using bubblewrap.
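
    For anyone who hasn't tried it, launching an interpreter under bwrap is pretty terse (a sketch; the flags are standard bubblewrap options, but the exact binds depend on your distro's filesystem layout):

        import subprocess

        # No network, no real /home, throwaway /tmp.
        subprocess.run([
            "bwrap",
            "--ro-bind", "/usr", "/usr",
            "--symlink", "usr/bin", "/bin",
            "--symlink", "usr/lib", "/lib",
            "--symlink", "usr/lib64", "/lib64",
            "--proc", "/proc",
            "--dev", "/dev",
            "--tmpfs", "/tmp",
            "--unshare-all",
            "--die-with-parent",
            "python3", "-I", "-c", "print('hello from the sandbox')",
        ], timeout=60)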

    [0] https://blog.gpkb.org/posts/ai-agent-sandbox/

  • falcor84 9 hours ago

    Wow, a startup latency of 0.06ms

  • OutOfHere 10 hours ago

    It is absurd for any user to use a half-baked Python interpreter, especially one that will always lag far behind CPython in its support. I advise sandboxing CPython instead using OS features.

    • simonw 8 hours ago

      How do I sandbox CPython using OS features?

      (Genuine question, I've been trying to find reliable, well documented, robust patterns for doing this for years! I need it across macOS and Linux and ideally Windows too. Preferably without having to run anything as root.)

      • nickpsecurity 5 hours ago

        It could be difficult. My first thought would be a SELinux policy like this article attempted:

        https://danwalsh.livejournal.com/28545.html

        One might have different profiles with different permissions. A network service usually wouldn't need your home directory, while a personal utility might not need networking.

        Also, that concept could be mixed with subprocess-style sandboxing. The two processes, main and sandboxed, might have different policies. The sandboxed one can only talk to the main process over a specific channel, nothing else. People usually also meter CPU, RAM, etc.

        INTEGRITY RTOS had language-specific runtimes, esp Ada and Java, that ran directly on the microkernel. A POSIX app or Linux VM could run side by side with it. Then, some middleware for inter-process communication let them talk to each other.

      • OutOfHere 6 hours ago

        Docker and other container runners allow it. https://containers.dev/ allows it too.

        https://github.com/microsoft/litebox might somehow allow it too if a tool can be built on top of it, but there is no documentation.
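
        Concretely, the Docker variant looks something like this (a sketch; all standard docker run flags, though not a complete hardening story on their own):

            import subprocess

            # Network off, read-only root, capped resources.
            subprocess.run([
                "docker", "run", "--rm",
                "--network", "none",
                "--read-only",
                "--cap-drop", "ALL",
                "--security-opt", "no-new-privileges",
                "--memory", "256m",
                "--pids-limit", "64",
                "python:3.13-slim",
                "python", "-I", "-c", "print('hello from the container')",
            ], timeout=120)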

        • simonw 5 hours ago

          Every time I use Docker as a sandbox people warn me to watch out for "container escapes".

          I trust Firecracker more because it was built by AWS specifically to sandbox Lambdas, but it doesn't work on macOS and is pretty fiddly to run on Linux.

    • bityard 9 hours ago

      Python already has a lot of half-baked (all the way up to nearly-fully-baked) interpreters, what's one more?

      https://en.wikipedia.org/wiki/List_of_Python_software#Python...

    • avaer 10 hours ago

      The repo does make a case for this, namely speed, which does make sense.

      • sd2k 9 hours ago

        True, but while CPython does have a reputation for slow startup, completely re-implementing it isn't the only way to work around that - e.g. with eryx [1] I've managed to pre-initialize and snapshot the Wasm and pre-compile it, to get real CPython starting in ~15ms, without compromising on language features. It's doable!

        [1] https://github.com/eryx-org/eryx

      • OutOfHere 6 hours ago

        Speed is not a feature if there isn't even syntax parity with CPython.

        • maxbond 2 hours ago

          Not having parity is a property they want, similar to Starlark. They explicitly want a less capable language for sandboxing.

          Think of it as a language for their use case with Python's syntax and not a Python implementation. I don't know if it's a good idea or not, I'm just an intrigued onlooker, but I think lifting a familiar syntax is a legitimate strategy for writing DSLs.

  • spacedatum 6 hours ago

    There is no reason to continue writing Python in 2026. Tell Claude to write Rust a priori. Your future self will thank you.

    • JoshPurtell 5 hours ago

      I do both and compile times are very unfriendly to AI!

      • spacedatum 2 hours ago

        Compile times, I can live with. You can run previous models on the GPU while your new model is compiling. Or switch from cargo to bazel if it is that bad.

        • JoshPurtell 2 hours ago

          What compile times do you work with? I use bazel and it still hurts

          • spacedatum 2 hours ago

            It is a tradeoff, but I prefer my checks at compile time to runtime. Python can be brittle and silently wrong.

            • wiseowise 42 minutes ago

              What kind of type checking do you think Rust does at runtime?

  • rienbdj 10 hours ago

    If we’re going to have LLMs write the code, why not something more performant? Like pages and pages of Java maybe?

    • scolvin 9 hours ago

      this is pretty performant for short scripts if you measure time "from code to rust" which can be as low as 1us.

      Of course it's slow for complex numerical calculations, but that's not the primary use case.

      I think the consensus is that LLMs are very good at writing python and ts/js, generally not quite as good at writing other languages, at least in one shot. So there's an advantage to using python/js/ts.

      • catlifeonmars 9 hours ago

        Seems like we should fix the LLMs instead of bending over backwards, no?

        • redman25 5 hours ago

          They’re good at it because they’ve learned from the existing mountains of python and javascript.

          • catlifeonmars 2 hours ago

            I think the next big breakthrough will be cost effective model specialization, maybe through modular models. The monolithic nature of today’s models is a major weakness.

          • rienbdj an hour ago

            Plenty of Java in the training data too.