How we rooted Copilot

(research.eye.security)

266 points | by uponasmile 12 hours ago

105 comments

  • simonw 12 hours ago

    OK, I think I understand what this is about: the vulnerability that they reported (and Microsoft fixed) is that there was a trick you could use to run your own code with root privileges inside the container - when the system was designed to have you only execute code as a non-root user.

    It turned out not to really matter, because the container itself was still secured - you couldn't make network requests from it and you couldn't break out of it, so really all you could do with root was mess up a container that only you had access to anyway.

    • 0xbadcafebee 8 hours ago

      I have to give Microsoft props here. Most companies don't bother to lock things down well enough, but they were thorough.

    • pamelafox 11 hours ago

      I don’t know specifically how this container was implemented, but Microsoft has a standard way to do isolated Python sandboxes: https://learn.microsoft.com/en-us/azure/container-apps/sessi... Hopefully this feature is using that or something similar.

    • stevage 5 hours ago

      It seems weird to me that copilot sometimes refuses to execute code but sometimes allows it. What exactly are they aiming for?

      • wizzwizz4 4 hours ago

        They're not. It's better to think of Copilot as a collaborative storytelling session with a text autocomplete system, which some other program is rudely hijacking to insert the result of running certain commands.

        Sometimes the (completion randomly selected from the outputs of the) predictive text model goes "yes, and". Other times, it goes "no, because". As observed in the article, if it's autocompleting the result of many "yes, and"s, the story is probably going to have another "yes, and" next, but if a story starts off with a certain kind of demand, it's probably going to continue with a refusal.
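A toy illustration of the point above (purely schematic; the probabilities and context labels are invented, not any real model's): the continuation is sampled from a next-token distribution that shifts with the preceding context.

```python
import random

# Hypothetical next-"move" probabilities conditioned on conversation
# history: after many compliant turns the distribution leans toward
# another "yes, and"; a story opening with a cold demand leans toward
# a refusal.
P_NEXT = {
    "many yes-ands": {"yes, and": 0.9, "no, because": 0.1},
    "cold demand":   {"yes, and": 0.2, "no, because": 0.8},
}

def continue_story(context: str, rng: random.Random) -> str:
    # Sample the next move from the context-dependent distribution,
    # like a temperature > 0 decoder sampling the next token.
    dist = P_NEXT[context]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

rng = random.Random(0)
print(continue_story("many yes-ands", rng))  # usually "yes, and"
print(continue_story("cold demand", rng))    # usually "no, because"
```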

        • stevage 3 hours ago

          Funny how it sounds kind of like the opposite of how people might work. Get enough 'no's from someone and they might finally cave in; get enough 'yes'es and they might get sick of doing everything you ask.

    • ajross 11 hours ago

      In the modern world vulnerabilities are stacks. Asserting that "the container itself was still secured" is just a statement that the attackers didn't find anything there. But container breakouts and VM breakouts are known things. All it takes is a few mistakes in configuration or a bug in a virtio driver or whatever. This is a real and notable result.

      • simonw 11 hours ago

        If they had found and reported a container breakout I expect they would've got a bug bounty from it!

        Are there any known unfixed container breakouts at the moment in the kind of systems Microsoft are likely to be using here?

        • DSMan195276 11 hours ago

          The problem is that you're encouraging people to keep stuff like this to themselves until they can use it to perform an exploit that they'd get paid for, which is the opposite of what Microsoft wants - they'd much rather you report it now so that if an exploit does get found that requires root they would potentially be protected.

          The simple question for Microsoft to answer is - does it matter to them if attackers have root access on the container? If the answer is yes then the bug bounty for root access should at least pay something to encourage reporting. If the answer is no then this shouldn't have been marked as a vulnerability because root access is not considered a security issue.

        • VBprogrammer 11 hours ago

          Presumably someone with mal-intent would sit on the root vulnerability waiting for a container breakout bug to come around.

          • thfuran 11 hours ago

            But a $5 wrench isn't a critical security vulnerability just because someone somewhere might one day find the right person to apply it to to extract important credentials.

            • VBprogrammer 11 hours ago

              A container root exploit isn't a critical security vulnerability either, describing it as moderate seems fair, but it's a reasonable step towards one.

            • worik 7 hours ago

              That is exactly what it is.

              Proper security in depth means that when trusted actors betray the system, the damage is limited.

            • ajross 6 hours ago

              Not really the right metaphor. A $5 wrench isn't a "vulnerability" because it's $5! Tools that are accessible to everyone are part of the threat model, not something you can eliminate or avoid. This trick is novel.

              Like, suppose your personal cult was built around an "unopenable" bolt-tightened box. Then someone invents the wrench in an attempt to open it. That would be a clear "security vulnerability", right?

              • thfuran 5 hours ago

                Not a serious one if all the wrench actually gets you is access to the room that contains the box that no known tool can open, which is a closer analogy to what happened.

                • pbhjpbhj 4 hours ago

                  And an exploit that breaks out of the sandbox is not really anything if it needs root to work... so if a hacker had those two, MS wouldn't care about them selling those bugs, because neither of them is serious on its own. See, perfect security, and it didn't cost them anything.

                • ajross 5 hours ago

                  Again, though, you're taking "all that gets you" as a prior when (abandoning the metaphor) container and VM escapes are routine vulnerabilities. They just weren't the subject of this particular team who wanted to hack on AI. You don't do security analysis by presuming the absence of vulnerabilities!

                  Modern security is defense in depth. The AI pre-prompting setup was the first layer, and it was escaped. The UID separation inside the container was another, and it was broken. The container would have been next. And hopefully there are network firewalls and egress rules on top of that, etc... And all of those can and have failed in the past.

                  • tptacek 37 minutes ago

                    Sure, I guess, but a lot more is broken than Copilot if you assume arbitrary container escape. (I do!)

        • tptacek 9 hours ago

          Almost certainly yes, since at that point all you're looking for is a Linux kernel LPE.

        • worik 7 hours ago

          > they would've got a bug bounty from it!

          Why do you think that, rather than get sued? I am curious

          • simonw 5 hours ago

            Microsoft have a bug bounty program which is credible and well run.

            Suing people who responsibly disclose security issues to you is a disastrous thing to do. Word spreads instantly and now you won't get any responsibly disclosed bug reports in the future.

            Microsoft are way too smart to make that mistake.

  • afro88 7 hours ago

    It's crazy to me that someone can write a post called "How We Rooted Copilot" when in reality they got root in an ephemeral python sandbox container that was locked down so much that they couldn't do anything.

    I read "rooted copilot" and I think they got root on a vm that is core to copilot itself.

    A much more accurate title would be "How We Rooted the Copilot Python Sandbox"

  • tptacek 12 hours ago

    I read this as them breaking out of a Python sandbox into a container. That also squares with MSFT scoring this "moderate" severity.

  • ChuckMcM 10 hours ago

    So am I just missing something, or could you create a network connection to the "outside" world (presumably by finding your way around the local network, fuzzing the router endpoint, etc.)? Or is Microsoft able to provide these containers where their customers can get root access without any risk of exfiltration or exploitation?

    • pinoy420 10 hours ago

      Back when OpenAI released Python interpretation, it was trivial to do what they did there. There was no open network access; the only things of interest were a little insight into how their developers program, and a couple of internal configuration files.

      This is literally the same.

  • blastonico 9 hours ago

    How does he know that the response isn't just hallucinations?

    I'm saying this because I work there, and I don't recognize any of those processes.

    In fact I found one script named keepAliveJupyterSvc.sh in a public repo: https://github.com/shivamkm07/code-interpreter/blob/load-tes...

    • eddythompson80 7 hours ago

      That repo and its contributors are MS/Azure employees working on the service for running Python code in a container. I don't know why it's under a personal account. Though it says it's a fork of an Office repo that I can't find.

      • what 26 minutes ago

        How do you figure? I don't see anything that suggests they work for MS/Azure.

    • jon_adler 7 hours ago

      It may not be a hallucination. Perhaps the Copilot code was generated from the GitHub training set?

    • blastonico 9 hours ago

      Oh boy, this really seems to be hallucination.

      Guys, chatbots are mostly token generators; they don't run programs and give you responses... it's not a simple shell program, it computes things on a GPU and returns tokens, which are translated back to English.

      • MattGaiser 7 hours ago

        This is very out of date. They now often trigger tooling and return the outputs of the tooling.
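A toy sketch of that loop (the JSON shape and names here are illustrative, not any vendor's actual API): the model emits a structured tool call, the host application executes it, and the output is fed back to the model.

```python
import contextlib
import io
import json

# Hypothetical model output: instead of plain text, the model emits a
# structured request to run a tool.
model_output = json.dumps(
    {"tool": "run_python", "arguments": {"code": "print(2 + 2)"}}
)

def run_python(code: str) -> str:
    """Execute code and capture stdout -- in a real deployment this
    would happen inside a sandboxed container like the one in the
    article, not in-process."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

TOOLS = {"run_python": run_python}

# The host (not the model) dispatches the call and returns the result,
# which is then appended to the conversation for the model to
# autocomplete against.
call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result.strip())  # -> 4
```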

        • blastonico 5 hours ago

          Not really. You're referring to agents, but the model doesn't always require agents, and the public chatbot is not connected to a shell freely evaluating arbitrary commands.

  • varispeed 12 hours ago

    Earlier LLMs used to be a goldmine for company secrets (when it learned documents that shouldn't be on public internet). Most of it seems to be scrubbed now.

    • furyofantares 11 hours ago

      > Earlier LLMs used to be a goldmine for company secrets (when it learned documents that shouldn't be on public internet).

      Sounds fake. LLMs don't usually memorize things that appear once in their training set anyway, nor have I heard about major issues accidentally training on a bunch of non-public data.

      I can see how someone would believe it to be true though, since LLMs can easily hallucinate in a way that looks like this is true.

    • baxtr 12 hours ago

      In my humble experience company secrets are mostly useless for other companies.

      • dataviz1000 10 hours ago

        This reminds me of that one time when, after working at a company for 4 months, they informed me they were in the middle of an IP lawsuit, which is part of the reason they hired me to rewrite the front end without knowing that was going on. That was f*cked for reasons.

        Whatever the case, the only time people look at your social media history is to look for attacks, and the only reason they will look at a company's Slack messages and emails is to look for attacks during discovery.

        I would argue that company secrets are mostly useless for the company but very, very useful to other companies. For this reason, there should be retention policy of a day or two for almost all communication unless it is important, required by law, or documentation. And, definitely do not share that information with the public without good reason.

      • simonw 11 hours ago

        The bigger issue is around "material non‑public information" in stock market terms - things like unreported sales figures which someone could use to make trading decisions.

        Using that information for trading is illegal, but so is exposing that information outside of approved channels.

      • SoftTalker 12 hours ago

        Then why are they secret?

        • wkat4242 9 hours ago

          Because it's hard to define the parts that are really sensitive. At our work people must classify every document, but a lot of people choose public for everything because it doesn't enforce any restrictions. So they can just dump it in a folder and share it with the whole company. This is not what we want them to do, obviously, but people are lazy and don't like to create access lists. But anyway, it means we can't rely on the classification. And indicator detection like credit card and social security numbers is far from perfect. A lot of sensitive info will just be text, like about new products being developed: 3D models, code, strategy emails.

          Also, if people start rooting around in everything, they can take things out of context. If I send a message to my boss saying that I think something we're doing is stupid, and that were public, it could make some waves even though internally it's inconsequential because I'm a nobody. Also, many documents might have one or two bits that hint at really important information, and having them can help in finding it.

          As you probably know, there's tons of information in a multinational and the hardest part is finding the right stuff. This is one of the main tasks I use Copilot for. Also because outlook and SharePoint search are really terrible though. If those actually worked I wouldn't need copilot so much.

        • kingofmen 12 hours ago

          Because "mostly" does a lot of work in that sentence. Companies, like militaries, keep secret a lot of information that would be safe to release because they don't know which bits are highly unsafe.

        • samastur 12 hours ago

          Paranoia and not knowing which ones fall into "mostly" category :)

        • reaperducer 12 hours ago

          At most of the companies I've worked, low-grade managers love to hoard secrets. It makes them feel powerful. Someone gets promoted from Lower Level Manager Grade 4 to Lower Level Manager Grade 5 and they feel all "Oooh! Look at the new things I know!"

          My mother-in-law is like this with knowing what various relatives are doing. Being the gatekeeper of knowledge gives her imagined power. I guess it's just part of the human condition.

          • SoftTalker 11 hours ago

            Why limit it to low-grade managers?

            I know sysadmins and programmers who behave exactly the same way. They could give you permission or a script to do the thing you need to do, but they'd rather have you come to them and ask them to do it. Gives them a sense of purpose, I guess.

            • pastage 10 hours ago

              As someone who fixes lots of stuff for other people: nothing I do is secret, but learning to do it seems too hard for most. What I do is try to delegate when I find people who do want to learn.

              If someone shows me they are good at something they are going to have to expect being sent trickier problems.

              Sometimes it might seem like I keep things a secret. I am probably just having a bad day.

            • dns_snek 9 hours ago

              That has an awful lot to do with what "the thing" is. I'm sure there are a few people out there doing it just to feel more important, but often there's a good reason for denying someone access - either it's just a terrible idea to begin with or they don't know you well enough to trust you without someone else (i.e. their boss) specifically requesting it.

              I could be off base here about your experience, but I know that some people made the same comments about me when I pushed back on sharing dangerous credentials with inexperienced coworkers. Damned if you do, damned if you don't.

            • jon_adler 7 hours ago

              It may depend on what the script is for or the system being used. Segregation of duties is a risk mitigation principle of ISO 27001 to reduce fraud, waste, and error.

      • wkat4242 9 hours ago

        That's why corporate espionage is a really lucrative industry?

        Of course it depends what secrets. 99% will just be internal process drivel and inter departmental bickering but there's some real important stuff in there too.

    • simonw 12 hours ago

      Do you have any concrete examples of this? I have not seen any myself.

      • Barbing 8 hours ago

        I looked for an alleged case of an LLM apparently reproducing email signatures, but couldn't find it exactly, and of course many email signatures have been published over the years, especially on newsgroups. (Maybe it was a conspiratorial kind of thinking from web commenters assuming ChatGPT was training on emails users were feeding it, which as mentioned certainly doesn't need to be the case.)

        Something like the top screenshot here, though:

        https://www.zdnet.com/article/chatgpt-can-leak-source-data-v...

        (not parent commenter but) tl;dr no

    • nyarlathotep_ 11 hours ago

      When companies (non-"tech") started adopting them, they also had no "guardrails" for content outside the intent of such products (dunno what the standard term for this is).

      There was a boba tea company that had a free, no-sign-in required LLM that I used to generate a few bash scripts before ChatGPT free-tier started.

    • bongodongobob 12 hours ago

      Source?

  • reliablereason 10 hours ago

    Doesn't really seem to be a vulnerability?

    The safety in the system is that the code is executed in a container.

    • dboreham 10 hours ago

      Assuming the container was isolated. Which I'd assume it was.

  • ratg13 11 hours ago

    Seems like they could have taken a shortcut by giving Copilot a sudo binary to use, delivered as base64.

    • jfyi 10 hours ago

      You would need to change ownership of the file to root also.
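A minimal POSIX sketch of why both conditions matter (this assumes standard setuid semantics and is not the article's actual exploit path): the setuid bit makes a program run with the file *owner's* UID, so a binary owned by an unprivileged user gains nothing even with the bit set.

```python
import os
import stat
import tempfile

# Create a stand-in "binary" as the current (unprivileged) user.
fd, path = tempfile.mkstemp()
os.close(fd)

# The owner may set the setuid bit on their own file -- but without a
# chown to root (which itself requires root), executing it would only
# grant the owner's UID, not uid 0.
os.chmod(path, 0o755 | stat.S_ISUID)

st = os.stat(path)
print(bool(st.st_mode & stat.S_ISUID))  # True: setuid bit is set
print(st.st_uid == os.getuid())         # True: still owned by us, so no escalation

os.unlink(path)
```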

  • bramhaag 12 hours ago

      > We reported the vulnerability to Microsoft in April and they have since fixed it as a moderate severity vulnerability. As only important and critical vulnerabilities qualify for a bounty award, we did not receive anything, except for an acknowledgement on the Security Researcher Acknowledgments for Microsoft Online Services webpage.
    
    I guess it makes sense that a poor little indie company like Microsoft can't pay bug bounties. Surely no bad things will come out of this.
    • n2d4 12 hours ago

      The important part:

        > Now what have we gained with root access to the container?
      
        > Absolutely nothing!
      
        > We can now use this access to explore parts of the container that were previously inaccessible to us. We explored the filesystem, but there were no files in /root, no interesting logging to find, and a container breakout looked out of the question as every possible known breakout had been patched.
      
      I'm sure there are more ways to acquire root. If Microsoft pays out for one, they have to pay out for all, and it seems pretty silly to do that for something that's slightly unintended but not dangerous.
      • bramhaag 10 hours ago

          > a container breakout looked out of the question as every possible known breakout had been patched
        
        This is the part that concerns me. It only encourages an attacker to sit on an exploit like this until a new container breakout is discovered.
        • tptacek 9 hours ago

          Are you not concerned about all the other platforms that rely on containers as security boundaries between tenants? There are a lot of them.

          • bgwalter 8 hours ago

            It is hard to answer that since the stack is so convoluted. Some parts are forced on the user. Copilot is built into Microsoft Office workplace applications.

            If you break out of a container, do you have access to the same system that serves these applications? Who knows, it looks like a gigantic mess.

        • whazor 8 hours ago

          I expect that they run their containers with extra isolation, as virtual machines. So they have bigger problems if a breakout is possible.

      • nicce 9 hours ago

        Severity is based on impact. What was the impact here beyond single container and that specific user instance? Feels like moderate was okay, or even too high.

      • amelius 11 hours ago

        Maybe this was their honeypot container.

    • citizenpaul 11 hours ago

      I'll never understand why people do free dev work for multinational trillion dollar conglomerates.

      • hnthrow90348765 11 hours ago

        It's still good for reputation. This is by a researcher at a company, so a benefit for both of them. Plus if we didn't have bug bounty programs, they'd have to willingly work at Microsoft to do this research.

        • nicce 9 hours ago

          This could have turned out badly in terms of reputation if they had tried to complain that the vulnerability should be critical, or had used other ways to seek attention for not getting a bounty, but the current approach was rather neutral.

      • hombre_fatal 11 hours ago

        Could say the same thing about open source software.

        • blendergeek 11 hours ago

          It's why I don't understand why people believe in "open source". Why would I contribute free dev work to a billion dollar corporation? I do believe in "Free Software", which is contributing free dev work to my fellow man for the benefit of all mankind.

          • CharlesW 10 hours ago

            This may be a misconception. "Free software" (e.g. Linux) also benefits billion-dollar corporations and "open source" also benefits all mankind.

            • blendergeek 10 hours ago

              Free software and open source are two ideologies for the same thing. Free Software is the ideology of developing the software for the benefit of mankind (it's sometimes termed a "political" stance, but I see it as an ethical stance). Open source is the ideology of saving money at a corporation by not paying the developers. Sure, open source can benefit mankind, but it will only develop corporate software for money. When developing on my own time, I will focus on software that either personally benefits me or benefits other regular people.

              • CharlesW 10 hours ago

                I applaud your choice! I just can't think of any free software examples that don't also benefit corporations.

                • trueismywork 9 hours ago

                  You need to think about it in a different manner. When you have AGPL code, it benefits mankind more than corporations. There's a Harvard report on the value of open source to society, based on how much money corporations put in.

                  Today Linux is working nicely on desktops (even though it's not the year of Linux) and is heavily dominated by corporations. The parts where Linux doesn't do well are exactly the parts without corporate support.

                  Software is becoming complex enough that it's not possible for a single company to even maintain a compiler, let alone an office suite. It's perfect ground for either one company having a monopoly, or free software (not open source) being a base for the masses.

                  • kortilla an hour ago

                    That’s not an example of open source that doesn’t benefit corporations. Linux is amazing for corporations.

                • Wilder7977 8 hours ago

                  Lichess, the gazillion pieces of self-hosting software. There are many examples of free software that are exclusively (or let's say predominantly) used in noncommercial environments.

                  In any case, I agree with the commenter, and I think that developing software which is also used by companies is different from looking for vulnerabilities in the context and scope of a bug bounty program for a specific company. Yes, you could argue that users of said company are going to be more secure, but it's evident that even in this case the company is the direct beneficiary.

            • NoOn3 10 hours ago

              At least under some licenses like GPL/AGPL you get some code back.

          • victorbjorklund 9 hours ago

            Why do basic science which benefits everyone else for free?

          • eastbound 10 hours ago

            > Why would I contribute free dev work to a billion dollar corporation?

            The billion-dollar company contributed more to your startup than you do to them. Microsoft provides:

            - VSCode,

            - Hosts all NPM repositories. You know, the ones small startups are too lazy to cache (also because it’s much harder to cache NPM repositories than Maven) and then you re-download them at each build,

            - Typescript

            • wkat4242 9 hours ago

              Meh it depends whether you use those things of course. There's other IDEs, other languages. And Microsoft isn't doing this out of charity. A lot of the really useful plugins are not working on the open source version, so people that use them provide telemetry which is probably valuable. Or they use it as a gateway to their services like GitHub Copilot.

              If a mega corporation gives you something for free it's always more beneficial to them otherwise they wouldn't do it in the first place.

              • eastbound 9 hours ago

                So, no OSS contribution is valid unless you are using this very library?

                Did Microsoft contribute more to the OSS world, or did the OSS world contribute more to Microsoft? I pardon Microsoft because they have donated TypeScript, which is true civilizational progress. You could say the OSS world has contributed to Microsoft because they've given them a real OS, which they didn't have the in-house expertise to develop. We're even.

                Now you sound like you have a beef against large companies and would find any argument against them. Some guy once told me that I didn’t increase my employees by 30% out of benevolence, but because I must be an awful employer. See, why else would I increase employees.

                This behavior is actively harmful to the rest of the world. You are depriving good actions of a "thank you", and hence depriving recipients of good actions of more of them. With this attitude, the world becomes exactly like you project it to be: shitty.

                • bgwalter 8 hours ago

                  The open source ecosystem was perfect before Microsoft tried to meddle, assimilate and destroy.

                  Microsoft has destroyed several open source projects by infiltrating them with mediocre MSFT employees.

                  Microsoft bought the GitHub monopoly in order to control open source further. Microsoft then stole and violated the copyright by training "AI" on the GitHub open source.

                  Microsoft finances influential open source organizations like OSI in order to make them more compliant and business friendly.

                  The useful projects are tiny compared to the entire open source stack. Paying for NPM repositories is a goodwill gesture and another power grab.

                • wkat4242 7 hours ago

                  > So, no OSS contribution is valid unless you are using this very library?

                  You said Microsoft contributes to my start-up. That's only true if we actually use it.

                  > Now you sound like you have a beef against large companies and would find any argument against them.

                  I certainly have beef with Microsoft in particular, yes. And most big tech. I work a lot with Microsoft people and they're always trying to get us to do things that benefit them and not us (and I hate the attitude of a mere supplier trying to tell us what to do). Always trying to get us to evangelize their stuff, which is mostly mediocre, dumping constant rebranding campaigns on us, etc.

                  I'm not looking for arguments but I do hate the mega corporations and I don't believe in any benevolence on their side. I think the world would be much better off without them. They have way too much influence on the world. They should have none, after all they are not people and can't vote.

                  I also don't appreciate their contributions to eg Linux and OpenStreetMap. There's always ulterior motives. Like giving running on their cloud a step up, embedding their own IP like RedHat/IBM do (and Canonical always tries but fails at). Most of the contributions are from big tech now. I don't believe in a 'win/win' scenario involving corporations.

                  But I'm very much against unbridled capitalism and neoliberalism yes. I think it causes most of what's wrong with this world, from unequal distribution of wealth, extreme pollution, wars (influenced by the MIC) etc. Even the heavy political polarisation. The feud between the democrats and republicans is really just a proxy war for big corporate interests. Running a campaign requires so much trouble that it's no longer possible with a real grassroots movement.

                  But anyway this is my opinion. Take it as it is or don't. You have the right to you own opinions of course! I'm aware my opinion isn't very nuanced.

                  > This behavior is actively harmful to the rest of the world. You are depriving good actions from a “thank you” and hence you are depriving recipients of good actions from more of them.

                  Nah. Microsoft doesn't care what I think. I'm nothing but an ant on the floor to them.

                  Besides, they are doing this for reasons. The thank you isn't one of them. Hosting npm is peanuts for a big cloud provider, just advertising really. And it gives them a lot of metrics about the usage of libraries and from where. And VS Code, I'm sure they had a discussion about "what's in it for us in the long term" with some big envisioned benefits. You don't start a big project without that.

                  With most of their other products it's more clear. Like edge, they clearly made this to lock corporate customers further into their ecosystem (it can be deeply locked down which corporate IT loves because they enjoy playing BOFH) and for customers for upselling to their services. It's not better than Google's, they just replaced Google's online services with their own.

          • exe34 10 hours ago

            I think the argument is that when big companies make use of stuff, it gets more scrutiny and occasionally they contribute back improvements, and the occasional unicorn gets actual man hours paid for improving it. So if your project gets big enough, it's beneficial. But you have to have a MIT/BSD license usually, because companies will normally stay away from GPL.

        • dylan604 10 hours ago

          I know maintainers of projects have been hired directly by companies using their code as it is the most expedient way forward. Others might just offer up enough money to get the maintainer to take up a few of their specific issues/requests in a way that makes it worth their while. Just because someone is working on a project that is open source does not mean that money cannot be involved in the development. The company paying that money knows that the updates released as a normal part of the project will be available to anyone else using it as well.

        • Disposal8433 9 hours ago

          No, we can't say. I'm not an asshole, it helps people, and companies shun GPL licenses. That's not a valid comparison. Microsoft can go fuck itself, people around me love my software and it improves their lives.

          • tptacek 9 hours ago

            It's... 100% a valid comparison? The point is that doing free vulnerability research isn't irrational, not that doing open source work is bad. You're twisting yourself into a pretzel trying to keep the original argument alive.

        • pharrington 10 hours ago

          It's called "I use the software, I already want to improve the software I'm using, so after I improve it I'll contribute the improvements I've already made to the broader community."

          Granted, I myself have been guilty of not giving back to the open source community this way in the past, but I won't pretend that was reasonable or ethical of me!

          edit: after reading some comments, I realize I may have meant to say "free software" instead of "open source"

      • jimbokun 9 hours ago

        Well a lot of people do this kind of work to be able to commit crimes.

      • MattGaiser 11 hours ago

        It mostly pays in career benefits. Same reason why plenty intern for free.

        • qbit42 8 hours ago

          Who is interning for free as a software engineer?

          • koakuma-chan 5 hours ago

            Me

          • MattGaiser 7 hours ago

            People who did bootcamps and thus are too risky to hire for most roles and cannot get into the standard CS hiring pipeline. Especially now that junior roles are drying up.

            In professions like fashion, virtually everyone seems to at some point.

      • apwell23 11 hours ago

        i don't think they did the work for them. they just reported it to them.

    • 0xbadcafebee 8 hours ago

      M$: If you're not going to send any money, send some swag. Make it cool and hackers will wear it, and now you have them advertising for you and possibly even want to work for you. Culture is a tool, and hackers have culture, so learn how to use it.

    • paulddraper 11 hours ago

      As you’ll see elsewhere, “root” got them literally nothing. They tried but there was nothing to be had.

      • wkat4242 9 hours ago

        They didn't find anything they could do with it but that container isn't there for no reason. I agree with the rating but it's nonetheless worrying. You don't leave the house you bought unlocked because there's nothing in it to steal yet.

        • paulddraper 6 hours ago

          More like leaving your front gate unlocked.

  • bgwalter 11 hours ago

    There was a time in programming that tried to avoid monstrosities like the Python scientific data stack combined with Copilot integration hacks.

    That time produced qmail and postfix. We are back to the early 1990s.

  • oxguy3 11 hours ago

    It's wild how easy this was. I feel like we're really in the wild west era of security with these AI tools -- reminds me of early Web 2.0 days, like when "samy is my hero" hit and Myspace didn't even have a security team. I anticipate many high-profile incidents before they figure out how to tame this beast.

    • tptacek 9 hours ago

      I don't think there's really much "AI" involved in this; this is basically like breaking any hosted code IDE. I get that an LLM was the direct vector, but the underlying security issue is common to everything that runs remote code.