Another GitHub outage in the same day

(githubstatus.com)

277 points | by Nezteb 8 hours ago ago

206 comments

  • noodlesUK 7 hours ago ago

    Can someone in GitHub senior leadership please start paying attention and reprioritise towards actually delivering a product that's at least relatively reliable?

    I moved my company over to GH enterprise last year (from AzDO) and I'm considering moving us away to another vendor altogether as a result of the constant partial outages. Things that used to "just work" now are slow in the UI, and GH actions fail to schedule in a reasonable timeframe way more than they ever used to. I enjoy GH copilot as much as the next person, but ultimately I came to GH because I needed a git forge, and I will leave GH if the git forge doesn't work.

    • sobjornstad 7 hours ago ago

      I second this. GitHub used to be a fantastic product. Now it barely even works. Even basic functionality like the timeline updating when I push commits is unreliable. The other day I opened a PR diff (not even a particularly large one) and it took fully 15 seconds after the page visually finished loading -- on a $2,000 dev machine -- before any UI elements became clickable. This happened repeatedly.

      It is fairly stunning to me that we've come to accept this level of non-functional software as normal.

      • HoldOnAMinute 7 hours ago ago

        The trend of "non-functional software" is happening everywhere. See the recent articles about Copilot in Notepad, failing to start because you aren't signed in with your Microsoft Account.

        We are in a future that nobody wanted.

        • amarant 6 hours ago ago

          Not quite everywhere. There's a common denominator for all of those: Microsoft.

          Their business is buying good products and turning them into shit, while wringing every cent they can out of the business. Always has been.

          They have a grace period of about 2-4 years after acquisition where interference is minimal. Then it ramps up. How long a product can survive once the interference begins largely depends on how good senior leadership at that product company is at resisting the interference. It's a hopeless battle, the best you can do is to lose slowly.

          • Andrex 6 hours ago ago

            Things don't always ramp up after 2-4 years. Sometimes MS just kills the project or company after that period of time.

            See also their moves in the gaming industry.

            • amarant 4 hours ago ago

              Heh, I was working at 2 of those gaming companies when they were acquired by m$. I almost fear taking another job in the gaming industry, there seems to be some kind of bastardised version of Murphy's law that any gaming company that hires me will be acquired by ms 6 months later.

              I mean, that's obviously not the case, but it's weird that it happened twice!

              • Andrex an hour ago ago

                Very weird it happened twice! But that's kind of a cool factoid to tell people haha.

                Even with devs and publishers that don't die or are killed, they still lay hundreds off when a game is done. Then the studio limps along in pre-production mode on their next game for 4-5 years it seems like...

                Maybe the only job stability in the industry is with indies, and... Nintendo?

                • amarant 5 minutes ago ago

                  I'd add the hugely successful studios to that list. Even after ms acquisition, to the best of my knowledge neither of the 2 studios I worked at had any layoffs.

                  But they boast the best-selling video game in the history of video games (Tetris is a close-ish second) and the most downloaded free mobile game, respectively. Each has a player base larger than the population of the country it's from!

                  Here's to hoping ms is hesitant to gut either!

          • its_magic 6 hours ago ago

            I for one am shocked--SHOCKED, I say!--to learn that anything bad could happen as a result of a) putting everything in "the cloud" and b) handing control over the entire world's source code to the likes of Microsoft.

            Who could have POSSIBLY foreseen any kind of dire consequences?

            • endgame 2 hours ago ago

              Nobody. Nobody at all could have seen it. Microsoft is cool now, haven't you seen VSCode? They do Open Source, they run Linux, they've joined the fold, the tiger shed its stripes.

        • bonesss 6 hours ago ago

          This thread has complaints about two pieces of software from the same supplier, both degrading.

          The person(s) who wanted this want Azure to get bigger and have prioritized Azure over Windows and Office, and their share price has been growing handsomely.

          ‘Microslop’, perhaps, but their other nickname has a $ in it for a reason.

        • habitable5 7 hours ago ago

          > We are in a future that nobody wanted.

          some people wanted this future and put in untold amounts of money to make it happen. Hint: one of them is a rabid Tolkien fan.

          • b00ty4breakfast 5 hours ago ago

            the irony of Tolkien being associated with a techno-dystopia makes me nauseous

          • cyanydeez 7 hours ago ago

            Rent seekers paradise (ft copilot)

            • markus_zhang 3 hours ago ago

              It’s just feudal with Capital.

          • tayo42 3 hours ago ago

            Who is it?

        • michaelcampbell 7 hours ago ago

          MS PMs wanted it, got their OKRs OK'd, got their bonuses, and moved on.

        • its_magic 6 hours ago ago

          Laughs in my own Linux distro

        • dylan604 7 hours ago ago

          > We are in a future that nobody wanted.

          Nor deserved.

          • heliumtera 6 hours ago ago

            Then why is it the future we have?

            • its_magic 6 hours ago ago

              It was a complete accident. Nobody could have foreseen it. We are currently experiencing the sudden discovery that Microsoft is an evil corporation and maybe putting everything in the cloud wasn't the best move after all.

            • timacles 5 hours ago ago

              Let’s just say there are a couple of guys, who are up to no good. And they started making trouble in our neighborhood.

              Jokes aside, it's all because of hyper financial engineering. Every dollar, every little cent must be maximized. Every process must be exploited and monetized, and there is a small group of people who are essentially driving all of this across the world, in every industry.

      • matthewisabel 5 hours ago ago

        Hey from the GitHub team. Outages like this are incredibly painful and we'll share a post-mortem once our investigation is complete.

        It stings to have this happen as we're putting a lot of effort specifically into the core product, growing teams like Actions and expanding performance-focused initiatives in key areas like pull requests, where we're already making solid progress [1]. I'd love it if you would reach out to me via DM about the perf issues you mentioned with diffs.

        There's a lot of architecture, scaling, and performance work that we're prioritizing as we work to meet the growing code demand.

        We're still investigating today's outage and we'll share a write up on our status page, and in our February Availability Report, with details on root cause and steps we're taking to mitigate moving forward.

        [1] https://x.com/matthewisabel/status/2019811220598280410

        • Etheryte 4 hours ago ago

          Literally everyone who has used GitHub to look at a pull request in, say, the last year has experienced the ridiculous performance issues. It's a constant running joke on HN at this point. There is no way you don't know this. Inviting people to take this to a private channel, along with the rest of your comment really, is simply standard corporate PR.

          • matthewisabel 4 hours ago ago

            Yes, agreed, it's been a huge problem, and we shipped changes last week to address some of the gnarly p99 interactions. It doesn't fix everything, and large PRs have a lot of room to be faster. It's still good to know where some of the worst performance issues are, to see if there's anything particularly problematic or if a future change will help.

        • materielle 4 hours ago ago

          Hopefully the published postmortem will announce that all features will be frozen for the foreseeable future and every last employee will be focused on reliability and uptime?

          I don’t think GitHub cares about reliability if it does anything less than that.

          I know people have other problems with Google, but they do actually have incredibly high uptime. This policy was frequently applied to entire orgs or divisions of the company if they had one outage too many.

        • whstl 2 hours ago ago

          It's insulting to see the word "progress" being used when the PR experience is orders of magnitude slower than it was years ago, when everyone had way worse computers. I have a maxed M5 MacBook and sometimes I can barely review some PRs.

        • cebert 2 hours ago ago

          Can you guys stop adding new features for a while please and just make what’s there more reliable?

        • danudey 4 hours ago ago

          For what it's worth, I doubt that people think it's the engineering teams that are the problem; it feels as though leadership just doesn't give a crap about it, because, after all, if you have a captive audience you can do whatever you want.

          (See also: Windows, Internet Explorer, ActiveX, etc. for how that turned out)

          It's great that you're working on improving the product, but the (maybe cynical) view that I've heard more than anything is that when faced with the choice of improving the core product that everyone wants and needs or adding functionality to the core product that no one wants or needs and which is actively making the product worse (e.g. PR slop), management is too focused on the latter.

          What GitHub needs is a leader who is willing and able to say no to the forces enshittifying the product with crap like Copilot, but GitHub has become a subsidiary of Copilot instead and that doesn't bode well.

          • tayo42 3 hours ago ago

            > people think it's the engineering teams that are the problem;

            It could be, some people are just terrible at their job. Lots of teams have low quality standards for their work.

            Maybe that still comes down to leaders but for different reasons. You can ship useless features without downtime.

            • tjwebbnorfolk 2 hours ago ago

              Permitting terrible engineers to continue to work for you is a management problem.

              • tayo42 5 minutes ago ago

                Sort of I think. There's a culture aspect to it too. Everything is blameless, there's no reason to not mess up.

      • dev_l1x_be 5 hours ago ago

        So the React rewrite did not help after all? Imagine: one of the largest software tool companies on Earth cannot reliably rebuild something in React. I've lost count of the inconsistency issues React introduced.

        https://news.ycombinator.com/item?id=33576722

        • catigula 5 hours ago ago

          React isn't causing these issues.

          • dham 2 hours ago ago

            Then why is the site slower than it was in 2012 on a 2009 Macbook?

          • dev_l1x_be 5 hours ago ago

            Good to know. So it only causes the UI inconsistency bugs.

            • danudey 4 hours ago ago

              The new design/architecture allows them to do great stuff in the name of efficiency; for example, when browsing through some parts of the UI, it's now much more capable of just updating the part of the page that's changed, rather than having to reload the entire thing. This is a significantly better approach for a lot of things.

              I understand that the 'updating the part of the page that's changed' functionality is now dramatically slower, more unresponsive, and less reliable than the 'reload the entire thing' approach was, and it feels like browsing the site via Citrix over dial-up half the time, but look, sacrifices have to be made in the name of making things better even if the sacrifice is that things get worse instead.

              • hunterpayne 3 hours ago ago

                > for example, when browsing through some parts of the UI

                React allows this? I didn't realize that I needed React to do this when we used Java and Js to do this 20 years ago. I also didn't realize I needed React to do this when we used Scala and generated Js to do this 10 years ago. JFC, the world didn't start when you turned 18.

      • samgranieri 6 hours ago ago

        I've been a GitHub user since the very early days. I had a beta invite to the service. I really wish they didn't swap out the FE for a React FE.

        They need to start rolling back some of their most recent changes.

        I mean, if they want people to start moving to self hosted GitLab, this is gonna get that ball rolling.

        • throw20251220 5 hours ago ago

          GitLab is slower for me than that React GH app. Why would I move to GitLab?

          • tarellel 4 hours ago ago

            Was this a local/on prem version of GL or the hosted web version?

            My previous org had an on-prem version hosted on a local VM. It was extremely fast; we set up another VM for the runners and one for storing all the Docker containers. The thing I've seen people do is use the VM they put their GitLab instance on for everything, which ends up bogging things down quite a bit.

      • sodapopcan 7 hours ago ago

        Ya, it really was one of the most enjoyable web apps to use pre-MS. I'm sure there are lots of things that have contributed to this downfall. We certainly didn't need bullshit features like achievements.

        • noodlesUK 7 hours ago ago

          Even just a year or two ago its web interface was way snappier. Now an issue with a non-trivial number of comments, or a PR with a diff of even just a few hundred or thousand lines of changes causes my browser to lock up.

          • sodapopcan 6 hours ago ago

            But even clicking around tabs and whatnot is noticeably slower. It used to be incredibly snappy.

          • bethekidyouwant an hour ago ago

            Which website lets you load PRs with 1000 lines and it’s fast? Honest question, it’s not gitlab.

      • oldestofsports 4 hours ago ago

        This is just Microsoft doing the only thing they know, which is taking a good product and turning it into a monster by bashing out whatever feature is on some investor's mind and barely even works in an isolated, vacuum-sealed test chamber. All Microsoft products are like bad experiments.

      • kimixa 7 hours ago ago

        We loved GitHub as a product back when it didn't need to return a profit beyond "getting more users".

        I feel this is just the natural trajectory for any VC-funded "service" that isn't actually profitable at the time you adopt it. Of course it's going to change for the worse to become profitable.

        • tibbar 7 hours ago ago

          GitHub isn't VC funded at the moment, though. It's owned by Microsoft. Not that this necessarily changes your point.

          • danudey 4 hours ago ago

            > Of course it's going to change for the worse

            > It's owned by Microsoft.

            I see no contradictions here.

        • notpushkin 6 hours ago ago

          I don't get it. Why would making the UI shittier possibly lead to more profit?

          • danudey 4 hours ago ago

            Moving to client-side rendering via React means less server load spent generating boilerplate HTML over and over again.

            If you have a captive audience, you can get away with making the product shittier because it's so difficult for anyone to move away from it - both from an engineering standpoint and from network effects.

          • kimixa 5 hours ago ago

            It seems most of the complaints are about reliability and infrastructure - which is very often a direct result of a lack of investment and development resources.

            And then many of the UI changes people have been complaining about are related to things like Copilot being forcibly integrated - which is very much in the "Microsoft expects to gain a profit by encouraging its use" camp.

            It's pretty rare that companies make a bad UI because they want a bad UI; it's normally a second-order effect of other priorities - such as promoting other services or encouraging more ad impressions.

      • blibble 5 hours ago ago

        > GitHub used to be a fantastic product. Now it barely even works.

        it's almost as if Microsoft bought it, isn't it?

    • kasey_junk 7 hours ago ago

      “ I enjoy GH copilot as much as the next person”

      So not at all?

      • 1f60c 7 hours ago ago

        That does seem to be the implication, yes. :D

      • nfg 5 hours ago ago

        Really? I’d be interested to hear more.

        Disclaimer: I work in Microsoft (albeit in a quite disconnected part of it, nothing to do with GitHub or Copilot).

        • kasey_junk 3 hours ago ago

          In testing for my workflows copilot significantly underperforms the SOTA agents, even when using the exact same models. It's not particularly close either.

          This has led to two classes of devs at my company: a) the AI-hesitant, for many of whom Copilot is their only interaction with AI, having their worst fears confirmed about how bad it is; and b) AI enthusiasts who are irritated by dealing with management that doesn't know the difference and pushes back on their asks for access to SOTA agents.

          If I were the frontier labs, and wasn't billions of dollars beholden to Microsoft, I'd cut Copilot off. It poisons the well for adoption of their other systems. I don't deal with the other Copilots besides the coding agent variants, but I hear similar things about the business application variants.

          Microsoft's AI reputation is in the toilet right now, and I'm not sure it's understood within the org how bad it really is.

        • macintux 4 hours ago ago

          I've only started using it, so maybe I'm holding it wrong, but the other day I asked the IntelliJ plugin to explain two lines of code by referencing their line numbers. It printed and explained two entirely different lines in a different part of the file. I asked again. It picked two lines somewhere else.

          After using ChatGPT for the last 6 months or so, Copilot feels like a significant downgrade. On the other hand, it did easily diagnose a build failure I was having, so it’s not useless, just not as helpful.

        • 0xy 4 hours ago ago

          Not even Microsoft employees like Copilot. Maybe start by asking why not even your own coworkers will use your slop.

          https://www.theverge.com/tech/865689/microsoft-claude-code-a...

    • tibbar 6 hours ago ago

      GitHub used to publish some pretty interesting postmortems. Maybe they still do. IIRC they were struggling with scaling their SQL DB and were starting to hit its limits. It's a tough position to be in, because you either have to do a massive migration to a data layer with much different semantics, or you have to keep desperately squeezing out performance and skirting the edge of outages with a DB that wasn't really meant to handle what you're doing with it now. The OpenAI blog post on "scaling" Postgres to their current scale has much the same flavor, although I think they're doing it better than GitHub appears to be.

    • co_king_3 7 hours ago ago

      > Can someone in GitHub senior leadership please start paying attention and reprioritise towards actually delivering a product that's at least relatively reliable?

      It's Microsoft. A reliable product is not a reasonable expectation.

    • gerdesj 4 hours ago ago

      The ultimate irony is that Linus Torvalds designed git, with the Linux kernel codebase in mind, to work without any form of infrastructure centralisation. No repo trumps any other.

      Surely some of you crazy kids can rummage up a CI pipeline on your laptop? 8)

      Anyway, I only use GH as something to sync interesting stuff from, so it doesn't get lost.

      • yoyohello13 4 hours ago ago

        Setting up a git server for yourself is actually really easy. I use it at home for personal stuff.

        https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protoco...
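
        For anyone who hasn't tried it, a minimal sketch of the SSH-based setup that chapter describes (user, host, and paths here are placeholders):

            # on the server: a dedicated user and a bare repository
            sudo adduser git
            sudo -u git git init --bare /home/git/myproject.git

            # on your machine: point a repo at it and push
            git remote add origin git@myserver.example.com:/home/git/myproject.git
            git push -u origin main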

      • lovich 4 hours ago ago

        I wonder how many engineers have even worked on a git repo with multiple remotes.

        I’ve only worked on a team once where we all were set up as remotes to each other and that was over a decade ago.

        • flowardnut 4 hours ago ago

          hg really spoiled us with these features, though I also haven't used them in ages

          • lovich 4 hours ago ago

            We actually did it with raw git in the CLI, but I doubt I could set that up correctly nowadays without poring over the man pages again.
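
            If anyone wants to recreate it, the raw-git version is just extra remotes pointing at teammates' machines; a rough sketch with made-up hosts (SSH access assumed):

                # add a teammate's working repo as a remote
                git remote add alice ssh://alice@alice-box.local/home/alice/project.git

                # pull their branches directly, no central server involved
                git fetch alice
                git log --oneline main..alice/feature-x
                git merge alice/feature-x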

    • bigbuppo 6 hours ago ago

      Not going to happen. This is terminal decline. Next step is to kill off free repos, and then they'll start ratcheting up the price to the point that they have one small dedicated engineering team supporting each customer they have. They will have exactly one customer. At some point they'll end up owned by Broadcom, OpenText, Rocket, or Progress.

      • tazjin 5 hours ago ago

        Killing off free repos is not going to happen. That would be a suicide move on the level of the Digg redesign, or Tumblr's porn ban.

        It kind of would be good for everyone if they did do it though. Need to get rid of this monopoly, and maybe people will discover that there are alternatives with actually good workflows out there.

        • rcakebread 2 hours ago ago

          Microsoft has suicide-ed bigger than Digg.

        • bigbuppo 4 hours ago ago

          They are owned by Microsoft. When has Microsoft ever had a good idea?

          • danudey 4 hours ago ago

            Buying Github seems like a good idea? But fucking it up wasn't, so maybe it comes out even.

    • wnevets 7 hours ago ago

      > Can someone in GitHub senior leadership please start paying attention and reprioritise towards actually delivering a product that's at least relatively reliable?

      They claim that is what they are doing right now. [1]

      [1] https://thenewstack.io/github-will-prioritize-migrating-to-a...

      • semiquaver 7 hours ago ago

        Zero indication that migrating to azure will improve stability over the colos they are in now. The outages aren’t caused by the datacenter, whatever MS execs say.

      • amluto 7 hours ago ago

        The problem with the GH front end being an unbelievably bloated mess will not be even slightly improved by moving to Azure.

      • skywhopper 7 hours ago ago

        "Migrating to Azure" is, unfortunately, often the opposite of "delivering a reliable product".

    • markus_zhang 6 hours ago ago

      Maybe take the initiative and move your own repos first? It would definitely have a bigger effect than begging here.

    • jbreckmckye 7 hours ago ago

      As an aside, God, Azure DevOps, what a total pile of crap that product is

      My "favourite" restriction that an Azure DevOps PR description is limited to a pathetic 4000 characters.

      • OkayPhysicist 5 hours ago ago

        My favourite restriction is the fact that colored text doesn't work in dark mode. Why? Because whatever intern they had implement dark mode didn't understand how CSS works and just slapped !important on all the style changes that make dark mode dark, which overwrites the color data.

        I ended up writing a browser extension for my team to fix it, because the boss loved to indicate stuff with red/green text.

      • dylan604 7 hours ago ago

        Amazon's deprecated CodeCommit is limited to 150 chars like it's an old SMS or Tweet.

        • heartbreak an hour ago ago

          Surprisingly they un-deprecated CodeCommit recently.

          https://aws.amazon.com/blogs/devops/aws-codecommit-returns-t...

        • jbreckmckye 6 hours ago ago

          Ha! Nice. I never worked with CodeStar / CodeCommit. Was it pretty bad?

          • dylan604 6 hours ago ago

            That's going to depend on each user's demands. The PR message limit is the biggest pain for me. I don't depend on the UI very often. I'm not trying to do any CI/CD nonsense. I just use it as a bog standard git repo. When used as that, it works just fine for me

      • noodlesUK 7 hours ago ago

        It shows you the level of quality to expect from a Microsoft flagship cloud product...

        • jbreckmckye 7 hours ago ago

          So I work for a devtools vendor (Snyk) and 6 months ago I signed into Azure DevOps for the first time in my life

          I couldn't believe it. I actually thought the product was broken. Just from a visual perspective it looked like a student project. And then I got to _using_ the damn thing

          • noodlesUK 7 hours ago ago

            It's also completely unloved. Even MSFT Azure's own documentation regularly treats it as a second class citizen to GitHub. I have no idea why they don't just deprecate the service and officially feature freeze it.

            Honestly that's the case with a lot of Azure services though.

            • stackskipton 4 hours ago ago

              Someone mentioned the boards, but Pipelines and Actions are also not 100% compatible.

              My company uses Azure DevOps for a few things, and any attempt to convert to GitHub was quickly abandoned after we spent 3 hours trying to get some Action working.

              However, all usability quirks aside, I actually prefer it these days, since Microsoft doesn't really touch it and it just sits in the corner doing what I need.

            • easton 6 hours ago ago

              It's the boards. GitHub issues doesn't let you do all the arcane nonsense Azure DevOps' boards let you do.

              • bigfudge 4 hours ago ago

                Isn’t that a feature?

                • easton 2 hours ago ago

                  A feature for devs, but I have often been told management is paid by the required field on tickets.

      • yoyohello13 4 hours ago ago

        My favorite is that it doesn't support ed25519 ssh keys.
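
        If you're stuck with it, the usual workaround is a separate RSA key scoped to just that host (assuming the hosted ssh.dev.azure.com endpoint); a sketch:

            # generate a legacy RSA key only for Azure DevOps
            ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_azdo -C "azdo only"

            # in ~/.ssh/config: use it for that host, keep ed25519 everywhere else
            Host ssh.dev.azure.com
                IdentityFile ~/.ssh/id_rsa_azdo
                IdentitiesOnly yes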

      • tibbar 7 hours ago ago

        You would kind of expect with the pressure of supporting OpenAI and GitHub etc. that Azure would have been whipped into shape by now.

        • semiquaver 6 hours ago ago

          AZDO has been in KTLO maintenance mode for years.

    • Wojtkie 4 hours ago ago

      My org just moved to Gitlab because of the GH actions problems.

    • rvz 6 hours ago ago

      You might as well self-host at this point as that is far more reliable than depending on GitHub.

      Additionally, there is no CEO of GitHub this time that is going to save us here.

      So as I said many years ago [0], in the long term a better way is to self-host, or to use alternatives such as Codeberg or GitLab, which you can at least host yourself.

      [0] https://news.ycombinator.com/item?id=22867803

      • none2585 33 minutes ago ago

        You can self host GitHub as well

    • philipallstar 4 hours ago ago

      Honestly, Gitlab is pretty decent.

  • jamiemallers 3 hours ago ago

    What's interesting about GitHub outages is how they've become a forcing function for teams to re-examine their deployment pipeline resilience.

    We've gotten so used to GitHub being "always there" that many teams have zero fallback. CI/CD stops. Deploys halt. Hotfixes can't ship. During an active incident on your own systems, that's brutal.

    A few things I've seen teams do after getting burned:

    1. Mirror critical repos to a secondary git host (GitLab, self-hosted Gitea); see the sketch after this list

    2. Cache dependencies aggressively so builds don't fail on external fetches

    3. Have a manual deploy runbook that doesn't require GitHub Actions
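
    For item 1, the mirroring part is just a second remote plus a periodic push; a minimal sketch (the remote name and GitLab URL are placeholders):

        # one-time: add the secondary host as an extra remote
        git remote add backup git@gitlab.example.com:myorg/myrepo.git

        # from a cron job or CI step: push every branch and tag
        git push backup --all
        git push backup --tags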

    The status page being hours behind reality is a separate frustration. I've started treating official status pages as "eventually consistent" at best — by the time they update, Twitter/X and internal monitoring have usually told me what I need to know.

  • kevmo314 7 hours ago ago

    I wonder if GitHub is feeling the crush of fully automated development workflows? Must be a crazy number of commits now to personal repos that will never convert to paid orgs.

    • 1f60c 7 hours ago ago

      IME this all started after MSFT acquired GitHub but well before vibe coding took the world by storm.

      ETA: Tangentially, private repos became free under Microsoft ownership in 2019. If they hadn't done that, they could've extracted $4 per month from every vibe coder forever(!)

      • dizhn 5 hours ago ago

        Is someone who is not really using github's free service losing something important?

        • _heimdall 4 hours ago ago

          As an individual, likely not. As a team or organization there are nice benefits though.

    • multisport 3 hours ago ago

      No, it's because they are in the middle of an AWS-to-Azure migration, and because they cannot/will not be held accountable for downtime.

    • reactordev 7 hours ago ago

      This is the real scenario behind the scenes. They are struggling with scale.

      • jbreckmckye 7 hours ago ago

        How much has the volume increased, from what you know?

        • reactordev 6 hours ago ago

          Over 100x is what I’m hearing. Though that could just be panic and they don’t know the real number because they can’t handle the traffic.

          • bredren 6 hours ago ago

            An anecdote: On one project, I use a skill + custom cli to assist getting PRs through a sometimes long and winding CI process. `/babysit-pr`

            This includes regular checks on CI checks using `gh`. My skill / cli are broken right now:

            `gh pr checks 8174 --repo [repo] 2>&1)`

               Error: Exit code 1
            
               Non-200 OK status code: 429 Too Many Requests
               Body:
               {
                 "message": "This endpoint is temporarily being throttled. Please try again later. For more on scraping GitHub and how it may affect your rights, please review our Terms of Service (https://docs.github.com/en/site-policy/github-terms/github-terms-of-service)",
                 "documentation_url": "https://docs.github.com/graphql/using-the-rest-api/rate-limits-for-the-rest-api",
                 "status": "429"
               }
          • yallpendantools 3 hours ago ago

            Goodness if that's true... And I actually felt bad when they banned me from the free tier of LFS.

          • falloutx 3 hours ago ago

            Lmao, not even close. GitHub themselves have released the numbers: 121M new repos in 2025, ending the year at 630M total.

            https://github.blog/news-insights/octoverse/octoverse-a-new-...

          • chasd00 6 hours ago ago

            So much for GitHub being a good source of training data.

            Btw, someone prompt Claude code “make an equivalent to GitHub.com and deploy it wherever you think is best. No questions.”

          • jbreckmckye 6 hours ago ago

            One hundred? Did I read that right?

            • falloutx 3 hours ago ago

              No, it's not. 121M repos were added on GitHub in 2025, and overall they have 630 million now. There has probably been at best a 2x increase in output (mostly trash output), but nowhere near 100x.

              https://github.blog/news-insights/octoverse/octoverse-a-new-...

              • reactordev an hour ago ago

                Published in Oct 2025... I think your estimate is off.

                Note the hockey stick growth in the graph they showed in Oct.

                Here we are in February.

                It's gotten way worse now with additional Claudes, Claws, Ralphs, and such.

                It may not be 100x as I was told, but it's definitely putting a strain on the entire org.

            • 9cb14c1ec0 6 hours ago ago

              Yes, millions of people running code agents around the clock, where every tiny change generates a commit, a branch, a PR, and a CI run.

              • neuropacabra 5 hours ago ago

                I simply do not believe that all of these people can and want to set up CI. Some, maybe, but even if the agent recommends it, only a fraction of people would actually do it. Why would they?

                • ncruces 4 hours ago ago

                  But if you set up CI, you can open the mobile site on your phone, chat with Copilot about a feature, then ask it to open a PR, let CI run, iterate a couple of times, then merge the PR.

                  All the while you're playing Wordle and reading the news on the morning commute.

                  It's actually a good workflow for silly throwaway stuff.

                • dmix 4 hours ago ago

                  Github CI is extremely easy to set up and agents can configure it from the local codebase.

                • cactusplant7374 4 hours ago ago

                  Codex did it automatically for me without asking.

            • reactordev 6 hours ago ago

              There's a huge uptick in people who weren't engineers suddenly using git for projects with AI.

              This is all through the grapevine, but yeah, you read that right.

    • winddude 7 hours ago ago

      I was wondering about that the other day, the sheer amount of code, repos, and commits being generated now with AI. And probably more large datasets as well.

    • dwoldrich 5 hours ago ago

      Live by the AI Agent hype, die by the AI Agent crush.

  • h4kunamata 4 hours ago ago

    GitLab is the solution, if you aren't on it already.

    I worked for one of Australia's largest airline companies; monthly meetings with the GitHub team could be summed up in one word: AI.

    There is zero focus on the actual platform as we knew it; it is all AI, Copilot, more AI, and more Copilot.

    If you are expecting things to get better, I have bad news for you. Copilot is not being adopted by companies as they hoped; they are using Claude themselves. If Microsoft ever rolls it back, boy oh boy, things will get ugly.

    • philipwhiuk 3 hours ago ago

      To be honest all the GitLab dev focus is also AI.

      * Originally it was Dev (issues)

      * Then it was DevOps (runners)

      * Then it was DevSecOps (SAST)

      * Now it's AI DevSecOps (reviews, etc)

      The problem is that each feature has been slightly more half-baked than the last one. The SecOps stuff is full of gotchas and flags issues which don't exist. Troubleshooting why a pipeline isn't behaving correctly is extremely painful.

      The other problem is that if you want a feature you have to upgrade the seat license for everyone :(

      • h4kunamata an hour ago ago

        End users are being screwed over left and right; you'd better host your own code. GitHub and GitLab only add a GUI on top of git.

        Enterprises will pay if that means no interruptions and no AI being pushed everywhere. Some companies adopt GitLab because you can self-host it; even the runners are self-hosted, and there is no built-in runner like GitHub's.

    • bsimpson 4 hours ago ago

      Do they have their own model? I thought Copilot was a frontend for Claude et al.

      • h4kunamata 2 hours ago ago

        To the best of my knowledge, Copilot is a Microsoft in-house thing, and it sucks at everything. Claude is far superior, and Microsoft is allegedly using Claude internally over its own AI solution.

        • rohanmallya an hour ago ago

          GitHub Copilot is just a front-end. You pay for the front-end and some premium requests every month.

          The base models like GPT-4o and 4.1 don't have a usage cap. Models like Claude Sonnet, Opus, etc. have a monthly limit, and you can pay more to use these through GitHub Copilot.

  • dec0dedab0de 5 hours ago ago

    I still say that mixing CI/CD with code/version control hosting is a mistake.

    At its absolute best, everything just works silently, and you now have vendor lock-in with whichever proprietary system you chose.

    Switching git hosting providers should be as easy as changing your remotes and pushing. Though nowadays that requires finding solutions for the MR/PR process, the wiki, and all the extra things your team might have grown to rely on. As always, the bundle is a trap.
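
    To be concrete about the "changing your remotes" part, the git side really is this small (the Codeberg URL here is just an example target):

        # repoint the existing remote at the new host and push everything
        git remote set-url origin git@codeberg.org:myorg/myrepo.git
        git push --all origin
        git push --tags origin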

    • bamboozled 5 hours ago ago

      I don't think any of this was a mistake ;) Lock-in was by design.

    • monkaiju 4 hours ago ago

      I mean, not necessarily proprietary right? There are OSS solutions like forgejo that make it pretty simple, at least as simple as running a git system and a standalone CI system

      • dec0dedab0de 4 hours ago ago

        i mean that is certainly better, but I still don’t like having them coupled. Webhooks were a great idea, and everyone seems to have forgotten about them.

  • falloutx 7 hours ago ago

    We can all chill for a couple of weeks; GitHub guys, take your time. In fact, don't even worry about it.

  • vampiregrey 7 hours ago ago

    At this point, GitHub outages feel closer to cloud provider outages than a SaaS blip. Curious how many people here still run self-hosted Git (GitLab / Gitea) vs fully outsourcing version control.

    • neilv 6 hours ago ago

      Yay for GitLab and Forgejo/Gitea.

      My previous two startups used GitLab successfully. The smaller startup used paid-tier hosted by gitlab.com. The bigger startup (with strategic cutting-edge IP, and multinational security sensitivity) used the expensive on-prem enterprise GitLab.

      (The latter startup, I spent some principal engineer political capital to move us to GitLab, after our software team was crippled by the Microsoft Azure-branded thing that non-software people had purchased by default. It helped that GitLab had a testimonial from Nvidia, since we were also in the AI hardware space.)

      If you prefer to use fully open source, or have a $0 budget, there's also Forgejo (forked from Gitea). I'm using it for my current one-person side-startup, and it's mostly as good as GitLab for Git, issues, boards, and wiki. The "scoped" issue labels, which I use heavily, are standard in Forgejo, but paid-tier in GitLab. I haven't yet exercised the CI features.

    • arthur-st 6 hours ago ago

      Self-hosted Gitea is a good time if you're comfortable taking care of backups and other self-hosting stuff.

    • betaby 7 hours ago ago

      Self hosted GitLab is absolutely worth it.

      • edverma2 7 hours ago ago

        I was just looking into this today but it seems pricey. $29/user/month for basic features like codeowners and defining pr approval requirements. Going with Forgejo.

        • rcakebread an hour ago ago

          Forgejo isn't comparable.

        • 1f60c 7 hours ago ago

          Wait, what? So you're on the hook for backups, upgrades, etc. and you have to pay them for the privilege? I thought GitLab was free as in speech and beer.

          • cyberax 6 hours ago ago

            It's an Open Core model. You can deploy the free version, but it lacks some pretty important features like SSO.

            But that $30 per month per user is also the cost for their cloud-hosted version. It also includes quite a bit of CI/CD runtime.

      • vampiregrey 7 hours ago ago

        I think I will slowly start moving to self-hosted git infra in my homelab.

      • monkaiju 7 hours ago ago

        or forgejo!

        • DeepYogurt 7 hours ago ago

          Forgejo should 100% be people's default for self hosting

        • zhouzhao 7 hours ago ago

          Yeah man. Forgejo (albeit a weird name, from a language that nobody wants to use) is doing very well in my homelab.

          When I worked at the university we used Gitea.

          Every job I've had outside of university used self-hosted GitLab. While I don't like the UI or any aspect of GitLab a lot, it gets the job done.

          • zer00eyz 7 hours ago ago

            I use Gitea already... I hadn't seen Forgejo before today. I'm now curious if it is worth the switch.

            • terminalbraid 6 hours ago ago

              Forgejo was originally forked from Gitea.

      • sam_lowry_ 7 hours ago ago

        Self-hosted git is absolutely worth it.

      • blibble 7 hours ago ago

        forgejo doesn't need half a supercomputer to run it

    • yoyohello13 4 hours ago ago

      We self-host the full fat version of GitLab and it's very worth it.

  • Kovah 7 hours ago ago

    I'm considering moving away from GitHub, but I need a solid CI solution, and ideally a container registry as well. I would totally pay for a solution that just works. Any good recommendations?

    • adamcharnock 6 hours ago ago

      We can run a Forgejo instance for you with Firecracker VM runners on bare metal. We can also support it and provide an SLA. We're running it internally and it is very solid. We're running the runners on bare metal, with a whole lot of large CI/CD jobs (mostly Rust compilation).

      The down side is that the starting price is kinda high, so the math probably only works out if you also have a number of other workloads to run on the same cluster. Or if you need to run a really huge Forgejo server!

      I suspect my comment history will provide the best details and overview of what we do. We'll be offering the Firecracker runner back to the Forgejo community very soon in any case.

      https://lithus.eu

      • dizhn 2 hours ago ago

        You've got any docs for firecracker as forgejo runners?

    • joeskyyy 7 hours ago ago

      Long time GitLab fan myself. The platform itself is quite solid, and GitLab CI is extremely straightforward but allows for a lot of complexity if you need it. They have registries as well, though admittedly the permission stuff around them is a bit wonky. But it definitely works and integrates nicely when you use everything all in one!

    • dylan604 7 hours ago ago

      Should our repos be responsible for CI in the first place? Seems like we keep losing the idea of simple tools to do specific jobs well (unix-like) and keep growing tools to be larger while attempting to do more things much less well (microsoft-like).

      • tibbar 7 hours ago ago

        I think most large platforms eventually split the tools out because you indeed can get MUCH better CI/CD, ticket management, documentation, etc from dedicated platforms for each. However when you're just starting out the cognitive overhead and cost of signing up and connecting multiple services is a lot higher than using all the tools bundled (initially for free) with your repo.

    • swamp-agr 7 hours ago ago
      • dysoco 6 hours ago ago

        Why this and not Garnix?

    • tibbar 7 hours ago ago

      Lots of dedicated CI/CD out there that works well. CircleCI has worked for me

    • import 5 hours ago ago

      Gitea / forgejo. It supports GitHub actions.

    • yoyohello13 4 hours ago ago

      GitLab has all the things.

    • hhh 5 hours ago ago

      GitLab, best ci i’ve ever used.

    • cyanydeez 6 hours ago ago

      GitLab CE can be self-hosted with container-based CI and is fairly easy to set up.
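
      For anyone sizing that up, the quickest way to kick the tires is the Omnibus container image; a rough sketch (hostname and published ports are whatever suits your network, and you'll want real storage behind the volumes):

          docker run -d --name gitlab \
            --hostname gitlab.example.com \
            -p 80:80 -p 443:443 -p 2222:22 \
            -v gitlab-config:/etc/gitlab \
            -v gitlab-logs:/var/log/gitlab \
            -v gitlab-data:/var/opt/gitlab \
            gitlab/gitlab-ce:latest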

      • IshKebab 6 hours ago ago

        CE is pretty good. The things that you will miss that made us eventually pay:

        * Mandatory code reviews

        * Merge queue (merge train)

        If you don't need those it's good.

        Also it's written in Ruby so if you think you'll ever want to understand or modify the code then look elsewhere (probably Forgejo).

  • natas 2 hours ago ago

    This is exactly why my employer is unlikely to adopt Azure. When CoreAI assets like GitHub appear poorly managed, it undermines confidence in the rest of the ecosystem. It’s unfortunate, because Microsoft seems to overlook how strongly consumer experience shapes business perception. Once trust is damaged, no amount of advertising spend can fully restore it.

    • athorax 2 hours ago ago

      They don't care. Their sales reps absolutely know that if you are using Microsoft products, it is because you are locked in so deeply that escape is nearly impossible.

  • mrshu 4 hours ago ago

    This (multiple major outages a day) has unfortunately been happening for quite a while now -- on the 2nd of February, 2026 for instance.

    The GitHub Status Page does not visualize these very well but you can see them parsed out and aggregated here:

    https://mrshu.github.io/github-statuses/

  • ariedro 7 hours ago ago

    It would be interesting to have a graph showing AI adoption in coding against the number of weekly outages across different companies. I am sure they are quite correlated.

    • the_real_cher 6 hours ago ago

      I bet there's other factors that are correlated as well!

  • danhon 5 hours ago ago

    Isn't github in the middle of their (latest) attempt to migrate to Azure?[0]

    [0]: https://www.theverge.com/tech/796119/microsoft-github-azure-...

  • atonse 5 hours ago ago

    I'm starting to wonder if people running what were previously unconventional workflows (which may not be optimized for performance) are affecting things.

    For example, today, I had claude basically prune all merged branches from a repo that's had 8 years of commits in it. It found and deleted 420 branches that were merged but not deleted.

    Deleting 420 branches at once is probably the kind of long tail workflow that was not worth optimizing in the past, right? But I'm sure devs are doing this sort of housekeeping often now, whereas in the past, we just never would've made the time to do so.
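
    For reference, the non-agent version of that housekeeping is a couple of commands; a sketch assuming `main` is the default branch and `origin` is GitHub (print the list first before piping it into the delete):

        # list remote branches already merged into main (review this first!)
        git fetch --prune origin
        git branch -r --merged origin/main | grep -v 'origin/main$' | sed 's#origin/##'

        # same list, piped into batched deletes on the remote
        git branch -r --merged origin/main | grep -v 'origin/main$' | sed 's#origin/##' \
          | xargs -n 20 git push origin --delete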

  • jamiemallers 4 hours ago ago

    The irony of githubstatus.com itself being hosted on a third-party (Atlassian Statuspage) is not lost on anyone who works in incident management. Your status page being up while your product is down is table stakes, not a feature.

    What's more interesting to me is the pattern: second major outage in the same day, and the status page showed "All Systems Operational" for a good chunk of the first one. The gap between when users notice something is broken and when the status page reflects it keeps growing. That's a monitoring and alerting problem, not just an infrastructure one.

    At some point the conversation needs to shift from "GitHub is down again" to "why are so many engineering orgs single-threaded on a platform they don't control and can't observe independently?" Git is distributed by design. Our dependency on a centralized UI layer around it is a choice we keep making.

    • philipwhiuk 3 hours ago ago

      > The irony of githubstatus.com itself being hosted on a third-party (Atlassian Statuspage) is not lost on anyone who works in incident management. Your status page being up while your product is down is table stakes, not a feature

      That's WHY it's hosted externally, so that if GitHub goes down the status page doesn't.

  • bstsb 7 hours ago ago

    my four-core VPS running a Git server has higher uptime than GitHub at this point

    (although admittedly less load and redundancy)

    • chilipepperhott 6 hours ago ago

      Does redundancy even matter if the end result is still poorer uptime?

      • monkaiju 4 hours ago ago

        Exactly! Also operating "at scale" is only impressive if you can do it with comparable speed and uptime, it doesn't mean much if every page takes seconds to load and it falls over multiple times a day lol

  • oldestofsports 3 hours ago ago

    My company just migrated to GitHub, and it's been a shockingly bad experience. BitBucket never felt like anything more than a tool that did the job, but now I really miss it.

  • sisve 5 hours ago ago

    I moved everything on GitHub to a self-hosted Forgejo instance some days ago. I really did not do anything myself. I created some tokens so that CC could access GitHub, Forgejo, and my DNS API. Self-hosting is so much simpler and easier with AI. Expect more people to self-host small to medium stuff.

    • monkaiju 4 hours ago ago

      Ironic that the same AI you're mentioning is probably a large part of why this class of outages is increasing. I'd highly recommend folks understand their infrastructure well enough to set up and run it without AI before they put anything critical on it.

      • sisve 3 hours ago ago

        Sure. I can agree with that. At the same time, the reason people aren't doing it is not solely a skill issue. It's also a matter of time, energy, and what you want to prioritise.

        I believe I have good enough control over it to fix issues that may arise. But then again, CC will probably do it faster. I will most likely not need to fix my own issues, but if needed, I think I will be able to.

        "Critical" plays an important role in what you're saying. The true core of any business is something you should have good control over. You should also accept that less important parts are OK for AI to handle.

        I think the non-critical part is a larger part than most people think.

        We are lagging behind in understanding what AI can handle for us.

        I'm an optimistic grey beard, even if the writing makes me sound like a naive youth :)

  • thomasfromcdnjs 7 hours ago ago

    Someone needs to make an MCP server for my Claude so it can check if services are down; it goes stir-crazy when GitHub is down and adds heaps of workaround code =D

  • devy 7 hours ago ago

    They've been talking about prioritizing the migration to Azure for a long while now. Not sure if today's incident is related.

    https://thenewstack.io/github-will-prioritize-migrating-to-a...

    And coincidentally, an early CircleCI engineer wrote an article about GitHub Actions (TLDR: don't use GitHub Actions for CI/CD!)

    https://www.iankduncan.com/engineering/2026-02-05-github-act...

    https://news.ycombinator.com/item?id=46908491

    • baq 6 hours ago ago

      > TLDR: don't use GitHub Actions for CI/CD!

      You should reach the same conclusion by trying to use it for this purpose, or indeed for any purpose at all. Incidents that make you unable to deploy, rendering all your CD efforts pointless, are only the cherry on top.

  • alexellisuk 7 hours ago ago

    I'm seeing 429s cascade when downloading things like setup-buildx on self-hosted runners. That seems odd/off.

    Anyone else having issues? It is blocking any kind of release.

  • nhuser2221 7 hours ago ago

    I am glad I have finally started self-hosting my own git server and stopped worrying about GitHub :-)

  • an0malous 7 hours ago ago

    Claude, make me an SCM provider

    • jraph 6 hours ago ago

      Sure!

      Do you allow me to run the following command?

          cd project; find -type f | while read f; do mv "$f" /dev/null; done
      • tryauuum 4 hours ago ago

        Don't do this. It will break your /dev/null.

  • panny 2 hours ago ago

    GitHub started moving to Azure 3-4 months ago,

    https://archive.is/VD38Q

    I wonder how many of these outages are related.

  • elzbardico 5 hours ago ago

    Yeah, Vibe code more github!

    • neuropacabra 5 hours ago ago

      So far it feels they are vibe coding it day and night lol…probably with GitHub Copilot

  • varispeed 7 hours ago ago

    Did they replace developers and devops with openclaw?

  • WhyNotHugo 7 hours ago ago

    How is this "news" when it comes up multiple times a week?

    It's just "yet another day of business as usual" as this point.

  • musha68k 7 hours ago ago

    Radicle moment.

  • rvz 7 hours ago ago

    A great time to consider self hosting instead. Since there is no CEO of GitHub to contact anymore.

    A prediction I made half a decade ago [0] that is even more important now than it was then.

    [0] https://news.ycombinator.com/item?id=22867803

  • heliumtera 6 hours ago ago

    Remember the other day when a bunch of you were making fun of Zig for moving away from GitHub? Now suddenly you all say this is not the future you wanted.

    Every day you opt in to get wrecked by Microsoft.

    You all do realize you all could, for a change, learn something and never again touch anything Microsoft related?

    Fool me once...

    • TacticalCoder 5 hours ago ago

      > You all do realize you all could, for a change, learn something and never again touch anything Microsoft related?

      I learned that lesson in the 90s and became an "ABM" (Anything But Microsoft).

      People sadly shall never learn: Windows 12 is going to come out and shall suck more than any previous version of Windows except Windows 11, so they'll see it as progress. Then Windows 13 is going to be an abysmal piece of crap and people shall hang to their Windows 12, wondering how it's possible that Microsoft came out with a bad OS.

      There are still people explaining, today, that Microsoft ain't all bad because Windows XP was good (for some definition of good). Windows XP came out in late 2001.

      Stockholm syndrome and all that.

  • skywhopper 7 hours ago ago

    This is the predictable outcome of subordinating the GitHub product to the overarching "AI must be part of everything whether it makes sense or not" mandate coming down from the top. It was only a year ago that GitHub was moved under the "CoreAI" group at Microsoft, and there's been plenty of stories of massive cost-cutting and forcing teams to focus on AI workflows instead of their actual product priorities. To the extent they are drinking their own Kool-Aid, this sort of ops failure is also an entirely predictable outcome of too much reliance on LLM-generated code and workflows rather than human expertise, something we see happening at an alarming scale in a number of public MS repos.

    Hopefully it will get bad enough fast enough that they'll recognize they need to drastically change how they are operating. But I fear we're just witnessing a slow slide into complacency and settling for being a substandard product with monopoly-power name recognition.

  • ChrisArchitect 6 hours ago ago
    • rcakebread 21 minutes ago ago

      I'll bet you're fun at... nowhere.

    • rpns 6 hours ago ago

      Not quite, that one is an earlier outage while this one started at (or a bit before) 19:01 UTC.

      The history for today is a bit of a mess really: https://www.githubstatus.com/history

      • ChrisArchitect 5 hours ago ago

        They are all being discussed in that thread, the submitted url is just one of the various incident links on the day. Duplicate discussion.

    • esafak 6 hours ago ago

      No, it's a new outage -- that's the point! Check the URLs.

      • ChrisArchitect 5 hours ago ago

        That's not the point. The point is it's a duplicate discussion of one of a number of incident links being discussed, all over there.

        • bigstrat2003 2 hours ago ago

          The point is that when a second, independent event occurs, a new thread is not a duplicate.