We all have opinions about CI/CD. Why? Because it gets between us and what we're trying to do. In all honesty, GitHub Actions solves the biggest problem for a lot of devs: infrastructure management and performance. I have managed a lot of build infrastructure and never want to touch it again. GitHub fixed that for me. My build servers were often more power-hungry than my production servers. GitHub fixed that for me too. Basically, what I'm saying is that for 80% of people this is an 80%-good-enough solution, and that's more important than everything else. Can I ship my code quickly? Can I define build deps next to my code where everyone can see them? Can I debug it? Can others contribute to it? It just ticks so many boxes. I hope this CI debate dies a good death, because I think people are genuinely just thinking about the wrong problem. Stop making your life more difficult. Appreciate what this solves and move on. We can argue about it until we're blue in the face, but it won't change the fact that the solution that wins often isn't the best one; it's the one that reduces friction and solves the UX problem. I don't need N ways to configure something. I need to focus on what I'm trying to ship, and that's not a build server.
I've used many of the CI systems that the author covers here, and I've done a lot of CircleCI and GitHub Actions, and I don't come to quite the same conclusions. One caveat though: I haven't used Buildkite, which the author seems to recommend.
Over the years CI tools have gone from specialist to generalist. Jenkins was originally very good at building Java projects and not much else, Travis had explicit steps for Rails projects, CircleCI was similarly like this back in the day.
This was a dead end. CI is not special. We realised as a community that in fact CI jobs were varied, that encoding knowledge of the web framework or even language into the CI system was a bad idea, and CI systems became _general workflow orchestrators_, with some logging and pass/fail UI slapped on top. This was a good thing!
I orchestrated a move off CircleCI 2 to GitHub Actions, precisely because CircleCI botched the migration from the specialist to generalist model, and we were unable to express a performant and correct CI system in their model at the time. We could express it with GHA.
GHA is not without its faults by any stretch, but... the log browser? So what, just download the file, at least the CI works. The YAML? So it's not-quite-yaml, they weren't the first or last to put additional semantics on a config format, all CI systems have idiosyncrasies. Plugins being Docker images? Maybe heavyweight, but honestly this isn't a bad UX.
What does matter? Owning your compute? Yeah! This is an important one, but you can do that on all the major CI systems, it's not a differentiator. Dynamic pipelines? That's really neat, and a good reason to pick Buildkite.
My takeaway from my experience with these platforms is that Actions is _pretty good_ in the ways that truly matter, and not a problem in most other ways. If I were starting a company I'd probably choose Buildkite, sure, but for my open source projects, Actions is good.
In game development we care a lot about build systems, and, annoyingly, we have vanishingly few companies coming to throw money at our problems.
The few that do charge a king's ransom (Incredibuild). Our build times are pretty long, and minimising them is ideal.
If your build system does not understand your build graph, then you're either waiting even longer for builds or keeping around incremental state and dirty workspaces (which introduces transient bugs, since now the compiler has to do the hard job of building incrementally anyway).
So our build systems need to be acutely aware of the intricacies of how the game is built (leading to things like UnrealEngine Horde and UBA).
If we used a “general purpose” approach we’d be waiting in some cases over a day for a build, even with crazy good hardware.
Also game dev here, and I disagree with your take. Our _build tools_ need to be hyper-aware, but our CI systems absolutely do not, and would be better served as general purpose. What good is Horde when you need to deploy your already-packaged game to Steam via steamcmd, or when you need to update a remote config file for a content hotfix? Horde used BuildGraph, meaning you needed a node with a full engine sync just to run curl -X POST whatever.com
Game dev has a serious case of NIH. Sometimes it's for good reasons, but in lots of cases it's because things have been set up in a way that makes changing them impractical. Using UBA as an example: FastBuild, Incredibuild, SN-DBS, and sccache all exist as either caching or distribution systems. Compiling a game engine isn't much different to compiling a web browser (which Ninja was written for).
I've worked at two game studios where we used general-purpose CI systems and were able to push out builds in under 15 minutes. Horde and UBA exist to handle how Epic does things internally, rather than as an inherent requirement on how to use the tools effectively. If you don't have the same constraints as developing Unreal Engine (and Fortnite), then you don't have the same needs.
(I worked for Epic when Horde came online, but don't any more.)
If you're at a games studio that values build-times, value that. I worked at a very good SRE-mindset studio and missed it, deeply, after I left. Back then I expected everyone to think and care about such things and have spent many, many hours advocating for best-in-class, more efficient, cheaper development practices.
WRT GitHub Actions... I agree with OOP: they leave much to be desired, especially for high-velocity work. My CI/CD runs locally first, and then GHA is a (slower), low-noise verification step.
Actions is many things. It’s an event dispatcher, an orchestrator, an execution engine and runtime, an artifact registry and caching system, a workflow modeler, a marketplace, and a secrets manager. And I didn’t even list all of the things Actions is. It’s better at some of those things and not others.
The systems I like to design that use GHA usually only use the good parts. GitHub is a fine event dispatcher, for instance, but a very bad workflow orchestrator. So delegate that to a system that is good at it instead.
> but... the log browser? So what, just download the file, at least the CI works.
They answer your "so what" quite directly:
>> Build logs look like terminal output, because they are terminal output. ANSI colors work. Your test framework’s fancy formatting comes through intact. You’re not squinting at a web UI that has eaten your escape codes and rendered them as mojibake. This sounds minor. It is not minor. You are reading build logs dozens of times a day. The experience of reading them matters in the way that a comfortable chair matters. You only notice how much it matters after you’ve been sitting in a bad one for six hours and your back has filed a formal complaint.
Having to mentally ignore ANSI escape codes in raw logs (let alone being unable to search for text through them) is annoying as hell, to put it mildly.
> Having to mentally ignore ANSI escape codes in raw logs (let alone being unable to search for text through them) is annoying as hell, to put it mildly.
You have a tool here, which is noted elsewhere: it's "less -R" (--RAW-CONTROL-CHARS), which renders the colors instead of showing the raw escape codes. There's also another tool which analyzes your logs and color-codes them: "lnav".
lnav is incredibly powerful and helps understanding what's happening, when, where. It can also tail logs. Recommended usage is "your_command 2>&1 | lnav -t".
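For the searchability complaint specifically, stripping the escape sequences before searching is often enough. A small sketch (the log file and its contents are invented for illustration):

```shell
# A raw CI log full of ANSI color codes defeats plain-text search;
# stripping the escape sequences first fixes that.
printf 'ok\n\x1b[31mFAILED: test_foo\x1b[0m\n' > build.log   # sample colored log

# Remove "ESC [ ... letter" sequences, then search normally
sed 's/\x1b\[[0-9;]*[A-Za-z]//g' build.log | grep -n FAILED  # prints "2:FAILED: test_foo"
```

The same one-liner works piped straight from a downloaded GHA log file.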
The winning strategy for all CI environments is a build-system facsimile that works on your machine, your CI's machine, and your test/UAT/production, with as few changes between them as your project requirements allow.
I start with a Makefile. The Makefile drives everything. Docker (compose), CI build steps, linting, and more. Sometimes a project outgrows it; other times it does not.
But it starts with one unitary tool for triggering work.
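A minimal sketch of that pattern (target names and tool choices here are illustrative, not prescriptive):

```make
# Single entry point for devs, CI, and Docker alike.
.PHONY: all lint test build

all: lint test build

lint:
	ruff check .          # swap in your linter of choice

test:
	pytest -q             # or whatever your test runner is

build:
	docker compose build
```

The CI config then shrinks to invoking `make all`, and the same command works on a laptop.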
Make is incredibly cursed. My favorite example is that it has built-in rules (oversimplified: some extra Makefile code that is pretended to exist in every Makefile) that will check files out of a version control system.
https://www.gnu.org/software/make/manual/html_node/Catalogue...
What you're saying is essentially ”Just Write Bash Scripts”, but with an extra layer of insanity on top. I hate it when I encounter a project like this.
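For what it's worth, GNU make does let you opt out of those built-in rules; a small sketch:

```make
# Disable GNU make's built-in implicit rules and variables (including the
# RCS/SCCS checkout rules from the catalogue linked above), so only rules
# written explicitly in this file apply.
MAKEFLAGS += --no-builtin-rules --no-builtin-variables
.SUFFIXES:        # also clear the built-in suffix list
```

That removes a lot of the spooky action at a distance, though it doesn't fix Make's other quirks.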
Ye, kick off into some higher-level language instead of being at the mercy of your CI provider's plugins.
I use Fastlane extensively on mobile, as it reduces boilerplate and gives enough structure that the inherent risk of depending on a 3rd party is worth it. If all else fails, it's just Ruby, so you can break out of it.
This line of thinking inspired me to write mkincl [0] which makes Makefiles composable and reusable across projects. We're a couple of years into adoption at work and it's proven to be both intuitive and flexible.
I agree, but this is kind of an unachievable dream in medium to big projects.
I fought this fight for some years at my present work, and early on I really nagged about the path we were getting into by not allowing developers to run the full pipeline (or most of it) on their local machines. The project decided otherwise, and now we spend a lot of time and resources on a behemoth of a CI infrastructure, because each MR takes about 10 builds (of trial and error) in the pipeline to be properly tested.
It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call. Some things just don't run on a local machine: fair. But a lot of things do, even very large things. Things can be scaled down, with the same harnesses used for your development environment, your CI environment, and your prod environment. You don't need a full prod DB; you need a facsimile mirroring the real thing at 1/50th the size.
Yes, there will always be special exemptions: they suck, and we suffer as developers because we cannot replicate a prod-like environment in our local dev environment.
But I laugh when I join teams and they say that "our CI servers" can run it but our shitty laptops cannot, and I wonder why they can't just... spend more money on dev machines? Or perhaps spend some engineering effort so they work on both?
> It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call.
In my experience at work, anything that demands too much thought, collaboration between teams, and the enforcement of hard development rules is always an unachievable dream in a medium-to-big project.
Note that I don't think it's technically unachievable (at all). I've just accepted that it's culturally (as in work culture) unachievable.
But it isn't a question of security. The project would very much like the developers to be able to run the pipelines on their machines.
It's just that management don't see it as worth it, in terms of development cost and limitations it would introduce in the current workflow, to enable the developers to do that.
I tend to disagree with this as it seems like an ad for Nix/Buildkite...
If your CI invocations are anything more than running a script or a target on a build tool (make, etc.) where the real build/test steps exist and can be run locally on a dev workstation, you're making the CI system much more complex than it needs to be.
CI jobs should at most provide an environment and configuration (credentials, endpoints, etc.), as a dev would do locally.
This also makes your code CI agnostic - going between systems is fairly trivial as they contain minimal logic, just command invocations.
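As a sketch of that shape, a GitHub Actions workflow reduces to checkout plus one script invocation (the `ci.sh` name and the `API_ENDPOINT` variable are invented placeholders):

```yaml
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # CI only supplies environment and credentials; all real logic lives
      # in the script, which developers can run locally the same way.
      - run: ./ci.sh test
        env:
          API_ENDPOINT: ${{ vars.API_ENDPOINT }}
```

Migrating this to another CI system means porting a dozen lines of glue, not the build itself.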
This, so much. I remember migrating from one CI system to another a few years ago. I had built all of our pipelines to pull in some secrets and call a .sh file that did all the heavy lifting. The migration had a few pain points but was fairly easy. Meanwhile, the teams who had created their pipelines with the UI and broken them up into multiple steps were not happy at all.
The problem isn't CI/CD; the problem is "programming in configuration". We've somehow normalized a dev loop that involves `git commit -m "try fix"`, waiting 10 minutes, and repeating. Local reproduction of CI environments is still the missing link for most teams.
These tool failures are a consequence of a failure of proper policy.
Tooling and Methodology!
Here’s the thing: build it first, then optimize it. Same goes for compile/release versus compile/debug/test/hack/compile/debug/test/test/code cycles.
That there is not a big enough distinction between a development build and a release build is a policy mistake, not a tooling ‘issue’.
Set things up properly, and anyone pushing through git into the tooling pipeline is going to get their fingers bent soon enough anyway, learning how the machine mangles digits.
You can adopt this policy of environment isolation with any tool - it’s a method.
I clicked the article thinking it was about GitLab. Much of the criticism held true for GitLab anyway, particularly the insanely slow feedback loops these CI/CD systems create.
You can though. GHA and GitLab CI and all the others have a large feature set for orchestration (build matrices, triggers, etc.) that is hard to test in a local setup. Sometimes they interfere with the build because of flags, or the build fails because it got orchestrated onto a different machine, or a package is missing, or the cache key was misconfigured, etc.
There are a bunch of failures of a build that have nothing to do with how your build itself works. Asking teams to rebuild all that orchestration logic into their builds is madness. We shouldn’t ask teams to have to replicate tests for features that are in the CI they use.
Indeed there are. But you iterate locally and only care about CI once everything is working locally. It's not every Tuesday that I get CI errors because a package was missing. It's rare unless you're in one of those 1000-little-microservices shops.
It is rare for our run-of-the-mill Java apps too; however, we notice it with:
Integration of code quality gates, documentation checks, linting, cross architecture builds, etc.
Most of this can be solved by doing the builds in a Docker image that we also maintain ourselves. Then what remains is the interaction between the CI config for matrices, the tasks/actions to report back quality metrics, the integration with key vaults to obtain deploy-time secrets, etc.
Then there are the soft failures, missing a cache key causing many packages to be downloaded over and over again, or the same for the docker base images, etc.
We fix this for our 1000+ microservices, across hundreds of teams by maintaining a template that all services are mandated to use. It removes whole classes of errors and introduces whatever shenanigans we introduce. But it works for us.
If GHA, Azure Pipelines, etc., would provide a way of running builds locally that would speed up our development greatly.
Until then we have created linting based on CUE to parse the various yamls, resolving references to keystores, key ids, templates, etc., and making sure they exist. I think this is generic enough to open source even.
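On the cache-key soft failure mentioned above: in GitHub Actions terms, the usual fix is keying the cache on a lockfile hash. A hedged sketch (the paths assume a Maven project, per the Java apps mentioned):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.m2/repository
    # Exact hit while pom.xml files are unchanged; restore-keys gives a
    # best-effort partial hit instead of a cold cache when they do change.
    key: maven-${{ runner.os }}-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      maven-${{ runner.os }}-
```

Without the `restore-keys` fallback, any lockfile change means re-downloading every package from scratch.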
Yeah, do crons even work consistently for GitHub Actions? I tried to set one up the other day and it just randomly skipped runs. There were some docs that suggested they’re entirely unreliable as well.
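For reference, this is what a scheduled workflow looks like; GitHub's own docs note that `schedule` events can be delayed (or dropped) during periods of high load, so treat them as best-effort rather than exact:

```yaml
on:
  schedule:
    # POSIX cron syntax, evaluated in UTC. Picking an odd minute helps a
    # little, since load spikes at the top of each hour.
    - cron: "17 4 * * *"
```

If a job genuinely must run on time, an external scheduler hitting the `workflow_dispatch` trigger is the common workaround.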
Dead on. GitHub Actions is the worst CI tool I’ve ever used (maybe tied with Jenkins) and Buildkite is the best. Buildkite’s dynamic pipelines (the last item in the post) are so amazingly useful you’ll wonder how you ever did without them. You can do super cool things like have your unit test step spawn a test de-flaking step only if a test fails. Or control test parallelism based on the code changes you’re testing.
All of that on top of a rock-solid system for bringing your own runner pools which lets you use totally different machine types and configurations for each type of CI job.
Jenkins had a lot of issues and I’m glad to not be using it overall, but I did like defining pipelines in Groovy and I’ll take Groovy over YAML all day.
Jenkins, like many complex tools, is as good or bad as you make it. My last two employers had rock solid Jenkins environments because they were set up as close to vanilla as possible.
But yes, Groovy is a much better language for defining pipelines than YAML. Honestly pretty much any programming language at all is better than YAML. YAML is fine for config files, but not for something as complex as defining a CI pipeline.
The biggest flaw of Jenkins is that by default it runs on the builder's environment, as it was made in the pre-container era. But I do like the integration for viewing tests and benchmarks directly in the project, stuff that most CI/CD systems lack.
Ian Duncan, I was imagining you on a stage delivering this as a standup comedy show on Netflix.
My pet peeve with GitHub Actions was that if I want to do simple things like make a "release", I have to Google for and install packages from internet randos. Yes, it is possible this rando1234 is a founding GitHub employee and it is all safe. But why does something so basic need external JS packages?
GitHub Actions isn’t killing engineering teams; complacency in CI design is. CI should be reliable, inspectable, and reproducible, not just convenient.
Back in... I don't know, 2010, we used Jenkins. Yes, that Java thingy. It was kind of terrible (like every CI), but it had a "Warnings Plugin". It parsed the log output with regular expressions and presented new warnings and errors in a nice table. You could click on them and it would jump to the source. You could configure your own regular expressions (yes, then you have two problems, I know, but it still worked).
Then I had to switch to GitLab CI. Everyone was gushing how great GitLab CI was compared to Jenkins. I tried to find out: how do I extract warnings and errors from the log - no chance. To this day, I cannot understand how everyone just settled on "Yeah, we just open thousands of lines of log output and scroll until we see the error". Like an animal. So of course, I did what anyone would do: write a little script that parses the logs and generates an HTML artifact. It's still not as good as the Warnings Plugin from Jenkins, but hey, it's something...
I'm sure, eventually someone/AI will figure this out again and everyone will gush how great that new thing is that actually parses the logs and lets you jump directly to the source...
Don't get me wrong: Jenkins was and probably still is horrible. I don't want to go back. However, it had some pretty good features I still miss to this day.
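A crude stand-in for that Warnings Plugin can be a one-liner over the log; a sketch assuming gcc-style `file:line: warning:` diagnostics (the sample log is invented):

```shell
# Create a tiny sample build log with gcc-style diagnostics in it
printf 'main.c:42: warning: unused variable x\nall good\nutil.c:7: error: oops\n' > build.log

# Pull out just the warning/error lines, with log line numbers attached
grep -En '^[^ :]+:[0-9]+: (warning|error):' build.log
```

A real replacement would also diff against the previous run to surface only *new* warnings, which is the part Jenkins actually got right.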
My browser can handle tens of thousands of lines of logs, and has Ctrl-F that's useful for 99% of the searches I need. A better runner could just dump the logs and let the user take care of them.
Why most web development devolved into a React-like "you can't search for what you can't see" is a mystery.
> GitHub Actions is not good. It’s not even fine. It has market share because it’s right there in your repo
Microsoft being Microsoft, I guess. Making computing progressively less and less delightful, because your boss sees their buggy crap is right there, so why don't you use it?
Agreed with absolutely all of this. Really well written. Right now at work we're getting along fine with Actions + WarpBuild but if/when things start getting annoying I'm going to switch us over to Buildkite, which I've used before and greatly enjoyed.
Pretty sure someone at MS told me that Actions was rewritten by the team who wrote Azure DevOps. So bureaucracy would be a feature.
That aside, GH Actions doesn’t seem any worse than GitLab. I forget why I stopped using CircleCI. Price maybe? I do remember liking the feature where you could enter the console of the CI job and run commands. That was awesome.
I hope the author will check out RWX -- they say they've checked out most CI systems, but I don't think they've tried us out yet. We have everything they praise Buildkite for, except for managing your own compute (and that's coming, soon!). But we also built our own container execution model with CI specifically in mind. We've seen one too many Buildkite pipelines that have a 10 minute Docker build up front (!) and then have to pull a huge docker container across 40 parallel steps, and the overhead is enormous.
- Intermediate tasks are cached in a Docker-like manner (content-addressed by filesystem and environment). Tasks in a CI pipeline build on previous ones by applying the filesystem of dependent tasks (AFAIU via overlayfs), so you don't execute the same task twice. The most prominent example of this is that a feature branch that is up to date with main passes CI on main as soon as it's merged, since every task on main is a cache hit from the CI execution on the feature branch.
- Failures: the UI surfaces failures to the top, and because of the caching semantics, you can re-run just the failed tasks without having to re-run their dependencies.
- Debugging: they expose a breakpoint (https://www.rwx.com/docs/rwx/remote-debugging) command that stops execution during a task and allows you to shell into the remote container for debugging, so you can debug interactively rather than pushing `env` and other debugging tasks again and again. And when you do need to push to test a fix, the caching semantics again mean you skip all the setup.
There's a whole lot of other stuff. You can generate tasks to execute in a CI pipeline via any programming language of your choice, the concurrency control supports multiple modes, no need for `actions/cache` because of the caching semantics and the incremental caching feature (https://www.rwx.com/docs/rwx/tool-caches).
I agree with all the points made about GH actions.
I haven't used as many CI systems as the author, but I've used GH Actions, GitLab CI, and CodeBuild, and spent a lot of time with Jenkins.
I've only touched Buildkite briefly 6 years ago, at the time it seemed a little underwhelming.
The CI system I enjoyed the most was TeamCity, sadly I've only used it at one job for about a year, but it felt like something built by a competent team.
I'm curious what people who have used it over a longer time period think of it.
I used TeamCity for a while and it was decent - I'm sure defining pipelines in code must be possible but the company I worked at seemed to have made this impossible with some in-house integration with their version control and release management software.
TC is probably the best console runner there is and I agree, it made CI not suck. It is also possible to make it very fast, with a bit of engineering and by hosting it on your own hardware. Unfortunately it's as legacy as Jenkins today. And in contrast to Jenkins it's not open source or free, and many parts of it, like the scheduler/orchestrator, are not pluggable.
But I don’t know about competent people, reading their release notes always got me thinking ”how can anyone write code where these bugs are even possible?”. But I guess that’s why many companies just write nonsense release notes today, to hide their incompetence ;)
One of them does not even use a CI. We run tests locally and we deploy from a self hosted TeamCity instance. It's a Django app with server side HTML generation so the deploy is copying files to the server and a restart. We implemented a Capistrano alike system in bash and it's been working since before Covid. No problems.
The other one uses bitbucket pipelines to run tests after git pushes on the branches for preproduction and production and to deploy to those systems. They use Capistrano because it's a Rails app (with a Vue frontend.) For some reason the integration tests don't run reliably neither on the CI instances nor on Macs, so we run them only on my Linux laptop. It's been in production since 2021.
A customer I'm not working with anymore used Travis, and another one I don't remember. They also ran a build there because they were using Elixir with Phoenix, so we were creating a release and deploying it. No mere file copying. That was the most unpleasant deploy system of the bunch: a lot of wasted time from a push to a deploy.
In all of those cases logs are inevitably long but they don't crash the browser.
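The Capistrano-alike mentioned above mostly boils down to a timestamped release directory plus a symlink flip. A self-contained local simulation (all paths invented, no real server involved):

```shell
# Pretend "src" is the freshly built release we want to deploy
mkdir -p src && echo "v2" > src/version.txt

# 1. Copy it into a timestamped release directory
release="releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$release"
cp -r src/. "$release"/

# 2. Atomically repoint "current" at the new release (the actual cutover)
ln -sfn "$release" current

cat current/version.txt    # prints "v2"
# A real deploy would rsync the release to the server and restart the app;
# rollback is just repointing "current" at the previous release directory.
```

The symlink flip is why this class of deploy is near-instant and trivially reversible.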
Good place to ask: I'm not comfortable with NPM-style `uses: randomAuthor/some-normal-action@1` for actions that should be included by default, like bumping version tags or uploading a file to the releases.
What's the accepted way to copy these into your own repo so you can make sure attackers won't update the script to leak my private repo and steal my `GITHUB_TOKEN`?
There are two solutions GitHub Actions people will tell you about. Both are fundamentally flawed because GitHub Actions Has a Package Manager, and It Might Be the Worst [1].
One thing people will say is to pin the commit SHA, so don't do "uses: randomAuthor/some-normal-action@v1", instead do "uses: randomAuthor/some-normal-action@e20fd1d81c3f403df57f5f06e2aa9653a6a60763". Alternatively, just fork the action into your own GitHub account and import that instead.
However, neither of these "solutions" work, because they do not pin the transitive dependencies.
Suppose I pin the action at a SHA or fork it, but that action still imports "tj-actions/changed-files". In that case, you would have still been pwned in the "tj-actions/changed-files" incident [2].
The only way to be sure is to manually traverse the dependency hierarchy, forking each action as you go down the "tree" and updating every action to only depend on code you control.
In other package managers, this is solved with a lockfile - go.sum, yarn.lock, ...
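For completeness, the SHA-pinning style described above looks like this; both the action name and the SHA here are made-up placeholders, and as noted it only protects the top-level import, not transitive ones:

```yaml
steps:
  # Pin to a full 40-character commit SHA rather than a mutable tag; the
  # trailing comment records the tag it pointed at when pinned.
  - uses: randomAuthor/some-normal-action@0123456789abcdef0123456789abcdef01234567 # v1.2.3
```

A tag like `@v1` can be silently moved to malicious code; a commit SHA cannot, which is why the incident write-ups all recommend this even though it's incomplete.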
After Azure DevOps and Jenkins, GitHub is like a fresh breath of air. It might be a fart in your face, but at least it's available within IT department guidelines, and any movement of air is preferable to the stifling insanity of the others.
Hooo boy where do I begin? Dependency deadlocks are the big one - you try to share resource attributes (eg ARN) from one stack to another. You remove the consumer and go to deploy again. The producer sees no more dependency so it prunes the export. But it can't delete the export, cause the consumer still needs it. You can't deploy the consumer, because the producer has to deploy first sequentially. And if you can't delete the consumer (eg your company mandates a CI pipeline deploy for everything) you gotta go bug Ops on slack, wait for someone who has the right perms to delete it, then redeploy.
You can't actually read real values from Parameters/exports (you get a token placeholder) so you can't store JSON then read it back and decode (unless in same stack, which is almost pointless). You can do some hacks with Fn:: though.
Deploying certain resources that have names specified (vs generated) often breaks because it has to create the new resource before destroying the old one, which it can't, because the name conflicts (it's the same name...cause it's the same construct).
It's wildly powerful though, which is great. But we have basically had to create our own internal library to solve what should be non-problems in an IaC system.
Would be hilarious if my coworker stumbled upon this. I know he reads hn and this has been my absolute crusade this quarter.
> Dependency deadlocks are the big one - you try to share resource attributes (eg ARN) from one stack to another. You remove the consumer and go to deploy again. The producer sees no more dependency so it prunes the export.
I’m a little puzzled. How are you getting dependency deadlocks if you’re not creating circular dependencies?
Also, exports in CloudFormation are explicit. I don’t see how this automatic pruning would occur.
> Deploying certain resources that have names specified (vs generated) often breaks
CDK tries to prevent this antipattern from happening by default. You have to explicitly make it name something. The best practice is to use tags to name things, not resource names.
I'll just echo the other poster with "deadlocks". It's obscene how slow CF is, and the fact that its failure modes often leave you in a state that feels extremely dangerous. I've had to contact AWS Support before due to CF locking up in an irrecoverable way due to cycles.
What I find hardest about CI offerings is that each one has a unique DSL that inevitably has edge cases that you may only find out once you’ve tried it.
You might face that many times using Gitlab CI. Random things don’t work the way you think it should and the worst part is you must learn their stupid custom DSL.
Not only that, there's no way to debug the maze of CI pipelines, though I imagine that's a hard thing to achieve. How would I be able to locally run CI that also interacts with other projects' CI, like calling downstream pipelines?
I assume by DSL they mean some custom templating language built on top, for things like iterating and if-conditions. If it's plain JSON/YAML you can produce that using any language you wish.
But do you provide SDKs in those languages? I mean, even in GitLab I could technically generate YAML in Python, but what I needed was an SDK that understood the domain.
Personally I like Drone more than Buildkite. It's as close to a perfect CI system as I've seen; just complex enough to do everything I need, with a design so stripped-down it can't be simpler. I occasionally check on WoodpeckerCI to see if it's reached parity with Drone. Now that AI coding is a thing, hopefully that'll happen soon
Nice write-up, but now I'm wondering what Nix proposes in that space.
I've never used Nix or NixOS, but a quick search led me to NixOps, and then I realized v4 is being entirely rewritten in Rust.
I'm surprised they chose Rust for glue code, and not a more dynamic and expressive language that could make things less rigid and easier to amend.
In the clojure world BigConfig [0], which I never used, would be my next stop in the build/integrate/deploy story, regardless of tech stack. It integrates workflow and templating with the full power of a dynamic language to compose various setups, from dot/yaml/tf/etc files to ops control planes (see their blog).
I really wonder in which universe people are living. GitHub Actions was a godsend when it was first released and it still continues to be great. It has just the right amount of abstractions. I've used many CIs in the past and I'd definitely prefer GA over any of them.
Have you used the log viewer? Because I swear the log viewer is the biggest letdown. I love that GitHub Actions is deeply integrated into GitHub. I hate the log viewer, and that's like one of the core parts of it.
> I have mass-tested these systems so that you don’t have to, and I have the scars to show for it, and I am here to tell you: GitHub Actions is not good.
> Every CI system eventually becomes “a bunch of YAML.” I’ve been through the five stages of grief about it and emerged on the other side, diminished but functional.
> I understand the appeal. I have felt it myself, late at night, after the fourth failed workflow run in a row. The desire to burn down the YAML temple and return to the simple honest earth of #!/bin/bash and set -euo pipefail. To cast off the chains of marketplace actions and reusable workflows and just write the damn commands. It feels like liberation. It is not.
Ah yes, misery loves company! There's nothing like a good rant (preferably about a technology you have to use too, although you hate its guts) to brighten up your Friday...
I have not had this experience. It sounds like a bad process rather than being GitHub's fault. I've always had GitHub Actions double-checking the same checks I run locally before pushing.
I don't care if this is an advertisement for buildkite masquerading as a blog post or if this is just an honest rant. Either way, I gotta say it speaks a lot of truth.
GHA is quite empowering for solo devs. I just dev on my tiny machine and outsource all heavy work to GHA, and basically let Claude rip on the errors, rinse repeat.
I don't have much experience with GitHub Actions, but I'll say this does sound worse than Azure DevOps, which I did not imagine was possible. I've never liked any CI system, but ADO must be one of the lower circles of hell.
We started using Buildkite at $DAYJOB years ago and haven't looked back. Incredibly, GitHub Actions seems to have gotten _worse_ in the interim. Absolutely no regrets from switching.
I matured as an engineer using various CI tools and discovering hands-on that these tools are so unreliable (pipelines often failing inconsistently). I am surprised to find that there are better systems, and I'd like to learn more.
I think people shouldn't go installing random browser extensions, just as they shouldn't go installing random packages from a package manager, which is part of his argument.
> If you’re a small team with a simple app and straightforward tests, it’s probably fine. I’m not going to tell you to rip it out.
> But if you’re running a real production system, if you have a monorepo, if your builds take more than five minutes, if you care about supply chain security, if you want to actually own your CI: look at Buildkite.
Goes in line with exactly what I said in 2020 [0] about GitHub vs self-hosting. Not a big deal for individuals, but for large businesses it's a problem if you can't push that critical change when your CI is down every week.
I know this is off topic, but that homepage is a piece of work: https://buildkite.com
I get it's quirky, but I'm at a low energy state and just wanted to know what it does...
Right before I churned out, I happened to click "[E] Exit to classic Buildkite" and get sent to their original homepage:
https://buildkite.com/platform/
It just tells you what Buildkite does! Sure it looks default B2B SaaS, but more importantly it's clear. "The fastest CI platform" instead of some LinkedIn-slop manifesto.
If I want to know why it's fast, I scroll down and learn it scales to lots of build agents and has unlimited parallelism!
And if I wonder if it plays nice with my stack, I scroll and there's logos for a bunch of well known testing frameworks!
And if I want to know if this isn't v0.0001 pre-alpha software by a pre-seed company spending runway on science-fair home pages, this one has social proof that isn't buried in a pseudo-intellectual rant!
-
I went down the rabbit hole of what led to this and it's... interesting to say the least.
Hello mate, Head of Brand and Design at BK here. Thanks for the feedback, genuinely; the homepage experiment has been divisive, in a great way. Some folk love it, some folk hate it, some just can't be bothered with it. All fair.
Glad that the classic site hit the mark, but there's a lot of work to do to make that clearer than it is; we're working on the next iteration that will sunset the CLI homepage into an easter egg.
Happy to take more critique, either on the execution or the rabbit hole.
I did a BK search earlier in the article and ended on the same page, decided I couldn't be bothered to play those sorts of games, and clicked away. The GP's link actually looks rather interesting, so I'll investigate — but take this as a hate-it-folk vote.
Great of you to accept critiques, but I don't think there's anything more I can add.
You brought up Planetscale's markdown homepage rework in one of those posts and I actually think it's great... but it's also clear, direct, and has no hidden information.
I'd love to see what happens to conversions once you retire this to an Easter Egg.
We're running GitHub Actions. It's good. All the real logic is in Nix, and we mostly use our own runners. The rest of the UI that GitHub Actions provides is very nice.
We previously used a CI vendor which specialised in building Nix projects. We wanted to like it, but it was really clunky. GitHub Actions was a significant quality of life improvement for us.
None of my colleagues have died. GitHub Actions is not killing my engineering team at any rate.
* Workflows are only registered once pushed to main, impossible to test the first runs in a branch.
* MS/GH don't care as much about GHES as they do about github.com; I think they'd like to see it just die. Massive lack of feature parity.
* Labels: If any of your workflows trigger from a label, they ALL DO. You can't target labels only to certain workflows, they all run and then cancel, polluting your checks.
* Deployments: What is a deployment even doing? There's no real deployment management behind the concept.
* Statefulness: No native way to store state between runs in the same workflow or PR. You'd think you could save some sort of state somewhere, but you have to manage it all yourself with manifests or something else.
This. In my experience the people actively disliking it have only ever used Jenkins 1, or for some reason only used freestyle jobs.
There are numerous ways to shoot yourself in the foot, though, and everything must be configured properly to get to feature parity with GHA (mail server, plugins, credentials, sso, https, port forwarding, webhooks, GitHub app, ...).
But once those are out of the way, it's the most flexible and fastest CI system I have ever used.
Nah I don't mind Jenkins either. I think it's unpopular because you can definitely turn it into a monstrosity, and I think a lot of people have only seen it in that state.
> You’ve upgraded the engine but you’re still driving the car that catches fire when you turn on the radio.
And fixing the pyro-radio bug will bring other issues, for sure, so they won't, because someone's workflow will rely on the fact that turning on the radio sets the car on fire: https://xkcd.com/1172/
I see the appeal of GitHub for sharing open source - the interface is so much cleaner and easier to find all you are looking for (GitLab could improve there).
But for CI/CD GitHub doesn’t even come close to GitLab in the usability department, and that’s before we even talk about pricing and the free tiers. People need to give it a try and see what they are missing.
I hate to say this. I can't even believe I am saying it, but this article feels like it was written in a different universe where LLMs don't exist. I understand they don't magically solve all of these problems, and I'm not suggesting that it's as simple as "make the robot do it for you" either.
However, there are very real things LLMs can do that greatly reduce the pain here. Understanding 800 lines of bash is simply not the boogie man it used to be a few years ago. It completely fits in context. LLMs are excellent at bash. With a bit of critical thinking when it hits a wall, LLM agents are even great at GitHub actions.
The scariest thing about this article is the number of things it's right about. Yet my uncharacteristic response to that is one big shrug, because frankly I'm not afraid of it anymore. This stuff has never been hard, or maybe it has. Maybe it still is for people/companies who have super complex needs. I guess we're not them. LLMs are not solving my most complex problems, but they're killing the pain of glue left and right.
The flip side of your argument is that it no longer matters how obtuse, complicated, baroque, brittle, underspecified, or poorly documented software is anymore. If we can slap an LLM on top of it to paper over those aspects, it’s fine.
Maybe efficiency still counts, but only when it meaningfully impacts individual spend.
Additionally it's not like you're constrained to write it in bash. You could use Python or any other language. The author talks about how you're now redeveloping a shitty CI system with no tests? Well, add some tests for it! It's not rocket science. Yes, your CI system is part of your project and something you should be including in your work. I drew this conclusion way back in the days where I was writing C and C++ and had days where I spent more time on the build system than on the actual code. It's frustrating but at the end of the day having a reliable way to build and test your code is not less important than the code itself. Treat it like a real project.
> this is a product made by one of the richest companies on earth.
nit: no, it was made by a group of engineers that loved git and wanted to make a distributed remote git repository. But it was acquired/bought out then subsequently enshittified by the richest/worst company on earth.
Linux powers the world in this area and bash is the glue which executes all these commands on servers.
Any program or language you write to try and 'revolutionise CI' and be this glue will ultimately make the child process call to a bash/sh terminal anyhow and you need to read both stdout and stderr and exit codes to figure out next steps.
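The glue the comment describes can be sketched in a few lines; any CI step runner ultimately reduces to roughly this (the function name and step commands are made up for illustration):

```python
import subprocess

# Any CI "orchestrator" ultimately reduces to this: spawn a shell,
# capture stdout and stderr, branch on the exit code.
def run_step(cmd: str) -> bool:
    proc = subprocess.run(["sh", "-c", cmd], capture_output=True, text=True)
    print(proc.stdout, end="")
    if proc.returncode != 0:
        print(f"step failed (exit {proc.returncode}): {proc.stderr.strip()}")
    return proc.returncode == 0

# A passing and a failing "build step":
assert run_step("echo compiling && true")
assert not run_step("echo 'missing dep' >&2 && false")
```

Everything a CI product layers on top (retries, logs UI, caching) wraps this loop.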
I orchestrated a move off CircleCI 2 to GitHub Actions, precisely because CircleCI botched the migration from the specialist to generalist model, and we were unable to express a performant and correct CI system in their model at the time. We could express it with GHA.
GHA is not without its faults by any stretch, but... the log browser? So what, just download the file, at least the CI works. The YAML? So it's not-quite-yaml, they weren't the first or last to put additional semantics on a config format, all CI systems have idiosyncrasies. Plugins being Docker images? Maybe heavyweight, but honestly this isn't a bad UX.
What does matter? Owning your compute? Yeah! This is an important one, but you can do that on all the major CI systems, it's not a differentiator. Dynamic pipelines? That's really neat, and a good reason to pick Buildkite.
My takeaway from my experience with these platforms is that Actions is _pretty good_ in the ways that truly matter, and not a problem in most other ways. If I were starting a company I'd probably choose Buildkite, sure, but for my open source projects, Actions is good.
I actually have the opposite opinion.
In game development we care a lot about build systems, and annoyingly, we have vanishingly few companies coming to throw money at our problems.
The few that do charge a king's ransom (Incredibuild). Our build times are pretty long, and minimising them is ideal.
If, then, your build system does not understand your build-graph then you’re waiting even longer for builds or you’re keeping around incremental state and dirty workspaces (which introduces transient bugs, as now the compiler has to do the hard job of incrementally building anyway).
So our build systems need to be acutely aware of the intricacies of how the game is built (leading to things like UnrealEngine Horde and UBA).
If we used a “general purpose” approach we’d be waiting in some cases over a day for a build, even with crazy good hardware.
Also game dev here - I disagree with your take. Our _build tools_ need to be hyper-aware, but our CI systems absolutely do not, and would be better served as general purpose. What good is Horde when you need to deploy your already-packaged game to Steam via steamcmd, or when you need to update a remote config file for a content hotfix? Horde uses BuildGraph, meaning you need a node with a full engine sync just to run curl -X POST whatever.com
Game dev has a serious case of NIH - sometimes for good reasons, but in lots of cases it's because things have been set up in a way that makes changing them impractical. Using UBA as an example: FastBuild, Incredibuild, SN-DBS, and sccache all exist as either caching or distribution systems. Compiling a game engine isn't much different from compiling a web browser (which ninja was written for).
I’ve worked at two game studios where we’ve used general purpose CI systems and been able to push out builds in < 15 minutes. Horde and UBA exist to handle how epic are doing things internally, rather than as an inherent requirement on how to use the tools effectively. If you don’t have the same constraints as developing Unreal Engine (and Fortnite) then you don’t have the same needs.
(I worked for epic when horde came online, but don’t any more).
If you're at a games studio that values build-times, value that. I worked at a very good SRE-mindset studio and missed it, deeply, after I left. Back then I expected everyone to think and care about such things and have spent many, many hours advocating for best-in-class, more efficient, cheaper development practices.
WRT GitHub Actions... I agree with OOP: they leave much to be desired, esp. when doing high-velocity work. My CI/CD runs locally first, and then GHA is a (slower) low-noise verification step.
Actions is many things. It’s an event dispatcher, an orchestrator, an execution engine and runtime, an artifact registry and caching system, a workflow modeler, a marketplace, and a secrets manager. And I didn’t even list all of the things Actions is. It’s better at some of those things and not others.
The systems I like to design that use GHA usually only use the good parts. GitHub is a fine event dispatcher, for instance, but a very bad workflow orchestrator. So delegate that to a system that is good at it instead.
> but... the log browser? So what, just download the file, at least the CI works.
They answer your "so what" quite directly:
>> Build logs look like terminal output, because they are terminal output. ANSI colors work. Your test framework’s fancy formatting comes through intact. You’re not squinting at a web UI that has eaten your escape codes and rendered them as mojibake. This sounds minor. It is not minor. You are reading build logs dozens of times a day. The experience of reading them matters in the way that a comfortable chair matters. You only notice how much it matters after you’ve been sitting in a bad one for six hours and your back has filed a formal complaint.
Having to mentally ignore ANSI escape codes in raw logs (let alone being unable to search for text through them) is annoying as hell, to put it mildly.
Doesn't `less -R` solve the ANSI escape problem?
> Having to mentally ignore ANSI escape codes in raw logs (let alone being unable to search for text through them) is annoying as hell, to put it mildly.
You have a tool here, which is noted elsewhere: it's "less -R". Also there's another tool which analyzes your logs and color-codes them: "lnav".
lnav is incredibly powerful and helps understanding what's happening, when, where. It can also tail logs. Recommended usage is "your_command 2>&1 | lnav -t".
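If you're stuck with raw logs from a viewer that mangles escapes, stripping them yourself is also only a regex away. A minimal sketch (it covers common CSI sequences like colors and cursor moves, not every escape type):

```python
import re

# CSI escape sequences have the shape: ESC '[' parameters final-byte.
ANSI_RE = re.compile(r'\x1b\[[0-9;?]*[ -/]*[@-~]')

def strip_ansi(text: str) -> str:
    """Make raw CI log lines grep-able by removing color/control codes."""
    return ANSI_RE.sub('', text)

line = '\x1b[31mFAIL\x1b[0m test_login (0.42s)'
print(strip_ansi(line))  # FAIL test_login (0.42s)
```

Pipe a downloaded log through this and Ctrl-F works again.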
I was a very early customer of BuildKite. It’s lovely, very ergonomic, and gives you so much control.
The winning strategy for all CI environments is a build system facsimile that works on your machine, your CI's machine, and your test/uat/production with as few changes between them as your project requirements demand.
I start with a Makefile. The Makefile drives everything. Docker (compose), CI build steps, linting, and more. Sometimes a project outgrows it; other times it does not.
But it starts with one unitary tool for triggering work.
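A minimal sketch of that unitary entry point (target names and tool choices are placeholders, not from the comment):

```make
# One entry point for dev and CI alike; the real logic lives in scripts.
# (Recipe lines must be indented with a tab.)
.PHONY: lint test build ci

lint:
	./scripts/lint.sh

test: lint
	./scripts/run-tests.sh

build:
	docker build -t myapp:dev .

ci: test build        # the only target the CI config ever invokes
```

Locally you run `make test`; the CI config shrinks to a single `make ci` invocation, so there is nothing pipeline-specific to debug.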
Make is incredibly cursed. My favorite example is that it has built-in rules (oversimplified: extra Makefile code that is treated as if it existed in every Makefile) that will extract files from a version control system. https://www.gnu.org/software/make/manual/html_node/Catalogue...
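For what it's worth, GNU make's built-in rule database can be switched off entirely; a commonly used prologue (a sketch, GNU make specific):

```make
# Disable built-in implicit rules and variables, so only rules
# written in this Makefile apply:
MAKEFLAGS += --no-builtin-rules --no-builtin-variables
.SUFFIXES:        # clear the old-style suffix-rule list as well
```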
What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.
https://github.com/casey/just is an uncursed make (for task running purposes - it's not a general build system)
Ye, kick off into some higher-level language instead of being at the mercy of your CI provider's plugins.
I use Fastlane extensively on mobile, as it reduces boilerplate and gives enough structure that the inherent risk of depending on a 3rd party is worth it. If all else fails, it's just Ruby, so you can break out of it.
This line of thinking inspired me to write mkincl [0] which makes Makefiles composable and reusable across projects. We're a couple of years into adoption at work and it's proven to be both intuitive and flexible.
[0]: https://github.com/mkincl/mkincl
it's 2026. People still build fucking makefile generators
I think the README would be better with a clearer, up-front explanation of what this builds on top of using `make` directly.
I agree, but this is kind of an unachievable dream in medium to big projects.
I fought this fight for some years at my present job and nagged a lot in the beginning about the path we were getting into by not allowing developers to run the full pipeline (or most of it) on their local machines… the project decided otherwise, and now we spend a lot of time and resources on a behemoth of a CI infrastructure, because each MR takes about 10 pipeline builds (of trial and error) to be properly tested.
It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call. Some things just don't run on a local machine: fair. But a lot of things do, even very large things. Things can be scaled down, and the same harnesses can be used for your development environment, your CI environment, and your prod environment. You don't need a full prod db; you need a facsimile mirroring the real thing at 1/50th the size.
Yes, there will always be special exemptions: they suck, and we suffer as developers because we cannot replicate a prod-like environment in our local dev environment.
But I laugh when I join teams and they say that "our CI servers" can run it but our shitty laptops cannot, and I wonder why they can't just... spend more money on dev machines? Or perhaps spend some engineering effort so they work on both?
> It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call.
In my experience at work, anything that demands too much thought, collaboration between teams, or enforcement of hard development rules is always an unachievable dream in a medium-to-big project.
Note that I don't think it's technically unachievable (at all). I've just accepted that it's culturally (as in work culture) unachievable.
Funny enough, the LLMs are allowed to run builds on your local machine. The humans, not any more.
But it isn't a question of security. The project would very much like the developers to be able to run the pipelines on their machines.
It's just that management don't see it as worth it, in terms of development cost and limitations it would introduce in the current workflow, to enable the developers to do that.
> But it isn't a question of security.
Where did I mention security?
> in terms of development cost and limitations it would introduce in the current workflow
Well said. "in the current workflow". As in, not "in the development process". Those are unrelated items.
Sometimes the problem is that the project is bigger than it needs to be.
I tend to disagree with this as it seems like an ad for Nix/Buildkite...
If your CI invocations are anything more than running a script or a target on a build tool (make, etc.) where the real build/test steps exist and can be run locally on a dev workstation, you're making the CI system much more complex than it needs to be.
CI jobs should at most provide an environment and configuration (credentials, endpoints, etc.), as a dev would do locally.
This also makes your code CI agnostic - going between systems is fairly trivial as they contain minimal logic, just command invocations.
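Concretely, the workflow then degenerates to supplying environment and calling the entry point. A sketch in GitHub Actions syntax (workflow name, `make ci` target, and the secret are illustrative):

```yaml
# The entire workflow: provide credentials/environment, then invoke the
# same command a developer runs locally.
name: ci
on: [push, pull_request]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make ci
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}
```

Migrating to another CI system then means porting a dozen lines of config, not the build logic.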
Can 100% confirm this is not an ad (at least not for Buildkite) and was a lovely surprise to read for the team.
This so much - I remember migrating from one CI system to another a few years ago - I had built all of our pipelines to pull in some secrets and call a .sh file that did all the heavy lifting. The migration had a few pain points but was fairly easy. Meanwhile, the teams who had created their pipelines with the UI and broken them up into multiple steps were not happy at all.
Hey, at least you didn't pull the reflexive, "this must be AI slop!" comment that seems quite prevalent on HN lately.
The problem isn't CI/CD; the problem is "programming in configuration". We've somehow normalized a dev loop that involves `git commit -m "try fix"`, waiting 10 minutes, and repeating. Local reproduction of CI environments is still the missing link for most teams.
Bingo.
These tooling failures are a consequence of a failure of proper policy.
Tooling and Methodology!
Here’s the thing: build it first, then optimize it. Same goes for compile/release versus compile/debug/test/hack/compile/debug/test/test/code cycles.
That there is not a big enough distinction between a development build and a release build is a policy mistake, not a tooling ‘issue’.
Set things up properly, and anyone pushing through git into the tooling pipeline is going to get their fingers bent soon enough anyway, and learn how the machine mangles digits.
You can adopt this policy of environment isolation with any tool - it’s a method.
Tooling and Methodology!
`act` should help most teams reproducing CI locally.
act is horrible if:
* you have any remote resources that are needed during build
* for some reason your company doesn't have standardized build images
Killing engineer teams? Hyperbole thread titles need to be killed. I find github actions to be just fine. I prefer it to bitbucket and gitlab.
Yeah I was wondering how Microsoft is okay with Github murdering people but then was let down by the article.
I mean if you really wanted to get to that conclusion they do support Israel.
all the sides in that conflict are ok with killing civilians
aaand there we have godwin's law again
It's interesting that your invocation of Godwin's law equates Jews with Nazis. Why is that?
I clicked the article thinking it was about GitLab. Much of the criticism held true for GitLab anyway, particularly the insanely slow feedback loops these CI/CD systems create.
Can't blame gitlab for team not having a local dev setup.
You can, though. GHA and GitLab CI and all the others have a large feature set for orchestration (build matrices, triggers, etc.) that is hard to test in a local setup. Sometimes they interfere with the build because of flags, or the build fails because it got orchestrated on a different machine, or a package is missing, or the cache key was misconfigured, etc.
There are a bunch of failures of a build that have nothing to do with how your build itself works. Asking teams to rebuild all that orchestration logic into their builds is madness. We shouldn’t ask teams to have to replicate tests for features that are in the CI they use.
Indeed there are. But you iterate locally and care about CI once everything is working locally. It's not every Tuesday I get CI errors because a package was missing. It's rare unless you're in those 1000-little-microservice shops.
It is rare for our run-of-the-mill Java apps too; however, we notice it with:
Integration of code quality gates, documentation checks, linting, cross architecture builds, etc.
Most of this can be solved by doing the builds in a docker image that we also maintain ourselves. Then what remains is the interaction between the ci config for matrices, the tasks/actions to report back quality metrics, the integration with keyvaults to obtain deploy time secrets, etc.
Then there are the soft failures, missing a cache key causing many packages to be downloaded over and over again, or the same for the docker base images, etc.
We fix this for our 1000+ microservices, across hundreds of teams by maintaining a template that all services are mandated to use. It removes whole classes of errors and introduces whatever shenanigans we introduce. But it works for us.
If GHA, Azure Pipelines, etc., would provide a way of running builds locally that would speed up our development greatly.
Until then we have created linting based on CUE to parse the various yamls, resolving references to keystores, key ids, templates, etc., and making sure they exist. I think this is generic enough to open source even.
GitHub being less and less reliable nowadays just makes this more true.
In the past week I have seen:
- actions/checkout inexplicably failing, sometimes succeeding on 3rd retry (of the built-in retry logic)
- release ci jobs scheduling _twice_, causing failures, because ofc the release already exists
- jobs just not scheduling. Sometimes for 40m.
I have been using it actively for a few years and putting aside everything the author is saying, just the base reliability is going downhill.
I guess Zig was right. Too bad they missed Buildkite; Codeberg hasn't been that reliable or fast in my experience.
Yeah, do crons even work consistently for GitHub Actions? I tried to set one up the other day and it just randomly skipped runs. There were some docs that suggested they’re entirely unreliable as well.
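They are indeed best-effort: scheduled workflows can be delayed (or skipped) during periods of high load, and GitHub's docs suggest not scheduling on the hour for that reason. The trigger itself looks like this; the cron value is just an example:

```yaml
# Best-effort scheduling: runs may be delayed or dropped under load,
# so avoid minute 0 and don't rely on this for anything time-critical.
on:
  schedule:
    - cron: "17 4 * * *"   # 04:17 UTC daily; cron is evaluated in UTC
```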
Dead on. GitHub Actions is the worst CI tool I’ve ever used (maybe tied with Jenkins) and Buildkite is the best. Buildkite’s dynamic pipelines (the last item in the post) are so amazingly useful you’ll wonder how you ever did without them. You can do super cool things like have your unit test step spawn a test de-flaking step only if a test fails. Or control test parallelism based on the code changes you’re testing.
All of that on top of a rock-solid system for bringing your own runner pools which lets you use totally different machine types and configurations for each type of CI job.
Highly, highly recommend.
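For readers who haven't seen them: a Buildkite dynamic pipeline is just a script that emits steps at runtime and pipes them to the agent. A minimal sketch (the step labels, commands, and `TESTS_FAILED` flag are made up):

```shell
#!/bin/sh
# Generate pipeline steps at runtime instead of committing static YAML.
generate_steps() {
  cat <<'YAML'
steps:
  - label: "build"
    command: make build
YAML
  # Conditionally append a de-flake step -- the kind of branching
  # static YAML can't express:
  if [ "${TESTS_FAILED:-0}" = "1" ]; then
    cat <<'YAML'
  - label: "re-run flaky tests"
    command: make test-flaky
YAML
  fi
}

generate_steps   # in CI: generate_steps | buildkite-agent pipeline upload
```

Because the generator is ordinary code, it can inspect the diff, the previous build, or anything else before deciding what to run.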
Jenkins had a lot of issues and I’m glad to not be using it overall, but I did like defining pipelines in Groovy and I’ll take Groovy over YAML all day.
Jenkins, like many complex tools, is as good or bad as you make it. My last two employers had rock solid Jenkins environments because they were set up as close to vanilla as possible.
But yes, Groovy is a much better language for defining pipelines than YAML. Honestly pretty much any programming language at all is better than YAML. YAML is fine for config files, but not for something as complex as defining a CI pipeline.
What kills me is when these things add like control flow constructs to YAML.
Like just use an actual programming language!
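For illustration, this is the kind of control flow that ends up embedded in YAML, shown in GitHub Actions' expression syntax (the job itself is made up):

```yaml
# A loop (matrix) and a conditional, rebuilt out of YAML plus an
# embedded expression DSL instead of a programming language:
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: make test
      - run: make coverage
        if: ${{ matrix.os == 'ubuntu-latest' && github.ref == 'refs/heads/main' }}
```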
The biggest flaw of Jenkins is that by default it runs jobs directly in the builder's environment, since it was made in the pre-container era. But I do like the integration for viewing tests and benchmarks directly in the project, stuff that most CI/CD systems lack.
What's wrong with Jenkins? It's battle-tested and hardened. It works flawlessly even with thousands of tasks, and it WORKS OUT OF THE BOX.
IMO it's among the top 10 best free admin/dev tools written in the past 25 years.
It's too old and easy-to-use for anyone to hype it up as the next cool thing.
Ian Duncan, I was imagining you on a stage delivering this as a standup comedy show on Netflix.
My pet peeve with GitHub Actions was that if I want to do simple things like make a "release", I have to Google for and install packages from internet randos. Yes, it is possible this rando1234 is a founding GitHub employee and it is all safe. But why does something so basic need external JS packages?
Yeah, their "standard library" so to speak (basically everything under the actions org) is lacking. But for this specifically, you can use the gh CLI.
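For releases specifically, the gh CLI comes preinstalled on GitHub-hosted runners, so a workflow can cut a release without any third-party action. A sketch (job name, tag handling via a tag-push trigger, and artifact paths are illustrative):

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write          # creating a release needs write access
    steps:
      - uses: actions/checkout@v4
      - run: gh release create "$GITHUB_REF_NAME" dist/*.tar.gz --generate-notes
        env:
          GH_TOKEN: ${{ github.token }}
```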
GitHub Actions isn’t killing engineering teams; complacency in CI design is. CI should be reliable, inspectable, and reproducible, not just convenient.
The log viewer thing is what baffles me most.
Back in... I don't know, 2010, we used Jenkins. Yes, that Java thingy. It was kind of terrible (like every CI), but it had a "Warnings Plugin". It parsed the log output with regular expressions and presented new warnings and errors in a nice table. You could click on them and it would jump to the source. You could configure your own regular expressions (yes, then you have two problems, I know, but it still worked).
Then I had to switch to GitLab CI. Everyone was gushing how great GitLab CI was compared to Jenkins. I tried to find out: how do I extract warnings and errors from the log - no chance. To this day, I cannot understand how everyone just settled on "Yeah, we just open thousands of lines of log output and scroll until we see the error". Like an animal. So of course, I did what anyone would do: write a little script that parses the logs and generates an HTML artifact. It's still not as good as the Warnings Plugin from Jenkins, but hey, it's something...
I'm sure, eventually someone/AI will figure this out again and everyone will gush how great that new thing is that actually parses the logs and lets you jump directly to the source...
Don't get me wrong: Jenkins was and probably still is horrible. I don't want to go back. However, it had some pretty good features I still miss to this day.
Why do we need a log viewer at all?
My browser can handle tens of thousands of lines of logs, and has Ctrl-F that's useful for 99% of the searches I need. A better runner could just dump the logs and let the user take care of them.
Why most web development devolved into a React-like "you can't search for what you can't see" is a mystery.
The only thing I can understand is that GHA is awesome because it's YAML and everyone loves YAML. Irrationally. YAML is terrible.
> GitHub Actions is not good. It’s not even fine. It has market share because it’s right there in your repo
Microsoft being Microsoft, I guess. Making computing progressively less and less delightful, because your boss sees that their buggy crap is right there, so why don't you use it?
Agreed with absolutely all of this. Really well written. Right now at work we're getting along fine with Actions + WarpBuild but if/when things start getting annoying I'm going to switch us over to Buildkite, which I've used before and greatly enjoyed.
Pretty sure someone at MS told me that Actions was rewritten by the team who wrote Azure DevOps. So bureaucracy would be a feature.
That aside, GH Actions doesn’t seem any worse than GitLab. I forget why I stopped using CircleCI. Price maybe? I do remember liking the feature where you could enter the console of the CI job and run commands. That was awesome.
I agree though that yaml is not ideal.
I hope the author will check out RWX -- they say they've checked out most CI systems, but I don't think they've tried us out yet. We have everything they praise Buildkite for, except for managing your own compute (and that's coming, soon!). But we also built our own container execution model with CI specifically in mind. We've seen one too many Buildkite pipelines that have a 10 minute Docker build up front (!) and then have to pull a huge docker container across 40 parallel steps, and the overhead is enormous.
Can you explain how your product solves this problem? I clicked around your site and couldn't figure it out.
As a (very happy) RWX customer:
- Intermediate tasks are cached in a docker-like manner (content-addressed by filesystem and environment). Tasks in a CI pipeline build on previous ones by applying the filesystem of dependent tasks (AFAIU via overlayfs), so you don't execute the same task twice. The most prominent example of this is a feature branch that is up-to-date with main passes CI on main as soon as it's merged, as every task on main is a cache-hit with the CI execution on the feature branch.
- Failures: the UI surfaces failures to the top, and because of the caching semantics, you can re-run just the failed tasks without having to re-run their dependencies.
- Debugging: they expose a breakpoint (https://www.rwx.com/docs/rwx/remote-debugging) command that stops execution during a task and allows you to shell into the remote container for debugging, so you can debug interactively rather than pushing `env` and other debugging tasks again and again. And when you do need to push to test a fix, the caching semantics again mean you skip all the setup.
There's a whole lot of other stuff. You can generate tasks to execute in a CI pipeline via any programming language of your choice, the concurrency control supports multiple modes, no need for `actions/cache` because of the caching semantics and the incremental caching feature (https://www.rwx.com/docs/rwx/tool-caches).
And I've never had a problem with the logs.
I agree with all the points made about GH actions.
I haven't used as many CI systems as the author, but I've used GH Actions, GitLab CI, CodeBuild, and spent a lot of time with Jenkins.
I've only touched Buildkite briefly 6 years ago, at the time it seemed a little underwhelming.
The CI system I enjoyed the most was TeamCity, sadly I've only used it at one job for about a year, but it felt like something built by a competent team.
I'm curious what people who have used it over a longer time period think of it.
I feel like it should be more popular.
I used TeamCity for a while and it was decent - I'm sure defining pipelines in code must be possible, but the company I worked at seemed to have made this impossible with some in-house integration with their version control and release management software.
tc is probably the best console runner there is and I agree, it made CI not suck. It is also possible to make it very fast, with a bit of engineering and by hosting it on your own hardware. Unfortunately it's as legacy as Jenkins today. And in contrast to Jenkins it's not open source or free, and many parts of it, like the scheduler/orchestrator, are not pluggable.
But I don’t know about competent people, reading their release notes always got me thinking ”how can anyone write code where these bugs are even possible?”. But I guess that’s why many companies just write nonsense release notes today, to hide their incompetence ;)
Pour one out for the memory of CruiseControl, the OG (?) granddaddy of all CI systems in the form we would recognise them today.
> But Everyone Uses It!
All of my customers are on bitbucket.
One of them does not even use CI. We run tests locally and we deploy from a self-hosted TeamCity instance. It's a Django app with server-side HTML generation, so the deploy is copying files to the server and a restart. We implemented a Capistrano-like system in bash and it's been working since before Covid. No problems.
The other one uses bitbucket pipelines to run tests after git pushes on the branches for preproduction and production and to deploy to those systems. They use Capistrano because it's a Rails app (with a Vue frontend.) For some reason the integration tests don't run reliably either on the CI instances or on Macs, so we run them only on my Linux laptop. It's been in production since 2021.
A customer I'm not working with anymore used Travis, and another one used a CI whose name I don't remember. They also ran builds there because they were using Elixir with Phoenix, so we were creating a release and deploying it. No mere file copying. That was the most unpleasant deploy system of the bunch. A lot of wasted time from a push to a deploy.
In all of those cases logs are inevitably long but they don't crash the browser.
Good place to ask: I'm not comfortable with NPM-style `uses: randomAuthor/some-normal-action@1` for actions that should be included by default, like bumping version tags or uploading a file to the releases.
What's the accepted way to copy these into your own repo so you can make sure attackers won't update the script to leak my private repo and steal my `GITHUB_TOKEN`?
There are two solutions GitHub Actions people will tell you about. Both are fundamentally flawed because GitHub Actions Has a Package Manager, and It Might Be the Worst [1].
One thing people will say is to pin the commit SHA, so don't do "uses: randomAuthor/some-normal-action@v1", instead do "uses: randomAuthor/some-normal-action@e20fd1d81c3f403df57f5f06e2aa9653a6a60763". Alternatively, just fork the action into your own GitHub account and import that instead.
However, neither of these "solutions" work, because they do not pin the transitive dependencies.
Suppose I pin the action at a SHA or fork it, but that action still imports "tj-actions/changed-files". In that case, you would have still been pwned in the "tj-actions/changed-files" incident [2].
The only way to be sure is to manually traverse the dependency hierarchy, forking each action as you go down the "tree" and updating every action to only depend on code you control.
In other package managers, this is solved with a lockfile - go.sum, yarn.lock, ...
[1] https://nesbitt.io/2025/12/06/github-actions-package-manager...
[2] https://unit42.paloaltonetworks.com/github-actions-supply-ch...
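In workflow YAML, the SHA pin from the comment above looks like this (action names and the trailing version comment are illustrative). Note that it only fixes the top-level action, not anything that action itself pulls in:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pinned: a full commit SHA can't be moved, unlike a tag.
      # Convention is to record the matching release in a comment.
      - uses: randomAuthor/some-normal-action@e20fd1d81c3f403df57f5f06e2aa9653a6a60763 # v1.2.3
      # Unpinned: the author (or an attacker with their credentials)
      # can repoint the v1 tag at different code at any time.
      - uses: otherAuthor/other-action@v1
```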
After Azure DevOps and Jenkins, GitHub is like a breath of fresh air. It might be a fart in your face, but at least it's available within IT department guidelines, and any movement of air is preferable to the stifling insanity of the others.
This is roughly how I feel about cloudformation. May we please have terraform back? Ansible, even?
I think cdk is the one to use nowadays. Infrastructure as real code.
The worst part about CDK is, by far, that it's still backed by Cloudformation.
What pains are you experiencing? Cdk has far exceeded Ansible and Terraform in my experience.
Hooo boy where do I begin? Dependency deadlocks are the big one - you try to share resource attributes (eg ARN) from one stack to another. You remove the consumer and go to deploy again. The producer sees no more dependency so it prunes the export. But it can't delete the export, cause the consumer still needs it. You can't deploy the consumer, because the producer has to deploy first sequentially. And if you can't delete the consumer (eg your company mandates a CI pipeline deploy for everything) you gotta go bug Ops on slack, wait for someone who has the right perms to delete it, then redeploy.
You can't actually read real values from Parameters/exports (you get a token placeholder) so you can't store JSON then read it back and decode (unless in same stack, which is almost pointless). You can do some hacks with Fn:: though.
Deploying certain resources that have names specified (vs generated) often breaks because it has to create the new resource before destroying the old one, which it can't, because the name conflicts (it's the same name...cause it's the same construct).
It's wildly powerful though, which is great. But we have basically had to create our own internal library to solve what should be non-problems in an IaC system.
Would be hilarious if my coworker stumbled upon this. I know he reads hn and this has been my absolute crusade this quarter.
> Dependency deadlocks are the big one - you try to share resource attributes (eg ARN) from one stack to another. You remove the consumer and go to deploy again. The producer sees no more dependency so it prunes the export.
I’m a little puzzled. How are you getting dependency deadlocks if you’re not creating circular dependencies?
Also, exports in CloudFormation are explicit. I don’t see how this automatic pruning would occur.
> Deploying certain resources that have names specified (vs generated) often breaks
CDK tries to prevent this antipattern from happening by default. You have to explicitly make it name something. The best practice is to use tags to name things, not resource names.
I'll just echo the other poster with "deadlocks". It's obscene how slow CF is, and the fact that its failure modes often leave you in a state that feels extremely dangerous. I've had to contact AWS Support before due to CF locking up in an irrecoverable way due to cycles.
Why not just use Terraform, if you prefer that?
Because my employer has already standardized on CF?
What I find hardest about CI offerings is that each one has a unique DSL that inevitably has edge cases that you may only find out once you’ve tried it.
You might face that many times using GitLab CI. Random things don't work the way you think they should, and the worst part is you must learn their stupid custom DSL.
Not only that, there’s no way to debug the maze of CI pipelines but I imagine it’s a hard thing to achieve. How would I be able to locally run CI that also interacts with other projects CI like calling downstream pipelines?
That’s the nice thing about buildkite. Generate the pipeline in whatever language you want and upload as JSON or yaml.
JSON or YAML imply a buildkite DSL as there's no standard JSON or YAML format for build scripts
I assume by DSL they mean some custom templating language built on top, for things like iterating and if-conditions. If it's plain JSON/YAML you can produce that using any language you wish.
But do you provide SDKs in the languages? I mean even in gitlab I could technically generate YAML in python but what I needed was an SDK that understood the domain.
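Producing the plain-JSON form without any SDK really is just a few lines in whatever language you prefer. A sketch in Python (the step shape follows Buildkite's documented pipeline schema; the target names are made up):

```python
import json


def generate_pipeline(targets):
    """Build a Buildkite-style pipeline definition as plain data.

    `targets` is a hypothetical list of package directories; each one
    becomes a step with a label and a command.
    """
    steps = [
        {"label": f":package: test {t}", "command": f"make -C {t} test"}
        for t in targets
    ]
    return {"steps": steps}


if __name__ == "__main__":
    # In a real pipeline you would pipe this to
    # `buildkite-agent pipeline upload` instead of printing it.
    print(json.dumps(generate_pipeline(["api", "web"]), indent=2))
```

No SDK needed because the output is just data; a typed wrapper only adds value once conditionals and matrices get hairy.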
At which point did someone force OP to use GH Actions ?
It's fantastic for simple jobs, I use it for my hobbyist projects because I just need 20 to 30 lines to build and deploy a web build.
Just because a bike isn't good for traveling in freezing weather doesn't mean no one should own a bike.
Pick the right tool for the job.
Plus CI/CD is the boring part. I always imagined GH Actions as a quick and somewhat sloppy solution for hobbyist projects.
Not for anything serious.
Personally I like Drone more than Buildkite. It's as close to a perfect CI system as I've seen; just complex enough to do everything I need, with a design so stripped-down it can't be simpler. I occasionally check on WoodpeckerCI to see if it's reached parity with Drone. Now that AI coding is a thing, hopefully that'll happen soon
Nice write up, but wondering now what nix proposes in that space.
I've never used nix or nixos but a quick search led me to nixops, and then realized v4 is entirely being rewritten in rust.
I'm surprised they chose rust for glue code, and not a more dynamic and expressive language that could make things less rigid and easier to amend.
In the clojure world BigConfig [0], which I never used, would be my next stop in the build/integrate/deploy story, regardless of tech stack. It integrates workflow and templating with the full power of a dynamic language to compose various setups, from dot/yaml/tf/etc files to ops control planes (see their blog).
[0] https://bigconfig.it/
nods. nods again. Yep, this is exactly why we left GitHub for GitLab two years ago. Not one moment of regret.
Still, I wonder who still looks at CI build logs manually. You can have an agent look for you, and immediately let it come up with a fix.
GitHub has an integrated "let copilot look at the logs and figure out the issue" and I swear it has never worked once for me.
I really wonder in which universe people are living. GitHub Actions was a godsend when it was first released and it still continues to be great. It has just the right amount of abstractions. I've used many CIs in the past and I'd definitely prefer GA over any of them.
Have you used the log viewer? Because I swear the log viewer is the biggest letdown. I love that GitHub Actions is deeply integrated into GitHub. I hate the log viewer, and that's like one of the core parts of it.
Yeah, that's not a good part. I tend to avoid it by downloading the log and looking at it that way. I find it easier, and it's just one click.
> I have mass-tested these systems so that you don’t have to, and I have the scars to show for it, and I am here to tell you: GitHub Actions is not good.
> Every CI system eventually becomes “a bunch of YAML.” I’ve been through the five stages of grief about it and emerged on the other side, diminished but functional.
> I understand the appeal. I have felt it myself, late at night, after the fourth failed workflow run in a row. The desire to burn down the YAML temple and return to the simple honest earth of #!/bin/bash and set -euo pipefail. To cast off the chains of marketplace actions and reusable workflows and just write the damn commands. It feels like liberation. It is not.
Ah yes, misery loves company! There's nothing like a good rant (preferably about a technology you have to use too, although you hate its guts) to brighten up your Friday...
I have not had this experience. It sounds like a bad process rather than GitHub's fault. I've always had GitHub Actions double-checking the same checks I run locally before pushing.
I don't care if this is an advertisement for buildkite masquerading as a blog post or if this is just an honest rant. Either way, I gotta say it speaks a lot of truth.
GHA is quite empowering for solo devs. I just dev on my tiny machine and outsource all heavy work to GHA, and basically let Claude rip on the errors, rinse repeat.
I just can't stand using a build system tied to the code host. And that is really because I have an aversion to vendor lock-in.
Webhooks to an external system were such a better way to do it, and somehow we got away from that, because they don't want us to leave.
webhooks are to podcasts as github actions are to the things that spotify calls podcasts.
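For what it's worth, the webhook model is also tiny to reimplement. A sketch of the receiving side's signature check (GitHub signs deliveries with an `X-Hub-Signature-256` header; the HTTP handler and build-triggering wiring around this are up to you):

```python
import hashlib
import hmac


def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the raw payload.

    GitHub sends "sha256=<hex digest>" computed as HMAC-SHA256 of the
    request body with the webhook secret. compare_digest avoids
    timing-based leaks of the expected value.
    """
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

A valid signature means the push event really came from GitHub, at which point an external system can kick off whatever build it likes.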
I don't have much experience with Guthub Actions, but I'll say this does sound worse than Azure DevOps, which I did not imagine was possible. I've never liked any CI system, but ADO must be one of the lower circles of hell.
Guthub! May be a typo, I'll be using it anyway ...
That was a fun read, if anything. It explains why I've always heard that GitHub Actions is only good for personal projects.
We started using Buildkite at $DAYJOB years ago and haven't looked back. Incredibly, GitHub Actions seems to have gotten _worse_ in the interim. Absolutely no regrets from switching.
I think Github Actions is just a lead for Microsoft customers to use paid Azure DevOps. It is bad intentionally.
Out of the frying pan into the molten core of the sun.
I matured as an engineer using various CI tools and discovering hands-on that these tools are unreliable (pipelines often fail inconsistently). I am surprised to find that there are better systems, and I'd like to learn more.
I agree with the gripes, but buildkite is not the answer
If I cannot fully self host an open source project, it is not a contender for my next ci system
I was excited for actions because it was “next to” my source code.
I (tend to) complain about actions because I use them.
Open to someone telling me there is a perfect solution out there. But today my actions fixes were not actions related. Just maintenance.
>the GitHub Actions log viewer is the only one that has crashed my browser. Not once. Repeatedly. Reliably.
Well, THIS blog post page reliably eats the CPU on scrolling, and the scrolling is very jerky, despite the page having only text and no other visible elements.
Is it great? No. Is it usually good enough? Yes. CI shouldn’t be a main quest for most engineers. Just get it rolling early and adjust as needed.
I think this author would benefit from using the Refined GitHub browser extension, which fixes a lot of these problems.
I think people shouldn't go installing random browser extensions like they shouldn't go installing random package manager packages, which is part of his argument
Declarative (a la bazel and garnix) is obviously the way to go, but we're still living in the s̶t̶o̶n̶e̶ YAML age.
> If you’re a small team with a simple app and straightforward tests, it’s probably fine. I’m not going to tell you to rip it out.
> But if you’re running a real production system, if you have a monorepo, if your builds take more than five minutes, if you care about supply chain security, if you want to actually own your CI: look at Buildkite.
Goes in line with exactly what I said in 2020 [0] about GitHub vs self-hosting. Not a big deal for individuals, but for large businesses it's a problem if you can't push that critical change because your CI is down every week.
[0] https://news.ycombinator.com/item?id=22867803
I know this is off topic, but that homepage is a piece of work: https://buildkite.com
I get it's quirky, but I'm at a low energy state and just wanted to know what it does...
Right before I churned out, I happened to click "[E] Exit to classic Buildkite" and get sent to their original homepage: https://buildkite.com/platform/
It just tells you what Buildkite does! Sure, it looks like default B2B SaaS, but more importantly it's clear. "The fastest CI platform" instead of some LinkedIn-slop manifesto.
If I want to know why it's fast, I scroll down and learn it scales to lots of build agents and has unlimited parallelism!
And if I wonder if it plays nice with my stack, I scroll and there's logos for a bunch of well known testing frameworks!
And if I want to know if this isn't v0.0001 pre-alpha software by a pre-seed company spending runway on science-fair home pages, this one has social proof that isn't buried in a pseudo-intellectual rant!
-
I went down the rabbit hole of what led to this and it's... interesting, to say the least.
https://medium.com/design-bootcamp/nothing-works-until-you-m...
https://www.reddit.com/r/branding/comments/1pi6b8g/nothing_w...
https://www.reddit.com/r/devops/comments/1petsis/comment/nsm...
Hello mate, Head of Brand and Design at BK here. Thanks for the feedback, genuinely; the homepage experiment has been divisive, in a great way. Some folk love it, some folk hate it, some just can't be bothered with it. All fair.
Glad that the classic site hit the mark, but there's a lot of work to do to make that clearer than it is; we're working on the next iteration, which will sunset the CLI homepage into an easter egg.
Happy to take more critique, either on the execution or the rabbit hole.
I did a BK search earlier in the article and ended up on the same page, decided I couldn't be bothered to play those sorts of games, and clicked away. The GP's link actually looks rather interesting, so I'll investigate. Take this as a hate-it-folk vote.
Great of you to accept critiques, but I don't think there's anything more I can add.
You brought up Planetscale's markdown homepage rework in one of those posts and I actually think it's great... but it's also clear, direct, and has no hidden information.
I'd love to see what happens to conversions once you retire this to an Easter Egg.
oh wow, that's not good.
I run a company that uses Nix for everything.
We're running GitHub Actions. It's good. All the real logic is in Nix, and we mostly use our own runners. The rest of the UI that GitHub Actions provides is very nice.
We previously used a CI vendor which specialised in building Nix projects. We wanted to like it, but it was really clunky. GitHub Actions was a significant quality of life improvement for us.
None of my colleagues have died. GitHub Actions is not killing my engineering team at any rate.
I'll be that guy.
For what boils down to a personal take, light on technicalities, this reads like an uncannily impersonal, prolonged attempt at dramatic writing.
If you believe the dates in this blog, it's totally different in tone, style, and wording to a safely distant 2021 post (https://www.iankduncan.com/personal/2021-10-04-garbage-in-ne...).
It made me feel paranoid just in about three paragraphs. I apologize to the author if I'm wrong but we all understand what my gut tells me.
I also sense an LLM vibe in this post.
@dang can we get this renamed to "GitHub Actions could be better"
Things I dislike about GHA (on Enterprise Server)
* Workflows are only registered once pushed to main, impossible to test the first runs in a branch.
* MS/GH don't care much about GHES as they do github.com, I think they'd like to see it just die. Massive lack of feature parity.
* Labels: If any of your workflows trigger from a label, they ALL DO. You can't target labels only to certain workflows, they all run and then cancel, polluting your checks.
* Deployments: What do deployments even do? There's no way to actually manage a deployment.
* Statefulness: No native way to store state between runs in the same workflow or PR, you would think you could save some sort of state somewhere but you have to manage it all yourself with manifests or something else.
I can go on
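For the statefulness point above, the usual self-managed workaround (a sketch, assuming `actions/cache` behaves on GHES as it does on github.com) is abusing the cache with a run-scoped key plus a `restore-keys` prefix, since cache entries are immutable once written:

```yaml
steps:
  - uses: actions/cache@v4
    with:
      path: .ci-state
      # Each run writes a fresh entry; restore-keys falls back to the
      # most recently written entry with this prefix from a prior run.
      key: ci-state-${{ github.ref }}-${{ github.run_id }}
      restore-keys: |
        ci-state-${{ github.ref }}-
```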
The internet makes me feel like the only person that doesn't mind Jenkins. Idk it just gets the job done ime.
I used Jenkins for years at a previous job - for the longest time it was a confusing mess of pipelines coupled with being a fairly outdated version.
Once it was updated to latest and all the bad old manually created jobs were removed it was decent.
This. In my experience the people actively disliking it have only ever used Jenkins 1, or for some reason only used freestyle jobs.
There are numerous ways to shoot yourself in the foot, though, and everything must be configured properly to get to feature parity with GHA (mail server, plugins, credentials, sso, https, port forwarding, webhooks, GitHub app, ...).
But once those are out of the way, it's the most flexible and fastest CI system I have ever used.
Nah I don't mind Jenkins either. I think it's unpopular because you can definitely turn it into a monstrosity, and I think a lot of people have only seen it in that state.
I also like Jenkins. I think you can turn it into a mess, but in the right hands it’s a powerful tool.
> You’ve upgraded the engine but you’re still driving the car that catches fire when you turn on the radio.
And fixing the pyro-radio bug will bring other issues, for sure, so they won't, because someone's workflow will rely on the fact that turning on the radio sets the car on fire: https://xkcd.com/1172/
I think we can honestly remove the word Actions from the headline and still agree.
It used to be fast ish!
Now it's full ugh.
Happy user of GitLab CI here.
I see the appeal of GitHub for sharing open source - the interface is so much cleaner and easier to find all you are looking for (GitLab could improve there).
But for CI/CD GitHub doesn’t even come close to GitLab in the usability department, and that’s before we even talk about pricing and the free tiers. People need to give it a try and see what they are missing.
“Microsoft is where ambitious developer tools go to become enterprise SKUs“
It’s hard to remember, sometimes, that Microsoft was one of the little gadflies that buzzed around annoying the Big Guys.
I hate to say this. I can't even believe I am saying it, but this article feels like it was written in a different universe where LLMs don't exist. I understand they don't magically solve all of these problems, and I'm not suggesting that it's as simple as "make the robot do it for you" either.
However, there are very real things LLMs can do that greatly reduce the pain here. Understanding 800 lines of bash is simply not the boogeyman it used to be a few years ago. It completely fits in context. LLMs are excellent at bash. With a bit of critical thinking when they hit a wall, LLM agents are even great at GitHub Actions.
The scariest thing about this article is the number of things it's right about. Yet my uncharacteristic response to that is one big shrug, because frankly I'm not afraid of it anymore. This stuff has never been hard, or maybe it has. Maybe it still is for people/companies who have super complex needs. I guess we're not them. LLMs are not solving my most complex problems, but they're killing the pain of glue left and right.
The flip side of your argument is that it no longer matters how obtuse, complicated, baroque, brittle, underspecified, or poorly documented software is anymore. If we can slap an LLM on top of it to paper over those aspects, it’s fine. Maybe efficiency still counts, but only when it meaningfully impacts individual spend.
Additionally it's not like you're constrained to write it in bash. You could use Python or any other language. The author talks about how you're now redeveloping a shitty CI system with no tests? Well, add some tests for it! It's not rocket science. Yes, your CI system is part of your project and something you should be including in your work. I drew this conclusion way back in the days where I was writing C and C++ and had days where I spent more time on the build system than on the actual code. It's frustrating but at the end of the day having a reliable way to build and test your code is not less important than the code itself. Treat it like a real project.
> this is a product made by one of the richest companies on earth.
nit: no, it was made by a group of engineers that loved git and wanted to make a distributed remote git repository. But it was acquired/bought out then subsequently enshittified by the richest/worst company on earth.
Otherwise the rest of this piece vibes with me.
All CI is just various levels of bullshit over a bash script anyway.
Yes, but no need for the attitude.
Linux powers the world in this area and bash is the glue which executes all these commands on servers.
Any program or language you write to try and 'revolutionise CI' and be this glue will ultimately make a child-process call to bash/sh anyhow, and you need to read stdout, stderr, and the exit codes to figure out the next steps.
Or you can just use bash.
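The glue described above (spawn a command, capture its output, branch on the exit code) really is just a few lines of bash. A minimal sketch of a step runner, with made-up function and step names:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Minimal "CI step" runner: run a command, capture combined
# stdout/stderr, and report pass/fail based on the exit code.
run_step() {
  local name="$1"; shift
  local log
  if log="$("$@" 2>&1)"; then
    echo "PASS: $name"
  else
    echo "FAIL: $name (exit $?)"
    printf '%s\n' "$log"
    return 1
  fi
}

run_step "checkout" true
```

Everything a hosted CI adds (triggers, isolation, log storage, a UI) wraps around exactly this loop.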