In praise of –dry-run

(henrikwarne.com)

112 points | by ingve 10 hours ago

67 comments

  • mycall 5 hours ago

    I like the opposite too: a -commit or -execute flag, where running with the defaults is assumed to be immutable, like a dry run. That simplifies validation and makes going live explicit.

    • torstenvl an hour ago

      I have a parallel directory deduper that uses hard links and adopted this pattern exactly.

      By default it'll only tell you which files are identical between the two parallel directory structures.

      If you want it to actually replace the files with hard links, you have to use the --execute flag.
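
      A minimal sketch of that shape in Python (hypothetical names and flag; not the actual tool):

          # dedupe.py LEFT RIGHT [--execute]
          # Report identical files across two parallel trees; only replace
          # the right-hand copy with a hard link when --execute is given.
          import argparse, filecmp, os

          parser = argparse.ArgumentParser()
          parser.add_argument("left")
          parser.add_argument("right")
          parser.add_argument("--execute", action="store_true",
                              help="replace duplicates with hard links")
          args = parser.parse_args()

          for dirpath, _, filenames in os.walk(args.left):
              for name in filenames:
                  a = os.path.join(dirpath, name)
                  b = os.path.join(args.right, os.path.relpath(a, args.left))
                  if not os.path.isfile(b) or os.path.samefile(a, b):
                      continue  # no counterpart, or already hard-linked
                  if filecmp.cmp(a, b, shallow=False):
                      print(f"identical: {a} == {b}")
                      if args.execute:
                          os.remove(b)
                          os.link(a, b)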

    • Twirrim 4 hours ago

      I've leaned heavily toward this for the last 8 or so years.

      I've yet to see anyone mistakenly modify anything when they have to pass --commit, whereas I've repeatedly had people accidentally modify stuff because they forgot --dry-run.

      • IgorPartola 4 hours ago

        I wouldn’t want most things to work this way:

            $ rm file.bin
            $ rm --commit file.bin
            $ cat foo.txt > bar.txt
            $ cat foo.txt | tee --write-for-real bar.txt
            $ cp balm.mp3 pow.mp3
            $ cp --i-mean-it balm.mp3 pow.mp3
        
        There is a time and a place for it, but it should not be the majority of use cases.

        • ronjakoi 34 minutes ago

          I used to have alias rm='rm -i' for a few years to be careful, but I took it out once I realised that I had just begun adding -f all the time.

        • Darfk 3 hours ago

          Totally agree it shouldn't be for basic tools, but if I'm ever developing a script that performs any kind of logic before reaching out to a DB or vendor API and modifies 100k user records, creating a flag to just verify the sanity of the logic is a necessity.

          • Joker_vD 3 hours ago

                shopt -s expand_aliases  # needed in bash scripts; aliases are off by default
                if [ -n "$DRY_RUN" ] ; then
                    alias rm='echo rm'
                    alias cp='echo cp'
                fi
            
            Of course, output redirects will still overwrite the files, since the shell does it and IIRC this behaviour can't be changed.

            • digiown 3 hours ago

              set -o noclobber

          • james_marks 3 hours ago

            Yep. First thing I do for this kind of thing is make a preview=true flag so I don't accidentally run destructive actions.

        • digiown 3 hours ago

          For most of these local data manipulation type of commands, I'd rather just have them behave dangerously, and rely on filesystem snapshots to roll back when needed. With modern filesystems like zfs or btrfs, you can take a full snapshot every minute and keep it for a while to negate the damage done by almost all of these scripts. They double as a backup solution too.

        • hdjrudni 3 hours ago

          Even in those basic examples, it probably would be useful. `cp` to a blank file? No problem. `cp` over an existing file? Yeah, I want to be warned.

          `rm` a single file? Fine. `rm /`? Maybe block that one.

    • spike021 2 hours ago

      There was a tool I used some time ago that required typing in a word or phrase to acknowledge that you know it's doing the run for real.

      Pros and cons to each but I did like that because it was much more difficult to fat finger or absentmindedly use the wrong parameter.

    • xyse53 4 hours ago

      Yeah I'm more of a `--wet-run` `-w` fan myself. But it does depend on how serious/annoying the opposite is.

      • aqme28 4 hours ago

        I've done that, but I hate the term "wet run."

        I use "live run" now, which I think gets the point across without being sort of uncomfortable.

        • IgorPartola 4 hours ago

          --with-danger

          --make-it-so

          --do-the-thing

          --go-nuts

          --safety-off

          So many fun options.

          • Darfk 3 hours ago

            I'm a fan of --safety-off. It gives off an 'aim away from face' or 'mishandle me and I'll blow a chunk out of your DB' vibe.

          • torstenvl an hour ago

            It's in the UI, not the command line, but I like Chromium's thisisunsafe.

          • JsonCameron 3 hours ago

            I've done a few --execute --i-know-what-im-doing for some more dangerous scripts.

            • altairprime 2 hours ago

              May I recommend --I-take-responsibility-for-the-outcome-of-proceeding and require a capital I?

          • altairprime 2 hours ago

            --commit is solid too

        • Quekid5 3 hours ago

          Moist run is the way.

    • lazide 2 hours ago

      Just don’t randomly mix and match the approaches or you are in for a bad time.

  • arjie 6 hours ago

    In order to make it work without polluting the codebase, I find that I have to move the persistence into an injectable strategy, which makes the design better anyway (see the sketch below). If you keep sprinkling `if dry_run:` everywhere, you're screwed.

    Also, if I'm being honest, it's much better to use `--wet-run` for the production run than to ask people to run `--dry-run` for the test run. Less likely to accidentally fire off the real stuff.
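
    Roughly what the injectable strategy might look like in Python (illustrative names, not from the comment above):

        import argparse

        class DbWriter:
            def save(self, record):
                print(f"saving {record!r}")  # stand-in for a real DB write

        class LoggingWriter:
            def save(self, record):
                print(f"would save: {record!r}")  # dry run: report only

        def migrate(records, writer):
            # The core logic never branches on dry_run; it uses whichever
            # strategy was injected, so the check lives in exactly one place.
            for record in records:
                writer.save(record)

        parser = argparse.ArgumentParser()
        parser.add_argument("--wet-run", action="store_true")
        args = parser.parse_args()

        writer = DbWriter() if args.wet_run else LoggingWriter()
        migrate(["alice", "bob"], writer)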

    • wging 5 hours ago

      One nice way to do things, if you can get away with it, is to model the actions your application takes explicitly, and pass them to a central thing that actually handles them. Then there can be one place in your code that actually needs to understand whether it's doing a dry run or not. Ideally this would be just returning them from your core logic, "functional core, imperative shell" style.
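
      For example, a hedged sketch (the action types and the apply logic here are made up):

          import os, shutil
          from dataclasses import dataclass

          @dataclass
          class Copy:
              src: str
              dst: str

          @dataclass
          class Delete:
              path: str

          def plan():
              # Functional core: pure logic that returns a list of actions
              # instead of performing any I/O itself.
              return [Copy("a.txt", "backup/a.txt"), Delete("a.txt")]

          def execute(actions, dry_run):
              # Imperative shell: the one place that knows about dry_run.
              for act in actions:
                  if dry_run:
                      print(f"would apply: {act}")
                  elif isinstance(act, Copy):
                      shutil.copy(act.src, act.dst)
                  elif isinstance(act, Delete):
                      os.remove(act.path)

          execute(plan(), dry_run=True)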

      • WCSTombs 4 hours ago

        I totally agree with both this and the comment you replied to. The common thread is that you can architect the application in such a way that dry vs. wet running can be handled transparently, and in general these are just good designs.

        • IgorPartola 4 hours ago

          That’s what I prefer as well. A generation step and an execution step where the executor can be just a logger or the real deal.

    • ryandrake 4 hours ago

      I don't want to have to type rm --wet-run tempfile.tmp every time, or mkdir -p --yes-really-do-it /usr/local/bin

      The program should default to actually doing whatever thing you're asking it to do.

      On the other hand it would be great if every tool had an --undo argument that would undo the last thing that program did.

    • sh-run 4 hours ago

      I don't like the sound of `--wet-run`, but on more than one occasion I've written tools (and less frequently services) that default to `dry-run` and require `--no-dry-run` to actually make changes.

      For services, I prefer having them detect where they are running, i.e. if it's running in a dev environment, it's going to use a dev DB by default.

    • segmondy 6 hours ago

      This is where design patterns come in handy, even though folks roll their eyes at them.

      • nstart 4 hours ago

        Design patterns are one of those things where you have to go through the full cycle to really use it effectively. It goes through the stages:

        No patterns -> Everything must follow the Gang of Four's patterns!!! -> OMG, I can't read code anymore, I'm just looking at factories. No more patterns!!! -> Patterns are useful as a response to very specific contexts.

        I remember being religious about strategy patterns on an app I developed once where I kept the db layer separated from the code so that I could do data management as a strategy. Theoretically this would mean that if I ever switched DBs it would be effortless to create a new strategy and swap it out using a config. I could even do tests using in memory structures instead of DBs which made TDD ultra fast.

        The DB switchover never happened, and the effort I put into maintaining the pattern was more than the effort it would have taken me to swap a DB out later :,)

        • tbossanova an hour ago

          What about the productivity gains from the in-memory DB for tests, though? Hard to measure, I guess.

      • cake-rusk 5 hours ago

        Design patterns exist to paper over language deficiencies. Use a language which is not deficient.

        • WCSTombs 5 hours ago

          There's some truth to this, since some design patterns can simply be implemented "for good" in a sufficiently powerful language, but I don't find it's true in general. Unfortunately, it has become something of a thought-terminating cliché. Some common design patterns are so flexible that if you really implemented them in full generality as, say, some library function, its interface would be so complex that it likely wouldn't be a net win.

        • awesome_dude 4 hours ago

          Just my two cents - but a general purpose language is going to need to be coupled with design patterns in order to be useful for different tasks.

          I'm using MVC design patterns for some codebases, I'm using DDD plus Event sourcing and Event Driven for others.

          I suspect that you are thinking of a small subset of design patterns (e.g. Gang of Four-derived patterns like Visitor, Strategy, or Iterator).

        • antinomicus 5 hours ago

          Like what?

  • ElevenLathe 7 hours ago

    I usually do the opposite and add a --really flag to my CLI utilities, so that they are read-only by default and extra effort is needed to screw things up.

    • eichin 6 hours ago

      I've committed "--i-meant-that" for a destroy-the-remote-machine command that normally (without the arg) gives you a message and 10s to hit ^C if you're not sure, for some particularly impatient coworkers. It never ended up being used inappropriately, which is luck (but we never quantified how much luck :-)

      • tkclough 3 hours ago

        I like the timer idea. I do something kinda similar by prompting the user to enter some short random code to continue.

        I guess the goal for both is to give the user a chance to get out of autopilot, and avoid up-arrowing and re-executing.
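
        Something along these lines, presumably (a hypothetical Python sketch):

            import secrets, string, sys

            def confirm_or_abort():
                # Retyping a short random code defeats muscle memory:
                # up-arrow + Enter can't re-trigger the destructive path.
                code = "".join(secrets.choice(string.ascii_lowercase)
                               for _ in range(4))
                if input(f"Type '{code}' to continue: ") != code:
                    sys.exit("aborted")

            confirm_or_abort()
            print("doing the dangerous thing...")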

    • weikju 6 hours ago

      Came here to say the same

  • skissane 5 hours ago

    In one (internal) CLI I maintain, I actually put the `if not dry_run:` inside the code which calls the REST API, because I have a setting to log HTTP calls as curl commands, and that way in dry-run mode I can get the HTTP calls it would have made without it actually making them.

    And this works well if your CLI command is simply performing a single operation, e.g. calling a single REST API.

    But the moment it starts to do anything more complex (e.g. call API1, and then send the results of API1 to API2), it becomes a lot more difficult.

    Of course, you can simulate what API1 is likely to have returned; but suddenly you have something a lot more complex and error-prone than just `if not dry_run:`.
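
    A rough sketch of the single-operation case (hypothetical helper; assumes the `requests` library):

        import json, shlex, requests

        def call_api(method, url, payload=None, dry_run=False):
            # Single choke point for HTTP: log every call as a curl
            # command, and skip the real request in dry-run mode.
            cmd = ["curl", "-X", method.upper(), url]
            if payload is not None:
                cmd += ["-H", "Content-Type: application/json",
                        "-d", json.dumps(payload)]
            print(" ".join(shlex.quote(part) for part in cmd))
            if not dry_run:
                return requests.request(method, url, json=payload)

        call_api("post", "https://api.example.com/users",
                 {"name": "alice"}, dry_run=True)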

    • scruple 3 hours ago

      Having one place that does the things (or just generally limiting those places) keeps the dry_run check from polluting the entire codebase. I maintain a lot of CLI tooling that's run by headless VMs in automation pipelines, and we do this with basically every single tool.

  • BrouteMinou 2 hours ago

    One of the kick-ass features of PowerShell is that you only need to add `[CmdletBinding(SupportsShouldProcess)]` to get the `-WhatIf` dry-run for your functions.

    Quite handy.

  • CGamesPlay 4 hours ago

    For me the ideal case is three-state. When run interactively with no flags, print a dry-run result and prompt the user to confirm the action; for non-interactive invocations, choose a default. In both cases, accept either a --dry-run or a --yes flag that indicates the choice to be made (see the sketch below).

    This should always be included in any application that has a clear plan-then-execute flow, and it's definitely nice to have in other cases as well.
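
    A minimal sketch of the three states with Python's argparse (hypothetical tool):

        import argparse, sys

        parser = argparse.ArgumentParser()
        group = parser.add_mutually_exclusive_group()
        group.add_argument("--dry-run", action="store_true")
        group.add_argument("--yes", action="store_true")
        args = parser.parse_args()

        print("plan: would delete 3 stale records")  # always show the plan

        if args.dry_run:
            sys.exit(0)
        if not args.yes:
            if sys.stdin.isatty():
                # Interactive: show the plan, then ask for confirmation.
                if input("Proceed? [y/N] ").strip().lower() != "y":
                    sys.exit("aborted")
            else:
                # Non-interactive default: refuse without an explicit --yes.
                sys.exit("refusing to proceed without --yes")

        print("executing plan...")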

  • mystifyingpoi an hour ago

    I like doing the same in CI jobs: in Jenkins, I'll add a DRY_RUN parameter that makes the whole job read-only. A script that does the deployment would then only write out what would be done.

  • alexhans an hour ago

    Agreed. For me, good help text, a dry run, and a README with good examples have been the norm for work tools for a while.

    It's even more relevant now that you can get the LLMs/CLI agents to use your deterministic CLI tools.

  • cjonas 5 hours ago

    We have an internal framework for building migrations, and the "dry run" is a core part of the dev cycle. It allows you to test your replication plan and transformations without touching the target. Not to mention, a load that could take >24 hours completes in minutes.

  • zzo38computer 6 hours ago

    I think dry-run mode is sometimes useful for many programs (and I sometimes do use it). In some cases you can use standard I/O so that it is not needed, because you can control what is done with the output. Sometimes you might miss something, especially if the code is messy, although security systems might help a bit. However, you can sometimes make the code less messy if the I/O is handled in a different way, e.g. by making the functions that make changes (the I/O parts of your program) handle them so that the number of dry-run checks is reduced, because only a few functions need them. My idea of a system with capability-based security would allow this (as well as many other benefits; a capability-based system has a lot of benefits beyond only the security system). Even with existing security it can be done (e.g. with file permissions), although not as well as with capability-based security.

  • bikelang 7 hours ago

    I love `--dry-run` flags for CLI tooling I build. If you plan your applications around this kind of functionality upfront, then I find it doesn't have to pollute your code too much. In a language like Go or Rust, I'll use an option/builder design pattern, and whatever I'm ultimately writing to (remote file system, database, pubsub, etc.) will instead write to a logger. I find this incredibly helpful in local dev, but it's also useful in production. Even with high test coverage, it can be a bit spooky to turn on a new, consequential feature, especially one that mutates data. I like to use dry run and enable this in our production envs just to ensure that things meet the functional and performance qualities we expect before actually enabling. This has definitely saved our bacon before (so many edge cases with prod data and request traffic).

  • tegiddrone 4 hours ago

    I’m interested to know the etymology and history of the term. Somehow I imagine an inked printing press as the “wet run.”

    • hydrox24 3 hours ago

      It seems to have originated in the US with Fire Departments:

      > These reports show that a dry run in the jargon of the fire service at this period [1880s–1890s] was one that didn’t involve the use of water, as opposed to a wet run that did.

      https://www.worldwidewords.org/qa/qa-dry1.htm

    • jofzar 3 hours ago

      Interestingly, the one place I have seen "dry run" actually mean "dry run" is using an air compressor to check that a water loop (in a computer) doesn't leak, by seeing if there's no drop in pressure.

  • calebhwin an hour ago

    And it's more important than ever in the age of coding agents.

  • taude 5 hours ago

    Funny enough, when creating CLIs with Claude Code (and GitHub Copilot), they've both added `--dry-run` to my CLIs without me even prompting it.

    I prefer the inverse, though: default off, and then add `--commit` or `--just-do-it` to make it actually run.

  • aappleby 4 hours ago

    What if the tool required an "un-safeword" to do destructive things?

    "Do you really want to 'rm -rf /'? Type 'fiberglass' to proceed."

    • jabroni_salad 3 hours ago

      There is a package called molly-guard that makes you type the computer's hostname when you are trying to do a shutdown or restart. I love it.

    • nthdeui 4 hours ago

      Like tarsnap's --nuke command:

        --nuke  Delete all of the archives stored.  To protect against accidental
                data loss, tarsnap will ask you to type the text "No Tomorrow"
                when using the --nuke command.

  • throwaway314155 4 hours ago

    Sort of a strange article. You don't see that many people _not_ praising --dry-run (speaking of which, the author should really learn to use long options with a double dash).

    • analog31 2 hours ago

      I only saw the emdash in the thread link, but I do know that an iPad "wants" to turn a double dash into an emdash automatically. I have no idea how to disable that default.

    • CGamesPlay 4 hours ago

      I'm not aware of any CLI arguments that accept emdash for long arguments–but I'm here for it. "A CLI framework for the LLM era"

  • TZubiri 4 hours ago

    I use --dry-run when I'm coding and I control the code.

    Otherwise it's not very wise to trust the application on what should be a deputy responsibility.

    Nowadays I'd probably use OverlayFS (or just Docker) to see what the changes would be, without ever risking the original FS.

    • throwaway290 2 hours ago

      How do you easily diff what changed between Docker and host?

    • calvinmorrison 4 hours ago

    --dry-run

    --really

    --really-really

    --yolo

  • awesome_dude 5 hours ago

    pffft, if you aren't dropping production databases first thing in the morning by accident, how are you going to wake yourself up :-)