Turn Dependabot Off

(words.filippo.io)

396 points | by todsacerdoti 10 hours ago ago

103 comments

  • nfm 9 hours ago ago

    The number of ReDoS vulnerabilities we see in Dependabot alerts for NPM packages we’re only using in client code is absurd. I’d love a fix for this that was aware of whether the package is running on our backend or not. Client side ReDoS is not relevant to us at all.

    • staticassertion 8 hours ago ago

      TBH I think that DoS needs to stop being considered a vulnerability. It's an availability concern, and availability, despite being part of CIA, is really more of a principle for security than the domain of security. In practice, availability is far better categorized as an operational or engineering concern than a security concern, and it does far, far more harm to categorize DoS as a security concern than it does to help.

      It's just a silly historical artifact that we treat DoS as special, imo.

      • jpollock 8 hours ago ago

        The severity of the DoS depends on the system being attacked, and how it is configured to behave on failure.

        If the system is configured to "fail open", and it's something validating access (say anti-fraud), then the DoS becomes a fraud hole and profitable to exploit. Once discovered, this runs away _really_ quickly.

        Treating DoS as affecting availability converts the issue into a "do I want to spend $X from a shakedown, or $Y to avoid being shaken down in the first place?"

        Then, "what happens when people find out I pay out on shakedowns?"

        • staticassertion 8 hours ago ago

          If the system "fails open" then it's not a DoS, it's a privilege escalation. What you're describing here is just a matter of threat modeling, which is up to you to perform and not a matter for CVEs. CVEs are local properties, and DoS does not deserve to be a local property that we issue CVEs for.

          • otabdeveloper4 an hour ago ago

            You're making too much sense for a computer security specialist.

        • michaelt 8 hours ago ago

          > If the system is configured to "fail open", and it's something validating access (say anti-fraud),

          The problem here isn't the DoS, it's the fail open design.

          • jpollock 8 hours ago ago

            If the majority of your customers are good, failing closed will cost more than the fraud during the anti-fraud system's downtime.

            • lazyasciiart 3 hours ago ago

              Until any bad customer learns about the fail-open.

        • vasco an hour ago ago

          > Treating DoS as affecting availability converts the issue into a "do I want to spend $X from a shakedown, or $Y to avoid being shaken down in the first place?"

          > Then, "what happens when people find out I pay out on shakedowns?"

          What do you mean? You pay someone other than whoever did the DoS. You pay your way out of a DoS by throwing more resources at the problem, both in raw capacity and in network blocking capabilities. So how is that incentivising the attacker? Or did you mean literal blackmail?

      • bawolff 8 hours ago ago

        The real problem is that we treat vulnerabilities as binary without nuance. Whether a security vulnerability is an issue depends on context. This comes up a lot for DoS (and especially ReDoS) as it is comparatively rare for it to be real, but it can happen for any vulnerability type.

        • jayanmn 17 minutes ago ago

          Our top management has zero interest in context. There is a chart, and it must not have red items.

          The security team cannot explain attack surface. In the end it is binary: fix it or take the blame.

        • staticassertion 7 hours ago ago

          I don't really agree. Maybe I do, but I probably have mixed feelings about that at least.

          DoS is distinct because it's only considered a "security" issue due to arbitrary conversations that happened decades ago. There's simply not a good justification today for it. If you care about DoS, you care about almost every bug, and this is something for your team to consider for availability.

          That is distinct from, say, remote code execution, which not only encompasses DoS but is radically more powerful. I think it's entirely reasonable to say "RCE is worth calling out as a particularly powerful capability".

          I suppose I would put it this way. An API has various guarantees. Some of those guarantees are "won't crash" or "terminates eventually", but that's actually insanely uncommon and not standard, therefore DoS is sort of pointless. Some of those guarantees are "won't let unauthorized users log in" or "won't give arbitrary code execution", which are guarantees we kind of just want to take for granted because they're so insanely important to the vast majority of users.

          I kinda reject the framing that it's impossible to categorize security vulnerabilities broadly without extremely specific threat models, I just think that that's the case for DoS.

          There are other issues like "is it real" ie: "is this even exploitable?" and there's perhaps some nuance, and there's issues like "this isn't reachable from my code", etc. But I do think DoS doesn't fall into the nuanced position, it's just flatly an outdated concept.

          • bawolff 6 hours ago ago

            I am kind of sympathetic to that view. In practice I do find most DoS vulns to be noise, or at least fundamentally different from other security bugs, because worst case you get attacked, have some downtime, and fix it. You don't have to worry about persistence or data leaks.

            But at the same time, I don't know. Pre-Cloudflare bringing cheap DDoS mitigation to the masses, I suspect most website operators would have preferred to be subject to an XSS attack over a DoS. At least XSS has a viable fix path (of course volumetric DoS is a different beast than CVE-type DoS vulns).

          • bigfatkitten 3 hours ago ago

            There are good reasons for that history which are still relevant today.

            We have decades of history of memory corruption bugs that were initially thought to only result in a DoS, that with a little bit of work on the part of exploit developers have turned into reliable RCE.

      • akerl_ 6 hours ago ago

        Maybe we should start issuing CVEs for all bugs that might negatively impact the security of a system.

        • ranger207 6 hours ago ago

          The Linux kernel approach

      • Lichtso 7 hours ago ago

        > I think that DoS needs to stop being considered a vulnerability

        Strongly disagree. While it might not matter much in some / even many domains, it absolutely can be mission critical. Examples are: Guidance and control systems in vehicles and airplanes, industrial processes which need to run uninterrupted, critical infrastructure and medicine / health care.

        • technion 5 hours ago ago

          These ReDoS vulnerabilities always come down to "requires a user input of unbounded length to be passed to a vulnerable regex in JavaScript". If someone is building a hard real-time airplane guidance system they are already not doing this.

          I can produce a web server that prints hello world, and if you send it enough traffic it will crash. I can put user input into a regex and the response time might go up by 1ms, and no one will say it's suddenly a valid CVE.

          Then someone will demonstrate that with a 1MB input string it takes 4ms to respond, and claim they've earned a CVE for it. I disagree. If you simply use webpack you've probably seen a dozen of these where the vulnerable input was inside the webpack.config.js file. The whole category should go in the bin.

          • bandrami 4 hours ago ago

            > If someone is building a hard real time air plane guidance system they are already not doing this.

            But if we no longer classed DoSes as vulnerabilities, they might.

        • staticassertion 7 hours ago ago

          I think this is just sort of the wrong framing. Yes, a plane having a DoS is a critical failure. But it's critical at the level where you're considering broader scopes than just the impact of a local bug. I don't think this framing makes any sense for the CVE system. If you're building a plane, who cares about DoS being a CVE? You're way past CVEs. When you're in "DoS is a security/major boundary" territory, you're already at the point where CVSS etc. are totally irrelevant.

          CVEs are helpful for describing the local property of a vulnerability. DoS just isn't interesting in that regard because it's only a security property if you have a very specific threat model, and your threat model isn't that localized (because it's your threat model). That's totally different from RCE, which is virtually always a security property regardless of threat model (unless your system is, say, "aws lambda", where that's the whole point). It's just a total reversal.

        • clickety_clack 4 hours ago ago

          I just hate being flagged for rubbish in Vanta that is going to cause us the most minor possible issue with our clients because there’s a slight risk they might not be able to access the site for a couple of hours.

      • kortilla 2 hours ago ago

        If I can cause a server to not serve requests to anyone else in the world by sending a well crafted set of bytes, that’s absolutely a vulnerability because it can completely disable critical systems.

        If availability isn’t part of CIA then a literal brick fulfills the requirements of security and the entire practice of secure systems is pointless.

    • junon 8 hours ago ago

      I maintain `debug` and the number of nonsense ReDoS vulnerability reports I get (including some with CVEs filed with high CVSS scores, without ever disclosing to me) has made me want to completely pull back from the JS world.

    • Twirrim 6 hours ago ago

      I've been fighting with an AI code review tool about similar issues.

      That and it can't understand that a tool that runs as the user on their laptop really doesn't need to sanitise the inputs when it's generating a command. If the user wanted to execute the command they could, without having to obfuscate it sufficiently to get through the tool. Nope, gotta waste everyone's time running sanitisation methods. Or just ignore the stupid code review tool.

    • adverbly 9 hours ago ago

      Seriously!

      We also suffer from this. Although in some cases it's due to a dev dependency. It's crazy how much noise it adds, specifically from ReDoS...

      • monkpit 4 hours ago ago

        ReDoS CVEs in your dev dependencies, like Playwright, that could literally never be exploited. So annoying.

      • robszumski 9 hours ago ago

        Totally hear you on the noise…but we should want to auto-merge vs ignore, no? Given the right tooling of course.

        • UqWBcuFx6NV4r 8 hours ago ago

          We could just skip some steps and I could send you a zip file of malware for you to install on your infra directly if you’d like.

        • dotancohen 8 hours ago ago

          No

    • candiddevmike 8 hours ago ago

      Using something like npm-better-audit in your linting/CI allows you to exclude devDependencies, which cut down a ton of noise for us. IDGAF about vite server vulnerabilities.

  • ImJasonH 9 hours ago ago

    Govulncheck is one of the Go ecosystem's best features, and that's saying something!

    I made a GitHub action that alerts if a PR adds a vulnerable call, which I think pairs nicely with the advice to only actually fix vulnerable calls.

    https://github.com/imjasonh/govulncheck-action

    You can also just run the stock tool in your GHA, but I liked being able to get annotations and comments in the PR.

    Incidentally, the repo has dependabot enabled with auto-merge for those PRs, which is IMO the best you can do for JS codebases.

  • apitman 8 hours ago ago

    I find Dependabot very useful. It drives me insane and reminds me of the importance of keeping dependencies to an absolute minimum.

    • keyle 4 hours ago ago

      I agree, I don't have a ton of projects out there though.

  • p1nkpineapple 28 minutes ago ago

    We struggle with a similar problem at my workplace - vuln alerts from GCP container image scans put a ton of noise into Vanta, which screams bloody murder at CVEs in base images that we A) can't fix, and B) aren't relevant, as they're not on the hot path (often some random dependency that we don't use in our app).

    Are there any tools for handling these kind of CVEs contextually? (Besides migrating all our base images to chainguard/docker hardened images etc)

    • maciuz 10 minutes ago ago

      I'm working at a medium sized SaaS vendor. We've been using Aikido Code, which tries to filter vulnerability impact using AI. Results are generally positive, though we are still struggling to keep the number of CVEs down, given the size of our code bases and the number of dependencies.

  • tracker1 9 hours ago ago

    I kind of wish Dependabot was just another tab you can see when you have contributor access for a repository. The emails are annoying and I mostly filter, but I also don't want a bunch of stale PRs sitting around either... I mean it's useful, but would prefer if it was limited to just the instances where I want to work on these kinds of issues for a couple hours across a few repositories.

    • BHSPitMonkey 8 hours ago ago

      You can add a dependabot.yml config to regulate when Dependabot runs and how many PRs it will open at a time:

      https://docs.github.com/en/code-security/reference/supply-ch...
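For reference, a minimal `dependabot.yml` along those lines might look like this (field names per the GitHub docs; adjust the ecosystem and limits to taste):

```yaml
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "monthly"        # batch updates to your own cycle
    open-pull-requests-limit: 5  # cap how many PRs pile up
```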

    • curtisf 33 minutes ago ago

      Isn't it?

      You can have Dependabot enabled, but turn off automatic PRs. You can then manually generate a PR for an auto-fixable issue if you want, or just do the fixes yourself and watch the issue number shrink.

    • operator-name 7 hours ago ago

      The refined github extension[0] has some defaults that make the default view a little more tolerable. Past that I can personally recommend Renovate, which supports far more ecosystems and customisation options (like auto merging).

      [0]: https://github.com/refined-github/refined-github

  • indiestack 8 hours ago ago

    The govulncheck approach (tracing actual code paths to verify vulnerable functions are called) should be the default for every ecosystem, not just Go.

    The fundamental problem with Dependabot is that it treats dependency management as a security problem when it's actually a maintenance problem. A vulnerability in a function you never call is not a security issue — it's noise. But Dependabot can't distinguish the two because it operates at the version level, not the call graph level.

    For Python projects I've found pip-audit with the --desc flag more useful than Dependabot. It's still version-based, but at least it doesn't create PRs that break your CI at 3am. The real solution is better static analysis that understands reachability, but until that exists for every ecosystem, turning off the noisy tools and doing manual quarterly audits might actually be more secure in practice — because you'll actually read the results instead of auto-merging them.

    • staticassertion 8 hours ago ago

      Part of the problem is that customers will scan your code with these tools and they won't accept "we never call that function" as an answer (and maybe that's rational if they can't verify that that's true). This is where actual security starts to really diverge from the practices we've developed in the name of security.

      • unshavedyak 8 hours ago ago

        Would be neat if the call graph could be asserted easily. You could not only validate which vulnerabilities you are and aren't exposed to, but also choose to blacklist some API calls as a form of mitigation, ensuring you don't accidentally start using something that's proven unsafe.

        • chii an hour ago ago

          but then if you could assert the call graph (easily, or even provably correctly), why not just cull the unused code that led to the vulnerability in the first place?

        • viraptor 8 hours ago ago
    • bandrami 4 hours ago ago

      If you never call it why is it there?

      • inejge an hour ago ago

        It's in the library you're using, and you're not using all of it. I've had that exact situation: a dependency was vulnerable in a very specific set of circumstances which never occurred in my usage, but it got flagged by Dependabot and I received a couple of unnecessary issues.

  • 12_throw_away 7 hours ago ago

    I'm a little hung up on this part:

    > These PRs were accompanied by a security alert with a nonsensical, made up CVSS v4 score and by a worrying 73% compatibility score, allegedly based on the breakage the update is causing in the ecosystem.

    Where did the CVSS score come from exactly? Does dependabot generate CVEs automatically?

    • amluto 4 hours ago ago

      I’m kind of curious whether anything is vulnerable to this bug at all. It seems like it depends on calling the offending function incorrectly, which seems about as likely to cause the code using it to unconditionally fail to communicate (and thus have already been fixed) as to fail in a way that’s insecure.

  • fulafel 36 minutes ago ago

    Alert fatigue has long been identified and complained about; this is just a new kind of it. But it's hitting a different set of people.

  • samhclark 10 hours ago ago

    This makes sense to me. I guess I'll start hunting for the equivalent of `govulncheck` for Rust/Cargo.

    Separately, I love the idea of the `geomys/sandboxed-step` action, but I've got such an aversion to using anyone else's actions besides the first-party `actions/*` ones. I'll give sandboxed-step a look, sounds like it would be a nice thing to keep in my toolbox.

    • FiloSottile 10 hours ago ago

      > I've got such an aversion to using anyone else's actions besides the first-party `actions/*` ones

      Yeah, same. FWIW, geomys/sandboxed-step goes out of its way to use the GitHub Immutable Releases to make the git tag hopefully actually immutable.

    • conradludgate 9 hours ago ago
    • bpavuk 9 hours ago ago

      > I guess I'll start hunting for the equivalent of `govulncheck` for Rust/Cargo.

      how about `cargo-audit`?

      • mirashii 9 hours ago ago

        cargo-audit is not quite at an equivalent level yet, it is lacking the specific features discussed in the post that identify the vulnerable parts of the API surface of a library. cargo-audit is like dependabot and others here in that it only tells you that you're using a version that was vulnerable, not that you're using a specific API that was vulnerable.

        • hobofan 8 hours ago ago

          Sadly, since it relies on the Cargo.lock being correct, it is also affected by bugs that place dependencies in the Cargo.lock but are not compiled into the binary. E.g. weak features in Cargo currently cause unused dependencies to show up in the Cargo.lock.

  • esafak 9 hours ago ago

    I automate updates with a cooldown, security scanning, and the usual tests. If it passes all that I don't worry about merging it. When something breaks, it is usually because the tests were not good enough, so I fix them. The next step up would be to deploy the update into a canary cluster and observe it for a while. Better that than accrue tech debt. When you update on "your schedule" you still should do all the above, so why not just make it robust enough to automate? Works for me.

    • FiloSottile 9 hours ago ago

      For regular updates, because you can minimize but not eliminate risk. As I say in the article that might or might not work for your requirements and practices. For libraries, you also cause compounding churn for your dependents.

      For security vulnerabilities, I argue that updating might not be enough! What if your users’ data was compromised? What if your keys should be considered exposed? But the only way to have the bandwidth to do proper triage is by first minimizing false positives.

  • woodruffw 8 hours ago ago

    I think this is pretty good advice. I find Dependabot useful for managing scheduled dependency bumps (which in turn is useful for sussing out API changes, including unintended semver breakages from upstreams), but Dependabot’s built-in vulnerability scanning is strictly worse than just about every ecosystem’s own built-in solution.

  • SamuelAdams 9 hours ago ago

    What’s nice about Dependabot is that it works across multiple languages and platforms. Is there an equivalent to govulncheck for say NPM or Python?

    • mirashii 9 hours ago ago

      > Is there an equivalent to govulncheck for say NPM or Python?

      There never could be, these languages are simply too dynamic.

      • woodruffw 8 hours ago ago

        In practice this isn’t as big of a hurdle as you might expect: Python is fundamentally dynamic, but most non-obfuscated Python is essentially static in terms of callgraph/reachability. That means that “this specific API is vulnerable” is something you can almost always pinpoint usage for in real Python codebases. The bigger problem is actually encoding vulnerable API information (not just vulnerable package ranges) in a way that’s useful and efficient to query.

        (Source: I maintain pip-audit, where this has been a long-standing feature request. We’re still mostly in a place of lacking good metadata from vulnerability feeds to enable it.)

        • caned 3 hours ago ago

          The imports themselves may be dynamic. I once did a little review of dependencies in a venv that had everything to run pytorch llama. The number of imports gated by control flow or having a non-constant dependency was nontrivial.

          • woodruffw 3 hours ago ago

            Imports gated by control flow aren’t a huge obstacle, since they’re still statically observable. But yeah, imports that are fully dynamic i.e. use importlib or other import machinery blow a hole in this.

        • mirashii 6 hours ago ago

          The thing is that "almost always" isn't good enough. If the tool can't prove it, then a human has to be put back in the loop to verify and assert, on sensitive timelines, when you have regulatory requirements on time to acknowledge and resolve CVEs in dependencies.

          • woodruffw 3 hours ago ago

            Sure, but I think the useful question is whether it’s good enough for the median Python codebase. I see the story as similar to that of static typing in Python; Python’s actual types are dynamic and impossible to represent statically with perfect fidelity, but empirically static typing for Python has been very successful. This is because the actual exercised space is much smaller than the set of all valid Python programs.

      • danudey 7 hours ago ago

        With type hints it's possible to narrow the possibilities down from "who knows what's what" to "assuming these type hints are correct, this function is never called"; not perfect (until we can statically assert that type hints are correct, which maybe we can, I don't know) but still a pretty good step.

      • robszumski 8 hours ago ago

        I commented elsewhere but our team built a custom static analysis engine for JS/TS specifically for the dep update use-case. It was hard, had to do synthetic execution, understands all the crazy remapping and reexporting you can do, etc. Even then it’s hard to penetrate a complex Express app due to how the tree is built up.

    • tech2 8 hours ago ago

      For python maybe pip-audit, and perhaps bandit for a little extra?

      It doesn't have the code tracing ability that my sibling is referring to, but it's better than nothing.

  • mehagar 9 hours ago ago

    Is there an equivalent for the JS ecosystem? If not, having Dependabot update dependencies automatically after a cooldown still seems like a better alternative, since you are likely to never update dependencies at all if it's not automatic.

    • seattle_spring 9 hours ago ago

      RenovateBot supports a ton of languages, and ime works much better for the npm ecosystem than Dependabot. Especially true if you use an alternative package manager like yarn/pnpm.

    • mook 9 hours ago ago

      Too bad dependabot cooldowns are brain-dead. If you set a cooldown for one week, and your dependency can't get their act together and makes a release daily, it'll start making PRs for the first (oldest) release in the series after a week even though there's nothing cool about the release cadence.

      • kleyd 8 hours ago ago

        The cooldown is to allow vulnerabilities to be discovered. So auto update on passing tests, which should include an npm audit check.

  • adamdecaf 8 hours ago ago

    govulncheck is the much better answer and we use it.

    We also let renovate[bot] (similar to dependabot) merge non-major dep updates if tests pass. I hardly notice when deps have small updates.

    https://github.com/search?q=org%3Amoov-io+is%3Apr+is%3Amerge...

  • robszumski 9 hours ago ago

    We’ve built a modern Dependabot (or one that works with it) agent: fossabot analyzes your app code to understand how you use your dependencies, then delivers a custom safe/needs-review verdict per upgrade, or packages groups of safe upgrades together to make more strategic jumps. We can also fix breaking changes because the agent’s context is so complete.

    https://fossa.com/products/fossabot/

    We have some of the best JS/TS analysis out there based on a custom static analysis engine designed for this use-case. You get free credits each month and we’d love feedback on which ecosystems are next…Java, Python?

    Totally agree with the author that static analysis like govulncheck is the secret weapon to success with this problem! Dynamic languages are just much harder.

    We have a really cool eval framework as well that we’ve blogged about.

  • snowhale 9 hours ago ago

    govulncheck is so much better for Go projects. It actually traces call paths, so you only get alerted if the vulnerable function is reachable from your code. Way less noise.

  • operator-name 7 hours ago ago

    The custom GitHub Actions approach is very customisable and flexible. In theory you could create, and even auto-approve, bump PRs.

    If you want something more structured, I’ve been playing with and can recommend Renovate (no affiliation). Renovate supports far more ecosystems and has a better community and more customisation.

    Having tried it, I can’t believe how relatively poor Dependabot, the default tool we put up with, really is. Take something simple like multi-stage Dockerfiles. This has been a Docker feature for a while now, yet it’s still silently unsupported by Dependabot!

    • esafak 7 hours ago ago

      That's what a lack of competition does. GitHub is entrenched and complacent.

  • bpavuk 10 hours ago ago

    Is there a `govulncheck`-like tool for the JVM ecosystem? I heard Gradle has something like that in its ecosystem.

    Search revealed the Sonatype Scan Gradle plugin. How is it?

  • aswihart 7 hours ago ago

    > Dependencies should be updated according to your development cycle, not the cycle of each of your dependencies. For example you might want to update dependencies all at once when you begin a release development cycle, as opposed to when each dependency completes theirs.

    We're in this space and our approach was to supplement Dependabot rather than replace it. Our app (https://www.infield.ai) focuses more on the project management and team coordination aspect of dependency management. We break upgrade work down into three swim lanes: a) individual upgrades that are required in order to address a known security vulnerability (reactive, most addressed by Dependabot) b) medium-priority upgrades due to staleness or abandonedness, and c) framework upgrades that may take several months to complete, like upgrading Rails or Django. Our software helps you prioritize the work in each of these buckets, record what work has been done, and track your libyear over time so you can manage your maintenance rotation.

  • arianvanp 7 hours ago ago

    At this point your steps are so simple I'd skip the GitHub Actions security tyre fire altogether. Just run the Go commands whilst listening for GitHub webhooks and updating checks with the GitHub Checks API.

    GitHub actions is the biggest security risk in this whole setup.

    Honestly not that complicated.

    • NewJazz 6 hours ago ago

      I learned recently that self-hosted GHA runners are just VMs your actions have shell access to, and cleanup is on the honor system for the most part.

      Absolutely wild.

  • NewJazz 6 hours ago ago

    Besides Go, what languages have this type of fidelity for vulnerability scope? Python? Node? Rust?

  • literallyroy 10 hours ago ago

    The Go ecosystem is pretty good about being backwards compatible. Dependabot's regular update PRs once a week seem like a good option in addition to govulncheck.

  • seg_lol 10 hours ago ago

    Be wary of upgrading dependencies too quickly. This is how supply chain incursions spread so quickly. Time is a good firwall.

    • ImJasonH 9 hours ago ago

      Here's a Go mod proxy-proxy that lets you specify a cooldown, so you never get deps newer than N days/weeks/etc

      https://github.com/imjasonh/go-cooldown

      It's not running anymore but you get the idea. It should be very easy to deploy anywhere you want.

    • esafak 10 hours ago ago
      • jamietanna 9 hours ago ago

        Yep, and we've had it for a while in Renovate too: https://docs.renovatebot.com/key-concepts/minimum-release-ag...

        (I'm a Renovate maintainer)

        (I agree with Filippo's post and it can also be applied to Renovate's security updates for Go modules - we don't have a way, right now, of ingesting better data sources like `govulncheck` when raising security PRs)

    • bityard 9 hours ago ago

      A firwall also makes a good firewall, once ignited.

    • Hamuko 10 hours ago ago

      >Time is a good firwall.

      That just reminds me that I got a Dependabot alert for CVE-2026-25727 – "time vulnerable to stack exhaustion Denial of Service attack" – across multiple of my repositories.

  • focusedmofo 9 hours ago ago

    Is there an equivalent for JS/TS?

  • KPGv2 3 hours ago ago

    This is a symptom of JS culture, where people believe you must at all times and in all places have THE latest version of every library, and you MUST NOT wait more than a day to update your entire codebase accordingly.

    • lazyasciiart 3 hours ago ago

      This blog post is entirely about Go, and doesn’t mention JS at all.

  • TZubiri 9 hours ago ago

    Coming from someone with an almost ascetic dependency discipline: I look at some meta-dependencies (Dependabot, pnpm/yarn, poetry/venv/pipenv, snap/flatpak) as an outsider, and they read as a solution to too many dependencies that is yet another dependency. It feels like trying to get out of a hole by digging.

    I think that for FOSS, the F as in Gratis is always going to be the root cause of security conflicts. If developers are not paid, security is always going to be a problem; otherwise you are trying to get something out of nothing, and the accounting equation will not balance. Exploiting someone else is precisely the act that leaves you open to exploitation (according to Nash game theory, at least). "158 projects need funding" IS the vector! I'm not saying that JohnDoe/react-openai-redux-widget is going to go rogue, but with what budget are they going to be able to secure their own systems?

    My advice is: if it ever comes to the point where you need to install dependencies to control your growing dependency graph, consider deleting some dependencies instead.

  • indiekitai 7 hours ago ago

    The core problem is that Dependabot treats dependency graphs as flat lists. It knows you depend on package X, and X has a CVE, so it alerts you. But it has no idea whether you actually call the vulnerable code path.

    Go's tooling is exceptional here because the language was designed with this in mind - static analysis can trace exactly which symbols you import and call. govulncheck exploits this to give you meaningful alerts.

    The npm ecosystem is even worse because dynamic requires and monkey-patching make static analysis much harder. You end up with dependency scanners that can't distinguish between "this package could theoretically be vulnerable" and "your code calls the vulnerable function."

    The irony is that Dependabot's noise makes teams less secure, not more. When every PR has 12 security alerts, people stop reading them. Alert fatigue is a real attack surface.

  • newzino 7 hours ago ago

    The part that kills me is the compliance side. SOC2 audits and enterprise security reviews treat "open Dependabot alerts" as a metric. So teams merge dependency bumps they don't understand just to get the count to zero before the next audit. That's actively worse for security than ignoring the alerts.

    govulncheck solves this if your auditor understands it. But most third-party security questionnaires still ask "how do you handle dependency vulnerabilities?" and expect the answer to involve automated patching. Explaining that you run static analysis for symbol reachability and only update when actually affected is a harder sell than "we merge Dependabot PRs within 48 hours."