40 comments

  • fulafel 2 hours ago ago

    The industry culture around security work and career paths seems just f'd up.

    Instead of ensuring we build systems with robust foundations, people end up in a swamp of frustrating roles: SOC staff chasing false-positive alarms all day, peddling ineffective add-on security products, management/CISO roles where you're expected to take responsibility for existing insecure Microsoft-etc. infrastructure without the power to change things, or working on demotivating compliance bureaucracy that doesn't actually improve security.

    I'd argue work on meaningful security improvements is mostly available outside industry security roles.

    • evan_a_a 2 hours ago ago

      The company I work for (consulting) upended its entire strategy to basically use pentests to sell managed services (XDR, NDR, SOC, vuln scanning, "continuous pentest") that do nothing to meaningfully move the needle on security. Which of course the market will buy, but it is incredibly demoralizing to see expertise sacrificed on the altar of recurring revenue.

      • xorcist 2 hours ago ago

        And every time some company got hacked and embarrassed, the same refrain is played out in the comments: "Those cheapskates, they invest too little in security!".

        Spend all you want. Buy the most advanced products, and then most expensive services to manage them. I have never seen a company that improved their security by buying it.

        • sillysaurusx 2 hours ago ago

          Whoa, that’s a bit far. I’m a former pentester. I meaningfully improved security at quite a few places. The standout was Citadel, where a product was set to launch within a few weeks. When I first got there, typing ‘ into their search fields resulted in SQL injection right away. They had never thought to defend against it. Over the next week, I fed them a steady list of bugs and vulns (there were many) until by the end of it that product was watertight. I was particularly proud of that one.

          Pentests work.
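          The kind of trivial injection described above, and its standard fix, can be sketched in a few lines. This is a minimal illustration using a hypothetical SQLite `products` table, not the actual code in question:

          ```python
          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE products (name TEXT)")

          term = "'"  # the lone apostrophe a pentester types into a search field

          # Vulnerable: user input interpolated straight into the SQL string.
          # The stray quote breaks the statement, surfacing an injection point.
          try:
              conn.execute(f"SELECT * FROM products WHERE name = '{term}'")
          except sqlite3.OperationalError as e:
              print("injection surface:", e)

          # Fixed: a parameterized query; the driver handles escaping.
          rows = conn.execute("SELECT * FROM products WHERE name = ?", (term,)).fetchall()
          print("safe query returned:", rows)
          ```

          With the parameterized version the same input is treated as data rather than SQL, which is exactly the fix the pentest feedback loop is meant to drive teams toward.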

          • evan_a_a an hour ago ago

            Pentests work to secure the product under test at the point in time of the test (if the company cares to fix the bugs...). The real solution is to design in security throughout the software lifecycle, not to play pentest whack-a-mole at the end of it. If a pentester is finding trivial SQL injection in an app, then it is clear that the company never considered security. And unless the pentest makes them care, the cycle will just continue.

            • PradeetPatel an hour ago ago

              Precisely. The industry needs to empower engineers to shift left and integrate security as part of the SDLC. This is the only way to provide continuous assurance in the age of AI.
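              Shifting left in practice can be as small as a CI gate that runs a static analysis scan on every push, so injection bugs surface at commit time instead of at a pre-launch pentest. A hypothetical GitHub Actions sketch (tool choice and paths are illustrative, not a prescription):

              ```yaml
              # Hypothetical CI job: fail the build on medium-or-worse SAST
              # findings, using Bandit as an example Python scanner.
              name: sast
              on: [push, pull_request]
              jobs:
                scan:
                  runs-on: ubuntu-latest
                  steps:
                    - uses: actions/checkout@v4
                    - run: pip install bandit
                    - run: bandit -r src/ --severity-level medium
              ```

              The point is less the specific tool than making the check automatic and blocking, so security feedback arrives in the same loop as every other test failure.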

          • Veserv an hour ago ago

            "Improved" is a useless word. Is their security now adequate? Is it secure against the run-of-the-mill, financially motivated threat actors we regularly see orchestrating thousands of profitable attacks annually?

            We regularly see attacks extorting tens of millions of dollars from major multinationals like Citadel. Is the cost of breaching their systems in excess of ten million dollars (which would still net you a nice fat profit against the multiple tens of millions extorted)? You get a team of 10 professionals for 1-3 years, and you cannot breach their systems?

            That is the minimum standard of adequate against commonplace, prevailing threats for large multinationals. Even that ignores the fact that major corporations are frequently attacked by state actors, so really the minimum standard for protection against expected threats should include those as well, but I will leave that aside for now since the overwhelming sentiment is that protection against state actors is so utterly hopeless it is not even worth mentioning.

            For that matter, can you point to literally any system in the entire world that is positively demonstrated (absence of evidence is not evidence of absence) to have reached that standard?

            • evan_a_a an hour ago ago

              >Even that ignores the fact that major corporations are frequently attacked by state actors, so really the minimum standard for protection against expected threats should include those as well, but I will leave that aside for now since the overwhelming sentiment is that protection against state actors is so utterly hopeless it is not even worth mentioning.

              It always has been; it's just that now the state actors are more and more active (and visibly so).

        • evan_a_a 2 hours ago ago

          It is an investment problem: they need to invest in security expertise, not security products and services. And that is the sad part. Absent the company really caring enough to spend that money, or an external demand (regulatory or from customers), it just isn't going to happen. They'll just layer on more products and services and call it a day.

  • giancarlostoro 3 hours ago ago

    Just commented this elsewhere, but my take on cybersecurity today: it's about to blow up in demand, with so many skiddies able to hack anybody with an LLM. We are seeing websites, systems, and companies being compromised at an alarming rate. I suspect one of these days we will see a headline of a compromise that will shock and horrify us all. Anyone sleeping on cybersecurity is sitting on a ticking time bomb.

    Honestly, if you wanted to make a YC company today that targets AI in a meaningful way, I'd say make it focused on cybersecurity analysis. ;)

    • thewebguyd 3 hours ago ago

      > I suspect one of these days we will see a headline of a compromise that will shock and horrify us all

      But we've had the shock headlines already, and nothing changes. We've seen hospitals get hit with real-life consequences for patients, and the SSNs of essentially all US citizens have been breached multiple times now. Passwords as a concept are basically obsolete. There's even more.

      That bomb has already been going off.

      If anything I'm seeing the opposite. Companies are throwing security to the wind to go all in on AgEnTiC AI.

      If we want change with regard to cybersecurity, then there need to start being real consequences for a breach, not just free credit monitoring. Companies that are proven to be negligent should face actual financial & criminal consequences.

    • debarshri 3 hours ago ago

      I am building in the cybersec space. I don't think you even need script kiddies now: internal employees running dangerously bad ops with AI are themselves a cybersec nightmare.

    • evan_a_a 3 hours ago ago

      Whenever I tell people I work in computer security, their first question is "are you worried about AI taking your job"? To which I just laugh and respond "AI is job security"

      • giancarlostoro 3 hours ago ago

        It really is! If anything, AI will only help you: you aren't worried about AI giving you bad code, just bad answers, which you would validate anyway. The other area where AI could be interesting, and I don't hear much buzz about it, is outages: if it can query all the online systems and logs in your cloud, it could probably triage an incident faster than an entire outage team could, in theory anyway. Surprised nobody's built such a system yet. ;)

        • evan_a_a 3 hours ago ago

          I mean it in the sense that the AI security hype and the larger geopolitical environment have woken a lot of people up to the reality that they need to consider security. And the ones who haven't woken up yet will get a wake-up call when they are breached. It also increases the demand for real security expertise, which is already scarce.

          Also, in my niche (hardware and embedded product security), AI doesn't have a functional impact on the work except in code analysis, and even that is difficult given the level of abstraction these systems are built at.

          • giancarlostoro 3 hours ago ago

            That's fair, though even that could just be a matter of time, as people build tools that interface LLMs to the physical world. I wonder how something like Bus Pirate could be used with an LLM (maybe a more powerful version of it?) to grok and poke hardware all over the place.

            • evan_a_a 2 hours ago ago

              I foresee issues with really getting use out of any commodity language model in the hardware security context, because hardware systems notoriously lack standardization. Oftentimes the technical knowledge (datasheets, app notes) is locked behind vendor NDAs, or straight up not documented at all, existing only in the minds of engineers. The implementations of those designs are similarly proprietary, with few real public systems to train models on.

              So the issue is two-fold:

              * The knowledge must be documented and accessible for training.

              * A bespoke model must be trained on this documentation.

              It is unlikely that both of these things happen in the general model context. Perhaps individual chip vendors will eventually pursue this, but I suspect it is just not a priority for them.

    • xnx 2 hours ago ago

      Do you think that AI helps security offense more than defense? It's not obvious to me that it does.

  • nubinetwork an hour ago ago

    "Cybersecurity" is why everything breaks on a daily basis at $DAYJOB. One random garbage packet from the internet can cause entire VLANs to go down, kicking people out of VPNs and Citrix. I get the need for security, but these security types cause more issues than they're trying to solve.

  • PearlRiver 7 minutes ago ago

    Man I have been in 3 data breaches this year and it's only April! My spam folder is going crazy. Security indeed.

  • everdrive 2 hours ago ago

    Companies don't fundamentally care about cybersecurity. Most of them see cybersecurity as being similar to waste management; it's not something you get excited about. Sure, your company _must_ have a waste management plan, but it only exists out of pure necessity. It's required to do the real work of the company, but if you had a magic wand and never had to deal with it, you'd choose that option. And, like waste management, plenty of companies outsource their cybersecurity, since it's cheaper and they don't really care about it.

    • liquid_thyme 2 hours ago ago

      Yes, you're correct. To add - companies don't fundamentally care about all the things that we like to think of as "nice things", like good design, lack of dark patterns, robust security architecture, minimizing technical debt, etc.

      If customers cared about reputational damage from cybersecurity incidents (sure, some do), then you would see that reflected in companies' priorities. Also, non-technical customers don't really know whom to blame for security anyway. They'll just blame the OS vendor or other random parties even if it's the application that is insecure.

  • lenerdenator 5 hours ago ago

    "Show me the incentives, and I'll show you the outcomes." - Charlie Munger

    Right now, if you have a security breach, at least in the US, you send out a letter telling the person that their data could be God-knows-where and offer them two free years of credit monitoring. Victims aren't really going to use that because it's essentially useless. If they've got absolutely, positively nothing better to do with their time, I guess they could file a lawsuit. Who knows what the outcome would be. Probably not in their favor.

    In other words, it's cheaper for them to overwork the InfoSec guys/gals and barely care about what happens outside of day-to-day operations than it is to really secure their stuff. So they don't spend that money.

    If you saw corporate valuation-cratering fines being implemented - the kind that would end the c-suite's careers and bring shame to their family lines for seven generations - I bet that they'd start catering lunches for the InfoSec team.

    • gadders 4 hours ago ago

      New idea: AI tool to help generate legal letters to companies after they leak data to cause them maximum inconvenience.

      • intended 4 hours ago ago

        The human speed legal system would become collateral damage.

      • lenerdenator 4 hours ago ago

        You could also create an AI tool to help generate letters to lawmakers about how they need to make a real dent in this between reruns of Matlock in the retirement home.

    • xnx 2 hours ago ago

      > "Show me the incentives, and I'll show you the outcomes." - Charlie Munger

      Also note that, as with pharmaceutical companies, treatment is more profitable than cure for infosec consultants.

    • jcgrillo 4 hours ago ago

      I don't think fines are enough of an incentive. They're too easy to evade and insufficiently consequential to the people who are actually shipping code. Moreover, making them enormous (as you put it well "valuation-cratering") unfairly punishes people who are not directly responsible for the failure. Instead, like in other engineering disciplines, Engineers need to be personally liable for the consequences of failure. Not necessarily every engineer--not every mechanical engineer needs to be a P.E.--but someone directly responsible for the quality of the work needs to stake their reputation on it, and suffer the consequences when it fails.

      • adrianN 4 hours ago ago

        In practice this would mean that you need to show conformance to some kind of security process. The actual outcome of that process is of secondary importance as long as you can show that you’re compliant. Very carefully written process documents _can_ improve things, but my confidence in security processes is low for companies without intrinsic motivation.

        I think one can reasonably argue that sufficiently large fines that don't have a "but we followed ISO xyz" loophole could produce better outcomes. The difficult part is making companies care about existential tail risks.

        • TheRealDunkirk 4 hours ago ago

          Companies are already following a bunch of standards like SOX, SOC 2, HIPAA, etc., and documenting their adherence by checking all of the boxes, but incidents still happen every week.

        • jcgrillo 4 hours ago ago

          Yes, it'll generate a lot of super annoying paperwork. But, hopefully, it will also tighten up software engineering standards. It has worked well in other disciplines.

          • adrianN 4 hours ago ago

            There already are areas where such standards exist, e.g. safety-critical applications in aviation. Arguably the defect rate there _is_ lower, but I still think this method of achieving it is quite inefficient. And I think it is a lot easier to define a process for writing aviation software that doesn't crash than for writing software that is difficult to hack.

            • jcgrillo 4 hours ago ago

              The missing piece is the requirement for a certified Professional Engineer to sign off on the system. That decouples the incentives from the corporate objectives, and makes it personal. We need that kind of professional accountability in software, otherwise it'll continue to be bad.

              • adrianN 4 hours ago ago

                It is my understanding that personal responsibility already exists in safety critical software development.

                • jcgrillo 34 minutes ago ago

                  I hadn't actually heard of this (I've never worked on safety-critical systems), but after some googling I found a reference to a "Designated Engineering Representative" in this article about the FAA's DO-178B: https://en.wikipedia.org/wiki/DO-178B

                  I wasn't able to find much information about U.S. P.E. certification for SWEs, although there is at least one state that offers it. I wasn't able to find any indication anywhere that a compliance process requires a P.E. to sign off on software. That doesn't mean it doesn't exist, though!

                  One major problem is that now that software is "everywhere" it's escaping the boundaries of safety critical standards. Nobody will be killed directly by a bank getting hacked, but it could result in mortal harm to an individual who has their identity stolen. There are all kinds of systems that aren't labeled safety critical in the kinetic sense which are nonetheless very load-bearing. Software which runs on phones, for example. Surely people have died due to buggy phone software. Nobody is being held meaningfully accountable, so it will continue to happen.

                  To be clear, I'm not saying we should heap a whole lot more pressure onto security teams. Instead we need to find better ways to make security every engineer's professional ethical responsibility--either directly because they're signing off on the system or indirectly because their respected senior colleague is. I just don't see fines getting us there.

    • FireBeyond 3 hours ago ago

      > offer them two free years of credit monitoring. Victims aren't going to really use that because it's essentially useless

      It's generally actively harmful. The CRAs fight for this business from breaches because, universally, to accept the free credit monitoring you have to sign up for their highest-tier credit monitoring package (which can be up to $50/month), supply a credit card, and then hope to remember, a year later, to cancel at the end of the free period, because at that point they'll convert you to a paying customer.

  • a34729t 3 hours ago ago

    With Claude writing so much of the software in big companies, Anthropic is well-positioned to eat up SAST, DAST and a lot of the supply chain analysis. EDR and proactive security are still going to be massive businesses, however.

  • mystraline 4 hours ago ago

    Yep. I had a chance to go for a cybersecurity degree. And every time I've looked at it, the career path is basically an applied insurance job.

    Cybersecurity does not make money. It does not raise a company's profits. Instead, it provides the compliance, contractual, and legal defences that repel lawsuits and keep data boundaries clean.

    And who's the first to go? Groups that don't make money. Like cybersec.

    • blueside 2 hours ago ago

      Cybersecurity certainly makes money. The good ones make a lot. I mean a lot.

      But if you think you can just study for a year, get some security certificates, and call it a day, you're going to be sorely disappointed in the compensation.

      • czbond 2 hours ago ago

        OP means that security ops internal to a company don't generate add-on revenue. Cybersec definitely adds revenue at services and provider companies.