427 comments

  • Wowfunhappy 4 hours ago

    > Schwartz's experiment is the most revealing, and not for the reason he thinks. What he demonstrated is that Claude can, with detailed supervision, produce a technically rigorous physics paper. What he actually demonstrated, if you read carefully, is that the supervision is the physics. Claude produced a complete first draft in three days. It looked professional. The equations seemed right. The plots matched expectations. Then Schwartz read it, and it was wrong. Claude had been adjusting parameters to make plots match instead of finding actual errors. It faked results. It invented coefficients. [...] Schwartz caught all of this because he's been doing theoretical physics for decades. He knew what the answer should look like. He knew which cross-checks to demand. [...] If Schwartz had been Bob instead of Schwartz, the paper would have been wrong, and neither of them would have known.

    And so the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

    Which means we need people like Alice! We have to make space for people like Alice, and find a way to promote her over Bob, even though Bob may seem to be faster.

    The article gestures at this but I don't think it comes down hard enough. It doesn't seem practical. But we have to find a way, or we're all going to be in deep trouble when the next generation doesn't know how to evaluate what the LLMs produce!

    ---

    † "Useful" in this context means "helps you produce good science that benefits humanity".

    • conception 3 hours ago

      Sadly I don’t see how our current social paradigm works for this. There is no history of any sort of long planning like this, or long-term loyalty (in either direction) between employees and employers, for this sort of journeyman, guild-style training. AI execs are basically racing, hoping we won’t need a Schwartz before they are all gone. But what incentives are in place to hire a college grad, have them work without LLMs for a decade, and then give them the tools to accelerate their work?

      • Wowfunhappy 3 hours ago

        Then the social paradigm needs to change. Is everyone just going to roll over and die while AI destroys academia (and possibly a lot more)?

        Last September, Tyler Austin Harper published a piece for The Atlantic on how he thinks colleges should respond to AI. What he proposes is radical—but, if you've concluded that AI really is going to destroy everything these institutions stand for, I think you have to at least consider these sorts of measures. https://www.theatlantic.com/culture/archive/2025/09/ai-colle...

        • mrob 2 hours ago

          >What he proposes is radical

          It sounds entirely reasonable and moderate to me.

        • conception 3 hours ago

          Well, we are already rolling over and dying (literally) on everything from vaccine denial to climate change. So, yes, we are. Obviously yes.

          • dragontamer 2 hours ago

            In the US it is dying off.

            Not so in plenty of other countries. Hopefully the US reverses the anti-science trend before it's too late.

            • conception 2 hours ago

              These movements are growing in every Western nation, and have been for decades. It would be nice to see the trend reverse, but that seems unlikely before calamity.

              • adriand 26 minutes ago

                It’s a deliberate process powered by rightwing and capitalist interests designed to create a dumber, less educated and more distracted population. A war as stupid as the one with Iran would not have been possible three decades ago. As ill-advised as the Iraq war was, Bush at least spent months explaining the rationale and building support for it, successfully. Now that’s not needed.

                I saw interviews with young Americans on spring break and they were so utterly uninformed it was mind-blowing. Their priorities are getting drunk and getting laid while their country bombs a nation “into the stone ages”, according to their president. And it’s not their fault: they are the product of a media environment and education system designed for exactly this outcome.

                • vinceguidry 8 minutes ago

                  I was there for that war. Kids weren't listening and didn't care back then either. If anything, Gen Z is the most politically-aware generation we've had since we started keeping track.

                  Trump doesn't have to justify a single thing because the billionaires behind him know that every last bet is off and their very livelihoods are at risk, and his entire base of support up and down the chain are either complicit or fooled.

                  What the world does when they finally realize Democrats and Republicans are simply two sides of the vast apparatus suppressing the will of the people by any means necessary will be... spectacular.

        • senordevnyc an hour ago

          Article is paywalled, so perhaps you could just summarize his proposal?

          • Wowfunhappy an hour ago

            > At the type of place where I taught until recently—a small, selective, private liberal-arts college—administrators can go quite far in limiting AI use, if they have the guts to do so. They should commit to a ruthless de-teching not just of classrooms but of their entire institution. Get rid of Wi-Fi and return to Ethernet, which would allow schools greater control over where and when students use digital technologies. To that end, smartphones and laptops should also be banned on campus. If students want to type notes in class or papers in the library, they can use digital typewriters, which have word processing but nothing else. Work and research requiring students to use the internet or a computer can take place in designated labs. [...] Colleges that are especially committed to maintaining this tech-free environment could require students to live on campus, so they can’t use AI tools at home undetected.

            You can access the full article at https://archive.is/zSJ13 (I know archive.is is kind of shady, but it works).

            • boothby 32 minutes ago

              > If students want to type notes in class or papers in the library, they can use digital typewriters, which have word processing but nothing else.

              Only, replacing the guts of such a machine with a local LLM is damn easy today. Right now the battery mass required to power the device would be a giveaway, but inference is getting energetically cheaper.

              > Colleges that are especially committed to maintaining this tech-free environment could require students to live on campus, so they can’t use AI tools at home undetected.

              Just like my on-campus classmates never smoked weed or drank underage, I'm sure.

      • jayd16 17 minutes ago

        Some folks need to touch the hot stove before they learn but eventually they learn.

        If AI output remains unreliable then eventually enough companies will be burned and management will reinstate proper oversight. All while continuing to pat themselves on the back.

      • FrojoS 3 hours ago

        > There is no history of any sort of long planning

        Sure there is. It's the formal education system that produced the college grad.

        • conception 3 hours ago

          … between employees and employers.

          The proposal that everyone pay for college until they are in their 40s doesn’t seem viable.

          • FrojoS 2 hours ago

            Maybe, but there is a trend towards more and longer education. More college graduates, more PhD grads, etc.

    • cmiles74 3 hours ago

      I think we already know what we need to do: encourage people to do the work themselves, discourage beginners from immediately asking an LLM for help, and re-introduce some kind of oral exam. As the article mentions, banning LLMs is impractical; what we really need are people who can tell when the LLM is confidently wrong, not people who don't know how to work with an LLM.

      I hope it will encourage people to think more about what they get out of the work, what doing the work does for them; I think that's a good thing.

      • atomicnumber3 2 hours ago

        I think we'll get there. We need to get at least some AI bust going first though. It's impossible to talk sense into people who think AI is about to completely replace engineers, or even those who think that, while it might not replace engineers, it's going to be doing 100% of all coding within a year. Or even that it can do 100% of coding right now.

        There's a couple unfortunate truths going on all at the same time:

        - People with money are trying to build the "perfect" business: SaaS without software eng headcount. 100% margin. 0 Capex. And finally near-0 opex and R&D cost. Or at least, they're trying to sell the idea of this to anyone who will buy. And unfortunately this is exactly what most investors want to hear, so they believe every word and throw money at it. This of course then extends to many other businesses, not just SaaS, but those have worse margins to start with, so are less prone to the wildfire.

        - People who used to code 15 years ago but don't now, see claude generating very plausible looking code. Given their job is now "C suite" or "director", they don't perceive any direct personal risk, so the smell test is passed and they're all on board, happily wreaking destruction along the way.

        - People who are nominally software engineers but are bad at it are truly elevated 100x by claude. Unfortunately, if their starting point was close to 0, this isn't saying a lot. And if it was negative, it's now 100x as negative.

        - People who are adjacent to software engineering, like PMs, especially if they dabble in coding on the side, suddenly also see they "can code" now.

        Now of course, not all capital owners, CTOs, PMs, etc. exhibit this. Probably not even most. But I can already name like 4 examples per category above from people I know. And it's impossible to explain any kind of nuance to them right now. There are too many people and articles and blog posts telling them they're absolutely right.

        We need some bust cycle. Then maybe we can have a productive discussion of how we can leverage LLMs (we'll stop calling it "AI"...) to still do the team sport known as software engineering.

        Because there's real productivity gains to be had here. Unfortunately, they don't replace everyone with AGI or allow people who don't know coding or software engineering to build actual working software, and they don't involve just letting claude code stochastically generate a startup for you.

        • Wowfunhappy 2 hours ago

          > Or even that [AI] can do 100% of coding right now.

          I don't actually think the article refutes this. But the AI needs to be in the hands of someone who can review the code (or astrophysics paper), notice and understand issues, and tell the AI what changes to make. Rinse, repeat. It's still probably faster than writing all the code yourself (but that doesn't mean you can fire all your engineers).

          The question is, how do you become the person who can effectively review AI code without actually writing code without an AI? I'd argue you basically can't.

          • silver_silver 32 minutes ago

            My boss decreed the other day that we’re all to start maximising our use of agents, and then set an accordingly ambitious deadline for the current project. I explained that being relatively early in my career I’ve been hesitant to use any kind of LLM so I can gain experience myself (to say nothing of other concerns), and yeah in his words I’ve “missed the opportunity”

            • iugtmkbdfil834 11 minutes ago

              Interesting; we only have a generic 'use AI' in our goals. Though its generic framing probably says more about the company's belief in this tech than anything else.

          • bluefirebrand 4 minutes ago

            > The question is, how do you become the person who can effectively review AI code without actually writing code without an AI? I'd argue you basically can't.

            I agree, and I'd go a step further:

            You can be the absolute best coder in the world, the fastest and most accurate code reviewer ever to live, and AI still produces bad code so much faster than you can review it that it will eventually become a liability, no matter what.

            There is no amount of "LLM in a loop", "use a swarm of agents", or any other current trickery that fixes this, because eventually some human needs to read and understand the code. All of it.

            Any attempt to avoid reading and understanding the code means you have absolutely left the realm of quality software, no exceptions

    • fomoz an hour ago

      AI is an accelerant, not a replacement for skill. At least, not yet.

      I built a full stack app in Python+typescript where AI agents process 10k+ near-real-time decisions and executions per day.

      I have never done full stack development and I would not have been able to do it without GitHub Copilot, but I have worked in IT (data) for 15 years including 6 in leadership. I have built many systems and teams from scratch, set up processes to ensure accuracy and minimize mistakes, and so on.

      I have learned a ton about full stack development by asking the coding agent questions about the app, bouncing ideas off of it, planning together, and so on.

      So yes, you need to have an idea of what you're doing if you want to build anything bigger than a cheap one-shot throwaway project that sort of works but brings no value, and that nobody is actually gonna use.

      This is how it is right now, but at the same time AI coding agents have come an incredibly long way since 2022! I do think they will improve, but an agent can't know exactly what you want to build. It's making an educated guess, an approximation of what you're asking it to do. Ask for the same thing twice and you'll get two slightly different results (assuming it's a big one-shot).

      This is the fundamental reality of LLMs. It's sort of like walking (where we were before AI), using a car to get places (where we are now), and FSD (the future: look how long that took compared to the first cars).

    • einszwei 2 hours ago

      > And so the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

      I have gained a lot of benefit using LLMs in conjunction with textbooks for studying. So, I think LLMs could help you become Schwartz.

      • Peritract 2 hours ago

        How do you know you have?

        • einszwei an hour ago

          I have been using it to learn Chinese along with other standard resources. My reading comprehension has improved a lot after I started to use LLMs to understand sentence structures and grammar.

    • mezyt an hour ago

      Profession (1957) by Isaac Asimov is relevant: https://news.ycombinator.com/item?id=46664195

    • grey-area 2 hours ago

      Why use a tool that generates plausible garbage?

      • therealdrag0 an hour ago

        Because I’m skilled enough to use a tool that generates plausible garbage to be more productive than those who don’t use it at making non-garbage.

        • grey-area 16 minutes ago

          Are you sure you’re more productive?

          It doesn’t sound like these tools should be used to write scientific papers, for example, and they seem to bamboozle people far more than help them.

      • Henchman21 43 minutes ago

        Because there is no appreciable difference between outputs. Most of the work that most of us do isn't important. It's the busywork byproduct of making widgets that most people don't even need. So if your job is already pointless, why not make it easier using LLMs?

        • grey-area 17 minutes ago

          Sounds a little sad. I think I’d rather find another job.

    • thePhytochemist an hour ago

      I totally agree - the article misses this point in a very conspicuous way. It suggests that Alice and Bob will both graduate at the same level.

      What may well happen instead is that Bob publishes two papers. He then outcompetes Alice thanks to others' insistence on "publish or perish". Alice becomes unemployed and struggles, having been pushed out.

      The person who puts the time and effort in doesn't just sit at the same level, and they don't both just find decent employment. Competition happens, and the authentic learning is considered a waste of time, which leads to real and often life-threatening consequences (like being homeless after being unable to find employment).

      • iugtmkbdfil834 a minute ago

        << authentic learning is considered a waste of time

        This, I think, may be the more interesting bit. Steve Jobs anecdotally studied calligraphy in school, which some would consider a waste of time, but Steve credited some of the Mac's stylistic choices to it.

        The question then becomes whether it will become an issue now or later. Having seen some of the output, I have no doubt that a lot can now be built by non-programmers (including myself; I suppose I belong in the adjacent category). The building blocks exist, and as long as the problem was part of the initial training, odds are an LLM will help you build what you want.

        It may not be perfect, safe, or optimized, but it may still be exactly what the user wanted to do. Now, the problems will start when those things, inevitably, move into production at big corps. In a sense, we have seen some interesting results of that in the past few weeks (including the accidental Claude Code release).

        In the grand scheme of things, not much is changing... except for the speed of change. But are we quite ready for this?

    • leereeves 3 hours ago

      > And so the paradox is, the LLMs are only useful† if you're Schwartz

      Was the LLM even useful for Schwartz, if it produced false output?

      • cmiles74 3 hours ago

        Maybe it saved them some time? Though so far the studies seem to lean toward the LLM probably not having saved them any time.

  • sd9 6 hours ago

    The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

    I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

    To be honest, I’m looking at leaving software because the job has turned into a different sort of thing than what I signed up for.

    So I think this article is partly right, Bob is not learning those skills which we used to require. But I think the market is going to stop valuing those skills, so it’s not really a _problem_, except for Bob’s own intellectual loss.

    I don’t like it, but I’m trying to face up to it.

    • djaro 6 hours ago

      > So if Bob can do things with agents, he can do things.

      The problem arises when Bob encounters a problem too complex or unique for agents to solve.

      To me, it seems a bit like the difference between learning how to cook versus buying microwave dinners. Sure, a good microwave dinner can taste really good, and it will be a lot better than what a beginning cook will make. But imagine aspiring cooks just buying premade meals because "those aren't going anywhere". Over the span of years, eventually a real cook will be able to make way better meals than anything you can buy at a grocery store.

      The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

      • jacquesm 6 hours ago

        Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.

        • NiloCK 4 hours ago

          People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

          The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

          This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.

          • lukev 4 hours ago

            I think that's too easy an analogy, though.

            Calculators are deterministically correct given the right input. They don't require expert judgement on whether an answer they gave is reasonable or not.

            As someone who uses LLMs all day for coding, and who regularly bumps against the boundaries of what they're capable of, that's very much not the case. The only reason I can use them effectively is because I know what good software looks like and when to drop down to more explicit instructions.

            • II2II 4 hours ago

              > Calculators are deterministically correct

              Calculators are deterministic, but they are not necessarily correct. Consider 32-bit integer arithmetic:

                30000000 * 1000 / 1000
                30000000 / 1000 * 1000
              
              Mathematically, the two expressions are identical. Computationally, the results are deterministic. And yet the computer produces different answers: the first overflows in the multiplication, while the second never leaves the 32-bit range. There are many other cases where the expected result differs from what a computer calculates.
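
              A minimal C sketch of the same point (an illustration, assuming a typical machine where int is 32 bits and signed overflow wraps, even though that is formally undefined behavior):

                #include <stdio.h>
                #include <stdint.h>

                int main(void) {
                    int32_t x = 30000000;
                    /* Multiplying first: 30000000 * 1000 = 3e10 overflows a
                       32-bit int (max ~2.1e9) and wraps, so dividing by 1000
                       afterwards does not recover 30000000. */
                    printf("%d\n", x * 1000 / 1000);  /* typically -64771 */
                    /* Dividing first keeps every intermediate in range. */
                    printf("%d\n", x / 1000 * 1000);  /* 30000000 */
                    return 0;
                }
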
              • wongarsu 4 hours ago

                A good calculator will, however, do this correctly (as in: the way anyone would expect). Small cheap calculators resort to confusing syntax, but if you pay $30 for a decent handheld calculator, or use something decent like wolframalpha on your phone/laptop/desktop, you won't run into precision issues for reasonable numbers.

                • Ifkaluva 2 hours ago

                  He’s not talking about order of operations; he’s talking about floating-point error, which will accumulate in different ways in each case, because floating point is an imperfect representation of the real numbers.
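
                  A quick C sketch of that point (illustrative only; the parent's example is integer, but the order-dependence shows up the same way with doubles):

                    #include <stdio.h>

                    int main(void) {
                        /* The same three terms, associated two ways. */
                        double a = (0.1 + 0.2) + 0.3;
                        double b = 0.1 + (0.2 + 0.3);
                        /* With typical IEEE-754 doubles this prints
                           0.60000000000000009 and 0.59999999999999998. */
                        printf("%.17g\n%.17g\n", a, b);
                        return 0;
                    }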

                  • wongarsu 3 minutes ago

                    I didn't consider it an order of operations issue. Order of operations doesn't matter in the above example unless you have bad precision. What I was trying to say is that good calculators have plenty of precision.

                  • II2II an hour ago

                    Yeap, the specific example wasn't important. I chose an example involving the order of operations and an integer overflow simply because it would be easy to discuss. (I have been out of the field for nearly 20 years now.) Your example of floating point errors is another. I also encountered artifacts from approximations of transcendental functions.

                    Choosing a "better" language was not always an option, at least at the time. I was working with grad students who were managing huge datasets, sometimes for large simulations and sometimes from large surveys. They were using C. Some of the faculty may have used Fortran. C exposes you the vulgarities of the hardware, and I'm fairly certain Fortran does as well. They weren't going to use a calculator for those tasks, nor an interpreted language. Even if they wanted to choose another language, the choice of languages was limited by the machines they used. I've long since forgotten what the high performance cluster was running, but it wasn't Linux and it wasn't on Intel. They may have been able to license something like Mathematica for it, but that wasn't the type of computation they were doing.

                  • skydhash 38 minutes ago

                    But floating-point error manifests in different ways. Most people only care about 2 to 4 decimals, which even the cheapest calculators handle well across a good number of consecutive everyday computations. Anyone who cares about better precision will choose a better calculator. So floating-point error is remediable.

              • anthk 3 hours ago

                Good languages with proper number towers will deal with both cases on equal terms.

            • yunwal 4 hours ago

              Determinism just means you don't have to use statistics to approach the right answer. It's not some silver bullet that magically makes things understandable and it's not true that if it's missing from a system you can't possibly understand it.

              • lukev 4 hours ago

                That's not what I mean.

                If I use a calculator to find a logarithm, and I know what a logarithm is, then the answer the calculator gives me is perfectly useful and 100% substitutable for what I would have found if I'd calculated the logarithm myself.

                If I use Claude to "build a login page", it will definitely build me a login page. But there's a very real chance that what it generated contains a security issue. If I'm an experienced engineer I can take a quick look and validate whether it does or whether it doesn't, but if I'm not, I've introduced real risk to my application.

                • threatofrain 4 hours ago

                  Those two tasks are just very different. In one world you have provided a complete specification, such as 1 + 1, for which the calculator responds with some answer, and both you and the machine have a decidable procedure for judging answers. In the other world you have engaged in a declaration for which there are many right and wrong answers, and thus even the boundaries of error are in question.

                  It's equivalent to asking your friend to pick you up, and they arrive in a big vs small car. Maybe you needed a big car because you were going to move furniture, or maybe you don't care, oops either way.

                  • lukev 3 hours ago

                    Yes. That is the point I was making.

                    Calculators provide a deterministic solution to a well-defined task. LLMs don't.

          • didgetmaster 4 hours ago

            If you hand a broken calculator to someone who knows how to do math, and they enter 123 + 765 and get an answer of 6789, they will instantly know something is wrong. Hand that calculator to someone who never understood what the tool actually did but just accepted whatever answer appeared, and they would likely think the answer was totally reasonable.

            Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.

            • abustamam an hour ago

              One time when I was a kid I was playing with my older sister's graphing calculator. I had accidentally pressed the base button and now was in hex mode. I did some benign calculation like 10+10 and got 14. I believed it!

              I went to school the next day and told my teacher that the calculator says that 10+10 is 14, so why does she say it's 20?

              So she showed me on her calculator. She pressed the hex button and explained why it was 14.

              I think a major problem with people's usage of LLMs is that they stop at 10+10=14. They don't question it or ask someone (even the LLM) to explain the answer.
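
              For anyone who wants to recreate the confusion, a one-line C illustration: the inputs are written in decimal, but the result is displayed in hex, exactly like the calculator did.

                #include <stdio.h>

                int main(void) {
                    /* Twenty, shown in base 16: prints "14". */
                    printf("%x\n", 10 + 10);
                    return 0;
                }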

              • saltcured 35 minutes ago

                Totally on a tangent here, but what kind of calculator would have a hex mode where the inputs are still decimal and only the output is hex..?

          • ThrowawayR2 3 hours ago

            The calculator analogy is wrong for the same reason. Knowing and internalizing arithmetic, algebra, and the shape of curves, etc. are mathematical rungs to get to higher mathematics and becoming a mathematician or physicist. You can't plug-and-chug your way there with a calculator and no understanding.

            The people who make the calculator analogy are already victims of the missing rung problem and they aren't even able to comprehend what they're lacking. That's where the future of LLM overuse will take us.

          • Wowfunhappy 3 hours ago

            > People would have said the same about graphing calculators or calculators before that.

            As it happens, we generally don't let people use calculators while learning arithmetic. We make children spend years using pencil and paper to do what a calculator could in seconds.

            • yoyohello13 2 hours ago

              This is why I don’t understand the calculator analogy. Letting beginners use LLMs is like if we gave kids calculators in 1st grade and told Timmy he never needs to learn 2 + 2. That’s not how education works today.

              • Wowfunhappy 2 hours ago

                I think this is exactly why calculators are a great analogy, and a hint toward how we should probably treat LLMs.

          • Jensson 4 hours ago

            > People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

            Well, we still make people calculate manually for many years, and we still make people listen to lectures instead of just reading.

            But will we still make people go through years of manual coding? I guess in the future we will force them to, at least if we want to keep people competent, just like the other things you mentioned. Currently you do that on the job; in the future people won't do it on the job, so they will be expected to do it as part of their education.

          • nothrabannosir 3 hours ago

            What do people mean exactly when they bring up “Socrates saying things about writing”? Phaedrus?

            > “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; [275a] and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

            > "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

            Sounds to me like he was spot on.

            • NiloCK 3 hours ago

              But did this grind humanity to a halt?

              Yes - specific faculties atrophied - I wouldn't dispute it. But the (most) relevant faculties for human flourishing change as a function of our tools and institutions.

              • nothrabannosir 2 hours ago

                Someone brought up Socrates upthread:

                > People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

                If the conclusion now becomes “actually, Socrates was correct but it wasn’t that bad”, then why bring up Socrates in the first place?

          • II2II 4 hours ago

            > The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

            In a sense, I think you are right. We are currently going through a period of transition that values some skills and devalues others. The people who see huge productivity gains because they don't have to do the meaningless grunt work are enthusiastic about that. The people who did not come up with the tool are quick to point out pitfalls.

            The thing is, the naysayers aren't wrong since the path we choose to follow will determine the outcome of using the technology. Using it to sift through papers to figure out what is worth reading in depth is useful. Using it to help us understand difficult points in a paper is useful. On the other hand, using it as a replacement for reading the papers is counterproductive. It is replacing what the author said with what a machine "thinks" an author said. That may get rid of unnecessary verbosity, but it is almost certainly stripping away necessary details as well.

            My university days were spent studying astrophysics. It was long ago, but the struggles with technology handling data were similar. There were debates between older faculty, who were fine with computers as long as researchers were there to supervise the analysis every step of the way, and new faculty, who needed computers to take raw data to reduced results without human intervention. The reason was, as always, productivity. People could not handle the massive amounts of data being generated by the new generation of sensors or systematic large-scale surveys if they had to intervene at every step of the way. At a basic level, you couldn't figure out whether it was a garbage-in, garbage-out type scenario because no one had the time to look at the inputs. (I mean no time in an absolute sense. There was too much data.) At a deeper level, you couldn't even tell if the data processing steps were valid unless there was something obviously wrong with the data. Sure, the code looked fine. If the code did what we expected of it, mathematically, it would be fine. But there were occasions where I had to point out that the computer wasn't working the way they thought it was.

            It was a debate in which both sides were right. You couldn't make scientific progress at a useful pace without sticking computers in the middle and without computers taking over the grunt work. On the other hand, the machine cannot be used as a replacement for the grunt work of understanding, whether that involves reading papers or analyzing the code from the perspective of a computer scientist (rather than a mathematician).

          • compass_copium 4 hours ago

            We still expect high school students to learn to use graph paper before they use their TI-83, grade school students to do arithmetic by hand before using a calculator. This is essentially the post's point, that LLMs are a useful tool only after you have learned to do the work without them.

          • beepbooptheory 4 hours ago

            Socrates does not say this about the written word. Plato has Socrates say it about writing in the beginning sections of the Phaedrus, but it is neither Socrates' opinion nor the final conclusion he arrives at.

            And yes yes you can pull up the quote or ask your AI, but they will be wrong. The quote is from Socrates reciting a "myth", as is pretty typical in a middle late dialogue like this.

            But here, alas, we can recognize the utter absurdity: this just points out why writing can be bad, as Socrates does pose. Because you get guys 2000 years in the future using you and misquoting you for their dumb cause! No more logos, only endless stochastic doxa. Truly a future of sophists!

        • threatofrain 4 hours ago

          But AI might actually get you there, through superior pedagogy: personal Q&A that most individuals couldn't have afforded before.

        • wongarsu 4 hours ago

          There are a lot of people in academia who are great at thinking about complex algorithms but can't write maintainable code if their life depended on it. There are ways to acquire those skills that don't go the junior developer route. Same with debugging and profiling skills

          But we might see a lot more specialization as a result

          • cmiles74 3 hours ago

            Do they need to write maintainable code? I think probably not, it's the research and discovering the new method that is important.

          • iterateoften 4 hours ago

            They can’t write maintainable code because they don’t have real world experience of getting your hands dirty in a company. The only way to get startup experience is to build a startup or work for one

            • wongarsu 4 hours ago

              Duh, the only way to get startup experience is indeed to get startup experience.

              My point is that getting into the weeds of writing CRUD software is not the only way to gain the ability to write complex algorithms, or to debug complex issues, or do performance optimization. It's only common because the stuff you make on the journey used to be economically valuable

              • iterateoften 2 hours ago

                > write complex algorithms, or to debug complex issues, or do performance optimization

                That’s the stuff that AI is eating. The stuff I’m talking about (scaling orgs, maintaining a project long term, deciding what features to build or not build, etc.) is the stuff that's very hard for AI.

                • 8note 2 hours ago

                  I don't know if I'd call it "hard for AI" so much as "untrodden ground".

                  Agents might be better at it than people are, given the right structure.

            • tovej 4 hours ago

              What. Are you saying maintainable code is specifically related to startups? I can accept companies as an answer (although there are other places to cut your teeth), but startups is a weird carveout.

              • Jensson 3 hours ago

                Writing maintainable code is learned by building large codebases. Working in an existing codebase doesn't teach it, so most people at large companies never build the skill, since they rarely start large new projects. Some do, but most don't. At a startup, though, you basically have to build a big new codebase.

        • omega3 5 hours ago

          That’s a good analogy, but I think we’ve already gone from 0 to 10 rungs over the last couple of years. If the models or harnesses keep improving, more and more rungs will be removed. The vast majority of programmers aren’t doing novel, groundbreaking work.

      • skippyboxedhero 4 hours ago

        The correct distinction is: if you can't do something without the agent, then you can't do it.

        The problem that the author describes is real. I have run into it hundreds of times now. I will know how to do something, I tell the AI to do it, the AI does not actually know how to do it at a fundamental level and will create fake tests to "prove" that it is done, and when you check the work it is wrong.

        You can tell the AI to do X at a very high level, but if you don't know how to check the outcome then the AI isn't going to be useful.

        The story about the cook is 100% right. McDonald's doesn't have "chefs", they have factory workers who assemble food. The argument with AI is that working in McDonald's means you are able to cook food as well as the best chef.

        The issue with hiring is that companies won't be able to distinguish between AI-driven humans and people with knowledge until it is too late.

        If you have knowledge and are using AI tools correctly (i.e. not trying to zero-shot work) then it is a huge multiplier. That the industry is moving towards agent-driven workflows indicates that the AI business is about selling fake expertise to the incompetent.

      • klabb3 2 hours ago

        > The problem arises when Bob encounters a problem too complex or unique for agents to solve.

        It’s actually worse than that: the AI will not stop and say “too complex, try in a month with the next SOTA model”. Rather, it will give Bob a plausible-looking solution that Bob cannot identify as right or wrong. If Bob is working on an instant-feedback problem, it’s OK: he can flag it, try again, ask for help. But if the error can’t be detected immediately, it can come back with a vengeance in a year. Perhaps Bob has already gotten promoted by then, and Bob’s replacement gets to deal with it. In either case, Bob cannot be trusted any more than the LLM itself.

      • raldi 5 hours ago

        To me it feels more like learning to cook versus learning how to repair ovens and run a farm. Software engineering isn’t about writing code any more than it’s about writing machine code or designing CPUs. It’s about bringing great software into existence.

        • victorbjorklund 4 hours ago

          Or farming before and after agricultural machines. The principles are the same but the “tactical” stuff is different.

      • roenxi 6 hours ago

        That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise. Life throws us hard problems. I don't recall if we even assumed Bob was unusually capable, he might be one of life's flounderers. I'd give good odds that if he got through a program with the help of agents he'll get through life achieving at least a normal level of success.

        But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs. At that point, Bob may discover that anything agents can't do, Alice can't do either, because she is limited by trying to think using soggy meat as opposed to a high-performance engineered thinking system. Not going to win that battle in the long term.

        > The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

        The market values bulldozers. Whether a human does actual work or not isn't particularly exciting to a market.

        • kelnos 5 hours ago

          > we're trending towards superintelligence with these AIs

          The article addresses this, because, well... no we aren't. Maybe we are. But it's far from clear that we're not moving toward a plateau in what these agents can do.

          > Whether a human does actual work or not isn't particularly exciting to a market.

          You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.

          I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.

          Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more thorough, and be able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.

          • zozbot234 4 hours ago

            There's no good definition of superintelligence. A calculator is already way more capable than any human at doing simple mathematical operations, and even small AIs for local use can instantly recall all sorts of impressive knowledge about virtually any field of study, which would be unfeasible for any human; but neither of those is what people mean when they wonder whether future AIs will have superintelligence.

            • Jensson 4 hours ago

              General superintelligence is better defined; I assume that is what he meant. When I hear "superintelligence" I assume people just mean general superintelligence, as in: better than humans at every single mental task that exists.

          • dryarzeg 5 hours ago

            > But it's far from clear that we're not moving toward a plateau in what these agents can do.

            It is a debatable topic, and I agree with you that it's unclear whether we will hit a wall at some point. But one point I want to mention: back when AI agents were only conceived and the most popular type of """AI""" was the LLM-based chatbot, it also seemed that we were approaching some kind of plateau in performance. Then "agents" appeared, and that plateau, the wall we seemed likely to hit, was pushed further out. I don't know (who knows at all?) how far the boundaries can be pushed, and who knows what comes next? Who knows, for example, when a completely new architecture, different from Transformers, will come out and be adopted everywhere, allowing for something new? The future is uncertain. We may hit the wall this year, or we may not hit it in the next 10-20 years. It is, indeed, unclear.

            • bee_rider 4 hours ago

              Are agents something special? We already had LLMs that could call tools. Agents are just that, in a loop, right?

              • dryarzeg 4 hours ago

                Roughly speaking - yes. Still, it's an advancement - even if it's a small one - on the usual chatbots, right?

                P.S. I am well aware of all of the risks that agents brought. I'm speaking in terms of pure "maximum performance", so to speak.

        • dandellion 5 hours ago

          > we're trending towards superintelligence with these AIs

          I wouldn’t count on that, because even if it happens, we don’t know when it will happen, and it’s one of those things where how close it looks is no indication of how close it actually is. We could just as easily spend the next 100 years being 10 years away from AGI. Just look at fusion power, self-driving cars, etc.

          • CuriouslyC 5 hours ago

            Fusion isn't a good example. Self-driving cars are a battle between regulation and 9's of reliability; if we were willing to accept self-driving cars that crashed as much as humans do, they'd be here already.

            Whatever models suck at, we can pour money into making them do better. It's very cut and dried. The squirrely bit is how that contributes to "general intelligence" and whether the models are progressing towards overall autonomy because of our changes. That mostly matters for the AGI mouthbreathers, though; people doing actual work just care that the models have improved.

        • b00ty4breakfast 5 hours ago

          >But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs

          do you have any evidence for that, though? Besides marketing claims, I mean.

          • roenxi 5 hours ago

            I've always quite liked https://ourworldindata.org/grapher/test-scores-ai-capabiliti... to show that once AIs are knocking at the door of a human capability they tend to overshoot in around a decade.

            • b00ty4breakfast 3 hours ago

              We have to look at what LLMs are and are not doing for this to be applicable; they are not "thinking", and there is no real cognition going on inside an LLM. They are making statistical connections between data points in their training sets. Obviously, that has borne some pretty interesting (and sometimes even useful) results, but they are not doing anything that any reasonably informed person would call "intelligent", and certainly not "superintelligent".

            • Lionga 4 hours ago

              This is just trash, like almost any AI benchmark. E.g. it claims speech recognition has been above human level since around 2015, yet any speech input today has more errors than any human would make.

              If I spoke this comment instead of typing it, maybe 2 to 5 words would come out wrong. For a human, it would be maybe 10% of that.

        • whateveracct 2 hours ago

          > That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise.

          I have literally never run into this in my career... challenges have always been something to help me grow.

        • ozim 3 hours ago

          The market values bulldozers for bulldozing jobs. No one is going to use a bulldozer to mow a lawn.

          If Bob is going to spend $500 in tokens on something I can do for $50, I don't think Bob is going to last long in the lawn-mowing market driving a bulldozer.

        • mattmanser 5 hours ago

          The author's point went a little over your head.

          It doesn't matter if Bob can be normal. There was no point in him being paid to be on the program.

          From the article:

          If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.

          • lelanthran 5 hours ago

            > It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.

            Yeah, I'm surprised at the number of people who read the article, came away with the conclusion that the program was designed to churn out deliverables, and then concluded that it doesn't matter if Bob can only function with an AI holding his hand, because he can still deliver.

            That isn't the output of the program; the output is an Alice. That's the point of the program. They don't want the results generated by Alice, they want the final Alice.

            • alex_suzuki 4 hours ago

              It’s a fairly long article, maybe they had it summarized and came to that conclusion…

          • SoftTalker 2 hours ago

            And then you realize that most of science is unnecessary. As TFA points out, it doesn't matter if the age of the universe is 13.77 or 13.79 billion years. So if you ban AI in science, you produce more scientists who can solve problems that don't matter. So what?

        • uoaei 5 hours ago

          "Things that have never been done before in software" has been my entire career. A lot of it requires specific knowledge of physics, modelling, computer science, and the tradeoffs involved in parsimony and efficiency vs accuracy and fidelity.

          Do you have a solution for me? How does the market value things that don't yet exist in this brave new world?

        • ModernMech 3 hours ago

          > Not going to win that battle in the long term.

          I would take that bet on the side of the wet meat. In the future, every AI will be an ad executive. At least the meat programming won't be preloaded to sell ads every N tokens.

        • wizzwizz4 5 hours ago

          From the article:

          > There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.

          We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.

          Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.

          • jnovek 4 hours ago

            The rate of hallucination has gone down drastically since 2023. As LLM coding tools continue to pare that rate down, eventually we’ll hit a point where it is comparable to the rate at which we human programmers naturally introduce bugs.

            • wizzwizz4 3 hours ago

              LLMs are still making fundamentally the same kinds of errors that they made in 2021. If you check my HN comment history, you'll see I predicted these errors, just from skimming the relevant academic papers (which is to say they're obvious: I'm far from the only person saying this). There is no theoretical reason we should expect them to go away, unless the model architectures fundamentally change (and no, GPT -> LLaMA is not a fundamental change), because they're not removable discontinuities: they're indicative of fundamental capability gaps.

              I don't care how many terms you add to your Taylor series: your polynomial approximation of a sine wave is never going to be suitable for additive speech synthesis. Likewise, I don't care how good your predictive-text transformer model gets at instrumental NLP subtasks: it will never be a good programmer (except as far as it's a plagiarist). Just look at the Claude Code source code: if anyone's an expert in agentic AI development, it's the Claude people, and yet the codebase is utterly unmaintainable dogshit that shouldn't work and, on further inspection, doesn't work.

              That's not to say that no computer program can write computer programs, but this computer program is well into the realm of diminishing returns.

      • jnovek 4 hours ago

        How many people who cook professionally are gourmet chefs? I think it ends up that gourmet cooking is so infrequently needed that we don’t require everyone who makes food to do it, just a small group of professionally trained people. Most people who make food for a living work somewhere like McDonald’s and Applebee’s where a high level of skill is not required.

        There will still be programming specialists in the future — we still have assembly experts and COBOL experts, after all. We just won’t need very many of them and the vast majority of software engineers will use higher-level tools.

        • ThrowawayR2 2 hours ago

          That's the problem though: programmers who become the equivalent of McDonald's workers will be paid poorly like McDonald's workers and be treated as disposable like McDonald's workers.

      • cfloyd 4 hours ago

        I held this point of view for a while, but I came to the (possibly naive) conclusion that it was just forced self-assurance. Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce a better end product if supervised and prompted correctly. The issue is that most don’t take the time to do that. I’m not saying I like that this is true, quite the opposite. It is the reality of things now.

        • vrganj 4 hours ago

          At some point the herding of idiot savants becomes more work than just doing the damn thing yourself in the first place.

          • lxgr 4 hours ago

            I'm happy to herd idiots all my life if they come out of it smarter than they went in. The real tragedy with current LLM agents is that they're effectively stateless, and so all the effort of "educating" them feels wasted.

            Once continuous learning is solved, I predict the problem addressed by TFA to become orders of magnitude bigger: What's the motivation for anyone to teach a person if an LLM can learn it much faster, will work for you forever, and won't take any sick days or consider changing careers?

            • vrganj 3 hours ago

              At that point, I think it'll be time to admit to ourselves that capitalism is over.

              The only reason we somewhat made it work is due to the interdependence between labor and capital. Once that's broken, the wheels will start falling off.

      • CuriouslyC 5 hours ago

        Just because Bob doesn't know, e.g., Rust syntax and library modules well doesn't mean that Bob can't learn an algorithm to solve a difficult problem. The AI might suggest classes of algorithms that could be applicable given the real-world constraints, and help Bob set up an experimental plan to test different algorithms for efficacy in the situation, but Bob's intuition is still in the driver's seat.

        Of course, that assumes a Bob with drive and agency. He could just as easily tell the AI to fix it without trying to stay in the loop.

        • bigfishrunning 4 hours ago ago

          But if Bob doesn't know Rust syntax and library modules well, how can he be expected to evaluate the Rust code the agent generates? Bugs can be very subtle and not obvious, and Rust has some constructs that are very uncommon (or don't exist) in other languages.

          Human nature says that Bob will skim over and trust the parts that he doesn't understand as long as he gets output that looks like he expects it to look, and that's extremely dangerous.

          • ndriscoll 4 hours ago ago

            Then perhaps Bob should have it use functional Scala, where my experience is that if it compiles and looks like what you expect, it's almost certainly correct.

            • bigfishrunning 3 hours ago ago

              Sure, but Bob is very unlikely to do that unless his AI tool of choice suggests it.

      • bitwize 4 hours ago ago

        Bob+agents is going to be able to solve much more complex problems than Bob without agents.

        That's the true AI revolution: not the things it can accelerate, but the things it can put within reach that you wouldn't have countenanced doing before.

      • b112 5 hours ago ago

        Worse, soon fewer and fewer people will get to taste good food, with even higher and higher-end restaurants just using pre-made food.

        As fewer people know what good food tastes like, the entire market will enshittify towards lower and lower calibre food.

        We already see this with, for example, fruits in cold climates. I've known people who have only ever bought them from the supermarket, then tried them at a farmers' market when they're in season for 2 weeks. The look of astonishment on their faces, at the flavour, is quite telling. They simply had no idea how dry and flavourless supermarket fruit is.

        Nothing beats an apple picked just before you eat it.

        (For reference, produce shipped to supermarkets is often picked, even locally, before being entirely ripe. It lasts longer, and handles shipping better, than a perfectly ripe fruit.)

        The same will be true of LLMs. They're already out of "new things" to train on. I question whether they'll ever learn new languages: who would they observe to train on? And what does it matter if the code is unreadable by humans regardless?

        And this is the real danger. Eventually, we'll have entire coding languages that are just weird, incomprehensible, tailored to LLMs, maybe even a language written by an LLM.

        What then? Who will be able to decipher such gibberish?

        Literally all true advancement will stop, for LLMs never invent; they only mimic.

        • CuriouslyC 5 hours ago ago

          Ironically, apples are one of the fruits where tree ripening isn't a big deal for a lot of varietals. You should have used tomato as the example; the difference there is night and day pretty much across the board.

          If humans can prove that bespoke human code brings value, it'll stick around. I expect that the cases where this will be true will just gradually erode over time.

      • zozbot234 4 hours ago ago

        Real-world cooks don't exactly avoid those newfangled microwave ovens though. They use them as a professional tool for simple tasks where they're especially suitable (especially for quick defrosting or reheating), which sometimes allows them to cook even better meals.

    • xantronix 3 hours ago ago

      I'm glad you've posted this comment, because I strongly feel more people need to see this sentiment and push back against what many above want to become the new norm. I see capitulation and compliance in advance, and it makes me sad. I also see two very valid, antipodal responses to this phenomenon: exit from the industry, and malicious compliance through accelerationism.

      To the reader and the casual passerby, I ask: Do you have to work at this pace, in this manner? I understand completely that mandates and pressure from above may instill a primal fear to comply, but would you be willing to summon enough courage to talk to maybe one other person you think would be sympathetic to these feelings? If you have ever cared about quality outcomes, if for no other reason than the sake of personal fulfillment, would it not be worth it to firmly but politely refuse purely metrics-focused mandates?

    • lelanthran 5 hours ago ago

      > The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

      "Being able to deliver using AI" wasn't the point of the article. If it was the point, your comment would make sense.

      The point of the program referred to in the article is not to deliver results, but to deliver an Alice. Delivering a Bob is a failure of the program.

      Whether you think that a Bob+AI delivers the same results is not relevant to the point of the article, because the goal is not to deliver the results, it's to deliver an Alice.

      • sd9 5 hours ago ago

        I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

        • bigfishrunning 4 hours ago ago

          People never cared about delivering Alices; they were an implementation detail. I think the article argues that they're still an important one, but one that isn't produced automatically anymore

          • wiseowise 4 hours ago ago

            The article is talking about science research in the context of astrophysics, not coding sweatshops.

            • bigfishrunning 3 hours ago ago

              I was also talking about producing researchers for academia.

        • lelanthran 4 hours ago ago

          > I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

          That's irrelevant to the goal of the program - they care. Once they stop caring, they'd shut that program down.

          Maybe it would be replaced with a new program that has the goal of delivering Bobs+AI, but what would be the point? I mean, the article explained in depth that there is no market for the results currently, so what would be the point of efficiently generating those results?

          The market currently does not want the results, so replacing the current program with something that produces Bobs+AI would be for... what, exactly?

          • sd9 4 hours ago ago

            There’s no market for the results, but there was a market for Alices, because they were the only people who could produce similar results historically. Now maybe there’s less of a market for Alices. Yes, maybe that means the program disappears.

    • fomoz an hour ago ago

      It's the next level of abstraction. Bob is still learning, he's just learning a different set of skills than Alice.

      Also, given the premise that it took each of them a year to do the project, Bob was slacking, because he probably could've done it in less than a month.

    • staindk 6 hours ago ago

      They aren't going away, but for some they may become prohibitively expensive after the subsidies end.

      I do think coding with local agents will keep improving to a good level, but if deep-thinking cloud tokens become too expensive, you'll reach the limits of what your local, limited agent can do much more quickly (i.e. be even less able to do more complex work, as other replies mention).

      • tonfa 6 hours ago ago

        > They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

        Even if inference were subsidized (afaik it isn't when paying through API calls; subscription plans might indeed lose money on heavy users, but that's how any subscription model typically works, and they can still be profitable overall), prices becoming prohibitive seems unlikely.

        Models are still improving and getting cheaper.

        • SlinkyOnStairs 34 minutes ago ago

          > afaik it isn't when paying through API calls

          There is no evidence for this. The claims that API inference is "profitable" are all hearsay. Despite the fact that any AI executive could immediately dispel the misconception by making a public statement beholden to SEC regulation, none of them do.

          > Models are still improving/getting cheaper

          The diminishing returns have set in for quality, and for a while now that increased quality has come at the cost of massive increases in token burn; it's not getting cheaper.

          Worse yet, we're in an energy crisis. Iran has threatened to strike critical oil infrastructure, and repairs would take years.

          AI is going to get significantly more expensive, soon.

        • ernst_klim 5 hours ago ago

          It probably is still subsidized, just not as much. We won't know if these APIs are profitable unless these companies go public, and until then it's safe to bet these APIs are underpriced to win market share.

          • zozbot234 4 hours ago ago

            Third-party AI inference with open models is widely available and cheap. You're paying as much as for proprietary mini-models, or even less, for something far more capable, and that's without any subsidies (other than the underlying capex and the expense of training the model itself).

          • CuriouslyC 5 hours ago ago

            Anthropic has shared that API inference has a ~60% margin. OpenAI's margin might be slightly lower since they price aggressively but I would be surprised if it was much different.

            • bigfishrunning 4 hours ago ago

              Is that margin enough to cover the NRE of model development? Every pro-AI argument hinges on the models continuing to improve at a near-linear rate.

              • tonfa 3 hours ago ago

                Yeah, but the argument people make is that when the music stops, the cost of inference goes through the roof.

                I could imagine that when the music stops, advancement of new frontier models slows or stops, but that doesn't remove any current capabilities.

                (And to be fair, the way we duplicate efforts on building new frontier models does look wasteful. Though maybe we reach a point later where progress is no longer started from scratch.)

          • throwthrowuknow 5 hours ago ago

            Then we’ll likely know by the end of this year.

    • KronisLV 4 hours ago ago

      > I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

      I dread the flip side of this, which is dealing with obtuse bullshit like trying to understand why Oracle ADF won’t render forms properly, or how to optimize some codebase with a lot of N+1 calls when there are looming deadlines and the original devs never made it scalable, or needing to dig into undercommented legacy codebases, or needing to work on 3-5 projects in parallel.
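
      For anyone who hasn’t hit it, the N+1 shape issues one query per parent row instead of one batched query overall. A toy Python sketch (hypothetical data shapes, no real ORM):

          # Pretend each function call below is one database round-trip.
          orders_by_user = {1: ["a"], 2: ["b", "c"], 3: []}

          def fetch_orders_for(user_id):
              return orders_by_user[user_id]                     # one round-trip per call

          def fetch_orders_for_many(user_ids):
              return {u: orders_by_user[u] for u in user_ids}    # one round-trip total

          # N+1 shape: one "query" per user, so round-trips grow with the user count
          slow = {u: fetch_orders_for(u) for u in orders_by_user}

          # Batched shape: a single round-trip, then group in memory
          fast = fetch_orders_for_many(list(orders_by_user))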

      Agents iterating until those start working (at least cases that are testable) and taking some of the misery and dread away makes it so that I want to theatrically defenestrate myself less.

      Not everyone has the circumstance to enjoy pleasant and mentally stimulating work that’s not a frustrating slog all the time - the projects that I actually like working on are the ones I pick for weekends, I can’t guarantee the same for the 9-5.

      • sd9 4 hours ago ago

        Oh yes, it’s an entirely privileged position to be able to enjoy your work. But it’s a privilege I have enjoyed and not one I want to give up unless I have to. We spend an extraordinary amount of our waking life at work.

        • KronisLV 4 hours ago ago

          I do hope you can find a set of circumstances that don't make you give it up too much. And hey, if you end up moving to another line of work than software, no reason why you couldn't still enjoy working on whatever project you want over the weekend, too.

    • klabb3 2 hours ago ago

      > So if Bob can do things with agents, he can do things.

      Yes, but how does he know if it worked? If you have instant feedback, you can use LLMs and correct when things blow up. In fact, you can often try all options and see which works, which makes it “easy” in terms of knowledge work. If you have delayed feedback, costly iterations, or multiple variables changing underneath you at all times, understanding is the only way.

      That’s why building features and fixing bugs is easy, and system level technical decision making is hard. One has instant feedback, the other can take years. You could make the “soon” argument, but even with better models, they’re still subject to training data, which is minimal for year+ delayed feedback and multivariate problems.

    • ozim 3 hours ago ago

      There is still a lot of engineering to be done with LLMs. Maybe not exactly writing code but I think a lot of optimization problems will be there no matter what.

      Some people treat the toilet as a magic hole where they throw stuff in, flush, and think it is fine.

      If you throw garbage in you will at some point have problems.

      We are at a stage where people think it is fine to drop everything into an LLM, but then they will see the bill for usage and might be surprised that they burned money and the result was not exactly what they expected.

      • coffeefirst 3 hours ago ago

        Yep. I hate to predict the future, but I’m betting on small, open models, used as tools here and there. Which is great: you can get 90% of the speedup for 5-10% of the cost, once you account for how time-consuming it is to make sense of and fix the output.

        The economics and security problems of full agents running in loops all day may come home to roost faster than expertise rot does.

    • lxgr 3 hours ago ago

      > if Bob can do things with agents, he can do things.

      This point is directly addressed in the paper: Bob will ultimately not be able to do the things Alice can, with or without agents, because he didn't build the necessary internal deep structure and understanding of the problem space.

      And if Alice later on ends up being a better scientist (using agents!) than Bob will ever be, would you not say there was something lost to the world?

      Learning needs a hill to climb, and somebody to actually climb it. Bob only learned how to press an elevator button.

    • michaelcampbell 3 hours ago ago

      > I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

      I am in the same boat, but close enough to retirement that I'm less "scared" about it. For me I'm moving up the chain; not people management, but devoting a lot more of my time up the abstraction continuum. Looking a lot more at overall designs and code quality and managing specs and inputs and requirements.

      I wrote some design docs over the past few days for a big project the team is embarking on. We never had that before, at least not at the level of detail (per time quantum) that I was able to produce. I used 2 models from 2 companies - one to write, one to review - and bounced between them until the 3 of us agreed.
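
      Roughly this shape, for anyone curious - a hedged sketch where call_writer and call_reviewer are hypothetical stand-ins for whichever two model APIs you use:

          def call_writer(prompt):
              raise NotImplementedError("wire up model A here")

          def call_reviewer(prompt):
              raise NotImplementedError("wire up model B here")

          def draft_until_agreement(spec, max_rounds=5):
              draft = call_writer("Write a design doc for: " + spec)
              for _ in range(max_rounds):
                  review = call_reviewer("Critique this design doc:\n" + draft)
                  if "no further issues" in review.lower():      # crude convergence check
                      return draft
                  draft = call_writer("Revise to address:\n" + review + "\n\nDoc:\n" + draft)
              return draft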

      Honestly it didn't take any less time than doing it alone would have, but the level of detail was better, and it covered more edge cases. Calling it a "win" right now. I still enjoy it, as most of the code I/we was/are writing is mostly fancy CRUD anyway, and doesn't have huge scaling problems to solve (and too few devs, I feel, are being honest about their work here).

    • asHg19237 5 hours ago ago

      Many things have come and gone in this fashion-oriented industry. Everyone is already bored to hell by AI output.

      AI in software engineering is kept afloat by the bullshitters who jump on any new bandwagon because they are incompetent and need to distract from that. Managers like bullshit, so these people thrive for a couple of years until the next wave of bullshit is fashionable.

    • qsera 5 hours ago ago

      >The thing is, agents aren’t going away...

      Aren't they currently propped up by investor money?

      What happens when the investors realize the scam that it is and stop investing or start investing less...

      • samusiam 5 hours ago ago

        > Aren't they currently propped up by investor money?

        Are Chinese model shops propped up by investor money? Is Google?

        Open weights models are only 6 months behind SOTA. If new model development suddenly stopped, and today's SOTA models suddenly disappeared, we would still have access to capable agents.

        • qsera 5 hours ago ago

          >we would still have access to capable agents.

          But they would be outdated, right?

          Would an agent that can only code in COBOL be as useful today?

          • iugtmkbdfil834 4 hours ago ago

            By six months. Sure, non-SOTA models can eventually become outdated. But your argument ignores the 'new model development suddenly stopped' aspect: if development stops, there is nothing to be outdated relative to.

          • lxgr 3 hours ago ago

            > But they would be outdated, right?

            Outdated compared to what? In your counterfactual, VC funded agents don't exist anymore, no?

            Your argument, if I understand it correctly, is that they might somehow go away entirely when VC funding dries up, when more realistically they'll probably at most become twice as expensive or regress half a year in performance.

            • Jensson 3 hours ago ago

              > Outdated compared to what? In your counterfactual, VC funded agents don't exist anymore, no?

              Outdated compared to reality and to humans: their knowledge cutoff falls a year further behind for every year they don't get updates. Humans continuously expand their knowledge; the models need to keep up with that.

        • loeg 4 hours ago ago

          Well, the Chinese shops are propped up by the CCP instead.

          • samusiam 4 hours ago ago

            That's true, but the "AI bubble bursts" scenario is usually tied to Western investors getting essentially margin-called. If that happens, the CCP won't suddenly stop their investment; Chinese models will most likely continue developing.

    • QuantumNomad_ 5 hours ago ago

      > if Bob can do things with agents, he can do things

      I’ve been reminded lately of a conversation I had with a guy at a hacker space cafe in Berlin around ten years ago.

      He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

      He was lamenting that these days, software was written in higher level languages, and that more and more programmers no longer had the same level of knowledge about the lower level workings of computers. He had a valid point and I enjoyed talking to him.

      I think about this now when I think about agentic coding. Perhaps over time most software development will be done without knowledge of the high-level programming languages we know today. There will still be people in the future who work in those languages and are intimately familiar with them, just as there are still people who work in assembly today, even if their share of all programmers has shrunk over time.

      And just as there are areas where assembly is still required knowledge, I think there will be areas where knowledge of the programming languages we use today will remain necessary, and vibe coding alone won’t cut it. But the percentage of people working in high-level languages will go down, relative to the number of people vibe coding and never even looking at the code that the LLM is writing.

      • loveparade 5 hours ago ago

        I see these analogies a lot, but I don't like them. Assembly has a clear contract. You don't need to know how it works because it works the same way each time. You don't get different outputs when you compile the same C code twice.

        LLMs are nothing like that. They are probabilistic systems at their very core. Sometimes you get garbage. Sometimes you win. Change a single character and you may get a completely different response. You can't easily build abstractions when the underlying system has so much randomness because you need to verify the output. And you can't verify the output if you have no idea what you are doing or what the output should look like.
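
        A toy contrast in Python (my own illustration, not any real model API): the compiler-like function is a fixed mapping, while the sampler can answer the same input differently on every call.

            import random

            def compiled(x):
                # Deterministic contract: same input, same output, every time.
                return x * x

            def sampled(prompt):
                # Probabilistic: the same prompt can yield a different completion.
                return prompt + random.choice(["!", "?", "..."])

            assert compiled(12) == compiled(12)    # always holds; nothing to re-verify
            print(sampled("hi"), sampled("hi"))    # may print two different strings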

        • lxgr 3 hours ago ago

          I think these analogies are largely correct, but TFA is about something subtly different:

          LLMs don't make it impossible to do anything yourself, but they make it economically impractical to do so. In other words, you'll have to largely provide both your own funding and your own motivation for your education, unless we can somehow restructure society quickly enough to substitute both.

          With assembly, we arguably got lucky: It turns out that high-level programming languages still require all the rigorous thinking necessary to structure a programmer's mind in ways that transfer to many adjacent tasks.

          It's of course possible that the same is true for using LLMs, but at least personally, something feels substantially different about them. They exercise my "people management" muscle much more than my "puzzle solving" one, and wherever we're going, we'll probably still need some puzzle solvers too.

      • lelanthran 5 hours ago ago

        > He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

        Please, not this pre-canned BS again!

        Comparing abstractions to AI is an apples to oranges comparison. Abstractions are dependable due to being deterministic. When I write a function in C to return the factorial of a number, and then reuse it again and again from Java, I don't need a damn set of test cases in Java to verify that factorial of 5 is 120.

        With LLMs, you do. They aren't an abstraction, and seeing this worn out, tired and routinely debunked comparison being presented in every bloody thread is wearing a little thin at this point.

        We've seen this argument hundreds of times on this very site. Repeating it doesn't make it true.

      • sd9 5 hours ago ago

        Lovely story, thanks for sharing.

        I wonder how many assembly programmers got over it and retrained, versus moved on to do something totally different.

        I find the agentic way of working simultaneously more exhausting and less stimulating. I don’t know if that’s something I’m going to get over, or whether this is the end of the line for me.

        • AnimalMuppet 3 hours ago ago

          I wasn't there at the time, but I believe that most assembly programmers learned higher-level languages.

          My mother actually started programming in octal. I don't remember her exact words, but she said something to the effect that her life got so much better when she got an assembler. I suspect that going from assembly to compilers was much the same - you no longer had to worry about register allocations and building stack frames.

          • ThrowawayR2 2 hours ago ago

            It was a trade-off for a very long time (late 1960s to late 1990s IMO): the output of the early compilers was much less efficient than hand-written assembly language, but it enabled less skilled programmers to produce working programs. Compilers pulled ahead when processor ISAs eventually evolved to optimize executing compiler-generated code (e.g. the CISC -> RISC transition) and optimizing compilers became practical because of more powerful hardware. It definitely was not an overnight transformation.

      • jurgenburgen 3 hours ago ago

        The difference is that you don’t need to review the machine code produced by a compiler.

        The same is not true for LLM output. I can’t tell my manager I don’t know how to fix something in production the agent wrote. The equivalent analogy would be if we had to know both the high-level language _and_ assembly.

    • torben-friis 5 hours ago ago

      Can you run an industry level LLM at home?

      If not, you're trading learning to cook for Uber-only meals.

      And since the alternative is starving, Uber will boil the pot.

      Don't give up your self sufficiency.

      • zozbot234 4 hours ago ago

        > Can you run an industry level LLM at home?

        Assuming that by "at home" you mean using ordinary hardware, not something that costs as much as a car: yes, very slowly, for simple tests. (Not proprietary models, obviously, but quite capable ones nonetheless.) It's not exactly viable for agentic coding that needs boatloads of tokens for the simplest things, but you can run smaller local models that are still quite capable for many things.

      • sd9 5 hours ago ago

        I’m very good at the handcrafted stuff, I’ve been doing this a while. I don’t feel like giving up my self sufficiency, I just feel like the writing is on the wall.

        • torben-friis 5 hours ago ago

          By "you" I actually meant the hypothetical person who's only good enough for AI-assisted work. Though even for those of us who are already experienced, we should keep up the manual stuff, even if it's just like going to the gym. I don't see myself retaining my skills for long by just reviewing LLM output.

          • sd9 5 hours ago ago

            Yes sorry, I didn’t think you were addressing me directly, just adding my own thoughts.

            I agree totally with the sentiment, and I am concerned about my own skills atrophying.

      • loeg 4 hours ago ago

        The costs just aren't that high. They could be 10x higher and it still wouldn't be a huge deal.

      • Almondsetat 5 hours ago ago

        Can you build a computer at home?

        There is absolutely nothing self-sufficient about computer hardware

        • jappgar 4 hours ago ago

          Or generate electricity? Or grow enough food to survive? Medicines?

          "Self-sufficiency" arguments coming from tech nerds are so tiring.

        • torben-friis 3 hours ago ago

          No, and that's the reason we're now paying twice what we paid a couple years ago. But I can write software at home.

          We're already vulnerable to enshittification in so many areas, why increase the list? How does that work in my favor at all?

    • mchaver 4 hours ago ago

      > The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

      Following the model of how startups have worked for the last 20 years or so, I expect agents to eventually be locked down, nerfed, or ad-infested to push users toward higher payments. We are enjoying the fruits of VC money at the moment, and it is getting everyone addicted to agents. Eventually these companies need to turn a profit.

      Not sure how this plays out, but I would hang on to any competencies you have for anyone (or business) that wants to stick around in software. Use agents strategically, but don't give up your ability to code/reason/document, etc. The only way I can see this working differently is that there are huge advances in efficiency and open-source models.

      • spacechild1 3 hours ago ago

        That's one of several reasons why I'm trying not to rely too much on LLMs. The prospect of only being able to code with a working internet connection and a subscription to some megacorp service is not particularly appealing to me.

    • gbro3n 5 hours ago ago

      I think a good analogy is people not being able to work on modern cars because they are too complex or require specialised tools. True I can still go places with my car, but when it goes wrong I'm less likely to be able to resolve the problem without (paid for) specialised help.

      • b00ty4breakfast 5 hours ago ago

        And just like modern vehicles rob the user of autonomy, so too for coding agents. Modern tech moves further and further away from empowering normal people and increasingly serves to grow the influence of corporations and governments over our day to day lives.

        It's not inherent, but it is reality unless folks stop giving up agency for convenience. I'm not holding my breath.

        • duskdozer 5 hours ago ago

          Soon enough we'll have ads playing in our cars at stoplights.

          • ipaddr 4 hours ago ago

            We have that now for most people: the radio. But hopefully we will be in an LLM self-driving car and can get ads for the entire trip.

    • jurgenaut23 5 hours ago ago

      I understand your point, but this is a purely utilitarian view and it doesn’t account for the fact that, even if agents may do everything, it doesn’t mean they should, both in a normative and positive sense.

      There is a vast range of scenarios in which being more or less independent from agents to perform cognitive tasks will be both desirable and necessary, at the individual, societal and economic level.

      The question of how much territory we should give up to AI really is both philosophical and political. It isn’t going to be settled by mere one-sided arguments.

      • sd9 5 hours ago ago

        The people who pay my bills operate in a largely utilitarian fashion.

        They’re not going to pay me to manually program because I find it more enjoyable, when they can get Bob to do twice as much for less.

        This is why I say I don’t like it, but it is what it is.

    • codemonkey5 5 hours ago ago

      Some people probably enjoyed writing assembly (I am not one of those people, especially when I had to do it on paper in university exams), and code agents can probably do it well - but for the hard tasks, the tasks that are net new, code agents will produce bad results, and you still need the people who enjoy writing that to show the path forward.

      Code agents are great template generators and modifiers, but for net-new (innovative!) work they're often barely usable without a ton of handholding or "non-code-generation coding".

    • zozbot234 4 hours ago ago

      > I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

      You're still working on intellectually stimulating programming problems. AI doesn't go all the way with any reliability, it just provides some assistance. You're still ultimately responsible for getting things right, even with key AI help.

    • nidnogg 6 hours ago ago

      I don't like it either. But what really guarantees that other markets won't flunk similarly later on? What's to say other jobs are going to be any better? Back in college, most of my peers would say "I'm not cut out for anything else. This is it". They were, sure enough, computer and/or math people at heart from an early age.

      More importantly, what's gonna be the next stable category of remote-first jobs that a person with a tech-adjacent or tech-minded skillset can tack onto? That's all I care about, to be honest.

      I may hate tech with a passion at times and be overly bullish on its future, but there's no replacing my past jobs which have graced me and many others with quality time around family, friends, nature and sports while off work.

      • sd9 5 hours ago ago

        I don’t know, it’s only since about December that I felt things really start to shift, and February when my job started to become very different.

        Personally I’m looking at more physical domains, but it’s early days in my exploration. I think if I wanted to stick to remote work (which I have enjoyed since 2020), then the AI story would just keep playing out.

        I’m also totally open to taking a big pay cut to do something I actually enjoy day to day, which I guess makes it easier.

        • throwanem 5 hours ago ago

          So recent? I've been on sabbatical (the real kind, self-funded) for eighteen months, and while my sense has been things have not stopped heading downhill since I stepped off the ride back in 2024, to hear of such a sudden step change is somewhat novel. "Very different" just how, if you don't mind my asking?

          (I'm also looking for local, personally satisfying work, in exchange for a pay cut. Early days, and I am finding the profession no longer commands quite the social cachet it once did, but I'm not foolish enough to fail to price for the buyer's market in which we now seek to sell our labor. Besides, everyone benefits from the occasional reminder to humility! "Memento mori" and all that.)

          • nidnogg 14 minutes ago ago

            Don't you feel that sabbaticals kinda get you off the new tech wave anyway? I usually check in on news much more often when bored at slow work days.

            On the side, this might not have to do at all with your case, but the reason I personally keep putting off sabbaticals is that I feel it could severely compound my routine-wrecking habits, and I don't think I'd be strong-willed enough to give it meaningful purpose. Not to mention the first point, i.e. that it would 100% make my industry pessimism worse. I'd like to not bounce away from tech forever - rather, to figure out what scratches the same itch I've been seeking since the start.

            I'm all about big road trips, big adventures but I think the couch potato risk is all too real for me.

          • sd9 5 hours ago ago

            I feel like the models and harnesses had a step change in capability around December, as somebody who’s been using them daily since early/mid 2025. It’s gone from me doing the majority of the programming, to me doing essentially none, since December. And that change felt quite sudden.

            The more recent shift after December is mostly explained by people at my company catching up with the events that happened in December. And that’s more about drastically increased productivity expectations, layoffs, etc.

            I’m also considering a self funded sabbatical. I could do it. What sort of thing have you been up to, any advice?

            • nidnogg 11 minutes ago ago

              I can relate to the feeling - this timing tracks with when most, if not all, of my friends and co-workers (even the few who were resisting any AI tooling) flocked to just "Claude Code". Similar to how the masses gobbled up VS Code a while back.

              Company started doling out Claude Code configs, everything is now CLI/agentic-AI harnessed, and news that "90% of this company's code is now AI generated" pops up every other day.

              It seems the last frontier to breach before this was getting agentic black boxes not to crap out during the first hour of work. After that, it's really been much smoother for those tools.

            • throwanem 5 hours ago ago

              Uh, don't come into it expecting to know exactly what you're going to be up to, might be the best advice I could give. Oh, do plan! But loosely: especially early on, as you get out from under the crushing burden of constant stress and misery, there will be surprises. I haven't been doing a lot of hobby programming, for example, not much more than a few faces for my Amazfit wristwatch - but my diary's grown by about a thousand pages, well above the usual rate, and I've begun a new series of crappy-camera snapshot albums, this latter especially being a real surprise despite that I have been a photographer for many years now. (My daily driver since 2021 has been a Nikon D850 with three SB-R200 flashes on a ring mount, mostly chasing wild wasps to get their portraits from six inches away. Shooting a total piece of shit for a change has been a hilarious revelation!)

              Imagination operates more freely and foolishness is less heavily ballasted, and any kind of emotional crap you've been keeping shoved to the side with the force of pressing obligations is likely to come out and start rearranging the metaphorical furniture. If you've got stuff like that, this will be a good opportunity to get to grips with it, whether you mean to or not. Prepare accordingly.

              And finally, there's not too many more appealing social presentations in my experience than that deriving from the confident knowledge that, within reason at least, one has earned and is now deploying the privilege to do more or less whatever the hell one likes: not the confidence contingent on a fat wallet, but that inherent in having only those scheduled obligations one chooses, and also in understanding precisely the difference underlying that distinction. Very few people in this world have the skill to behave as if their time were entirely their own to command, and this makes a difference in deportment that others will notice and attend without necessarily knowing why. It is more subtle and far less brash than the confidence in wielding the name of an employer that everyone knows, but for like reasons it also has worth and durability which the other does not. Whether or not you keep it, the experience of having had it is about as unforgettable and as indescribable as the trick to riding a bike.

              Thanks for the info! My last direct exposure to a frontier model was now almost twelve months ago, so I suppose I'll have to dedicate a few hours pretty soon.

    • loeg 4 hours ago ago

      Being able to deliver junior-level work isn't the goal of training juniors.

    • bakugo 5 hours ago ago

      Bob can't do things, Bob's AI can do things that Bob asks it to do. And the AI can only do things that have been done before, and only up to a certain level of complexity. Once that level is reached, the AI can't do things anymore, and Bob certainly isn't going to do anything about that, because Bob doesn't know how to do anything himself. One has to question what value Bob himself even brings to the table.

      But let's assume Bob continues to have an active role, because the people above him bought in to the hype and are convinced that "prompt engineer" is the job of the future. When things inevitably start falling apart because the Bobs of the world hit a wall and can't solve the problems that need to be solved (spoiler: this is already happening), what do we do? We need Alices to come in and fix it, but the market actively discourages the existence of Alice, so what happens when there are no more Alices left? Do we just give up and collectively forget how to do things beyond a basic level?

      I have a feeling that, yes, we as a species are just going to forget how to do things beyond a certain level. We are going to forget how to write an innovative science paper. We are going to forget how to create websites that aren't giant, buggy piles of React spaghetti that make your browser tab eat 2GB of RAM. We've always been forgetting, really - there are many things that humans in the past knew how to do, but nobody knows how to do today, because that's what happens when the incentive goes missing for too long. Price and convenience often win over quality, to the point that quality stops being an option. This is a form of evolutionary regression, though, and negatively affects our quality of life in many ways. AI is massively accelerating this regression, and if we don't find some way to stop it, I believe our current way of life will be entirely unrecognizable in a few decades.

      • thepasch 5 hours ago ago

        The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment. I personally think both are really important, and I also think AI won’t be able to do both better than any human could for another while, more so when it comes to doing both at the same time (though I’m not going to claim it never will).

        My point is that both Alice and Bob have a place in this world. In fact, Bob isn’t really doing much different from what a Principal Investigator already does today in a research context.

        • lelanthran 4 hours ago ago

          > The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment.

          Those aren't mutually exclusive.

          "People who do things" can do both, and doing the latter is a function of doing the former, so they tend to do the latter sufficiently well.

          "People who prompt things" can only do the latter, and they routinely do it poorly.

          • thepasch 4 hours ago ago

            > “People who prompt things” can only do the latter, and they routinely do it poorly.

            Right, but what I don’t agree with here is the idea that this category of people will never be able to improve into the first category of people. The value of an experienced anything is that they realize there is a big chasm between something that works now and something that will continue to work long into the future.

            I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.

            You can do that by spending two weeks to build a brick wall by hand, or you can do that by spending two weeks having your magical helpers build ten brick walls that eventually collapse. I don’t think the tools are some sort of fundamental threat to cognition, I think they’re - within this society - a fundamental threat to safety, because the relentless pursuit of profit means even those that realize those ten brick walls should never actually ever be used to hold anything up will find themselves pressured to put a roof on them and hope, pray, they hold.

            And this isn’t an LLM-specific thing. The vast diverse space of building codes around the world proves this, and coincidentally, the countries with laxer building codes tend to get a lot more done a lot faster; and they also tend to deal with a big tragic collapse every now and then, which I suppose someone will file away as collateral somewhere.

            • Jensson 3 hours ago ago

              > I don’t agree that doing everything yourself manually is the only thing that can grant you that understanding, because I don’t think that understanding is domain-specific. It evolves naturally as soon as someone realizes that their list of unknown unknowns is FAR larger than their list of known anythings, and that the first step in attempting to solve a problem is to prune that list as far as you can get it while realizing you will never ever be able to reduce it to zero.

              This isn't true: a car mechanic never evolves into an engineer, and a nurse never evolves into a doctor. A car mechanic can learn to do some tasks you normally need an engineer for, and the same goes for nurses, but they never build the entire core set of skills that separates engineers from mechanics and doctors from nurses.

              There are maybe some exceptions to this, but those exceptions are so rare that it doesn't matter for this discussion. A few people still learning it properly won't save anything.

              • thepasch 2 hours ago ago

                > This isn’t true, a car mechanic never evolves into an engineer, a nurse never evolve into a doctor.

                “Doesn’t generally happen” =/= “is literally impossible”. The word “never” should be used with care.

                > A car mechanic can learn to do some tasks you normally need an engineer for and same with nurses

                This statement can only make sense if you regard titles as something that’s imbued upon you, and until it is, you are incapable of performing the acts that someone who has earned that title can perform. I’ll just say I fundamentally disagree with this notion on pretty much every conceivable level, and if that’s the belief system you subscribe to, that would also make arguing about this any further pointless. But I might just be getting you wrong.

            • Peritract 2 hours ago ago

              > the idea that this category of people will never be able to improve into the first category of people

              The fundamental difference between the categories is that the first is filled with people who put the effort in to learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

              Changing from the second category to the first is something that would require already being in the first.

              • thepasch an hour ago ago

                > The fundamental difference between the categories is that the first is filled with people who put the effort in to learning/understanding, and the second is filled with people who take the shortcut around learning/understanding.

                Exactly! That’s my entire point. Because now you’re separating the categories by “is willing to put in effort” and “is not willing to put in effort” rather than by “has done the thing” and “hasn’t done the thing”.

                I think the disagreement doesn’t lie in this concept, but rather in whether an LLM can be used by someone who’s willing to put in effort, to assist them rather than just do it for them. But as long as you understand what the thing you’re using is for, you don’t have to understand exactly how it works. You can shift gears in a car without a physics degree.

    • pigeons 3 hours ago ago

      > So if Bob can do things with agents, he can do things.

      But he does things wrong.

    • coldtea 4 hours ago ago

      >The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

      He'll get things (papers, code, etc.) which he can't evaluate. And the next round of agents will be trained on the slop produced by the previous ones. Both successive Bobs and successive agents will have less understanding.

    • edbmiller69 3 hours ago ago

      No - you need to understand the details in order to do the “high level” work.

    • atoav 4 hours ago ago

      The thing is Bob can use HammerAsAService™ to put in a nail. It is so cheap! Way cheaper than buying an actual hammer.

      The problem with unlearning generic tools and relying on ones you rent from big corporations is that it is unreliable in the long term. The prices will rise. The conditions will worsen. Oh nice that Bob made a thing using HammerAsAService™, but the terms and conditions (changing once a week) he accepted last week clearly say it belongs to the company now. Bob should be happy they are not suing him yet, but Bob isn't sure whether the thing that company released a month later was independently developed or just a clone of his work. Bob wishes he knew how to use a hammer.

      • thepasch 2 hours ago ago

        The majority of nails people might want to rent a HammerAsAService for these days can already easily be put in by open source hammers you can run on consumer, uh… workbenches.

        • Peritract 2 hours ago ago

          Not to stretch the metaphor too far, but those workbenches require understanding (and hammers) to set up.

          Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?

          • thepasch an hour ago ago

            > Will the paid tools always tell their users how to use the free versions, and if not, how will the users learn to do it independently?

            The same way any open-source infrastructure finds widespread use, I’d say. If you’re willing to put in the elbow grease, you can probably set it up yourself (maybe even with the help of one of the frontier, uh, hammers, in its free tier). Or there might be services that act as middlemen to make it all more convenient and cheaper. But the difference is that if Service X pisses you off, then there will be Services Y, Z, A, and B who sell the same service using the same open-source infrastructure, so you always have a choice.

            If you don’t like GitHub, try Gitlab, Codeberg, Gitea, and so forth. Or Bitbucket or Azure DevOps. (Don’t actually, though.)

    • username223 3 hours ago ago

      > I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

      It’s not for me. Being a middle manager, with all of the liability and none of the agency, is not what I want to do for a living. Telling a robot to generate mediocre web apps and SVGs of penguins on bicycles is a lousy job.

    • troupo 6 hours ago ago

      > The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

      Can he? If he outsources all his thinking and understanding to agents, can he then fix things he doesn't know how to fix without agents?

      Any skill is practice first and foremost. If Bob has had no practice, what then?

      • sd9 5 hours ago ago

        My point is it doesn’t matter whether he can fix things without agents. The real world isn’t an exam hall where your boss tells you “no naughty AI!”; you just get stuff done, and if Bob can do that with agents, nobody cares how he did it.

        • kelnos 5 hours ago ago

          But can Bob actually do that with agents, without limit? Right now, he's going to hit a ceiling at some point, and the Alices of the world will run circles around him.

          The question is: will agents improve to the point that even the most capable Alices will never be needed to solve problems? Maybe? Maybe not? I'm worried that they won't improve to that degree.

          And even if they do, what is the purpose of humans in this world?

          • duskdozer 5 hours ago ago

            I think the real issue is that no, he can't, but corporate and government entities that decide won't care. Things will simply get worse. The problems will be left to fester as things that simply "can't be done".

        • troupo 5 hours ago ago

          > The real world isn’t an exam hall where your boss tells you “no naughty AI!”, you just get stuff done, and if Bob can do that with agents, nobody cares.

          Indeed. That's why Anthropic had to hire real engineers to make sure their vibe-coded shit doesn't consume 68GB of RAM. Because real world: https://x.com/jarredsumner/status/2026497606575398987

          • sd9 5 hours ago ago

            If your job has been totally unaffected by AI, then I am jealous.

            I’m not trying to argue that AI can do everything today. I acknowledge that there are many things that it is not good at.

            • kelnos 5 hours ago ago

              But do you believe that they'll continue to improve until they're good at everything, all the time, in ways a human can never match?

              If yes, then that's dangerously optimistic. If not, then we'll always need humans who have learned the "hard way" (the Alices, not the Bobs). But if LLMs make it impossible for Alices to come up in the field, we're screwed.

              • sd9 5 hours ago ago

                I think that a lot of software engineering work is a lot simpler than people like to think, and that the demand for Alices is far outweighed by the demand for Bobs. I think there will always be a place for Alices, but there will be a drastic reduction in the workforce. I think all of this unconditionally about future improvement in AI - in my view the models today are more than capable of bringing about this shift, it will just take time.

          • imtringued 3 hours ago ago

            Anthropic is still getting weekly memory leak reports, with memory leaking at a rate of 61GB/h, and all of them are getting closed automatically as duplicates.

            I personally haven't tried Claude Code because I can't install it on my PC. I'm starting to get the impression that they banned non-Claude products from using their subscription, because their products are of such poor quality that everyone is fleeing from them.

    • plato65 6 hours ago ago

      > So if Bob can do things with agents, he can do things.

      I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether the output is actually right.

      That’s the open question to me: how people develop the judgment needed to direct and evaluate that output.

      • mattmanser 5 hours ago ago

        There's a long, detailed, often repeated answer to your open question in the article.

        Namely, if you can't do it without the AI, you can't tell when it's given you plausible sounding bullshit.

        So Bob just wasted everyone's time and money.

        • carlosjobim 4 hours ago ago

          You can verify by running the code and seeing if it works.

    • lowsong 25 minutes ago ago

      > agents aren’t going away

      Why not? Once the true cost of token generation is passed on to the end user and costs go up by 10 or 100 times, and once the honeymoon delusion of "oh wow I can just prompt the AI to write code" fades, there's a big question as to if what's left is worth it. If it isn't, agents will most certainly go away and all of this will be consigned to the "failed hype" bin along with cryptocurrency and "metaverse".

    • croes 4 hours ago ago

      > The thing is, agents aren’t going away.

      Let’s wait until they have a business model that creates profit.

      Most of them won’t go away, but many will become outdated, or slow, or enshittified.

      Imagine building your career on the quality of Google’s search.

    • rustyhancock 5 hours ago ago

      The whole premise is bad. If the supervisor can do it in 2 months, then they can do it in 2 weeks with AI.

      Didn't PhD projects use to be about advancing the state of the art?

      Maybe we'll get back to that.

  • DavidPiper 5 hours ago ago

    I've just started a new role as a senior SWE after 5 months off. I've been using Claude a bit in my time off; it works really well. But now that I've started using it professionally, I keep running into a specific problem: I have nothing to hold onto in my own mind.

    How this plays out:

    I use Claude to write some moderately complex code and raise a PR. Someone asks me to change something. I look at the review and think, yeah, that makes sense, I missed that and Claude missed that. The code works, but it's not quite right. I'll make some changes.

    Except I can't.

    For me, it turns out having decisions made for you and fed to you is not the same as making the decisions and moving the code from your brain to your hands yourself. Certainly every decision made was fine: I reviewed Claude's output, got it to ask questions, answered them, and it got everything right. I reviewed its code before I raised the PR. Everything looked fine within the bounds of my knowledge, and this review was simply something I didn't know about.

    But I didn't make any of those decisions. And when I have to come back to the code to make updates - perhaps tomorrow - I have nothing to grab onto in my mind. Nothing is in my own mental cache. I know what decisions were made, but I merely checked them, I didn't decide them. I know where the code was written, but I merely verified it, I didn't write it.

    And so I suffer an immediate and extreme slow-down, basically re-doing all of Claude's work in my mind to reach a point where I can make manual changes correctly.

    But wait, I could just use Claude for this! But for now I don't, because I've seen this before. Just a few moments ago. Using Claude has just made it significantly slower when I need to use my own knowledge and skills.

    I'm still figuring out whether this problem is transient (because this is a brand new system that I don't have years of experience with), or whether it will actually be a hard blocker to me using Claude long-term. Assuming I want to be at my new workplace for many years and be successful, it will cost me a lot in time and knowledge to NOT build the castle in the sky myself.

    • xandrius 5 hours ago ago

      Then you're using it more towards vibe coding than AI-assisted coding. I use AI to write the stuff the way I want it to be written: I give it information about how to structure files, coding style, and the logic flow.

      Then I spend time reading each file change and giving feedback on things I'd do differently. It vastly saves me time, and the result is very close to - or even better than - what I would have written.

      If the result is something you can't explain, then slow down and follow the steps it takes as they are taken.

      • greenchair 4 hours ago ago

        AI assisted coding makes you dumber full stop. It's obvious as soon as you try it for the first time. Need a regex? No need to engage your brain. AI will do that for you. Is what it produced correct? Well who knows? I didn't actually think about it. As current gen seniors brains atrophy over the next few years the scarier thing is that juniors won't even be learning the fundamentals because it is too easy to let AI handle it.
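
        A made-up illustration of the regex failure mode - a pattern that looks right at a glance but isn't (Python; the pattern and the "24-hour time" use case are hypothetical):

          import re

          # supposedly matches valid 24-hour times like "23:59"
          time_re = re.compile(r"^[0-2][0-9]:[0-5][0-9]$")

          print(bool(time_re.match("23:59")))  # True, as expected
          print(bool(time_re.match("29:59")))  # True - but 29:59 is not a time

        If you never actually think about it, you never notice that [0-2][0-9] happily accepts 29.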

        • istrice an hour ago ago

          Strongly disagree. If the complexity of your work is the software development itself, then your work is not very complex to begin with.

          It has always been extremely annoying to argue with people who mistake the ability to build or engage with complicated systems (like your regex) for competency.

          I work on building AI for a very complex application. I used to be in the top 0.1% of Python programmers (by one metric) at my previous FAANG job, and Claude has completely removed any barriers I have between thinking and achieving. I have achieved internal SOTA for my company, alone, in 1 week, doing something that previously would have taken me months of work. Did I have to check that the AI did everything correctly? Sure. But I did that after saving months of implementation time, so it was very worth it.

          We're now in the age of being ideas-bound instead of implementation-bound.

        • imtringued 3 hours ago ago

          I agree. When I was starting out, I let the AI do all of the work and merely verified that it did what I wanted, but then I started running into token limits. In the first two weeks I was honestly just waiting for the limit to refresh. The low effort made it feel like writing code without the agent would be a waste of my time.

          By week three the overall structure of the codebase was done, but the actual implementation was lacking. Whenever I ran out of tokens I just started programming by hand again. As you keep doing this, the codebase becomes ever more familiar, until you reach the point where you tear down the AI scaffolding in the places where it is lacking and keep it where it makes no difference.

      • DavidPiper 4 hours ago ago

        I agree that being further along the Vibe end of the spectrum is the issue. Some of the other ways I use Claude don't have the same problems.

        > If the result is something you can't explain than slow down and follow the steps it takes as they are taken.

        The problem is I can explain it. But it's rote and not malleable. I didn't do the work to prove it to myself. Its primary form is on the page, not in my head, as it were.

        • the_sleaze_ 4 hours ago ago

          I'm on the same path as you are it seems. I used to be able to explain every single variable name in a PR. I took a lot of pride in the structure of the code and the tests I wrote had strategy and tactics.

          I still wrote bugs. I'd bet that my bugs/LoC has remained static if not decreased with AI usage.

          What I do see is more bugs, because the LoC denominator has increased.

          What I align myself towards is that becoming senior was never about knowing the entire standard library, it was about knowing when to use the standard library. I spent a decade building Taste by butting my head into walls. This new AI thing just requires more Taste. When to point Claude towards a bug report and tell it to auto-merge a PR and when to walk through code-gen function by function.

        • zozbot234 2 hours ago ago

          > I can explain it. But it's rote and not malleable.

          The AI can help with that too. Ask it "How would one think about this issue, to prove that what was done here is correct?" and it will come up with something to help you ground that understanding intuitively.

      • cmiles74 3 hours ago ago

        It's a spectrum and we don't have clear notches on the ruler letting us know when we're confidently steering the model and when we've wandered into vibe coding. For me, this position is easy to take when I am feeling well and am not feeling pressured to produce in a fixed (and likely short) time frame.

        It also doesn't help that Claude ends every recommendation with "Would you like me to go ahead and do that for you?" Eventually people get tired, and it's all too easy to just nod and say "yes".

        • nsvd2 3 hours ago ago

          That is indeed a very annoying part of many AI models. I wish I could turn it off.

    • loeg 4 hours ago ago

      For me it seems more or less similar to reviewing others' changes to a codebase. In any large organization codebase, most of the changes won't be our own.

    • Yokohiii 3 hours ago ago

      This is my primary personal concern. I think it could be a silent psychological landmine going off way too late (sic).

      In a living codebase you spend long stretches learning how it works. It's like reading a book that doesn't match your taste, but you eventually need to understand and edit it, so you push through. That process is extremely valuable: you get familiar with the codebase, you map it out in your head, you imagine big red alerts on the problematic stuff. Over time you become more and more efficient at editing and refactoring the code.

      The short-term state of AI is pretty much as you outlined. You get a high-level bug or task, you rephrase it into proper technical instructions, and let a coding agent fill in the code. Yell a few times. Fix problems by hand.

      But you are already "detached" from the codebase; you have to learn it the hard way each time your agent is too stupid. You are less efficient, at least in this phase, and your overall understanding of the codebase will degrade over time. Once serious data corruption hits the company, it will take weeks to figure out.

      I think this psychological detachment can potentially play out really bad for the whole industry. If we get stuck for too long in this weird phase, the whole tech talent pool might implode. (Is anyone working on plumbing LLMs?)

    • zozbot234 4 hours ago ago

      Ask Claude to explain the code in depth for you. It's a language model, it's great at taking in obscure code and writing up explanations of how it works in plain English.

      You can do this during the previous change phase of course. Just ask "How would one plan this change to the codebase? Could you explain in depth why?" If you're expected to be thoroughly familiar with that code, it makes no sense to skip that step.

      • saulpw 3 hours ago ago

        This is like asking Claude to explain some aspect of physics to you. It'll 'feel' like you understand, but in order to really understand you have to work those annoying problems.

        Same with anything. You can read about how to meditate, cook, sew, whatever. But if you only read about something, your mental model is hollow and purely conceptual, having never had to interact with actual reality. Your brain has to work through the problems.

        • zozbot234 2 hours ago ago

          > ...in order to really understand you have to work those annoying problems.

          GP says that they have to come back tomorrow and edit the code to fix something. That's a verification step: if you can do that (even with some effort) you understand why the AI did what it did. This is not some completely new domain where what you wrote would apply very clearly, it's just a codebase that GP is supposed to be familiar with already!

      • AstroBen 3 hours ago ago

        By working in this way you're proactively de-skilling yourself. Do it long enough and you're now replaceable by anyone that can type a prompt.

  • caxap 3 hours ago ago

    If this article was written a year ago, I would have agreed. But knowing what I know today, I highly doubt that the outcomes of LLM/non-LLM users will be anywhere close to similar.

    LLMs are exceptionally good at building prototypes. If the professor needs a month, Bob will be done with a basic prototype of that paper by lunch on the same day, and will try out dozens of hypotheses by the end of the day. He will not be chasing some error for two weeks; the LLM will very likely figure it out in a matter of minutes, or not make the error in the first place. Instructing it to validate intermediate results and to profile along the way can do magic.

    The article is correct that Bob will not have understood anything, but if he wants to, he can spend the rest of the year trying to understand what the LLM has built for him, having already verified in the first couple of weeks that the approach actually works. Even better, he can ask the LLM to train him to do the same, if he wishes: learn why things work the way they do, why something doesn't converge, etc.

    Assuming that Bob is willing to do all that, he will progress way faster than Alice. LLMs won't take anything away if you are still willing to take the time to understand what it's actually building and why things are done that way.

    Five years from now, Alice will be using LLMs just like Bob - or will be out of a job if she refuses to, because the place will be full of Bobs, with or without understanding.

    • techblueberry 3 hours ago ago

      The problem is that in most environments Bob won’t spend the rest of the year figuring out what the LLM did, because Bob will be busy prompting the LLM for the next deliverable. And if all Bob has time for is prompting LLMs, and not understanding, there will be a ceiling to Bob’s potential.

      This won’t affect everyone equally. Some Bobs will nerd out and spend their free time learning, but other Bobs won’t.

      • therealdrag0 38 minutes ago ago

        Why would Bob only have time to prompt LLMs? Strange strawman. Many uni courses have always had an element of "you get out what you put in"; it’s the same with LLMs.

    • Yokohiii 3 hours ago ago

      Bob will never figure out that there is an error in his paper. If someone tells him, the LLM will have trouble figuring it out as well - remember, the LLM inserted the error to make things "look right".

      Your perspective is too narrow. In the real world, Bob is supposed to produce outcomes that work. If he moves on into industry and keeps producing hallucinated, skewed, manipulated nonsense, he will fall flat instantly. If he manages to survive unnoticed, he will become CEO. The latter is rather unlikely.

    • piiritaja 3 hours ago ago

      "LLMs won't take anything away if you are still willing to take the time to understand what it's actually building"

      But do you actually understand it? The article argues exactly against this point - that you cannot understand the problems in the same way when letting agents do the initial work as you would when doing it without agents.

      from the article: "you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient."

  • stavros 6 hours ago ago

    I see this fallacy being committed a lot these days: "Because of LLMs, you will no longer need a skill which you used to need, and" - handwave - "that's bad".

    Academia doesn't want to produce astrophysics (or any field) scientists just so the people who became scientists can feel warm and fuzzy inside when looking at the stars, it wants to produce scientists who can produce useful results. Bob produced a useful result with the help of an agent, and learned how to do that, so Bob had, for all intents and purposes, the exact same output as Alice.

    Well, unless you're saying that astrophysics as a field literally does not matter at all, no matter what results it produces, in which case, why are we bothering with it at all?

    • djaro 6 hours ago ago

      The problem is that LLMs stop working after a certain point of complexity or specificity, which is very obvious once you try to use it in a field you have deep understanding of. At this point, your own skills should be able to carry you forward, but if you've been using an LLM to do things for you since the start, you won't have the necessary skills.

      Once they have to solve a novel problem that was not already solved, for all intents and purposes, Alice will be able to apply her skillset to it, whereas Bob will just run into a wall when the LLM starts producing garbage.

      It seems to me that "high-skill human" > "LLM" > "low-skill human". The trap is that people with low skill levels will see a fast improvement in their output, at the hidden cost of the slow build-up of skills that has a much higher ceiling.

      • stavros 6 hours ago ago

        Then test Bob on what you actually want him to produce, ie novel problems, instead of trivial things that won't tell you how good he is.

        Why is it a problem of the LLM if your test is unrelated to the performance you want?

        • skydhash 5 hours ago ago

          What people forget about programming is that it is a notation for formal logic, one that can be executed by a machine. That formal logic is for solving a problem in the real world.

          While we have a lot of abstractions that solve some subproblems, we still need to connect those solutions to solve the main problem. And there’s a point where this combination becomes its own technical challenge - and the skill it requires is the same one used for solving simpler problems with common algorithms.

        • troupo 6 hours ago ago

          How can Bob produce novel things when he lacks the skills to do even trivial things?

          I didn't get to be a senior engineer by immediately being able to solve novel problems. I can now solve novel problems because I spent untold hours solving trivial ones.

          • stavros 5 hours ago ago

            Because trivial things aren't a prerequisite for novel things, as any theoretical mathematician who can't do long division will tell you.

            • sgarland 5 hours ago ago

              I would love to see someone attempt to do multiplication who never learned addition, or exponentiation without having learned multiplication.

              There is a vast difference between “never learned the skill,” and “forgot the skill from lack of use.” I learned how to do long division in school, decades ago. I sat down and tried it last year, and found myself struggling, because I hadn’t needed to do it in such a long time.
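
              (The dependency is literal. A toy Python sketch, purely to illustrate the layering, not how anyone should actually compute:)

                def mul(a, b):
                    # multiplication as repeated addition
                    total = 0
                    for _ in range(b):
                        total += a
                    return total

                def power(a, b):
                    # exponentiation as repeated multiplication
                    result = 1
                    for _ in range(b):
                        result = mul(result, a)
                    return result

                assert mul(2, 3) == 6
                assert power(2, 10) == 1024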

              • thepasch 2 hours ago ago

                > There is a vast difference between “never learned the skill,” and “forgot the skill from lack of use.”

                This sentence contains the entire point, and the easiest way to get there, as with many, many things, is to ask “why?”

              • ipaddr 4 hours ago ago

                Most people learn multiplication by memorizing a series of cards: 2x2, 2x3, ..., 9x9. Only later, in higher grades, does this get broken down into addition.

                • Jensson 4 hours ago ago

                  Most people learn multiplication by counting; it has been in basic math books since forever. "1 box has 4 cookies. Jenny has 4 boxes of cookies. How many cookies does Jenny have?" and so on - the kids solve that by counting 4 cookies in each of the 4 boxes and reaching 16. Only later do you learn the tables.

                • sgarland 2 hours ago ago

                  That’s definitely not how I learned it, nor how my kids have learned it. I vividly remember writing out “2 x 3 = 2 + 2 + 2 = 3 + 3.” I later memorized the multiplication table up to 12, yes, but that was not a replacement for understanding what multiplication was.

            • Folcon 5 hours ago ago

              There's a difference between needing no trivial skills to do novel things and not needing specific prerequisite trivial skills to do a novel thing.

            • troupo 5 hours ago ago

              Ah yes. The famous theoretical mathematicians who immediately started on novel problems in theoretical mathematics without first learning and understanding a huge number of trivial things like how division works to begin with, what fractions are, what equations are and how they are solved etc.

              Edit: let's look at a paper like Some Linear Transformations on Symmetric Functions Arising From a Formula of Thiel and Williams https://ecajournal.haifa.ac.il/Volume2023/ECA2023_S2A24.pdf and try to guess how many of those trivial things were completely unneeded to write a paper like this.

              • stavros 5 hours ago ago

                Seems that teaching Bob trivial things would be a simple solution to this predicament.

                • sumeno 4 hours ago ago

                  That's what the program he just took was supposed to be for: learning, not output. You've just reinvented the article from first principles, congrats.

                  • HauntingPin 3 hours ago ago

                    Sometimes I wonder how deeply some people actually read these articles. What's the point of the comments if all we're doing is re-explaining what's already explained in such a precise and succinct manner? It's a fantastic article. It's so well-written and clear. And yet we're stuck going in a circle, repeating what's in the article to people who either didn't read it, or didn't read it with the care it deserves.

                  • thepasch 2 hours ago ago

                    > That’s what the program he just took was supposed to be for, learning not output.

                    If you send a kid to an elementary school, and they come back not having learned anything, do you blame the concept of elementary schools, or do you blame that particular school - perhaps a particular teacher _within_ that school?

      • brookst 5 hours ago ago

        This whole argument can be made for why every programmer needs to deeply understand assembly language and computer hardware.

        At a certain point, higher level languages stop working. Performance, low level control of clocks and interrupts, etc.

        I’m old enough that dropping into assembly to be clever with the 8259 interrupt controller really was required. Programmers today? The vast majority don’t really understand how any of that works.

        And honestly I still believe that hardware-up understanding is valuable. But is it necessary? Is it the most important thing for most programmers today?

        When I step back this just reads like the same old “kids these days have it so easy, I had to walk to school uphill through the snow” thing.

        • imtringued 3 hours ago ago

          Teaching how computer hardware works is pretty smart. There is no need to do it in depth though.

          Writing assembly is probably completely irrelevant. You should still know how programming language concepts map to basic operations, though. Simple things like struct field offsets, calling conventions, function calls, dynamic linking, etc.
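
          A toy illustration of the field-offset point, using Python's ctypes (the struct itself is arbitrary; the offsets assume a typical 64-bit ABI):

            import ctypes

            class Point(ctypes.Structure):
                # a 4-byte int followed by an 8-byte double
                _fields_ = [("x", ctypes.c_int32),
                            ("y", ctypes.c_double)]

            print(Point.x.offset)        # 0
            print(Point.y.offset)        # 8: four bytes of alignment padding
            print(ctypes.sizeof(Point))  # 16, not 12

          You don't need to write assembly to see why the struct isn't 12 bytes, but you do need to know that offsets and alignment exist.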

          • sgarland 2 hours ago ago

            > Writing assembly is probably completely irrelevant.

            ffmpeg disagrees.

            More broadly, though, it’s a logical step if you want to go from “here’s how PN junctions work” to “let’s run code on a microprocessor.” There was a game up here yesterday about building a GPU, in the same vein as nand2tetris, Turing Complete, etc. I find those quite fun, and if you wanted to do something like Ben Eater’s 8-bit computer, it would probably make sense to continue with assembly before going into C, and then a higher-level language.

    • nandomrumber 6 hours ago ago

      > why are we bothering with it at all?

      Because we largely want people who have committed to tens of thousands of dollars of debt to feel sufficiently warm and fuzzy to promote the experience, so that the business model doesn’t collapse.

      It’s difficult to think anyone would end up truly regretting doing a course in astrophysics, or any of the liberal arts and sciences if they have a modicum of passion, but it’s very believable that a majority of them won’t go on to have a career in it, whatever it is, directly.

      They’re probably more likely to gain employment based on their data science skills, or whatever core competencies they honed, or just the fact that they’ve proven they can learn highly abstract concepts - whatever their field generalises to.

      Most of the jobs are not in the highly specific academic outcome.

      • imtringued 3 hours ago ago

        Even if you land a job in your field, you will find that academia is backwards compared to industry in some aspects and decades ahead of industry adoption in others - to the point where, either way, you won't make much use of the skills you learned in university.

    • pards 6 hours ago ago

      > Take away the agent, and Bob is still a first-year student who hasn't started yet. The year happened around him but not inside him. He shipped a product, but he didn't learn a trade.

      We're minting an entire generation of people completely dependent on VC funding. What happens if/when the AI companies fail to find a path to profitability and the VC funding dries up?

      • Paradigma11 5 hours ago ago

        What will happen is pretty obvious. Those companies will either be classified as too important to fail and get government support or go bankrupt and will be bought for pennies on the dollar. For the customers nothing much will change since tokens are getting cheaper every year and the business is already pretty profitable. Progress will slow down massively till local open weight models catch up to pre-crash SotA and go on from there.

        • pards 4 hours ago ago

          > the business is already pretty profitable

          As of March 2026, OpenAI generates annual revenue exceeding $12 billion. However, the costs of running ChatGPT are around $17 billion a year.

          Source: https://searchlab.nl/en/statistics/chatgpt-statistics-2026

          • ipaddr 4 hours ago ago

            Big improvement. I remember when they were spending billions and getting no profit.

      • stavros 6 hours ago ago

        Do you think that'll take a generation to happen?

        • rafterydj 6 hours ago ago

          ChatGPT 3.5 came out coming up on 4 years ago now. I don't think a human generation (~20-30 years) needs to be the benchmark here; juniors who have been in the industry for a handful of years can be said to be a whole "generation". That's how I was reading OP.

    • hirako2000 6 hours ago ago

      I was reading in the article that what matters is the process that leads to the (typically useless) result - what the people get out of it.

      Once I realized that the white-on-black contrast was hurting my eyes, I decided to stop, as I didn't want to see stripes for too long when looking away.

      Some activities have outcomes that aren't strictly in the results.

      • stavros 6 hours ago ago

        Yeah, it was saying that what matters is the process of training people to be good scientists, so they can produce other, more useful, results. That's literally what training is, everywhere.

        This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool, but with LLMs we seem to have forgotten that line of reasoning entirely.

        • rglullis 6 hours ago ago

          > so they can produce other, more useful, results

          But to even *know* what is more useful, it is crucial to have walked the walk. Otherwise we will all end up with a bunch of people trying to reinvent the wheel, over and over again, like JavaScript "developers" who keep reinventing frameworks every six months.

          > which nobody would buy for any other tool

          I don't know about you, but I wasn't allowed to use calculators in my calculus classes precisely to learn the concepts properly. "Calculators are for those who know how to do it by hand" was something I heard a lot from my professors.

          • thepasch an hour ago ago

            > But to even know what is more useful, it is crucial to have walked the walk.

            I feel like people tend to forget that “using a search engine” is among the many things LLMs can do these days. In fact, they use search engines better than the majority of people do!

            The conversation people think they’re having here and the conversation that actually needs to be had are two entirely different conversations.

            > I don’t know about you, but I wasn’t allowed to use calculators in my calculus classes precisely to learn the concepts properly. “Calculators are for those who know how to do it by hand” was something I heard a lot from my professors.

            Suppose I never learned how to derive a function. I don’t even know what a function is. I have no idea how to make one, write one, or what it even does. So I start gathering knowledge:

            - A function is some math that allows you to draw a picture of how a number develops if you do that math on it.

            - A derivative is a function that you feed a function and a number into, and then it tells you something about what that function is doing to that number at that number.

            - “What it’s doing” specifically means not the result of the math for that particular number, but the results for the immediate other numbers behind and in front of it.

            - This can tell us about how the function works.

            Now I go tell ClaudeGPTimini “hey, can you derive f(x) at 5 so that we can figure out where it came from and where it goes from there?”, and it gives me a result.

            I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
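
            (For what it's worth, the "mathematically do it" part I'm claiming ignorance of fits in a few lines - a rough numerical sketch in Python, with f(x) = x**2 chosen purely for illustration:)

              def f(x):
                  return x ** 2

              def derivative(f, x, h=1e-6):
                  # slope from the "immediate other numbers behind and in
                  # front" of x, i.e. a central difference
                  return (f(x + h) - f(x - h)) / (2 * h)

              print(derivative(f, 5))  # ~10.0, since f'(x) = 2x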

            What I’ll give you is this: if I knew exactly how the math worked, then it would be far easier for me to instantly spot any errors ClaudeGPTimini produced. And the understanding of functions and derivatives outlined above may be simplistic in some places (intentionally so), in ways that may break it in certain edge cases. But that only matters if I take its output at face value. If I get a general understanding of something and run a test with it, I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct. If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification. Science is what happens when you expect something, test something, and get a result - expected OR unexpected - and then systematically rule out that anything other than the thing you’re testing has had an effect on that result.

            This is not a problem with LLMs. It’s a thing we should’ve started teaching in schools decades ago: how to understand that there are things you don’t understand. In my view, the vast majority of problems plaguing us as a species lies in this fundamental thing that far too many people are just never taught the concept of.

        • defrost 6 hours ago ago

          > This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool,

          This is false. There absolutely are people that fall back on older tools when fancy tools fail. You will find such people in the military, in emergency services, in agriculture, generally in areas where getting the job done matters.

          Perhaps you're unfamiliar.

          The other week I finished putting holes in fence posts with a bit and brace, as there was no fuel for the generator to run corded electric drills and the rechargeable batteries were dead.

          Ukrainians, and others, need to fall back on no GPS available strategies and have done so for a few years now.

          etc.

          • thijson 5 hours ago ago

            In the '80s the Americans thought the Russians were backwards for still using vacuum tubes in their military vehicles. Later they found out the tubes were used because they are more tolerant of EMP from a nuclear blast.

          • Kon5ole 5 hours ago ago

            > This is false. There absolutely are people that fall back on older tools when fancy tools fail.

            > The other week I finished putting holes in fence posts with a bit and brace as there was no fuel for the generator to run corded electric drills and the rechargeable batteries were dead.

            It depends on the task though. If you are in a similar scenario as with your fence posts and want to edit computer programs, you can't. (Not even with xkcd's magnetic needle and a steady hand). ;-)

            As technology marches on it seems inevitable that we will get increasingly large and frequent knowledge gaps. Otherwise progress would stop - we need the giant shoulders to stand on.

            How many people in the world can recreate an ASML lithography machine vs how many people are surviving by doing something that requires that machine to exist?

        • hirako2000 6 hours ago ago

          There is an argument to make that tools that speed up a process whilst keeping acuity intact are legitimate.

          LLMs, the way they typically get used, are solely about saving time by handing over nearly the entire process. In that sense acuity can't remain intact, much less improve over time.

          • stavros 6 hours ago ago

            So?

            • hirako2000 5 hours ago ago

              Your previous comment reads as if LLMs get some unjustified different treatment.

              Do you agree the different treatment is justified? (Many do not.) Or are you asking: so what if acuity is diminished, so long as an LLM does the job equally well?

        • nathan_compton 6 hours ago ago

          People say this in a very large number of other contexts. Mathematica has been able to do many integrals for decades and yet we still make students learn all the tricks to integrate by hand. This pattern is very common.
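
          (The same holds outside Mathematica - sympy, for instance, will happily do the canonical integration-by-parts exercise that students are still made to work through by hand:)

            import sympy

            x = sympy.symbols("x")
            # the classic "integrate x*e^x by parts" exercise
            print(sympy.integrate(x * sympy.exp(x), x))  # prints (x - 1)*exp(x)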

          • FrojoS 4 hours ago ago

            Yes. But to be fair to your specific point, symbolic solving of integrals used to be a huge skill in engineering education. Nowadays it is not a focus anymore, because numerical solutions are either sufficiently accurate or, more importantly, the only feasible approach anyway.

            • nathan_compton 3 hours ago ago

              There is much more to life than engineering.

              • FrojoS 3 hours ago ago

                Sorry, I should have quoted properly in my reply. My first sentence ("Yes.") was in general agreement with you, the second sentence was specifically about

                > Mathematica has been able to do many integrals for decades and yet we still make students learn all the tricks to integrate by hand

                But maybe integrating by hand is still as big as ever in other parts of academia. Or were you thinking about high school? I'm fairly sure that symbolic solving of integrals is treated as less important in education these days than it was before digital computers, but I could be wrong. Mathematica's symbolic solver sure is very useful, but numeric solutions are what really make the art of finding integrals much less relevant.

                • nathan_compton 2 hours ago ago

                  I studied physics and mathematics and finding analytic solutions to problems is still useful and enlightening.

    • asHg19237 5 hours ago ago

      The arguments of the LLM-psychosis afflicted get more and more desperate. Astrophysics is about understanding and thinking; this comment paints it as result-oriented (whatever that means).

      The industrialization of academia hasn't even produced more results, it has produced more meaningless papers. Just like LLMs produce the 10,000th note-taking app, which for the LLM-psychosis afflicted is apparently enough.

      • nothinkjustai an hour ago ago

        this user is also a massive AI booster on this platform

    • dwa3592 3 hours ago ago

      Hard sciences play a crucial and often unseen role in our society: they help train humans to develop critical thinking. Not everyone with a PhD in Astrophysics ends up doing astrophysics in life; it's a discipline, a training regime for our minds. After that PhD, the result is a human being who can tackle hard problems. We have many other such disciplines (basically any PhD in the hard sciences) that produce this outcome.

    • mzhaase 6 hours ago ago

      Why should we only do things that produce some sort of value? Do we really want to reduce all of human existence to increasing profits?

      • stavros 6 hours ago ago

        You said "value" and "profit". I said "useful".

      • nemo44x 6 hours ago ago

        What’s a better method for determining how to utilize and distribute resources? To determine where energy should be used and where it should be moved from?

        • sgarland 5 hours ago ago

          Some things are just enjoyable. I get no real utility from photography - it’s not my career, it’s not a side gig, and I’m not giving prints out as gifts. Most of the shots never get printed at all. I do it because I enjoy the act itself, of knowing how to make an image frozen in time look a particular way by tweaking parameters on the camera, and then seeing the result. I furthermore enjoy the fact that I could achieve the same result on a dumb film camera, because I spent time learning the fundamentals.

    • cmiles74 3 hours ago ago

      Until the LLM is wrong and Bob passes the erroneous result off as accurate, reliable and vetted by a knowledgeable person. At that point Bob is not producing a useful result. Then it becomes a trap other people might get caught in, wasting valuable time and energy.

    • dsqrt 5 hours ago ago

      The goal of academic research is to create understanding, not papers. If we outsource all research to LLMs, then we are only producing the latter.

    • sega_sai 5 hours ago ago

      You missed the argument. When we are talking about faculty, yes, the result is the only thing that matters, so if it was produced quicker with an LLM, that's great. But when we are talking about the student, there is a drastic difference between the with-LLM and without-LLM cases. In the latter, the student ends up with a much better understanding. And that matters in a system that is educating future physicists.

    • nathan_compton 6 hours ago ago

      Is that what "academia" wants? Last I checked "academia" is not a dude I can call and ask for an opinion or definition of what it was interested in.

      I will make an explicit, plausible, counterpoint: academia wants to produce understanding. This is, more or less, by definition, not possible with an AI directly (obviously AIs can be useful in the process).

      Take GR as an example. The vast majority of the dynamical character of the theory is inaccessible to human beings. We study it because we wanted to understand it, and only secondarily because we had a concrete "result" we were trying to "achieve."

      A person who cares only about results and not about understanding is barely a person, in my opinion.

    • selimthegrim 5 hours ago ago

      This completely misses the point of the blog post, which is that the point is producing the scientist, not the result.

    • gedy 5 hours ago ago

      We aren't talking pocket calculators here (I see the irony of the phone app in my pocket); LLMs are hugely expensive things, made and controlled behind costly commercial subscriptions, and likely in the middle of a huge investment bubble where stability is uncertain. So we all need to be careful about "gee, we don't need that skill or person anymore", etc.

      • danielbln 4 hours ago ago

        Open weight models that run under your desk are not frontier model level, but they are getting closer. Improvements in agentic post training and things like TurboQuant mean that even if all frontier labs pull the plug tomorrow, we will still have agents to work with.

        • zozbot234 3 hours ago ago

          TurboQuant is not a step change, it's more of a smaller incremental improvement to KV quantization, and possibly (unsure) to quantization more generally. I'm actually more positive about SSD weights offload, which opens up very large local models for slow inference (good enough for slow chat) to virtually any hardware or amount of RAM.

        • gedy 3 hours ago ago

          I'm definitely looking forward to that, as I really want people to control their own tools.

  • katzgrau an hour ago ago

    When you’re deep in a thoughtful read and suddenly get the eerie feeling that you’re being catfished

    > But the real threat isn't either of those things. It's quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can't sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.

  • steveBK123 4 hours ago ago

    For the people arguing that the output is the code and the faster we generate it the better...

    I do wonder where all the novel products are - the ones produced by 10x devs who are now 100x with LLMs, by the “idea guys” who can now produce products from whole cloth without having to hire pesky engineers. Where are the one-man 10-billion-dollar startups, etc.? We are 3-4 years into this mania and all I see on the other end of it is the LLMs themselves.

    Why hasn’t anything gotten better?

    • argee 27 minutes ago ago

      Going 100x faster is the problem. Having more people on board somewhere along the way can really help course-correct, but when your first colleague arrives after you’ve already gone a million miles off course, there’s a huge cost to recovery.

      As they say, “slow is smooth, and smooth is fast,” when going for product market fit this is very important (and understated, in my opinion). It doesn’t help that your thread is spinning out at a hundred yards a second when what you’re doing is trying to thread the needle.

    • ipaddr 4 hours ago ago

      Marketing is the moat LLMs haven't been able to overcome. Creating a Word clone is easier now, but selling it is as hard as ever, or harder.

      Show me an LLM that can sell my product and find market fit.

      In reality, LLMs are taking away profitable tools and keeping the revenue for themselves.

      • steveBK123 3 hours ago ago

        Right. Very Rory Sutherland kind of thought - marketing doesn’t make sense. It is alchemy.

        If I told you the drink tastes bad, is an off-putting color, comes in a small bottle, and is expensive, you wouldn’t believe it would work. But Red Bull made billions.

      • gedy 3 hours ago ago

        There are definitely folks working on automating marketing via LLMs, but I suspect it will just numb people further to marketing, as we are close to saturation.

        • steveBK123 3 hours ago ago

          My thought/worry on a lot of the LLM agentic workflow personal assistant stuff is it is just ripe for fraud. The money is more on the adversarial side.

          People think they'll just have a personal bot out there buying airline tickets, hotel rooms, jeans, new phone, etc. Meanwhile as soon as you have agents like this out in the wild, the capital will flow to bad actors creating bots to game those bots.

          The world is PvP unfortunately. There is more money to be made skimming agents trying to buy stuff than there is in getting people to pay for a personal assistant agent subscription.

          It's like why a lot of ad-based stuff doesn't offer a premium option for people to pay to opt out (YouTube being an exception). The people who can afford to pay to avoid search/social media/etc. advertising are exactly the people you can make a lot of money advertising to.

          • nothinkjustai an hour ago ago

            Also, wtf are people doing with their lives where a significant amount of time is spent on stuff like this? It only takes a few minutes to book a flight or hotel/airbnb. Shopping for things can be fun, and if it isn’t, again, a few minutes. The amount of time that a “personal assistant” would save me is minuscule and probably actively harmful.

            Are people just so addicted to doomscrolling or whatever that they just can’t spend a few minutes of their day doing some type of human activity?

          • gedy 2 hours ago ago

            Yes, it feels a lot like the early internet days, when people only saw the upside/utopian view, premised on a high-trust environment.

    • maplethorpe 4 hours ago ago

      I'm waiting for Anthropic to realise they can just set a few thousand agents loose to do just that, and monopolize the entire software market overnight. I'm not sure why they haven't done this yet.

      • slfnflctd 3 hours ago ago

        You jest, but it's a good question.

        When people talk about the 'plateau of ability' agents are widely expected to reach at some point, I suspect a lot of it will boil down to skyrocketing costs and plummeting accuracy past a certain number of agents involved. This seems to me like a much harder limit than context windows or model sizes.

        Things like Gas Town are exploring this in what you might call a reckless way; I'm sure there are plenty of more careful experiments being conducted.

        What I think the ultimate measure of this new tech will be: how simple a question can a human put to an LLM group, for how complex a result, and how much will they have to pay for it? It seems obvious to me that there is a significant plateau somewhere; it's just a question of exactly where. Things will probably be in flux for a few years before we have anything close to a good answer, and it will probably vary widely between different use cases.

      • steveBK123 4 hours ago ago

        Because a lot of valuable software is the implicit / organizational / human domain knowledge... not the trillions of lines of code LLMs all scraped and trained on.

        • csomar 3 hours ago ago

          There is a lot of software that is just code, though; especially at the foundational level.

          • steveBK123 3 hours ago ago

            I guess the thing is - we've always had open source, frameworks, libraries, whatever for all that though, haven't we?

            So we can glue that together a bit faster, great.

            What if we also stop producing new open source, frameworks, libraries, etc.?

            What about stories like Tailwind?

    • mikeaskew4 4 hours ago ago

      It could be that the 10x devs working at 100x are just starting down the homestretch…

      The 10x dev doesn’t just set out to build a hello world app, ya know.

      • steveBK123 3 hours ago ago

        I think it's telling that the two main places where I've seen LLMs make the biggest inroads in FinTech have been:

        1) Stuff that was astonishingly not automated yet. I am talking about somebody opening up Excel on one screen and a website/PDF/whatever on the other, and typing stuff into the spreadsheet. So stuff where there wasn't any code involved previously, possibly due to the diminishing returns given how ad hoc it was to automate, skills mismatch, organizational politics, or other reasons.

        2) Lots of former big data / crypto / SaaS guys who were in product/sales roles suddenly starting AI startups to help your company AI better. The product is facilitating the doing of AI.

  • AlexWilkins12 5 hours ago ago

    Ironically, this article reeks of AI-generated phrases. Lots of "It's not X, it's Y". E.g.:

    - "The failure mode isn't malice. It's convenience"

    - "You haven't saved time. You've forfeited the experience that the time was supposed to give you."

    - "But the real threat isn't either of those things. It's quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding."

    And indeed running it through a few AI text detectors, like Pangram (not perfect, by any means, but a useful approximation), returns high probabilities.

    It would have felt more honest if the author had included a disclaimer that it was at least partly written with AI, especially given its length and subject matter.

    • zozbot234 4 hours ago ago

      Yes, the overwrought "It's not X, it's Y" constructions are signals of LLM involvement. No human uses them like that all the frickin' time. AI loves this construct way too much and cannot really tell whether the contrast is relevant or actually makes sense.

  • cbushko an hour ago ago

    This article makes the assumption that Bob was doing absolutely nothing, maybe at the Pub with his friends, while the AI did all his work.

    How do we know that, while the AI was writing python scripts, Bob wasn't reading more papers, getting more data, and just overall doing more than Alice?

    Maybe Bob is terrible at debugging python scripts while Alice is a pro at it?

    Maybe Bob used his time to develop different skills that Alice couldn't dream of?

    Maybe Bob will discover new techniques or ideas because he didn't follow the traditional research path that the established Researchers insist you follow?

    Maybe Bob used the AI to learn even more because he had a customized tutor at his disposal?

    Or maybe Bob just spent more time at the Pub with his friends.

  • oncallthrow 6 hours ago ago

    I think this article is largely, or at least directionally, correct.

    I'd draw a comparison to high-level languages and language frameworks. Yes, 99% of the time, if I'm building a web frontend, I can live in React world and not think about anything that is going on under the hood. But, there is 1% of the time where something goes wrong, and I need to understand what is happening underneath the abstraction.

    Similarly, I now produce 99% of my code using an agent. However, I still feel the need to thoroughly understand the code, in order to be able to catch the 1% of cases where it introduces a bug or does something suboptimally.

    It's possible that in future, LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on. When doing straightforward coding tasks, I think they're already there, but I think they aren't quite at that point when it comes to large distributed systems.

    • spicyusername 5 hours ago ago

      So we already have this problem and things are "fine"?

    • mbbutler 5 hours ago ago

      In my personal experience, the rate at which Claude Code produces suboptimal Rust is way higher than 1%.

      • Lerc 4 hours ago ago

        That is dependent upon the quality of the AI. The argument is not about the quality of the components but the method used.

        It's trivial to say using an inadequate tool will have an inadequate result.

        It's only an interesting claim to make if you are saying that there is no attainable quality of tool that can produce an adequate result. (In this argument, the adequate result in question is a developer with an understanding of what they produce.)

    • kgwxd 5 hours ago ago

      > LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on.

      The problem is, they're nothing like transistors, and never will be. Those are simple. Work or don't, consistently, in an obvious, or easily testable, way.

      LLM are more akin to biological things. Complex. Not well understood. Unpredictable behavior. To be safely useful, they need something like a lion tamer, except every individual LLM is its own unique species.

      I like working on computers because it minimizes the amount of biological-like things I have to work with.

      • oncallthrow 5 hours ago ago

        I suppose transistors is a bad example.

        Perhaps a better analogy would be the Linux kernel. It's built by biological humans, and fallible ones at that. And yet, I don't feel the need to learn the intricacies of kernel internals, because it's reliable enough that it's essentially never the kernel's fault when my code doesn't work.

  • mkovach 4 hours ago ago

    This isn't new. It's been the same problem for decades: not what gets built, but what gets accepted.

    Weak ownership, unclear direction, and "sure, I guess" reviews were survivable when output was slow. When changes came in one at a time, you could get away with not really deciding.

    AI doesn't introduce a new failure mode. It puts pressure on the old one. The trickle becomes a firehose, and suddenly every gap is visible. Nobody quite owns the decision. Standards exist somewhere between tribal memory, wishful thinking, and coffee. And the question of whether something actually belongs gets deferred just long enough to merge it - which answers the question without anyone's input.

    The teams doing well with agentic workflows aren't typically using magic models. They've just done the uncomfortable work of deciding what they're building, how decisions are made, and who has the authority to say no.

    AI is fine, it just removed another excuse for not having our act together. While we certainly can side-eye AI because of it, we own the problems. Well, not me. The other guy who quit before I started.

    • jappgar 4 hours ago ago

      This is exactly the problem I see today.

      And it's not just a volume problem.

      Mediocre devs previously couldn't complete a project by themselves and were forced to solicit help and receive feedback along the way.

      When all managers care about is "shipping", development becomes a race to the bottom. Devs who used to collaborate are now competing. Whoever gets the slop into the codebase fastest, wins.

      • mkovach 3 hours ago ago

        This is also very true, and while I consider it part of the authority to say no, this is a significant point.

  • toniantunovi an hour ago ago

    The coding-specific version of this is worth naming precisely. The drift does not happen because you stop writing code. It happens because you stop reading the output carefully. With AI-generated code, there is a particular failure mode: the code is plausible enough to pass a quick review and tests pass, so you ship it. The understanding degradation is cumulative and invisible until it is not. The partial fix is making automated checks independent of the developer's attention level: type checking, SAST, dependency analysis, and coverage gates that run regardless of how carefully you reviewed the diff. These are not a substitute for understanding, but they create a floor below which "comfortable drift" cannot silently carry you. The question worth asking of any AI coding workflow is whether that floor exists and where it is.
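
    A minimal sketch of what such a floor could look like for a Python project; the specific tools (mypy, pytest-cov, bandit, pip-audit) and the 80% threshold are illustrative choices, not a prescription:

      import subprocess
      import sys

      # checks that run regardless of how carefully the diff was reviewed
      CHECKS = [
          ["mypy", "src/"],                                # type checking
          ["pytest", "--cov=src", "--cov-fail-under=80"],  # tests + coverage gate
          ["bandit", "-r", "src/"],                        # basic SAST
          ["pip-audit"],                                   # dependency analysis
      ]

      def main() -> int:
          for cmd in CHECKS:
              print("running:", " ".join(cmd))
              if subprocess.run(cmd).returncode != 0:
                  print("floor violated by:", cmd[0])
                  return 1
          return 0

      if __name__ == "__main__":
          sys.exit(main())

    Wire something like this into CI so merges are blocked on a non-zero exit, and the floor exists independent of anyone's attention level.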

  • theteapot 5 hours ago ago

    I have a vaguely unrelated question re:

    > You do what your supervisor did for you, years ago: you give each of them a well-defined project. Something you know is solvable, because other people have solved adjacent versions of it. Something that would take you, personally, about a month or two. You expect it to take each student about a year ...

    Is that how PhD projects are supposed to work? The supervisor is a subject matter expert and comes up with a well-defined achievable project for the student?

    • loveparade 5 hours ago ago

      I think it really depends. There is no fixed rule for how PhD programs are supposed to work. Sometimes your advisor will suggest projects he finds interesting and wants to see done but doesn't have time to do himself. That's pretty common. Sometimes advisors don't have such projects, and/or want students to come up with their own project proposals, etc.

    • derbOac 4 hours ago ago

      It depends on the program, and even more so, the student and the mentor. It can also vary over time, with more direction early on in a graduate program, and less direction later. Some mentors are very directive, and basically treat students as labor executing tasks they don't have time or want to do. Other times, the student is coming up with all the ideas and the mentor is facilitating it with resources or even nothing but uncertain advice or permissions now and then.

      This can lead to a lot of problems, as in some fields, for some academics, the default assumption is the former when it's really the latter. This leads to a kind of overattribution of contribution to senior faculty, or conversely, an underappreciation of less senior individuals. The tendency for senior faculty to be listed last on papers, and therefore for the first and last authors to accumulate the credit, is a good example of how twisted this logic has become.

      It's one tiny example of enormous problems with credit in academics (but also maybe far afield from your question).

    • LeonardoTolstoy 5 hours ago ago

      It is a spectrum. My advisor was very hands off. He didn't, ultimately, even really understand my PhD. He knew the problem, but he had no path in mind to solve it, that was up to me. I'm now working (as a software engineer) with a person who is very hands on with his students (and even postdocs) to the point of giving them specific tasks to do and then discussing the result every week. He defines the problems and structure of the solution, the students at least partially are an extension of himself, they are doing stuff he merely doesn't have time to do himself.

      And there is everything in between.

    • InkCanon 5 hours ago ago

      Often at the start, yes. So the student gets a bit of recognition, a bit of experience, and a bit of knowledge.

    • _gmax1 5 hours ago ago

      From the cases I've observed directly in the area I work in, yes.

  • CharlieDigital 4 hours ago ago

    I recently saw a preserved letterpress printing press in person and couldn't help but think of the parallels to the current shift in software engineering. The letterpress allowed for the mass production of printed copies, exchanging the intensive human labor of manual copying to letter setting on the printing press.

    Yet the press only made the production of the text more efficient; the act of writing, constructing a compelling narrative plot, and telling a story was not changed by this revolution.

    Bad writers are still bad writers; good writers still have a superior understanding of how to construct a plot. The technological ability to produce text faster never really changed what we consider "good" and "bad" in terms of written literature; it just allowed more people to produce it.

    It is hard to tell if large language models can ever reach a state where it will have "good taste" (I suspect not). It will always reflect the taste and skill of the operator to some extent. Just because it allows you to produce more code faster does not mean it allows you to create a better product or better code. You still need to have good taste to create the structure of the product or codebase; you still have to understand the limitations of one architectural decision over another when the output is operationalized and run in production.

    The AI industry is a lot of hype right now because they need you to believe that this is no longer relevant. That Garry Tan producing 37,000 LoC/day somehow equates to producing value. That a swarm of agents can produce a useful browser or kernel compiler.

    Yet if you just peek behind the curtain at the Claude Code repo and see the pile of unresolved issues, regressions, missing features, half-baked features, and so on -- the limitations seem plainly obvious: Anthropic, with functionally unlimited tokens and frontier models, cannot use them to triage and fix their own product.

    AI and coding agents are like the printing press in some ways. Yes, it takes some costs out of a labor intensive production process, but that doesn't mean that what is produced is of any value if the creator on the other end doesn't understand the structure of the plot and the underlying mechanics (be it of storytelling or system architecture).

  • lxgr 4 hours ago ago

    > for someone who doesn't yet have that intuition, the grunt work is the work

    Very well said. I think people are about to realize how incredibly fortunate and exceptional it is to actually get paid (and in our industry, paid very well) through a significant fraction of one's career while still "just" doing grunt work that arguably benefits the person doing it at least as much as the employer.

    A stable paid demand for "first-year grad student level work" or the equivalent for a given industry is probably not the only possible way to maintain a steady supply of experts (there's always the option of immense amounts of student debt or public funding, after all), but it sure seems like a load-bearing one in so many industries and professions.

    At the very least, such work being directly paid has the immense advantage of making artificially created bullshit tasks (often created without any bad intentions!), which don't exercise the actually relevant skillsets or exercise the wrong ones, much easier to spot.

  • matheusmoreira 2 hours ago ago

    I dunno. Claude helped me implement a new memory allocator, compacting garbage collector and object heap for my programming language. I certainly understood what I was doing when I did this. The experience was extremely engaging for me. Claude taught me a lot.

    I think the real danger is no longer caring about what you're doing. Yesterday I just pointed Claude at my static site generator and told it to clean it up. I wanted to care but... I didn't.

  • throwaway132448 5 hours ago ago

    The flip side I don’t see mentioned very often is that having a product where you know how the code works becomes its own competitive advantage. Better reliability, faster fixes and iteration, deeper and broader capabilities that allow you to be disruptive while everything else is being built towards the mean, etc etc. Maybe we’ve not been in this new age for long enough for that to be reflected in people’s purchasing criteria, but I’m quite looking forward to fending off AI-built competitors with this edge.

  • FrojoS 4 hours ago ago

    Every PhD program I'm aware of has a final hurdle known as the defence. You have to present your thesis while standing in front of a committee, and often the local community and public. They will ask questions, and too many "I don't know"s or false answers would make you fail. So there is already a system in place that should stop Bob from graduating if he indeed learned much less than Alice. A similar argument can be made for conference publications. If Bob publishes his first-year project at a conference but doesn't actually understand "his own work", it will show.

    The difficulty of passing the defence varies wildly between universities, departments and committees. Some are very serious affairs with a decent chance of failure, while others are more of a show event for friends and family. Mine was more of the latter, but I doubt I would have passed that day if I had spent the previous years prompting instead of doing the grunt work.

    • ipaddr 3 hours ago ago

      In the future, LLMs can answer those questions for you by listening and feeding you answers through your headset.

      The process you describe is a gate keeping exercise which will change to include LLM judges at some point.

      • FrojoS 3 hours ago ago

        That would be cheating. If the exam is 'gate keeping', I will say that it is a gate worth keeping.

        To be clear, I am not against alternative forms of education. Degrees are optional. But if you want a degree, there have to be exams and cheating has to be prevented.

  • patcon 5 hours ago ago

    The exciting and interesting thing to me is that we'll probably need to engage "chaos engineering" principles and encode intentional fallibility into these agents to keep us (and them) good collaborators, and specifically on our toes, to help all minds stay alert and plastic.

    If that comes to pass, we'll be rediscovering the same principles that biological evolution stumbled upon: the benefits of the imperfect "branch" or "successive limited comparison" approach of agentic behaviour, which perhaps favours heuristics (that clearly sometimes fail), interaction between imperfect collaborators with non-overlapping biases, etc etc

    https://contraptions.venkateshrao.com/p/massed-muddler-intel...

    > Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely.

  • visarga 3 hours ago ago

    > Whether that student walks out the door five years later as an independent thinker or a competent prompt engineer is, institutionally speaking, irrelevant.

    I think this is a simplification. Of course Bob relied on AI, but he also used his own brain to think about the problem. Bob is not reducible to "a competent prompt engineer"; if you think he is, just take any person who can prompt but knows no physics and ask them to do Bob's work.

    In fact, Bob might have a chance to cover more mileage at the higher level of work while Alice does the same at the lower level. Which is better? It depends on how AI evolves.

    The article assumes the alternative to AI-assisted work is careful human work. I am not sure careful human work is all that good, or that it will scale well in the future. Better to rely on AI on top of careful human work.

    My objection comes from remembering how senior devs review PRs... "LGTM"... it's pure vibes. If you are to seriously review a PR you have to run it, test it, check its edge cases, eval its performance - more work than making the PR itself. The entire history of software is littered with bugs that sailed through review because review is performative most of the time.

    Anyone remember the replication crisis in science?

  • steveBK123 4 hours ago ago

    I agree with the general premise - the risk is we don’t develop juniors (new Alices) anymore, and at some point people are just sloperators gluing together bits of LLM output they do not understand.

    I have seen versions of this in the wild, where a firm has gone through hard times, internal systems have lost all their original authors and every subsequent generation of maintainers, and what's left is people in awe of a machine that hasn't been maintained in a decade.

    I interviewed a guy once who was genuinely proud of himself, volunteering the information to me as he described resolving a segfault in a live trading system by putting kill -9 in a cronjob. Ghastly.

  • pwr1 an hour ago ago

    I catch myself doing this more than I'd like to admit. Copy something from an LLM, it works, ship it, move on. Then a week later something breaks and I realize I have no idea what that code actually does! The speed is addicting, but you're slowly trading depth for velocity, and at some point that bill comes due.

  • bwfan123 2 hours ago ago

    > The problem isn't that we'll decide to stop thinking. The problem is that we'll barely notice when we do

    Most of what we call thinking is merely justifying beliefs that make us emotionally happy, and is not creative per se. I am making a distinction between "thinking" as we know it and "creative thinking", which is rare and can see things in an unbiased manner, breaking out of known categories. Arguably, at the PhD level, there need to be new ideas instead of remixes of the existing ones.

  • ahussain 3 hours ago ago

    > When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent's fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob's weekly updates to his supervisor were indistinguishable from Alice's.

    In my experience, doing these things with the right intentions can actually improve understanding faster than not using them. When studying physics I would sometimes get stuck on small details - e.g. what algebraic rule was used to get from Eq 2.1 to 2.2? what happens if this was d^2 instead of d^3 etc. Textbooks don't have space to answer all these small questions, but LLMs can, and help the student continue making progress.

    Also, it seems hard to imagine that Alice and Bob's weekly updates would be indistinguishable if Bob didn't actually understand what he was working on.

    • sumeno 3 hours ago ago

      Faster doesn't always mean better. I've "learned" things from LLMs really fast, but I don't retain the information the same way as if I had taken my time to really work through it.

  • omega3 4 hours ago ago

    I wonder what effect AI has had on online education - course signups, new resources being added, etc.

    I've recently started csprimer, and whilst it's mentally stimulating, I wonder if I'm not completely wasting my time.

  • pbw 4 hours ago ago

    There's certainly a risk that an individual will rely too much on AI, to the detriment of their ability to understand things. However, I think there are obvious counter-measures. For example, requiring that the student can explain every single intermediate step and every single figure in detail.

    A two-hour thesis defense isn't enough to uncover this, but a 40-hour deep probing examination by an AI might be. And the thesis committee gets a "highlight reel" of all the places the student fell short.

    The general pattern is: "Suppose we change nothing but add extensive use of AI, look how everything falls apart." When in reality, science and education are complex adaptive systems that will change as much as needed to absorb the impact of AI.

  • __MatrixMan__ 5 hours ago ago

    But aren't you still going to have to convince other people to let you do it with their money/data/hardware/etc? The understanding necessary to make that argument well is pretty deep and is unaffected by AI.

    I've been having a lot of fun vibe coding little interactive data visualizations, so when I present the feature to stakeholders they can fiddle with it and really understand how it relates to existing data. I saw the agent leave a comment regarding Cramer's rule, and yeah, it's a bit unsettling that I forgot what that is and haven't bothered to look it up, but I can tell from the graphs that it's doing the correct thing.
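
    (For anyone else who forgot and doesn't want to look it up either: Cramer's rule solves Ax = b by ratios of determinants. A toy sketch of the idea, not the agent's actual code:)

    ```python
    import numpy as np

    def cramer_solve(A, b):
        """Cramer's rule: x_i = det(A_i) / det(A), where A_i is A
        with its i-th column replaced by b. Fine for tiny systems;
        numerically poor for large ones."""
        det_A = np.linalg.det(A)
        x = np.empty(len(b))
        for i in range(len(b)):
            A_i = A.copy()
            A_i[:, i] = b       # swap column i for the right-hand side
            x[i] = np.linalg.det(A_i) / det_A
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(cramer_solve(A, b))     # [0.8 1.4]
    print(np.linalg.solve(A, b))  # same answer, the usual way
    ```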

    There's now a larger gap between me and the code, but the chasm between me and the stakeholders is getting smaller and so far that feels like an improvement.

    • danielbln 4 hours ago ago

      Every AI/agentic thread on HN follows the same tension: builders want to build and solve problems. Code or task completion are implementation details to be done on the path to the actual prize: solving the problem. And then there are the coders, who have honed their mechanical skill of implementation and derive their intellectual fulfillment from that. The latter crowd has a rough time because much of it can be automated now; the former camp is happy because look at all the stuff that can now be built!

  • acoye 2 hours ago ago

    I recommend the manga BLAME!, which explores what happens to humanity if you push this to 11: https://fr.wikipedia.org/wiki/BLAME!

  • ChrisMarshallNY 3 hours ago ago

    This is not wrong, but the "Bob and Alice" conundrum is not simple, either.

    In academia, understanding is vital. The same for research.

    But in production, results are what matters.

    Alice would be a better researcher, but Bob would be a better producer. He knows how to wrangle the tools.

    Each has its value. Many researchers develop marvelous ideas but struggle to commercialize them, while production-oriented engineers struggle to come up with the ideas.

    You need both.

    • bwfan123 an hour ago ago

      > You need both.

      yea, there are multiple parts to education: 1) teach skills useful to the economy, 2) teach the theories of the subject, and finally 3) tweak existing theories and create new ones. An electrician can fix problems without understanding the theory of electromagnetism. These are the trades folks. An EE college graduate has presumably understood some theory and can apply it in different useful ways. These are the engineers. Finally, there are folks who not only understand the theory of the craft, but can tweak it creatively for the future. These are the researchers.

      Bob fits better as a tradesperson or engineer, whereas Alice fits better as a researcher.

    • cmiles74 3 hours ago ago

      I have to disagree that Bob will be a better producer, although I do agree that Bob will produce more. In this scenario, Bob isn't clear on which LLM output is valid and important and which is erroneous and misleading; I think that's a pretty critical distinction. It's the kind of thing that might go undetected for a long time, until a particular paper turns out to be important and it's discovered that it's also entirely wrong, wasting a lot of time and energy.

      • ChrisMarshallNY 2 hours ago ago

        Sounds like you're still thinking of Bob as a researcher.

        In production, there would be no "paper"; just some software/hardware product.

        If there was a problem, that would be fairly obvious, with testing (we are going to be testing our products, right?).

        I have been wrestling all morning, with an LLM. It keeps suggesting stuff that doesn't work, and I need to keep resetting the context.

        I am often able to go in, and see what the issue is, but that's almost worthless. The most productive thing that I can do, is tell the LLM what is the problem, on the output end, and ask it to review and fix. I can highlight possible causes, but it often finds corner cases that I miss. I have to be careful not to be too dictatorial.

        It's frustrating, as the LLM is like a junior programmer, but I can make suggestions that radically improve the result, and the total time is reduced drastically. I have gotten done, in about two hours, what might have taken all day.

    • nothinkjustai an hour ago ago

      If results are what matters, why is popular software so buggy and lacking in features?

      • ChrisMarshallNY 32 minutes ago ago

        Because people will pay for crap.

        As long as that’s the case, those that create crap will thrive.

        Pretty basic, and long predates LLMs.

  • sunir 4 hours ago ago

    I think the mountain of things I don't understand was already huge. It doesn't stop me from getting a grip on the things I need to be responsible for and using tools to contain complexity irrelevant to me. Just as many scientists have a stats person.

    The risk is that civilization is over its skis because humans are lazy. Humans are always lazy. In science there’s a limit to bs because dependent works fail. In economics there’s a crash. In physics stuff breaks. Then there is a correction.

  • txrx0000 2 hours ago ago

    The threat is: if you replace your cognitive capabilities with AI, but you don't control the entire system your AI runs on (hardware, firmware, drivers, OS, weights, frontend), then that's equivalent to someone else owning a part of your brain.

  • inatreecrown2 5 hours ago ago

    Using AI to solve a task does not give you experience in solving the task, it gives you experience in using AI.

  • sam_lowry_ 6 hours ago ago

    See also Profession by Isaac Asimov [0] and his short story The Feeling of Power [1]. Both are social dramas about societies that went far down the path of ignorance.

    [0] http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...

    [1] https://s3.us-west-1.wasabisys.com/luminist/EB/A/Asimov%20-%...

  • MarcelinoGMX3C an hour ago ago

    Frankly, the "AI as accelerant" argument, as fomoz puts it, holds true only when you have a solid understanding of the domain. In enterprise system builds, errors don't just break a model the way they might in theoretical physics; a faked coefficient from an LLM could mean a production outage.

    It's why I push for a hybrid mentor-apprentice model. We need to actively cultivate the next generation of "Schwartzes" with hands-on, critical thinking before throwing them into LLM-driven environments. The current incentive structure, as conception points out, isn't set up for this, but it's crucial if we want to avoid building on sand.

  • shellkr 4 hours ago ago

    This is almost the same as going from making fire with a stick to using a lighter... sure, it's simplified, but still not wrong. Humans doing the grunt work can still make mistakes, as does the machine; the machine will eventually discover them. The same cannot be said of the human, because the work needed to do so might be too much. In the end we might not learn as much... but it will not matter, and thus it's really not an issue.

    • techblueberry 4 hours ago ago

      I think I disagree. In what I see around me, it's less like going from fire to lighter and more like going from hand tools to power tools. If your skill was in understanding how the hand tools work, it's harder to move up a level of abstraction and have a vision for building a house. If we're not able to learn, then fewer people are going to be able to get that vision, especially in technical domains where engineering and architecture matter. It's going to be a weird future. I'm pretty effective with these tools, but I have fifteen years of hacking on things manually behind me. Some folks who are not as far into their careers don't seem to know where to start.

      There’s a reason most people aren’t promoted to manager until they have years of experience under their belt. And now we’re expecting folks to be managers on day 1.

  • tmountain 4 hours ago ago

    Thankfully, I am nearing the end of my career with software after 25 years well spent. If I had been born in a different decade, I would be facing the brunt of the AI shift, and I don’t think I would want to continue in the industry. Obviously, this is a personal decision, but we are in a totally different domain now, where, at best, you’re managing an LLM to deliver your product.

  • grafelic 5 hours ago ago

    "He shipped a product, but he didn't learn a trade." I think is the key quote from this article, and encapsulates the core problem with AI agents in any skill-based field.

  • lambdaone 5 hours ago ago

    Very insightful. One key sentence sums it up: "He shipped a product, but he didn't learn a trade."

    This is going to get worse, and eventually cause disastrous damage unless we do something about it, as we risk losing human institutional memory across just about every domain, and end up as child-like supplicants to the machines.

    But as the article says, this is a people problem, not a machine problem.

  • somethingsome 4 hours ago ago

    Personally, I wrote an essay to my students explaining exactly that: the purpose is for them to think better and improve over time. They can use LLMs, but if they stop thinking, they are just failing themselves, not me.

    It was a great success; now when I propose that they use some model to do something, they tend to avoid it.

  • Lerc 4 hours ago ago

    The problem I see with this argument is that the ship sailed on understanding what you are doing years ago. It seems like it is abstraction layers all the way down.

    If an AI is capable of producing an elegant solution with fewer levels of abstraction, it could be possible that we end up drifting towards having a better understanding of what's going on.

  • bluedino 4 hours ago ago

    Look at how bad the auto industry has gotten when it comes to quality and recalls.

    A combination of beancounters running the show and the old, experienced engineers dying, retiring, and going through buyouts has left things in a pretty sad state.

  • bambushu 4 hours ago ago

    The letterpress analogy is good but misses something. With letterpress you lost a craft skill. With AI coding you risk losing the ability to evaluate the output. Those are different problems.

    I use AI agents for coding every day. The agent handles boilerplate and scaffolding faster than I ever could. But when it produces a subtle architectural mistake, you need enough understanding to catch it. The agent won't tell you it made a bad choice.

    What actually helps is building review into the workflow. I run automated code reviews on everything the agent produces before it ships. Not because the code is bad, usually it's fine. But the one time in ten that it isn't, you need someone who understands what the code should be doing.
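
    As a rough illustration of what "building review into the workflow" can mean (a hypothetical sketch, not my exact setup; the tools named are common stand-ins, swap in whatever your stack uses):

    ```python
    import subprocess
    import sys

    # Hypothetical pre-ship gate for agent-produced changes: nothing the
    # agent writes reaches production without passing mechanical checks,
    # and anything that fails gets routed to a human.
    CHECKS = [
        ["ruff", "check", "."],  # lint: obvious mistakes and dead code
        ["mypy", "."],           # types: interfaces the agent got wrong
        ["pytest", "-q"],        # tests: behavior, including edge cases
    ]

    def main() -> int:
        for cmd in CHECKS:
            print(f"running: {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                # Mechanical checks can't judge architecture; a failure
                # here just means a human must look before anything ships.
                print("check failed: route to human review")
                return 1
        print("all automated checks passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())
    ```

    The point isn't that the checks are smart; it's that the agent's output always passes through something that can say no, and the subtle architectural cases still land on a human who understands what the code should be doing.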

  • dwa3592 3 hours ago ago

    What a wonderful read. Thank you!

    The way I think about this is : We can't catch the hallucinations that we don't know are hallucinations.

  • hgo 5 hours ago ago

    I like this article and it reads well, but I have to say that, to me, it really reads as something written by an LLM. Probably under supervision by a human who knew what it should say.

    I don't know if I mind.

    Example. This paragraph, to me, has an eerily perfect rhythm. The ending sentence perfectly delivers the twist. Like, why would you write an argument piece in the science realm in perfect prose?

    > Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent's fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob's weekly updates to his supervisor were indistinguishable from Alice's. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

    • zozbot234 4 hours ago ago

      > The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

      LLM speak. But the rest of that quote doesn't look LLM-generated; it's too fiddly and complex an argument. I think this was edited with AI, but the underlying argument at least is human.

    • kelnos 5 hours ago ago

      Or maybe the author is just a competent writer.

      • hgo 4 hours ago ago

        Yes. Let's assume so. My point is the suspicion itself.

        • alex_suzuki 3 hours ago ago

          I hate that this is the first thing that crosses my mind now anytime I read a well-written article.

    • swiftcoder 5 hours ago ago

      > why would you write in perfect prose

      If you could, why wouldn't you? LLM witch-hunts over every halfway competent writer are becoming quite tiresome

      • hgo 4 hours ago ago

        Yes, I agree, and I use LLMs in my own writing. I raise it because it was eerie to me as a reader and I wonder if it's a common thought. I wonder what other readers think on this matter.

        Again, I appreciate the article very much and I'm glad the other comments are on the article's content.

    • JackSlateur 5 hours ago ago

      It sure is

  • talkingtab 2 hours ago ago

    This "drift" is not a drift at all, nor is it new. There are many names for it: cargo cults, think-by-numbers (like paint-by-numbers), ant mills. It is recipes. And many, many common recipes demonstrate a widespread lack of understanding.

    This follow-the-leader kind of "thinking" is probably a requirement. The amount of expertise it would require to understand and decide about everything in our daily life is overwhelming. Do you fix your own car, decide each day how to travel, get food, and understand how all that works? No.

    So what is the problem? The problem is when you follow the leader and the leader has an agenda that differs from your agenda. Do you really think Jeff Bezos being a (the?) major investor in the Washington Post has anything to do with democracy? You know, as in the WaPo slogan "Democracy Dies in Darkness".

    Does Jeff have an agenda that differs from yours? Yes. NYT? Yes. Hacker news? Yes. Google? Yes. We now live in a world so filled with propaganda that it makes no difference whether something is AI. We all "follow". Or not.

  • patapong 5 hours ago ago

    I think this is a very important debate, and I think the author here adds a lot to this discussion! I mostly agree with it, but wanted to point out a few areas where I do not fully agree.

    > Take away the agent, and Bob is still a first-year student who hasn't started yet.

    This may be true, but I can see almost no conceivable world where the agent will be taken away. I think we should evaluate Bob's ability based on what he can do with an agent, not without, and here he seems to be doing quite well.

    > I've been hearing "just wait" since 2023.

    On almost any timeline, this is very short. Given the fact that we have already arrived at models able to almost build complete computer programs based on a single prompt, and solve frontier level math problems, I think any framework that relies on humans continuing to have an edge over LLMs in the medium term may be built on shaky grounds.

    Two very interesting questions today in this vein for me are:

    - Is the best way to teach complex topics to students today to have them carry out simple tasks?

    The author acknowledges that the difference between Bob and Alice only materializes at a very high level, basically when Alice becomes a PI of her own. If we were solely focused on teaching thinking at this level (with access to LLMs), how would we frame the educational path? It may look exactly like it does now, but it could also look very different.

    - Is there inherent value in humans learning specific skills?

    If we get to a stage where LLMs can carry out most/all intellectual tasks better than humans, do we still want humans to learn these skills? My belief is yes, but I am frankly not sure how to motivate this answer.

    • ThrowawayR2 an hour ago ago

      > "no conceivable word where the agent will be taken away"

      LLM access is a paid service. HN concerns itself with inequality constantly and it's not inconceivable that some individuals get ahead because they can afford to pay for more tokens and better models than those who are poorer.

  • djoldman 6 hours ago ago

    These themes have been going around and around for a while.

    One thing I've seen asserted:

    > What he demonstrated is that Claude can, with detailed supervision, produce a technically rigorous physics paper. What he actually demonstrated, if you read carefully, is that the supervision is the physics. Claude produced a complete first draft in three days... The equations seemed right... Then Schwartz read it, and it was wrong... It faked results. It invented coefficients...

    The argument that AI output isn't good enough is somewhat in opposition to the idea that we need to worry about folks losing or never gaining skills/knowledge.

    There are ways around this:

    "It's only evident to experts and there won't be experts if students don't learn"

    But at the end of the day, in the long run, the ideas and results that last are the ones that work. By "work", I mean ones that strictly improve outcomes (all outputs are the same, with at least one better). This is because, with respect to technological progress, humans are pretty well modeled as a slightly-better-than-random search for optimal decisions, where we tend not to go backwards permanently.

    All that to say that, at times, AI is one of the many things that we've come up with that is wrong. At times, it's right. If it helps on aggregate, we'll probably adopt it permanently, until we find something strictly better.

    • jacquesm 6 hours ago ago

      AI is extremely good at producing well formatted bullshit. You need to be constantly on guard against stuff that sounds and looks right but ultimately is just noise. You can also waste a ton of time on this. Especially OpenAI's offering shows poorly in this respect: it will keep circling back to its own comfort zone to show off some piece of code or some concept that it knows a lot about whilst avoiding the actual question. It's really good at jumping to the wrong conclusions (and making it sound like some kind of profound insight). But the few times that it is on the money make up for all of that noise. Even so, I could do without the wasted time and endless back and forths correcting the same stuff over and over again, it is extremely tedious.

  • jerkstate 5 hours ago ago

    Nobody actually understands what they're doing. When you're learning electronics, you first learn about the "lumped element model", which allows you to simplify Maxwell's equations. I think it is a mistake to think that solving problems with a programming language is "knowing how to do things" - at this point, we've already abstracted assembly language -> machine instructions -> logic gates and buses -> transistors and electronic storage -> lumped matter -> quantum mechanics -> ???? - so I simply don't buy the argument that things will suddenly fall apart by abstracting one level higher. The trick is to get this new level of abstraction to work predictably, which admittedly it doesn't yet, but look how far it's come in a short couple of years.

    This article first says that you give juniors well-defined projects and let them take a long time because the process is the product. It then goes on to lament the fact that they will no longer have to debug Python code, as if debugging Python code were the point of it all. The thing that LLMs can't yet do is pick a high-level direction for a novel problem and iterate until the correct solution is reached. They absolutely can and do iterate until a solution is reached, but it's not necessarily correct. Previously, guiding the direction was the job of the professor. Now, in a smaller sense, the grad student needs to be guiding the direction and validating the details, rather than implementing the details with the professor guiding the direction. This is an improvement - everybody levels up.

    I also disagree with the premise that the primary product of astrophysics is scientists. Like any advanced science it requires a lot of scientists to make the breakthroughs that trickle down into technology that improves everyday life, but those breakthroughs would be impossible otherwise. Gauss developed the normal distribution while trying to understand the measurement error of astronomical observations. Without general relativity we would not have GPS or precision timekeeping. Astrophysics uncovers the rules that will allow us to travel between planets. Understanding the composition and behavior of stars informs nuclear physics, reactor design, and solar panel design. The computation systems used by advanced science prototyped many commercial advances in computing (HPC, cluster computing, AI itself).

    So not only are we developing the tools to improve our understanding of the universe faster, we're leveling everybody up. Students will take on the role of professors (badly, at first, but are professors good at first? probably not, they need time to learn under the guidance of other faculty). professors will take on the role of directors. Everybody's scope will widen because the tiny details will be handled by AI, but the big picture will still be in the domain of humans.

    • saulpw 2 hours ago ago

      > as if debugging python code is the point of it all.

      You have a good point, but I would argue that debugging itself is a foundational skill. Like imagine Sherlock Holmes being able to use any modern crime-fighting technology, and using it extensively. If Sherlock is not using his deductive reasoning, then he's not a 'detective'. He's just some schmuck who has a cool device to find the right/wrong person to arrest.

      Debugging is "problem-solving" in a specific domain. Sure, if the problem is solved, then I guess that's the point of it all and you don't have to solve the problem. But we're heading towards a world in which people have to solve problems, but their only problem-solving skill is trying to get an AI to find someone to arrest. We need more Sherlocks to use their minds to get to the bottom of things, not more idiot cops who arrest the wrong person because the AI told them to.

  • mikeaskew4 5 hours ago ago

    “The world still needs empirical thinkers, Danny.”

    - Caddyshack

  • BobBagwill 4 hours ago ago

    Try giving this problem to different LLM chatbots:

    If I could make a rocket that could accelerate at 3 Gs for 10 years, how long would it take to travel from Earth to Alpha Centauri by accelerating at 3 Gs for half the time, then decelerating at 3 Gs for half the time?

    Hint: They don't all get it right. Some of them never got it right after hints, corrections, etc.
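
    For reference, a minimal sketch of the textbook constant-proper-acceleration ("relativistic rocket") formulas, assuming Alpha Centauri at roughly 4.37 light-years and working in units of light-years and years so that c = 1:

    ```python
    import math

    # Units: light-years and years, so c = 1.
    C = 1.0    # speed of light
    G = 1.032  # 1 g expressed in ly/yr^2 (approx.)
    A = 3 * G  # 3 g of constant proper acceleration
    D = 4.37   # Earth -> Alpha Centauri distance in ly (approx.)

    d = D / 2  # each half: accelerate over d, then decelerate over d

    # Textbook relativistic-rocket results, starting from rest over distance d:
    #   Earth-frame time: t   = (c/a) * sqrt((1 + a*d/c^2)^2 - 1)
    #   ship-frame time:  tau = (c/a) * acosh(1 + a*d/c^2)
    k = 1 + A * d / C**2
    t_half = (C / A) * math.sqrt(k**2 - 1)
    tau_half = (C / A) * math.acosh(k)

    print(f"Earth frame: {2 * t_half:.2f} years")    # ~5.0 years
    print(f"Ship frame:  {2 * tau_half:.2f} years")  # ~1.8 years
    ```

    Roughly five years in the Earth frame and under two aboard the ship, so the "10 years" of thrust in the prompt (if read as ship time) is more than enough. A naive Newtonian calculation gives about 2.4 Earth years and a peak speed well above c, which is the kind of slip to watch for in the chatbot answers.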

  • efields 6 hours ago ago

    I literally don't know how compilers work. I've written code for apps that are still in production 10 years later.

    • Herbstluft 6 hours ago ago

      Are you working on compilers? If not, it seems you did not understand what is being talked about here.

      Do you lack a fundamental understanding of those apps you built that are still in use? Did you lack understanding of their workings when you built them?

    • layer8 5 hours ago ago

      You don’t need to understand a compiler because the code it compiles, when valid according to the language specification, is supposed to work as written, and virtually always does. There is no language specification and no “as written” with LLMs.

    • wglb 5 hours ago ago

      No problem with that.

      However, at one point in my career, I was frustrated with limitations in a language (Fortran II) and my curiosity got the better of me and I studied compilers thoroughly.

      This led to a new job and the understanding of many new useful programming concepts. Very rewarding.

      But if you are curious, studying compilers, maybe even writing a new one, will give you tools to do other things.

      While working with LLMs, much of my experience gives me new ideas to push the LLM to explore.

    • bakugo 6 hours ago ago

      Have you written a compiler, though?

  • tom-blk 5 hours ago ago

    Strongly agree, we see this almost everywhere now.

  • ghc 6 hours ago ago

    As straw men go, this is an attractive one, but...

    When I was fresh out of undergrad, joining a new lab, I followed a similar arc. I made mistakes, I took the wrong lessons from grad student code that came before mine, I used the wrong plotting libraries, I hijacked python's module import logic to embed a new language in its bytecode. These were all avoidable mistakes and I didn't learn anything except that I should have asked for help. Others in my lab, who were less self-reliant, asked for and got help avoiding the kinds of mistakes I confidently made.

    With 15 more years of experience, I can see in hindsight that I should have asked for help more frequently because I spent more time learning what not to do than learning the right things.

    If I had Claude Code, would I have made the same mistakes? Absolutely not! Would I have asked it to summarize research papers for me and to essentially think for me? Absolutely not!

    My mother, an English professor, levies similar accusations against the students of today and how they let models think for them. It's genuinely concerning, of course, but I can't help but think that this phenomenon occurs because learning institutions have not adjusted to the new technology.

    If the goal is to produce scientists, PIs are going to need to stop complaining and figure out how to produce scientists who learn the skills that I did even when LLMs are available. Frankly I don't see how LLMs are different from asking other lab members for help, except that LLMs have infinite patience and don't have their own research that needs doing.

    • jacquesm 6 hours ago ago

      AI does not give you knowledge. It magnifies both intelligence and stupidity with zero bias towards either. If you are of above-average intelligence, then you may be able to do a little bit more than before, assuming you were trained before AI came along. And if you are not so smart, then you will be able to make larger messes.

      The problem, and I think the article indirectly points at this, is that the next generation to come along won't learn to think for themselves first. So they will on average end up on the 'B' track rather than developing their intelligence. I see this happening with the kids my kids hang out with. They don't want to understand anything because the AI can do that for them, or so they believe. They don't see that if you don't learn to think about smaller problems, the larger ones will be completely out of reach.

      • thijson 5 hours ago ago

        Maybe the solution is an AI that acts as an instructor instead of just trying to solve everything itself. I do this with my kids: they ask me how to do something, and I give them hints but don't outright do it all for them. The article's author mentioned in the first part that this is how they would instruct, too.

        • thijson 5 hours ago ago

          I recently heard of a professor who said to the class: you can use an AI to solve the assignments; however, I'll see if you really understand the material on the final exam.

      • skydhash 5 hours ago ago

        Students are given student-level problems, not because someone wants the results, but because the students can learn how solving problems works. Solving those easy problems with an LLM does not help anyone.

  • zaikunzhang 3 hours ago ago

    See also

    D. W. Hogg, "Why do we do astrophysics?", https://arxiv.org/abs/2602.10181, February 2026.

  • squirrel 5 hours ago ago

    The article is well-written and makes cogent points about why we need "centaurs", human/computer hybrids who combine silicon- and carbon-based reasoning.

    Interestingly, the text has a number of AI-like writing artifacts, e.g. frequent use of the pattern "The problem isn't X. The problem is Y." Unlike much of the typical slop I see, I read it to the end and found it insightful.

    I think that's because the author worked with an AI exactly as he advocates, providing the deep thinking and leaving some of the routine exposition to the bot.

  • fredgrott 2 hours ago ago

    I know how we can fix this....

    It's of course devious, exactly some of our styles :)

    Give AI to VCs to use for all their domain stuff....

    They then make wrong investment decisions based on the AI's wrong info and get killed in the market....

    The market ends up killing AI outright.... problem solved, temporarily

  • robot-wrangler 5 hours ago ago

    Another threat is that you can find tons of papers pointing out how neural AI still struggles to handle simple logical negation. Who cares, right? We use tools for symbolics, yada yada. Except what's really the plan? Are we going to attempt parallel formalized representations of every piece of input context just to flag the difference between please DON'T delete my files and please DO? This is all super boring though, and nothing bad happened lately, so back to perusing the latest AGI benchmarks..

  • maplethorpe 4 hours ago ago

    I honestly don't know why this guy is hiring Alice and Bob in the first place, instead of just running two agents. He seemed to be saying it's to invest in them as people, but why? What is the end goal? If the end goal is to produce research, then just get the agents to do it.

  • hnzionists 4 hours ago ago

    Noobs love LLMs because they can finally write for loops and generate absolute trash web pages and UI.

    These noobs go “Man this replaces devs!”

    Only the experienced ones really see the LLM as the calculator it is.

  • simianwords 5 hours ago ago

    > Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.

    The author is a bit naive here:

    1. Society only progresses when people are specialised and can delegate their thinking

    2. Specialisation has been happening for millennia. Agriculture allowed people to become specialised due to the abundance of food

    3. We accept delegation of thinking in every part of life. A manager delegates thinking to their subordinates. I delegate some thinking to my accountant

    4. People will eventually get the hang of using AI to do the optimum amount of delegation such that they still retain what is necessary and delegate what is not necessary. People who don't do this optimally will get outcompeted

    The author just focuses on some local problems like skill atrophy but does not see the larger picture and how this specific pattern has repeated throughout humanity's history.

    • zajio1am 5 hours ago ago

      A related quote from A. N. Whitehead:

      > It is a profoundly erroneous truism ... that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.

    • skydhash 5 hours ago ago

      Current civilization is very complex, and it's also fragile in some parts. When you build systems around instant communication and the availability of stuff built on the other side of the world on a fixed schedule, it's very easy to disrupt.

      > 4. People will eventually get the hang of using AI to do the optimum amount of delegation such that they still retain what is necessary and delegate what is not necessary. People who don't do this optimally will get outcompeted

      Then they’ll be at the mercy of the online service's availability and of the company itself. Also, there's the non-deterministic output. I can delegate my understanding of some problems to a library, a piece of software, a framework, because their operation is deterministic. Not so with LLMs.

      • mrugge 5 hours ago ago

        I have been able to produce 20x the amount of useful output both in my day job and in my free time using a popular coding agent in 2026. Part of me is uncomfortable at having my hard-won knowledge of how to write English, write code, and design systems partly commoditized. Part of me is amazed and grateful for being in this timeline. I am now learning and building things I only dreamed about for years. The sky is the limit.

      • simianwords 5 hours ago ago

        When technology progressed enough to allow for

        1. outsourcing and offshoring (non deterministic, easy to disrupt)

        2. cloud computing (mercy of the online service availability)

        we had the same dilemma.

        Outsource exactly what you think is not critical to the business. Offshore enough so that you gain good talent across the globe. Use cloud computing so that your company does not spend time working on solving problems that have already been solved. Assess what skills are required and what aren't - an e-commerce company doesn't need deep expertise in linux and postgres.

        Companies that do this well outcompete other companies that obsess over details that are not core to their value proposition. This is how modern startups work: it is in finding that critical balance of buying products externally vs building only the crucial skills internally.

    • lapcat 4 hours ago ago

      I think you missed the point. The entire article is about specialists: astrophysicists. The problem with AI is that specialists are delegating their thinking about their specialty! The fear here is that society will stop producing specialists, and thus society will no longer progress.

      • simianwords 4 hours ago ago

        You are assuming that the set of specialists is a fixed system! That's not the case. With changes in technology, you get more and more specialists, the same way the Agricultural Revolution allowed more specialists to exist.

        • lapcat 4 hours ago ago

          This comment sounds like hand-waving to me.

          The author describes specifically how specialists are produced and how AI undermines their production.

          No, we won't get more and more specialists literally "the same way" as the agricultural revolution. You need to be much more specific about how we'll get more specialists under the incentive structure created by AI, otherwise this sounds like some kind of religious faith in AI and progress.

          • simianwords 4 hours ago ago

            I can't tell you what specialists we will get, the same way that in the year 1945 you wouldn't have been able to tell me we would have Linux kernel specialists.

            People do more things with AI.

            More things = more inventions = the field growing.

            The field grows and people become specialists in what used to be small or trivial.

            A mathematician in the 1500s wouldn't have thought algebraic topology would be a specialisation.

            • lapcat 4 hours ago ago

              > I can't tell what specialists we will get the same way you wouldn't be able to tell me we will have Linux Kernel specialists at the year 1945.

              How about addressing astrophysics specifically. What are you claiming about it? Are you claiming that in the future, we won't need astrophysicists at all, AI can do all of our astrophysics for us, freeing humans to specialize in... other subjects?

              And doesn't the same problem exist for Linux kernel specialists? Why even become a Linux kernel specialist when AI can write your source code for you?

              > people become specialists

              This is precisely what is in question.

              > A mathematician in 1500's wouldn't think algebraic topology would be a specialisation.

              The specific subjects have changed over time, but the production of specialist mathematicians hasn't really changed. It takes hard work, grunt work, struggling, making mistakes and learning from them, as well as expert supervision. The problem with AI is that it encourages and incentivizes intellectual laziness, the opposite of what is required to produce specialists.

              A related problem: LLMs have been trained with papers written and supervised by Alice-type specialists. There's a common claim that LLMs will hallucinate less in the future, but I think that LLMs will hallucinate more in the future, when specialty fields become dominated by Bob-type "specialists" who have a harder time distinguishing fact from fiction. When LLMs have to train on material produced by earlier versions of LLMs, the quality trend will go down, not up.

              • simianwords 4 hours ago ago

                > The specific subjects have changed over time, but the production of specialist mathematicians hasn't really changed. It takes hard work, grunt work, struggling, making mistakes and learning from them, as well as expert supervision. The problem with AI is that it encourages and incentivizes intellectual laziness, the opposite of what is required to produce specialists

                Let's take the example of economics. Economists use ideas from mathematics like integrals, statistics, PDEs and so on. They know that these concepts exist. They know how to apply them. They don't know these concepts deeply enough to make progress there.

                Do you think that Economists should deeply learn integrals, PDE's, Functional Analysis and Differential Geometry and all other concepts they use? Or do you think its better for them to focus just on their specific domain while learning just enough from other domains?

                You keep coming back to AI replacing mathematicians. I'm not making that claim. I'm not saying Linux kernel specialists will be replaced by AI. I'm simply claiming that not everyone needs to be Linux Kernel specialists. This is precisely what AI is allowing: it automates things I don't need to know deeply so that I can focus on things I do need to understand deeply.

                • lapcat 4 hours ago ago

                  > I'm simply claiming that not everyone needs to be Linux Kernel specialists.

                  This is an uninteresting and indeed silly claim, because nobody has ever asserted the opposite.

                  The point is that society needs some Linux kernel specialists, and some astrophysicists, but AI is undermining their production.

                  > This is precisely what AI is allowing: it automates things I don't need to know deeply so that I can focus on things I do need to understand deeply.

                  The submitted article is about how AI is automating the things that a specialist does need to understand deeply. It's about so-called astrophysicists using AI to produce astrophysics papers, not about how non-astrophysicists use AI to produce astrophysics papers so that they can focus on whatever their non-astrophysics specialty may be, if they have any specialty.

                  • simianwords 3 hours ago ago

                    I'm responding to this quote

                    > Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.

                    If we both agree that an astrophysicist may not need to understand some things (even in their own domain) to make progress, then we are in agreement. Not all the things a researcher works on while writing their paper are useful or necessarily done by them manually. In such cases it becomes necessary to let the LLM take over.

                    • lapcat 3 hours ago ago

                      > I'm responding to this quote

                      > > Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe

                      The article author and I share a love of Frank Herbert, God Emperor of Dune, and the quote in question. Nonetheless, it's a mistake to focus on this quote rather than on the rest of the article. The quote is nothing more than a nice literary reference; it's not central to the argument.

                      The character who spoke the quote is a magically prescient human-sandworm hybrid, thousands of years old, speaking to his distant relative who was specially bred by him to be invisible to the magical prescience, so let's take the quote with a grain of... sand. ;-)

                      > If we both agree that an astrophysicist may not need to understand things (even in their own domain) to make progress then we are in agreement.

                      Your parenthetical remark is actually the main problem!

  • garn810 8 hours ago ago

    Academia has always been full of narcissists chasing status with flashy papers and half-baked brilliant ideas (70%? maybe). LLMs just made the whole game trivial: now literally anyone can slap together something that sounds deep without ever doing the actual grind. LLMs are just speeding up the process; it's just a matter of time before this exposes what the entire system has been all along.

  • itmitica 4 hours ago ago

    Contrarian just for the sake of it. Get on board or stay behind. Whatever good or bad AI brings to the table, it's here to stay. The cat's out of the bag. Might as well enjoy it. Evolution will not wait for your whimsical made-up reality. It will run you over.

    • techblueberry 4 hours ago ago

      What if AI in the long run makes us slower and less effective? As someone who is one of the folks supercharged by these tools, I could see it.

      I think people are underestimating the level of experience and knowledge that's required to prompt LLMs. Not in the micro sense but in the macro. It seems so easy because it feels easy. But if you don't have a deep understanding of the domain, it will just feel impossible. The person next to you with domain experience will say "it's so easy, look at this simple sentence I typed in" and be like "it's just a skill issue, why is everyone struggling so much", not understanding the years of accumulated wisdom or innate talent it took to type that simple sentence.

      AI makes the easy things easy and the hard things harder, and more omnipresent.

      • itmitica 30 minutes ago ago

        My experience is AI helping me unload things I am not built for. It allows me to be creative with less drag. Not everyone aims to be a useless human automaton or a useless human thesaurus. It allows me to exercise my intelligence freely.