The Learning Loop and LLMs

(martinfowler.com)

116 points | by johnwheeler 12 hours ago

72 comments

  • onion2k 11 hours ago

    Learning what though? When I write software I learn the domain, the problem space, the architecture, the requirements, etc., as well as how to turn those things into code. I don't actually care about the code though - as soon as something changes I'll throw the code out or change it. It's an artefact of the process of solving a problem, which isn't the important bit. The abstract solution is what I care about.

    LLMs only really help to automate the production of the least important bit. That's fine.

    • Calavar 11 hours ago

      > Learning what though? When I write software I learn the domain, the problem space, the architecture, the requirements, etc

      You don't learn these things by writing code? This is genuinely interesting to me because it seems that different groups of people have dramatically different ways of approaching software development.

      For me, the act of writing code reveals places where the requirements were underspecified or the architecture runs into a serious snag. I can understand a problem space at a high level based on problem statements and UML diagrams, but I can only truly grok it by writing code.

      • yannyu 10 hours ago

        You're right, but coding 10 years ago, 20 years ago, and 30 years ago also looked very different from coding today in most cases. In every decade, we've abstracted away things that were critical and manual before. Is having LLMs write the code that much different from pulling in libraries rather than rolling your own? Or automating memory management instead of manually acquiring and releasing? Or using if/else/for instead of building your own logic for jumping to a subroutine?

    • orev 11 hours ago

      Writing the code is like writing an essay—maybe you have some ideas in your head, but the act of writing them down forces you to interrogate and organize them into something cohesive. Without that process, those ideas remain an amorphous cloud that, as far as you're concerned, is perfect. The process of forcing those thoughts into a linear stream is what exposes the missing pieces and errors in the logic.

      • onion2k 3 hours ago

        > Without that process, those ideas remain an amorphous cloud that, as far as you're concerned, is perfect.

        This is absolutely not the case. My first startup was an attempt to build requirements management software for small teams. I am acutely aware that there is a step between "an idea" and "some code" where you have to turn the idea into something cohesive and structured that you can then turn into language a computer can understand. The bit in the middle where you write down what the software needs to do in human language is the important part of the process - you will throw the code away by deleting it, refactoring it, improving it, etc. What the code needs to do doesn't change anywhere near as fast.

        Any sufficiently experienced developer who's been through the fun of working on an application that's been in production for more than a decade, where the only way to know what it does is by reading the code, will attest to the fact that the code is not the important part of software development. What the code is supposed to do is more important, and the code can't tell you that.

      • pvelagal 11 hours ago

        I totally agree. Trusting the LLM means you are not thinking anymore and are happy with the high-level ideas you had before you started coding, which may be incomplete. Missing pieces will stay missing until you see issues in production; I have seen this happen.

        • onion2k 3 hours ago

          Only if your idea of using AI is to write a single prompt to generate everything you need. That's a terrible way of using AI though, because it doesn't work.

          If you approach AI as an iterative process where you're architecting an application just as you would without AI, but using the tool to speed up parts of the process like writing one method or writing just the tests for the part you're building right now, then AI becomes a genuinely useful tool.

          For example, I've been using AI today to build some metrics tooling, and most of what I did with it was using it to assist in writing code to access an ancient version of a build tool that I can't find the documentation for because it's about 30 versions out of date. The API is wildly different from the modern one. Claude knows it though, so I just prompt it for methods to access the data I need from the API, and only that. The rest of the app is my terrible Python code. Without AI this would have taken me 4 or 5 times longer, if I could have done it at all.

        • raw_anon_1111 10 hours ago

          As if missing pieces don’t happen when people write code. No one is suggesting that you don’t thoroughly test the code.

      • dangets 11 hours ago

        Or similarly, the difference between reading/listening to a foreign language vs. writing/speaking one. Knowing how to read code or learn algorithms or design is different from actually writing it. It's the difference between theory and practice.

    • pvelagal 11 hours ago

      LLMs are doing more than that. They are doing so much that I have seen bad ideas creeping into the code base. I used to trust some engineers' code, but with the introduction of LLMs, I am spending more time on code reviews and am unable to trust significant portions of the code checked in.

    • CharlieDigital 11 hours ago

      Good taste in how to build software.

    • dionian 11 hours ago

      I use LLMs to generate a lot of code, but a large part of what I use them for is orchestration, testing, and validation. That's not always 'learning', and by the way, I learn by watching the LLM decide and execute, as it draws from knowledge pools faster than I can.

      • LtWorf 11 hours ago

        You're not learning.

        • raw_anon_1111 10 hours ago

          Is he getting paid? At the end of the day, that's the only reason I write code or do anything else once I get out of bed, walk over to the next room, and log onto my computer.

          Before the pearl clutching starts about my lack of coding ability: I started coding in 1986 in assembly on an Apple //e and spent the first dozen years of my career doing C and C++ bit twiddling.

          • LtWorf an hour ago

            His claim is that he is learning, not whether he's making money.

  • aeturnum 11 hours ago

    The way I talk about it is that the value you deliver as a software "engineer" is taste and good guesses. Anyone can bang out code given enough time. Anyone can read docs on how to implement an algorithm and implement it eventually. The way you deliver value is by having a feel for the service and good instincts about where to look first and how to approach problems. The only way to develop that taste and familiarity is to work on stuff yourself.

    Once you can show, without doubt, what you should do, software engineers have very little value. The reason they are still essential is that product choices are generally made under very ambiguous conditions. John Carmack said "If you aren't sure which way to do something, do it both ways and see which works better."[1] This might seem like it goes against what I am saying, but actually narrowing "everything possible" down to two options is huge value! That is a lot of what you provide as an engineer, and the only way you are going to hone that sense is by working on your company's product in production.

    [1] https://aeflash.com/2013-01/john-carmack.html

  • Terr_ 10 hours ago

    > Software development has always resisted the idea that it can be turned into an assembly line.

    This is... only true in a very very narrow sense. Broadly, it's our job to create assembly lines. We name them and package them up, and even share them around. Sometimes we even delve into FactoryFactoryFactory.

    > The people writing code aren't just 'implementers'; they are central to discovering the right design.

    I often remember the title of a paper from 40 years ago, "Programming as Theory Building". (And comparatively recently discussed here [0].)

    This framing also helps highlight the strengths and dangers of LLMs. The same aspects that lead internet-philosophers into crackpot theories can affect programmers creating their not-so-philosophical ones. (Sycophancy, false appearance of authoritative data, etc.)

    [0] https://news.ycombinator.com/item?id=42592543

  • crabmusket 11 hours ago

    "Programming as theory building" still undefeated.

    Also, fun to see the literal section separator glyphs from "A Pattern Language" turn up.

  • Jtsummers 11 hours ago

    The actual title is: The Learning Loop and LLMs

    For some reason johnwheeler editorialized it, and most of the comments are responding to the title and not the contents of the article (though that's normal regardless of whether the correct title or a different one is used; it's HN tradition).

    • sedatk 11 hours ago

      Does the editorialized title contradict the article?

      • Jtsummers 11 hours ago

        Yes. The editorialized title includes a statement not present in the article at all, "Don't automate". Joshi actually describes how he has used LLMs and his experience with them, and he never says not to use them at all, which is what the editorialized title suggested. The bulk of the article describes how LLMs can break the learning loop (as hinted at in the original title), which is a much more interesting topic than the HTML code generation a bunch of people are talking about.

        [The title has been changed, presumably by a mod. For anyone coming later it was originally incorrect and included statements not present in the article.]

        • sedatk 4 hours ago

          I'm glad that it's resolved then.

  • marcus_holmes 7 hours ago

    This ignores learning styles [0], and assumes that everyone learns by experimentation. Some people don't, they learn by reading/studying and don't ever need to experiment. They go from reading all the literature on the subject to building stuff on the first try. Of course they still make mistakes and learn from those mistakes, but they don't experiment to find out what went wrong; they go back to the books/blogs/docs and work out what they did wrong, then correct the code and try again.

    Similarly there are some engineering departments that absolutely do design everything first and only then code it up, and if there are problems in the coding stage they go back to design. I'm not saying they're efficient or that this is best practice, but it suits some organisations.

    [0] https://en.wikipedia.org/wiki/Learning_styles There's a ton of different approaches to this, and a lot of it is now discredited. But the core concept, that people learn differently, isn't disputed.

    • Jtsummers 7 hours ago

      > Similarly there are some engineering departments that absolutely do design everything first and only then code it up, and if there are problems in the coding stage they go back to design.

      That sounds like a slow-motion experiment, not a lack of experimentation.

  • johannes1234321 11 hours ago

    There are parts of software development which require understanding purpose and code, making good decisions, or having in-depth understanding in order to optimize. And there are parts that are just boring ceremony for using a library or doing some refactorings.

    The first one mostly requires experienced humans; the latter one is boring and good to automate.

    The problem is with all the in-between, and with getting people to the point where they can do the first. There, AI can be both a tool and a distraction.

    • MarsIronPI 11 hours ago

      > There are parts of software development which require understanding purpose and code, making good decisions, or having in-depth understanding in order to optimize. And there are parts that are just boring ceremony for using a library or doing some refactorings.

      I feel like maybe I'm preaching to the choir by saying this on HN, but this is what Paul Graham means when he says that languages should be as concise as possible, in terms of the number of elements required. He means that the only thing the language should require you to write is what's strictly necessary to describe what you want.

    • AnIrishDuck 11 hours ago

      The most critical skill in the coming era, assuming that AI follows its current trajectory and there are no research breakthroughs for e.g. continual learning, is going to be delegation.

      The art of knowing what work to keep, what work to toss to the bot, and how to verify it has actually completed the task to a satisfactory level.

      It'll be different from delegating to a human; as the technology currently sits, there is no point in giving out "learning tasks". I also imagine it'll be a good idea to keep enough tasks to keep your own skills sharp, so if anything it's kind of the reverse.

  • xnx 11 hours ago

    I'm happy to learn the essential complexity (e.g. business logic) but see low/no value in learning incidental complexity (code implementation details).

    • ares623 11 hours ago

      Spoken like a true CEO. LLMs make everyone feel like CEOs. Imagine a world where everyone thinks they're CEOs.

      • tharne 10 hours ago

        This is one of the things that frightens me about LLMs. Like MBA programs, they seem to make people dumber over time, while simultaneously making them feel smarter and more confident in their abilities.

        • xnx 10 hours ago

          Dumber? Different eras call for different skills. Is the US populace dumb because it doesn't know how to dip candles or render soap?

          • tharne 10 hours ago

            In your opinion, what skills are called for in this current era of LLMs?

            • xnx 9 hours ago

              Like any new tool, figuring out where, when, and how to use it appropriately.

        • ares623 10 hours ago

          There’s the smarter/dumber aspect yes. But there’s also the empathy aspect. You start looking at other people and their work and you immediately think “what a fucking waste. AI could’ve done that. What the hell is wrong with that person?”

      • xnx 10 hours ago

        Not CEO level at all, just one layer up from coding. Just as coding is one layer up from assembly, machine code, binary, logic gates and registers, etc.

    • waynesonfire 10 hours ago

      I am completely the opposite. I couldn't care less about what's in that packet of data. But I deeply care about how I move it from A to B and whether it gets there according to specifications.

  • redhale 10 hours ago

    Respectfully disagree.

    Why is the current level of language abstraction the ideal one for learning, which must be preserved? Why not C? Why not COBOL? Why not assembly? Why not binary?

    My hypothesis is that we can and will adapt to experience the same kind of learning OP describes at a higher level of abstraction: specs implemented by agents.

    It will take time to adapt to that reality, and the patterns and practices we have today will have to evolve. But I think OP's view is too short-sighted, rooted in what they know and are most comfortable with. We're going to need to be uncomfortable for a bit.

  • RobRivera 11 hours ago

    I thoroughly love the metaprogramming features of C++ for generating code for me.

  • sega_sai 11 hours ago

    I agree, writing some software requires learning and understanding. But sometimes one just needs something done (if one's job is not software engineering), and then LLMs are indispensable. Also, in some software projects (at least in my personal experience) there is stuff that is important and requires thought, and a lot of other stuff that's just boilerplate, connecting this to that, etc. I am more than happy to delegate that. It gives me more time to think about the stuff that's actually important.

  • keeganpoppen 6 hours ago

    The tool is actually FOR learning, not for replacing it. Acting like LLMs are somehow supposed to reflect back more bits than they take in as input is completely absurd to me. I bet my left kidney that I could take the patterns he describes in his book and get an LLM to reproduce them faithfully, and do it with fewer tokens than the book itself.

  • auggierose 8 hours ago

    No, what I think LLMs really teach us is that we have too many different languages. I think we need exactly one informal one, and exactly one formal one, and that's about it.

    • Jtsummers 7 hours ago

      What would those two languages be, or what out there now is most like what you think they should be?

      • jmogly 7 hours ago

        Python and Rust are probably the front runners

  • andai 10 hours ago

    >leaving us with zero internalized knowledge of the complex machinery we've just adopted

    To be fair I have this with my own code, about 3 days after writing it.

  • fsndz 11 hours ago

    we can't automate it anyway and vibe coding is overrated: https://medium.com/thoughts-on-machine-learning/vibe-coding-...

  • bossyTeacher 11 hours ago

    So is drawing and painting. Didn't stop many techies in here from using it. Many techies believe their tech is improving the world even when their tech is stealing people's copyrighted art or making people depressed.

    • Gigachad 11 hours ago

      Tech bros here could see that their tech is being used to round up people for camps or blow up children and somehow tell themselves they are doing the right thing.

      • ares623 10 hours ago

        I only work on the user experience for the drone operators, not on the parts that go boom. And our NPS scores are through the roof!

        • djohnston 9 hours ago

          Good point. We shouldn’t build any drones. Why even have defence research?

  • shadowgovt 11 hours ago

    Hm... I think I get what Mr. Joshi is saying, but the headline clashes with the notion that the essence of what we do is automation, and that includes automating the automation.

    This at first blush smells like "Don't write code that writes code," which... Some of the most useful tools I've ever written are macros to automate patterns in code, and I know that's not what he means.

    Perhaps a better way to say it is "Automating writing software doesn't remove the need to understand it?"

    • Jtsummers 11 hours ago

      > I think I get what Mr. Fowler is saying

      Martin Fowler isn't the author, though. The author is Unmesh Joshi.

      • shadowgovt 11 hours ago

        Thank you! Corrected.

  • Animats 11 hours ago

    Front end design should have been all drag and drop years ago. LLMs should be doing it now. If it were not for the fact that HTML is a terrible way to encode a 2D layout, it would have been.

    • Gigachad 11 hours ago

      It was drag and drop before we decided websites should work on different screen sizes. And that it should adapt to every size more elegantly than a word document randomly changing layout when things move.

      • xnx 10 hours ago

        Definitely a challenge, but many Windows 95 era apps also handled resizable windows and screens that could differ in resolution by 16x.

    • recursive 11 hours ago

      HTML is a good enough way of representing a superset of different layout types. It seems display: grid does most of the 2d constraint things that people always used to talk about. I don't know the state of the art for drag-drop grid layout builders, but it seems possible that one could be built.

    • nawgz 11 hours ago

      I often hear this, and to an extent I don't disagree. There is an absurd amount of complexity behind CSS/JS/HTML to make them function how they do. Browsers are true monstrosities.

      But what alternatives are really left behind here that you view as superior?

      To me, it is obvious the entire world sees very high value in how much power can be delivered in a tiny payload via networked JS powering an HTML/CSS app. Are there really alternatives as powerful as HTML that can pack such an information-dense punch?

      • Animats 11 hours ago

        > tiny payload

        Er, no. Go watch some major site load.

        • nawgz 9 hours ago

          Why the bad faith answer? Even a bloated, ad-ridden modern web bundle is tiny compared to native apps. And I'd be happy to hear about an interesting app delivery platform which makes 1s of MBs look absurdly large.

          I think you and I both know a 200kB gzipped web app can be a very powerful tool, so I don't understand what angle you're approaching this from.

    • IshKebab 11 hours ago

      I mostly agree. I think it isn't drag and drop because it's surprisingly hard to make a GUI builder interface that doesn't suck balls. Some manage it though, like QtCreator.

      I guess there is stuff like SquareSpace. No idea how good it is though. And FrontPage back in the day but that sucked.

      • genghisjahn 11 hours ago

        VB6. Yeah, it was battleship gray, but you could do amazing things.

        • LtWorf 11 hours ago

          Amazing, unless you wanted to resize the window that is.

          • genghisjahn 9 hours ago

            Dude, you could easily resize. There was the MDI form as well. You could snap controls at a fixed width to the edge of the window. VB6 is hanging out in the cooldown tent while the rest of the front end tech stack still has laps to go.

            • LtWorf 3 hours ago

              I know you could resize, but it didn't have layouts like Qt has, so the content of the window would not resize :D

          • shadowgovt 8 hours ago

            My favorite in that regard was Interface Builder, though I must admit it's been so long since I wrote Mac OS X software that I haven't had use for it in ages.

            The ability to drop components in and then move the window around and have them respond as they will in the live program was peak WYSIWYG UI editing. I have not experienced better and I doubt I will.

          • blibble 10 hours ago

            Delphi had that sorted.

        • shadowgovt 11 hours ago

          VB6 UIs are the color of getting work done. ;)

    • mkoubaa 11 hours ago

      The problem with drag-and-drop frontends is that the code generators supporting them end up tightly coupling the document with the code, which doesn't lead to good testability and scalability. I'm optimistic that LLMs could enable a visual design paradigm while still writing the code with good taste, but so far I'm not holding my breath.

  • dionian 11 hours ago

    > I recently developed a framework for building distributed systems—based on the patterns I describe in my book. I experimented heavily with LLMs. They helped in brainstorming, naming, and generating boilerplate. But just as often, they produced code that was subtly wrong or misaligned with the deeper intent. I had to throw away large sections and start from scratch.

    Well, I do this, but I force it to make my code modular and I replace whole parts quite often; these are tactical moves in an overall strategy. The LLM generates crap; however, it can replace crap quite efficiently with the right guidance.

  • dboreham 11 hours ago

    Spoiler: the author used LLMs to automate much of his development work.

    • yahoozoo 10 hours ago

      And write this article.

  • chemotaxis 11 hours ago

    In itself, I'm not sure this is a great argument. Putting shoes on a horse is an act of learning. Butchering your own pig is an act of learning. Sewing your own clothes is an act of learning. Writing your own operating system is an act of learning... but if you don't do any of that, you're not necessarily worse off. Maybe you just have more time to learn other things.

    Maybe there's a broader critique of LLMs in here: if you outsource most of your intellectual activity to an LLM, what else is left? But I don't think this is the argument the author is making.