16 comments

  • emerkel 9 hours ago

    I hear you here. I really do.

    I don't know where you are in your career; me, I am on the backend. But all the time I was working, the constant churn of new tools/languages/frameworks, the race to keep up with the vendors, just wore me out. And despite all that, building software honestly never changed much.

    I have been working with both Codex and Claude, and you are right, you can't trust them. The best practice I have found is to constantly play one off against the other. Doing that, I seem to get decent, albeit often frustrating, results.

    Yes, the actual building of the code is either over, or soon to be over. The part that I always considered the "art." I often found code to be beautiful, and enjoyed reading, and writing, elegant code all the time I was working with it.

    But the point of code is to produce a result. And it's the result that people pay for. As you mentioned with the evolution of development in your original post, the process and tools might have changed, but the craftsmanship of those using them did not.

    You make a fair point that this abstraction is different — prior layers were engineered and traceable, and an LLM output isn't. But I'd argue that makes the human in the loop more important, not less. When the abstraction was deterministic, you could eventually lean on it fully. When it isn't, you can never fully step away. That actually protects the craft.

    Until AI becomes a "first mover" god forbid, where there is no human in the chain from inception to product, there will always be a person like you who knows where the traps are, knows what to look out for, and knows how to break a problem down to figure out how to solve it. After all, as I have always said, that is all programming really is, the rest is just syntax.

  • geophph 4 hours ago

    I honestly share a lot of the same thoughts and I now feel dumber the more I use AI to build out things for me. It's made me lazy tbh. And sadly, it also helps me do the things at my job faster and "better", but only because "better" works since "good enough" is all that's needed. I get pretty lost if I think about it too deeply...

    That said, in response to this:

    > I am terrified that all the effort and time I spend learning it will become obsolete in the future

    I am of the belief that there's no way that effort and time that goes into learning something like Rust will be wasted. You'll learn stuff and gather a sense for things that might not concretely be applied, but will be helpful when it comes to reasoning about whatever comes next language wise. Learning often is about the journey and not just the destination. I think it'd still be worth it.

    Or at least ... that's what I tell myself now as I try to learn Zig.

  • TowerTall 3 hours ago

    I don't get it. Yes, if you just prompt it "make me an app" you learn nothing, but you probably also end up with an app that is crap at best.

    If you instead "promote" yourself to architect or lead dev and steer the AI as if it were a team of junior devs you must manage, you can learn a lot. You will have deep architecture discussions with the AI where you, together, explore various approaches and ways to do things.

    And if you do spec-driven AI development, where you write the specs, you will end up with an app that resembles the way you prefer apps to be written.

    Just because ai can cook something up in no time doesn't exclude you from being involved.

  • markus_zhang 8 hours ago

    For my personal projects, I use AI for discussion only. I treat it as a better Google and sometimes a fake psychiatrist. I don't really fully trust it, so I verify afterwards. If someone else wants to vibe code, I mean please do so, as long as you enjoy the process. Personally I enjoy the coding process, so I wouldn't want to copy/paste code, let alone let it write directly in some editor.

    For my work I use it extensively. With Cursor I act as the senior engineer: I break down problems and only write the parts that interest me. I trust AI with the other parts and do a review afterwards. AI prompting is a real skill. I don't like it but I don't like my work either.

    • geophph 4 hours ago

      > I don't like it but I don't like my work either.

      man. this actually seems so profound to me. I feel the same way overall as far as personal vs. work projects and AI use, but this wording hits it on the nose.

  • al_borland 9 hours ago

    > Is it only me, or does it feel like there are half-baked features everywhere now?

    This is the argument for actually learning, so you don't ship half-baked code, because the AI isn't good enough. The people telling you it is good enough likely have a financial interest in pushing the narrative that AI will do it all.

    > LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.

    This is another problem. A lot of code is written so that exactly the same thing happens every time. If AI goes in and changes the logic in subtle ways that aren't noticed right away when updates are made, because no one understands the code anymore, that's a problem. In hobby or toy apps, no one cares, but in production code and critical systems, it matters.

    • raw_anon_1111 3 hours ago

      AI is not changing your logic after it produces your code; you aren't putting your CLAUDE.md file into your build system as the sole input and regenerating your code each time you deploy.

  • bad_username 2 hours ago

    Sounds like you didn't lose the ability, you lost motivation. Why learn Rust, you say, if an LLM can crank out a Rust app for me, and it will be good enough?

    LLMs may have removed the critical need for a SW engineer to know details, like the syntax of Rust or the intricacies of its borrow checking semantics. But LLMs, I maintain, didn't remove the critical need for an engineer to learn _concepts_ and have a large, robust library of concepts in their head. Diverse, orthogonal concepts like data structures, security concerns, callbacks, recursion, event driven architecture, big O, cloud computing patterns, deadlocks, memory leaks, etc etc. As long as you are proficient with your concepts, you will easily catch up with the relevant details in any given situation. Once you've ever seen recursion, for example, you will find no trouble recognizing it in any language.

    That's the beauty of LLMs: you don't _have_ to be good at technical details any more. But you still have to be very good with concepts, not just to be able to use LLMs properly, but also to _be in control_ of their work. LLM slop is dangerous not because of incorrect details like bad syntax. It is dangerous because it misplaces concepts: it may use a list where you need a hash map and degrade performance, it may forget a security constraint and cause a data leak, or it can be specific where it needs to be general, etc. An engineer needs to know and check the concepts if they want to remain in control. (And you absolutely do want that.)
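    The list-vs-hash-map point above is easy to demonstrate; a minimal sketch (the data sizes and variable names here are my own, not from the comment):

    ```python
    import timeit

    # An LLM might reach for a list where membership tests call for a set.
    items = list(range(100_000))
    as_list = items          # 'in' scans every element: O(n)
    as_set = set(items)      # 'in' hashes once: O(1) on average

    needle = 99_999          # worst case for the list: the last element

    list_time = timeit.timeit(lambda: needle in as_list, number=1_000)
    set_time = timeit.timeit(lambda: needle in as_set, number=1_000)

    # Both containers give the same answer, at wildly different cost:
    # the concept (hashing) is what matters, not the syntax.
    assert (needle in as_list) and (needle in as_set)
    assert set_time < list_time
    ```

    Both versions are syntactically flawless and behave identically on small inputs, which is exactly why a conceptual review catches what a quick skim of the diff does not.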

    But it is impossible, or very impractical, to just learn an abstract concept out of thin air. The normal way to learn a concept is to see its concrete instantiation somewhere, in all its detailed glory, and then retain its abstract version in your head.

    So, the only way to stay relevant and stay in control is to have a robust concept library in your mind. And the only way to get that is to immerse yourself in many real technical situations, the details of which you must crack first, but free to forget later. That is learning, and that is still important today in the age of LLMs.

  • raw_anon_1111 4 hours ago

    Context: I’m 51 years old, started programming in 1986 in assembly on an Apple //e, wrote C and C++ on every platform imaginable from mainframes to Windows CE devices between 1996 to 2012 and part of my job has always been to write production level code.

    > I am deeply sad that we may be losing the craftsmanship side of programming;

    Absolutely no company is paying you for your "craftsmanship". You are getting paid to add business value, either by making the company more, or saving it more, than your fully allocated cost of employment.

    > Until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic. Therefore, we cannot treat their outputs as just another layer of abstraction.

    The output of the LLM is deterministic code. No one is using an LLM in production to test whether a number is even.

    You can run unit and integration tests on the resulting code just like your handcrafted bespoke code. When I delegated tasks to more junior developers, they weren’t deterministic either.
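    To make that concrete with a hypothetical sketch (the `is_even` helper below is my own illustration, riffing on the even-number example above): once the LLM's output is committed, it is just ordinary deterministic code, and ordinary tests pin its behavior.

    ```python
    # Hypothetical: imagine an LLM generated this helper and it was committed.
    def is_even(n: int) -> bool:
        return n % 2 == 0

    # Unit tests don't care who (or what) wrote the code; they verify the
    # committed artifact, exactly as they would for handwritten code.
    assert is_even(0)
    assert is_even(-4)
    assert not is_even(7)
    ```

    The non-determinism lives in the generation step, not in the artifact you ship, which is why the same review and test discipline still applies.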

    > For example, I genuinely want to invest time in learning Rust

    I specialize in cloud + app dev consulting. I know CloudFormation like the back of my hand. I’ve been putting off learning Terraform and the Amazon CDK for years. Last year, I had a project that needed the CDK and then another project that required Terraform. I used ChatGPT for both and verified they created the infrastructure I wanted. Guess what I’m not going to waste time doing now? The client was happy, my company got paid, what’s the point?

    > As a developer, I am confused and overwhelmed, and I want to hear what other developers think.

    If you are an enterprise developer (like most developers are), your job has been turning into a commodity for a decade, because there were plenty of good-enough backend/full-stack/web/mobile developers and it's hard to stand out from the crowd. AI has just accelerated that trend.

    This is in no way meant to hold myself up as more than an enterprise developer who happens to know how to talk to people and "add on to what Becky said" and "look at things from the 1,000-foot view".

    By definition, the AI of today is the worst it will ever be.

  • aristofun 8 hours ago

    I've lost the ability to do the basic and advanced arithmetic and algebraic calculations I learned in school.

    That didn’t stop me from getting a phd.

    If you think it’s all there’s to programming that llm spits out, then the problem is in you somewhere, not in llms.

  • ljlolel 10 hours ago

    Claude is a VM. Programming languages are dead https://jperla.com/blog/claude-is-a-jit

    • dokdev 10 hours ago

      As the article states, it is a "wild experiment". I wouldn't let AI control anything serious end to end. Also, if Claude really becomes a JIT, it is going to be an expensive one.

      The idea is interesting though.

  • lhmiles 10 hours ago

    Block the DNS on your router for two weeks and you will feel alive again

    • dokdev 10 hours ago

      LoL :) That makes sense, but what if a new AI model is released while I am offline?

      • AnimalMuppet 10 hours ago

        You will not suffer much from not adopting a new AI model for two weeks after release.

  • krickelkrackel 10 hours ago

    It's like going to a gym with automated weights just to watch them being moved: our mental muscles aren't being trained anymore.