Multiple times per week I have the same conversation. It goes something like this:
- AI will make developers irrelevant
- Why?
- Because LLMs can write code
- Do you know what I do for a living?
- Yes, write code?
- Yes, about 2-5% of the time. Less now.
- But you said you are a developer?
- I did
- So what do you do 95-98% of the time?
- I understand things and then apply my ability to formulate solutions
- But I can do that!
- So why aren't you?
The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.
Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
This is a bit of a glib answer. Most of the time is spent coding, which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.
OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together. They went in with a plan, but reality didn't agree, and they're on a tight schedule.
Most of the time is spent figuring out what the right thing to do is, not writing the implementation. Sometimes the process of writing the implementation surfaces new considerations about what the right thing is, but still, producing text to feed to a compiler is not the bulk of the work of a software engineer. The bulk of it is to unearth requirements and turn them into repeatable software.
Glib is called for. The amount of information asymmetry that's still on the table as vibe coders and vibe engineers and vibe doctors emerge is staggering. Professional experience is still incredibly valuable. Most software developers might spend more than 6% of their time coding, but no senior developer is banging their head for hours over typos.
Try to write a design doc before you implement something (which people find they need to do for LLMs to work at all anyway). You’ll find that you spend much less time actually writing code.
Write proper API documentation laying out the assumptions and intent, generate good API docs, and write a design and architecture document (which people find they need for LLMs to work at all anyway). You’ll find that you spend a lot less time reading code.
> which people find they need to do for LLMs to work at all anyway
Everything we have to do for AI to function well would help humans function better too.
If you take the things you do for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written.
Getting the code into a state where it actually does what you want takes time - but a lot of that is research, testing, experimentation, documentation, etc. Those can be faster with AI assistance but you still need to bang on it enough to make sure it works right.
There are also those for whom that percentage is higher, let’s say 6-50%.
> I understand things and then apply my ability to formulate solutions
The AI is coming for that too.
You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.
People knocking out Jira tickets and writing CRUD webapps will often end up with their livelihood taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
Agree. It is as if two totally separate groups are arguing.
One is a very tiny slice of specialty/rare industries where code is critical but a small part of overall project costs. I can see that if code/software is 5% of the overall cost, even heavy use of AI for the coding part is not moving the needle. So people in this group can feel confident in their indispensability.
The second group is much larger and peddling CRUD/JS frontends and other copy/paste junk. But as per industry classification they are just part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; it is playing out right now, with tons of them getting laid off. And a whole lot of people with IT degrees and certifications are not finding any jobs in this field.
I don't mean this as a snarky jab. It's coming for anything software. I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming: they all seem equally threatened; it's just a matter of time. The front end case is easy to see in the AI web front ends, but everything else is still easy pickings.
> I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend
That is not hard. It’s just tedious and very slow to do manually. The hard part would be designing the USB dongle and ensuring that the associated software has good UX. The reason you don’t see kernel devs REing devices is not because it’s impossible or because it requires expert knowledge. It’s because it’s like counting grains of sand on the beach.
It is irrelevant whether complex frontends would be easy for AI or not. To me the questions are: 1) how many unique complex frontends are needed out of the total frontends that millions of sites out there require, and 2) will there be an increase in the need for such frontend engineers so that other displaced folks can land a job there?
I think the number will be far too small to have any positive impact on IT engineers' overall job prospects.
There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.
However, even out of that 80% of my time, what fraction is actually spent "writing code"?
AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback
There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
>AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
> - Understanding the problem
> - Waiting for the build system and tests to run
> - Manually testing the app to make sure it behaves as I'd like
> - Reviewing the diff to make sure it's clear
> - Uploading the PR and writing a description
> - Responding to reviewer feedback
Which of those do you think it doesn't help with?
That may be true, I’m not gonna say one way or the other, but if AI comes for that then almost all knowledge work is effectively dead, so all that’s left would be sales or physical labor.
I wonder though, can AI make the next JS framework? I mean that in sincerity; there was the leap from jQuery to React, for example. If an AI only knows jQuery and no one makes React, will React come out of the AI?
People didn't leap from jQuery to React. It's a lot easier to imagine an AI looking at jQuery and [insert any server side MVC framework] and inventing Backbone.
The history of the last 250 years was moving from agriculture to industrial work to service work. Now the last frontier is starting to be overtaken by automation too.
(And in all of those transitions millions were left behind without work or with far worse prospects. The people that took the new jobs were often a different group, not people who knew the old jobs and were already in their 30s and 40s.)
And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?
> The history of the last 250 years is inventing new professions as old ones are automated away.
Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.
The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).
If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.
>> I understand things and then apply my ability to formulate solutions
> The AI is coming for that too.
If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.
It really doesn't have to come for everything to feel like it's taking everything. If it eliminates 10% of white collar jobs over the next decade, the impact will be felt everywhere.
5+ years in the software world is like 30 years in others... So, given the lacking use-cases and the humongous amounts of capital already wasted on chatbots, it's more like "we" are closer to closing curtains than to "just started"...
Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can see if a solution is better than another, but it can't generate the best one from the start. If you trust it, you'll get sub-par solutions.
This is definitely an agent problem instead of an LLM problem. Anybody got something explorative like this working?
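A minimal sketch of what such an explorative loop could look like, assuming a hypothetical `llm` client with a single `generate(prompt, temperature=...)` method (not any real library's API): sample several candidates at high temperature, then use the model only to verify which is better.

```python
# Sketch of an explorative generate-then-verify loop. The `llm.generate`
# client is hypothetical; the shape is the point: the model proposes many
# candidates, but is only trusted to compare them, never to produce the
# best one in a single shot.

def propose(llm, problem: str, n: int = 8) -> list[str]:
    # Sample n independent candidate solutions at high temperature.
    return [llm.generate(f"Solve: {problem}", temperature=1.0) for _ in range(n)]

def better(llm, problem: str, a: str, b: str) -> str:
    # Verification step: the model judges which candidate is better.
    verdict = llm.generate(
        f"Problem: {problem}\n\nSolution A:\n{a}\n\nSolution B:\n{b}\n\n"
        "Which solution is better? Answer with exactly 'A' or 'B'.",
        temperature=0.0,
    )
    return a if verdict.strip().upper().startswith("A") else b

def tournament(llm, problem: str) -> str:
    # Pairwise-reduce the candidate pool to a single winner.
    candidates = propose(llm, problem)
    best = candidates[0]
    for candidate in candidates[1:]:
        best = better(llm, problem, best, candidate)
    return best
```

The verify step is the cheap, reliable part here; the generate step is where the sub-par-solution risk lives, which is the point above.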
Even if AI advances continue, for quite a while there's likely still going to be the 'Steve Jobs' role. That is, even if AI coding agents can, in the future, replace entire teams of SWEs, competently making all implementation decisions with no guidance from a tech-savvy human, the best software will likely still involve a human deciding what should be built and being very picky about how, exactly, it should externally behave.
I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.
Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.
Until the day LLMs are no longer statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently not by adding another 10T to the LLM parameter count.
> because it is actually a statistical machine with a context limit.
So are humans.
Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)
And I argue that current LLMs have surpassed many of my capabilities already.
For example, GPT/Opus can understand and document some ancient legacy project I've never seen before in minutes. It would take me a week+ to do the same, and my report would probably have more mistakes and oversights than the one generated by the LLM.
We are not pre-trained using the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.
We are much more limited, but we fundamentally work differently. Hence adding more parameters, like certain companies are doing, isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something that's completely different.
I think it's doable, I just don't believe it's LLMs, and I don't think anyone knows yet what it is.
Yours is a “God of the gaps” argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.
I fall into the "pessimistic heavy user" camp. I burn thousands of $ worth of SOTA tokens monthly, but it just makes me more acutely aware of the limitations, the amount of work I need to do to work around them, and the kinds of decisions I should reserve for myself instead of trusting the LLMs.
To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.
He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.
Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.
I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even known to those who are part of them.
And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.
> People knocking out Jira tickets and writing CRUD webapps will often end up with their livelihood taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
You make it sound like it is a bad thing that certain tasks become easier.
I spent a lot of time writing CRUD stuff. Because the things I really want to work on depend on it. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?
It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because it puts people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.
If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.
>To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
One person needs to do that. What about the other 100, who aren't doing that currently to begin with, but are doing the AI-automatable work?
We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.
If they can do those things they can effectively replace any white collar job. That’s about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.
Imagine 45% of higher than average paying jobs gone.
If that happens we’ll either figure out a new economic system, or society will collapse.
Also, saying robots are walking just fine is misleading for any definition of "just fine" that is anywhere near as good as a human.
Look at how the billionaires are talking about AI: Their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them--they just need to wait for the state of the art to improve. Once AI is "good enough" if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent--the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.
I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, the world they live in right along with the rest of us will get noticeably shittier, etc.
More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.
> We're just going to eventually become economically irrelevant and die off.
As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors, beyond sanctimonious "learn to code" advice. So software folks can't expect any solidarity or help from others.
The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.
It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.
Put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.
45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?
And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.
It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.
"Automating half the jobs" is the same as "double productivity per worker".
When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!
What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?
We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.
Heck, even long before LLMs, about 10% to 30% of my code was already automatically generated: by tooling, by IDLs, and by my editor just being able to infer what my most likely input would be.
> We have robots walking just fine now, by the way.
I don't think you got the point I was trying to make.
True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.
>> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?
Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.
Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.
What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.
>> Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
Anecdotal evidence to support this:
I work with both dev and design teams. Upper management has already gone through several layoffs and rounds of offshoring on the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets and never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management is "Start to figure out how to leverage AI, we're going to be in a hiring freeze for the next 10 months."
The double whammy of losing onshore team members and getting no help from management to fix the problem they just created, beyond being told to figure out how to use AI to keep up, is pretty staggering.
I would echo what one of the devs told me: "If this is the new 'AI era' then you can count me right the fuck out of it."
No, I never believed the full self-driving tale from Tesla, but as the LLMs improve, my personal estimate for the date of human-level AGI is rapidly moving toward "present". Before GPT-2 I had it somewhere around 2100; at GPT-2 I thought maybe 2060 if we were lucky. Now I think it is 2035 or maybe even sooner.
I like to see the optimism, even if I don't share it. I think it's incredible hubris that humans think we are about to reinvent our own level of intelligence, just because we made a machine that talks pretty.
I remember being that kid in high school who pushed hard on math and logic problems, which contributed to me being very technical and taught me to push through painful mental challenges on the regular. Out of most of my graduating class, not many of us went on to become engineers, for a reason: it isn't easy work by any means, and I'm guessing it's quite draining for people who don't use their brain like we do.
So while AI will change the industry, I don't see any reputable company firing the smartest ones in the room for junior-level intelligence.
Even with it advancing someone has to be responsible for when it screws up which we know it will.
I know an accomplished CS professor, ACM fellow, cited in Knuth's TAOCP (as well as being an easter egg!), who still hunt-and-pecks. In fact, hunt-and-pecks incredibly slowly.
That's very true, which is why I find it insulting that so many AI proponents use the word "typing" to refer to writing code. It carries an implication that if you enjoy writing code by hand, you enjoy a mindless activity.
I've always told my Jr Engineers to "think twice, code once".
If I gave them a task and they immediately started typing it out, I would tell them to stop typing and ask them to explain to me what they were doing; they'd often just spit out what they thought the code should do, and I'd often point out edge cases they'd missed and would have missed had they just spit out code and a PR, wasting everyone's time. I would also insulate them from upper management to give them time to actually think (e.g. I wouldn't be coding, so they could think, then code).
To your point and to the GP's point, and one point I keep raising with LLMs: "typing is not where my time sinks are".
Isn't the long term trend just that we don't need as many engineers, not that there will be no more software engineers?
There's another, different loop I keep seeing, which is:
- Company A lays off engineers citing AI efficiencies
- People say it's because of over hiring during 2020
- Company B lays off engineers citing AI efficiencies
- People say it's because it was never a good business
- Company C lays off engineers citing AI efficiencies
- People say it's because there's a recession
I guess to cite a counter example, unemployment is still super low, software jobs are still holding up, but the bear case is that eventually 5% of people will be able to do what people do today, and the demand for software won't grow at the same pace.
> Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
Thing is, natural selection will take care of you at the same time. Because you'll also come to rely on products they make, or services they offer, either directly or indirectly. So eventually, you too, will suffer the consequences of the enshloppification.
This is exactly it. The speed of light has not changed: we're limited by our ability to understand the system, and make decisions about what to do next. AI will speed that up, but the core work is the understanding and decision-making.
Saying otherwise is sort of like reducing the task of writing a novel to typing.
And most of the time the statistical aspect of LLMs results in a less creative solution that is more expensive to run and harder to maintain. LLMs at this stage are good at scaffolding, generating the boilerplate you do not want to write, and gluing things together quickly. It just makes engineers faster.
Something missed in this: computer science was a highly theory-driven discipline where people were taught how to think critically about solving complex problems. Industry complained they weren't teaching enough programming skills, so they dumbed down the thinking part and emphasized the vocational part. Now the vocational part is virtually useless, and the grounding of theory applied to complex problems is suddenly really relevant again. Schools will take time to retool their programs and teaching staff, and two if not three generations of graduates will have entered a work environment that doesn't need what they learned.
As someone 35 years into my career, I agree this is the most exciting part of my career. I love programming and I do it all the time, but I do it by reading code, course correcting, explaining how to think about the problems, and herding cats, just like working with a team of 100 engineers. But the engineers I’m working with now by and large listen, don’t snipe me on perf reviews, and aren’t hallucinating intent based on hallway conversations with someone else. This team of AI engineers I have can explain to me their work, mistakes, drift, etc. without ego, and if it’s not always 100% correct, it’s at least not maliciously so. It understands me no matter how complex the domain I reach into; in fact it understands the domain better than I do. So instead of spending a few months convincing people with little knowledge or experience that X is a good idea, I can actually discuss X, explore whether it’s a good idea or not, and make a better informed decision. I’ve learned more in these discussions than I’ve learned in decades of convincing overly egoistic juniors and managers to listen to me about something I’m an industry authority on.
However, I see very clearly that we will need very few of the team of 100 human engineers I can leave behind in my work. Some of us will be there in a decade, but maybe less than 1 in 10. This is going to be a more brutal time than the dotcom bust for CS grads, and I don’t think it will ever improve, mostly because we simply won’t need the “my parents told me this makes money” people; just the passionate folks remain. But even then, we face a situation where the value of any software developed is very low, because so much software is being developed. It’s going to turn into YouTube, where the software that is paid for is very small relative to the quantity of software developed. We already see this in the last few months with the rate of GitHub projects created. If the value of any software created is low, the compensation of the creator will be low unless they’re very rare talents.
That doesn't hold because the goal for executives is to increase revenue and the main sales pitch of Anthropic et al is to pay for agents instead of paying for engineers. That means 80% of the workforce is out no matter what. Whether or not one belongs to the remaining 20% is a different story, but obviously not all of us will be there.
> I understand things and then apply my ability to formulate solutions
The least experienced developer writes the most code. Juniors will spend the whole day in the IDE: typing, testing, typing, etc.
Senior developers will go to a park for a few hours, think, then come back and spend an hour or less typing code that just works, or write nothing at all, maybe even delete code.
Instead they might update documents, or ask for clarification about edge cases they've found or errors in the planning that weren't considered.
Since software is in every industry of man, I think you'll need to mention which industry this perspective is coming from. This is definitely NOT the case in certain industries.
As for those who claim to be developers who code no more than 5% of their time and resort to arguments like "we haven't been writing machine code by hand for 50 years, how is AI different from a higher level language?": that's not commenting, it's shilling for the AI corpocracy on HN.
If you spend 95% of your time on that stuff, you better be working on like critical infrastructure where nothing can go wrong, otherwise you are in an incredibly dysfunctional company.
I agree it would be absurd for it to take 95% of your time.
I have, however, seen that it takes a lot more time than one would think.
I did some contracting work for a severely dysfunctional meeting heavy organization and it was about 2 hours of meetings for every hour of real technical work!
Ah yes, agreed. If it's more than 90%, it just signals to me that a developer's skills are probably being wasted too much on business/coordination stuff.
But I guess if we mean actual time tapping your keyboard making code, then it's true some days for senior+ devs, but definitely not for technical work overall.
Even when it’s not dysfunctional, you spend a lot of time on communication and reading stuff other people wrote (including code). It’s very rare to work in isolation.
I guess it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code. If we say it's just typing, then 95% is not absurd, no.
You’re a "developer", I guess, but not a coder (anymore), which is what your interlocutors are probably asking about. You’ve migrated to a middle manager job, not something they can probably just start doing competently. Essentially you’re agreeing with their initial sentiment, that coders will be made irrelevant.
You miss the major factor in your compensation: pricing pressure due to supply/demand.
By removing all the junior engineers, you've fundamentally changed the market forces longer term and most people expect that to negatively impact you in the supply demand curve regardless of whether or not the statements you've made above are true, which they most likely are for senior engineers.
In removing junior developers, leaving only senior developers, wouldn't that reduce supply, making the price go up, not down? It's been a while since Econ 101 for me though.
Weird. I call myself a developer because I don't have an engineering degree from an ABET-certified engineering program.
I recognize, in some capacity, that this isn't the norm and in the US "professional engineer" is protected and not simply "engineer", but it feels akin to stolen valor to me.
If there were a license in the US for it, I’d agree with you. But as is, if you are “doing” engineering, you’re an engineer.
If you are a licensed engineer of some kind, you’d state that outright.
The equivalent of stolen valor would be claiming to be a licensed software engineer; except there is no such license so it would also be fraud, misrepresentation, etc.
> If there were a license in the US for it, I’d agree with you.
Yeah, that is basically the thing in my country. You can't call yourself an engineer without passing a test, but I can't take it because there isn't one for software engineering.
Same thing for freelancing. Freelance jobs are defined in a list, and other jobs cannot benefit from the simplified tax rules that freelancers enjoy, but that list was written before software development was a thing.
I'm a software dev in the US and I never call myself "engineer" in that capacity. Always "programmer" or "developer".
I agree. Engineers have to clear a much higher bar. Even though my career was spent in medical diagnostic software where we had to get 510k clearance, I was still keenly aware that this was a fundamentally different activity from actual engineering.
I'm an electrical engineer that moved to software engineering and there's a lot of commonalities between what I do now and what I did previously as an electrical engineer. The bar might seem high, but that's the only way I know how to work, honestly.
On the other hand, with the modern division of labour in a lot of companies and with the rhetoric I see here in HN and in other places: a lot of developers are indeed not even close to being engineers.
I dunno, man. I've been doing this for 20+ years and I think we're at a really important fork in the road where there are two possibilities.
The first is that AI is achieving human-level expertise and capability, but since they're now being increasingly trained on their own output they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything" and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts and the tooling has changed but total job market collapse is unlikely.
The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.
If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."
I'll also add another factor: it's become increasingly clear at our company that AI-enabled humans are getting to the bottom of the backlog of feature ideas much quicker. This makes the 'good ideas' part of the business the rate limiting step. And those are definitely not increasing with AI, beyond that generated by the AI churn itself ("let's bolt on a chat experience or an MCP!")
So maybe the coding assistants don't get a 10x improvement any time soon, but we see engineering job market contraction because there aren't really enough good ideas to turn into code.
Yes, but as the price of getting work done goes down, a lot of companies that were priced out of custom software before now can hire devs, as the value hiring a few can provide just goes up. Fewer people per product, absolutely. No more teams of 10 or 20 working on the same thing. But there's so much out there that doesn't get done at all because you'd never be able to afford it.
Simple marginal thinking: When you lower the price of something, it gets more use cases. A rich person might not take even more flights because they are cheaper, but more people will consider flying when they wouldn't have at old prices
You are supposing that AI achieving human-level expertise and capability is a given. I am not so sure. Right now that's much further from the truth than one might think at first glance.
LLMs know nothing but are great at giving the illusion that they know stuff. (It's "mansplaining as a service"; it is easier to give confident answers every time, even if they are wrong, than to program actual knowledge.) Even your first case seems wildly optimistic. The second case is a lot of "maybes" and "we don't know how but we might figure it out" that seems like a lot to bet an entire farm on, much less an entire industry of farms.
We sure are looking at a shift in the job market, but I don't think it is a fork in the road so much as a Slow/Yield sign. Companies are signalling they are willing to take promises/hope as a reason to cut labor costs, whether or not the results are real. I don't think anything about current AI can kill the software development industry, but I sure do think it can do a lot to make it more miserable, lower wages, and artificially reduce job demand. I don't think this has anything to do with the real capabilities of today's AI, and everything to do with the perception being enough of an excuse; companies were always looking for that excuse. (Just as ageism has always existed. AI is also just a fresh excuse for companies to carry on aging experience out of their staff, especially people with memories long enough, or well schooled enough, to remember previous AI booms and busts.)
But also, yeah if some magic breakthrough makes this a real "buggy whip manufacturer moment" and not just an illusion of one, I don't mind being the engineer on that side of it. There's nothing wrong about lamenting the coming death of an industry that employs a lot of good people and tries to make good products. This is HN, you celebrate the failures, learn from them, and then you pivot or you try something new. If evidence tells me to pivot then I will pivot, I'm already debating trying something entirely new, but learning from the failures can also mean respecting "what went right?" and acknowledging how many people did a lot of good, hard work despite the outcome.
Saying being a programmer is about writing code is a bit like saying being an artist is about drawing lines on a canvas.
Yeah, technically drawing lines on canvases may be a very important part of being a painter, but it is hardly the core of what makes or breaks great art.
What you described are senior developers and system architects.
Junior developers spend most of their time writing code (when they're not forced to attend pointless standups, because Agile/blah/blah)
> The developers who still think their job is about writing code will perhaps not have a job in the future.
So you're saying the same thing everyone else is saying. SWEs won't go away, but they will be greatly reduced, because those whose job is about writing code -- junior devs -- will be replaced.
(How will Sr Devs in the future be created? That's the question, isn't it.)
As an extreme example, maybe we’ll see long-running internships and trainings like doctors experience. Doctors don’t start their career until ~12+ years of prep and training.
Pragmatically, software development has a lot of examples of teenagers making apps and college students building software companies. In the 12 years it takes for training, low-knowledge workers could be continuously vibe coding replacements for most of the commercial software products they’d be hired to build. So I doubt we’ll treat software development as a rarefied high-skill job.
>- I understand things and then apply my ability to formulate solutions
- Well, and AI can do part of that too, maybe more of it soon.
- ...
- Besides, you don't need 10 guys in a team to do that. A couple of them will do, then AI will do the coding. What will happen to the rest?
- ...
I think the future is pretty up in the air in this respect, but my guess is that AI will just lead to another shift in the set of knowledge that a 'real programmer' is expected to have. I'm old enough to remember when people would make fun of web developers for 'programming' using HTML and JavaScript. And of course, back in the day, you couldn't be a real programmer unless you wrote assembly language. In a few years' time, being able to write (as opposed to read) source code in any specific programming language will probably become a niche skill. The next generation will be able to read Python to about the same extent that I can read x86 assembly.
Perceptions of what knowledge counts as 'low level' are constantly shifting. These days, if you write C, you're a low-level, close to the metal programmer. In the 70s, a lot of people made fun of Unix for being implemented in a high-level programming language (i.e. C) rather than assembly.
Note that just because you know the job is understanding things, the manager who'll boot you and leave you without income probably doesn't. They'll just get their political cookie points for saving money by replacing you with AI.
Pure wage workers should consider dropping the attitude about how tech progress will just put their inferiors in the same line of work out of a job (hrmph, good riddance, etc.). Because this pseudo-progress could creep up on them as well.
Then you won’t have this just world of the deserving workers at all. Just formerly deserving workers and idiot billionaires like Musk (while the robots do all of the work).
I normally say that I have zero concerns regarding AI in terms of employment. At most I am concerned with learning the best practices of AI usage to stay on top of things.
Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.
It's a moderately useful tool, but hellishly expensive when not being subsidized by imbeciles that dream of it undermining labor. A fool and his money should be separated anyway.
What I am really concerned about is the incoming economic disaster being brewed. I suspect things will get very ugly pretty soon.
In my experience, it's been the complete opposite. The very experienced engineers that are actually willing to use top of the line tooling are much better than they were before, including those that are over 40, and over 50.
Part of the practical degradation of traditional programmers over time has always been concentration and deep calculation, just like in chess. The old chess player knows chess much better than a 19 year old phenom, but they cannot calculate for that many hours at the same speed as before, so their experience eventually loses to the raw calculation. Maybe at 35, or at 45, but you are just not as good. Claude Code and Codex save you the computation, while every single instinct and 2 second "intuition", which is what you build with experience, is still online.
It's not just that it's a more fair competition: It's now unfair in the opposite direction. The senior that before could lead a team of 6 is now leading a team of agents, and reviewing their code just as before. Hell, it's easier to get the agent to change direction than most juniors around me, which will not be easy to correct with just plain, low-judgement feedback.
I highly doubt that a significant portion of farm labor became salesmen or researchers. Builders? I could see that, but robots already replaced a portion of those too.
Less job creation is almost certain for tech, but some people with high IQ get way more things done; they already do. This will spread to robots and other areas. Robots are not autonomous yet, and maybe that will take decade(s), but meanwhile a few operators will lead them in a more productive way? That's my bet. It's a clear, logical process with iterations. A lot of things are getting faster with AI, except energy production in some places in the world!
Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.
It also feels like the hiring "signal", which was always weak before, is just completely gone now, when every job you do advertise receives over 500 LLM written applications and cover letters that all look and feel the same.
The pro-athlete comparison in this article is a bit silly IMO: there are obvious physical issues that occur with aging if you rely on your muscles etc. to make money. If you compare to other fields of knowledge work, such as law or medicine, there are loads of examples of very experienced, very sharp operators in their 40s and 50s.
Honestly, AI doesn't feel like it's affecting hiring needs from the trenches. We don't have engineers sitting on their hands because AI wrote up everything the leadership could imagine.
Instead, we get an economy that feels like it's on the cusp of a fall or at the very least a roller coaster. Poor tax incentives to hire. ZIRP is long gone. And the hiring managers are overrun with slop.
But bosses are happy to say it's AI because that makes you sound in control.
> Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.
My guess is companies overhired during COVID, and between that experience and an uncertain market they don't want to make the same mistake twice.
I think the hype peaked around 2016 where Democrats were portrayed as out of touch for saying laid off coal miners could just "learn to code". By 2019 it was a cliché used to mock laid off journalists on Twitter.
This is a great question that rarely gets answered. It’s partially that a ton of students went to school for computer science because they saw how much money could be made; another fraction is people that switched into software from related fields, maybe with a boot camp or something.
It didn't. The elites never want to admit that they have failed to efficiently use capital for the last 40 years. It's always the fault of workers that should never be trusted. Just continue trusting the elites as they ruined US manufacturing jobs, surely the same institutions won't fail the workers again!
> If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.
My parents were both construction workers. There is an understanding that you cannot lift heavy objects forever. You stop lifting objects and move to being a foreman, a supervisor... and if you are uncomfortable learning to get others to do work that you before have done yourself, you burn out your body entirely and the consequences are horrible.
This is factual reality, but it is also a parable that has been important for me to internalize about delegation in my own career. It is not irrelevant to AI use, but I don't think it slots onto it totally as neatly.
Software developers are more architects than plain programmers. You wouldn't make an architect lift heavy things; you want them to design how those heavy things are used.
> AI-users thus become less effective engineers over time, as their technical skills atrophy
Based on my experience, I think this will prove more true than not in the long run, unfortunately.
Professionally, I see people largely falling into two camps: those that augment their reasoning with AI, and those that replace their reasoning with AI. I’m not too worried about the former, it’s the latter for whom I’m worried.
My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth. Maybe it’s just the new “you can’t cite Wikipedia”, but she feels that since the pandemic, there’s a notable decline in the critical thinking skills of children coming through her classes.
We have a whole generation (or two) of kids that have grown up being told what to like, hate, believe, etc. by influencers and anonymous people on the internet. They’d already outsourced their reasoning before LLMs were a thing. Most of them don’t appear to be ready to constructively engage with a system that is designed to make them believe they are getting what they want with dubious quality.
> My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth.
I notice many of the adults in my life are doing this now as well.
> After being laid off, a programmer becomes a welder. One day while working, he suddenly muttered to himself, "It's been so long, I've even forgotten how to solve three sum". A coworker next to him quietly replied, "Two pointers".
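(For anyone who has also forgotten: a minimal Python sketch of the standard two-pointer approach the coworker is hinting at. Sort first, fix one element, then walk two pointers inward; the function name here is just illustrative.)

```python
# Classic two-pointer 3-sum: find all unique triplets summing to target.
def three_sum(nums: list[int], target: int = 0) -> list[tuple[int, int, int]]:
    nums = sorted(nums)
    triplets = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate first elements
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < target:
                lo += 1       # sum too small: move the left pointer right
            elif s > target:
                hi -= 1       # sum too big: move the right pointer left
            else:
                triplets.append((nums[i], nums[lo], nums[hi]))
                lo += 1
                hi -= 1
                while lo < hi and nums[lo] == nums[lo - 1]:
                    lo += 1   # skip duplicate second elements
    return triplets

print(three_sum([-1, 0, 1, 2, -1, -4]))  # [(-1, -1, 2), (-1, 0, 1)]
```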
I keep reading about how AI will be fine because people can just retrain for different careers. However, I never read what those careers are or who is going to pay for retraining.
I certainly don't have the money or time to go back to college and start a new career at the bottom.
The argument is that “that’s what always happened in the past”.
Which is true, but it’s true as long as it’s not true.
The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.
At a mechanistic level, the “we have always found other jobs” argument misses that the reason we’ve always found other jobs is that humans have always had an intelligence advantage over automation. Even something as mechanical as human input on an assembly line ultimately depended on the human ability to make tiny, often imperceptible adjustments that a robot couldn’t.
But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.
> Which is true, but it’s true as long as it’s not true.
It also isn't true. The story of the last 50 years has been that technology, especially computer and communications technology, has facilitated the concentration of wealth. The skilled work got computerized, or outsourced to India or China. That left U.S. workers with service jobs where they have much lower impact on P&L and thus much less leverage.
In my field, we used to have legal secretaries and law librarians and highly experienced paralegals. They got paid pretty well and had pretty good job security because the people who brought in the revenue interacted with them daily and relied on them. Now, big firms have computerized a lot of that work and consolidated much of the rest into centralized off-site back-office locations. Those folks who got downsized never found comparable work. They didn't, and couldn't, go work for WestLaw to maintain the new electronic tools. The law firms also held on to many of them until retirement or offered them early retirement packages, and then simply never filled the positions. It used to be a pretty solid job for someone with an associates degree, and it simply doesn't exist anymore.
The only thing keeping the job market together is the explosion in healthcare workers. My Gen-Z brother and sister in law are both going into those fields. In a typical tertiary American city, the largest employers are the local hospital and perhaps a university or community college. Both of those get most of their revenue directly or indirectly from the government. It's not clear to me how that's sustainable.
If it makes you feel better, I'm pretty sure it isn't sustainable. (But I'm not an economist so take that with a block of salt.)
I don't think anyone has the answers. It's just some of us are honest enough to concede we have no answers, while others promote an answer that aligns best with their belief system.
"It'll all work out."
"It's the immigrants/blacks/jews/whatever dragging us down."
"Nothing's going to happen and we can all continue doing the work we always have."
"Burn the rich."
Etc etc.
Not a lot of serious attempts out there at even getting a handle on the issues, let alone fixing them.
I'm also pretty sure in the past industrial transitions, many of the people who lost their jobs at the start of the change never found better ones. It took a generation or so for new opportunities to really be found and fine tuned and you're competing for those new roles with younger people anyway.
If ai does take a lot of white collar work, is it a lot of comfort that maybe jobs in a very different sector will be better in 20 years?
Did the younger people find better jobs? You used to have all these jobs for people who were maybe a bit smarter than average with good judgment. In the 1990s, the local community college used to advertise associates degrees for paralegals. That's a job that doesn't exist in the same way anymore thanks to computers. Now it's become an internship for kids with top credentials before they go to law school. Which is fine for them, but what about everyone else?
It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.
My understanding is that healthcare keeps growing because the large Boomer generation is aging. When they have passed, though, we should see a corresponding slide in healthcare growth.
It’s also not that true, and highly dependent on a lot of factors.
Anecdotally I see a lot of schadenfreude online about tech jobs after a decade or two of lecturing everybody from Appalachians in coal country, to Midwestern autoworkers, that they should just “learn to code.”
Totally agree, and would add another way “that’s what always happened in the past” is a terribly weak argument. Things might have always worked out at the societal level so far, but very often do not at the individual level. Countless successful craftsmen have had their livelihoods ruined by technological changes and spent their remaining years impoverished. How many people funding AI would be willing to throw their own life away for the good of some future strangers that may or may not be born? I'm pretty sure the answer is <=0.
> The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.
The Malthusian observation can still be true... It only has to be true once, and the only reason people say it isn't right now is industrial fertilizers and short memories.
It's not going to happen, just as it didn't happen for skilled industrial workers whose jobs got outsourced to China. The government will pay just enough in welfare to keep the situation manageable. Then they'll demonize you in the culture, as a Luddite, etc.
> However, I never read what those careers are or who is going to pay for retraining.
There aren't any careers, and if there were, you would have to pay. Corporations certainly won't, except in extremely rare situations where they have to in order to compete.
I think the idea of being an employee is fundamentally changing. Not saying it's good or bad, but it's shifting to a more entrepreneurial phase where people have to step out of their 9-to-5s and find ways to deliver value that others want to pay for.
We saw this pre-AI with Uber and DoorDash. I think as AI automation dies down and most companies are competing at a near-optimal level with the new tools, we'll need humans again in more traditional roles to build the next generation of innovations. And then the whole cycle will repeat.
That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase? They're at work in corporate settings with fixed, defined roles. Most workplaces are not wear-many-hats startup environments, but restricted roles with deliverables, deadlines, meetings, etc. Which leaves only out-of-hours time for "entrepreneurship", whatever that is.
GitHub project work on the weekends? That's not possible for most people in their mature/family years (or shouldn't be necessary - what about living life??)
What about people who have been out of work for a year and all they can do right now is deliver for Uber and Doordash so they can make rent and put some food on the table?
Are they ideal working conditions? No, but it's better than nothing: you can set your own hours, and you can leave when the next opportunity comes.
The second part seems obvious to me: the ones who are getting retrained pay. If it's some kind of formal education, then depending on where you are, maybe the state covers at least part of it.
Education in what, though? No idea. And if there was one answer, and it's true that there will be fewer software developers, you'd likely be competing with many people for few jobs.
Exactly. I have yet to read a single logically sound argument that even gives a hint of what those professions/jobs might be (remember, they have to be plentiful enough to employ large numbers of people, so "I quit my corporate job and am making more as a TikTok influencer" doesn't count). Remember that a new profession has to open up hitherto unknown revenue streams, otherwise there are no companies who will pay you.
At least in the US, the only major non-AI growth field seems to be healthcare to deal with the swell of baby boomers living longer than people have before.
But if we're waiting to be paid to retrain there, I wouldn't hold our collective breath.
Baby boomers have already started dying, though. The next generation is still going to be right there; it's just smaller. People will always be dying. Still, I wouldn't hold my breath if you're a young person in that field. Maybe, but maybe not.
Also, it's not necessarily true that there will be other great careers available. This seems to just be an assumption people are making.
Of course, there are jobs that will still require human labour for some time yet, but in reality a lot of jobs that require physical human labour are now done in other parts of the world where labour is cheaper.
Those which cannot be exported like plumbing or waitressing only have limited demand. You can't take 50% of the current white-collar workforce and dump them in these careers and expect them to easily find work or receive a decent wage. The demand simply does not exist.
Additionally, at the same time as white-collar jobs are being lost, an increasing number of "low-skill" manual labour jobs are also being automated. Self-checkout machines mean it's harder to get work in retail, robotaxis and drone delivery will make it harder to find work in delivery and logistics, and robots in warehouses will make it harder to find warehouse jobs.
It seems to me there is an implicit assumption that AI will create a bunch of new well-paid jobs that employers need humans for (which means AI cannot do them) and which cannot be exported abroad for cheaper. What well-paid jobs would even fit the category of being immune to both AI and outsourcing? Are we all going to be really well-paid cleaners or something? It makes no sense.
A lot of the advice we're seeing today about retraining as a construction worker or plumber seems to assume that there's unlimited demand for this labour, which there simply is not. And even if hypothetically there were about to be a huge increase in demand for construction workers, it would take years to have the machinery, supply chain and infrastructure in place to support millions of people entering construction.
The most likely scenario is that people will lose their jobs and will be stuck in an endless race to the bottom fighting for the limited number of jobs that are left in the domestic economy while everything else is either outsourced or done by robots and AI.
The better advice is to start preparing for this reality. Do not assume the government will or can protect you. When wealth concentrates, corruption becomes almost inevitable, and politicians have families to look after too.
Please take this seriously. Even if I'm wrong, it's better to prepare for the worst than to assume everything will be fine and you'll be able to retrain into a new well-paid career.
50% of the workforce was in farming near the end of the 1800s. Today, 2%.
40% of the workforce was in manufacturing in the early to mid 1900s. Today, 8%.
60+% of the current workforce is white collar. What will it be in 20 years?
LLMs are only a couple of years old; we have no idea where this will go. Maybe it will be a big hallucination, maybe we are looking at the very early version of farm and manufacturing machines.
The ENIAC was larger than a person; we now have watches that are significantly more powerful. Maybe in the future, your Apple Watch will have more compute than several racks of H100s.
When they came for the farmers, no one else cared - everyone got cheap and bountiful food.
When they came for the manufacturers, no one else cared - everyone got cheap and bountiful products.
Now they are coming for the white collar workers, and their highly paid laptop lifestyles.
This is the story that's been written since the Luddite revolts, as far as I know. The successors in that case were the capitalists, who spent a significant amount of time and money convincing the constabulary and political figures to side with them. People were shot and jailed in the worst cases. In the best case, workers were left without work or sent off to workhouses where they became indentured servants to the state.
The last workhouse closed in the 1930s.
That all started not because people were afraid jobs were going to be replaced by the new loom. People had been using looms for centuries. They were protesting working conditions: low wages, lack of social protections when people were let go, child labor, work houses, etc. There were no labor laws at the time to protect workers... but there were these valuable new machines that the capital owners valued greatly. The machines were destroyed as leverage: a threat.
Since the capitalists ultimately "won" that conflict, it has been written, by technocrats, that technological progress is virtuous and that while workers will initially be displaced, the benefit to society will be enough that those displaced will find productive work elsewhere.
But I think even capitalist economists such as Keynes found the idea a bit preposterous. He wrote about how the gains in productivity from technological advances aren't being distributed back to workers: we're not working less, we're working more than ever. While that isn't about displacement of workers, it is displacement of value, and the two tend to go hand in hand.
I think asking, "Where do I go?" is a valid question. One that workers have been trying to ask since the Luddites at least. Unfortunately I think it's one that gets brushed under the rug. There doesn't seem to be much political will to provide systems that would make losing a job a non-issue and work optional.
That would give us the most leverage. If I didn't have to work in order to live, I could leave a job or be displaced by the latest technological advancement, retrain into anything I wanted, and rejoin the workforce when I was good and ready. I wouldn't have to risk losing my house, skipping meals, living without insurance, etc.
I really wish seemingly intelligent people would stop using the abstraction analogy (like the article does). The key word is: determinism. Every level of abstraction (incl. power tools, C, etc.) added a deterministic layer you can rely on to more effectively do whatever it is you're doing: same result, every time. LLMs use natural language to describe programming, and the result is varied at the very best (hence agents, so we can brute-force the result instead). I think the real moat is being the person who can actually still program.
People always say this but it’s misguided imo. Yes LLMs are not deterministic, but that’s totally irrelevant. You aren’t executing the LLMs output directly, you’re using the LLM to produce an artefact once that is then executed deterministically. A spec gets turned into code once. Editing the spec can cause the code to be updated but it’s not recreating the whole program each time, so why does determinism matter?
In my experience, I'm using LLMs as my abstraction to "junior engineer". A junior engineer isn't deterministic either. I find that if you treat the LLM output like a person's output, you're good. Or at least in my projects, it's been very successful. I don't have it generate more code than I can review, or if I give it a snippet to help me fix it, if it ends up re-writing it like an ambitious engineer would do, I tell it to start over and make minimal changes.
I guess I'm not spun up about the determinism because I've been working at the "treat it like a person" level more than the "treat it like a compiler" level.
To me, it's really like an engineer who knows the docs and has a good memory, rather than an infallible code generator.
I work at a small company, so we don't have tons of processes in place, but I imagine that if you already had huge "standards" docs that engineers need to follow, then giving the LLM those standards would make things even better.
The thing is, you can quickly teach a junior how to respect a specification contract, so that with very minimal oversight you get the wanted implementation. And after a few years (or months), the communication overhead gets shorter: what would have been multiple rounds of meetings and review sessions becomes a short email and one or two demos.
try distributing this spec amongst your team members, ask each of them to drive it to completion. no follow up edits. deploy to individual environments and then run a rigorous test suite against all of the deployments. see if all of them behave the same way.
> Reviewing code is still way faster than writing code.
Writing code results in a much better understanding of the code than reviewing it
In fact, I would say that in large, complex codebases, developing the same understanding of what the code is doing might actually take longer than writing it from scratch would have
this is the way LLMs _should_ be used, as an assistant to create reliable, deterministic code. and honestly, they're fantastic when used this way. build the thing you need with the LLM, then put the LLM away.
but in practice, the current obsession with agents means people are creating applications that depend entirely on sending requests to LLMs for their core functionality. which means abandoning the whole idea of deterministic software in favor of just praying that all of the prompts you put around those API requests will lead to the right result.
I see what you're getting at, but determinism isn't the right word either. LLMs are fundamentally deterministic -- they are pure functions which output text as a function of the input text and the network parameters[1]. Depending on your views on free will, it could be effectively argued that humans are deterministic as well.
The concept you're touching on is the idea that LLMs (and humans) are functions which are inscrutable. Their behavior cannot be distilled into a series of logical steps that you can fit in your head, there are no invariants which neatly decompose their complexity into a few interpretable states, and the input and output spaces are unstructured, ambiguous, underspecified, and essentially infinite. This makes them just about impossible to reason about or compose using the same strategies and analysis we apply to traditional programs.
[1] Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice. I can't imagine there are many workflows which feed an LLM the exact same prompt multiple times and rely on the output having some statistical distribution. In fact, even if you wanted this you may just end up getting a cached response.
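Concretely, pinning that entropy source down looks something like the sketch below. This is a minimal, illustrative example assuming the OpenAI Python SDK; the model name and prompt are placeholders, and note that the provider documents seeding as best-effort only:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # temperature=0 requests greedy decoding; seed pins the sampler's entropy.
    # Together they make repeated calls mostly reproducible -- "mostly" because
    # seeding is best-effort and backend changes can still alter the output.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Summarize RFC 2119 in one sentence."}],
        temperature=0,
        seed=42,
    )
    print(resp.choices[0].message.content)

    # system_fingerprint identifies the backend configuration; if it differs
    # between two calls, identical output is no longer expected, even with
    # the same seed.
    print(resp.system_fingerprint)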
Let's be real: if you and I both ask Claude to generate a feature on the same project, what are the chances that it spits out 100% identical code? But if we both build the project from a Dockerfile, we will get the same binary and the same image. Products around LLMs are non-deterministic, unlike compilers.
> Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice
Everyone added /dev/random to their offerings, so every LLM coding tool is non-deterministic.
There's something to be said about the fact that the very people who would use deterministic layers to build stuff are... non-deterministic. We, as humans, have our set of pros and cons, wins and failures. Even the most brilliant coders on earth will make mistakes from time to time. I often fail to see this accounted for in critiques of LLMs, as if we humans are not flawed in our own ways, with a huge degree of variance across individuals. Good and bad code existed prior to LLMs. If you're hiring someone to write code, you're basically using some heuristics to trust this person will do a good job. But nothing is ever guaranteed 100% deterministically. Without even thinking about it that much, LLMs will sometimes produce better code and manage systems better than some people who are earning salaries out there. Possibly sub-par developers, if we're being precise, but professionals in the meaning of the word (they are being paid to do the work).
At the end of the day, what matters is how willing the person behind a given task is to deliver quality work, how transparent and honest they are, how well they understand requirements, and whether they're a pleasure to work with alongside other humans. AI/LLMs are just extra tools for them. As crazy as it might sound, not that many people are willing to push boundaries and deliver great work. That is what makes the difference.
I grant that there's a definition of abstraction that LLMs don't fall into. But people describing LLMs as another abstraction layer aren't all misunderstanding this. Instead, they are using the term ... more abstractly.
EG: How did Mark Zuckerberg make software five years ago?
He's as capable of opening up an editor as I am, but circumstance had offered him a different interface in terms of human resources. Instead of the editor, he interacts with those humans, who produced the software. This layer between him and the built systems is an abstraction, deterministic or not.
Today, you and I have a broader delegation mandate over many tasks than we did a few years ago.
LLMs don't have to achieve perfect reliability to replace lots of work. They just have to reach the balance of reliability and cost suitable for a given task. This will depend on the task.
> The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties,
This sounds ageist. I'm around 40 and feel I am at my mental peak, compared to even my mid-20s. This isn't a good analogy at all; the brain doesn't "wear out" like a professional athlete's body does, it just changes its structure. The brain is a remarkable organ.
He just means: by this age you've probably found your preferred title and level, unless you want to rise to more C-level / executive positions, which are rarer in any case and most folks don't want.
This is definitely more charitable, but isn't this already the case now? It seems he's saying past your mid 30's you'd no longer be viable as a software engineer. That's never been the case, and I'm not sure why it would now suddenly be the case.
Even clearer: if you don't adapt to the changes taking place in the field, there might not be a future for you. It's not about age, it's about attitude and flexibility (which are, admittedly, issues when getting older).
In other words, if you want to continue stubbornly typing out code by hand, the person right over there has already mastered agentic tooling, is doing vastly more than you, more quickly and with greater precision, and will simply be a more fit candidate to hire. Roles for this type of legacy stubborn personality will become fewer and fewer, and you will age out as part of the old school.
I see what you're getting at, but if it's not about age, why use an age-related analogy? I probably should have amended my first statement in this thread to say that it sounds ageist, even if it's only implying that the people who refuse to adapt will be older. That day is already here; people are already adapting to this. He seemed to frame it as though the current crop of people starting careers in their 20s will have this limited timeframe of productivity.
If by software engineering, one means typing code character by character into a text editor, sure it's going to be difficult to find someone to pay you for it.
If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.
We are experiencing what Civil Engineers experienced going from slide rules to calculators. Or electrical engineers going from manual circuit path drawing to CAD tools.
The interesting thing to me is that Software Engineering will have to evolve. Processes and tools will have to evolve, as they have through the years.
When I was finishing university in 2004, we learned about the "software crisis" era, the waterfall development process, and how new "iterative methods" were starting.
We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.
Engineering methods also evolved, with UML being one infamous language, but also formal methods such as the Z language for formal verification, or ABC and cyclomatic complexity measurements of software complexity.
Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would still be writing in assembler). So now we have to change the abstractions we use to communicate our intent to the computers, and to verify that the final instructions are doing what we wanted.
I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see real engineering rigor in the Software Engineering profession. Even if there will still be coders, the same way there are non-engineers building and modifying houses (common in Mexico, at least).
Calculators and CAD tools do not give you non-deterministic answers. Both simply automate part of the manual work without creating anything "new". I haven't used CAD tools, but I did use level editors such as TrenchBroom; what gets automated is the 3D shapes you want to make. Back in '96, when id Software was creating Quake, there were very few pre-drawn shapes in the level editor and they had to make the blocks themselves, so it was very difficult and time-consuming to make complex shapes such as curved walls and tunnels. Then better tools were invented, and now it is much easier to create a complex shape. But you don't type "a Quake level with theme A, and blah blah" and then get a more or less working level. That is what AI is doing right now.
I think the right analogy to calculators and CAD tools is an IDE with IntelliSense for SWE: instead of typing code character by character, we can tab to automate some part of it.
But I agree with your conclusion: SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit it out to retirement.
It doesn't make sense to get hung up on this aspect of LLMs. We prefer non-deterministic output so far because it tends to work slightly better, even though it is completely possible to ask for a deterministic, temperature=0 answer.
With more scale and research, at some point you'll get results that are both useful and deterministic, if it's not already the case.
It absolutely makes sense to get "hung up" on something when it comes to planning society around it, JFC. I'm with the other commenter: your understanding of these tools should be questioned, since you seem to be reading the tea leaves of statistical noise.
In 2020, there are two companies that are competitors with each other. They each employ 100 programmers, and we all know how those organizations operated: perpetually behind, each feature added generating yet more possible future features. We've all lived it and are still largely living it today.
In 2026, both companies decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.
Company 1 fires 90 of their programmers and does the same work with 10.
Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.
Who wins in the market?
Of course the answer is "it depends" because it always is but I would say the winning space for Company 1 is substantially smaller than Company 2. They need a very precise combination of market circumstances. One that is not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.
In the time when the acceleration is occurring and we haven't settled into the new reality yet, the Company 1 answer seems superficially appealing to the bean counters. But it only takes one defector in a given market to go with Company 2's approach to force the entire rest of their industry to follow suit in order to compete.
The value generation by one programmer that can be possibly captured by that programmer's salary is probably not going down in the medium and long term either.
Your hypothetical ignores the distribution of programmer talent. Company 1 can pay more per person and hire 10x programmers, who can then leverage AI to produce the same or more as Company 2.
We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.
Of course it does. It ignores a lot of things. Mostly I just want to present the view that things aren't entirely hopeless and the entire industry is doomed to contract by 90% because of AI. Your legal system point also fits in precisely with what I'm trying to convey, just in a different direction.
I think paying significantly more was a very localized thing that happened for AI researchers who were familiar with the alchemy that made GPT4 suddenly work much better than anything else seen before.
My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living. In the past, the jobs created by automation have generally been lower paid with less autonomy.
Ignoring the preference of people generally wanting to live in HCOL areas, this only works if every company hires equally from LCOL areas. One of the benefits of living in a HCOL area is access to the job market it provides. It's much easier to get hired for a software position living in San Francisco than it is living in Deming, New Mexico.
More importantly, in San Francisco there are a lot more opportunities than in Deming. I've never really been to either city (I'm not going to count the conference I attended, because I never left the hotel). However, I can still tell you confidently that if you have a weird hobby, in San Francisco you're much more likely to find other people with that interest, stores that sell the things you need for the hobby, and all those other things in life that you want. If you love doing the types of things people in Deming do, it's a great life, I'm sure. But as soon as you want to do something off the wall, you may not even find enough people in Deming to field a cricket team, while I have no doubt that San Francisco has a team you could join.
but moving to a lower COL area can reduce that amount of public and private services one gets access to, no? network connectivity will, for example, likely be worse out in the sticks
Unfortunately, in America places with low cost of living are generally, to put it diplomatically, unpleasant places to live. That's even more the case if you don't fit into the white, cis, straight, and Christian box that rural areas are willing to accept.
This problem is not a software engineering problem nor an AI problem but a problem of the balance of power between working hard vs. investing. If the people who believe in working hard organize and slow down the tendency to rig everything for investors, then the markets should stabilize at a more generally prosperous place.
The balance of power is dictated by economic facts, not by organizing or politics. Auto workers in 1950 weren't better organized than auto workers in 2026. They just had more leverage because they weren't competing with auto workers in China. Likewise, Silicon Valley isn't paying people writing web apps $$$ because those workers are organized. They are doing it because they don't have a feasible alternative. If AI enables them to do more with less, they'll take that option.
Creating more software does not solve anything if that software is mostly a functional duplicate of other software. Or, in other words, all companies re-invent the wheel many times over. It doesn't matter if you 10x the development of software that brings nothing new besides being written in a shiny new framework.
We should, IMHO, start getting rid of most software. Go back to basics: what do you need, make that better, make it complete. Finish a piece of software for once.
In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.
It could go either way. Don't forget that LLMs also have expertise in the other domains. Who would do better - the chemist with vibe coded app or the developer with vibe coded chemistry?
My premise is that a vibe-coded app will be indistinguishable from a ‘hand-crafted’ one. So in that scenario the chemist wins, because the developer has no value to add.
It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.
There will be more software in the same way there is more agricultural output today.
The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than true, as far as I can tell. In fact, it's quite hard to point to any historical examples of it happening. In general, labour demand significantly decreases when productivity significantly increases, and typically people need to retrain.
Except we are now in the golden age where people with 20 or 30 years of experience know what quality software is - or at least what it is not. So they are able to steer the LLMs. Once this knowledge is gone - the quality could go downhill.
Unless I'm missing something, there's an obvious logic issue here.
If we truly need to sacrifice our skill to be productive by using LLMs that atrophy us, then the only devs that have a limited lifespan are us. The next ones won't have a skillset to atrophy since they won't have built it through manual work.
Also, I hereby propose to publicly ban the "LLMs generating code are like compilers generating machine code" analogy; it's getting old to re-argue the same idea time after time.
Besides the probabilistic and non-reproducible output, programming languages are designed to be unambiguous and explicit, and human language isn't.
for(){} normally either is undefined or has one specific meaning, and the spec tells you which. "Then iterate and do x" might mean many subtly different things (see the toy example below).
Most programmers never deal with a compiler bug in their whole career, and can dismiss the possibility. For LLMs it would be hard to even define what a "compiler bug" would be since there is no specification for English.
Then there's the fact that models generally don't guarantee anything at all. Sonnet can change under your feet.
Models also degrade as the context window gets larger. Compilers handle one line just the same as 20.
I could keep going, there's so many fundamental differences in the process that the analogy only serves to provide a false feeling of security.
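As a toy illustration of that ambiguity (a hypothetical example, not from any of the posts above): the instruction "then iterate and double x" admits at least two perfectly reasonable programs that behave observably differently:

    # Reading 1: produce a new doubled list, leaving the input untouched.
    def doubled_copy(xs):
        return [x * 2 for x in xs]

    # Reading 2: double the list in place, returning nothing.
    def double_in_place(xs):
        for i in range(len(xs)):
            xs[i] *= 2

    data = [1, 2, 3]
    print(doubled_copy(data))  # [2, 4, 6]; data is still [1, 2, 3]
    double_in_place(data)
    print(data)                # [2, 4, 6]; data itself has changed

A caller that relied on the other reading now has a silent bug, yet both programs are faithful to the English. The Python for-loop itself, by contrast, has exactly one defined meaning.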
I don't think it's just that. There's also the fact that if you're working with C or C++ or any systems-level language, you typically do know how to read assembly because you've stumbled upon it for some reason, and if you're writing low-level programs (which is typically what these languages are used for), you will definitely at some point need to read assembly and maybe even write some. But with LLMs the entire field has shifted: you don't need to know anything to write any language, you don't even need high-level computer science knowledge nowadays to get something that works, and the world increasingly just seems to want something that works.
I have mentioned it several times lately, but if the analogy was correct, people would be committing prompts and not code. High-level source code gets committed, binaries don't. If prompts were really "just a higher level of abstraction", then there wouldn't be a need for saving the code. Or at least you'd see people publish their prompts and chat history alongside the code.
We're forgetting one thing: we (mere engineers) have control over nothing. The vast majority of us are at the mercy of executives and investors. Before AI we had some sort of grip because our skills weren't so much a commodity, and yeah, dealing with code and systems architecture and data and distributed systems wasn't that easy.
Now AI is a tool not for us but for the higher-ups, they can finally commoditize software engineering and need only a small fraction of us. I see engineers around here fighting and discussing who'll be left behind (the 80%) and who'll remain because they're "more than mere coders" (the 20%)... what we don't discuss here is that we're all now at the mercy of Anthropic et al, and that's bad. The irony is that the vast majority of us use Anthropic, so we are just loading the guns for them to use them. It's sad, but we call it progress. Nuts
Was it ever a lifetime career? Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers. Ageism is real in this industry. You either save up enough money to retire early, switch into management, or get forced out of the industry eventually. AI is just accelerating the trend. I see very few junior engineers resisting AI. I see a LOT of staff+ engineers resisting it. Just look at the comments on HN. Anti-AI sentiment is real.
If you are lucky and got in early, then probably yes, it could be a lifetime career. It's like all careers: when you joined early, you got a lot of opportunities, you rode the wave, and you eventually rose to the top if you gritted it out.
It's a lot easier to be early than to be smart or quick.
In 1996? Software development was the hot ticket to upper middle class in the early 80s when I was a recent CS grad, and I was already working with people who were in it for the money. By the late 90s, if you could spell “HTML”, you were making decent money as a web developer. This all came crashing down during the Dot Bomb collapse, but SW has been pretty popular for most of my career, and it just continued to get more popular, especially as salaries continued to increase.
I remember seeing an article around a decade ago about a ~50 year old "web developer" claiming age discrimination because they couldn't get a job. Somebody found their resume and it was literally 1990s "html/CSS" added to some other period tooling. Said person found a niche for a new technology (the web) and then stopped upping their skills.
I've had to change course several times in my career (graduated in 2004). UNIX admin and later network admin, DevOps, and now I'm doing a mixture of DevOps and development (despite not being a full time developer in my entire career, being able to use AI to plug into code and fix/enhance things like monitoring, leveraging cloud APIs, etc has been a game changer for me).
Right now, as somebody in their mid 40s, I'm seeing AI as a productivity amplifier. I am able to take my experience and steer and/or fight opus into doing what's needed and am able to recognize if it looks right.
I'm so glad I'm not fresh out of school in this environment, though people said the same thing when I graduated in the Dotcom bust...but being ready and eager to do groundwork was a door opener. Finding that first door to open was tough, though.
In retrospect the Dot Bomb was a bump in the road. Yes, some people who only knew enough HTML to be a "Webmaster" might have been filtered out, but pretty quickly anyone who could really build software had opportunities greater than before.
I'm repeating what others have essentially said, but ask yourself what's on your resume. If it says "Software Engineer" and that's all it talks about, then yea you might not find it's a lifetime career.
But if it's a diversity of things (that use or leverage software development) then you probably have a lifetime career ahead of you.
I've been writing software for over 40 years but I've never seen myself as having a software engineering career. I've been a research assistant in geophysics, a marine technician on research ships, a game developer, an advisor to the UN, and a lot more. Yes all through that I used software, but I did a lot of other things in the process of using it.
Argument A: AI means you don't learn as much, so even though you are more effective, it inhibits your growth and you shouldn't use it. However, on a pragmatic level, it's effective to hire a bajillion people, fire them at will, and get AI to do everything. You will get so many JIRA tickets closed and so many lines of code written.
Argument B: AI means you don't learn as much, and the single most useful work product of a software engineer is knowing how the code functions, so it's depriving your company of the main benefit of your work. Also, layoffs are terrible business strategy because every lost employee is years of knowledge walking out the door, every new hire is a risk, and red PRs are derisking the business.
Institutional and personal knowledge seem similar, but the implications of each are radically different.
I don't understand it. The time-limited career would work if we were born with innate ability for software engineering and would lose it over time by using AI. Most people are not born with that ability though, it needs to be developed first.
And read Programming as Theory Building already, it's not that long
Comparing software development to carrying heavy things at a construction site feels like a real stretch to me.
'If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.'
On another note, for sure software developers are saying things like "this is the part of the job that I like" or "if you aren't doing the work, you won't be good at the work." But other people are saying this, too. I just saw an episode of "Hacks" ("QuickScribbl") where the writers say pretty much this exact thing when confronted with AI tooling designed to "make their job easier". Is writing comedy also like lifting heavy objects at a construction site?
Seems the solution here is the same it's actually always been if you want career progression: be more than just a code jockey. The true value of an engineer is to be plugged into overall roadmaps, broader thinking around product, how to achieve company goals, etc etc.
Yes, LLMs might dramatically reduce the amount of code we write by hand. But I'm a lot less convinced they'll solve all of the amorphous, human-interacting aspects of the job.
My experience has been that companies actively work to prevent people from becoming more than just code jockeys. For example, most of the places I've worked have viewed code delivered as the ONLY metric used to evaluate performance. Attempts to contribute to roadmaps or strategy are ignored at best and punished at worst.
Yeah, 95% of the available advancement in computing is in people management, not technical mastery. Businesses much prefer to hire externally to serve any non-core capabilities, especially to minimize internal culpability should anything go wrong. That leaves little opportunity to think outside the box technically.
There’s a hierarchy amongst knowledge work and AI hasn’t yet been able to do the work that is rare and valuable.
Over the past two decades, there have been a lot of solved problems, like building boring scalable web apps, UX design, etc., and AI is fairly good at these, enough so that good prompting can get you very far. This shouldn't be a surprise; there's a lot of publicly available data for it (GitHub repos etc).
On the other hand, there are rarer computer science problems like designing efficient datacenters, GPUs, and DL models. Think about problems that someone of Jeff Dean's or James Hamilton's (AWS SVP) ability, or a skilled computer architecture researcher like David Patterson, would solve. These are incredibly hard and rare problems, and AI hasn't been able to make much progress in these areas. That's true for other sciences as well.
If you’re a regular Joe like me who builds boring CRUD apps, AI is coming for you.
What I mean is if you are working on incredibly hard and rare problems that require rare skills and also those problems don’t have publicly available data that LLMs can be trained on, you’re safe from being “automated” away. If not, you must plan accordingly. Also if you’re a skilled manager (in any field) AI cannot replace you, highly skilled managers that can get the best out of their teams have rare skills that aren’t easily replicable even amongst humans much less AI. Although, if going forward we need fewer developers we will need fewer managers too.
The differentiator is augmenting reasoning with AI versus replacing reasoning with AI. But those who choose to replace their reasoning with AI probably weren't good at it to begin with, because if they were, they'd choose not to replace it. The exception is if AI can actually replace reasoning (which it can't, yet); then it's game over for a career in software engineering anyway.
80% of my day-to-day job has never been pumping out lots of code. it's a complicated career, isn't it? we do a lot of alignment, design and thinking. i can't even agree with the idea of outsourcing thinking; i think AI is very good at helping us to think clearly, but it doesn't really "think" for us.
If AI becomes good enough to easily produce maintainable and high quality software, then I really can't see how demand for software engineers would not plummet. Even lots of non-coding work that software engineers do, such as accurately capturing what client actually wants, will become much less valuable - e.g. currently misunderstanding of client's requirements is catastrophic and can lead to waste of months of labour; with AI it could become matter of max few hours lost. So I can understand argument that software engineering careers might be safe because AI may plateau and we might never reach level where it's actually capable of producing good software. But I absolutely don't buy that software engineering will be safe even if such AI exists. Even if your current work is just 20% actually coding, you must remember about second order effects that will take place once quality code generation is 1000 times faster.
AI can also do alignment and pull from its vast training dataset for design and "thinking" -- because 99% of the problems in this world were already solved, multiple times, maybe not in the exactly same format, but in a very similar format.
I also see that in the future humans will adapt to AI, instead of the opposite. Why? Because it's a lot easier for humans to adapt to AI than the other way around. It's already happening: why do companies ask their employees to write complete documentation for AI to consume? This is what I call "adaption".
I can also imagine that in the near future, when employment plummets, when basic income becomes general, when governments build massive condos for social housing, everything new will be required to adapt to AI. The roads, the buildings, everything physical is going to be built with ease of navigation by AI in mind. We don't need general AI; that is too expensive and too long-term for the capitalist class to consider. We only need a bunch of AI agents and robots coordinated in an environment that is friendly to them.
Rather than coining a new word like adaption, I'd call this acculturation. It's reshaping not only SW dev but natural language too -- how we read and write and how we speak.
Everyone knows that AI-written slop isn't worth actually reading. So when reading mass media content we skim over each paragraph's opening phrases rather than read it deliberately, sentence by sentence. We also do this while writing notes, dropping determiners, acronymming common phrases, and making references to characters/scenes in popular media. Now with the rise of vocal interfaces and ever shorter rounds of engagement, all this abbreviating will only exponentiate.
>If the models are good enough, you will simply get outcompeted by engineers willing to trade their long-term cognitive ability for a short-term lucrative career
> (2) AI-users thus become less effective engineers over time, as their technical skills atrophy
Wouldn't (2) imply that if everyone just used AI there eventually would come a time when there aren't engineers who will outcompete you (because their skills are so atrophied)?
I take issue with the premise that "Using AI means you don’t learn as much from your work"
With AI assistance, I tackle far more tasks than I would without it. Learning per task goes down, but cumulative learning does not.
Software engineering today is almost nothing like the role it was 30 years ago.
Maybe if you somehow stick with the same company for your entire career, it could feel somewhat similar... but I doubt it, as 'best practices' and many other things cause it to change.
The days of 'lifetime career' had already gone for most people, way before AI arrived.
Is anything today a lifetime career? I’ve had at least five or six job descriptions over my time, and at least a few of them pretty much don’t exist anymore, or are changed beyond recognition.
Software engineering stopped being a career a long time ago. Companies have no respect for software engineers and treat them as a commodity that can be replaced at any time. The traditional career "progression" also doesn't exist: you can only get a pay rise so many times and become the seniorest senior, unless you want to fulfil the Peter principle.
While most developers were busy grinding, the corporations did their utmost to ensure that the only sensible pathway to wealth, running your own business, is closed. In many countries, due to regulatory capture enacted by corrupt governments, making a profit is next to impossible, and that's only if you manage to jump bureaucratic hurdles that don't exist for larger corporations.
AI is just a tool. Asking whether AI will replace the software engineer is like asking whether the hammer will replace the carpenter.
I'm a software engineer and architect, I love my job, I love diving into the small details, I love the grand overview.. I love identifying concepts and applying them to achieve elegant high-performing systems.
I love thinking about what kind of assembly the compiler may generate (though honestly, I rarely get the chance), and I love thinking about how languages should be more dynamic (who's got actually-first-class functions? Like, ones you can build, compose, combine and manipulate to the same degree you can a string or a JSON object. No, Lisp, you're cheating; close, but no points).
And yet... I don't care that much. Not because I'm late in my career (I'm 40, there are still some years left in me), but because I want to make computers do things, and what I enjoy is thinking up ways the things can happen, and sometimes the particulars that matter when making a lot of different things happen in a coherent system. And yeah, LLMs are trained on people's output, and from what I'm seeing everywhere, people are overall fairly terrible at that, and most of the plumbing-type glue being written is not worth anyone's time.
And I'm not saying I don't care because LLMs can't do my job. Heck, even after hours of back-and-forth spec building and refining every little nook and cranny (beautifully explained, proven even, by reasoning and example alone), the stupid coding agent still cheats or gets it wrong; as soon as the plan is put into motion, it messes up at some scale so fundamental that I should just have done it myself. And I hope that changes. I hope that I won't have to go into such detail, that I become a steward of taste rather than a code reviewer, that I will eventually not be needed for even that anymore. I want it to replace me, so I can move to telling it what I want and having it made that way.
I hope I won't need to steward good taste, and that nobody will. I hope the applications I use in 5 years will be a collection of one-offs and gradually improving tools that were written _just_ for me, for my way of working and my way of thinking. I want to prompt the damn program to change itself as I discover new ways to do things, until it can eventually figure out how to automate the last bit of my task away. And then I'll go do something else exciting.
Imagine a situation where AI creates thousands of lines of code across a few repos, and there is a production issue that doesn't get resolved by AI.
How can humans jump in and resolve the bug without knowing anything about the code ?
Virtually the entire blog is about AI, with a ridiculous publishing rate (https://www.seangoedecke.com/page/5). Funny how I can look at this site's HTML and know right away it was done with AI.
Can we stop upvoting vibe published articles?
The arguments are flawed and don't even make sense to anyone who does software
Yes, the blog is mostly about AI, and yes, he publishes very frequently. But his articles don't read like AI and he claims not to have used it in his writing (https://www.seangoedecke.com/avoid-ai-writing/). And regardless of how you feel about the content, the community has clearly decided it's worthwhile as a discussion point.
> Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”
I don't know, maybe in your part of the world, but where I'm from we have a series of robust worker-protection laws that try to limit the damage the work does to you. We generally consider it a bad thing for workers to damage their bodies, and if we could build houses without it, we'd prefer that.
In this specific case we do have techniques to build software without causing damage, so why change that?
This post is arguing that maybe software engineering should start being harmful, even though we know it doesn't have to be. It's a post by a guy begging to be fed into the capitalist meat grinder. Meaningless self-sacrifice.
i'm about 35 and i have made good money but not enough to quit. i plan on just sitting around cashing checks for another decade. with a few liquidity events along the way to sweeten the deal. should pay for my mortgage, some home renos, and fund my 401k etc. i don't foresee myself being out of work (and i don't even use AI to code! i'm just Actually Good!)
Absolutely untrue: you could have a solid career writing back-office or internal software in financial services, insurance, higher ed, any number of industries. Would they make you a millionaire? No. But they'd pay for a nice house in the suburbs and raising a family.
I started at age 39 though and did pretty well up until a year or two ago (16 years total).
Like many people I've been sad about the loss of a career I spent years developing skills in and I'm 55 now and won't be quickly retraining for another high paying career. Fortunately I do have other skills I developed earlier in life and low needs so will probably limp by fine but it's still a painful adjustment.
Point being, you could always write code as an older person. Well, back in the old days when we wrote code anyway.
> I hope that this isn’t true. It would be really unfortunate for software engineers. But it would be even more unfortunate if it were true and we refused to acknowledge it.
More AI Soothsaying. Not so hard on the Inevitabilism this time.
On the contrary, in an efficient economy, every business operations manager (MBA) would be a skilled software engineer, able to comfortably manage data flows and design custom automated processes. There's so much potential energy there in unlocking this technical literacy.
Less "pure" programming, but lots more programming in general.
Was it ever? It's always seemed weird to me that people even think 'software engineering' is a career.
It's a tool for knowledge work.
No carpenter is a specialist in drills.
It seems to me that the best way to navigate a long term career is to have another specialty and use software engineering as a tool within that specialty.
> Software engineers didn’t just disappear after age 40.
At the end of the '90s and the beginning of the '00s (the dot-com bubble), it was a common saying that if, as a programmer, you didn't have a very successful company by the time you were 30 or 40 (and were thus basically set for life), you had basically failed in life; exactly because "everybody" knew that programming is a "young man's game" (i.e. you likely won't get a programming job anymore when you are, say, 35 or 40 years old).
So,
> Software engineers didn’t just disappear after age 40.
> At the end of the '90s and the beginning of the '00s (the dot-com bubble), it was a common saying that if, as a programmer, you didn't have a very successful company by the time you were 30 or 40 (and were thus basically set for life), you had basically failed in life;
This wasn't common anywhere except for maybe the Silicon Valley bubble.
The rest of the US, and even the world, could see that not having a very successful company of your own is not equal to being a failure.
> At the end of the '90s and the beginning of the '00s (the dot-com bubble), it was a common saying that if, as a programmer, you didn't have a very successful company by the time you were 30 or 40 (and were thus basically set for life), you had basically failed in life; exactly because "everybody" knew that programming is a "young man's game"
That seemed commonly held among folks participating in the dot-com bubble. Plenty of people had been doing it for decades even as the bubble was growing.
> Software engineers didn’t just disappear after age 40.
>> is rather a very recent phenomenon.
Not really. It's not that they disappeared, it's that they're a small fraction of the overall SWE population as a side-effect of how much that population has grown.
Software is wood, not drills, and if we somehow invented bacteria that gradually built an ugly but saleable house when fed on water and nutrients and nudged into shape, I bet carpenters (well, framers or whatever they're called in the US) would have an identity crisis too.
I kind of disagree. You are describing a kind of person who is extremely valuable: someone who is proficient in SWE but also has domain-specific skills in some niche.
That's great, but it's nowhere near the norm, and people have been doing generalist software engineering for decades. There has been enough work for generalists for a long time that it has been a very reasonable career.
IMO AI is the first thing that has ever actually challenged that.
I'd disagree with this analogy: "No carpenter is a specialist in drills." And I think it's an interesting lens through which to look at the evolution of our tools.
I think there are trades where tool specialists (or process specialists, if I may extend the analogy) exist and are highly valued. My dad is a plumber, so I'll use that example, but I'd trust similar is true for carpentry. There are specialists by task/output (new construction, repairs, boilers, etc.), but also tool-specialist plumbers and companies. For example, drain-clearing equipment, or certain kinds of pipe for handling chemicals other than water, are very specialised, and there are roles for them because the thing they enable, the criticality of the task, and often the cost and complexity of using the tool are high enough to make specialisation valuable.
IMO software has, for the 10 years I've been working in it, been in an unusual position where the tools (languages, engineering practices, tech stacks) were super technical and involved, but could also be applied to a large number of problems. That is the perfect recipe for tool specialists: a complex tool with high value and broad domain/problem-space applicability.
Because of that tool specialisation, we've separated the application of the tool to a problem/domain from the tool use itself. Reducing the complexity of applying these tools to many problems means all domain specialists will use them, relying less on tool specialists.
Imagine a McGuffin tool for attaching any two materials together, but which took a degree to figure out (loose hyperbole here), that suddenly you could use for 5 bucks and a quick glance at the first page of the manual. An industry that used to have lots of McGuffin engineers would be mega disrupted, and you could argue that those tool specialists would have to identify more with what they were building than with the McGuffin they were using.
Yeah, IEEE Spectrum has responded to the dissimilar roles in SW dev by ranking programming language popularity contextually, by separating the project domains and ranking the languages only within each domain. That's a lot more useful than allowing the single dominant project domain to silence the recessive ones, as TIOBE does.
I tell my boys (both in HS now), the combination of a specialized skill/knowledge + competent computer programming is the sweet spot. For example, my oldest wants to go into Petroleum Engineering which is great but I told him to still learn software development and get comfortable solving problems with code. Having specialized Petroleum Engineering knowledge combined with being a competent software developer is a powerful combination.
Yeah, I've seen the same thing happen to data miners in the pharma industry. An increasing fraction of young biologists have skill in basic statistical DM as well as web search proficiency sufficient to gather DM code analysis examples, even without using AI. In the very near future I expect almost all R&D exploratory DM will be done by pharma domain experts (biologists and chemists) rather than served by DM experts (computer scientists or engineers).
I think the logical next step is that "XYZ knowledge worker" will become a software engineer of sorts. Not literally writing code, but at minimum encoding processes/workflows into some language.
If you're a paralegal or an accountant who can't manage their workflows with AI, you're going to be way less productive than someone who can.
And if you're a paralegal or an accountant who can manage a lot of your workflows with AI, you don't need custom software (hence less dedicated software engineers).
There's no category difference between being an expert in carpentry vs masonry and being an expert in drills vs hammers. They are both just areas of expertise.
Going down the path of trying to define what is expert functions and what is "merely" a tool using anything but descriptive technique is nonsense.
Expert functions are just those areas where using a tool is sufficiently difficult to require expertise.
Are people seriously thinking that you can make yourself dumber by using a chat UI?
If talking to an AI makes me dumber and limits my career, then all the customer support people that ever existed were in the same or a worse position: talking to dumb humans on chat all day, answering tickets always about the same topics and linking the same docs over and over. This makes no sense.
You're misrepresenting the potential problem. It's more along the lines of: using AI stops you from exercising the cognitive processes you would use doing things yourself, and those encompass skills, knowledge and brain function that can atrophy. For an extreme example, look at cognitive decline in the elderly, which can be mitigated by taking part in activities that are cognitively stimulating.
Can you comment on other jobs though? The large majority of jobs require no big mental effort. Even switching from programming to management would go through that. Under that light, would it be impossible for a manager to ever become technical again because they'd atrophy so quickly?
I think you're probably catastrophizing the impact with statements like "it'd be impossible for a manager to ever become technical again", because that's not the likely outcome as I understand things. But yes, people who stop programming for an appreciable amount of time do find it harder to pick back up again.
The longer the manager is out of the game, the harder it is to return to the game. Returning to the game takes time. Depending on age and income, returning to the game may be impossible for some people over time.
I can't answer for the other guy, but my answer would be that talking to a clanker is LESS mental effort than being a manager, and that's why your reasoning atrophies so quickly.
Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.
If you constantly pawn a task or cognitive load onto someone else (AI or not), you'll eventually get worse and worse at that particular type of thinking. Your overall mind doesn't necessarily get weaker, but you definitely start to get worse at anything you don't regularly practice.
> The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties, at which point your body just can’t keep up with it.
If you believe this about your software career, how do you think you're going to switch into another career as a junior and keep up?
Terribly written article that failed to make any point. Anyone who's read AI-generated code from the best models and who understands how LLMs work knows this statement is complete BS.
Why do people think there will be work fixing AI slop software? I see that opinion here and there on HN. The cost of codegen is next to nothing. It makes no sense to spend large sums of money having an engineer fix something that could be generated over and over until the gods of stochasticity come down in your favour.
We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative
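Taken at face value, the "regenerate until the gods smile" economics is just a loop. A minimal sketch, where `generate_app` and `tests_pass` are hypothetical stand-ins for a codegen call and a test suite, not any real API:

    def ship(spec, generate_app, tests_pass, max_rolls=100):
        # `generate_app` / `tests_pass` are hypothetical stand-ins for a
        # codegen call and a test suite. Keep rolling the dice until the
        # tests go green, then ship; quality beyond "tests pass" is unpriced.
        for _ in range(max_rolls):
            app = generate_app(spec)
            if tests_pass(app):
                return app
        raise RuntimeError("the gods of stochasticity were not in your favour")

The catch, of course, is that `tests_pass` has to be written and trusted by someone, which is where the engineer sneaks back in.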
The problem partially is that AI can also fix AI slop. At this point I am in doubt whether code quality matters anymore in most non-critical software. You can ask an LLM if the code has quality issues and refactor to a _better_ version. It will reason through, prepare a plan and refactor. So now with this "better" code you can expect that your LLM will be able to deliver higher quality results but that's all the quality that is needed.
Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance.
All the frontier models tell me when there are no issues. After implementing a feature I will ask it to identify issues in my implementation, list them, and support each item they identified with technical argumentation and reasoning as to why it's an issue.
If it doesn't find anything it says I didn't find anything.
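For what it's worth, that self-review loop is easy to script. A minimal sketch, where `ask_llm` is a hypothetical placeholder for whatever chat API you use, not a real library call:

    REVIEW_PROMPT = """Review the following change. List every issue you find,
    and support each item with technical argumentation for why it is an issue.
    If you find no issues, say exactly: I didn't find anything.

    {diff}
    """

    def review(diff, ask_llm):
        # `ask_llm` is a hypothetical stand-in for your chat API.
        # Ask the model to critique the implementation it (or you) just wrote.
        return ask_llm(REVIEW_PROMPT.format(diff=diff))

Whether "I didn't find anything" actually means the code is clean is, of course, the open question.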
https://www.youtube.com/shorts/xBilK3gT5e0
Are you, perchance, assuming that since you spend most of your time struggling with actual code, this is so for everyone else?
Or are you saying that I'm lying? That I am secretly hammering away at my keyboard while pretending not to?
No, writing code hasn't been how I spend most of my time for many decades now.
Are you a staff level engineer that has dozens of other engineers banging away at code projects you help define?
It has varied over the years but it isn't actually relevant since I am talking about when I write software.
Writing code just isn't what takes time.
Getting the code into a state where it actually does what you want takes time - but a lot of that is research, testing, experimentation, documentation, etc. Those can be faster with AI assistance but you still need to bang on it enough to make sure it works right.
> Yes, about 2-5% of the time.
There are also those for whom that percentage is higher, let’s say 6-50%.
> I understand things and then apply my ability to formulate solutions
The AI is coming for that too.
You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.
People knocking out Jira tickets and writing CRUD webapps will often end up with their livelihood taken away. Or bosses will just expect more output for the same or less pay, with them having to use AI to keep up.
Agree. It is as if two totally separate groups are arguing.
One is a very tiny slice of specialty or rare industries where code is critical but a small part of overall project costs. If code/software is 5% of the overall cost, even heavy use of AI for the coding part is not moving the needle. So people in this group can feel confident in their indispensability.
The second group is much larger and peddling CRUD / JS frontends and other copy/paste junk. But as per industry classification they are just part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; it is playing out right now, with tons of them getting laid off. And a whole lot of people with IT degrees and certifications are not finding any jobs in this field.
What makes you feel that a complex frontend would be easier for AI than a non-CRUD backend system?
Hubris.
I don't mean this as a snarky jab. It's coming for anything software. I've used AI to accomplish front end development and to reverse engineer proprietary USB hardware dongles in C, then rewrite the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming: they all seem equally threatened; it's just a matter of time. The threat to front end is easy to see in the AI web front ends, but everything else is still easy pickings.
> I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend
That is not hard. It's just tedious and very slow to do manually. The hard part would be designing a USB dongle and ensuring that the associated software has good UX. The reason you don't see kernel devs REing devices is not that it's impossible or that it requires expert knowledge. It's that it's like counting grains of sand on the beach.
Whether a complex frontend would be easy for AI or not is irrelevant. To me the questions are: 1) How many unique complex frontends are needed out of all the frontends that millions of sites out there need? 2) Will there be an increase in demand for such frontend engineers so that other displaced folks can land a job there?
I think there will be far too few to have any positive impact on IT engineers' overall job prospects.
There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.
However, even out of that 80% of my time, what fraction is actually spent "writing code"?
AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback
There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
> AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
> - Understanding the problem
> - Waiting for the build system and tests to run
> - Manually testing the app to make sure it behaves as I'd like
> - Reviewing the diff to make sure it's clear
> - Uploading the PR and writing a description
> - Responding to reviewer feedback
What part of those do you think it doesn't help with?
There is no shortcut to understanding. No one can understand things for you
> The AI is coming for that too.
That may be true, I'm not gonna say one way or the other, but if AI comes for that then almost all knowledge work is effectively dead, so all that's left would be sales or physical labor.
I wonder though: can AI make the next JS framework? I mean that in sincerity. There was the leap from jQuery to React, for example. If an AI only knows jQuery and no one makes React, will React come out of AI?
News: "AGI refuses to make another JS framework, rages on the follies of misguided developers and their wasteful JS crutches"
Developer community: Wow, we truly have become obsolete now!
Who will be the disrupters when there is nothing to disrupt?
A thought experiment: When all practical software is only written by AIs, will the AIs use goto? What will the programming language of AIs look like?
My bet is something _like_ assembly, but not assembly.
That being said, I think humans will still program for fun. Just like we paint portraiture in a world with cameras.
People didn't leap from jQuery to React. It's a lot easier to imagine an AI looking at jQuery and [insert any server side MVC framework] and inventing Backbone.
The history of the last 250 years is inventing new professions as old ones are automated away.
I expect that to continue.
The history of the last 250 years was moving from agriculture to industrial work to service work. Now the last frontier is starting to be overtaken by automation too.
(And in all of those transitions, millions were left behind without work or with much worse prospects. The people who took the new jobs were often a different group, not people who knew the old jobs and were already in their 30s and 40s.)
And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?
> The history of the last 250 years is inventing new professions as old ones are automated away.
Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.
The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).
If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.
>> I understand things and then apply my ability to formulate solutions
> The AI is coming for that too.
If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.
It really doesn't have to come for everything to feel like it's taking everything. If it eliminates 10% of white collar jobs over the next decade, the impact will be felt everywhere.
> If this is true, then you'd have to conclude that AI is coming for everything.
Now you’re getting it
> The AI is coming for that too.
Current AI tech giants prove over and over and over again that this is not the case
We've literally just started, what "over and over" do you refer to?
I've been told for the past four years that AI is coming for my job. And that's just not true. It's no closer to that than it was 4 years ago.
> We've literally just started
5+ years in the software world is like 30 years in others...So...given lacking use-cases and humongous amounts of capital already wasted on chatbots...It's more like "we" are closer to closing curtains than to "just started"...
Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can see if a solution is better than another, but it can't generate the best one from the start. If you trust it, you'll get sub-par solutions.
This is definitely an agent problem instead of an LLM problem. Anybody got something explorative like this working?
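A minimal sketch of what that explore-and-verify loop could look like: sample several candidates, score each with a verifier, keep the best. `generate` and `score` are hypothetical stand-ins, not a real agent framework:

    def explore(task, generate, score, n=8):
        # Best-of-n search: the model proposes, the verifier disposes.
        # `generate` and `score` are hypothetical stand-ins for an LLM call
        # and a verifier (tests, a judge model, a benchmark). This leans on
        # exactly the asymmetry described above: scoring a candidate is
        # assumed to be easier than generating the best one outright.
        candidates = [generate(task) for _ in range(n)]
        best = max(candidates, key=score)
        return best, score(best)

Everything interesting lives in `score`; with a weak verifier you just pick the most convincing of n sub-par solutions.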
So? Hundreds of millions of office and dev jobs aren't about developing "optimal solutions" to begin with.
Hype cycles. AI has made developers obsolete like a dozen times in the last couple of years, at least according to their developers.
Even if AI advances continue, for quite a while there's likely still going to be the 'Steve Jobs' role. That is, even if AI coding agents can, in the future, replace entire teams of SWEs, competently making all implementation decisions with no guidance from a tech-savvy human, the best software will likely still involve a human deciding what should be built and being very picky about how, exactly, it should externally behave.
I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.
> The AI is coming for that too.
That's where we fundamentally disagree.
Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.
Until the day LLMs are no longer statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently, for now, not by adding another 10T to the LLM parameter count.
> because it is actually a statistical machine with context limit.
So are humans.
Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)
And I argue that current LLMs have surpassed many of my capabilities already.
For example GPT/Opus can understand and document some ancient legacy project I never saw before in minutes. I would take a week+ to do the same and my report would probably have more mistakes and oversights than the one generated by the LLM.
We are not pre-trained using the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.
We are much more limited, but we fundamentally work differently. Hence adding more parameters like certain companies are doing isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something that's completely different.
I think it's doable, I just don't believe it's LLM, and I don't think anyone now knows what it is.
> We are not pre-trained using the summary of all human knowledge over all of history.
But we are? That's our education system.
The only reason school doesn't try to shove more information in our brains is because we hit bandwidth limits.
Yours is a “God of the gaps” argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.
I fall into the "pessimistic heavy user" camp. I burn thousands of dollars' worth of SOTA tokens monthly, but that just makes me more acutely aware of the limitations, of the amount of work I need to do to work around them, and of which decisions I should reserve for myself instead of trusting the LLMs.
>but not all of it, because it is actually a statistical machine with context limit.
And the human mind is not?
It’s not.
> The AI is coming for that too.
To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.
He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.
Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.
I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even known to those who are parts of it.
And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.
> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
You make it sound like it is a bad thing that certain tasks become easier.
I spent a lot of time writing CRUD stuff. Because the things I really want to work on depend on them. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?
It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because it puts people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.
If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.
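For the avoidance of doubt about what "essentially boilerplate" means in the CRUD comment above, a toy sketch of the mechanical plumbing in question (in-memory, no particular framework assumed):

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Store:
        # The whole pattern is four near-identical methods around a dict;
        # swap in a database table and it stays just as mechanical.
        rows: Dict[int, dict] = field(default_factory=dict)
        next_id: int = 1

        def create(self, data: dict) -> int:
            row_id, self.next_id = self.next_id, self.next_id + 1
            self.rows[row_id] = data
            return row_id

        def read(self, row_id: int) -> dict:
            return self.rows[row_id]

        def update(self, row_id: int, data: dict) -> None:
            self.rows[row_id] = data

        def delete(self, row_id: int) -> None:
            del self.rows[row_id]

Nothing in there is intellectually interesting, which is exactly why handing it off hurts nobody.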
>To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
One person needs to do that. The other 100 aren't doing that currently to begin with; they're doing the AI-automatable work.
> To some degree yes, in practice, not so much.
We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.
We have robots walking just fine now, by the way.
If they can do those things they can effectively replace any white collar job. That’s about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.
Imagine 45% of higher than average paying jobs gone.
If that happens we’ll either figure out a new economic system, or society will collapse.
Also, saying robots are walking "just fine" is misleading for any definition of "just fine" that is anywhere near as good as a human.
Look at how the billionaires are talking about AI: Their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them--they just need to wait for the state of the art to improve. Once AI is "good enough" if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent--the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.
I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, the world they live in right along with the rest of us will get noticeably shittier, etc.
More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.
> We're just going to eventually become economically irrelevant and die off.
As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have offered struggling workers in other sectors anything other than sanctimonious "learn to code" advice. So software folks can't expect any solidarity or help from others.
The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.
It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.
Put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.
45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?
And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.
It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.
"Automating half the jobs" is the same as "double productivity per worker".
When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!
What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?
We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.
Heck, even long before LLMs about 10% to 30% of my code was already automatically generated. By tooling, by IDLs and by my editor just being able to infer what my most likely input would be.
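Pre-LLM code generation in miniature: stamping out repetitive code from a schema, the way IDL compilers and editor tooling have done for decades. A toy sketch (the schema and emitter here are made up for illustration):

    SCHEMA = {"User": ["id", "name", "email"]}

    def gen_class(name, fields):
        # Emit a Python class with a boilerplate constructor for each field.
        lines = [f"class {name}:"]
        lines.append(f"    def __init__(self, {', '.join(fields)}):")
        lines.extend(f"        self.{f} = {f}" for f in fields)
        return "\n".join(lines)

    for name, fields in SCHEMA.items():
        print(gen_class(name, fields))

The LLM difference is one of degree and generality, not of kind.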
> We have robots walking just fine now, by the way.
I don't think you got the point I was trying to make.
True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.
>> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?
Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.
> Companies are currently too busy exploiting the local maxima of LLMs
I get the feeling we can already spot the next AI Winter. Which is okay, we need a breather, and the current technology is useful enough on its own.
> Why do we believe that LLMs are going to stop there?
Why do you believe they wont? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.
> We have robots walking just fine now, by the way.
Walking and reasoning are unrelated abilities.
Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.
What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.
>bosses
The AI is coming for those too.
>> Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
Anecdotal evidence to support this.
I work with both dev and design teams. Upper management has already gone through several layoffs and offshoring of the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets and never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management? "Start to figure out how to leverage AI; we're going to be in a hiring freeze for the next 10 months."
The double whammy of losing onshore team members and getting no help from management to fix the problem they just created, beyond essentially being told to figure out how to use AI to keep up, is pretty staggering.
I would echo what one of the devs told me: "If this is the new 'AI era' then you can count me right the fuck out of it."
>> I understand things and then apply my ability to formulate solutions
> The AI is coming for that too.
In that case all [1] non-manual work is doomed, until robotics has an LLM moment.
[1] With the exception of all fields protected by politics or nepotism.
> all non manual work is doomed
All work in general. Knowledge workers can still do manual work, and will compete to do so when there is no option to continue what they do today.
A lot of people don't seem to get that: it is easier to go from terrible to average but much harder to go from average to good.
I am sure the AI bros are the same people who were convinced consumer-grade fully automated driving was going to happen "by the end of the year" for the last 7 years.
No, I never believed the fully-automated-driving tale from Tesla, but as the LLMs improve, my personal estimate for the date of human-level AGI is rapidly moving toward "present". Before GPT-2 I had it somewhere in 2100; at GPT-2 I thought maybe by 2060 if we are lucky. Now I think it is 2035 or maybe even sooner.
I like to see the optimism, even if I don't share it. I think it's incredible hubris that humans think we are about to reinvent our own level of intelligence, just because we made a machine that talks pretty.
I remember being that kid in high school who attacked math and logic problems hard, which contributed to me becoming very technical and learning to push through painful mental challenges on the regular. Not many of my graduating class went on to become engineers, for a reason: it isn't easy work by any means, and I'm guessing it's quite draining for people who don't use their brains like we do.
So while AI will change the industry I don't see any reputable company firing the smartest ones in the room for junior level intelligence.
Even with it advancing someone has to be responsible for when it screws up which we know it will.
Not sure where I first heard this, but I say it to my team all the time: "Programming is thinking, not typing"
I know an accomplished CS professor, ACM fellow, cited in Knuth's TAOCP (as well as being an easter egg!), who still hunt-and-pecks. In fact, hunt-and-pecks incredibly slowly.
Seeing him type really reinforced this idea.
That's very true, which is why I find it insulting that so many AI proponents use the word "typing" to refer to writing code. It carries an implication that if you enjoy writing code by hand, you enjoy a mindless activity.
I've always told my Jr Engineers to "think twice, code once".
If I gave them a task and they immediately started typing it out, I would tell them to stop typing and ask them to explain to me what they were doing; they'd often just spit out what they thought the code should do, and I'd often point out edge cases they missed and would have missed had they just spit out code and a PR, wasting everyone's time. I would also insulate them from upper management to give them time to actually think (e.g. I wouldn't be coding so they could think then code).
To your point and to the GP's point, and one point I keep raising with LLM's: "typing is not where my time sinks are"
Isn't the long term trend just that we won't need as many engineers, not that there will be no more software engineers?
There's another, different loop I keep seeing, which is:
I guess to cite a counter example, unemployment is still super low and software jobs are still holding up, but the bear case is that eventually 5% of people will be able to do what people do today, and the demand for software won't grow at the same pace.
> Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
Thing is, natural selection will take care of you at the same time. Because you'll also come to rely on products they make, or services they offer, either directly or indirectly. So eventually, you too, will suffer the consequences of the enshloppification.
This is exactly it. The speed of light has not changed: we're limited by our ability to understand the system, and make decisions about what to do next. AI will speed that up, but the core work is the understanding and decision-making.
Saying otherwise is sort of like reducing the task of writing a novel to typing.
And most of the time the statistical aspect of LLMs results in a less creative solution that is more expensive to run and harder to maintain. LLMs at this stage are good at scaffolding: generating the boilerplate you do not want to write and gluing things together quickly. It just makes engineers faster.
Something missed in all this: computer science was a highly theory-driven discipline where people were taught how to think critically about solving complex problems. Industry complained schools weren't teaching enough programming skills, so they dumbed down the thinking part and emphasized the vocational part. Now the vocational part is virtually useless, and the grounding of theory applied to complex problems is suddenly really relevant again. Schools will take time to retool their programs and teaching staff, and two if not three generations of graduates will have entered a work environment that doesn't need what they learned.
As someone 35 years into my career, I agree this is the most exciting part of it. I love programming and I do it all the time, but I do it by reading code, course correcting, explaining how to think about the problems, and herding cats - just like working with a team of 100 engineers. But the engineers I'm working with now by and large listen, don't snipe me on perf reviews, and aren't hallucinating intent based on hallway conversations with someone else. This team of AI engineers can explain to me their work, mistakes, drift, etc. without ego, and if it's not always 100% correct, it's at least not maliciously so. It understands me no matter how complex the domain I reach into; in fact, it understands the domain better than I do. So instead of spending a few months convincing people with little knowledge or experience that X is a good idea, I can actually discuss X, explore whether it's a good idea or not, and make a better informed decision. I've learned more in these discussions than I've learned in decades of convincing overly egoistic juniors and managers to listen to me about something I'm an industry authority on.
However, I see very clearly that we will need very few of the team of 100 human engineers I can leave behind in my work. Some of them will be there in a decade, but maybe less than 1 in 10. This is going to be a more brutal time than the dot-com bust for CS grads, and I don't think it will ever improve, mostly because we simply won't need the "my parents told me this makes money" people; just the passionate folks will remain. But even then, we face a situation where the value of any software developed is very low because so much software is being developed. It's going to turn into YouTube, where the software that is paid for is very small relative to the quantity of software developed. We already see this in the last few months with the rate of GitHub projects created. If the value of any software created is low, the compensation of the creator will be low unless they're very rare talents.
That doesn't hold because the goal for executives is to increase revenue and the main sales pitch of Anthropic et al is to pay for agents instead of paying for engineers. That means 80% of the workforce is out no matter what. Whether or not one belongs to the remaining 20% is a different story, but obviously not all of us will be there.
> I understand things and then apply my ability to formulate solutions
AI is coming for that too. Don't be naive
It will be interesting for governments using workers as a proxy for taxing corporations.
Only 5% of your time is spent writing code? That sounds like a low estimate for most software engineers I work with.
May I ask if you could estimate how you spend the other 95% of the time?
In no particular order
The least experienced developer writes the most code. Juniors will spend the whole day in the IDE: typing, testing, typing, etc. Senior developers will go to a park for a few hours, think, then come back and spend an hour or less typing code that just works - or write nothing at all, maybe even delete code. Instead they might update documents, or ask for clarification about edge cases they found or errors in the plan that hadn't been considered.
Since software is in every industry of man, I think you'll need to mention which industry this perspective is coming from. This is definitely NOT the case in certain industries.
Commenting on Hacker News?
For those who claim to be developers who code no more than 5% of their time and resort to arguments like "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?", it's not commenting, it's shilling for the AI corpocracy on HN.
In all seriousness, communication consumes a lot of time. Meetings, emails, Slack messages, pestering stakeholders and other developers...
If you spend 95% of your time on that stuff, you better be working on like critical infrastructure where nothing can go wrong, otherwise you are in an incredibly dysfunctional company.
I agree it would be absurd for it to take 95% of your time. I have, however, seen that it takes a lot more time than one would think.
I did some contracting work for a severely dysfunctional meeting heavy organization and it was about 2 hours of meetings for every hour of real technical work!
Ah yes, agreed. If it's more than 90%, it just signals to me that a developer's skills are probably being wasted too much on business/coordination stuff.
But I guess if we mean actual time tapping your keyboard making code, then it's true some days for senior+ devs, but definitely not for technical work overall.
So about 26 hours of meetings to 13 hours of "real technical work" per week - but that's 33%, not 5%.
Even when it’s not dysfunctional, you spend a lot of time on communication and reading stuff other people wrote (including code). It’s very rare to work in isolation.
I guess it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code. If we say it's just typing, then 95% is not absurd, no.
> I understand things and then apply my ability to formulate solutions
You don't think AI is going to be able to understand things and apply their ability to formulate solutions better than you, in the near future?
You're a "developer", I guess, but not a coder (anymore), which is what your interlocutors are probably asking about. You've migrated to a middle-manager job, not something they can probably just start doing competently. Essentially you're agreeing with their initial sentiment: that coders will be made irrelevant.
I think it’s more nuanced. Even a “coder” spends the majority of their time, not coding.
You miss the major factor in your compensation: pricing pressure due to supply/demand.
By removing all the junior engineers, you've fundamentally changed the market forces longer term and most people expect that to negatively impact you in the supply demand curve regardless of whether or not the statements you've made above are true, which they most likely are for senior engineers.
In removing junior developers, leaving only senior developers, wouldn't that reduce supply, making the price go up, not down? It's been a while since Econ 101 for me though.
Because that splits into "developers" and "software engineers". And software engineering isn't going to disappear anytime soon.
Weird. I call myself a developer because I don't have an engineering degree from an ABET-accredited engineering program.
I recognize, in some capacity, that this isn't the norm and in the US "professional engineer" is protected and not simply "engineer", but it feels akin to stolen valor to me.
If there were a license in the US for it, I’d agree with you. But as is, if you are “doing” engineering, you’re an engineer.
If you are a licensed engineer of some kind, you’d state that outright.
The equivalent of stolen valor would be claiming to be a licensed software engineer; except there is no such license so it would also be fraud, misrepresentation, etc.
(I know this is different elsewhere)
> If there were a license in the US for it, I’d agree with you.
Yeah, that is basically the thing in my country. You can't call yourself an engineer without passing a test, but I can't take it because there isn't one for software engineering.
Same thing for freelancing. Freelance jobs are defined in a list, and other jobs cannot benefit from the simplified tax rules that freelancers enjoy, but that list was written before software development was a thing.
I call myself a computer programmer unless someone is asking for my official job title (software engineer)
I'm a software dev in the US and I never call myself "engineer" in that capacity. Always "programmer" or "developer".
I agree. Engineers have to clear a much higher bar. Even though my career was spent in medical diagnostic software where we had to get 510k clearance, I was still keenly aware that this was a fundamentally different activity from actual engineering.
I'm an electrical engineer who moved to software engineering, and there are a lot of commonalities between what I do now and what I did previously as an electrical engineer. The bar might seem high, but that's the only way I know how to work, honestly.
On the other hand, with the modern division of labour in a lot of companies and with the rhetoric I see here in HN and in other places: a lot of developers are indeed not even close to being engineers.
Maybe we should rejoice. I remember dreading writing documentation, and now I would happily hand that off to AI.
I dunno, man. I've been doing this for 20+ years and I think we're at a really important fork in the road where there are two possibilities.
The first is that AI is achieving human-level expertise and capability, but since they're now being increasingly trained on their own output they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything" and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts and the tooling has changed but total job market collapse is unlikely.
The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.
If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."
100% this.
I'll also add another factor: it's become increasingly clear at our company that AI-enabled humans are getting to the bottom of the backlog of feature ideas much quicker. This makes the 'good ideas' part of the business the rate limiting step. And those are definitely not increasing with AI, beyond that generated by the AI churn itself ("let's bolt on a chat experience or an MCP!")
So maybe the coding assistants don't get a 10x improvement any time soon, but we see engineering job market contraction because there aren't really enough good ideas to turn into code.
Yes, but as the price of getting work done goes down, a lot of companies that were priced out of custom software before now can hire devs, as the value hiring a few can provide just goes up. Fewer people per product, absolutely. No more teams of 10 or 20 working on the same thing. But there's so much out there that doesn't get done at all because you'd never be able to afford it.
Simple marginal thinking: When you lower the price of something, it gets more use cases. A rich person might not take even more flights because they are cheaper, but more people will consider flying when they wouldn't have at old prices
You are supposing that AI achieving human-level expertise and capability is a given. I am not so sure. Right now that's much further from the truth than one might think at first glance.
> max out at "knowing everything"
LLMs know nothing but are great at giving the illusion that they know stuff. (It's "mansplaining as a service"; it is easier to give confident answers every time, even if they are wrong, than to program actual knowledge.) Even your first case seems wildly optimistic. The second case is a lot of "maybes" and "we don't know how but we might figure it out" that seems like a lot to bet an entire farm on, much less an entire industry of farms.
We sure are looking at a shift in the job market, but I don't think it is a fork in the road so much as a Slow/Yield sign. Companies are signalling they are willing to take promises and hope as reason to cut labor costs, whether or not the results are real. I don't think anything about current AI can kill the software development industry, but I sure do think it can make it a lot more miserable, lower wages, and artificially reduce job demand. I don't think this has anything to do with the real capabilities of today's AI and everything to do with the perception being enough of an excuse; companies were always looking for that excuse. (Just as ageism has always existed. AI is also just a fresh excuse for companies to carry on aging experience out of their staff, especially people with memories long enough to remember previous AI booms and busts.)
But also, yeah, if some magic breakthrough makes this a real "buggy whip manufacturer moment" and not just an illusion of one, I don't mind being the engineer on that side of it. There's nothing wrong with lamenting the coming death of an industry that employs a lot of good people and tries to make good products. This is HN: you celebrate the failures, learn from them, and then you pivot or you try something new. If evidence tells me to pivot then I will pivot; I'm already debating trying something entirely new. But learning from the failures can also mean respecting "what went right?" and acknowledging how many people did a lot of good, hard work despite the outcome.
Saying being a programmer is about writing code is a bit like saying being an artist is about drawing lines on a canvas.
Yeah, technically drawing lines on canvases may be a very important part of being a painter, but it is hardly the core of what makes or breaks great art.
What you described are senior developers and system architects.
Junior developers spend most of their time writing code (when they're not forced to attend pointless standups, because Agile/blah/blah)
> The developers who still think their job is about writing code will perhaps not have a job in the future.
So you're saying the same thing everyone else is saying. SWEs won't go away, but they will be greatly reduced, because those whose job is about writing code -- junior devs -- will be replaced.
(How will Sr Devs in the future be created? That's the question, isn't it.)
> How will Sr Devs in the future be created?
As an extreme example, maybe we’ll see long-running internships and trainings like doctors experience. Doctors don’t start their career until ~12+ years of prep and training.
Pragmatically, software development has a lot of examples of teenagers making apps and college students building software companies. In the 12 years it takes for that training, low-knowledge workers could be continuously vibe-coding replacements for most commercial software products they'd be hired to build. So I doubt we'll treat software development as a rarified high-skill job.
The true argument is about quantity - of people, not code. All qualitative arguments are missing the point.
>- I understand things and then apply my ability to formulate solutions
I think the future is pretty up in the air in this respect, but my guess is that AI will just lead to another shift in the set of knowledge that a 'real programmer' is expected to have. I'm old enough to remember when people would make fun of web developers for 'programming' using HTML and JavaScript. And of course, back in the day, you couldn't be a real programmer unless you wrote assembly language. In a few years' time, being able to write (as opposed to read) source code in any specific programming language will probably become a niche skill. The next generation will be able to read Python to about the same extent that I can read x86 assembly.
Perceptions of what knowledge counts as 'low level' are constantly shifting. These days, if you write C, you're a low-level, close to the metal programmer. In the 70s, a lot of people made fun of Unix for being implemented in a high-level programming language (i.e. C) rather than assembly.
Note that just because you know the job is understanding things, the manager who'll boot you and leave you without income probably doesn't. They'll just get their political cookie points for saving money by replacing you with AI.
Pure wage workers should consider dropping the attitude that tech progress will just put their inferiors in the same line of work out of a job (hrmph, good riddance, etc.). Because this pseudo-progress could creep up on them as well.
Then you won’t have this just world of the deserving workers at all. Just formerly deserving workers and idiot billionaires like Musk (while the robots do all of the work).
I normally say that I have zero concerns regarding AI in terms of employment. At most I am concerned in learning the best practices on AI usage to stay on top of things.
Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.
A moderately useful tool, but hellishly expensive when not being subsidized by imbeciles who dream of it undermining labor. A fool and his money should be separated anyway.
What I am really concerned about is the incoming economic disaster being brewed. I suspect things will get very ugly pretty soon.
In my experience, it's been the complete opposite. The very experienced engineers that are actually willing to use top of the line tooling are much better than they were before, including those that are over 40, and over 50.
Part of the practical degradation of traditional programmers over time has always been concentration and deep calculation, just like in chess. The old chess player knows chess much better than a 19 year old phenom, but they cannot calculate for that many hours at the same speed as before, so their experience eventually loses to the raw calculation. Maybe at 35, or at 45, but you are just not as good. Claude Code and Codex save you the computation, while every single instinct and 2 second "intuition", which is what you build with experience, is still online.
It's not just that it's a fairer competition: it's now unfair in the opposite direction. The senior who before could lead a team of 6 is now leading a team of agents and reviewing their code just as before. Hell, it's easier to get the agent to change direction than most juniors around me, who are not easy to correct with just plain, low-judgement feedback.
But when a senior can do the job of 6 coworkers, what do you suppose will happen to the coworkers?
In farming, those who were replaced by tractors did not keep their jobs. What is different now?
They build tractors, or sell tractors, or work in agricultural research and development...
I highly doubt that a significant portion of farm labor became salesmen or researchers. Builders? I could see that, but robots already replaced a portion of those too.
Well, if that's the case, then in your framing the issue isn't what will happen to the programmers, but rather to all work in general.
Less job creation is almost certain for tech, but some people with high IQ get way more things done; they already do. This will spread to robots and other areas. Robots are not autonomous yet, and maybe that will take decade(s), but meanwhile a few operators will lead them in a more productive way. That's my bet. It's a clear, logical process with iterations. A lot of things are getting faster with AI, except energy production in some places in the world!
Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.
It also feels like the hiring "signal", which was always weak before, is just completely gone now, when every job you do advertise receives over 500 LLM written applications and cover letters that all look and feel the same.
The pro-athlete comparison in this article is a bit silly IMO - there are obvious physical issues that occur with aging if you rely on your muscles etc. to make money. If you compare to other fields of knowledge work, such as law or medicine, there are loads of examples of very experienced, very sharp operators in their 40s and 50s.
Honestly, AI doesn't feel like it's affecting hiring needs from the trenches. We don't have engineers sitting on their hands because AI wrote up everything the leadership could imagine.
Instead, we get an economy that feels like it's on the cusp of a fall or at the very least a roller coaster. Poor tax incentives to hire. ZIRP is long gone. And the hiring managers are overrun with slop.
But bosses are happy to say it's AI, because that makes them sound in control.
Thank you. It's been all but confirmed that a lot of "AI layoffs" are due to reaching a workforce equilibrium after the COVID-era over-hiring.
Saying AI for anything, good news or bad news, is a get out of jail free card for execs who want to appease shareholders.
> Anecdotally, it feels like something materially changed in the US software hiring market at the start of this year to me. It feels like more and more businesses are taking a wait and see approach to avoid over-investing in human capital in the next few years.
My guess is companies overhired in COVID and between that experience and an uncertain market they don't want to make the same mistake twice.
Where did the excess labor force suddenly materialize from during Covid?
the "learn to code" campaign began ramping up in 2013. If you started undergrad in 2016 you would've graduated right into the covid market.
https://en.wikipedia.org/wiki/Learn_to_Code#Policy_impact
I think the hype peaked around 2016 where Democrats were portrayed as out of touch for saying laid off coal miners could just "learn to code". By 2019 it was a cliché used to mock laid off journalists on Twitter.
2008 had ~30k CS graduates.
2015 had ~50k CS graduates.
2021 had ~100k CS graduates.
You can extrapolate the rest.
That's only a fraction of all the layoffs.
Someone can graduate only once--or not at all--and be laid off multiple times. :p
This is a great question that rarely gets answered. It's partially that a ton of students went to school for computer science because they saw how much money could be made; another fraction is people who switched into software from related fields, maybe with a boot camp or something.
It didn't. The elites never want to admit that they have failed to efficiently use capital for the last 40 years. It's always the fault of workers that should never be trusted. Just continue trusting the elites as they ruined US manufacturing jobs, surely the same institutions won't fail the workers again!
> If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.
My parents were both construction workers. There is an understanding that you cannot lift heavy objects forever. You stop lifting objects and move to being a foreman, a supervisor... and if you are uncomfortable learning to get others to do the work you previously did yourself, you burn out your body entirely and the consequences are horrible.
This is factual reality, but it is also a parable that has been important for me to internalize about delegation in my own career. It is not irrelevant to AI use, but I don't think it maps onto it quite as neatly.
Software developers are more architects than plain programmers. You wouldn't make an architect lift heavy things; you want them to design how those heavy things are used.
> AI-users thus become less effective engineers over time, as their technical skills atrophy
Based on my experience, I think this will prove more true than not in the long run, unfortunately.
Professionally, I see people largely falling into two camps: those that augment their reasoning with AI, and those that replace their reasoning with AI. I’m not too worried about the former, it’s the latter for whom I’m worried.
My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth. Maybe it’s just the new “you can’t cite Wikipedia”, but she feels that since the pandemic, there’s a notable decline in the critical thinking skills of children coming through her classes.
We have a whole generation (or two) of kids that have grown up being told what to like, hate, believe, etc. by influencers and anonymous people on the internet. They’d already outsourced their reasoning before LLMs were a thing. Most of them don’t appear to be ready to constructively engage with a system that is designed to make them believe they are getting what they want with dubious quality.
> My mom is a (US public school) high school teacher, and she vents to me about the number of students who just take “Google AI overview” as an absolute source of truth.
I notice many of the adults in my life are doing this now as well.
From Reddit:
> After being laid off, a programmer becomes a welder. One day while working, he suddenly muttered to himself, "It's been so long, I've even forgotten how to solve three sum". A coworker next to him quietly replied, "Two pointers".
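(For anyone whose memory has rusted like the welder's: a minimal sketch of the two-pointer approach to three-sum in Python, purely for illustration. Sort the array, fix one element, then walk two pointers inward.)

```python
def three_sum(nums):
    """Return the unique triplets in nums that sum to zero (two-pointer approach)."""
    nums = sorted(nums)
    result = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate anchor values
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < 0:
                lo += 1   # sum too small: move the left pointer right
            elif s > 0:
                hi -= 1   # sum too big: move the right pointer left
            else:
                result.append((nums[i], nums[lo], nums[hi]))
                while lo < hi and nums[lo] == nums[lo + 1]:
                    lo += 1   # skip duplicates on the left
                while lo < hi and nums[hi] == nums[hi - 1]:
                    hi -= 1   # skip duplicates on the right
                lo += 1
                hi -= 1
    return result

print(three_sum([-1, 0, 1, 2, -1, -4]))  # [(-1, -1, 2), (-1, 0, 1)]
```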
I keep reading about how AI will be fine because people can just retrain for different careers. However, I never read what those careers are or who is going to pay for retraining.
I certainly don't have the money or time to go back to college and start a new career at the bottom.
The argument is that “that’s what always happened in the past”.
Which is true, but it’s true as long as it’s not true.
The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.
At a mechanistic level, the "we have always found other jobs" argument misses that the reason we've always found other jobs is that humans have always had an intelligence advantage over automation. Even something as mechanical as human inputs on an assembly line was eventually dependent on the human ability to make tiny, often imperceptible, adjustments that a robot couldn't.
But if something approximating AGI does work out, human labor has absolutely no advantage over automation so it’s not clear why the past “automation has created more human jobs” logic should continue.
> Which is true, but it’s true as long as it’s not true.
It also isn't true. The story of the last 50 years has been that technology, especially computer and communications technology, has facilitated the concentration of wealth. The skilled work got computerized, or outsourced to India or China. That left U.S. workers with service jobs where they have much lower impact on P&L and thus much less leverage.
In my field, we used to have legal secretaries and law librarians and highly experienced paralegals. They got paid pretty well and had pretty good job security because the people who brought in the revenue interacted with them daily and relied on them. Now, big firms have computerized a lot of that work and consolidated much of the rest into centralized off-site back-office locations. Those folks who got downsized never found comparable work. They didn't, and couldn't, go work for WestLaw to maintain the new electronic tools. The law firms also held on to many of them until retirement or offered them early retirement packages, and then simply never filled the positions. It used to be a pretty solid job for someone with an associates degree, and it simply doesn't exist anymore.
The only thing keeping the job market together is the explosion in healthcare workers. My Gen-Z brother and sister in law are both going into those fields. In a typical tertiary American city, the largest employers are the local hospital and perhaps a university or community college. Both of those get most of their revenue directly or indirectly from the government. It's not clear to me how that's sustainable.
>It's not clear to me how that's sustainable
If it makes you feel better, I'm pretty sure it isn't sustainable. (But I'm not an economist so take that with a block of salt.)
I don't think anyone has the answers. It's just some of us are honest enough to concede we have no answers, while others promote an answer that aligns best with their belief system.
"It'll all work out."
"It's the immigrants/blacks/jews/whatever dragging us down."
"Nothing's going to happen and we can all continue doing the work we always have."
"Burn the rich."
Etc etc.
Not a lot of serious attempts out there at even getting a hand on the issues, let alone fixing the issues.
I'm also pretty sure that in past industrial transitions, many of the people who lost their jobs at the start of the change never found better ones. It took a generation or so for new opportunities to really be found and fine-tuned, and you're competing for those new roles with younger people anyway.
If AI does take a lot of white collar work, is it a lot of comfort that maybe jobs in a very different sector will be better in 20 years?
Did the younger people find better jobs? You used to have all these jobs for people who were maybe a bit smarter than average with good judgment. In the 1990s, the local community college used to advertise associates degrees for paralegals. That's a job that doesn't exist in the same way anymore thanks to computers. Now it's become an internship for kids with top credentials before they go to law school. Which is fine for them, but what about everyone else?
It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.
>It seems to me like all of these people are flocking now to healthcare fields. That seems totally unsustainable.
Why? There will never be a shortage of sick/dying people. So medical staff, and also undertakers, aren't going anywhere.
My understanding is that healthcare keeps growing because the large Boomer generation is aging. When they have passed though, then we should see a corresponding slide in healthcare growth
Because most healthcare spending comes from tax dollars.
Not in all past industrial transitions.
But yes, the argument has been wrong often enough that the people still repeating it as a rule should be mocked and ashamed.
It’s also not that true, and highly dependent on a lot of factors.
Anecdotally I see a lot of schadenfreude online about tech jobs after a decade or two of lecturing everybody from Appalachians in coal country, to Midwestern autoworkers, that they should just “learn to code.”
Totally agree, and would add another way “that’s what always happened in the past” is a terribly weak argument. Things might have always worked out at the societal level so far, but very often do not at the individual level. Countless successful craftsmen have had their livelihoods ruined by technological changes and spent their remaining years impoverished. How many people funding AI would be willing to throw their own life away for the good of some future strangers that may or may not be born? I'm pretty sure the answer is <=0.
> The classic example of how drastically this kind of thinking can fail is Malthusian theory, that populations would collapse because food growth was linear while population growth was exponential. This was true for all of history until Malthus actually made this observation.
The Malthusian observation can still be true... It only has to be true once, and the only reason people say it isn't right now is industrial fertilizers and short memories.
It's not going to happen, just as it didn't happen for skilled industrial workers whose jobs got outsourced to China. The government will pay just enough in welfare to keep the situation manageable. Then they'll demonize you in the culture, as a Luddite, etc.
> However, I never read what those careers are or who is going to pay for retraining.
There aren't any careers, and if there were, you would have to pay. Corporations certainly won't, except in extremely rare situations where they have to in order to compete.
I think the idea of being an employee is fundamentally changing. Not saying it's good or bad, but it's shifting to a more entrepreneurial phase where people have to step out of their 9-to-5s and find ways to deliver value that others want to pay for.
We saw this pre-AI with Uber and DoorDash. I think as AI automation dies down and most companies are competing at a near-optimal level with the new tools, we'll need humans again in more traditional roles to build the next generation of innovations. And then the whole cycle will repeat.
That's a lot of SV-speak. How exactly do people step into an entrepreneurial phase? They're at work in corporate settings with fixed, defined roles. Most workplaces are not many-hat-wearing startup environments, but restricted roles with deliverables, deadlines, meetings, etc. Which leaves only out-of-hours time for "entrepreneurship", whatever that is.
GitHub project work on the weekends? That's not possible for most people in their mature/family years (or shouldn't be necessary - what about living life??)
Uber and Doordash are both examples of abusing workers and their resources to externalize costs on the worker.
What about people who have been out of work for a year and all they can do right now is deliver for Uber and Doordash so they can make rent and put some food on the table?
Is it ideal working conditions? No, but its better than nothing, you can set your own hours, and you can leave when the next opportunity comes.
His point is that it's not entrepreneurship, it's employment.
> We saw this pre-AI with Uber and DoorDash.
Oh, yeah? Did the Uber drivers and door dashers accrue the surplus value?
Yeah, that's just a copium answer from people who simply want to hand wave away the issue instead of admitting they have no good answers.
Like a politician who's asked about this in a town hall, but thinks that "our plan is to do absolutely nothing" doesn't sound very appealing.
The same is true of the industries that software disrupted.
The second part seems obvious to me: the ones who are getting retrained. If it's some kind of formal education, depending where you are, maybe the state at least for part of it.
Education in what, though? No idea. And if there was one answer, and it's true that there will be fewer software developers, you'd likely be competing with many people for few jobs.
> I never read what those careers are
Exactly. I have yet to read a single logically sound argument that even gives a hint of what those professions/jobs might be (remember, they have to be plentiful enough to employ large numbers of people, so "I quit my corporate job and making more as a TikTok influencer" doesn't count). Remember that a new profession has to open up new hitherto unknown revenue streams otherwise there are no companies who will pay you.
At least in the US, the only major non-AI growth field seems to be healthcare to deal with the swell of baby boomers living longer than people have before.
But if we're waiting to be paid to retrain there, I wouldn't hold our collective breath.
Baby boomers have already started the phase of dying, though. The next generation is still going to be right there, but that generation is smaller. People will always be dying. Still, I wouldn't hold my breath if you're a young person in that field. Maybe, but maybe not.
Also, it's not necessarily true that there will be other great careers available. This seems to just be an assumption people are making.
Of course, there are jobs that will still require human labour for some time yet, but in reality a lot of jobs that require physical human labour are now done in other parts of the world where labour is cheaper.
Those which cannot be exported like plumbing or waitressing only have limited demand. You can't take 50% of the current white-collar workforce and dump them in these careers and expect them to easily find work or receive a decent wage. The demand simply does not exist.
Additionally, at the same time as white-collar jobs are being lost, an increasing number of "low-skill" manual labour jobs are also being automated. Self-checkout machines mean it's harder to get work in retail, robotaxis and drone delivery will make it harder to find work in delivery and logistics, and robots in warehouses will make it harder to find warehouse jobs.
It seems to me there is an implicit assumption that AI will create a bunch of new well-paid jobs that employers need humans for (which means AI cannot do them) and which cannot be exported abroad for cheaper. What well-paid jobs would even fit the category of being immune to both AI and outsourcing? Are we all going to be really well-paid cleaners or something? It makes no sense.
A lot of the advice we're seeing today about retraining as a construction worker or plumber seems to assume that there's unlimited demand for this labour, which there simply is not. And even if hypothetically there were about to be a huge increase in demand for construction workers, it would take years just to get the machinery, supply chain and infrastructure in place to support millions of people entering construction.
The most likely scenario is that people will lose their jobs and will be stuck in an endless race to the bottom fighting for the limited number of jobs that are left in the domestic economy while everything else is either outsourced or done by robots and AI.
The better advice is to start preparing for this reality. Do not assume the government will or can protect you. When wealth concentrates, corruption becomes almost inevitable, and politicians have families to look after too.
Please take this seriously. Even if I'm wrong, it's better to prepare for the worst than to assume everything will be fine and you'll be able to retrain into a new well-paid career.
+1
50% of the workforce was in farming near the end of the 1800s; today, 2%.
40% of the workforce was in manufacturing in the early-to-mid 1900s; today, 8%.
60+% of the current workforce is white collar. What will it be in 20 years?
LLMs are only a couple of years old; we have no idea where this will go. Maybe it will be a big hallucination, maybe we are looking at the very early version of farm and manufacturing machines.
The ENIAC was larger than a person; we now have watches that are significantly more powerful. Maybe in the future your Apple Watch will have more compute than several racks of H100s.
When they came for the farmers, no one else cared - everyone got cheap and bountiful food. When they came for the manufacturers, no one else cared - everyone got cheap and bountiful products. Now they are coming for the white collar workers, and their highly paid laptop lifestyles.
Who is left to care ? The billionaires ?
The most future-proof “career” right now is having money. At least multiple million dollars. That’s a skill that is very much in demand.
Whoo, deff a field where I would try breaking in.
This is the story that's been written since the Luddite revolts, as far as I know. The successors in that case were the capitalists, who spent a significant amount of time and money convincing the constabulary and political figures to side with them. In the worst cases, people were shot and jailed. In the best case, workers were left without work or sent off to workhouses where they became indentured servants to the state.
The last workhouse closed in the 1930s.
That all started not because people were afraid jobs were going to be replaced by the new loom. People had been using looms for centuries. They were protesting working conditions: low wages, lack of social protections when people were let go, child labor, workhouses, etc. There were no labor laws at the time to protect workers... but there were these valuable new machines that the capital owners valued greatly. The machines were destroyed as leverage: a threat.
Since the capitalists ultimately "won" that conflict, it has been written, by technocrats, that technological progress is virtuous and that while workers will initially be displaced, the benefit to society will be enough that those displaced will find productive work elsewhere.
But I think even capitalist economists such as Keynes found the idea a bit preposterous. He wrote about how the gains in productivity from technological advances aren't being distributed back to workers: we're not working less, we're working more than ever. While it isn't about displacement of workers, it is displacement of value and that tends to go hand in hand.
I think asking, "Where do I go?" is a valid question. One that workers have been trying to ask since the Luddites at least. Unfortunately I think it's one that gets brushed under the rug. There doesn't seem to be much political will to provide systems that would make losing a job a non-issue and work optional.
That would give us the most leverage. If I didn't have to work in order to live I could leave a job or get displaced by the latest technological advancement. But I could retrain into anything I wanted and rejoin the work force when I was good and ready. I wouldn't have to risk losing my house, skipping meals, live without insurance, etc.
AI cannot create art by itself.
I really wish seemingly intelligent people would stop using the abstraction analogy (like the article does). The key word is: determinism. Every level of abstraction (incl. power tools, C, etc.) added a deterministic layer you can rely on to more effectively do whatever it is you're doing - same result, every time. LLMs use natural language to describe programming, and the result is varied at the very best (hence agents, so we can brute-force the result instead). I think the real moat is becoming the person who can actually still program.
People always say this but it’s misguided imo. Yes LLMs are not deterministic, but that’s totally irrelevant. You aren’t executing the LLMs output directly, you’re using the LLM to produce an artefact once that is then executed deterministically. A spec gets turned into code once. Editing the spec can cause the code to be updated but it’s not recreating the whole program each time, so why does determinism matter?
In my experience, I'm using LLMs as my abstraction to "junior engineer". A junior engineer isn't deterministic either. I find that if you treat the LLM output like a person's output, you're good. Or at least in my projects, it's been very successful. I don't have it generate more code than I can review, or if I give it a snippet to help me fix it, if it ends up re-writing it like an ambitious engineer would do, I tell it to start over and make minimal changes.
I guess I'm not spun up about the determinism because I've been working at the "treat it like a person" level more than the "treat it like a compiler" level.
To me, it's really like an engineer who knows the docs and has a good memory, rather than an infallible code generator.
I work at a small company, so we don't have tons of processes in place, but I imagine that if you already had huge "standards" docs that engineers need to follow, then giving the LLM those standards would make things even better.
The thing is, you can quickly teach a junior how to respect a specification contract, so that with very minimal oversight you get the implementation you wanted. And after a few years (or months), the communication overhead gets shorter. What would have been multiple rounds of meetings and review sessions becomes a short email and one or two demos.
Try distributing this spec amongst your team members and ask each of them to drive it to completion. No follow-up edits. Deploy to individual environments and then run a rigorous test suite against all of the deployments. See if all of them behave the same way.
If it's not deterministic you can never fully trust it. In a deterministic abstraction I don't need to audit the lower levels.
Who said you need to trust it? Reviewing code is still way faster than writing code.
> Reviewing code is still way faster than writing code.
Writing code results in a much better understanding of the code than reviewing it
In fact, I would say that in large, complex codebases, developing the same understanding of what the code is doing might actually take longer than writing it from scratch would have
Exactly, the argument makes sense if it's about inference at runtime
But that's not the case here
This is the way LLMs _should_ be used: as an assistant to create reliable, deterministic code. And honestly, they're fantastic when used this way. Build the thing you need with the LLM, then put the LLM away.
But in practice, the current obsession with agents means people are creating applications that depend entirely on sending requests to LLMs for their core functionality. Which means abandoning the whole idea of deterministic software in favor of just praying that all of the prompts you put around those API requests will lead to the right result.
How do you know the artifact is correct?
I see what you're getting at, but determinism isn't the right word either. LLMs are fundamentally deterministic -- they are pure functions which output text as a function of the input text and the network parameters[1]. Depending on your views on free will, it could be effectively argued that humans are deterministic as well.
The concept you're touching on is the idea that LLMs (and humans) are functions which are inscrutable. Their behavior cannot be distilled into a series of logical steps that you can fit in your head, there are no invariants which neatly decompose their complexity into a few interpretable states, and the input and output spaces are unstructured, ambiguous, underspecified, and essentially infinite. This makes them just about impossible to reason about or compose using the same strategies and analysis we apply to traditional programs.
[1] Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice. I can't imagine there are many workflows which feed an LLM the exact same prompt multiple times and rely on the output having some statistical distribution. In fact, even if you wanted this you may just end up getting a cached response.
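(A toy sketch of that framing, illustrative only; the "weights" table and function names here are made up and stand in for no real model's API. Greedy decoding is a pure function of (prompt, weights); nondeterminism enters only through an optional, seedable entropy source.)

```python
import random

# Stand-in for network parameters: a fixed next-token distribution.
NEXT_TOKEN_PROBS = {
    "the": [("cat", 0.6), ("dog", 0.3), ("compiler", 0.1)],
}

def decode_greedy(prompt):
    """Pure function of (prompt, weights): identical output on every call."""
    return max(NEXT_TOKEN_PROBS[prompt], key=lambda pair: pair[1])[0]

def decode_sampled(prompt, seed):
    """Sampling injects entropy, but fixing the seed makes it reproducible."""
    rng = random.Random(seed)
    tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
    return rng.choices(tokens, weights=weights)[0]

print(decode_greedy("the"))       # always "cat"
print(decode_sampled("the", 42))  # the same token on every run with seed 42
```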
Let's be real: if you and I both ask Claude to generate a feature on the same project, what are the chances that it spits out 100% identical code? But if we build the project using a Dockerfile, we will get the same binary and the same image. Products built around LLMs are non-deterministic, unlike compilers.
> Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice
Everyone added /dev/random to their offerings, so every LLM coding tool is non-deterministic.
There's something to be said about the fact that the very people who would use deterministic layers to build stuff are... non-deterministic. We, as humans, have our set of pros and cons, wins and failures. Even the most brilliant coders on earth will make mistakes from time to time. I often fail to see this getting accounted for in any conversation critiquing LLMs, as if we humans are not flawed in our own ways, with a huge degree of variance across individuals. Good and bad code existed prior to LLMs. If you're hiring someone to write code, you're basically using some heuristics to trust that this person will do a good job. But nothing is ever guaranteed 100% deterministically. Without stretching the point too much: LLMs will sometimes produce better code and manage systems better than some people who are earning salaries out there. Possibly sub-par developers, to be precise, but professionals in the meaning of the word (people being paid to do the work).
At the end of the day, what matters is how willing the person behind a given task is to deliver quality work, how transparent and honest they are, how well they understand requirements, and how pleasant they are to work with alongside other humans. AI/LLMs are just extra tools for them. As crazy as it might sound, not many people are willing to push boundaries and deliver great work. That is what makes the difference.
I grant that there's a definition of abstraction that LLMs don't fall into. But people describing LLMs as another abstraction layer aren't all misunderstanding this. Instead, they are using the term ... more abstractly.
EG: How did Mark Zuckerberg make software five years ago?
He's as capable of opening up an editor as I am, but circumstance had offered him a different interface in terms of human resources. Instead of the editor, he interacts with those humans, who produced the software. This layer between him and the built systems is an abstraction, deterministic or not.
Today, you and I have a broader delegation mandate over many tasks than we did a few years ago.
LLMs don't have to achieve perfect reliability to replace lots of work. They just have to reach the balance of reliability and cost suitable for a given task. This will depend on the task.
every time a person uses the abstraction argument, an angel dies
> The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties,
This sounds ageist - I'm around 40 and feel I am at my mental peak, compared to even my mid 20's. This isn't a good analogy at all, the brain doesn't "wear out" like a professional athletes' body does, it just changes its structure. The brain is a remarkable organ.
He just means: by this age you've probably found your preferred title and level, unless you want to rise to more C-level / executive positions, which are rarer in any case and most folks don't want.
This is definitely more charitable, but isn't this already the case now? It seems he's saying that past your mid-30s you'd no longer be viable as a software engineer. That's never been the case, and I'm not sure why it would suddenly be the case now.
Even clearer: if you don't adapt to the changes taking place in the field, there might not be a future for you. It's not about age, it is about attitude and flexibility (which are, admittedly, issues when getting older).
In other words, if you want to continue stubbornly typing out code by hand, the person right over there has already mastered agentic tooling and is doing vastly more than you, more quickly and with greater precision, and will simply be a more fit candidate to hire. Roles for this type of legacy, stubborn personality will become fewer and fewer, and you will age out as part of the old school.
I see what you're getting at, but if it's not about age, why use an age-related analogy? I probably should have amended my first statement in this thread to say that it sounds ageist, even if it only implies that the people who refuse to adapt will be older. This day is already here; people are already adapting. He seemed to frame it as if people starting their careers in their 20s now will have this limited timeframe of productivity.
If by software engineering, one means typing code character by character into a text editor, sure it's going to be difficult to find someone to pay you for it.
If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.
We are experiencing what Civil Engineers experienced going from slide rules to calculators. Or electrical engineers going from manual circuit path drawing to CAD tools.
The interesting thing to me is that, Software Engineering will have to evolve. Processes and tools will have to evolve, as they had evolved through the years.
When I was finishing university in 2004, we learned about the "software crisis" era, the waterfall development process, and how new "iterative methods" were starting.
We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.
Engineering methods also evolved, with UML being one infamous language, but also formal methods such as the Z language for formal verification, or ABC and cyclomatic complexity measurements of software complexity.
Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would have continued writing in Assembler). So now we have to change the abstractions we use to communicate our intent to the computers, and to verify that the final instructions are doing what we wanted.
I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see more Engineering (real engineering) rigor in the Software Engineering profession. Even if there will still be coders, the same way there are non-engineers building and modifying houses (common in Mexico, at least).
Calculators and CAD tools do not give you non-deterministic answers. Both of them simply automate part of the manual work without creating anything "new". I haven't used CAD tools, but I did use some level editors such as TrenchBroom -- I think what is automated is the 3D shapes you want to make. Back in '96, when id Software was creating Quake, there were very few pre-drawn shapes in the level editor and they had to make the blocks themselves, so it was very difficult and time-consuming to make complex shapes such as curved walls and tunnels. Then better tools were invented, and now it is much easier to create a complex shape. But you don't type "a Quake level with theme A, and blah blah" and then get a more or less working level -- this is what AI is doing right now.
I think the right analogy to calculators and CAD tools is an IDE with IntelliSense for SWE -- instead of typing code char by char, we can tab to automate some part of it.
But I agree with your conclusion -- SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit it out to retirement.
> non-deterministic answers
It doesn't make sense to get hung up on this aspect of LLMs. We prefer non-deterministic sampling so far because it tends to work slightly better, even though it is completely possible to ask for a temperature=0 deterministic answer.
With more scale and research, at some point you'll get results that are both useful and deterministic, if it's not already the case.
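(For what it's worth, hosted APIs already expose the knobs. A minimal sketch assuming the OpenAI Python SDK; the model name is just a placeholder, and in practice temperature=0 plus a fixed seed is best-effort reproducibility rather than a hard guarantee.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Write a haiku about compilers."}],
    temperature=0,        # greedy-ish decoding: always prefer the top token
    seed=42,              # request best-effort reproducibility across calls
)
print(resp.choices[0].message.content)
```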
It absolutely makes sense to get "hung up" on something when it comes to planning society around it, JFC. I'm with the other commenter; your understanding of these tools should be taken into question, since you seem to be reading the tea leaves of statistical noise.
In 2020, there are two companies that are competitors with each other. They each employ 100 programmers to do their job, and we all know how those organizations operated; perpetually behind, each feature added generating yet more possible future features, we've all lived it and are still largely living it today.
In 2026, both companies decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.
Company 1 fires 90 of their programmers and does the same work with 10.
Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.
Who wins in the market?
Of course the answer is "it depends" because it always is but I would say the winning space for Company 1 is substantially smaller than Company 2. They need a very precise combination of market circumstances. One that is not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.
In the time when the acceleration is occurring and we haven't settled in to the new reality yet the Company 1 answer seems superficially appealing to the bean counters, but it only takes one defector in a given market to go with Company 2's solution to force the entire rest of their industry to follow suit to compete properly.
The value generated by one programmer that can possibly be captured in that programmer's salary probably isn't going down in the medium and long term either.
Your hypothetical ignores the distribution of programmer talent. Company 1 can pay more per person and hire 10x programmers, who can then leverage AI to produce the same or more as Company 2.
We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.
Of course it does. It ignores a lot of things. Mostly I just want to present the view that things aren't entirely hopeless and the entire industry is doomed to contract by 90% because of AI. Your legal system point also fits in precisely with what I'm trying to convey, just in a different direction.
I think paying significantly more was a very localized thing that happened for AI researchers who were familiar with the alchemy that made GPT4 suddenly work much better than anything else seen before.
This resonates strongly with me, in that all that extra margin has to be spent on something other than dividends
My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living. In the past, the jobs created by automation have generally been lower paid with less autonomy.
> My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living.
You might need to relocate to a place with much lower costs of living.
This was the idea behind remote working discussed during COVID-19 times:
- the company can pay less money because the employee is living at a much cheaper place than the expensive city where the company is located
- on the other hand, even with a smaller salary, the employee has more money at the end of the month because of the smaller costs of living
So both sides win.
Ignoring the preference of people generally wanting to live in HCOL areas, this only works if every company hires equally from LCOL areas. One of the benefits of living in a HCOL area is access to the job market it provides. It's much easier to get hired for a software position living in San Francisco than it is living in Deming, New Mexico.
More importantly, in San Francisco there are a lot more opportunities than in Deming. I've never been to either city (I'm not going to count the conference I was at, because I never left the hotel). However, I can still tell you confidently that if you have a weird hobby, you're much more likely to find other people with that interest, stores that sell the things you need for the hobby, and all those other things in life that you want. If you love doing the types of things people in Deming do, well, it's a great life, I'm sure. However, as soon as you want to do something off the wall, you may not even find enough people in Deming to field a cricket team, while I have no doubt that San Francisco has a team you could join.
But moving to a lower-COL area can reduce the amount of public and private services one gets access to, no? Network connectivity, for example, will likely be worse out in the sticks.
Unfortunately, in America places with low cost of living are generally, to put it diplomatically, unpleasant places to live. That's even more the case if you don't fit into the white, cis, straight, and Christian box that rural areas are willing to accept.
> Unfortunately, in America places with low cost of living are generally, to put it diplomatically, unpleasant places to live.
This will change for the better if more and more educated people relocate there.
And then those areas become more expensive...
But at least stereotyping happens everywhere!
I like how the assumption was they were all white, Christian and rural.
This problem is not a software engineering problem nor an AI problem but a problem of the balance of power between working hard vs. investing. If the people who believe in working hard organize and slow down the tendency to rig everything for investors, then the markets should stabilize at a more generally prosperous place.
The balance of power is dictated by economic facts, not by organizing or politics. Auto workers in 1950 weren't better organized than auto workers in 2026. They just had more leverage because they weren't competing with auto workers in China. Likewise, Silicon Valley isn't paying people writing web apps $$$ because those workers are organized. They are doing it because they don't have a feasible alternative. If AI enables them to do more with less, they'll take that option.
Creating more software does not solve anything if that software is mostly a functional duplicate of other software. Or, in other words, all companies re-invent the wheel many times over. It doesn't matter if you 10x the development of software that brings nothing new besides being written in a shiny new framework.
We should, IMHO, start getting rid of most software. Go back to basics: what do you need, make that better, make it complete. Finish a piece of software for once.
s/software engineer/secretary/
s/creating software/typing correspondence/
In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.
It could go either way. Don't forget that LLMs also have expertise in the other domains. Who would do better - the chemist with vibe coded app or the developer with vibe coded chemistry?
My premise is that a vibe-coded app will be indistinguishable from a ‘hand-crafted’ one. So in that scenario the chemist wins, because the developer has no value to add.
It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.
There will be more software in the same way there is more agricultural output today.
The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than true, as far as I can tell. In fact, it's quite hard to point to any historical examples of this happening. In general, labour demand significantly decreases when productivity significantly increases, and typically people need to retrain.
Except we are now in the golden age where people with 20 or 30 years of experience know what quality software is - or at least what it is not. So they are able to steer the LLMs. Once this knowledge is gone - the quality could go downhill.
Unless I'm missing something, there's an obvious logic issue here.
If we truly need to sacrifice our skill to be productive by using LLMs that atrophy us, then the only devs that have a limited lifespan are us. The next ones won't have a skillset to atrophy since they won't have built it through manual work.
Also, I hereby propose to publicly ban the "LLMs generating code are like compilers generating machine code" analogy, it's getting old to reargue the same idea time after time.
why is the LLM-compiler analogy flawed? Is it only because LLM output is non deterministic?
Besides the probabilistic and non-reproducible output, programming languages are designed to be unambiguous and explicit, and human language doesn't have that.
for(){} normally either is undefined or has a specific meaning. "Then iterate and do x" might mean many subtly different things.
Most programmers never deal with a compiler bug in their whole career, and can dismiss the possibility. For LLMs it would be hard to even define what a "compiler bug" would be since there is no specification for English.
Then there's the fact that models generally don't guarantee anything at all. Sonnet can change under your feet.
Models also degrade as the context window gets larger. Compilers handle one line just the same as 20.
I could keep going, there's so many fundamental differences in the process that the analogy only serves to provide a false feeling of security.
Because you don't have to coax, trick, or guide your compiler into doing the right thing.
Clearly you are not a C++ programmer. :)
Maybe C++ compilers would benefit from asking an LLM to rewrite their messages in a way that makes the point clearer...
But the GP's point stands.
I don't think it's just that. There's also the fact that if you're working with C or C++ or any systems-level language, you typically know how to read assembly because you've stumbled upon it for some reason, and if you're writing low-level programs (which is typically what these languages are used for), you will definitely at some point need to read assembly and maybe even write some. But with LLMs, the entire field has shifted. You don't need to know anything to write any language, you don't even need high-level computer science knowledge nowadays to get something that works, and the world increasingly just seems to want something that works.
I have mentioned it several times lately, but if the analogy was correct, people would be committing prompts and not code. High-level source code gets committed, binaries don't. If prompts were really "just a higher level of abstraction", then there wouldn't be a need for saving the code. Or at least you'd see people publish their prompts and chat history alongside the code.
Compiler: "Here is an exact program. Translate it while preserving its meaning."
LLM code generation: "Here is an intent/specification. Invent code that hopefully satisfies it."
Does the compiler analogy provide value under those terms? I don't think it does. In fact, I think it provides negative value.
We don't need to use tortured analogies to express excitement over these tools.
We're forgetting one thing: we (mere engineers) have control over nothing. The vast majority of us are at the mercy of executives and investors. Before AI we had some sort of grip because our skills weren't so much a commodity, and yeah, dealing with code and systems architecture and data and distributed systems wasn't that easy. Now AI is a tool not for us but for the higher-ups, they can finally commoditize software engineering and need only a small fraction of us. I see engineers around here fighting and discussing who'll be left behind (the 80%) and who'll remain because they're "more than mere coders" (the 20%)... what we don't discuss here is that we're all now at the mercy of Anthropic et al, and that's bad. The irony is that the vast majority of us use Anthropic, so we are just loading the guns for them to use them. It's sad, but we call it progress. Nuts
Was it ever a lifetime career? Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers. Ageism is real in this industry. You either save up enough money to retire early, switch into management, or get forced out of the industry eventually. AI is just accelerating the trend. I see very few junior engineers resisting AI. I see a LOT of staff+ engineers resisting it. Just look at the comments on HN. Anti-AI sentiment is real.
If you are lucky and got in early, then probably yes, it could be a lifetime career. It's like all careers, when you joined early, you got a lot of opportunities, you also rode the wave, you eventually rose to the top if you grit through.
It's a lot easier to be early than to be smart or quick.
If you're on the top, you probably aren't coding much. So you're more in management than getting your hands dirty.
Yeah, but you still have the choice to stay in the trench. People like Carmack/Cutler do that. But I agree the majority just go high management.
Managers are being slammed - FB, Amazon and recently Cloudflare and Coinbase.
New grads are being slammed, "because LLMs can do that work."
No new folks, no managers, and no olds. What a delightful career we've chosen for ourselves.
> Haven't most people looked around and asked themselves where are all the 50+ engineers? They basically don't exist in large numbers.
I'm not discounting ageism in the industry, but how popular of a career was it 30+ years ago compared to now?
In 1996? Software development was the hot ticket to upper middle class in the early 80s when I was a recent CS grad, and I was already working with people who were in it for the money. By the late 90s, if you could spell “HTML”, you were making decent money as a web developer. This all came crashing down during the Dot Bomb collapse, but SW has been pretty popular for most of my career, and it just continued to get more popular, especially as salaries continued to increase.
I remember seeing an article around a decade ago about a ~50 year old "web developer" claiming age discrimination because they couldn't get a job. Somebody found their resume and it was literally 1990s "html/CSS" added to some other period tooling. Said person found a niche for a new technology (the web) and then stopped upping their skills.
I've had to change course several times in my career (graduated in 2004). UNIX admin and later network admin, DevOps, and now I'm doing a mixture of DevOps and development (despite not being a full time developer in my entire career, being able to use AI to plug into code and fix/enhance things like monitoring, leveraging cloud APIs, etc has been a game changer for me).
Right now, as somebody in their mid-40s, I'm seeing AI as a productivity amplifier. I am able to take my experience and steer and/or fight Opus into doing what's needed, and I am able to recognize whether it looks right.
I'm so glad I'm not fresh out of school in this environment, though people said the same thing when I graduated in the Dotcom bust...but being ready and eager to do groundwork was a door opener. Finding that first door to open was tough, though.
In retrospect the Dot Bomb was a bump in the road. Yes, some people who only knew enough HTML to be a "Webmaster" might have been filtered out, but pretty quickly anyone who could really build software had opportunities greater than before.
I'm repeating what others have essentially said, but ask yourself what's on your resume. If it says "Software Engineer" and that's all it talks about, then yea you might not find it's a lifetime career.
But if it's a diversity of things (that use or leverage software development) then you probably have a lifetime career ahead of you.
I've been writing software for over 40 years but I've never seen myself as having a software engineering career. I've been a research assistant in geophysics, a marine technician on research ships, a game developer, an advisor to the UN, and a lot more. Yes all through that I used software, but I did a lot of other things in the process of using it.
Argument A: AI means you don't learn as much, so even though you are more effective, it inhibits your growth and you shouldn't use it. However, on a pragmatic level, it's effective to hire a bajillion people, fire them at will, and get AI to do everything. You will get so many JIRA tickets closed and so many lines of code written.
Argument B: AI means you don't learn as much, and the single most useful work product of a software engineer is knowing how the code functions, so it's depriving your company of the main benefit of your work. Also, layoffs are terrible business strategy because every lost employee is years of knowledge walking out the door, every new hire is a risk, and red PRs are derisking the business.
Institutional and personal knowledge seem similar, but the implications of each are radically different.
I don't understand it. The time-limited career would work if we were born with innate ability for software engineering and would lose it over time by using AI. Most people are not born with that ability though, it needs to be developed first.
And read Programming as Theory Building already, it's not that long
Comparing software development to carrying heavy things at a construction site feels like a real stretch to me.
'If you work in construction, you need to lift and carry a series of heavy objects in order to be effective. But lifting heavy objects puts long-term wear on your back and joints, making you less effective over time. Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”.'
On another note, for sure software developers are saying things like "this is the part of the job that I like" or "if you aren't doing the work, you won't be good at the work." But other people are saying this, too. I just saw an episode of "Hacks" ("QuickScribbl") where the writers say pretty much this exact thing when confronted with AI tooling designed to "make their job easier". Is writing comedy also like lifting heavy objects at a construction site?
Seems the solution here is the same it's actually always been if you want career progression: be more than just a code jockey. The true value of an engineer is to be plugged into overall roadmaps, broader thinking around product, how to achieve company goals, etc etc.
Yes, LLMs might dramatically reduce the amount of code we write by hand. But I'm a lot less convinced they'll solve all of the amorphous, human-interacting aspects of the job.
My experience has been that companies actively work to prevent people from becoming more than just code jockeys. For example, most of the places I've worked have viewed code delivered as the ONLY metric used to evaluate performance. Attempts to contribute to roadmaps or strategy are ignored at best and punished at worst.
Yeah, 95% of the available advancement in computing is in people management, not technical mastery. Businesses much prefer to hire externally to serve any non-core capabilities, especially to minimize internal culpability should anything go wrong. That leaves little opportunity to think outside the box technically.
There’s a hierarchy amongst knowledge work and AI hasn’t yet been able to do the work that is rare and valuable.
Over the past two decades, there have been a lot of solved problems, like building boring scalable web apps, UX design, etc., and AI is fairly good at these, enough so that good prompting can get you very far. This shouldn't be a surprise; there's a lot of publicly available data for this (GitHub repos etc.).
On the other hand, there are rarer computer science problems like designing efficient datacenters, GPUs, and DL models. Think about problems that someone of Jeff Dean's or James Hamilton's (AWS SVP) ability, or a skilled computer architecture researcher like David Patterson, would solve. These are incredibly hard and rare problems, and AI hasn't been able to make much progress in these areas. That's true for other sciences as well.
If you’re a regular Joe like me who builds boring CRUD apps, AI is coming for you.
What I mean is if you are working on incredibly hard and rare problems that require rare skills and also those problems don’t have publicly available data that LLMs can be trained on, you’re safe from being “automated” away. If not, you must plan accordingly. Also if you’re a skilled manager (in any field) AI cannot replace you, highly skilled managers that can get the best out of their teams have rare skills that aren’t easily replicable even amongst humans much less AI. Although, if going forward we need fewer developers we will need fewer managers too.
The differentiator is augmenting reasoning with AI versus replacing reasoning with AI. But those who choose to replace their reasoning with AI probably weren't good at it to begin with, because if they were, they'd choose not to replace it. The exception is if AI can actually replace reasoning (which it can't, yet) - then it's game over for a career in software engineering anyway.
80% of my day-to-day job has never been pumping out lots of code. It is a complicated career, isn't it? We do a lot of alignment, design and thinking. I can't even get behind the idea of outsourcing thinking; I think AI is very good at helping us think clearly, but it doesn't really "think" for us.
If you do that, then... you're likely very replaceable.
If AI becomes good enough to easily produce maintainable and high quality software, then I really can't see how demand for software engineers would not plummet. Even lots of non-coding work that software engineers do, such as accurately capturing what client actually wants, will become much less valuable - e.g. currently misunderstanding of client's requirements is catastrophic and can lead to waste of months of labour; with AI it could become matter of max few hours lost. So I can understand argument that software engineering careers might be safe because AI may plateau and we might never reach level where it's actually capable of producing good software. But I absolutely don't buy that software engineering will be safe even if such AI exists. Even if your current work is just 20% actually coding, you must remember about second order effects that will take place once quality code generation is 1000 times faster.
AI can also do alignment and pull from its vast training dataset for design and "thinking" -- because 99% of the problems in this world have already been solved, multiple times, maybe not in exactly the same format, but in a very similar one.
I also see that in the future humans will adapt to AI, instead of the opposite. Why? Because it's a lot easier for humans to adapt to AI than for AI to adapt to humans. It's already happening -- why do companies ask their employees to write complete documentation for AI to consume? This is what I call "Adaption".
I can also imagine that in the near future, when employment plummets, when basic income becomes general, when governments build massive condos for social housing -- everything new will be required to adapt to AI. The roads, the buildings, everything physical is going to be built with ease of navigation by AI in mind. We don't need a general AI -- that is too expensive and too long-term for the capitalist class to consider. We only need a bunch of AI agents and robots coordinated in an environment that is friendly to them.
Rather than coining a new word like adaption, I'd call this acculturation. It's reshaping not only SW dev but natural language too -- how we read and write and how we speak.
Everyone knows that AI-written slop isn't worth actually reading. So when reading mass-media content we skim each paragraph's opening phrases rather than reading it deliberately, sentence by sentence. We also do this while writing notes: dropping determiners, acronymming common phrases, and making references to characters/scenes in popular media. Now, with the rise of vocal interfaces and ever shorter rounds of engagement, all this abbreviating will only accelerate.
Totally agreed and on point. Calculator operators aren't around much anymore.
>If the models are good enough, you will simply get outcompeted by engineers willing to trade their long-term cognitive ability for a short-term lucrative career
> (2) AI-users thus become less effective engineers over time, as their technical skills atrophy
Wouldn't (2) imply that if everyone just used AI there eventually would come a time when there aren't engineers who will outcompete you (because their skills are so atrophied)?
I take issue with the premise that "Using AI means you don’t learn as much from your work". With AI assistance, I tackle far more tasks than I would without it. Learning per task goes down, but cumulative learning does not.
Software engineering today is almost nothing like the role it was 30 years ago.
Maybe if you somehow stick with the same company for your entire career, it could feel somewhat similar... but I doubt it, as 'best practices' and many other things cause it to change.
The days of 'lifetime career' had already gone for most people, way before AI arrived.
Software is a tool to solve a problem, as long as you keep finding problems that you can solve with it, you're likely to get paid to do it.
If your crowning achievement is: "I can 100% all leetcode hards" I have bad news for you.
> I don’t think there’s compelling evidence that using AI makes you less intelligent overall.
That statement is evidence enough.
Is anything today a lifetime career? I’ve had at least five or six job descriptions over my time, and at least a few of them pretty much don’t exist anymore, or are changed beyond recognition.
Software engineering stopped being a career a long time ago. Companies have no respect for software engineers and treat them as a commodity that can be replaced at any time. The traditional career "progression" also doesn't exist: you can get a pay rise only so many times before you become the most senior of seniors, unless you want to fulfil the Peter principle.
While most developers were busy grinding, the corporations did their best to ensure that the only sensible pathway to wealth and development, running your own business, is closed. In many countries, due to regulatory capture enacted by corrupt governments, making a profit is next to impossible, and that's only if you manage to clear bureaucratic hurdles that don't exist for larger corporations.
AI is just a tool. Asking whether AI will replace the software engineer is like asking whether the hammer will replace the carpenter.
So the permanent underclasses those billionaires keep talking about are actually just juniors who never get a chance to become seniors.
I'm a software engineer and architect, I love my job, I love diving into the small details, I love the grand overview.. I love identifying concepts and applying them to achieve elegant high-performing systems.
I love thinking about what kind of assembly the compiler may generate (though honestly, I rarely get the chance), I love thinking about how languages should be more dynamic. Who's got actually-first-class functions? Like, ones that you can build, compose, combine and manipulate to the same degree you can a string or a JSON object (no, LISP, you're cheating; close, but no points). There's a sketch of the kind of thing I mean at the end of this comment.
And yet.. I don't care that much. Not because I'm late in my career (I'm 40, there are still some years left in me), but because I want to make computers do things, and what I enjoy is thinking up ways the things can happen, and sometimes the particulars that matter when making a lot of different things happen in a coherent system.. And yeah, LLMs are trained on people's output, and from what I'm seeing everywhere, people are overall fairly terrible at that, and most of the plumbing-type glue being written is not worth anyone's time..
And I'm not saying I don't care because LLMs can't do my job. Heck, even after hours of back-and-forth spec building and refining every little nook and cranny (beautifully explained, proven even, by reasoning and example), the stupid coding agent still cheats or gets it wrong; as soon as the plan is put into motion, it'll mess it up on some scale so fundamental that I should just have done it myself.. And I hope that changes. I hope that I don't have to go into such detail.. I hope to become a steward of taste rather than a code-reviewer.. I hope that I will eventually not be needed for that anymore.. I want it to replace me, so I can move to telling it what I want, and have it made that way..
I hope I won't need to steward good taste, and that nobody will.. I hope the applications I use in 5 years will be a collection of one-offs and gradually improving tools that were written _just_ for me, for my way of working and my way of thinking.. I want to prompt the damn program to change itself as I discover new ways to do things, until it can eventually figure out how to automate the last bit of my task away.. And then I'll go do something else exciting.
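A minimal sketch of the kind of function-as-data manipulation I mean, in Python (the helpers here are invented for illustration, not from any particular library):

    from functools import reduce, partial

    # Functions as plain values: build, compose and combine them
    # roughly the way you would strings or JSON objects.
    def compose(*fns):
        # compose(f, g)(x) == f(g(x))
        return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

    strip = str.strip
    lower = str.lower
    exclaim = lambda s: s + "!"

    normalize = compose(exclaim, lower, strip)
    print(normalize("  Hello World  "))  # -> hello world!

    # Partial application: a new function built from an old one plus data.
    greet = lambda greeting, name: f"{greeting}, {name}"
    hello = partial(greet, "Hello")
    print(hello("Ada"))  # -> Hello, Ada

Even here, though, a function is opaque once built: you can pass it around, but you can't inspect or rewrite its body the way you can a string or a LISP form, which is exactly the gap I'm complaining about.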
Imagine a situation where AI creates thousands of lines of code across a few repos, and there is a production issue that doesn't get resolved by AI. How can humans jump in and resolve the bug without knowing anything about the code?
Why are we upvoting this?
Virtually the entire blog is about AI, with a ridiculous publishing rate (https://www.seangoedecke.com/page/5). Funny how I can look at this site's HTML and know right away it was done with AI.
Can we stop upvoting vibe-published articles? The arguments are flawed and don't even make sense to anyone who does software.
I think you're being a bit harsh here.
Yes, the blog is mostly about AI, and yes, he publishes very frequently. But his articles don't read like AI and he claims not to have used it in his writing (https://www.seangoedecke.com/avoid-ai-writing/). And regardless of how you feel about the content, the community has clearly decided it's worthwhile as a discussion point.
> Why are we upvoting this?
Because people want to discuss the topic of the headline.
Nah it is
> Construction workers don’t say that being a good construction worker means not lifting heavy objects. They say “too bad, that’s the job”
I don't know; maybe in your part of the world, but where I'm from we have a series of robust worker protection laws that try to limit the damage the work does to you. We generally consider it a bad thing for workers to damage their bodies, and if we could build houses without that, we'd prefer it.
In this specific case we do have techniques to build software without causing damage, so why change that?
This post is arguing that maybe software engineering should start being harmful, even though we know it doesn't have to be. It's a post by a guy begging to be fed into the capitalist meat grinder. Meaningless self-sacrifice.
It never was a lifetime career; if you don't get the dough by 35, you just failed.
i'm about 35 and i have made good money but not enough to quit. i plan on just sitting around cashing checks for another decade. with a few liquidity events along the way to sweeten the deal. should pay for my mortgage, some home renos, and fund my 401k etc. i don't foresee myself being out of work (and i don't even use AI to code! i'm just Actually Good!)
Absolutely untrue. You could have a solid career writing back-office or internal software in financial services, insurance, higher ed, any number of industries. Would it make you a millionaire? No. But it would pay for a nice house in the suburbs and for raising a family.
I started at age 39 though and did pretty well up until a year or two ago (16 years total).
Like many people, I've been sad about the loss of a career I spent years developing skills in. I'm 55 now and won't be quickly retraining for another high-paying career. Fortunately I do have other skills I developed earlier in life, and low needs, so I will probably limp by fine, but it's still a painful adjustment.
Point being, you could always write code as an older person. Well, back in the old days when we wrote code anyway.
> I hope that this isn’t true. It would be really unfortunate for software engineers. But it would be even more unfortunate if it were true and we refused to acknowledge it.
More AI Soothsaying. Not so hard on the Inevitabilism this time.
https://news.ycombinator.com/item?id=47362178
On the contrary, in an efficient economy, every business operations manager (MBA) would be a skilled software engineer, able to comfortably manage data flows and design custom automated processes. There's so much potential energy there in unlocking this technical literacy.
Less "pure" programming, but lots more programming in general.
Was it ever? It's always seemed weird to me that people even think 'software engineering' is a career.
It's a tool for knowledge work.
No carpenter is a specialist in drills.
It seems to me that the best way to navigate a long term career is to have another specialty and use software engineering as a tool within that specialty.
It most certainly was a lifelong career.
I’m kind of confused how you might think it wasn’t. Going through a career as a software dev until retirement was very common.
Software engineers didn’t just disappear after age 40.
> Software engineers didn’t just disappear after age 40.
At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you didn't have a very successful company (and thus weren't set for life) by the time you were 30 or 40, you had basically failed in life; exactly because "everybody" knew that programming is a "young man's game" (i.e. you likely won't get a programming job anymore when you are, say, 35 or 40 years old).
So,
> Software engineers didn’t just disappear after age 40.
is rather a very recent phenomenon.
> At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you didn't have a very successful company (and thus weren't set for life) by the time you were 30 or 40, you had basically failed in life;
This wasn't common anywhere except for maybe the Silicon Valley bubble.
The rest of the US, and even the world, could see that not having a very successful company of your own is not equal to being a failure.
Enter the carousel. This is the time of renewal.
I'm not sure anyone under 40 is getting that reference.
> At the end of the '90s and the beginning of the '00s (the "dotcom bubble"), it was a common saying that if, as a programmer, you didn't have a very successful company (and thus weren't set for life) by the time you were 30 or 40, you had basically failed in life; exactly because "everybody" knew that programming is a "young man's game"
That seemed commonly held among folks participating in the dot-com bubble. Plenty of people had been doing it for decades even as the bubble was growing.
> Software engineers didn’t just disappear after age 40.
>> is rather a very recent phenomenon.
Not really. It's not that they disappeared, it's that they're a small fraction of the overall SWE population as a side-effect of how much that population has grown.
Software is wood, not drills, and if we somehow invented bacteria that gradually built an ugly but saleable house when fed on water and nutrients and nudged into shape, I bet carpenters (well, framers or whatever they're called in the US) would have an identity crisis too.
I kind of disagree. You are describing a kind of person who is extremely valuable: someone proficient in SWE who also has domain-specific skills in some niche.
That's great, but it's nowhere near the norm, and people have been doing generalist software engineering for decades. There has been enough work for generalists, for long enough, for it to be a very reasonable career.
IMO AI is the first thing that has ever actually challenged that.
I'd disagree with this analogy: "No carpenter is a specialist in drills." And I think it's an interesting lens through which to look at the evolution of our tools.
I think there are trades where tool specialists (or process specialists, if I may be allowed to extend the analogy) exist and are highly valued. My dad is a plumber, so I'll use that example, but I'd trust similar is true for carpentry. There are specialists by task/output (new construction, repairs, boilers, etc.), but also tool-specialist plumbers and companies. For example, drain-clearing equipment, or certain kinds of pipe for handling chemicals other than water, are very specialised, and there are roles for them because the thing they enable, the criticality of the task, and often the cost and complexity of using the tool are high enough to make specialisation valuable.
IMO software has, for the 10 years I've been working in it, been in an unusual position where the tools (languages, engineering practices, tech stacks) were super technical and involved, but could also be applied to a large number of problems. That is the perfect recipe for tool specialists: a complex tool with high value and broad domain/problem-space applicability.
Because of that tool specialisation, we've separated the application of the tool to a problem/domain from the tool use itself. A reduction in the complexity of applying these tools to many problems means all domain specialists will use them, relying less on tool specialists.
Imagine a McGuffin tool for attaching any two materials together, but one which took a degree to figure out (loose hyperbole here), and suddenly you could use it for 5 bucks and a quick glance at the first page of the manual. An industry that used to have lots of McGuffin engineers would be mega disrupted, and you could argue that those tool specialists would have to identify more with what they were building than with the McGuffin they were using.
Yeah, IEEE Spectrum has responded to the dissimilar roles in SW dev by ranking programming language popularity contextually, by separating the project domains and ranking the languages only within each domain. That's a lot more useful than allowing the single dominant project domain to silence the recessive ones, as TIOBE does.
I tell my boys (both in HS now), the combination of a specialized skill/knowledge + competent computer programming is the sweet spot. For example, my oldest wants to go into Petroleum Engineering which is great but I told him to still learn software development and get comfortable solving problems with code. Having specialized Petroleum Engineering knowledge combined with being a competent software developer is a powerful combination.
Yeah, I've seen the same thing happen to data miners in the pharma industry. An increasing fraction of young biologists have skill in basic statistical DM as well as web search proficiency sufficient to gather DM code analysis examples, even without using AI. In the very near future I expect almost all R&D exploratory DM will be done by pharma domain experts (biologists and chemists) rather than served by DM experts (computer scientists or engineers).
I think the logical next step is that the "XYZ knowledge worker" will become a software engineer of sorts. Not literally writing code, but at minimum encoding processes/workflows into some language (see the sketch at the end of this comment for what that might look like).
If you're a paralegal or an accountant who can't manage their workflows with AI, you're going to be way less productive than someone who can.
And if you're a paralegal or an accountant who can manage a lot of your workflows with AI, you don't need custom software (hence fewer dedicated software engineers).
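As a rough illustration of "encoding a workflow into some language", here is a minimal Python sketch of a hypothetical paralegal intake process expressed as data plus small functions; every name in it is invented for the example:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Step:
        name: str
        action: Callable[[dict], dict]  # each step transforms the case record

    def check_deadline(case: dict) -> dict:
        case["urgent"] = case.get("days_to_filing", 99) < 7
        return case

    def draft_summary(case: dict) -> dict:
        case["summary"] = f"{case['client']}: filing due in {case['days_to_filing']} days"
        return case

    WORKFLOW = [Step("check deadline", check_deadline),
                Step("draft summary", draft_summary)]

    def run(workflow, case: dict) -> dict:
        # Apply each step of the encoded process in order.
        for step in workflow:
            case = step.action(case)
        return case

    print(run(WORKFLOW, {"client": "Acme v. Smith", "days_to_filing": 3}))

The point is that the knowledge worker owns the step list; whether the steps are executed by a script or handed to an AI agent is an implementation detail.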
> No carpenter is a specialist in drills.
There's no category difference between being an expert in carpentry vs masonry and being an expert in drills vs hammers. They are both just areas of expertise.
Going down the path of trying to define which things are expert functions and which are "merely" tools, using anything but descriptive technique, is nonsense.
Expert functions are just those areas where using a tool is sufficiently difficult to require expertise.
Software engineering isn't a tool, it's the task.
Are people seriously thinking that you can make yourself dumber by using a chat UI?
If talking to an AI makes me dumber and limits my career, then all the customer support people that ever existed were in the same or worse position: talking to dumb humans on chat all day, answering tickets that are always about the same topics, and linking the same docs over and over. This makes no sense.
You're misrepresenting the potential problem. It's more along the lines of: using AI stops you from exercising the cognitive processes you would use doing things yourself, and those encompass skills, knowledge and brain function that can atrophy. For an extreme example, look at cognitive decline in the elderly, which can be mitigated by taking part in activities that are cognitively stimulating.
Can you comment on other jobs, though? The large majority of jobs require no big mental effort. Even switching from programming to management would involve the same thing. In that light, would it be impossible for a manager to ever become technical again because they'd atrophy so quickly?
I think you're probably catastrophizing the impact with statements like "it'd be impossible for a manager to ever become technical again", because that's not the likely outcome as I understand things. But yes, people who stop programming for an appreciable amount of time do find it harder to pick back up again.
The longer the manager is out of the game, the harder it is to return to the game. Returning to the game takes time. Depending on age and income, returning to the game may be impossible for some people over time.
I can't answer for the other guy, but my answer would be that talking to a clanker is LESS mental effort than being a manager, and that's why your reasoning atrophies so quickly.
Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.
If you constantly pawn a task or cognitive load onto someone else (AI or not), you'll eventually get worse and worse at that particular type of thinking. Your overall mind doesn't necessarily get weaker, but you definitely start to get worse at anything you don't regularly practice.
I think you need to read the studies linked in the footnotes. This is a well-studied issue.
You can definitely feel it when you talk to an AI vs doing the churn yourself. It's comfortable, simple; it doesn't aggravate you.
Pretty much every study says so, so I guess?
> The career of a pro athlete has a maximum lifespan of around fifteen years. You have the opportunity to make a lot of money until around your mid-thirties, at which point your body just can’t keep up with it.
If you believe this about your software career, how do you think you're going to switch into another career as a junior and keep up?
Easy, I made the switch in my 30s, now I manage software engineers :)
Software managers are being replaced by vibe coders. In the age of AI, managers are irrelevant.
Most careers evolve as technology does.
Other professions do too, whether it's healthcare or anything else.
Software, being a new field, didn't really become a standardized profession in the way engineering did.
The goalposts are moving because the standards are moving, because the capabilities are moving.
Remaining a self-directed learner will remain critical.
terribly written article that failed to make any point. anyone who's read AI-generated code from the best models and who understands how LLMs work knows this statement is complete BS.
It will be for those fixing AI slop software. (In fact, they might need several lifetimes.)
Why do people think there will be work fixing AI slop software? I see that opinion here and there on HN. The cost of codegen is next to nothing. It makes no sense to spend large sums of money having an engineer fix something that could be regenerated over and over until the gods of stochasticity come out in your favour.
We've entered a period of single-use-plastic software, piling up and polluting everything, because it's cheaper than the alternative
When everything is generated on demand, each exploit has to be discovered anew. No more conveniences like common libraries.
This is sarcasm, but it's probably also going to get sold as a feature at some point.
Part of the problem is that AI can also fix AI slop. At this point I am in doubt whether code quality matters anymore in most non-critical software. You can ask an LLM if the code has quality issues and refactor to a _better_ version. It will reason through, prepare a plan and refactor. So now, with this "better" code, you can expect your LLM to deliver higher-quality results, and that's all the quality that is needed. (A sketch of that loop follows at the end of this comment.)
Actually, at this point I feel that the value in software engineering is moving from coding to testing and quality assurance.
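A minimal sketch of that critique-then-refactor loop, assuming the OpenAI Python client; the model name and prompts are placeholders, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def critique(code: str) -> str:
        # Ask the model to list quality issues, with reasoning for each.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are a strict code reviewer."},
                {"role": "user",
                 "content": "List quality issues in this code, with reasoning:\n\n" + code},
            ],
        )
        return resp.choices[0].message.content

    def refactor(code: str, issues: str) -> str:
        # Feed the critique back in and ask for a refactored version.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": f"Refactor this code to address these issues:\n{issues}\n\nCode:\n{code}"}],
        )
        return resp.choices[0].message.content

Whether the second pass actually produces better code, rather than a differently arranged mess, is exactly what the replies below dispute.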
In my experience, an LLM "refactoring" autonomously doesn't actually improve code quality, it simply reorganizes the mess into a new mess.
This is my experience with human developers too so I'm not sure if there's a meaningful difference.
Sure, but also, AI will always find issues. It will never be mildly satisfied with the codebase and say so.
All the frontier models tell me when there are no issues. After implementing a feature, I will ask the model to identify issues in my implementation, list them, and support each item it identified with technical argumentation and reasoning as to why it's an issue.
If it doesn't find anything, it says it didn't find anything.
Not from my experience. It's true that it will always find new issues in a new session but it is happy to say so when the code is good.
> AI can also fix AI slop
No it can't.
AI knows nothing about software engineering; all it can do is generate code.