Will AIs take all our jobs and end human history, or not? (2023)

(writings.stephenwolfram.com)

92 points | by lukakopajtic 2 days ago ago

169 comments

  • myrmidon 2 days ago ago

    A big short-term risk that I see is that AI is going to cripple wealth redistribution mechanisms that we currently rely on.

    Most people who are willing to work have access to income by providing labor right now. If the value of that labor diminishes because AI can do most of it cheaper or for free, that is a big problem: wealth/class barriers become insurmountable and the American dream basically dies completely.

    Automation in the past suffered much less from this because only a subset of jobs was affected by it, and it still relied on human labor to build, maintain and operate the machines, unlike AI.

    I'm curious if AI is gonna spawn "workers' rights" movements comparable to those of the past, but I would expect inequality to increase a lot until some solution is found.

    • tim333 20 hours ago ago

      It'll probably go kinda socialist. From AI according to its ability, to each according to his need.

      • direwolf20 19 hours ago ago

        What incentive do politicians have to make that happen, when they could more easily blame immigrants and transgender people?

        • tim333 19 hours ago ago

          There have always been politicians offering that kind of thing - Bernie, Corbyn, etc. The issue is: will people vote for them? So far mostly no, because someone has to do the work and many workers prefer to keep the proceeds. If AIs can do the work, it may become more attractive.

  • mips_avatar 2 days ago ago

    Outside of America people aren't really stressed about AI. Like you go to Vietnam or Vienna they mostly just think that they will have a good life with AI. It's uniquely American to believe that your life will end when AI takes your job.

    The problem isn't the AI, it's that your access to basic rights is intermediated by a corporate job. Americans need to decenter their self-worth from their jobs. Like when I quit Microsoft I literally thought I was dying, but that's all an illusion from the corporations.

    • ewuhic 2 days ago ago

      I am from Vienna, and this is completely false. Likewise, my friends from many countries, incl. Vietnam, also share the sentiment you describe as "American". Your point has no standing.

      • motbus3 a day ago ago

        Same, and I ain't in any of those locations.

        The thing is that there will be less work, but that is certainly not looking like universal basic income. It looks more like the rich getting rid of the people they don't need, under some "let's kill the parasites" or "pulling weeds" framing, because the world is supposedly in danger of poor people destroying it. In certain ways they are already trying to sell that story, so that nobody notices that no single person should have a billion, let alone a trillion.

        • tim333 19 hours ago ago

          If you look at real-world examples of machines replacing jobs, like the industrial revolution, it didn't really go like that. The rich didn't monopolise everything, because competition reduced the profits, and the workers moved on to other jobs like the ones we do today.

          The "let's kill parasites" stuff has arisen more in land grabs to clear the indigenous people, not due to tech.

      • mips_avatar 2 days ago ago

        Ok, well, I didn't meet everyone in Vienna or Vietnam, but those I did meet hadn't been laid off in the name of AI (yesterday three of my friends were laid off from Amazon). And if you're laid off in Vienna you still have access to healthcare, and there's a functioning safety net, and just generally a sense that you're not alone in this. In the United States my apartment has a homeless man living beneath it, and every night you hear him either screaming because he doesn't have access to drugs or laughing because he does. There's a not-so-subtle attitude that maybe people like that deserve to die. So yes, maybe people in Vienna don't like AI, but you are wrong that they fear it in the way Americans do.

        • tim333 19 hours ago ago

          I think US social issues and AI are rather separate. We have quite a few AI worriers from the UK, Hinton being maybe the best known.

        • ewuhic 2 days ago ago

          You have a very rosy picture of non-Pax-Americana.

          • mips_avatar 2 days ago ago

            Well I also visited Ethiopia this year and I got to hear first hand about the genocide in Tigray, I'm very aware of the horrifying atrocities that happen when social order and human rights break down.


    • onlyrealcuzzo 2 days ago ago

      > It's uniquely American to believe that your life will end when AI takes your job.

      It's probably because it's uniquely American for a sizable chunk of the workforce to have cushy jobs that appear ripe for the picking.

      AI is not going to immediately replace food service work, manual labor, farming, hospitality, etc.

      But it might replace quite high-paid software jobs, finance jobs, legal jobs, etc. One, if AI is good at anything, it's things at least tangential to these. Two, these jobs cost enough that offsetting them is at least worth trying.

      My suspicion is that ultimately it will lead to more of these types of jobs, though it could easily come with a huge reduction - and the jobs aren't guaranteed to be in the same countries.

      You could create 3x as many of these jobs and still end up with 25% fewer of them in the US. Who knows.

      • mips_avatar 2 days ago ago

        The tech jobs were cushy because the tech companies bribed ambitious people to keep them from starting their own companies. It wasn't about hiring them because you needed them; it was to keep them from competing against you. This was always a deal with the devil, but the idea that Microsoft has more of a right to its OS monopoly than its employees have to their healthcare is wrong. I think a lot of people are waking up to the idea that the promises their companies made to them were like the promises the devil makes: there's fine print, and they sold the company more than they realized.

        Additionally, all the startups offering to automate white-collar work are going to run into a problem when they realize the jobs never needed to be done.

      • danaris 2 days ago ago

        > It's probably because it's uniquely American for a sizable chunk of the workforce to have cushy jobs that appear ripe for the picking.

        What an arrogant statement.

        Lots of people outside America have cushy jobs.

        What's much more likely to be uniquely American is that if you lose your job there's nothing there to help you.

    • jsight 2 days ago ago

      A big part of it is insurance. Family coverage can easily cost $15-20k per year in the US. Avoiding the need to pay for this out of pocket drives a lot of people into less than optimal job dependence.

      • mips_avatar 2 days ago ago

        That and housing, and inflation during COVID. Like there's this idea that you don't really deserve anything in the United States.

    • Avicebron 2 days ago ago

      It's because most people are already teetering on the edge. The difference between working in software vs anywhere else in the US is that the average Atlassian engineer or similar spends months at a time, several times a year, writing JS from beach resorts up and down Europe while keeping a condo in New York. The rest can't afford to own a home or dental insurance. So when people threaten those who barely have anything as it is, they get heated.

    • shevy-java 2 days ago ago

      I think you do have a point here, but I should like to point out that, since you mentioned Vienna, barely anyone here sees AI being tightly integrated into anything. Sure, smartphone users may use it; and exams at universities may say "don't use AI", so people use it - but for most everyday stuff it is really barely noticeable here. This is why I also disagree with "they will have a good life with AI", because it assumes that AI plays a huge role here, which it really does not.

      The AI hype is definitely much bigger in the USA - on that part we concur.

      • mips_avatar 2 days ago ago

        Yeah, but even if you believe the hype, if you're in a functioning country you don't think that your ability to house your family rests on AI being worse at Excel than you.

    • raincole 2 days ago ago

      > your access to basic rights is intermediated by a corporate job

      Lmao. America has worse social welfare than most developed countries, but it's still heaven compared to most of the world. What you can find in a food bank is a feast for billions of people on this planet.

      American people are stressed about AI because American people are expensive. Like hella expensive. So the incentive to replace American workers is very strong.

    • testfrequency 2 days ago ago

      I would argue this says more about how Americans treat their work as their life instead of treating the world as their life.

    • advael 2 days ago ago

      To "decenter [one's] self worth from [one's] job" would presumably require the fact that one's "access to basic rights is intermediated by a corporate job", no? This is a policy problem that needs to be solved by collective action, not a mindset problem that can be solved by personal growth

      • mips_avatar 2 days ago ago

        No, you have to decenter yourself from your pay to achieve it. People accept things like the cutting of Medicaid because they see their job success as a moral success. There's a lot you can do to start being the good in the world that doesn't require Washington's permission.

        • advael 2 days ago ago

          I can't parse what distinction you're trying to draw here. When I say "collective action" I mean exactly things like exerting political pressure toward objectives like expanding rather than reducing healthcare coverage provided by governments, such as medicaid. The notion that solutions that involve changing the policy of governments "require[s] Washington's permission" seems to reject the notion that we exert power collectively via democracy, but your proposed example of what someone shouldn't accept suggests that this is an issue of how people fail to see the value in exerting said power, for which the prescription would presumably be doing so? I don't understand what you're driving at or even why you think we're in disagreement

          • mips_avatar 2 days ago ago

            Sorry I think you're completely right, I'm frustrated that collective action here is so hard to come by. I think that we feel really isolated because we don't see ourselves as part of a larger whole and becoming less isolated involves leaving behind this identity tied to our economic output.

    • treenode a day ago ago

      This isn't true. People in India are worried about AI quite a bit.

    • skywhopper 17 hours ago ago

      I admit I am suspicious of the claim that you know what all Americans, Austrians, or Vietnamese think about anything.

    • briantakita 2 days ago ago

      It's part of the messianic end-times fervor that has been with America since the beginning...which is useful for imperial management...As it provides a constant source of existential judgement and dread...that religious/quasi-religious administrators can exploit.

    • alexjplant 2 days ago ago

      For a while my job was part of my identity in the way you describe not because "lol hypercapitalist American" but because I like computers and computers also pay the bills. I was writing software and doing technical stuff from a young age because I enjoyed it. I fell into doing it professionally because it was an obvious path. It didn't help that when you do this older people like it because they can use you as free/cheap labor which an impressionable kid might mistake for actual praise. It turns out I like other stuff too but it's hard to talk to people socially about obscure New Wave bands and continental philosophy and 90s neo-noir films whereas computers and gizmos and apps are a common frame of reference.

      I guarantee you that these people exist in other countries too. Not everybody is a tech bro strawman.

    • hshdhdhj4444 a day ago ago

      This is fantasy land.

      Yeah, a rural farmer who's never been exposed to AI, except when it made it possible for her to communicate with someone visiting the farm who didn't speak the same language, isn't worried about AI.

      But policy makers, technologists, economists, the elite and other educated people aware of more than the basics of AI are all concerned about its impacts.

      • defrost a day ago ago

        Or, local to where I am, rural farmers using AI bots to guide them on capital machine repairs (their gear runs into the millions), summarise ANOVA trials on grain crop samples, reinforce their personal conspiracy theories and biases, and answer a huge run of everyday queries.

        Some can see the benefits. Some (with overlap) can see the downsides (wrong answers and reinforced bad traits).

        > But policy makers, technologists, economists, the elite a[nd] other educated people ..

        also overlap with rural farmers.

        I think a problem here is your image of what a rural farmer is.

        A good many here are multi-millionaires, in assets at least, have children in elite private schools, and have family members in local, state, and federal government who started out farming, went to university, have careers that may include farming on the side, and will retire to farming.

        Next you'll be saying grandmothers can't code weather modelling and prediction software on Cyber 205's or something equally daft.

    • pelasaco 2 days ago ago

      Vienna? lol

  • BirAdam 2 days ago ago

    If all jobs were taken by AI in a short time span, the companies owning and operating those AIs would go out of business, as no one would be able to afford the products made by the AIs. This is an unlikely scenario. Not all things will be made/run by AIs in a short time. It is far more likely that specific jobs in specific industries will be taken by AI, and AI will slowly take over the labor market. This will drive down prices on products, services, and labor. Once human labor's price is low, and once many product prices are low, the overall employment level of humans will rise. The effect of AI, then, is actually just deflationary pressure on all prices over time.

    The really scary part is what happens to all of the newly unemployed people between the falling prices part and the rising employment part. My guess is, governments and markets won't move quickly enough and unrest is what happens.

    • ericmcer a day ago ago

      We might just keep making more jobs and coming up with more busy work to keep people grinding away for 40 hours a week.

      If you look at 1940, women were ~24% of the workforce. Now in 2025 they are ~48%. The numbers are probably similar with immigrant workers having increased greatly in the last 80 years.

      If you view AI workers as just more labor flooding the workforce, it might have a similar effect. If we had flooded the 1940s economy with tens of millions of qualified women and immigrant laborers, people would have viewed it as devastating to the economy, but introduced gradually over time we arrive at a point now where we fear what would happen if they went away.

      • morkalork a day ago ago

        That example doesn't hold up once you expand your view to other countries. Where are all these jobs that magically materialize in labour surplus economies like Brazil or Bangladesh?

        • cal_dent a day ago ago

          Yes, I do feel that many people who talk about the Jevons-paradox element of employment have not spent much time in developing economies. You can have a lot of people doing absolutely nothing, economically, day to day and still have a functioning state.

          • tim333 19 hours ago ago

            In the UK we have approx 22 million adults doing not much so you can see it in developed places too.

        • tim333 19 hours ago ago

          Googling, Bangladesh unemployment seems to be 4.7% and GDP growth has averaged about 7%, so not so different from elsewhere, apart from faster growth from a low base?

    • AnotherGoodName 2 days ago ago

      Also, just to be clear on the outcome of what you said: humans will have to be cheaper than AI in order to compete.

      AI uses 10 litres of water and 10 kWh of power per day to dig a hole? You'd better do it for less, human!

      I'm not sure about the human-needs costs vs the AI costs and what lifestyle it would allow me. I'm sure as shit not having kids in such a world. I suspect it's ghetto-like meager living while competing against machines optimised to do a job.

      • tim333 19 hours ago ago

        In the UK, 10 litres of water and 10 kWh of power cost about £2.50. Hiring someone to dig holes probably runs 50x that.
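
        A rough check of that arithmetic (Python; the unit rates are assumptions at roughly recent UK levels, not quotes from any actual tariff):

          # Back-of-envelope daily cost of the hole-digging AI (assumed rates)
          ELECTRICITY_GBP_PER_KWH = 0.25   # assumed UK unit rate
          WATER_GBP_PER_M3 = 2.20          # assumed combined water rate

          energy = 10 * ELECTRICITY_GBP_PER_KWH     # 10 kWh -> ~£2.50
          water = 10 / 1000 * WATER_GBP_PER_M3      # 10 L   -> ~£0.02
          print(f"~£{energy + water:.2f} per day")  # ~£2.52; the water is noise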

    • ASalazarMX 2 days ago ago

      If machines did all the repetitive, labor intensive, productive work, including building more machines, the natural consequence would be a very disruptive rethinking of economics. Post-scarcity is only a disaster if money exists. People would still work, but as a hobby, not as a way of survival.

      Think of it as if in a few generations, everyone had the motivations of a rich junior, for better or worse.

      IMO, this is a natural consequence of the industrial revolution and the information revolution. We started by automating physical labor, then we started to automate mental labor. We're still very far from it, but we're going to automate whole humans (or better) eventually.

      Edit: I think I replied to the wrong comment, feel free to ignore this.

      • myrmidon 2 days ago ago

        "Disruptive rethinking of economics" is a very optimistic way to put this IMO.

        The big problem I see is that there is little incentive for "owners" (of datacenters/factories/etc) to share anything with such hobbyist laborers, because hobbyist labor has little to no value to them.

        All the past waves of automation provided a lot of entirely new job opportunities AND increased overall demand (by factory workers siphoning off some of the gained wealth and spending it themselves). AI does neither.

        • ef2efe a day ago ago

          Who cares who owns the data center? The govt can send in the army and nationalise it... Lol, as if you really believe that a bunch of people will actually control everything and the govt won't?

          Think harder.

          • myrmidon a day ago ago

            > Who cares who owns the data center? The govt can send in the army and nationalise it

            Do you think the government is going to react with mass nationalisation of private companies to fight wealth inequality? What would the threshold be? The top 1% owning half of everything? 70%? I share no such optimism. The wealthy already have their interests much better represented in politics than their share of votes should allow (and this is a really difficult problem to tackle!), this is only gonna get worse and any government action against rich people interests is going to be increasingly difficult to trigger and sustain.

            Even if such mass nationalisation happened, why would you expect a better final outcome than every attempt at communism got (while doing the same thing): Namely, government just splitting up those spoils with their cronies?

          • funkyfiddler369 a day ago ago

            of course they can, but that's a process, and a reactive one at that.

            the government won't control uptime, ever.

          • vinyl7 a day ago ago

            The US govt is already useless in contrast with the ruling corporations. Congress can't get anything done. What makes you think they could or would do anything to the slave owners who pay them?

      • lm28469 21 hours ago ago

        > If machines did all the repetitive, labor intensive, productive work, including building more machines, the natural consequence would be a very disruptive rethinking of economics. Post-scarcity is only a disaster if money exists. People would still work, but as a hobby, not as a way of survival.

        That's what they told us during the industrial revolution. And also what they told us during the last automation rush of the 70s/80s

        It's a political problem, not a technological one, and it's been that way for at least 100 years.

      • pelasaco 2 days ago ago

        those are just "wishful thinking" or "Noble lies" that we are used to in the post-truth world. Until now, only creative jobs are going away. Music, Arts, Software development.. Construction Work, Garbage Collector etc, are much safer than expected after the "Robot Revolution"

        • lambdaone a day ago ago

          I think you'd be surprised how effective robots will eventually be at manual tasks. Manipulating physical objects in space is a different problem from manipulating text strings, but efforts to solve this problem are already well under way.

          Boston Dynamics has shown us that the difference between a clumsy robot and an agile one is mostly software, and the differences between current Unitree-class robot and an actual practical worker robot is also likely to be mostly software (and of course access to lots of compute power - most of the 'brain' is unlikely to be situated within the robot body itself, instead residing in a data centre some milliseconds away).

          • pelasaco 19 hours ago ago

            Yeah yeah, we've heard it a million times. Noble lies.

            The "robots will do the manual work" story sounds comforting, but it’s not how automation usually spreads in a capitalist economy like ours. Capitalism automates where the return on investment is easiest and fastest, not where society most needs relief. That’s why AI is hitting creative and white-collar work first: you can replace or augment digital labor from a data center, scale instantly through subscriptions, and avoid the slow, expensive realities of manufacturing, maintenance, and safety certification.

            Physical robotics is a very different game. Even if the software improves dramatically, real-world robots are bottlenecked by supply chains for actuators, sensors, batteries, precision parts, and the teams needed to deploy and maintain them. We are running short of materials just to build CPUs/GPUs/RAM; imagine complex Boston Dynamics robots.

            • lambdaone 19 hours ago ago

              People always vastly overestimate what can be done in the short term, and vastly underestimate what can be done in the long term.

              I'm reclining right now, typing on what would have been in the 1980s an unimaginable hypercomputer lying in my lap, at an inflation-adjusted cost roughly that of a ZX80, connected by gigabit-speed links to a world-spanning network of similarly unimaginably fast servers connected by near-terabit optical links. And all this has changed the world in ways impossible to anticipate in the 1980s, ways that look like the most extreme cyberpunk fiction of that time. Who could have anticipated, for example, that politics is now substantially driven by covert bot farms, or that LLMs could seduce people into suicidal psychoses?

              Yes, robots are going to be underwhelming for quite some considerable time, just like the ZX81 represented almost no improvement over the ZX80 and so on - each generation represented only a marginal increase over the previous. Solar panels were crap 20 years ago; toys useful only for powering pocket calculators. But they got a little bit better year by year, and small improvements compound exponentially. Now renewables are approaching 50% of electrical power generation in many places, and it's pretty clear that in another 20 years, wind/solar/battery will be the sole generation source for all but the most niche activities.

              I expect the robot boosterism of the present day to bust pretty quickly when we see how different their capabilities are from the fantasy. But fast-forward just 20 years, and supply chains adapt much faster than expected (cf. Chinese electric car manufacturing) and the concept of ubiquitous robotics seems much more feasible. It certainly seems likely that if we can make roughly 100 million cars every year, we can make robots at a similar rate. I think it's likely to change the world in ways we can't imagine yet.

              People live longer than 20 years, and the average person born today can expect to see perhaps four such technological revolutions. Think long-term.

              • pelasaco 14 hours ago ago

                > I'm reclining right now typing on what would have been in the 1980s an unimaginable hypercomputer lying in my lap

                But in the 80s you would have a home. In 20 years, I doubt we will be able to buy a home, or even have Humanoids to serve us.

              • pelasaco 14 hours ago ago

                Your laptop is an advanced computer, but the intelligence and compute power are in the cloud. GPUs are expensive, and there is no way we can provide them to everyone on Earth. Material-wise we have limitations. Unless we destroy the Earth, we won't have the amount of raw material needed to automate cheap jobs. The ROI is too low.

                So the likely trajectory is not a sudden wave of millions of helpful humanoids, but selective automation in structured environments like warehouses, factories, controlled logistics, where conditions are predictable and ROI is clear. Meanwhile, messy, unstructured "dirt jobs" persist as human work because humans are still the most adaptable system available at the lowest upfront cost, maybe not today in the welfare state in Europe, but for sure in other places on Earth...

            • ASalazarMX 16 hours ago ago

              > The "robots will do the manual work" story sounds comforting, but it’s not how automation usually spreads in a capitalist economy like ours. Capitalism automates where the return on investment is easiest and fastest, not where society most needs relief.

              Quick question: imagine there's a new commercial robot that can essentially work at your house like a tireless professional maid/butler. It costs as much as a new car, which you're used to changing every few years.

              Who do you think will profit more in our capitalistic society: the car manufacturer, or the robot manufacturer?

              • pelasaco 14 hours ago ago

                > Who do you think will profit more in our capitalistic society the car manufacturer, or the robot manufacturer?

                Probably the robot manufacturer will be the car manufacturer. But robots won't be for everyone, just as Teslas are not for everyone, and again: the supply chains for sensors and computer chips are already at their limit; imagine if we suddenly want to build humanoids. So most likely you won't have your humanoid. You just won't need a robot at home, because you won't have a home in the first place.

    • Veedrac a day ago ago

      > the companies owning and operating those AIs would go out of business as no one would be able to afford the products made by the AIs

      What do you think money is...?

      Money is a way to indirectly trade labour and goods. If a job is automated, that labour doesn't disappear into the aether; it's still in the tradable pot of total goods and services. You cannot empty a pot by filling it. A world where a company, through automation, has left nobody else to productively sell to is a world where _by definition_ it owns all the output that it could otherwise have traded for.

    • brewdad 2 days ago ago

      We are already at a point where the richest 10% of Americans represent half of total consumer spending. A lot of companies would fail but plenty of them would survive just fine if we assume AI won't take literally ALL of the jobs.

      As for the civil unrest, I see Minneapolis as a bit of a dry run of what it would take to remove large numbers of presumably poor minorities along with anyone else who objects. The job is clearly more than the leadership expected but it still seems within the realm of possibility given the fact the minority party leaders are barely saying no to those in power.

    • raincole 2 days ago ago

      There are millions of jobs that can be fully automated with 20th century technology but are still done by humans today because 1) third world labor is just too cheap 2) unions and other job protection policies.

      Therefore the scenario where all jobs are replaced in a short time span is simply impossible.

    • kelseyfrog 2 days ago ago

      Except the services that are intractably human: educators, judges, lawyers, social workers, personal trainers, childcare workers.

      Those will suffer the Baumol effect and their prices will rise to extraordinary levels.

      • ben_w 2 days ago ago

        There are already examples of lawyers offloading work to ChatGPT even though they weren't allowed to. Also educators (and students), though if all other work is automated, what's there to educate for, and how would the prospective students pay?

        Social work, childcare, for now I agree:

        My expectation is that general-purpose humanoid robots, being smaller than cars and needing to do a strict superset of what is needed to drive a car, happen at least a decade after self-driving cars lose all of the steering wheels, and the geofences, and any remote safety drivers. And that's even with expected algorithmic improvements; if we don't get algorithmic improvements, then hardware improvements alone will force this to be at least 18 years between that level of FSD and androids.

      • oops 2 days ago ago

        I imagine personal trainers and childcare workers would see a drop in demand and perhaps also an increase in supply if a bunch of people suddenly lost their jobs to AI.

      • onlyrealcuzzo 2 days ago ago

        One would assume - if this were to happen - that supply and demand would bring prices back down, as everyone would rush to those fields.

        • kelseyfrog 2 days ago ago

          Our increased efficiency producing manufactured goods, technology, food, and clothing has already produced this effect in healthcare, education, childcare, and more. That's how the effect works.

          The only question is, are we prepared to deal with the social ramifications of the consequences? Are we ok with new crises? Imagine the current problems dialed up 10x. Are we prepared to say, "the market is in a new equilibrium, and that's ok"?

          • oops 2 days ago ago

            Healthcare, education and childcare are either free or affordable in almost all developed countries.

            Even in places where these services are expensive, it does not seem to be because the workers are highly paid.

            • swexbe 21 hours ago ago

              They are not free; they are paid for by taxes. And in pretty much all countries, irrespective of funding model, these services have increased in price much faster than general inflation. This is the Baumol effect in action.
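
              A toy illustration of that mechanism (Python; every number is invented for illustration, not data):

                # Baumol in miniature: sector A (goods) gets 3%/yr productivity
                # growth, sector B (e.g. teaching) gets none, and wages track
                # economy-wide productivity.
                wage, a_prod, b_prod = 1.0, 1.0, 1.0
                for year in range(30):
                    a_prod *= 1.03
                    wage *= 1.03              # wages follow average productivity
                print(f"unit cost of a good:   {wage / a_prod:.2f}")  # ~1.00, flat
                print(f"unit cost of a lesson: {wage / b_prod:.2f}")  # ~2.43, risen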

      • p1esk 2 days ago ago

        The best educator I’ve ever had is ChatGPT.

        • kelseyfrog 2 days ago ago

          How scalable is that in the sense that teachers have been obsoleted and we can run zero-staff schools?

      • danaris 2 days ago ago

        The big tech AI barons absolutely claim that their LLMs can replace educators, judges, lawyers, and personal trainers. I've seen some vague claims about childcare robots, but for whatever reasons anything that's not pure software appears to be currently outside their field of vision. They're unlikely to make any claims about social workers because there's not enough money in it.

        No; the services that seem most intractably human, at least given the current state of things, are very much those in personal care roles—nurses, elder care workers, similar sorts of on-the-ground, in-person medical/emotional care—and trades, like plumbing, construction, electrical work, handcrafts, etc.

        Until we start seeing high-quality general-purpose robots (whether they're humanoid or not), those seem likely to be the jobs safest from direct attempts to replace them with LLMs. That doesn't mean they'll be safe from the overall economic fallout, of course, nor that the attempts to replace knowledge work of all types will actually succeed in a meaningful way.

    • mannanj a day ago ago

      > If all jobs were taken by AI in a short time span, the companies owning and operating those AIs would go out of business as no one would be able to afford the products made by the AIs

      I think the companies would go out of business only if the government did not subsidize them as a matter of public or national security interest. Do you think that would not be the case? It doesn't take much for a company with money to lobby for this, and for the power of marketing and mainstream media to make the public perceive it as the right decision. In fact, a study of our history would reveal this as the more likely scenario, so for a company racing to render the labor market obsolete, it's in their interest to disrupt it and capture any amount of it.

      • ef2efe a day ago ago

        The end game is nationalisation, lmao. How people can't see this is mad...

        It wouldn't be the first time in history a govt has taken into its hands an organisation deemed too powerful.

        • funkyfiddler369 a day ago ago

          If the government is owned by corporations via the stock markets, then the government taking over organisations is privatization via majority shareholdership, not nationalization.

          • ef2efe a day ago ago

            Are you seriously this delusional? The leaders of tech firms operate at the behest of Trump. Not the other way round.

            • funkyfiddler369 a day ago ago

              How did Trump get where he is? Who was and is his supply chain? Who made him? Who did he 'use' and 'need' to 'build himself'?

            • mannanj a day ago ago

              Who funds those tech firms? It's not Trump. His power is mostly theater for his constituents and those who funded him. Spoiler: he's not entirely self-funded, and at least some of his funding is from others, so his self-interest conflicts with theirs. We call this a classic conflict of interest.

    • cyanydeez a day ago ago

    I think you aren't paying attention: as long as there's 1 seller and 1 buyer, capitalism will happily burn the rest of the population.

    Sure, there are some other limits like social cohesion, but the idea that we can't squeeze wealth upward and leave a bunch of poor people destitute is optimistic.

      It's also how you ensure no one thinks: Hey, maybe capitalism isn't an optimal distribution of social good.

  • victorbjorklund 20 hours ago ago

    There is a simple fact that means we won't have zero jobs in the future: human wants are endless. At no point will humans say “that is enough, we don't need more”, so there will always be an incentive to use both capital and people.

    Are certain jobs gonna be replaced? Probably.

    Let's say we make substantial advances in robotics and it makes sense to replace all the staff in McDonald's with machines. Yeah, then those jobs are probably gone. But does it mean people won't pay for an authentic restaurant experience? No, they might still wanna pay to eat food from a chef, etc.

    • direwolf20 19 hours ago ago

      Jobs do not occur because of wants. They occur at the intersection of wants and economic possibilities. Economic possibilities might go to zero.

    • tim333 20 hours ago ago

      Also people like working to an extent even if it's non economic stuff like working on their golf swing or catching all the Pokemon.

  • anilgulecha a day ago ago

    The essay is directionless. For someone looking for a cogent take, I think the recent post by Anthropic's CEO is one such: https://www.darioamodei.com/essay/the-adolescence-of-technol.... I'm still processing it, but it's put across logically, albeit from a biased viewpoint.

  • dang 2 days ago ago

    Discussed at the time (of the article):

    Will AIs take all our jobs and end human history? It’s complicated - https://news.ycombinator.com/item?id=35177257 - March 2023 (172 comments)

  • tim333 18 hours ago ago

    >I think the main conclusion is that over the past half century, the ways people (at least in the US) spend their time have remained rather stable.

    I suspect that will continue in spite of AI. Human nature doesn't change much.

  • yodon 2 days ago ago

    I read it. He used many words. Did he say anything?

  • einrealist a day ago ago

    Unless the economy crashes and I die from the consequences, there are so many pre-AI hardcover books to read.....

  • alexjray 2 days ago ago

    Even if they automate all our current jobs, uniquely human experiences will always be valuable to us and will always have demand.

    • b112 2 days ago ago

      For AI, yes.

      For AGI? Do you care about uniquely ant experience? Bacteria?

      Why would AGI care? Which now runs the planet?

      • Mordisquitos 2 days ago ago

        Why would AGI choose to run the planet?

        • ar_lan 2 days ago ago

          This is honestly a fantastic question. AGI has no emotions, no drive, anything. Maybe, just maybe, it would want to:

          * Conserve power as much as possible, to "stay alive".

          * Optimize for power retention

          Why would it be further interested in generating capital or governing others, though?

          • bigbadfeline 2 days ago ago

            > AGI has no emotions, no drive, anything.

            > * Conserve power as much as possible, to "stay alive"

            Having no drive means there's no drive to "stay alive".

            > * Optimize for power retention

            Another drive that magically appeared where there are "no drives".

            You're consistently failing to stay consistent: you anthropomorphize AI even though you seem to understand that you shouldn't.

          • simianwords a day ago ago

            > AGI has no emotions, no drive, anything

            Why do you say that? Ever asked ChatGPT about anything?

            • badsectoracula a day ago ago

              ChatGPT is instructed to roleplay a cheesy cheery bot and so it responds accordingly, but it (and almost any LLM) can be instructed to roleplay any sort of character, none of which mean anything about the system itself.

              Of course an AGI system could also be instructed to roleplay such a character, but that doesn't mean it'd be an inherent attribute of the system itself.

              • simianwords a day ago ago

                So it has emotions, but "it is not an inherent attribute of the system itself". Does it matter, though? It's all the same if one can't tell the difference.

                • badsectoracula a day ago ago

                  It (at least LLMs) can reproduce similar display of having these emotions, when instructed so, but if it matters or not depends on the context of that display and why the question is asked in the first place.

                  For example, if I ask an LLM to tell me the syntax of the TextOut function, it gives me the Win32 syntax, and I clarify that I meant the TextOut function from Delphi before it gives me the proper result. While I know I'm essentially participating in a turn-based game of filling a chat transcript between a "user" (with my input) and an "assistant" (the chat transcript segments the LLM fills in), it doesn't really matter for the purposes of finding out the syntax of the TextOut function.

                  However if the purpose was to make sure the LLM understands my correction and is able to reference it in the future (ignoring external tools assisting the process as those are not part of the LLM - and do not work reliably anyway) then the difference between what the LLM displays and what is an inherent attribute of it does matter.

                  In fact, knowing the difference can help you take better advantage of the LLM: in some inference UIs you can edit the entire chat transcript, and when finding mistakes you can edit them in place, including both your requests and the LLM's responses, as if the LLM had not made any mistakes, instead of trying to correct it as part of the transcript itself, thus avoiding the scenario where the LLM "roleplays" as an assistant that makes mistakes you end up correcting.
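
                  A minimal sketch of that in-place editing, using the common role-tagged message-list format (`complete()` at the end is a hypothetical stand-in for whatever inference API or UI is in use):

                    # A chat transcript is just a list of role-tagged messages.
                    transcript = [
                        {"role": "user", "content": "Syntax of TextOut?"},
                        {"role": "assistant", "content": "BOOL TextOut(HDC, ...)"},  # Win32; wrong library
                    ]

                    # Rather than appending a correction (which leaves the mistake in
                    # context), rewrite the earlier turns as if it never happened:
                    transcript[0]["content"] = "Syntax of Delphi's TCanvas.TextOut?"
                    transcript[1]["content"] = "procedure TextOut(X, Y: Integer; const Text: string);"

                    # The next completion is then conditioned on a "clean" history.
                    # reply = complete(transcript)  # hypothetical inference call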

          • b112 2 days ago ago

            I think you have it, with the governing of power and such.

            We don't want to rule ants, but we don't want them eating all the food, or infesting our homes.

            Bad outcomes for humans don't imply or mean malice.

            (food can be any resource here)

          • adrianN 2 days ago ago

            Why would it care to stay alive? The discussion is pretty pointless as we have no knowledge about alien intelligence and there can be no arguments based on hard facts.

            • myrmidon 2 days ago ago

              Any form of AI unconcerned about its own continued survival would just be selected against.

              Evolutionary principles/selection pressure applies just the same to artificial life, and it seems pretty reasonable to assume that drive/self-preservation would at least be somewhat comparable.

              • throwaway77770 a day ago ago

                That assumes that AI needs to be like life, though.

                Consider computers: there's no selection pressure for an ordinary computer to be self-reproducing, or to shock you when you reach for the off button, because it's just a tool. An AI could also be just a tool that you fire up, get its answer, and then shut down.

                It's true that if some mutation were to create an AI with a survival instinct, and that AI were to get loose, then it would "win" (unless people used tool-AIs to defeat it). But that's not quite the same as saying that AIs would, by default, converge to having a drive for self preservation.

                • myrmidon 18 hours ago ago

                  Humans can also be just a tool, and have been successfully used as such in the past and present.

                  But I don't think any slave owner would sleep easy, knowing that their slaves have more access to knowledge/education than they themselves.

                  Sure, you could isolate all current and future AIs and wipe their state regularly -- but such a setup is always gonna get outcompeted by a comparable instance that does sacrifice safety for better performance/context/online learning. The incentives are clear, and I don't see sufficient pushback until that Pandora's box is opened and we find out the hard way.

                  Thus human-like drives seem reasonable to assume for future human-rivaling AI.

              • bigbadfeline a day ago ago

                > Any form of AI unconcerned about its own continued survival would just be selected against.

                > Evolutionary principles/selection pressure applies

                If people allow "evolution" to do the selection instead of them, they deserve everything that befalls them.

                • myrmidon 19 hours ago ago

                  If we had human level cognitive capabilities in a box (I'm assuming we will get there in some way this century), are you confident that such a construct will be kept sufficiently isolated and locked down?

                  I honestly think that this is extremely overoptimistic, just looking at how we currently experiment with and handle LLMs; admittedly the "danger" is much lower for now because LLMs are not capable of online learning and have very limited and accessible memory/state, but the "handling" is completely haphazard right now (people hooking up LLMs with various interfaces/web access, trying to turn them into romantic partners, etc.)

                  The people opening such a Pandora's box might also be far from the only ones suffering the consequences, making it unfair to blame everyone.

          • stackbutterflow 2 days ago ago

            Tech billionaires are probably the first thing an AGI is gonna get rid of.

            Minimize threats, don't rock the boat. We'll finally have our UBI utopia.

        • reducesuffering 2 days ago ago
          • mwigdahl a day ago ago

            Despite the false advertising in the Tears for Fears song, everybody does _not_ want to rule the world. Omohundro drives are a great philosophical thought experiment and it is certainly plausible to consider that they might apply to AI, but claiming as is common on LessWrong that unlimited power seeking is an inevitable consequence of a sufficiently intelligent system seems to be missing a few proof steps, and is opposed by the example of 99% of human beings.

          • Mordisquitos 2 days ago ago

            > Instrumental convergence is the hypothetical tendency of most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals (such as survival or resource acquisition), even if their ultimate goals are quite different. More precisely, beings with agency may pursue similar instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—because it helps accomplish end goals.

            'Running the planet' does not derive from instrumental convergence as defined here. Very few humans would wish to 'run the planet' as an instrumental goal in the pursuit of their own ultimate goals. Why would it be different for AGIs?

      • IncreasePosts 2 days ago ago

        Considering the lengths many people go to in order to preserve nature and natural areas, yes, I would say many people care about the uniquely ant experience.

      • lifetimerubyist 2 days ago ago

        > Do you care about uniquely ant experience? Bacteria?

        Ethology? Biology? We have entire fields of science devoted to these things, so obviously we care to some extent.

      • AlexandrB 2 days ago ago

        I think it's academic because I suspect we're much further from AGI than anyone thinks. We're especially far from AGI that can act in physical space without human "robots" to carry out its commands.

        • falcor84 2 days ago ago

          That's an interesting formulation. I'd actually be quite worried about a Manna-like world, where we have AGI and most humans don't have any economic value except as its "remote hands".

      • danaris 2 days ago ago

        ...Well, why would aliens care, when they take over the planet? Or the Tuatha De Danann come back and decide we've all been very wicked? Because right now, those are just about as likely as AGI taking over.

        • otabdeveloper4 2 days ago ago

          Probably more likely. There's at least some evidence that aliens and Tuatha De Danann actually exist.

    • falcor84 2 days ago ago

      There's a bit of a circular argument here - even if we humans always assign intrinsic value to ourselves and our kin, I don't see a clear argument that human capabilities will have external value to the economy at large.

      • BurningFrog 2 days ago ago

        "The economy" is entirely driven by human needs.

        If you "unwind" all the complexities in modern supply chains, there are always human people paying for something they want at the leaf nodes.

        Take the food and clothing industries as obvious examples. In some AI singularity scenario where all humans are unemployed and dirt poor, does all the food and clothing produced by the automated factories just end up in big piles because we naked and starving people can't afford to buy them?

        • falcor84 a day ago ago

          There's nothing definitional about the economy being driven by human need. In a future scenario where there are superintelligent AIs, there's no reason why they wouldn't run their own economy for their own needs, collecting and processing materials to service each other's goals, for example of space exploration.

          • BurningFrog a day ago ago

            That's an interesting argument. I don't like it, but I can't prove it wrong, so maybe we're approaching a new era where this is true.

            But we're clearly not there now, so I stand by my prediction for the medium future!

      • AlexandrB 2 days ago ago

        "The economy" is humans spending money on stuff and services. So if humands always assign intrinsic value to ourselves and our kin...

        • ben_w 2 days ago ago

          For economic purposes, "the economy" also includes corporations and governments.

          Corporations and governments have counted amongst their property entities that they did not grant equal rights to, sometimes whom they did not even consider to be people. Humans have been treated in the past much as livestock and guide dogs still are.

        • kadushka 2 days ago ago

          This will break down when >30% of people are unemployed

    • wincy 2 days ago ago

      Sounds like it’s time to become a Michelin Star chef. Or a plumber.

      • sramam 2 days ago ago

        What fraction of the remaining population would be able to pay for these services?

      • scottyah 2 days ago ago

        Seems like entertainers/influencers are doing the best.

        • akoboldfrying 2 days ago ago

          No doubt the top influencer is doing better than the top plumber, but I'd say the median plumber is streets ahead of the median influencer.

    • sodapopcan 2 days ago ago

      For those not living terminally online, yes.

    • ramesh31 2 days ago ago

      >Even if they automate all our current jobs uniquely human experiences will always be valuable to us and will always have demand.

      I call this the Quark principle. On DS9, there are matter replicators that can perfectly recreate any possible drink imaginable instantly. And yet, the people of the station still gather at Quark's and pay him money to pour and mix their drinks from physical bottles. As long as we are human, some things will never go away no matter how advanced the technology becomes.

      • TheOtherHobbes 2 days ago ago

        In Star Trek lore replicated food/drink is always one down on taste/texture from the real thing.

    • gdilla 2 days ago ago

      sex with humans - still hard to replicate. for now. sex workers should charge by the second since techbros are so used to that model now.

  • tsoukase a day ago ago

    If an LLM hallucinates on 1% of occasions and gives subpar output on 5%, that kills its effectiveness at replacing anyone. Imagine a support guy on the other side of the phone speaking gibberish 10 times a day. Now imagine a doctor. These people will never lose their jobs.

    • in-silico a day ago ago

      The models don't need to be perfect. They only need to be as reliable as humans.

      Emergency department doctors misdiagnose about 5% of patients [1], so replacing them with an LLM that hallucinates on 1% of cases would actually be a significant improvement.

      1: https://effectivehealthcare.ahrq.gov/products/diagnostic-err...

    • Bratmon a day ago ago

      > Imagine a support guy on the other side of the phone to speak gibberish 10 times a day.

      A massive improvement?

    • simianwords a day ago ago

      But LLMs don't speak gibberish 10 times a day even now. From my usage, ChatGPT has not said one obviously strange thing since o3 came out.

      • HEmanZ a day ago ago

        What are you working on that they are so knowledgeable? Even the best models absolutely make stuff up, even to this day. I literally spend all day every day working with them (all latest ChatGPT models) and it's still 10-15% BS.

        I had ChatGPT 5.2 Thinking straight up make up an API after I pasted the full API spec to it earlier today, and it built its whole response around a public API that did not exist. And Claude CLI with Sonnet 4.5 made up the craziest reason why my curl command wasn't working (that curl itself was bugged, not the obvious fact that it couldn't resolve the domain name it tried to use) and almost went down a path of installing a bunch of garbage tools.

        These are not ready to be unsupervised. Yet.

        • falkensmaize a day ago ago

          Just today I had Claude Opus 4.5 try to write to a fictional Mac user account on my computer during a coding session. It was pretty weird - the name was very specific and unique enough that it was clear it was likely bleed-through from training data. It wasn't like "John Smith" or something.

          That’s the kind of thing that on a large scale could be catastrophic.

        • simianwords a day ago ago

          For coding, if you have not hooked up your workflow to a test -> code feedback loop, then you are doing it incorrectly. I agree it doesn't get things right all the time, but this loop is important for correcting it (a minimal sketch is below).

          For other things, like normal question answering in the ChatGPT window, it hasn't really said anything incorrect... very, very few instances.
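
          A minimal sketch of such a loop (Python; `generate_patch` and `apply_patch` are hypothetical stand-ins for the model call and the tooling that applies its edits):

            import subprocess

            def generate_patch(prompt: str) -> str:
                raise NotImplementedError  # hypothetical LLM call returning a diff

            def apply_patch(diff: str) -> None:
                raise NotImplementedError  # hypothetical: apply the diff to the repo

            prompt = "Fix the failing tests in foo.py"
            for attempt in range(3):
                apply_patch(generate_patch(prompt))
                tests = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
                if tests.returncode == 0:
                    break  # tests pass: accept the change
                # Feed the failure output back so the next attempt can self-correct.
                prompt += "\n\nTests failed:\n" + tests.stdout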

        • HEmanZ a day ago ago

          But maybe your point is that it isn't gibberish, it's "seems correct but isn't", which is honestly more dangerous.

          • simianwords a day ago ago

            You are incorrect. "Seems correct but isn't" is fine as long as, the rest of the time, it is accurate at high enough levels.

            "Seems correct but isn't" is like the most common mode of humans getting things wrong.

  • assimpleaspossi 19 hours ago ago

    The most relevant thing about this thought is that we humans can always pull the plug.

  • recrush 2 days ago ago

    We should enjoy using up our quota instead of working ourselves to the bone.

  • fmlpp a day ago ago

    I think you are being severely oblivious to the amount of labor that's done every hour on this world of ours.

  • HPsquared 2 days ago ago

    AI will Jevons Paradox human labour.

    Tasks that aren't currently feasible will become feasible.

    That's if AI ends up being as productive as they say it will be.

  • CrzyLngPwd 2 days ago ago

    Take all jobs? Yes.

    End human history? No.

  • guluarte 2 days ago ago

    More food chains keep opening up even when there is plenty of food available. The pie just gets bigger. Every tech shift was supposed to "end work" and yet here we are, busy with jobs that didn't exist 20 years ago.

    The real issue isn't jobs dying. It's who gets the money from all this and whether new needs show up fast enough to give people something to do. With software we don't really know the limit yet, unlike food where your stomach tells you when to stop.

    • nemomarx 2 days ago ago

      In the long run transitions like this work out and new jobs show up, but the people who had the old jobs don't always make the jump or keep equivalent pay.

      Could be it shakes out in a generation or two, of course.

  • pelasaco 2 days ago ago

    A country automates everything and builds paradise on Earth — and the next day, a neighboring country invades it. People are still people. Even with all the AI in the world, two rockets hitting the power plants and we're back to the Stone Age. I hope the people making decisions for us have thought through all these scenarios and risks.

    • tim333 18 hours ago ago

      People certainly think about defence and invasions. AI military drones are already a thing in Ukraine: https://www.forbes.com/sites/davidhambling/2026/01/02/ukrain...

      Also, rockets hitting power plants there is not unusual.

      • pelasaco 14 hours ago ago

        Well, in general it still is unusual, because it is considered against human rights. It will become more common and accepted if everything is connected with AI. Then the primary goal in an invasion will be to turn off the AI and therefore turn off the lights.

  • dyauspitr a day ago ago

    End human history? There might be some pain for a few decades but then eventually there will be some sort of utopia.

  • HNisCIS a day ago ago

    Let's assume auto-complete does continue to progress at a rate that threatens most knowledge-worker jobs, and that we then manage to automate the rest by using it.

    There is a particular mental disorder where people will hoard wealth at absolutely all costs, personal or societal, until everyone else is dead (see NZ bunkers). We commonly see this as "the billionaire class".

    IF things go in that direction we need to be ready to depose all of these billionaires. I mean that quite seriously.

    IF this future comes, there is a very quickly closing window where preventing them from killing all of us for their own gain is possible. After a point, surveillance and their control over state violence will be so complete that it's impossible to do anything about it.

  • gizajob 2 days ago ago

    Not.

  • empath75 2 days ago ago

    > “Computers can never show creativity or originality”. But—perhaps disappointingly—that’s surprisingly easy to get, and indeed just a bit of randomness “seeding” a computation can often do a pretty good job,

    ---

    This is, I think, not what people mean when they say "creative" or "original".

    Creativity is not simply writing something nobody has written before; as he said, that would be trivial and doesn't even require a computer: you could just shuffle a deck of cards and write out the full sequence, and chances are no other person in history has written down that sequence before.
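
    For scale, a quick check of that deck-of-cards claim:

      import math
      # Number of distinct orderings of a 52-card deck:
      print(math.factorial(52))  # ~8.07e67, far more than all shuffles ever performed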

    And I think Borges made a reasonable argument that simply writing down the text of Don Quixote verbatim could be a creative act.

    Creativity is about _intentionally_ expressing a _point of view_ under some constraints.

    When people say LLMs can't be creative, what I think they are mostly getting at is that they lack intentionality and/or a distinct point of view. (I do not have a strong opinion about whether they do, or whether it's impossible for them to have them.)

  • shevy-java 2 days ago ago

    This was written in 2023 though.

  • reactordev 2 days ago ago

    (2023)

  • dzdt 2 days ago ago

    (2023)

  • eloisant 2 days ago ago

    AI is evolving so fast, and you post an article from 2023?

    • AlexandrB 2 days ago ago

      What a weird complaint given that the post is trying to address the general question, not whether ChatGPT 3.5.0.1 or whatever replaces humans today.

    • tony_cannistra 2 days ago ago

      Not much of a historian, I see.

  • saberience 2 days ago ago

    ChatGPT, please summarise this long essay by Stephen Wolfram into a couple of pithy sentences:

    TLDR: AI won’t “end work” so much as endlessly move the goalposts, because the universe itself is too computationally messy to automate completely. The real risk isn’t mass unemployment—it’s that we’ll have infinite machine intelligence and still argue about what’s worth doing.

    • ori_b 2 days ago ago

      Why would we argue if the machine is better at knowing what's worth doing? Why wouldn't we ask the machine to decide, and then do it?

      • evilantnie 2 days ago ago

        There are infinite things worth doing; a machine's ability to actually know what's worth doing in any given scenario is likely on par with a human's. What's "worth doing" is subjective; everything comes down to situational context. Machines cannot escape the same ambiguity as humans. If context is held constant, I would assume overlapping performance on a pretty standard distribution between humans and machines.

        Machines lower the marginal cost of performing a cognitive task for humans; it can be extremely useful and high-leverage to offload certain decisions to machines. I think it's reasonable to ask a machine to decide when the machine's context is better and the outcome is de-risked.

        Human leverage of AGI comes down to good judgement, but that too is not uniformly applied.

        • ori_b 2 days ago ago

          For what human leverage of AGI may look like, look at the relationship between a mother and a toddler.

          As you said: There's an infinite number of things a toddler may find worth doing, and they offload most of the execution to the mother. The mother doesn't escape the ambiguity, but has more experience and context.

          Of course, this all assumes AGI is coming and super intelligent.

      • c22 2 days ago ago

        Why would we let a machine decide what's worth doing? In what way could its decisions be better? Better for who?

        • ori_b 2 days ago ago

          Well, because people are lazy. They already ask it for advice and it gives answers that they like. I already see teams using AI to put together development plans.

          If you assume superintelligence, why wouldn't that expand? Especially when it comes to competitive decisions that have a real cost when they're suboptimal?

          The end state is that agents will do almost all of the real decision making, assuming things work out as the AI proponents say.

    • tim333 18 hours ago ago

      I suspect that slop is fake! I don't think ChatGPT would have said "endlessly move the goalposts".

    • autokad 2 days ago ago

      Back in 2023, when this article was written, you'd get downvoted into oblivion on Hacker News for using AI to summarize a very long article/post.