As Rocks May Think

(evjang.com)

76 points | by modeless 11 hours ago ago

73 comments

  • lrc 7 hours ago ago

    if you click through to the linked Vannevar Bush article and scroll down there are a bunch of vintage ads around the prose that are kind of interesting. And some of the predictions have been well overtaken by events!

  • munificent 10 hours ago ago

    > We are entering a golden age in which all computer science problems seem to be tractable, insomuch as we can get very useful approximations of any computable function.

    Alternatively, we are entering a dark age where the billionaires who control most of the world's capital will no longer need to suffer the indignity of paying wages to humans in order to generate more revenue from information products and all of the data they've hoarded over the past couple of decades.

    > the real kicker is that we now have general-purpose thinking machines that can use computers and tackle just about any short digital problem.

    We already have those thinking machines. They're called people. Why haven't people solved many of the world's problems already? Largely because the people who can afford to pay them to do so have chosen not to.

    I don't see any evidence that the selfishness, avarice, and short-term thinking of the elites will be improved by them being able to replace their employees with a bot army.

    • skybrian 7 hours ago ago

      I don't think you've read those quotes very closely? He's writing about all computer science problems. And "just about any short digital problem" is not the same as solving the world's problems.

      AI ghosts can do a lot of things, but they're limited by being non-physical.

      • munificent 6 hours ago ago

        He does also say:

        > The entire global economy is re-organizing around the scale-up of AI models.

        > Software engineering is just the beginning; ...

        > Air conditioning currently consumes 10% of global electricity production, while datacenter compute less than 1%. We will have rocks thinking all the time to further the interests of their owners. Every corporation with GPUs to spare will have ambient thinkers constantly re-planning deadlines, reducing tech debt, and trawling for more information that helps the business make its decisions in a dynamic world.

        > Militaries will scramble every FLOP they can find to play out wargames, like rollouts in a MCTS search. What will happen when the first decisive war is won not by guns and drones, but by compute and information advantage? Stockpile your thinking tokens, for thinking begets better thinking.

        So he is extending this to more than just computer science.
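
        (For anyone not familiar with the jargon in that last quote: a "rollout" in MCTS just means simulating a position to the end many times, usually with cheap random play, to estimate how good a move is. A minimal toy sketch, with every name and number invented purely for illustration:)

            import random

            MOVES = ["advance", "hold", "withdraw"]

            def rollout(state, move, depth=10):
                # Play the rest of the toy "game" with random noise and score the result.
                score = state + {"advance": 1, "hold": 0, "withdraw": -1}[move]
                for _ in range(depth):
                    score += random.choice([-1, 0, 1])  # opponent moves / fog of war, purely illustrative
                return 1 if score > 0 else 0            # 1 = win, 0 = loss

            def best_move(state, n=1000):
                # Average many random rollouts per move; more compute -> tighter estimates.
                win_rate = {m: sum(rollout(state, m) for _ in range(n)) / n for m in MOVES}
                return max(win_rate, key=win_rate.get), win_rate

            print(best_move(state=0))

        Full MCTS adds a search tree and a selection rule on top, but the compute-hungry part is these rollouts, which seems to be what the quote means by stockpiling FLOPs and thinking tokens.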

        • skybrian 5 hours ago ago

          Yeah, I think that's magical thinking about how much better war planning will help.

    • keeda 5 hours ago ago

      Here are my thoughts, which are not fully formed because AI is still so new. But taking this line of thought to its reductio ad absurdum, it becomes apparent that the elites have a critical dependency on us plebs:

      Almost all of their wealth is ultimately derived from people.

      The rich get richer by taking a massive cut of the economy, and the economy is basically people providing and paying for services and goods. If all the employees are replaced and can earn no money, there is no economy. Now the elite have two major problems:

      a) What do they take a cut of to keep getting richer?

      b) How long will they be safe when the resentment eventually boils over? (There's a reason the doomsday bunker industry is booming.)

      My hunch is, after a period of turmoil, we'll end up in the usual equilibrium where the rest of the world is kept economically growing just enough to keep (a) us stable enough not to revolt and (b) them getting richer. I don't know what that looks like, could be UBI or something. But we'll figure it out because our incentives are aligned: we all want to stay alive and get richer (for varying definitions of "richer" of course.)

      However, I suspect a lot will change quickly, because a ton of things that made up the old world order are going to be upended. Like, you used to need millions in funding to hire a team to launch any major software product; this ultimately kept the power in the hands of those with capital. Now a single person with an AI agent and a cloud platform can do it themselves for pocket change. This pattern will repeat across industries.

      The power of capital is being disintermediated, and it's not clear what the repercussions will be.

    • Centigonal 10 hours ago ago

      I don't understand why you're being downvoted. This is a topic worth discussing.

      Like every previous invention that improves productivity (cf. copiers, steam power, the wheel), this wave of AI is making certain forms of labor redundant, creating or further enriching a class of industrialists, and enabling individuals to become even more productive.

      This could create a golden age, or a dark age -- most likely, it will create both. The industrial revolution created Dickensian London, the Luddite rebellion & ensuing massacres, and Blake's "dark satanic mills," but it also gave me my wardrobe of cool $30 band T-shirts and my beloved Amtrak train service.

      Now is the time to talk about how we predict incentive structures will cause this technology to be used, and what levers we have at our disposal to tilt it toward "golden age."

      • sunsunsunsun 9 hours ago ago

        Considering the usage of LLMs by many people as a sort of friend or psychologist, we also get to look forward to a new form of control over people. These things earn people's "trust" and there is no reason why they couldn't be used to sway people's opinions. Not to mention the devious and subtle ways they can advertise to people.

        Also, these productivity gains aren't used to reduce working time for the same number of people, but instead to reduce the number of people needed to do the same amount of work. Working people get to see the productivity benefits via worsening material conditions.

      • beeflet 9 hours ago ago

        Unlike every previous invention that improves productivity, it is making every form of labor redundant.

        • zozbot234 9 hours ago ago

          AIUI, in most lines of work AI is being used to replace/augment pointless paper-pushing jobs. It doesn't seem to be all that useful for real, productive work.

          Coding may be a limited exception, but even then the AI's job is to be basically a dumb (if sometimes knowledgeable) code monkey. You still need to do all the architecture and detailed design work if you want something maintainable at the end of the day.

          • munificent 9 hours ago ago

            > It doesn't seem to be all that useful for real, productive work.

            Even the most pointless bullshit job accomplishes a societal function by transferring wages from a likely wealthy large corporation to an individual worker who has bills to pay.

            Eliminating bullshit jobs might be good from an economic efficiency perspective, but people still gotta eat.

            • uoaei 8 hours ago ago

              The logic of American economic policy relies on a large velocity of money driven by consumer habits. It is tautological, and it is obsolete in the face of the elite trying to minimize wage expenses.

            • DennisP 7 hours ago ago

              If the only point is distributing money, then the pointless bullshit job is an unnecessary complication.

              • munificent 6 hours ago ago

                It's not unnecessary to the person who uses it to pay their bills.

                • xg15 3 hours ago ago

                  I think GP meant that the money could be distributed directly without the job in between, i.e. UBI.

                  Of course that comes with its own set of problems, e.g. that you will lose training, connections, the ability to exert influence through the job or any hope of building a career.

          • beeflet 9 hours ago ago

            Real productive work like what? What do you think all this hubbub with robotics is about?

            I mean, I know what you are getting at. I agree with you on the current state of the art. But advancements beyond this point threaten everyone's job. I don't see a moat for 95% of human labor.

            There's no reason why you couldn't figure out an AI to assemble "the architecture and detailed design work". I mean I hope it's the case that the state of the art stays like this forever, I'm just not counting on it.

            • zozbot234 9 hours ago ago

              Robotics is nothing new, we had robots in factories in the 1980s. The jobs of modern factory workers are mostly about attending to robots and other automated systems.

              > There's no reason why you couldn't figure out an AI to assemble "the architecture and detailed design work".

              I'd like to see that because it would mean that AIs have managed to stay at least somewhat coherent over longer work contexts.

              The closest you get to this (AIUI) is with AIs trying to prove complex math theorems, where the proof checking system itself enforces the presence of effective large-scale structure. But that's an outside system keeping the AI on a very tight leash with immediate feedback, and not letting it go off-track.
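
              Roughly, that "tight leash" loop looks like the sketch below; the checker and the four-line "proof" are stand-ins I made up, and a real system would call a Lean or Coq kernel and sample many candidate steps:

                  def check_step(proof, step):
                      # Stand-in for a formal proof checker: it accepts a step only if it
                      # actually follows. Here it just checks against a fixed valid script.
                      script = ["assume n = 2*k", "then n*n = 4*k*k", "so n*n = 2*(2*k*k)", "hence n*n is even"]
                      return len(proof) < len(script) and step == script[len(proof)]

                  def propose_step(proof):
                      # Stand-in for the model proposing the next step of the proof.
                      script = ["assume n = 2*k", "then n*n = 4*k*k", "so n*n = 2*(2*k*k)", "hence n*n is even"]
                      return script[len(proof)]

                  proof = []
                  while len(proof) < 4:
                      step = propose_step(proof)
                      if check_step(proof, step):  # immediate outside feedback after every step
                          proof.append(step)
                      else:
                          break                    # reject and re-sample in a real system
                  print(proof)

              The coherence over the long run comes from the checker rejecting bad steps immediately, not from the model holding a large plan in its head.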

      • keybored 10 hours ago ago

        People fought back. Who is fighting back now?

        Capitalists have openly gloated in public about wanting to replace at least one profession. That was months or years ago. What are people doing in response? Discussing incentive structures?

        SC coders paid hundreds of thousands a year are just letting this happen to them. “Nothing to be done about another 15K round of layoffs, onlookers say”

        • zozbot234 9 hours ago ago

          > Capitalists have openly gloated in public about wanting to replace at least one profession. That was months or years ago. What are people doing in response?

          Great, let them try. They'll find out that AI makes the human SC coder more productive not less. Everyone knows that AI has little to nothing to do with the layoffs, it's just a silly excuse to give their investors better optics. Nobody wants to admit that maybe they've overhired a bit after the whole COVID mess.

        • AndrewKemendo 9 hours ago ago

          This is exactly it, nobody is going to do anything about it

        • CamperBob2 9 hours ago ago

          Buggy-whip makers inconsolable!

    • denkmoon 9 hours ago ago

      A labouring proletariat with bread and circuses is a distracted proletariat. Billionaires are still flesh and blood, much like Louis XVI and Charles I.

      • AndrewKemendo 9 hours ago ago

        Are you actually doing anything in that direction or is this “tough guy on the internet?”

        I see literally zero people doing the equivalent of “breaking the factories” like the luddites attempted

        • denkmoon 9 hours ago ago

          We're not there yet. The luddite movement formed and acted over decades not years.

          Do you not see the overwhelmingly negative response to AI produced goods and services from the average westerner?

          • AndrewKemendo 8 hours ago ago

            So, no then. Like I said upstream, nobody is going to do anything about it.

            At a certain point it’s too late.

        • tejohnso 8 hours ago ago

          I think we'd need a lot more suffering before we have enough people to start that kind of action. If we see 35% unemployment over the next 5 years with insufficient time to adjust, then maybe the pitchforks come out.

          • AndrewKemendo 8 hours ago ago

            So then we should just go slightly slower?

            What if it’s over 10 years?

            • tejohnso 7 hours ago ago

              Well, time is one aspect but we'd also need motivation and proper execution for a reasonable chance at successful adaptation. My guess is we'll coast along the boundary. I don't imagine things will move so fast as to cause the sort of general upheaval that I think you're talking about. But I do think things will move fast enough to cause significant harm on a larger scale than we've seen recently in the West.

              • AndrewKemendo 6 hours ago ago

                Yeah, I agree that’s the most likely future.

    • measurablefunc 10 hours ago ago

      What you fail to understand Bob is that as long as we let the billionaires do what they want then we all automatically win. That's just how the system is designed to work, we can't lose as long as Musk & his buddies are at the helm.

      • munificent 10 hours ago ago

        Gazing up at them adoringly, mouth open, waiting for it all to trickle down on my face.

        • measurablefunc 10 hours ago ago

          It's the only thing us plebeians can hope for. When all is said & done the people at the top are the only ones that can truly create wealth w/ their innovative genius. The rest of us should just shut up & follow their orders for our own good.

          • drdaeman 9 hours ago ago

            That would be a thing if wealth correlated with innovation. I'm afraid the correlation is inverse in way too many cases.

            • munificent 9 hours ago ago

              This comment thread is being sarcastic.

      • undefined 7 hours ago ago
        [deleted]
  • lawrenceyan 10 hours ago ago

    Biggest update I see is that he thinks AI 2027 is actually going to happen.

  • esafak 10 hours ago ago

    This looks like a survey. Is there a thesis; any claim?

    • tejohnso 8 hours ago ago

      Sounds like OP is happy to be alive at this moment, reveling in the wonder of it all, and wanting to share.

  • kalterdev 7 hours ago ago

    > Chief among all changes is that machines can code and think quite well now.

    They can’t and never will.

    • johnfn 7 hours ago ago

      Are you really claiming that there isn't a machine in existence that can code? And that that is never possible?

      • kalterdev 6 hours ago ago

        It can code in an autocomplete sense. In the serious sense, if we don’t distinguish between code and thought, it can’t.

        Observe that modern coding agents rely heavily on heuristics. An LLM excels at its training datasets, at analyzing existing knowledge, but it can't generate new knowledge on the same scale; its thinking (a process of identification and integration) is severely limited at the conscious level (the context window), where being rational is most valuable.

        Because it doesn't have volition, it cannot choose to be logical rather than irrational; it cannot commit to attaining a full, non-contradictory awareness of reality. That's why I said "never."

        • johnfn 6 hours ago ago

          > It can code in an autocomplete sense.

          I just (right before hopping on HN) finished up a session where an agent rewrote 3000 lines of custom tests. If you know of any "autocomplete" that can do something similar, let me know. Otherwise, I think saying LLMs are "autocomplete" doesn't make a lot of sense.

          • kalterdev 5 hours ago ago

            That’s impressive. I don’t object to the fact that they make humans phenomenally productive. But “they code and think” makes me cringe. Maybe I’m confusing lexicon differences for philosophic battles.

        • libraryofbabel 6 hours ago ago

          Some of that is true, sure, but nobody who claims LLMs can code and reason about problems is claiming that they operate like humans. Can you give concrete examples of actual specific coding tasks that LLMs can’t do and never will be able to do as a consequence of all that?

          • kalterdev 6 hours ago ago

            I think it can solve just about any leetcode problem. I don't think it can build an enterprise-grade system. It can be trained on an existing one, but these systems are not closed, and no amount of past knowledge seems to predict the future.

            That’s not very specific but I don’t have another answer.

      • wavemode 6 hours ago ago

        I think "quite", "well", and "now" are the objectionable parts of the quote.

  • macrocosmos 8 hours ago ago

    > AI generated videos are indistinguishable from reality.

  • alsetmusic 10 hours ago ago

    This person doesn't understand how LLMs work.

    • pradeesh 8 hours ago ago

      Not sure how you could read this essay and come to that conclusion. It definitely aligns with my own understanding, and his conclusions seem pretty reasonable (though the AI 2027/Situational Awareness part might be arguable)

      • alsetmusic 4 hours ago ago

        Absolutely:

        > In order to predict where thinking and reasoning capabilities are going, it's important to understand the trail of thought that went into today's thinking LLMs.

        No. You don't understand at all. They don't think. They don't reason. They are statistical word generators. They are very impressive at doing things like writing code, but they don't work the way that is being inferred here.

    • DennisP 9 hours ago ago

      Care to be more specific?

      • alsetmusic 4 hours ago ago

        See reply in related thread.

  • TacticalCoder 10 hours ago ago

    > As Rocks May Think

    I thought they meant the plural of ASRock as in "ASRocks May think" and thought this was about ASRock motherboards getting a BIOS/UEFI with an integrated LLM or something.

  • undefined 10 hours ago ago
    [deleted]
  • measurablefunc 10 hours ago ago

    Yes, yes, we are all going to be living in an automated & luxurious communist utopia. Here are some material facts to ground the exuberance: 1) Lifecycle of a typical GPU in a data center is 1-3 years, 2) Buildout is already limited by production capacity & will hit production walls by 2027-2028 when turnover matches & exceeds production capacity, 3) TSMC's projected capacity is ~130k wafers/month & it is not keeping up w/ demand, which is more than doubling, 4) These "geniuses" & "thinking rocks" in data centers require lots of power, & that capacity was already saturated in 2025, 5) So along w/ production capacity limitations, power production is now another gating factor.
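
    A back-of-envelope sketch of how 1) and 3) interact, using the figures above plus one assumed number (dies per wafer, which varies a lot by chip size and yield):

        wafers_per_month = 130_000  # TSMC projected capacity, per point 3
        dies_per_wafer = 60         # ASSUMED: rough figure for a large accelerator die
        gpu_lifetime_years = 3      # upper end of the 1-3 year lifecycle in point 1

        gpus_per_year = wafers_per_month * 12 * dies_per_wafer
        steady_state_fleet = gpus_per_year * gpu_lifetime_years  # fleet size at which all new
                                                                 # output just replaces retirements
        print(f"{gpus_per_year:,} GPUs/year -> ~{steady_state_fleet:,} GPU ceiling")

    Once the installed base approaches that ceiling, every wafer goes to replacing retired GPUs and net growth stops unless capacity expands or lifetimes stretch, which is the turnover wall in point 2.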

    Anyway, like data centers in space, there are lots of material limitations that all of these exuberant "ZOMG rocks can think now" essays sweep under the rug to drive a very biased narrative about what is actually happening, & the fact that all those binary bits are produced by real materials that have lifecycles & production limits not visible in the digital artifacts.

    • zozbot234 9 hours ago ago

      GPU lifecycle is 1-3 years because GPUs are becoming obsolete for cutting-edge work (especially re: power use) in that timeframe. This is good news if you'd like to see expanded use of AI. Production walls at fabs will matter little since future silicon dies will be capable of far more per unit area than current ones, so there will be plenty of incentive to upgrade.

      • measurablefunc 9 hours ago ago

        I'm just stating facts. I don't care whether it's a good or bad thing. You can theorycraft about future utopias w/ computronium as much as you want but the facts as they stand today are what I stated.

        • groby_b 9 hours ago ago

          "I don't care whether it's a good or bad thing" is not really a believable statement given your polemic closings.

          • measurablefunc 9 hours ago ago

            If you think the facts are "good" or "bad" then take it up w/ the people who can do something about it to make them "better". Typical discussions about stuff like this become nonsensical & incoherent b/c whether you think the facts are "good" or "bad" makes no difference to the material reality & again, those are as I have stated them.

  • mynameisjody 9 hours ago ago

    I'm still waiting for one of these articles to be written by someone without something to be directly gained by the hype. Eric Jang, VP of AI at 1X.

    • johnfn 8 hours ago ago

      The previous post in this blog is titled "Leaving 1X". So your wait is over!

      • Paracompact 5 hours ago ago

        Very, very likely he is remaining in the AI space.

      • RealityVoid 7 hours ago ago

        Unrelated, but it seems his previous company, 1x, was initially named Halodi and was located in Norway. And eventually, it was moved with all employees to Silicon Valley. How the hell does that work? That sounds like a logistical nightmare. Do you upend all those people's lives? Do you fire those who refuse? How many Norwegians even want to go to the US? Sounds crazy to me.

        • xg15 3 hours ago ago

          Did they actually move or is it just a "remote-first" company now?

          (Or even just registered in SV but still physically in Norway?)

          Edit: Seems like a mix of all of it:

          > I joined Halodi Robotics in 2022 (prior name of the company) as the only California-based employee. At the time, we were about 40 based out of Norway and 2 in Texas.

        • jrmg 5 hours ago ago

          How do they all get work visas?

  • xyzsparetimexyz 9 hours ago ago

    [flagged]

    • zozbot234 9 hours ago ago

      Nah, the ugliest prose is clanker prose and this definitely isn't. This stuff comes 100% from an actual carbon-based lifeform.

      • akovaski 7 hours ago ago

        I think that Gemini regularly generates inane metaphors like the above. As an example, here's a message that it sent me when I was attempting to get it to generate a somewhat natural conversation:

        ----

        Look, if you aren't putting salt on your watermelon, you’re basically eating flavored water. It’s the only way to actually wake up the sweetness. People who think it’s "weird" are the same ones who still buy 2-in-1 shampoo.

        Anyway, I saw a guy at the park today trying to teach a cat to walk on a leash. The cat looked like it was being interrogated by the FBI, just dead-weighting it across the grass while he whispered "encouragement."

        Physical books are vastly superior to Kindles solely for the ability to judge a stranger's taste from across a coffee shop. You can’t get that hit of elitism from a matte gray plastic slab.

        ----

        This was with a prompt telling it to skip Reddit-style analogies.

        • wtetzner 6 hours ago ago

          I buy 3-in-1 shampoo, conditioner and body wash.

      • beeflet 9 hours ago ago

        Who is the wise guy that gave water the ability to think

    • kagol 6 hours ago ago

      Curious about the root of your distaste. Just a bad analogy/visualization?

    • appellations 9 hours ago ago

      Author is Vice President of AI, 1X Technologies.

    • netsharc 8 hours ago ago

      The article goes from philosophical (what AI will do to society) to jargony blowhard and then an even deeper look (I think; I flicked my thumb past several screens of text), and back out again.

      Come on author, learn to write properly. Or tell your LLM to not mix a philosophical article with a technical one.