Meta's embrace of AI is making its employees miserable

(nytimes.com)

368 points | by JumpCrisscross 12 hours ago ago

392 comments

  • 1vuio0pswjnm7 an hour ago ago

    "Many workers immediately revolted. In online comments, they blasted the tracking as a privacy violation, ..."

    “How do we opt out?” - Meta employee

    Poetic justice, or "dogfooding"

    • zeroonetwothree 36 minutes ago ago

      Most Meta employees do not support invasive features without opt-outs in their products. Some try to argue against it. But ultimately there is only one person who gets to decide.

      You might as well blame the entire US population for certain problematic actions of the president.

      • xigoi 18 minutes ago ago

        Being a USA citizen is often not a choice. Being a Metaslop employee is a choice.

      • gloxkiqcza 18 minutes ago ago

        > You might as well blame the entire US population for certain problematic actions of the president

        You actively decide to work for Meta, which has been known to dishonestly violate privacy since basically day one [1]. Most US citizens were simply born as one. It’s also much easier to leave a company than to move out of the USA.

        [1] https://www.businessinsider.com/embarrassing-and-damaging-zu...

      • Sh0000reZ 25 minutes ago ago

        "A country of, by, and for the people."

        A nation is what its people tolerate.

        There's an economics euphemism for it: what the market will bear.

        Americans bear a government and neighbors that provide zero assurance of food, shelter, or healthcare.

        Millions support the Prez, and as for the rest, even though they have power in numbers... well, not doing anything is a choice too.

        Good luck out there. My fellow Americans and I don't have to care if you end up homeless in your car. Murica!

      • reshlo 29 minutes ago ago

        Getting another job when you have Meta on your CV is a lot easier than moving to another country.

    • throwaway7356 18 minutes ago ago

      I wonder what the thought process is? "I only work here because I like to violate everyone else's privacy. Mine has to be respected"?

      It's not like Meta/Facebook ever had moral concerns about privacy violations or surveillance (or many other things).

    • Bombthecat 44 minutes ago ago

      You can opt out by clicking through those 9 pages, each with a confusing form and buttons.

      And on the next update we just enable it again and make you go through the whole process again :)

  • Havoc 11 hours ago ago

    I think there's also a wider social-norms piece missing around AI use in a knowledge-work context.

    Someone forwarded an enormous amount of text over Teams the other day at work. From someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted ChatGPT.

    For, say, the HN gang that thinks in terms of context switches, information load, and things on THAT wavelength, the problem with that situation is obvious, but I realised then that it is not at all obvious to the general public. She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

    There is zero understanding of, or consensus on, acceptable practices around that sort of thing baked into societal norms right now.

    • erentz 7 hours ago ago

      Seems AI has made it cheap to produce information, but now you have to spend more time parsing it. And it's now the less competent/useful people spending less time producing more information, with the more useful people spending more of their valuable time parsing that information. This is why I'm skeptical of LLMs ever becoming a net benefit in most organizations.

      • anonymars 3 hours ago ago

        Intellectual denial of service

      • scruple 5 hours ago ago

        LLMs are Brandolini's Law taken to an entirely different plane of existence.

      • jimbokun 3 hours ago ago

        Calling it “information” is generous.

      • Bombthecat 42 minutes ago ago

        You don't parse the information. You paste it back into the AI to get the bullet points the first person put in.

      • trollbridge 6 hours ago ago

        Well, you can use LLMs to parse LLM-generated slop. They make nice summaries. I have taken this approach with people who send me obviously LLM-generated text: I simply run it through an LLM, paste the summary, ask them "Is this an accurate summary?", and then ask them for their original prompt.

        • Sgt_Apone 4 hours ago ago

          Might as well donate money to the AI companies at this point.

        • dodu_ 3 hours ago ago

          Ah yes, take my single sentence, blow it up to 3 paragraphs with LLMs, and then the person reading it can have an LLM summarize it in a single sentence.

          What the fuck are we even doing anymore?

          • anon84873628 2 hours ago ago

            The thing is, eventually these products will be more integrated into business workflows and have access to all the context, so the three-paragraph expansion probably will be a significant improvement on the original input.

            And either that person won't be employed anymore, or the thing they were asking for in the first place will be automated for them.

            I've already got my agent building a dossier on everyone we interact with. I haven't started training it on their writing style so I can mirror it back to them... yet.

            • BlackFly an hour ago ago

              Even as these products improve, sending the output and not the prompt will remain useless. The prompt captures the intent and the level of real consideration of the person sending it; the receiver can augment it with additional information if they want to.

          • philipswood 2 hours ago ago

            He's describing a 4 step process:

            1) >I simply run it through an LLM,

            2) >paste the summary,

            3) >and ask them "Is this an accurate summary?"

            4) >and then ask them for their original prompt.

            Agreed that just step 1, or steps 1 and 2, would be depressingly pointless, but steps 3 and 4 make this the equivalent of sending someone a let-me-google-that-for-you kind of link, does it not?

            Caught out like this, I imagine many people will kind of get the fact that you'd rather have their direct input.

            (Or just get mad at you, but that's fair I guess)

          • ua709 2 hours ago ago

            I wonder if that even works. Kinda like when kids play telephone, I think it’s unlikely the input and output sentences actually match.

        • erentz 5 hours ago ago

          But now even this is just producing more information, and it requires more work both from you and from the original sender.

        • throw310822 27 minutes ago ago

          > and then I ask them for their original prompt.

          Original prompt: "Please rewrite this information in a nice format for my insufferable asshole colleague".

    • Avicebron 7 hours ago ago

      My default is that I won't copy and paste anything AI-generated in communications. I kind of think that's the line. Use whatever you want in the background, but I want to communicate with the synthesis of your thoughts.

      I think this is a reasonable standard to hold; otherwise, like many before have said... send me the prompt. It's actually more interesting/better if I know a coworker is struggling to communicate about something.

      • threecheese 7 hours ago ago

        I follow the same strategy, but loosely - I need those emdashes to signal that I’m using the tools.

        • rdtsc 5 hours ago ago

          That’s my latest joke — that we’ll have to pretend like we used the tools so they can feel validated they’ve spent all this money on hyped up technology. So, yes, it’s em-dashes and “it’s not just this, it’s that …” so they can hopefully leave us alone.

          • xp84 4 hours ago ago

            I remember feeling embarrassed one time that I used a very early GPT thing to help organize perf reviews for employees from the various bullet points I had written for each (I had a lot of direct reports). But in current world, I assume I’d be praised for doing so.

      • jrumbut 2 hours ago ago

        My typical practice is to write a reply using my own brain and whatever practices are called for, then attach any interesting chatbot responses that were generated as documents.

        So there's a clear separation, a reply from me which I stand by and then some interesting chatbot stuff if you're into that.

      • pinkgolem 5 hours ago ago

        I mean, I struggle with spelling/wording and ask the LLM to proofread a lot.

        I often send out the LLM version, but still check that it contains the original thoughts correctly.

        It's not a bad way to extend your vocabulary & catch spelling mistakes.

        • mkl 3 hours ago ago

          You don't need a fake extended vocabulary. Just communicate directly and honestly. Underlining spelling errors as you type has been a standard feature of email software for nearly three decades.

        • stingraycharles 5 hours ago ago

          > I often send out the LLM version, but still check if it contains the original thoughts correctly.

          Please don’t do this. You probably aren’t aware of how badly this can land. It’s not just about containing your original thoughts, it’s about the verbosity, repetitiveness, and absurdity of it all.

          Grammarly is a much better tool for these kinds of purposes, and it actually guides and teaches you to improve your writing along the way.

          • __mharrison__ an hour ago ago

            The irony that this response has a very common LLMism...

          • adastra22 4 hours ago ago

            Grammarly the honeypot?

            • stingraycharles 3 hours ago ago

              You seem to be referring to something specific I’m not aware of, could you elaborate?

              A Google search didn’t reveal anything specific other than them using famous author names for expert review.

              • adastra22 2 hours ago ago

                It's the nature of the product itself. It's keylogger software. That's literally what it does -- it takes every input on your computer and routes it to their servers.

                • stingraycharles an hour ago ago

                  Right, I was just confused by your use of the word honeypot.

                  “Keylogger mode” is optional, and to my understanding you always see a visual indicator in the text area.

                  It doesn’t take every input as far as I know, and security firms don’t consider it a threat.

                  But point taken: it’s not for people with privacy concerns.

          • anon84873628 an hour ago ago

            Verbosity and repetitiveness? Which tools are you using?

            Tell it that you want a succinct professional email and it will do that. Give it examples of your own writing and it will match that style. If there's something you don't like, tell it to rewrite the part differently.

            These are literally the things language models are best at.

            • stingraycharles an hour ago ago

              > Tell it that you want a succinct professional email and it will do that. Give it examples of your own writing and it will match that style.

              This is not what the parent I replied to indicated, nor what people usually do.

    • nlawalker 6 hours ago ago

      You have to call it out when you see it, politely and charitably.

      "Hey, thanks! This is a great overview, and I actually asked ChatGPT before asking here and got a lot of the same information, but what I'm really looking for is..."

      • stingraycharles 5 hours ago ago

        This is what I do, slightly more explicitly saying “just be the real you”. About 50% of colleagues take it well. The other 50% don’t understand the problem, and don’t understand when (and when not) to use AI.

        They are at high risk.

        Employees using ChatGPT to renegotiate their salary are showing a serious lack of cognitive awareness.

      • vasco an hour ago ago

        "If I wanted to receive copy paste from a bot I wouldn't message you, why are you trying to sneak this in?"

        You reminded me of American colleagues who lie and say things are good when they are bad, lol. Unable to be straight to the point. You're upset at the waste of time, yet you thank them?

        • zeroonetwothree 32 minutes ago ago

          Just curious: would you consider yourself autistic?

    • andai 2 hours ago ago

      Well no, you're supposed to copy-paste it into ChatGPT, ask for executive summary, and recover an approximation of the original input. Duh :)

    • Aurornis 8 hours ago ago

      > She genuinely seemed to think she's helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

      This is the root frustration spreading across workplaces everywhere. Before AI, the only way for someone to generate a design document, Jira ticket, or pull request was to invest a lot of their own time and effort into producing what you saw.

      LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible. You have to actually read and process the work, which takes 100 times more effort than it took them to make it.

      For people in the working world who saw the workplace as a game of min-maxing their effort against the appearance of being a valuable contributor, LLMs are the perfect shortcut: They can now generate the appearance of doing a lot of work with no more than a few lines of asking an LLM to produce documents.

      If anyone spends the 30 minutes to review the AI slop from their 15-second prompt, they'll copy your feedback into ChatGPT and send another document over with the fixes. Now they've even captured you into doing their work for them!

      For teams or even entire companies that were relying on appearances of activity as a proxy for contributions, this is going to be a difficult transition. Every e-mail-job worker in the world just received a tool that will generate the appearance of doing their job for them and even possibly be plausibly correct most of the time. One person can generate volumes of design documents and Jira tickets, and even copy and paste witty responses into the company Slack, and appear to be the most engaged and dedicated employee by volume while doing less actual work than ever before.

      I think teams that already had good review cultures with managers who cared about the output rather than the metrics are doing fine because anyone even a little bit engaged can spot the AI copy-and-paste employees with even a little inspection. The lazy managers who relied on skimming documents and plotting number of PRs or lines of code changed are in for a rude awakening when they discover the employees dominating their little games are the ones doing the most damage to the team.

      • ceejayoz 7 hours ago ago

        > LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible.

        Oh, we know. It's pretty clear in many cases.

        • Terr_ 5 hours ago ago

          Perhaps a less-brittle version would be to replace "we don't know X" with "we can't easily prove X to the extent needed to deter it."

        • 2wdfsd 6 hours ago ago

          lol yeah... it's obvious as hell.

          And frankly the best signal now is: the shorter it is, the greater the likelihood it was at least expensive for the human to produce. Said another way, a shorter thing is easier to make sense of completely, and if it's garbage, it's garbage. At least the cost borne by you was minimised!

      • xp84 4 hours ago ago

        Insightful take.

        What’s funny to me is your last paragraph. A lot of companies are so gung-ho about “AI ALL the things!” that I’m not sure as a manager if I’d get in trouble for “spotting the AI copy paste” junk. I’m supposed to make sure everyone is using AI as much as possible, after all. So, rejecting someone’s output for being low-effort AI slop and asking for a “less AI” version of it might mark me as a silly old fashioned guy who doesn’t believe in AI.

        • anon84873628 an hour ago ago

          Why not coach people to use the AI correctly and keep rewriting until it is at the correct length and level of detail? This whole thread is full of people talking as if you can only one-shot these things, or as if they are incapable of being succinct.

      • alexandre_m 7 hours ago ago

        > This is the root frustration spreading across workplaces everywhere. Before AI, the only way for someone to generate a design document, Jira ticket, or pull request was to invest a lot of their own time and effort into producing what you saw.

        That’s not really the point. Engineering has always operated on trust networks, not just artifacts.

        Your review naturally adapts based on the level of trust you have in the author. If someone has consistently produced high-quality work, whether they used AI or not becomes mostly irrelevant.

    • gumby271 9 hours ago ago

      I've run into a similar thing where I'll be cc'd on support tickets with one of our customer support agents, and they'll then reply to me with what is clearly an AI summary of the single email from the customer that I can already read. I do think they're trying to be helpful, but it's hard not to feel like they think I'm a child or an idiot. Back in the day we agreed that Googling something for someone was rude (letmegooglethatforyou.com being a good example); I don't know why AI summaries and slop aren't understood in the same way.

      • asib 8 hours ago ago

        That’s not the intent of letmegooglethatforyou. It’s a pointed way of telling the recipient they should do the bare minimum research on their own before asking someone else for help. It’s not about being angry that someone told you something they found from a cursory Google search.

        • notatoad 5 hours ago ago

          You’re right, but lmgtfy links are incredibly similar in tone to sending somebody AI output.

          Lmgtfy was a passive-aggressive (but not really passive) way to say “hey, are you too dumb to Google this?”. Sending somebody AI output feels the same to me: the message you’re sending to the recipient is “here, you’re obviously too dumb to ask an LLM about this yourself”. Except some people don’t seem to realize that’s the message they’re sending.

      • furyofantares 7 hours ago ago

        letmegooglethatforyou.com was to let someone know that not searching for themselves is rude; it was not because it was rude to search for someone else (it wasn't, and isn't).

    • figassis 7 hours ago ago

      And it’s too soon to have these norms. Employers today are willing to part with them at the hint of the slimmest efficiency gains, so you’ll waste your time. I think the correct response today is to wait for things to settle. Norms will form on their own.

    • scruple 5 hours ago ago

      Yeah I write prompts asking it to misspell a few words, break a few grammar rules, forget to capitalize once in a while, miss some punctuation once in a while. No one will ever catch on.

    • Forgeties79 7 hours ago ago

      My current bar is “if you know I’m expecting to hear from a person don’t paste unedited ChatGPT outputs and hit send.” Everybody wants to send out the efforts of their corner-cutting, but nobody wants to receive them.

      Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.

      • notatoad 5 hours ago ago

        I’d go a step further and say there is never a good reason to share unedited AI output.

        The closest acceptable thing to share is the full chat, including your prompts. If the output is useful enough to share, then the human thought process that led to the AI output is almost always more useful than the output itself.

      • bandrami 7 hours ago ago

        The asymmetry is that lots of people want to use LLMs to produce things, and nobody wants to consume the things LLMs produce.

        The Nash equilibrium here is that the market has to find a way for the people producing things with LLMs to pay people to consume them, and the market always finds a way.

        • 2wdfsd 6 hours ago ago

          Not quite. Ultimately the lion's share of model producers' income comes from firms.

          Firms will only keep paying model producers if they get returns in excess of the cost of financing projects over time. If a firm does not see this happen, it reduces its spend on tokens. Simple.

          It's a whole lot more nuanced than some shitty game theory.

          • somewhatgoated 3 hours ago ago

            “Firms are only making perfectly rational decisions that result in meaningful real outcomes” - not my experience.

            Firms waste literally billions on some bullshit that gets them nothing.

        • Forgeties79 6 hours ago ago

          >the market always finds a way

          That may be the case, but every day LLMs feel less like the next big thing and more like 3D printing. Here to stay, but not nearly as ubiquitous and earth-shattering as people made it out to be.

          If I had to guess right now, I would say LLMs are more significant than 3D printers, but less significant than the Internet.

          • bandrami 3 hours ago ago

            I've thought the 3D-printing analogy was pretty apt for about a year now. It had a lot of promise at first, but it never quite had the impact people thought it would. There are still 3D printers for sale, and people still prototype with them, but nobody's printing out a dustpan when they need one.

            • anon84873628 an hour ago ago

              Um, have you heard about the drone warfare in Ukraine?

          • trollbridge 6 hours ago ago

            I'd say that's a pretty accurate analysis. Something that is easily generated by an LLM obviously has low value and there is no moat.

            Agentic coding is a bit different, particularly if a great deal of effort and intelligence goes into it, but that's a quite different thing than just cranking out slop apps.

            • Forgeties79 3 hours ago ago

              Yeah, there is no doubt that some companies are going to radically change their operations because of agentic coding in particular. But the revolution that is being promised, and the investment that has gone along with it, is going to smash against some pretty nasty shoals of reality sooner rather than later.

              • bandrami 3 hours ago ago

                Some are going to radically change their operations, but we have yet to actually see if the ROI on that comes through for them. It will be an interesting thing to watch.

      • jimbokun 3 hours ago ago

        A lot of the time I will just say “Gemini/Claude is telling me…”, just like I would for a Google search result. It's sometimes helpful to use the common wisdom embedded in the LLMs as a starting point for the discussion.

        • somewhatgoated 3 hours ago ago

          As soon as I read this phrase my eyes glaze over and I skip everything that comes after it.

          If I want the LLM answer I freaking ask it myself

          • 878654Tom an hour ago ago

            Indeed, am I talking to a person or to a proxy-prompt?

    • analog31 4 hours ago ago

      In an ideal workplace, one could sit down with the colleague and have her experience untangling the slop, perhaps by a process akin to pair programming.

      Sometimes I wonder if we're letting people graduate from school with no real grasp of the purpose of written communication. School strips writing of purpose, and creates artificial purposes such as using AI to combine words in order for AI to assign it a good grade. Even before the AI era, most human generated text was not worth reading.

    • Mars008 4 hours ago ago

      I've seen a manager in meetings obviously reading Copilot's suggestions off as his own thoughts.

      • adastra22 4 hours ago ago

        copilot?!

        • fg137 3 hours ago ago

          Microsoft Copilot is used as the default "general" AI tool at most companies you have heard of.

    • otabdeveloper4 2 hours ago ago

      You can use an LLM to fix spelling and grammar errors. You don't need to generate slop. (Cloud providers sell LLMs as "robot information workers" when they're actually "calculators for text".)

    • stavros 8 hours ago ago

      Well, sure, it's very new. Soon we'll adapt and it'll be just another tool we're using.

  • menloshark 7 hours ago ago

    Here's how things play out: Zuck gets some idea, he's surrounded by a bunch of yes-men who say "yes, this will definitely change the world", and then it turns into this optics game of kissing the ring. You ask yourself "how could they blow $80B on the Metaverse like that?" This is how.

    DON'T JOIN META, no matter how fast the recruiters reply to your messages. No matter how cool the work sounds (the managers lie in team matching). There's a reason why the average tenure is <2 years.

    It's a toxic, fear-based culture. You join, and the people around you are already thinking about how to scapegoat you. People gatekeep actual work and save it for political favorites, and everyone else on the outside is stuck cooking up bullshit projects. If you do manage to find work on your own, people will immediately start scheming to steal it.

    • zmmmmm 5 hours ago ago

      It is hard to judge culture during a period of serial downsizing because it will always be toxic in that context. But what you describe aligns with what I have inferred over many years of observing, even during times when they were growing: at a high level, Zuck gives the right signals of a successful tech CEO. He's smart, insightful, talks well (now), and appears decisive and willing to back long-term bets into the future. And he makes money like crazy.

      But looking at the track record, there's a very concerning lack of execution around critical strategic objectives. Take the metaverse: I know most people laugh at it because they think it was a bad idea to start with. I push that aside and look at the execution. They poured a startling amount of money into it, and the end result, technically, sucks. This is not good execution of a bad idea. This is incompetent execution of an untested idea. After 5 years of huge investment, the characters in Horizon Worlds still look like cartoons. All the advertised features of hyper-realistic worlds, generative world building, etc. failed to materialise. They made a face-saving pivot to mobile where they claim it is successful, but I have literally never heard of anyone using it. I suspect it is entirely synthetic traffic driven from their existing properties.

      Then you can look at AI. You can say the jury is still out on their AI reboot, but it has been out a long time now, and it seems like at best they are just edging toward parity with the leading AI labs. But I think that's being generous, because so little has been released. What is certain is that they went from a leading position right up to 2022-2023 to falling completely off the radar, despite still holding the undisputed leading AI framework in PyTorch.

      I have to conclude there's a genuine culture and execution problem that probably centers on the fact that Zuck is simply not a good people manager. And his relationship with the next level down (Andrew Bosworth etc) is such that he doesn't enable them to be either. And this all permeates through to an organization that delivers at a fraction of what it should given the resources it is expending.

      • Animats 3 hours ago ago

        The low execution quality of Meta's metaverse effort surprised me, too.

        But they wanted it to run on their relatively weak headgear. A good metaverse needs a decent gamer PC, a serious GPU, and a few hundred megabits per second of Internet bandwidth. (I've written a Second Life client in Rust, so I'm very aware of the system requirements.) Facebook needs to serve a user base which is mostly phones and people with weak PCs. Not Steam users.

        If you have to squeeze it onto underpowered hardware, you get something like Decentraland or R2 or Horizon: low-res, very limited detail, small contained areas. Roblox has made some progress on this problem, but it took them two decades, even with a lot of money.

        The real problem with metaverses is that a big, realistic virtual world is a technical achievement, but not particularly fun. It's a world in which you can spend time and meet people, but the world is not a game. It has no plot or agenda. This throws many new Second Life users. They find themselves in a virtual world the size of Los Angeles, with thousands of options, and are totally lost. It's not passive entertainment. As Ted Turner (CNN, TBS, etc.) used to say, "the great thing about television is that it's so passive."

        • duskwuff 3 hours ago ago

          I think the problem goes beyond that. Meta never had a particularly coherent story for what "Horizon Worlds" was supposed to be to users - it was variously pitched as an online conference room, a social hangout, a way to explore 3D models, a video game... it felt as if they were throwing ideas at the wall to see what stuck, and nothing really did.

          • zmmmmm 2 hours ago ago

            Ultimately yes, that was the issue. In theory they built a viable product, even if it was still cartoonish etc. But it was enough to see that even if it were perfected, there simply wasn't a killer app for what to actually do in there. The vast majority of the worlds that got any traction were just kids' playgrounds with silly or trivial games. Some of them were quite fun. But none of them represented a serious value proposition to anybody with actual money.

            The crazy thing is, they built a half-decent app called Horizon Workrooms. You could go in there with colleagues and co-work. With so many people WFH, it was actually useful to be able to share a room with your colleagues: anybody could throw up a shared screen on the projector, while having your own display in front of you that nobody else could see. I did this with folks from my team and it became a regular Friday-afternoon thing for us all to hang out. This was actually useful. But they managed to screw it up and eventually canceled it as well.

          • Animats 2 hours ago ago

            That's what metaverses are like - big spaces in which users can do things. What to do is largely up to the users.

      • ffsm8 4 hours ago ago

        He is the owner though.

        If Zuck wanted, he could solve it. Decimate middle management, downsize at the level of what Musk did to Twitter, and then _slowly rebuild_ in order to pay attention to the culture this time, removing anyone who takes part in such behavior...

        The company would be worth more (because of the smaller headcount) and would likely even ship more, because the culture would be better. I've never worked at Facebook though; I'm just an armchair analyst being judgemental from reading some comments.

        • zmmmmm an hour ago ago

          Interesting wording, because he's not the owner. What he owns is enough voting rights that nobody can challenge his decisions.

          And also interesting in the sense that, this is what he claimed to actually do a few years ago. He had a "year of efficiency" where he significantly flattened and restructured the org, losing tens of thousands of staff. At that time I even defended him precisely due to this reasoning - if execution is failing you need a reboot. Well he did the reboot and it is still failing.

      • marcosdumay 4 hours ago ago

        > This is incompetent execution of an untested idea.

        VR will be huge some day. Maybe not as huge as the Metaverse hype, but huge nonetheless.

        But did you expect Facebook to have any competence on making it? Even if the timing was correct, what differentiator do they have?

        And then the CEO throws a world-changing amount of money without even an idea (because "a VR world!" isn't an idea). Did you expect any of that money not to be wasted? That's not how products are made.

        The Metaverse wasn't an organization failure. It was all Zuckerberg's incompetence, Facebook didn't even get the chance to try.

        The AI effort started out different, but it's becoming the same thing again.

        • somewhatgoated 3 hours ago ago

          VR won’t be huge someday. We won’t live to see it at least. We also won’t experience quantum computing having a real world impact. We also won’t see humanoid robots doing any meaningful real world work. There also won’t be a Mars base in our lifetime or datacenters in space or underwater. There won’t be any flying cars either.

        • Eufrat 3 hours ago ago

          > VR will be huge some day. Maybe not as huge as the Metaverse hype, but huge nonetheless.

          I really doubt this. There are too many people who suffer from motion sickness for this to pay off. 33% of the population suffers from motion sickness to varying degrees, and current mitigations, including blowing a fan at suffering users, are an unrealistic barrier to casual usage.

          • zmmmmm 2 hours ago ago

            I think the key is, about half of that 33% can tolerate certain elements of it (stationary experiences etc), another slice suffers in a way that will be resolvable or at least somewhat mitigated by technology improvements, and another slice will accommodate it if exposed early enough.

            Put it all together and you're probably talking more like a 10% residual. It is still a lot, but I think it's just bearable enough not to be a death blow to mainstream use.

        • HWR_14 3 hours ago ago

          > Even if the timing was correct, what differentiator do they have?

          Being willing to put $80 billion on the line is a differentiator. It can subsidize hardware, hire talent, acquire companies, etc.

          There were definitely ideas beyond just "VR good". But frankly, giving some of the high-level employees he had (Bosworth, Luckey and Carmack among others) $10 billion each to make VR products they think should exist is something that would probably have worked

        • intended 8 minutes ago ago

          No.

          VR is not going to be huge, and it misses the entire point of tech.

          Think of something like a Bloomberg terminal. Ugly as sin, and incomprehensible to anyone who hasn't practiced using it. It also gets work done faster, and has a keyboard with multiple keys to get to menus faster.

          BB terminals save calories. VR does not.

          VR is cool, it is aspirational, but it is not saving experts, let alone the average person, time and energy.

    • giancarlostoro 2 hours ago ago

      > DON'T JOIN META, no matter how fast the recruiters reply to your messages. No matter how cool the work sounds (the managers lie in team matching). There's a reason why the average tenure is <2 years.

      I would be surprised if I even got through the interview hellscape that these companies put people through. I'm not interested in talking about algorithms and things that no dev in my entire decade+ in the industry ever talks about, ever. To make matters worse, the things you actually should screen developers for, nobody seems to test, except exceptional shops that care about quality (ironically enough!). The only thing the algo questions do is push out "older" candidates who may not remember every little nuance anymore, because... they don't have to hand-craft algorithms; every language worth its salt has built-in sorting or lambdas (thinking of C#) to make it effortless.

      • omgitspavel 39 minutes ago ago

        A decade+ is plenty of time to spend a few weeks brushing up on CS basics. There is really only a handful of algorithms and data structures and none of them are rocket science.

        And what's the alternative? Quizzing people on some random C# framework methods? The "I don't use algos in a day to day job" argument has been around forever, but nobody making it ever proposes a better filter.

    • mathgladiator 7 hours ago ago

      OR join meta, sell your soul, stay for 7 years, then retire and be done with work forever!!!

      • jimbokun 3 hours ago ago

        Will they still be offering enough compensation over the next 7 years for that to be true?

        Not sure their stock price will continue to rise as it has in the past.

      • whateveracct 5 hours ago ago

        7 years at a toxic workplace is tough

        • somewhatgoated 3 hours ago ago

          I’d rather be poor

          • voidfunc 3 hours ago ago

            I'd rather not.

            I've never known poverty in my life and I will do _anything_ to avoid it.

      • teaearlgraycold 5 hours ago ago

        I 100% understand the appeal of freedom from external pressures that retirement offers. But at the same time all the (many) people I know that retired early mostly just goof off and struggle to complete any of their many projects. And don’t get me wrong, I love goofing off. Been doing plenty of it. But given my inevitable death I have to appreciate a little external pressure forcing me to do good work.

        • toast0 5 hours ago ago

          > But at the same time all the (many) people I know that retired early mostly just goof off and struggle to complete any of their many projects.

          I retired early and ended up going back to work part time. I didn't complete many of my projects, but that's not why I went back. Most of my projects were things I wanted to play with, not things I expected to finish.

          Working part time is nice because of the external pressure, but really, most of the pressure is because I'll feel bad if I disappoint the people that are letting me work with them.

          I don't feel bad if I don't get my personal projects done, because nobody is going to use them anyway.

          • Seattle3503 an hour ago ago

            Are you a dev? What does part time look like?

          • teaearlgraycold 4 hours ago ago

            I have picked up a project that helps out a nonprofit and it’s making a nice financial impact. And then there are artistic projects that I hope positively impact others.

        • jimbokun 3 hours ago ago

          What corporations even offer “good” work any more? In the sense of not making the world a net shittier place.

          • teaearlgraycold 3 hours ago ago

            I don’t know what your values are but I’m sure you can find some company that is at least morally neutral in its mission. However you might have to accept lower pay.

            But to clarify I meant “work you can be proud of” when I said “good”.

        • xp84 5 hours ago ago

          If I were fortunate enough to be in that position, I think I’d partner up with a buddy to build something cool (that is unlikely to be a big moneymaker) and rely on each other for that pressure.

        • somewhatgoated 3 hours ago ago

          Idk external pressure is mostly forcing me to participate in the corporate hellscape - would love to leave this and goof off as a goat farmer somewhere.

          Let’s face it - most businesses don’t produce anything meaningful and just exist to realise the infinite growth fallacy of capitalism

        • Mars008 5 hours ago ago

          > and struggle to complete any of their many projects

          Hmm.. I don't struggle, I enjoy it. The goal isn't to start glossy product production. It's to learn how to do it. As soon as that's obvious, the project is usually shelved. Except for the 'main line' projects which together can result in something significant.

      • menloshark 7 hours ago ago

        Maybe if you joined 10 years ago lmao

    • kraf 2 hours ago ago

      They create toxic products that make the entire world more toxic. How they still manage to not have any responsibility while being editors and publishers is beyond me. I couldn't imagine how their insides wouldn't be toxic as well. Nice people don't do this.

    • dlandis 6 hours ago ago

      > People gatekeep actual work and save it for political favorites and everyone else on the outside is stuck cooking up bullshit projects. If you do manage to find work on your own, people will immediately start scheming to steal it

      So this applies to even, say, mid-level developers? Wouldn't you get work assigned to you after you're hired, or do you actually have to hunt for your own projects, like you might in some consulting firms?

      • menloshark 5 hours ago ago

        > or do you actually have to hunt for your own projects, like you might in some consulting firms?

        This is how the company works on a fundamental level.

        On healthy teams, having something assigned to you (for levels under staff/6) is normal. On unhealthy teams, you're just a sitting duck and it's better to find your own work. Or else you'll be forced to work on bullshit projects with no upside.

        Side note: the "they" who does the assigning is not a manager, it's another IC. The ones that go out and find their own work. That could be at any level technically, but usually staff+ because they form little political mafias.

    • voidfunc 3 hours ago ago

      I'm joining Meta for the total comp, not because I give a shit about the company or products. Same as every company.

      • menloshark 3 hours ago ago

        The total comp is a lie because the average tenure is <2 years, statistically speaking you won't get the full 4yr initial grant by the time you leave.

        Just one suggestion: don't stop interviewing and be very observant of whatever team you land in, be ready to jump ship if there are too many red flags. Also don't trust any of the managers. Don't take anything people say at face value. Be very discerning in team matching, where you land determines everything.

        You might be thinking "oh if I just work 7 days a week, I'll be safe". That's not true, it's all about where you land.

        • torton 2 hours ago ago

          > Also don't trust any of the managers. Don't take anything people say at face value.

          "Did you enjoy Game of Thrones? You'll love working here!"

      • BobbyJo 3 hours ago ago

        OP was saying not to join because you'll have a shitty time, not because the products aren't inspirational enough.

        • voidfunc 3 hours ago ago

          I'll join purely for the comp. I can take a lot of abuse, trust me.

    • udswagz 2 hours ago ago

      100% true, absolutely nailed it

    • gerdesj 7 hours ago ago

      Is this supposition or first hand experience?

      • menloshark 7 hours ago ago

        The latter. I've seen so much unethical shit here, I'd love to give more detail but I'd probably dox myself

        • janussunaj 6 hours ago ago

          Don't let it break you. Take whatever money you made and run.

          The rest of big tech isn't much better. Big G is less stressful, but you'll see vicious and cringey behavior left and right. Hyped large startups are cults and 100% cringe. Meta is kind of the worst of both worlds though. "But they pay so well". Yeah, also: life is short.

    • jimbokun 4 hours ago ago

      This certainly fits with everything in the article.

    • pfannkuchen 4 hours ago ago

      > There's a reason why the average tenure is <2 years.

      Companies that hire a lot or hired a lot recently always have this. The 3 month people drag down the average. It isn’t necessarily due to turnover.

      Not disagreeing with the overall point, I’ve just seen people say this same thing about a lot of companies and it doesn’t always mean something.

  • DragonStrength 7 hours ago ago

    Well, yeah, management sees a weak labor market and imagines the ability to fire all those troublesome engineers. Remember, especially in recent years, tech management is made up predominantly of grads from a select set of "elite" universities, whose caliber is determined mostly by how rich the parents are. It's no surprise we're in a moment of extreme labor disdain. The idea engineers with years of education are as fungible as manual labor has been tried again and again with the same results. LLMs won't change that.

    • whyenot 7 hours ago ago

      > It's no surprise we're in a moment of extreme labor disdain.

      So sad to think that a generation or two ago, everyone wanted to emulate the HP Way. Now all of that is gone and unless you are a superstar, you're just a commodity to be managed, and extinguished when the time comes.

      • _doctor_love 7 hours ago ago

        Sorry, going to have to disagree with you there friend. It is not the case that everyone wanted to emulate the HP Way. The HP Way represented the best of Silicon Valley thinking, and if you read the book, you will see that even those guys were an outlier.

        I remember that there is a passage in the book where the HP guys go and meet with other leaders of American corporations, and most of them felt that they did not have any kind of obligation back to society. I am a huge fan of the HP Way, but they were unusual, and not the norm.

    • jimbokun 3 hours ago ago

      That and the large technology companies don’t really have many ideas for new software or features that will make them more money. Can only increase profits by reducing costs.

  • loeg 8 hours ago ago

    Mark hates leakers, so it is kind of intensely funny that the NYT seems to have a direct line to probably dozens of ICs. Ultimately, it's hard to keep secrets shared with 70,000 employees.

    • asveikau 8 hours ago ago

      Years ago when following what Zuckerberg did occupied more space in my brain, it struck me that he can "hate leakers" but not look inward and change his behavior in a way that doesn't upset people and make them want to leak. He is a very reactionary guy, and not a "how can I be the change" or "what did I do to cause this" kind of guy.

      I thought of this during his various scandals at the end of the 2010s. Everything was a PR reaction for him, rather than looking inward. The best PR is not being an asshole. I wonder if he's thought about it.

      • loeg 8 hours ago ago

        I don't think there's any possible way to behave to satisfy every single employee of tens of thousands.

        • asveikau 8 hours ago ago

          Can't please every human alive, so I might as well not try to do any better. This is a very Zuckerbergian take.

          • loeg 7 hours ago ago

            Do you want to prevent leaks, or achieve some other goal? If you want to prevent leaks, this isn't an effective approach. If you want to achieve a different goal, that's fine, too, but orthogonal to the stated goal. For leaks, it's probably better to just restrict communications to the necessary distribution and understand that anything widely distributed is more or less public.

            • Henchman21 7 hours ago ago

              You can prevent all the leaks by having nothing bad to leak. That is what is being suggested.

              • loeg 7 hours ago ago

                I don't think leaking is limited to "bad" things.

                • noisy_boy a few seconds ago ago

                  Leaking of good things is just good "PR" - people even pay good money for that to "accidentally" happen. Wonder why nobody thinks about actually doing the good thing and not bother with the rest.

              • SpicyLemonZest 5 hours ago ago

                I get why people have this idea, but it doesn't work. A culture of "don't have anything bad to leak" quickly and inevitably leads to "keep your mouth shut so there's nothing bad to leak".

          • alex1138 7 hours ago ago

            He'd please a lot more people if the feed hadn't been filled with crap - out of order, at that - that nobody ever subscribed to in the first place, causing them to miss actual posts from friends (whatever you think of 'social media', his website is fucking broken) for YEARS AND YEARS AND YEARS

            • dingaling an hour ago ago

              FB works perfectly well for me.

              80% of posts in my FB feed are groups or people to which I've subscribed or followed.

              10% are interesting things it suggests outside that core, which I then follow.

              10% are suggestions that I don't find interesting and which I mark as such.

            • senordevnyc 7 hours ago ago

              It is wild to me to have this much visceral hatred of another human because they made a change to their website that you don’t like.

              • CamperBob2 6 hours ago ago

                If you go out of your way to get people addicted to your site, you don't get to complain when they take your rug-pull a little too personally.

                • senordevnyc 5 hours ago ago

                  lol, I don’t think Zuckerberg gives two shits about people complaining.

              • _DeadFred_ 6 hours ago ago

                If that one change requires every user to click on their friends' profiles to see updates, then across 2.1 billion daily users, with say 4 family/friend profiles checked and 1 second per click, that is 291,000 8-hour workdays lost per day to humanity. Around 100,000,000 work days per year that humanity loses out on putting to productive use. And I am REALLY underestimating the time lost to this. Facebook is stealing, low end, 100 million days of productivity a year from humanity on this one thing.

                Or another way, 850,000,000 hours. It took 5-15 billion human hours of work to go to the moon. They steal 1 moon program worth of human time from humanity every 6 or so years. At the scales they operate, we need to judge them on that scale. Mark gets paid/rewarded at that scale. He needs to be judged on the same scale. Not on 'the impact per individual'.

                Meta has stolen multiple moon programs from humanity (again I am way under measuring) for that one change in order to increase their billions of dollars.

                https://www.quora.com/How-many-man-hours-went-into-the-Apoll...
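                The back-of-envelope figures above can be sanity-checked with a quick sketch (all inputs are the comment's own assumptions - 2.1 billion daily users, 4 profile checks, 1 second per click - not measured data):

```python
# Sanity check of the comment's time-lost arithmetic.
# Every input below is the comment's assumption, not a measured figure.
daily_users = 2_100_000_000    # claimed daily users
checks_per_day = 4             # friend/family profiles checked per user per day
seconds_per_check = 1          # one extra click each

seconds_per_day = daily_users * checks_per_day * seconds_per_check
hours_per_day = seconds_per_day / 3600
workdays_per_day = hours_per_day / 8        # 8-hour workdays
workdays_per_year = workdays_per_day * 365
hours_per_year = hours_per_day * 365

print(round(workdays_per_day))    # ~292,000 workdays lost per day
print(round(workdays_per_year))   # ~106 million workdays per year
print(round(hours_per_year))      # ~852 million hours per year
```

                Rounded down, these match the comment's "291,000 workdays a day", "around 100,000,000 work days per year", and "850,000,000 hours" figures.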

                • senordevnyc 5 hours ago ago

                  How can you steal time from humanity when they freely chose to use your product? You don’t owe people a perfect product that doesn’t “waste” their time when compared to some arbitrary standard of how it should be.

                  • dag100 3 hours ago ago

                    Your argument is effectively saying "how can lowering the quality of my product affect customers when they freely use it?".

                    If you use Facebook regularly, you are locked into it because unless you manage to convince your entire friend network to move to some other social media with you, you will have to "leave them behind".

                  • ceejayoz 5 hours ago ago

                    > How can you steal time from humanity when they freely chose to use your product?

                    By employing psychologists who figure out how to make it addictive?

                • alex1138 6 hours ago ago

                  It's worse than that, people have reported that even going to someone's page, FB determines the order posted. Also, psychological experiments FB has done; also, it's kind of the definition of addiction, because FB, in the beginning, when you first friend someone, shows posts. These then subsequently drop off. You can post something, assuming it's been read. It might show up for nobody.

                  It's been said before that it's interesting that Zuckerberg, who made a social site, is pretty introverted. It's because he stole it and he's always been stealing things. He did it to Whatsapp. He copied Snapchat multiple times. He thinks people are "dumb fucks" rather than "look, people shouldn't give info away, but now that I have it I'll do everything I can to keep it secure" (I DON'T like Google but my understanding is they have far fewer data problems). That's the mark of a certain kind of person which I'll, I suppose, not name. It's insulting to the web, what he does

          • j-bos 7 hours ago ago

            > This is a very Zuckerbergian take.

            No, it's just a common fallacy. If you don't like the guy, isn't "zuckerbergian" an example of helping him live rent free in people's heads?

            • asveikau 6 hours ago ago

              I'm actually not kidding when I say that Zuckerberg likes that particular fallacy a lot and I've seen him use it. You're right that it's not at all exclusive to him.

      • georgemcbay 7 hours ago ago

        > The best PR is not being an asshole. I wonder if he's thought about it.

        There are a lot of people in the world who lack basic human empathy to such an extent that it is nearly impossible for them to just not be an asshole.

        I don't know for sure if this applies to Mark Zuckerberg but based on all the second-hand anecdotal information I've heard about him "empathy" as he understands it is a product branding feature rather than a human emotion.

        • cybercatgurrl 3 hours ago ago

          hard to do anything about when it’s in your genetics. it’s a form of neurodivergence just like any other. and to deny it is just furthering the stigma against people with high cognitive empathy and low affective empathy

      • cheschire 7 hours ago ago

        Jesse Eisenberg captured this perfectly.

      • giancarlostoro 2 hours ago ago

        He probably has the same thing as Elon Musk, Asperger's, to be honest. Eh, I just looked it up, and apparently he does. Come to think of it, maybe Steve Jobs as well; he was insanely eccentric.

    • hibikir 5 hours ago ago

      Given Facebook's current size, and how many people are relatively disgruntled but work there because the pay is quite good, the chance of leaks for wide comms approaches 100%. The level of internal, upward trust you need to have few leaks left Facebook at least 10 years ago.

      No amount of hate will fix it, and no amount of tracking will hide all but the most hidden secrets, so he better get over it. In his situation, hating leakers is like Garfield hating Mondays.

    • jimbokun 3 hours ago ago

      Employees seeing wave after wave of their coworkers laid off. You won’t win much loyalty that way.

      This latest one releasing the NUMBER and DATE of the layoffs a month in advance without naming WHO is a whole new level of stupid. Let’s deliberately maximize the level of anxiety in our employees and reduce their trust in us to zero.

      • loeg an hour ago ago

        > This latest one releasing the NUMBER and DATE of the layoffs a month in advance

        This, too, was leaked to the press. Their plan wasn't to announce a month in advance.

  • softwaredoug 9 hours ago ago

    I noticed a lot more joy using AI from people at smaller companies or working by themselves :)

    I say this as someone self-employed who burned almost $1000 on tokens last month. And had a lot of fun doing it.

    • munificent 8 hours ago ago

      No surprise. People like being more productive when they reap some of the benefits of that increased productivity. If you're expected to be 10x more productive but don't get a raise, all you're doing is stuffing money in some executive's pocket while your job security goes down.

      • zmmmmm 5 hours ago ago

        I'm being heavily consulted to advise management on culture change towards AI. And my number one message is this: make the number one, first and potentially only beneficiary of AI use the individual staff members themselves. If they have more time now, DO NOT start filling that with more work for them to do. If they do more all by themselves, accept it as a bonus (experience says this is overwhelmingly what will happen anyway). Whichever way it goes, let them experience directly the benefit, and let the culture change happen organically downstream from that.

        I think all these companies front-loading staff reductions are actively sabotaging themselves in the worst possible way in this regard.

        • uzername 4 hours ago ago

          I would love to hear more about your advice and the coaching you are giving to management. We also have a strong push to prove evidence of climbing productivity, with clearly stated future staffing goals. I would like to advocate for this being, at least partially, an enhancement and quality-of-life improvement for IC folks.

          • zmmmmm 2 hours ago ago

            It starts with the generic pitch around culture change - "culture eats strategy for breakfast" style. Then a bit of shock and awe around how extensively AI is going to redesign business processes in the long run, leading into an argument that it's a marathon, not a sprint: at the moment everyone is treating it like a sprint, and the real winners will be those gearing up for endurance. Then structuring the pathway: personal productivity as a cornerstone, leading into pilots of implementation in areas highly aligned with AI capabilities and minimised risk - all as preparation for the main game, which will ultimately redesign core business processes in an AI-first way.

            I will say I am a bit of an outlier. I see others mostly pitching for things like small teams of "AI Champions" etc. I don't favor this because I think it will lead to dysfunctional outcomes (people trying to make the initiatives fail because they weren't "chosen" etc). So I pitch for the broad based, whole organization journey etc. But it does require a strong argument for acceptance of a slower pace of externally visible adoption.

      • dzhiurgis 8 hours ago ago

        This.

        I’m in a dreadful situation right now. Everyone in the team got a Claude account, but I’m a contractor, so not me (the only dev in a team of 25 consultants). Someone in the team assigned me a task to review a Claude skill that opens up tickets for me. I’m not even using Claude, and the official policy is no AI use for development…

        Otherwise it’s been a mixed bag. The pace definitely picked up, and things that I actually enjoyed doing (UI) it does very well. Things that are actually hard (backend logic) it sucks at, and it has painted me into a corner too many times.

    • Aurornis 8 hours ago ago

      Meta is on the extreme other end of this. The article opens with how they're now using AI to monitor how everyone uses their computers.

      It's still insane to me that Meta thought this would be a good idea, or that employees would be comfortable with it even though they claim it's only used for anonymous AI training.

      • loeg 8 hours ago ago

        > using AI to monitor how everyone uses their computers

        It's the other way around -- they're monitoring the computers to train AI.

        • sterlind 8 hours ago ago

          probably both, to be fair.

          Meta may know that their employees will put up with it, given how depressing the job market is right now, but unhappy, cynical, resentful employees do not produce good software and innovations.

          there's a real financial cost to treating devs like cage-raised livestock.

          • loeg 8 hours ago ago

            It's unclear how you would use LLMs to monitor clicks. Unless you just mean they're authoring the monitoring software with LLM assistance (which is probably right).

            • saratogacx 4 hours ago ago

              The LLM generates context based on what's on the screen and associates it with the action taken by the user. It is less "point in time" and more "charting the flow".

              For example: the page content of a PR with open comments, where the next action is to focus on the first comment. When a new PR with no open comments is shown, the approve/push button is the next action. That starts a reinforcement loop.

        • stasomatic 7 hours ago ago

          Could this be a vector to poison the AI? I am not one for sabotage, just bad karma all in all, but not all are like that, and if one knows their days at ACME are numbered, the sirens start singing.

      • jimbokun 3 hours ago ago

        If they were competently evil they would have just done it quietly.

    • abalashov 9 hours ago ago

      I work by myself and feel no joy in using AI.

      • stavros 8 hours ago ago

        I work by myself and feel great joy. Today I talked to the AI about a feature I want to add to this week's project (https://www.writelucid.cc) and it had some good feedback. Later I refactored a big part of the code to simplify it (though I had to explain to Claude why this was possible), and it came out great.

        I've never been happier, I can now build everything I've been wanting to build, really fast, with very few bugs.

      • echelon 8 hours ago ago

        I work for myself and I absolutely love AI.

        I'm able to get 3x the work done. Greenfield stuff appears almost immediately.

        My job is providing value to customers, not worshipping at the cathedral of software that will last forever. Nothing lasts forever.

        Start treating software as ephemeral. It'll click.

        This doesn't mean write low quality, unmaintainable software. It just means focus on getting stuff to your customer.

        Writing in super typesafe languages with the highest level of strictness helps a lot. My AI stack is Rust and Typescript.

        • Thanemate 43 minutes ago ago

          I tried using it last week to make a simple Yu-Gi-Oh! website that shows decks, lets you rate them, register users, etc., kinda like masterduelmeta.com, and I enjoyed using it, but definitely did not enjoy making it. I didn't feel a sense of ownership or dopamine from nailing the styles just right, or making the cards shimmer when you hover them.

          All jobs can generate income. What led me to follow this one job in particular was the joy of turning nothing into something, and it now feels that the most effective way to do that costs only $99.99/month, and that price needle is only going to move further upwards as capabilities increase.

        • saltyoldman 8 hours ago ago

          This is the right way to look at things now. It might not always have the right track record, but AI-built code is more likely to have all the right permissions in place by default, most likely to copy existing patterns in your codebase, most likely to use the highest-performance patterns, and on top of all that, the spec will match what was asked of it.

          • codemog 8 hours ago ago

            What magical AI are you using? That’s not my experience at all.

            • loeg 7 hours ago ago

              Claude with the 4.7 model is getting pretty good.

            • sterlind 8 hours ago ago

              there is a significant learning curve to using AI well. learning to stay skeptical and keep your brain on, developing an intuition of how much free rein to give it, writing ironclad specs and design docs and keeping them updated, making work easy to inspect, the tone you use talking to it, using one agent to critique another's work, etc.

              basically, AI will produce slop if left unattended. but it's not really its fault.. it's a process failing, like not supervising the interns. using AI the Right Way(tm) is a mental workout, quite a bit slower, but extremely rewarding (ime.)

          • crooked-v 7 hours ago ago

            I can't even get LLMs to reliably use tool calls instead of bash, let alone follow existing patterns in a codebase.

            • echelon 4 hours ago ago

              What do your prompts look like?

              Mine are pretty robust and articulate. I tend to write very lengthy instructions and include snippets of code, file paths, struct names, etc.

    • jimbokun 3 hours ago ago

      What’s your ROI for that $1000?

    • j-bos 9 hours ago ago

      Been feeling that energy too, trying so hard to stay at my current big co job for the health insurance. But the draw is pulling me hard.

      • foota 8 hours ago ago

        I've generally assumed that AI would lower developer compensation, because fewer developers would be required for the same output, but this raises the possibility of it actually increasing if more developers end up doing their own things instead of entering the broader labor market :)

        • loeg 8 hours ago ago

          It could increase compensation by growing the economy. (E.g., perhaps counterintuitively, skilled immigration has this effect.)

        • bdangubic 8 hours ago ago

        the problem is that very few to no SWEs “doing their own thing” will ever make a penny out of it. whatever they do, if it actually gets a little traction, will be cloned and copied in a week by someone else. this whole idea that “we’ll see a 1-person billion dollar startup” is as silly as it gets

    • amelius 6 hours ago ago

      Just wait until Big AI copies those businesses.

  • cadamsdotcom 4 hours ago ago

    Uber burning its whole AI budget in 4 months instead of 12, companies everywhere pressing employees to use AI whether or not it makes sense..

    My cofounder and I get to “only” pay $200/mo to build our product while the hyperscalers, burning tokens like crazy, stave off price rises for people like us - thanks Zuck!

  • bachmeier 11 hours ago ago

    > it is cutting jobs to offset its A.I. spending, saying last month that it would slash 10 percent of its work force.

    > Meta also introduced internal dashboards to track employees’ consumption of “tokens,” a unit of A.I. use that is roughly equivalent to four characters of text, four people said. Some said the dashboards were a pressure tactic to encourage competition with colleagues. That led some employees to make so many A.I. agents that others had to introduce agents to find agents, and agents to rate agents, two people said.

    Maybe the first to be laid off should be the ones that thought it made sense to track token consumption. Goodhart's Law doesn't even apply in this scenario because that's a dumb metric whether or not you're using it to evaluate employees.
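    As an aside, the article's “roughly four characters per token” figure is easy to sketch as a quick estimator. This is only that rule of thumb, not any vendor's actual tokenizer; real tokenizers (BPE, SentencePiece, etc.) vary by model, language, and content:

```python
def approx_tokens(text: str) -> int:
    """Estimate token count with the ~4-characters-per-token
    rule of thumb. Heuristic only; real tokenizers differ."""
    return max(1, round(len(text) / 4))

# A 2,000-character prompt comes out to roughly 500 tokens
# under this rule.
print(approx_tokens("x" * 2000))  # → 500
```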

    • superfrank 8 hours ago ago

      My company did something similar (dashboard to track tokens). It was made available to managers about two weeks before it was available to everyone, so I got to see all my reports' usage before they knew they were being tracked.

      The dashboard got announced publicly and just about everyone's usage went up by 100%-200% almost immediately and hasn't come back down, but nothing I'm tracking shows any increase in output since then. We absolutely saw productivity gains a few months ago, but it feels like now people are just burning tokens for the sake of it.

      On top of that, as a reaction to the rising costs, we've now gone from unlimited token use to every engineer now having a monthly token budget of $600. I get why that was done, but we're a publicly traded US tech company worth 10s of billions of dollars. We're not hurting for money and the knock on effects are just crazy. For example, I had an engineer in sprint planning say about a large migration type ticket, "Can we hold that ticket until the end of the month? I don't want to burn through all my tokens this early in the month." I just cannot imagine that that's the culture that our executive team was trying to cultivate when they first purchased these tools.

      I'm not anti-AI and actually really enjoy using AI for development, but over and over I've watched business leaders shoot themselves in the foot trying to force more AI use on their employees in pursuit of ever increasing productivity. I just keep thinking that there's no way that any productivity gains we've seen from the forced, tracked AI usage are enough to offset the productivity lost from anxiety and churn caused by the unrealistic productivity expectations, vanity metrics, and mass layoffs that have come along with increased AI adoption.

      • swingboy 5 hours ago ago

        How were you measuring productivity gains (prior to the dashboard)?

        • superfrank 4 hours ago ago

          Without going into too much detail, my company is really, really big on estimates and predictable delivery timelines. An entire year's worth of work is specced out, estimated, and scheduled by the end of October the previous year. It's a really terrible process, IMO, but it's the process, so it is what it is.

          Normally, most teams (mine included) are about 10% behind their plan by the end of Q1. This year my team is closer to 10% ahead, despite being down one engineer due to a small re-org at the end of last year. These projects were planned and estimated before AI was in heavy use, when, at best, most AI-focused devs were still using it like smart autocomplete. We're now consistently beating those estimates by a good margin, which is not how previous years have gone.

          The AI metrics dashboards didn't roll out until mid-March and, while I'm still seeing us beat our estimates from last year, we're not seeing any additional gains. Basically, all of Q1 we had AI and no dashboards and were beating 2025 estimates by X%. For Q2 we had dashboards and extra pressure to use AI and we're still seeing those X% gains, but no additional gains despite higher token usages.

          We also have KPIs around completing a certain number of a certain type of Jira ticket that customers can file, and we've seen a similar pattern: a sharp increase in tickets completed in Dec-Feb, after which the new rate holds, with no additional increase after the company started pushing AI usage.

      • akomtu 6 hours ago ago

        Those executives are simply implementing the directive to inject as much AI as possible into every gear of the economy. Their bonuses depend on this. The idea is that if the world economy becomes dependent on this AI monstrosity, we won't be able to get rid of it. It will be like a situation with a nasty parasite that does a lot of harm, but cannot be removed without the host dying.

    • sardukardboard 10 hours ago ago

      A funny Goodhart’s Law parallel showed up during GPT-5.1 training, where the model was rewarded for using the web search tool, so it learned the behavior of superficially invoking web search to calculate “1 + 1” without utilizing the result.

      https://alignment.openai.com/prod-evals/

    • zerreh50 10 hours ago ago

      It will get really funny when they start imposing an exact number of tokens as a quota, where too little means you are an outdated luddite and too much is inefficient and wastes money

    • idle_zealot 10 hours ago ago

      > that's a dumb metric whether or not you're using it to evaluate employees

      Only if you assume in good faith that the point is to evaluate employees for productivity on some stated goal for the company or role. If you try to view the metric from other possible positions, the one I think fits best is the promotion of token consumption by all means. This is useful for signaling to the broader market that AI is profitable and merits more investment, and may be part of a deal between them and whoever they're buying tokens from. It makes more sense to me that Meta would be more interested in leveraging its control over people to manipulate the state of the world, market, and general sentiment than having them work on stable, well-established and market-dominant software services that really only need to be kept chugging along. Isn't mass-manipulation their whole business? Why wouldn't they use their employees and internal structure to contribute?

      • strongpigeon 10 hours ago ago

        Having worked in big tech, I can almost guarantee you you’re overthinking this.

    • sidewndr46 7 hours ago ago

      I'm reminded of the sales dashboard that tracked the number of calls each sales employee made. There was one employee in 1st place that I assume just always called the same customers multiple times. Her position was about 10x 2nd place.

      If someone gave me unfettered access to inference of modern LLMs, there would be no concept of measurement other than the total system wide capacity of whatever the company had available.

    • strongpigeon 10 hours ago ago

      Not that I disagree with you, but I’ve heard of this tactic being used in some orgs at both Google and Microsoft as well.

      It seems like a common conclusion from a management that wants to push for AI adoption. I doubt it’s super effective, but we’ll see how it turns out.

      • iugtmkbdfil834 10 hours ago ago

        It gets worse; my corp is tech-adjacent at best, so we get pushed to use AI, but also get heavily restricted tokens, ridiculous limits on internal tooling (think context for one short prompt), and the expectation that one should now be able to create the $result fast anyway...

        Edit: and if you question that, you are a troublemaker to add to the list

    • skybrian 8 hours ago ago

      This is a company incentive to increase expenses. Maybe not as bad as Dilbert's "I'm gonna write me a new minivan," but still.

  • 1vuio0pswjnm7 2 hours ago ago

    Alternative to archive.is

    No CAPTCHA, no Javascript, no DDoS directed at blog, no geoblocking

    https://static.nytimes.com/narrated-articles/synthetic/artic...

  • undefined 11 hours ago ago
    [deleted]
  • giancarlostoro 2 hours ago ago

    Man, is Mark hiring someone to run this dumpster fire? I'll take a pay cut (compared to whoever is doing so now) and give it a vision to aim towards instead of the mindless laps he seems to be running through, I'll leave after a year, and it will likely still produce a better outcome than whatever is going on with it right now. I'm a nobody software developer, but I love tech and hate to see what could otherwise be successful fail.

    I don't understand why he's struggling so hard with this stuff. AI shouldn't be forced on its employees or users. It should be something available to them, but not forced. I don't understand why he's burning time on so many bizarre things. There's plenty of things that could yield dividends with LLMs like optimizing them for speed while retaining serious accuracy, by coming up with new techniques. If he did that, and made a model that was exceptional at programming, he could compete with Claude Code easily, or Codex. Instead, I get the feeling he has no idea what he wants and is just burning funds endlessly.

    He could have had a cool Meta Verse, but he also could have just bought Second Life and had it prebuilt. Now he technically has the pieces, but not the focus he needs. Bro needs to delegate this fire to someone else before the ship sinks.

    If I didn't know better, the memes about him being a robot are actually true, and there's a safeguard that sabotages him from making any usable AI as a failsafe to prevent him from building a primitive AI inferior to himself that could kill humanity, plot twist is he doesn't know he's AI.

    • HDThoreaun 17 minutes ago ago

      Mark runs the company himself for no salary. Does come with a private jet and security though

  • incognito_robot 23 minutes ago ago

    This will probably get me downvoted, but I have zero sympathy for anyone who chooses to work for a company built around the harvesting of personal information.

    You do have a choice, and by staying you are making it. Stop playing at being a victim.

  • ceejayoz 7 hours ago ago

    > “This data is very tightly controlled,” Mr. Bosworth replied. “This will not be a leak risk.”

    Ooof. Famous last words.

  • wg0 4 hours ago ago

    This is the same visionary that:

    - Renamed a well-known brand.

    - Wasted billions of dollars on a cartoonish, third-grade, glitchy and buggy VR world, claiming it's the future.

    And now he's all in on AI, and now the models are closed source. Because.

  • stephc_int13 10 hours ago ago

    I believe that any kind of partial automation is going to make the job more soul-crushing.

    Ford-style assembly lines made the work of factory workers more miserable. Partially automated checkouts did the same thing to cashiers.

    I don't think there is any point in trying to resist automation, as the efficiency benefits are too important.

    • layer8 10 hours ago ago

      Efficiency gains are more important than people not having to spend their working life with soul-crushing tasks? I don’t quite follow.

      • stephc_int13 10 hours ago ago

        The assumption is that orders of magnitude more people will benefit from the efficiency gains, like it was the case in agriculture automation or factory work automation.

        In those cases there was a transition period; nowadays only a small fraction of the human population works to produce food, and their job is more about planning, finance, and orchestration of machine work, but many specialised jobs were lost or made miserable in the process.

        IMHO any job that can be done by a machine should not be done by a human, the tricky part is going there with as little undesirable effects as possible.

        • daveguy 9 hours ago ago

          Yes, eventually we will all be able to enjoy our delicious algorithm, attention, and data sandwiches on our lunch breaks.

    • themafia 10 hours ago ago

      > Ford style assembly lines

      The ones with 10 hour shifts and mandatory overtime? Yea, I don't think it's the _line_ that's making them miserable.

      > Partially automated cashier did the same thing.

      I've not once heard anyone in the service industry make this complaint.

      > as the efficiency benefits are too important.

      You can squeeze every last drop of productivity from your employees. In the short term this may even evidence profits. In the long term it only works if you hold a monopoly position.

      • stephc_int13 10 hours ago ago

        "The ones with 10 hour shifts and mandatory overtime? Yea, I don't think it's the _line_ that's making them miserable."

        The whole innovation was about making the jobs as simple and repetitive as possible so humans would basically work like robots.

        Once you're there, having removed any agency and freedom, pushing the hours to the limits of human exhaustion is just one logical step.

        • 3fsd 6 hours ago ago

          You've described the Amazon warehouse. I've worked there, and trust me, I did not see people displaying exhaustion. There were many there who did the job purely because of how simple it was and were OK with it. Perhaps they got conditioned to it.

          Yes it was jarring for me to experience that.

        • themafia 5 hours ago ago

          > was about making the jobs as simple and repetitive as possible so humans would basically work like robots.

          So they make fewer mistakes. Not that they become zombies that you are then able to abuse.

          > pushing the hours to the limits of human exhaustion is just one logical step.

          There's nothing logical about ignoring consequences. Which is probably why the "union strike" even exists. It's fighting illogic with illogic.

    • wat10000 9 hours ago ago

      We’ve had partial automation in programming since the first assembler was written. I don’t think we’re more miserable than we would be if we still had to write machine code by hand.

      • stephc_int13 9 hours ago ago

        People who enjoyed programming at this level (myself included) were not really that happy about it; most had to transition into a job that didn't value some of the skills they patiently acquired, skills where machines never attained the highest level.

        I would have been happy writing Z80 and 68000 assembly code for an entire career.

        • wat10000 8 hours ago ago

          Same here, but I think even you and I would get annoyed if we had to write machine code directly. Some people like assembly, but I've never encountered someone who eschewed even that.

          If we look at automation beyond assemblers (e.g. compilers), even if you or I might be content without it, I think it's safe to say that the vast majority of programmers are glad they don't have to write assembly.

  • nialv7 2 hours ago ago

    Mark Zuckerberg reminds me of a spoiled child.

  • monksy 3 hours ago ago

    I can't wait till we get VC money coming back and the recruiting strategy is engineering freedom and merit.

  • onlytue 10 hours ago ago

    As someone who hasn’t spent a vast portion of life believing technology would make life better, I’m not shocked at all.

  • jp57 7 hours ago ago

    They mention Meta's layoffs, which probably have more impact on employee morale than the AI stuff.

      My current theory of tech layoffs is that over the last decade or so, churn-inducing practices like stack ranking have gone out of vogue. One can speculate as to why this happened. Perhaps generational change made middle management unwilling to do the dirty work? Nevertheless, it happened.

    However, companies still want to, and some would argue need to, eliminate low performers, so now they periodically do a companywide reduction in force and frame it with whatever justification is handy, macroeconomic conditions, AI, whatever.

    This hypothesis would explain phenomena like companies hiring aggressively during or after a layoff, and why the layoffs keep happening year after year.

    • Ifkaluva 7 hours ago ago

      Not sure about other tech cos, I think Meta and Amazon currently do stack ranking.

      It seems to be a thing that comes and goes as the job market is weaker or stronger

    • xingped 7 hours ago ago

      Oh don't worry your pretty little head, stack-ranking and churning are still _very_ in vogue with the tech companies.

  • rl3 10 hours ago ago

    It occurred to me recently that AI's degradation of the human factor, by way of increased pressure on the remaining ranks of humans, might actually be far more damaging than the AI's output itself.

  • _doctor_love 10 hours ago ago

    I love the quote in there from Boz that basically says "no you can't opt out fuck off"

    • camillomiller 10 hours ago ago

      People focus a lot on how Zuckerberg is a deranged sociopath, but I think Bosworth should get the same criticism if not worse. The good face he put on while fucking over the world is utterly disgusting. I got to a point where I just wish ill fate to these people, because there is really no other process by which they can be slowed down or stopped.

  • TinyBig 9 hours ago ago

    On top of token tracking, they're also scoring employees on how much they teach AI to their colleagues. As bad as the token dashboard sounds, employees being forced to mine each other for credit sounds worse.

  • synergy20 9 hours ago ago

    from a different perspective, there are way more people who are truly miserable these days compared to those who earn probably more than half a million per year on average. we must live in parallel universes.

    • loeg 8 hours ago ago

      I don't think the median Meta employee earns more than $500,000. Certainly a big chunk do.

      Just to envelope math some of this: getting GE (~15%?) ratings at IC5 (plurality of employees) in a SWE/PE role in a west coast metro (95-100% pay scale) puts you somewhere around $550,000 with a flat stock price. (But most employees will get lower ratings.) I haven't run the numbers with the smaller 2026 refreshers and revised "Checkpoint" rating scale.

  • shmerl 2 hours ago ago

    As if anyone would trust explanations on why they are gathering data. Data is data and can be used for whatever.

  • jijji 5 hours ago ago

    Facebook the website reminds me of a really bad implementation of MySpace. MySpace was better, even in 2003. There are hundreds of usability bugs across various parts of the platform that have remained unfixed for over a decade. For a company that has 78,000 employees, you would think one of them might want to dig in and fix the web interface bugs. What's weird is that in the age of Claude Code, it would probably take one software engineer a week to fix all of them, so it's really pure incompetence. I think they spend so much time on automation around restricting usage of the platform that they forgot about the user interface bugs that plague it.

    Also, avoid using Meta Pay aka Facebook Payments, where a user can send a payment to another user via the Messenger app. Someone sent me money a few weeks ago, and two weeks later they still have the payment marked as "Completed" on the sending side and "Cancelled" on the receiving side. I told the sender to just do a chargeback with their bank because Facebook basically stole the money. Don't use Meta Pay for sending payments to anyone. When you try to open a "case" about it, you reach a call center in Indonesia where the people have no access to anything about the transaction; they just send it up the chain, only for an automated response to tell you to do something the website doesn't even offer as an option. I don't think there are any humans in the loop, besides the Indonesian call center that has no access to any of what you're calling about.

  • BrenBarn 3 hours ago ago

    Not to mention making the rest of the world miserable too.

  • moneycantbuy 9 hours ago ago

    unionize

  • androiddrew 10 hours ago ago

    I believe that's the point.

  • bossyTeacher 11 hours ago ago

    Not going to lie, I have no pity for the tech employees of a company that has spent most of its existence making the world a worse place. They are finally getting a taste of the medicine Facebook has been giving to everyone in the last 2 decades.

    • xantronix an hour ago ago

      I hear you, but many in the tech industry will likely use this moment as a bellwether of what else they can do to extract every last drop of value from their employees. This is going to get really fucking ugly really quickly if, say, Microsoft, were to package this capability up as part of their fleet management suite to sell to companies who all want their own models like this.

    • puttycat 9 hours ago ago

      I agree, but we do owe them PyTorch and React.

      • bossyTeacher 9 hours ago ago

        The world would be fine with any of the many JS libraries/frameworks

    • mbroncano 9 hours ago ago

      N/a

  • jcgrillo 2 hours ago ago

    It's hilarious they're using "AI" as the excuse to do all this dystopian zero theorem shit. This is web3 all over again, just bigger and more "disruptive". AI isn't useful in and of itself, just as a blanket excuse to justify the craven impulses of the most ethically bankrupt tech functionaries. Disgusting.

  • outside1234 9 hours ago ago

    I am not a big fan of unions, but we need some form of union as soon as possible.

    • Cyph0n 9 hours ago ago

      One of humanity’s greatest inventions imo. Just imagine a world where we didn’t have the ability to mobilize a workforce in pursuit of a common goal.

    • dawnerd 9 hours ago ago

      I came to that conclusion the other day after reading all of the layoff letters. The big corp tech workers def need to start considering it.

    • alex1138 8 hours ago ago

      Don't worry! There was a link about how Facebook Workplace censored the word unionize

  • Giorgi 10 hours ago ago

    Meta has been banning its core users for months now; over 20 million users are now banned. They are in a death spiral after that Metaverse fiasco.

    https://www.nbcdfw.com/news/nbc-5-responds/meta-users-contin...

  • vrganj 9 hours ago ago

    Modern elites forgot that treating workers nicely was the compromise we as a society settled on because the alternative is pitchforks and torched homes.

  • downrightmike 11 hours ago ago

    Meta made billions on AI in 2025, 10% of their revenue... by allowing scammers to use AI to attack users and steal their money.

  • aenis 3 hours ago ago

    It still amazes me that anyone uses Facebook. I can understand smoking or drinking; there is a chemical dependency. But that absolute crap? It's "Las Vegas slot machines at 4am" level miserable.

  • Kapura 5 hours ago ago

    pikachu_zoom.gif

    • doitLP 4 hours ago ago

      This really isn’t Reddit. Please try to preserve the last vestige of something resembling good dialogue on the internet.

  • AIorNot 9 hours ago ago

    Is there any CEO out there as insecure as zuckerberg?

  • shevy-java 10 hours ago ago

    Well, that's the goal of AI Skynet - it has no need for humans. Did nobody learn from that movie?

  • jmyeet 7 hours ago ago

    It's hard not to look at Meta and come away with any conclusion other than they shit the bed. Hard.

    I think the last good move Meta made was buying IG. Maybe not good for IG users but absolutely a great move for Meta. Not quite as good as Google buying Youtube but it's up there. Best $1 billion any company has probably spent.

    But Facebook is a graveyard of conspiracy-theory Debras, anti-vaxxers, and your racist uncle just posting links all day. Sharing links was a contentious decision; it clearly improves short-term engagement, but (IMHO) it destroys the platform's initial purpose of keeping in touch with friends and family.

    Let's not forget too that Meta spent probably billions on building its own crypto (ie Libra). But that was just a taste of what was to come. The Metaverse was one of the largest boondoggles in corporate history: $70B+ with no product-market fit. It was an entirely ego-driven "build it and they will come" moment from somebody who doesn't know what to do with the empire he's built and is surrounded by Yes Men.

    Facebook and AI feels a lot like Microsoft and mobile. Microsoft just completely missed the boat based on poor leadership and conflicting priorities (eg wanting one Windows code base for all devices). Facebook has a huge corpus of human communication and engagement, which should be a treasure trove for building AI but I don't think anybody really believes Meta knows what they're doing or will get anywhere doing it.

    I've seen this in big tech companies: big initiatives get well-funded. Seasoned veterans swoop in and cash the fattest checks (in bonus stock) until the entire thing falls apart. Think Google Wave.

    What I really think is going to kill these companies is the corporate layoffs or, rather, what they represent. They represent big tech companies turning into Corporate America, where politics defines your career, the company seems incapable of doing anything due to competing fiefdoms (a la Intel), and middle management just reorganizes every 6-12 months so nobody in management ever faces the consequences of their actions.

    Monitoring your employees' keystrokes with AI isn't going to help either. But management (or the consultants they end up paying) are never going to come to the conclusion that the problem is management.

  • casey2 6 hours ago ago

    This isn't newsworthy and I'm sick of propaganda.

  • deanCommie 10 hours ago ago

    Every big tech company's embrace of AI is making all of their employees miserable.

    Whereas if you're half-competent and at a startup, the AI is an incredible opportunity to try to leap ahead while the prices are subsidized (by the big tech behemoths fighting wth each other)

    The reason is a complete inversion of Ownership and Agency.

    For a decade of ZIRP, big tech convinced its employees that they were "changing the world" and that what we did mattered. Sure, the exorbitant salaries and constantly rising stock value didn't hurt, but honestly, other than the FIRE cultists, for most of us the difference between 200k/year and 800k/year didn't feel like much day to day (other than the ability to buy a house or something, and feel safe with a retirement nest egg). No, most people were missionaries, not mercenaries.

    2021 was the first crack. The comps went crazy, half the industry turned over, and the ones who didn't leave felt a bitter sting when it became blatantly clear that all the new arrivals were just in it for the $$$, and that the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.

    Then came the yearly layoffs, chipping away further, and reminding every employee that they're at the mercy of a spreadsheet and the whims of people 3 levels above them in the org chart, in spite of the economic reality of their product, or their personal productivity.

    And now we're here, and it's clear that all of the above still holds. The old-timers who hung around see that their personal output doesn't matter and their product's P&L doesn't matter. All that matters is 1) the company's AI strategy (and if they're not part of it, they're secondary), and 2) tokenmaxing.

    How can anyone find joy in this environment unless they're purely in it for the comp?

    I couldn't. I left my big tech job in December after 15 years, and have not been this happy at work since pre-COVID.

    • Ifkaluva 10 hours ago ago

      > the difference between 200k/year and 800k/year didn't feel much day to day (other than the ability to buy a house or something, and feel safe with a retirement nest egg)

      I can’t believe I read this sentence, lol.

      800k is the ability to buy a house and support a family on a single income. Do you see how many people lament the days when this was possible? So many memes about the lifestyle Homer Simpson could provide, and that many modern families can't. 800k makes it possible.

      It’s a huge lifestyle upgrade, especially if your partner wants to do something artistic, academic, or otherwise less profitable.

      • treis 9 hours ago ago

        The real difference between 800k and 200k is the ability to work for 5 years and make 200k off interest for the rest of your life.
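        Whether that pencils out depends entirely on assumed tax, return, and withdrawal rates; a rough sketch with hypothetical numbers (every rate here is an assumption, not anyone's real comp):

```python
def portfolio_after(years: int, annual_savings: float,
                    annual_return: float = 0.07) -> float:
    """Grow a portfolio by investing annual_savings at the end
    of each year, compounding at annual_return. All rates are
    hypothetical illustrations."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + annual_return) + annual_savings
    return total

# Hypothetical: bank the post-tax difference between 800k and 200k.
savings = (800_000 - 200_000) * 0.55  # assumes ~45% all-in tax on the extra income
pot = portfolio_after(5, savings)
print(round(pot), round(pot * 0.04))  # portfolio after 5 years, and a 4% annual draw
```

Under these made-up assumptions the 4% draw lands well under 200k/yr, so the claim hinges on higher returns, lower taxes, or a longer runway.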

      • alistairSH 10 hours ago ago

        While I mostly agree, $200k makes that possible too, if you play it right. For example, go remote, move to the countryside, let your spouse rear the kids or the dogs or whatever.

        But yeah, "no difference between 200 and 800", while spelling out some MASSIVE differences is quite a statement.

      • dehrmann 9 hours ago ago

        This is essentially describing the backward-bending supply curve of labor:

        https://en.wikipedia.org/wiki/Backward_bending_supply_curve_...

      • joshribakoff 10 hours ago ago

        If someone has a 10m portfolio, it really is irrational to chase a higher w-2.

        • zzrrt 9 hours ago ago

          How many do you think build that portfolio from $200k/yr? The point is that an extra 600k, at least for several years, is life-changing when managed wisely. I could perhaps see the GP's point about the "day to day" feeling, if you acclimate to the baseline of financial security that having that much money buys you. I'm only assuming, though, having never had the opportunity to experience that kind of comp.

    • dinkumthinkum 6 hours ago ago

      The idea that there is not much difference between 200K and 800K per year is absolute nonsense. I feel like it's one of those things that are so stupid we are all dumber for having heard them. I don't know if it's just one of those things where people on HN virtue signal as if they are the most minimalist, live in 250 square foot tiny houses, live off radishes they grow in their microgardens, or whatever. This is like people who say, "eh, you don't need to make more than $95K."

    • kogasa240p 10 hours ago ago

      Good post

      >2021 was the first crack. The comps went crazy, half the industry turned over, and the ones who didn't felt a bitter sting where it became blatantly clear that all the new arrivals were just in it for the $$$, and the companies were willing to pay for the backfills but not to reward the loyalty of the missionaries.

      Also SVB collapsed in late 2022, notice that AI hype started right after.

      • iugtmkbdfil834 10 hours ago ago

        Sigh, if it had actually collapsed, it would have been fine. Summers saved its depositors before it collapsed. I still don't get how that was not a story on a par with 2008 (or maybe it wasn't because the fallout was avoided).

  • syngrog66 3 hours ago ago

    Zuck is a billionaire sociopath. All else flows from that.

  • ost-ing 11 hours ago ago

    As someone who has spent a vast portion of my life believing technology would make life better, I've come to the realisation that this idea is a fallacy. Technology amplifies power, and until we collectively redefine and enforce a value system that benefits us all, advancements in technology simply serve as a means of subjugation.

    • timoth3y 9 hours ago ago

      I think that is the core truth of the matter. Technology itself does not make life better.

      I recently published an article about the Luddites. If you look at their actual demands, they were not anti-tech. They were labor activists. Life got much, much worse for most people in the industrial revolution until the laws they advocated were finally implemented.

      https://www.disruptingjapan.com/the-real-luddites-would-have...

      • pitched 8 hours ago ago

        The Luddites were against the systems that were shifting work away from skilled workers (them) to unskilled workers (commonly children). I have no doubt serious injury and death happened at those machines but find it hard to believe that was the cause of the Luddites. All signs point to them being more worried about themselves.

        There’s no way they would have been pro-AI. It would take a very skilled VC to warp the world enough to make that sound true.

      • willhslade 9 hours ago ago

        I mean, dentistry and vaccines, but ok.

    • ahartmetz 11 hours ago ago

      It really depends on the technology. Different technologies redistribute power differently. LLMs are very "centralizing" indeed. It is hardly feasible to train your own LLM as a private person or even a small company - at best you can download a pre-trained one, which at least nobody can silently change or take away from you.

      • krupan 10 hours ago ago

        Very well said. Free software was a revolt against technology that you have no control over and I feel like the people that are whole heartedly embracing "AI" have completely forgotten this. They now use an incredibly expensive proprietary piece of technology that they have no control over to write a bunch of code that they cannot (even if they tried) understand and they talk like it's the most amazing thing ever. This is pure short-sighted foolishness.

        • ahartmetz 10 hours ago ago

          Yeah - I really like F/OSS for the freedom aspect and I intensely dislike SaaS LLMs for the same reason. I tolerate them more easily for ancillary tasks like vulnerability search or super-powered LSP-workalikes to learn about a code base. There will eventually be a lot of nuance, I hope and believe - reasonable compromises between going all in and abstaining completely. So far, I'm doing okay just occasionally dabbling in local models. I at least need to know what people are talking about.

      • matusp 10 hours ago ago

        That's why we have the state. There are many technologies that we, as a society, decided to control in various ways. You can't just build a nuclear weapon, for example. There is no particular reason why we let tech bros control many aspects of our lives, apart from legal inertia.

        LLMs can be "trivially" decentralized by expanding the concept of intellectual property to also cover algorithmic processing. It's just a matter of how we set up our laws and rules.

        • krupan 10 hours ago ago

          Nobody had to legislate Free software into existence in order to protect us. Wise people saw the need and did something of their own accord. We are still free to do this!

          • ahartmetz 10 hours ago ago

            It seems like one needs a big machine farm and a vast corpus of training data with a lot of manual curation to get started creating a competitive LLM, plus whatever technical expertise that I don't even know about. The stuff that makes LLMs exist now and not earlier.

            It might be possible to organize all that with volunteers and some paid work, but how in practice? Stallman seems kind of out of the game at this point, and there is no Linus Torvalds figure for this either, as of now.

            • moring an hour ago ago

              > It seems like one needs a big machine farm and a vast corpus of training data with a lot of manual curation to get started creating a competitive LLM, plus whatever technical expertise that I don't even know about. The stuff that makes LLMs exist now and not earlier.

              "big machine farm" reminds me of folding@home, which needed the same and got it.

              "manual curation" is what Wikipedia did, as well as the free software community.

              "technical expertise" is present in the free software world too. It is sparse since it is sparse in the world as a whole, but it exists.

              "no Linus Torvalds figure" might be the main problem ATM.

            • stevenhuang 7 hours ago ago

              > there is no Linus Torvalds figure neither for this, as of now

              Well yes there is. It's Karpathy.

        • apsurd 10 hours ago ago

          The state is the people, and the people want tech billionaires because they want the same chance at being that (tech) m/billionaire.

          Temporarily embarrassed millionaires; I cannot get around that issue toward collective action, toward myself contributing to an answer. I'm stuck. I can't unsee its truth =/. The individual will choose enrichment. We all will.

          • rixed 3 hours ago ago

            No we won't, and that's why we had free software in the first place. Many people dream of a community, not of being forever singled out.

          • watwut 10 hours ago ago

            It does not seem like people want tech billionaires. It is fairly common to hate them.

            It is just that people's preferences don't matter, as billionaires have disproportionately more power.

            • apsurd 9 hours ago ago

              My example is always Bezos; everyone "hates" greedy tech Billionaire Bezos but how did he get there? We all put him there every day, every hour, every purchase.

              If basically everyone transacts with Amazon, willingly, how is it possible that Bezos is the bad guy? I get that it's not black and white, but the point stands: he didn't overthrow the government; we put him there.

      • worik 8 hours ago ago

        > It is hardly feasible to train your own LLM as a private person or even a small company

        Yet

    • mlinsey 9 hours ago ago

      That's definitely too broad a statement. I'd argue encryption, oral contraceptives, and the printing press were all strongly decentralizing.

      • mystraline 9 hours ago ago

        The Public won the encryption battle against the USG and ITAR idiocy. Well, until climate destroying shitcoin. Thats a big Lose.

        The printing press had sooo much state violence over that everywhere.

        Oral contraceptives are a fight the USA is losing to the extremist christian republicans. Right now the line is right on Misoprostol. And shithole states like Texas even criminalize day-after pills and 'suspect' miscarriages.

        And horrible tech like "weatherproof camera + AI + battery + solar + cell" (FLOCK) are easy to implement and already have been used in tracking women with miscarriages in Texas and across the country.

        It seems like for every new tech, there's one really cool good thing for the public, a few neutral things, and 1-3 absolutely terrible things.

        And those terrible things make money. Lots of money.

        • mlinsey 9 hours ago ago

          You're describing efforts by powerful institutions to squash the technology, which they definitely try to do, but that's just a strong signal that the technology itself is inherently opposed to centralized power, not an enabler of it.

          Other technologies like surveillance (and, perhaps, AI) are more clearly centralizing and enabling of power.

          The difference matters a lot if you're having mixed feelings about working in technology.

          • mystraline 7 hours ago ago

            You are anthropomorphizing technology. He doesn't like that.

            • blast 3 hours ago ago

              mlinsey wasn't anthropomorphizing technology, and your GP comment seems extreme to me.

    • throwyawayyyy 5 hours ago ago

      The Stasi had one informant per 6.5 people. We're moving to a world in which everyone can have their own personal informant. I really don't think I am being hyperbolic here.

    • fidotron 10 hours ago ago

      Let's go there: this is what the Unabomber was on about, and there has long been an effort to stop people noticing this.

      Ultimately you end up with either going for totalitarianism (either to arrest development in the status quo, maintain a state of anarcho primitivism or technocratic tedium) or we resist that and break out by trying to forge forward into some unknown unchartered territory.

      In practice we have no choice but to aim for the unknown and hope. Can't lie and say I can see what the way through all this is though.

      • iugtmkbdfil834 10 hours ago ago

        Not so long ago, I came to a rather unpleasant realization: whether a lot of that happens will depend heavily on whether the ones currently trying to make technology control every facet of our lives decide to let society get dumber first (think Idiocracy, which AI very much could enable). If not, it is anyone's guess, because people will still have some basic skills and memories of what could be.

        I am hoping for the best, but life has taught me hard not to bet against humanity's worst instincts.

        edit: add whether

        • fidotron 10 hours ago ago

          100%. Same applies to any hypothetical sentient AI that may or may not arise. The incentives to keep everyone weak and dumb are too strong.

          I have a friend in a position of some influence, and am currently trying to persuade them to stop being so comfortable trusting in humanity to come to the right decisions for exactly that reason.

        • JuniperMesos 9 hours ago ago

          The thesis of Idiocracy is that society gets dumber in the future because intelligence is mostly genetically-determined and smarter people systematically have fewer children than dumber people, i.e. literal evolutionary selection against human intelligence over many human generations. This is clear in the first several minutes of the movie. People who recognize that this is what that movie is saying often then condemn it for being Nazi-adjacent pro-eugenics propaganda.

          In the logic of Idiocracy, the way that an AI would "allow" the future society portrayed in the movie is by letting dumb people systematically have more kids than smart people, and "not allowing" this would entail some kind of coercive eugenics policy aimed at getting smart people to have more kids than they would otherwise be inclined to.

          • throwaway173738 9 hours ago ago

            Idiocracy is basically arguing against the idea that progress is inevitable so we can just sit around and do nothing. Joe's character development is his shift away from getting out of the way and toward a follow-or-lead choice. It's called out at the start of the movie when he's sitting on his butt at the military library and his CO is like "you're not supposed to get out of the way."

          • tremon 9 hours ago ago

            because intelligence is mostly genetically-determined

            None of the points of Idiocracy depend on whether intelligence is by nature or by nurture. The premise of the movie stays exactly the same if you replace those two minutes of backstory with a dysfunctional education system, the return of child labour, an increase in teen pregnancies, and anti-intellectualism in general.

            • cjbgkagh 9 hours ago ago

              I think the opening scene makes the hereditary position very clear; I don't know how anyone could interpret that as not making the case of 'nature' determining intelligence.

              Edit: sorry, either I misread your comment or it was changed. On the premise that, setting aside the intro, a nurture-based idiocracy could be possible, I would suggest it's the thoroughness and extent of the dumbing down that wouldn't be possible if it were based on nurture.

              • tremon 8 hours ago ago

                smart people are born into dumb families pretty regularly

                Are you implying widespread infidelity here, or are you making the case that something besides "nature" may be determining intelligence?

                • cjbgkagh 8 hours ago ago

                  I did edit that out before your comment because I didn’t feel I needed it, but I still stand by it.

                  There is still quite a lot of randomness in genes; the idea that intelligence would always be the average of the parents would require that a very large number of SNPs are involved. GWAS studies do say this, but that is more a side effect of using linear regression for the scores, as this assumes independence, which I think is not a safe assumption. I think some intelligence genes can be recessive, so you can have two carrier parents where 1/4 of their children will be smarter than either of them.

                  I should also add: by "pretty regularly" I mean from the point of view of the smart people. Given a sample of smart people, how often are they notably smarter than both parents?

              • iugtmkbdfil834 9 hours ago ago

                Is it hereditary when parent is a dumb jock who creates a ridiculously bad environment in terms of 'nurture' aspect? I am not sure you thought your argument fully through.

                • cjbgkagh 9 hours ago ago

                  Yes, and science explicitly allows him to continue procreating where otherwise he would not have been able to (he was in an accident because he was stupid). It's explicitly saying the Darwinian winnowing of the weaker (dumber) members of the species has been interfered with.

                  • iugtmkbdfil834 8 hours ago ago

                    Fair point. I re-read your argument and I accept it.

          • tardedmeme 8 hours ago ago

            That's the story they used in the movie. It's possible that real life will arrive at a similar situation by completely different means.

            Also did you think about why dumber people might have more children? A large part of that reason is national policy choices.

          • iugtmkbdfil834 9 hours ago ago

            Here is a problem, I am not arguing "Idiocracy: the process as presented in the movie"; I am arguing "Idiocracy; the resulting dumbed down populace". Admittedly, it is a mental shortcut and a bad one since it clearly did not land as I had hoped.

          • rixed 3 hours ago ago

              The thesis of Idiocracy is that society gets dumber in the future because intelligence is mostly genetically-determined
            
            I don't remember the movie taking side on the nature vs nurture debate. The thesis is that intelligence is _inherited_.

            Friendly reminder that plenty of nurture is inherited too.

          • dinkumthinkum 7 hours ago ago

            Why is saying things that are true called propaganda? Also, it is not just genetics, but that is a part of it. The idea is also that parents with lower intelligence will value education for their offspring less and care less about whether their kids learn to read well, etc. Honestly, it's just completely obvious and is playing out in real time.

            • atq2119 6 hours ago ago

              > The idea is also that parents with lower intelligence will value education for their offspring less and care less about whether their kids learn to read well, etc.

              Educated people can neglect their kids. And less educated people can still recognize when education is valued by society.

              Now look around. Do you feel like we're living in a society that values education? Did the successful people that kids see in their formative years get there through education, and/or do they visibly value education beyond lip service?

              Some of them, yes. But I'd argue that between influencers, teachers' pay, and the increasingly obvious nepotism and corruption by people in power, the situation is looking pretty dire. We don't truly value education as a society and are therefore teaching a new generation that education isn't to be valued. And that has nothing to do with genetics.

              • dinkumthinkum 5 hours ago ago

                But all your exceptions, at best, prove the rule. Just go and observe the world. That is why I said it is happening in real time. I think you are not looking at what's in front of you. Teachers' pay? Teachers were never highly paid. It is quite clear that many of the families of kids in low-performing schools are not families that push education or reading or anything of that sort. Yes, you can have a well-to-do family where both parents are lawyers and they neglect their kids; I think it's not a coincidence that many such families produce kids that become lawyers. You may think overall people are valuing education less, and that might be true, but that is not my argument. My argument is that there are clearly families that value it much, much less, and that is a cycle, and it is not down to teachers or all the other boogeymen.

          • lelanthran 7 hours ago ago

            > People who recognize that this is what that movie is saying often then condemn it for being Nazi-adjacent pro-eugenics propaganda.

            In this respect, that movie is a great filter for virtue-signalling low-intelligence societal rejects.

          • SanityPlease 9 hours ago ago

            I recently saw a razor commercial where a man was shaving his cheek. I immediately condemned the company for being Nazi-adjacent propaganda since the actor was one step away from giving himself a Hitler mustache.

      • a_victorp 6 hours ago ago

        In my view, the core of the solution here is to realize that no system will be stable and "perfect" forever. That is, we may chart into the unknown and arrive at a pretty good solution that benefits people, but over time, as people relax, some will try to take power and eventually succeed. My point is: some people will always try to get advantages. So it will always fall to the community to put in work to improve society and guarantee the benefits are given to all. There is no defining a set of rules and forgetting about it.

      • jimbokun 3 hours ago ago

        Or find a way to make government respond more to voters than dollars.

      • rexpop 8 hours ago ago

        This issue is evident to many smart people, and it would behove you to find a few whose conclusions were more prosocial, and sustainable.

        One such perspective is Tools for Conviviality, a 1973 book by Ivan Illich.

        Your ultimatum is imaginatively anemic.

      • Aurornis 7 hours ago ago

        > Let's go there: this is what the Unabomber was on about, and there has long been an effort to stop people noticing this.

        There has not been "an effort to stop people noticing this". The Unabomber Manifesto has been available everywhere and published across mediums from the start. The topic has been beaten to death by everyone from anarchists to eco-fascists to internet edgelords since it was released. It has also occupied a place of debate in academia, being studied and criticized in a lot of courses.

        The Unabomber Manifesto wasn't even a particularly good critique on this topic. It just happened to become a popular one because he was a terrible person who murdered a lot of people and wanted to murder a lot more. The common criticism of the manifesto is that it was a bunch of cliches tied together with some writing that appeared eloquent, and then he forced it into notoriety by being a literal terrorist.

        It doesn't stop comments like this from implying that he was on to something or the next step of implying that there's some broader conspiracy to stop us all from noticing that he had a point. The latter conspiracy breaks down when you look at how much everyone knows about the manifesto and how it has been reprinted and discussed to death for years. He even wrote and published entire freaking books from prison.

      • tardedmeme 8 hours ago ago

        "uncharted territory"

      • hackable_sand 9 hours ago ago

        I mean, I don't see what the rush is.

        It's like Silicon Valley overdosed on Adderall.

        You can have the same tech, just in 5 human generations. I don't see why you have to have it now.

        • BowBun 8 hours ago ago

          For about 5% of Silicon Valley, reaching these new 'heights' of civilization is the goal. For the rest, including a bunch of folks in these threads, the primary motivator is building generational wealth for their family, humanity be damned. I'll just keep nodding my head at these HN discussions while pushing down the thought that a majority of this crowd is complicit (myself included). Every day, I grow more confident in my choice to eschew my legacy and leave y'all to it.

    • krackers 11 hours ago ago

      >In any technologically advanced society the individual’s fate must depend on decisions that he personally cannot influence to any great extent. A technological society cannot be broken down into small, autonomous communities, because production depends on the cooperation of very large numbers of people and machines. Such a society must be highly organized and decisions have to be made that affect very large numbers of people. When a decision affects, say, a million people, then each of the affected individuals has, on the average, only a one-millionth share in making the decision

      • idle_zealot 10 hours ago ago

        I don't know what you're quoting, but I wish it were the case that something affecting a million people granted each affected individual about a one-millionth share in the decision. I don't think that would always yield good outcomes, but at least it would be democratic. Structures that enable that are what we should be building.

        • tejohnso 9 hours ago ago

          With our level of technology, I don't see why we couldn't put that kind of decision directly into the hands of individuals rather than leave it to "representatives", or worse yet, corporations that aren't even required to ask. Maybe I'm not thinking through the difficulties well enough, but what we have, with elected representatives campaigning on one set of ideals and then voting the complete opposite way, is unacceptable. At the least, that should be grounds for imprisonment. Maybe that would be sufficient to get the representative voting system working well enough.

          • atmavatar 8 hours ago ago

            > With our level of technology I don't see why we couldn't have that kind of decision directly put into the hands of individuals rather than leave it to "representatives" or worse yet corporations that aren't even required to ask.

            Reading the contents of proposed bills is a herculean task, to the extent that even our elected representatives dedicated to the task don't do so a significant fraction of the time. There's perhaps a good argument that's mostly because representatives (particularly in the House) spend too much time fundraising, but imagine the outcome when the burden is placed on people who have (sometimes several) completely independent, full-time jobs.

            I would also argue that there's value in debating bills before passing them, but this opportunity for debate would all but disappear in a direct democracy, both because it's an additional burden on top of the time needed to read the bills and because it's a logistical nightmare to set up a proper debate venue that can properly accommodate everyone.

            On top of that, you have to deal with the fact that the majority of US adults' literacy levels are below 6th grade, making them less likely to understand legislation they read or be able to engage in meaningful debate about it.

            I think I'd want to fix our electoral system to make it more representative of the public (i.e. use something better than winner-take-all, first-past-the-post) before I'd even want to try tackling the monumental problems that we'd face in trying to enable a direct democracy for anything beyond the local city/municipality level.

            > Maybe I'm not thinking through the difficulties well enough, be what we have with elected representatives campaigning on one set of ideals and then voting the complete opposite way is unacceptable. At least, that should be grounds for imprisonment.

            I'm with you somewhat in spirit, but I think the devil's in the details.

            A particular concern I'd have with doing this is that it's fairly common for representatives to attach riders to bills that have little to nothing to do with the original text. As such, there may be times when my representative may be forced to vote against a bill, the core of which is something they campaigned on, because one or more riders are completely unacceptable.

            I do think there's probably value in providing a mechanism to recall representatives and senators, not the least of which is because we've seen in recent history several such politicians do full 180s and even change political parties upon election.

            I don't think we want to open the pandora's box of incarcerating representatives based upon their voting history, though.

        • rglover 10 hours ago ago

          In some circles, he goes by Uncle Ted.

          • kQq9oHeAz6wLLS 10 hours ago ago

            To quote a movie:

            In the 1960's there was a young man graduated from the University of Michigan. Did some brilliant work in mathematics. Specifically bounded harmonic functions. Then he went on to Berkeley, was assistant professor, showed amazing potential, then he moved to Montana and he blew the competition away.

            • ajdegol 9 hours ago ago

              But you forgot about Vickers

        • kingofmen 9 hours ago ago

          That is why the writer specified "on average", which clearly remains true, at least in the case that the decisionmaker is part of the affected group. The optimistic part is in assuming that latter.

    • thomaswoodson 10 hours ago ago

      New around here, but… for those interested in a deep dive, I highly recommend reading The Technological Society, by French philosopher and sociologist Jacques Ellul.

    • zmmmmm 5 hours ago ago

      > Technology amplifies power and until we collectively redefine and enforce a value system that benefits us all

      The fallacy is believing that some kind of invisible hand will guide it to automatically produce an equal outcome and therefore any and all regulation is inherently bad. This seems to be a prevailing belief in the US in the current climate.

      We have institutions that are designed to redistribute power and they are called governments. People have to believe in their role in doing that enough to actually empower them to implement it though.

    • marcosdumay 3 hours ago ago

      > Technology amplifies power

      Some technologies concentrate power, some technologies distribute it. "The WWW" is still too coarse of a category to have just one of those effects up there.

      Some people just don't show the capacity to handle nuance or complexity.

    • thih9 10 hours ago ago

      One attempt was open source. Or perhaps libre software? I guess it is not a success since only one of these looks mainstream.

      • krupan 10 hours ago ago

        Free/Open Source software is very mainstream now; I'm not sure what you mean. But maybe we are taking too much of it for granted?

        • thih9 8 hours ago ago

          I meant that “open source” achieved pop culture level recognition. Libre software never did, most people are still unaware of the difference or don’t care.

          A popular claim against open source is that it alone is insufficient to prevent abuse (here: accumulation of power).

          The recent decade has shown many cases of that, with corporations adopting open source projects without giving power to their users, e.g. Android or cloud services.

          Perhaps if we understood open source less as a process and more as a movement (so: if libre software movement was more popular) things would be different.

      • hamdingers 10 hours ago ago

        It is curious how successful AI developers have been in trying to redefine "open source" as "the binary is free to download"

        • echelon 10 hours ago ago

          The OSI is garbage, and "open source" outside of the most viral licenses is too.

          I'll go further and say that it accelerated getting us into this mess we're in today.

          The OSI is owned and controlled by the tech titan hyperscalers who benefit from free labor.

          Useful "open source software" always gets encrusted by the big titans that then build means to control the tech, and then the means to control us. And just to rub salt in the wounds, they rarely compensate the original authors.

          Android is Linux, right? Then why can't we install our own software? Why does it spy on us? Open source is so great, right?

          95% of humans will never own a phone that gives them freedom. And we enabled that.

          Everything we as tech people own is also getting locked down. We're going to have to start providing our state ID to access the internet soon.

          But OMG, Year of Linux on the Desktop 2012!!12

          Pretty soon you won't even be able to use your Linux. Everything will be attested.

          Open source hasn't stopped power from accruing to the titans. It's accelerated their domination.

          People rush to defend Google and Amazon when you criticize how they profit off of Redis, Elasticsearch, etc. The teams that build the tech aren't becoming wealthy, and most of the bytes flowing through those systems are doing so behind closed source AWS/GCP/Azure offerings.

          These companies then use their insane reach to tax everything that moves. Google owns 92% (yes, 92%!) of URL bars and they tax every search, especially searches for other companies' trademarks. They do even better - they turn it into a bidding war. Almost nothing that exists in the world today can make it to you without being taxed by them.

          If they don't like your content, you just disappear.

          Mobile platforms have never been ours. We can't install what we want. We're soon going to be locked at the firmware level to just Google and Apple and forced to use their adblocking-free, tracker-enabled "browsers" (1984 telescreens). Any competition can't get started due to the massive scale required, meanwhile Apple and Google tax everything at 30% and start correlating everything you do, everyone you talk to, everywhere you go in their panopticon.

          "Open source" was wool pulled over our eyes so that we happily built, supported, and enabled this.

          Open source should be replaced with "our proletariat users and small businesses can have this for free, but businesses listed on any stock exchange cannot commercialize this ever unless they pay through the nose for it".

          "Source available" / shareware is peak. Give your users the thing, and the means to maintain it after you're gone, but tell Google et al. to go away.

          "Fuck you, pay me" as the artists frequently say.

          But also, let's stop giving the Death Star free labor.

          (edit: I'd love a feedback sampling of the heavy downvotes. OSI purists? Goog employees? Surely MIT/BSD fans and not anyone who follows Stallman.)

          • xtracto 9 hours ago ago

            >Pretty soon you won't even be able to use your Linux. Everything will be attested.

            I want to rescue this snippet.

            Few of us remember the "fight" and the discussions that happened when Firefox first pondered allowing encrypted video on the platform. Same with Linux. This was when the powers that be forced Netflix and other video distributors' opaque tech into the web. The same thing happened with DeCSS and DVD playback on Linux; but that generation was a bit more... rebellious.

            But we as a society are indeed slowly and steadily giving away our rights of many, for the rights of few cartels.

            It's been a sad journey to see for someone born in the early 80s.

            • echelon 9 hours ago ago

              > But we as a society are indeed slowly and steadily giving away our rights of many, for the rights of few cartels.

              Increasing geopolitical multi-polarity may force big tech to give up ground. The EU and ASEAN in particular should be hitting Google et al. with the regulatory hammer.

              When we get clearer heads back in power (Lina Khan was great, but moved much too slow), they ought to carve the tech cos into Baby Bells. Horizontally so they have to compete with themselves.

              > It's been a sad journey to see for someone born in the early 80s.

              The dream of the open web, privacy, freedom of speech, and freedom of computing is being killed by the oligarchy. And they convinced us the progressive thing to do was to give them our labor - they hung us with it.

      • bamboozled 10 hours ago ago

        Open source is still around? It is vastly improving your life even if you can't see it.

    • wnc3141 8 hours ago ago

      Read "why nations fail". It essentially covers this. Markets and technologies are great but ultimately bound by the systems of power they inhabit.

    • bananaflag 10 hours ago ago

      Well, I love to take showers, which involve a lot of tech like running water and water heaters and soap which I can buy from the supermarket.

      I lived in places without any of those and I wouldn't want to do it again.

      • hn_throwaway_99 9 hours ago ago

        While I believe the comment you are replying to may be too broad considering "all tech", I also strongly agree with the overall sentiment (and in particular I commend ost-ing for putting a general feeling I think a lot of people have so clearly and succinctly into words).

        As a Gen Xer, I grew up with a strong belief in the "goodness" of technology, of its power to make people's lives better and to ameliorate suffering. So after 25 years of seeing so much invested into technology that actively makes people's lives worse (e.g. ad-tech, social media algorithms), and even conservatively just results in the huge accumulation of wealth and power to the very few, I can't help but feel extremely disillusioned.

        Yes, I like showers and soap and running water, but I rarely see the type of economic investment into tech these days that will have as broad of a beneficial impact as running water did.

      • WillPostForFood 10 hours ago ago

        Tyranny of antibiotics and vaccines and MRI machines...

    • pdonis 9 hours ago ago

      > until we collectively redefine and enforce a value system that benefits us all

      Here's the problem: you can't.

      First, people have disagreements, often very fundamental ones, over what "benefits us all". There's no way to resolve many such disagreements short of brute force.

      Second, "enforce"--note the last five letters of that word--means some people are given the power to do things to other people that, if anyone else did them, would be crimes. Throw you in jail, fine you, restrict the things you can do. Indeed, that's how David Friedman, whose "The Machinery of Freedom" is worth reading, defines a government. And the problem is that government still has to be done by humans, and humans can't be trusted with the power to do such things.

      Ultimately the only defense we have is to not give other people such power. Not governments, not tech giants, nobody. But that requires a degree of foresight that most people don't have, or don't want to take the time to exercise, particularly not if something juicy is in front of them. How many people back when Facebook first started would have been willing to simply not use it--because they foresaw that in a couple of decades, Facebook would become a huge monster that nobody knows how to rein in? If my own personal circle is any guide, the answer is "not enough to matter"--of all the people I know, I am the only one who does not use Facebook and never has. And even I didn't refuse to use it back when it first started because I saw what things would be like today--I just had an instinctive reaction against it and listened to that reaction, and then watched the trainwreck slowly develop over the years since.

      So we're stuck. Even if we end up deciding that, for example, the government will break up the tech giants, slap huge fines on Zuckerberg, Bezos, etc., maybe confiscate a bunch of their property, maybe even make them do a bunch of community service, possibly even some of them serve some jail time--it will still be just other humans doing things to them that no humans can be trusted to do. It won't fix the root problem. It will just kick the can down the road a little longer.

      • worik 8 hours ago ago

        > Here's the problem: you can't.

        It is not possible to "enforce" a "value system". True.

        But we can have a value system we share that benefits us all. We can work together to improve our welfare as a whole.

        > Ultimately the only defense we have is to not give other people such power.

        That is untrue. We must have webs of trust, and within those webs there are power hierarchies. The trick is that those hierarchies must not be arbitrary nor permanent.

        This is a problem that has been wrestled with, and solved, several times in human history. Any human structure is vulnerable to outside attack, and most (all?) vulnerable to internal decay, but they can be established and are worth establishing.

        An example I can think of is the Republican Militias as described by George Orwell in Homage to Catalonia, and activist groups I have been involved with here in Aotearoa.

        Nothing lasts for ever - that does not mean we cannot work together for good things.

        --

        In one group I was involved in, we used to have monthly meetings, by phone, of the organising committee. We had thirty members (all activists running hot with their own opinions) and made decisions by consensus. I was on that committee for five years and we went over our allotted two hours twice: once by ninety minutes (that was a day!) and the other time by five minutes. Nobody ever felt unheard - that mattered to us.

        It is possible; people have been doing it forever. The good things we make are vulnerable, but we should still aspire to them and achieve them.

        • pdonis 8 hours ago ago

          > It is not possible to "enforce" a "value system".

          That's not what I said. What I said was that you can't enforce a value system "that benefits us all"--because "us all" will never agree on what value system that should be, at least not once you get beyond small groups of people. But of course you can just declare by fiat that your preferred value system "benefits us all", and ignore objections, and if you have enough brute force at your disposal, you can enforce it. You just won't be enforcing a value system that actually "benefits us all".

          > We must have webs of trust

          Yes.

          > within those webs there are power hierarchies.

          Only if you let that happen. But in a sane web of trust, you don't--because in a sane web of trust, everybody understands that power--in the sense of someone you trust doing something that harms you, simply because you're unable to prevent them--is a betrayal of trust.

          > This is a problem that has been wrestled with, and solved, several times in human history.

          I disagree. I certainly don't see the Republican Militias in the Spanish Civil War as solving this problem.

          > that does not mean we cannot work together for good things

          Of course we can. But our ability to do that without violating any trusts is limited, often very severely, by how many people we can get to agree with us, without any force or coercion being applied, on what "good things" to work together for. Unfortunately utopian dreamers and "revolutionaries" throughout human history have failed to recognize this basic fact, and their attempts to make a better world have always resulted in mass suffering and death.

        • pdonis 7 hours ago ago

          > we can have a value system we share that benefits us all

          If "us all" is a small enough group, sure. It doesn't scale, though.

        • tremon 8 hours ago ago

          > It is not possible to "enforce" a "value system". True.

          I think that organized religion wants to say a few words here.

    • bicepjai 10 hours ago ago

      I agree. I want technology in the hands of people (I want to control my data), and I don't believe in the cloud anymore, since corporate greed and ad tech have destroyed trust in the cloud use case.

    • vatsachak 8 hours ago ago

      Technology can also divide power. Think about the amount of open source intelligence that exists.

    • keeda 8 hours ago ago

      Technology is primarily an accelerator. It just accelerates things, both good and bad, that were already in motion or at least already possible. Which is why things like healthcare got better but things like wealth inequality got worse.

      What needs to change is the system in which that technology exists inside, because otherwise removing technology will still keep us on the same trajectory to the same destination, only much slower and possibly with much more pain.

      What we're seeing right now with layoffs and everything else is simply an acceleration of our current trajectory. We were always going to get here, AI just got us here a few decades ahead of schedule.

      For once, however, we have a technology that could let us change this trajectory. I've said this before, but the capital class held so much power because it took a lot of people, and hence a lot of capital, to take on large endeavors that created new wealth. But things were rigged such that those who provided the capital also captured most of that new wealth.

      Now, just as AI lets companies (i.e. capital) do the same things with fewer people, it also lets people do the work of entire companies by themselves... i.e. without capital. That is a big enough shift in power dynamics to alter the trajectory in previously inconceivable ways.

    • lyu07282 9 hours ago ago

      This is exactly right, and it gets exemplified by Chinese people's attitude to new innovative technology. It's all doom and gloom in the west. But the thing is that both are correct stances on this technology, for exactly the reason you mentioned. It's fundamentally different systems of governance of the same technology. One only ever saw new technology result in consolidation of power, collective decline, and genuine grievances ignored; the other has demonstrated the ability to make new technology increase the collective well-being of everybody. It's the low vs. high trust society; see the inverse of Francis Fukuyama's 1995 book Trust.

    • jgalt212 10 hours ago ago

      > Technology amplifies power and until we collectively redefine and enforce a value system that benefits us all, the advancements in technology simply serve as a means of subjugation.

      True during the mainframe era. Not true during the PC age. Perhaps true again during the frontier model / data center age. Maybe not true again when hostable open-weights models become efficient and good enough.

    • toasty228 11 hours ago ago

      > As someone who has spent a vast portion of life believing technology would make life better, I've come to the realisation that this idea is a fallacy.

      I have to very regularly remind myself many people genuinely believe this shit and are not straight up evil/maniacs, it's getting harder

      • ismailmaj 10 hours ago ago

        I'm thinking that, personally, technology is not bad in a vacuum and not necessarily bad in society; it just reveals that our system is ill-equipped to guarantee good usage of it.

        We could have fun defining what's good usage but we're so far from it, it would just make me sad.

      • stevenhuang 7 hours ago ago

        What, so you believe technology corrupts absolutely? Nonsense.

    • vjvjvjvjghv 9 hours ago ago

      "Technology amplifies power and until we collectively redefine and enforce a value system that benefits us all, the advancements in technology simply serve as a means of subjugation"

      My theory is that AI and robotics have the potential to break capitalism as we know it. We will probably reach a point where machines will be better than humans at pretty much anything and there will be almost no need for workers who just do a job (like most of us). But if nobody has money to buy things then there is no point in producing anything. Not sure where this will be going but I am pretty sure the capitalists will not voluntarily share the gains.

      In theory all this progress should be great and exciting for humanity but without changing the system there may be dark times coming for most of us. I always have to think of Marshall Brain's "Manna" story. It may be a spot on prediction of things to come.

    • jmyeet 10 hours ago ago

      Thing is, technology (particularly automation) could make life better but it not doing that is a choice. Think about it. We could live in a world where people only had to work 20 hours a week or even at some point not at all. We don't do that because we have a system that simply makes a handful of people even wealthier. We will likely see the first trillionaire minted in our lifetimes. That is an unimaginable and unjustifiable amount of money for one person to have.

      So you're not really complaining about technology making things worse. You're complaining about wealth inequality, which is a direct result of the mode of production and the organization of the economy.

      Internet access should, at this point, be basically free. The best Internet in the country is municipal broadband. It's better and it's cheaper. It's owned by the town, city or county that it's in, which means it's owned by the citizens of that municipality.

      Instead what we have in most of the country are national ISPs like Verizon, Comcast, Spectrum and AT&T and the prices are sky high. They are only sky high so somebody far away can continue to extract profit from something that's already built and not that expensive to build.

      You will get lied to by people saying national ISPs have economies of scale. Well, if that were true, why is municipal broadband so much better and cheaper? Why would there be state laws that make municipal broadband illegal? Why would national ISPs lobby for such laws?

      • alehlopeh 9 hours ago ago

        If it’s a choice, then who gets to make that choice? Certainly not the individual. I don’t remember getting to choose anything of the sort. If you say it’s society that makes the choice, how does that work exactly? Through democracy and governance? Well then society did make the choice. Are you then complaining that the choice society made is not the one you prefer? From the perspective of the individual, it’s not a choice at all.

      • joe_mamba 9 hours ago ago

        >We could live in a world where people only had to work 20 hours a week or even at some point not at all.

        How would your country function, if all medical staff, construction, rail, sewage, police and firefighters suddenly worked half as long or not at all, starting tomorrow?

        Because my home country tried this whole "if we seize the means of production from the wealthy elites, we won't have to work as hard anymore" ~80 years ago, and guess what happened to the workers? Were they working less hours for more money, OR, were they working just as much while also starving and being plagued by shortages?

        The problem with your logic is that it only applies to bullshit Western corporate office jobs that aren't actually doing much useful work in their 40 hours anyway. All those office jobs that don't need 40 hours were just subsidized by endless money printing; during the ZIRP era, companies were hiring people just to raise headcounts and boost stock valuations to gullible investors they could rug-pull. That's why you see so many layoffs happening now that the bubble has popped and the jig is up.

        And it only works in a world where you own the world reserve currency, and globalisation, free trade and international competition does not exist, because the countries who will work harder than you, will outcompete you and subjugate you in the long run so you can make their sneakers and phones for 60h/week, while they kick it back and live from printing money.

        • jmyeet 8 hours ago ago

          Your assertion seems to be that those fields haven't had massive productivity gains over time. Clearly they have. Firefighters aren't carrying buckets of water or using a horse and cart. No, they use fire engines, helicopters and planes. They use more advanced fire suppressants and protective gear.

          Medical professionals have better diagnostics, health records, MRIs and other imaging equipment and so on. The medical profession is pretty much a perfect example of my point, actually. Do we train more doctors (per-capita) or just expect existing doctors to work more hours? There are a whole bunch of vested interests in constraining doctor supply.

          Likewise, resident physicians are incredibly profitable for hospitals because they create a lot of value and cost nothing. You see this where various parties are trying to increase emergency medicine residencies from 3 to 4 years.

          Hospitals hate fully-qualified attending physicians because they can't artificially suppress their salaries. It's why we've gotten things like Nurse Practitioners, Physician Assistants, CRNAs, etc. It's also why, for example, you see a case in Oregon where private equity is trying to destroy physician organization. I'm of course talking about Peace Health and ApolloMD, a recent case in Oregon.

          We also make medical people spend a bunch of time dealing with insurance BS, for literally no reason.

          This isn't just a BS "corporate job" thing.

          • joe_mamba 17 minutes ago ago

            You said a lot of off-topic rants without answering my question directly: how would your country function if all medical staff, construction, rail, sewage, police and firefighters suddenly worked half as long or not at all, starting tomorrow?

            You mentioned a lot of tech improvements since the stone age as an argument, but the existing staff working 40h already use those tech advancements.

    • runarberg 9 hours ago ago

      GMO was the first technology where I figured that out. It was heavily pushed around 2007 as a solution to world hunger, but at the same time it was very easy to see that hunger was a problem of distribution, not technology. Even back in 2007 we made much more food than was required to feed the world population. Furthering the obviousness of the lie was that in reality GMO was (and still is) mostly used for growing feed or cosmetic products. And on top of that we had large monopolies, with patents to protect and herbicides to sell, pushing this technology the hardest.

      Now, 20 years later, the technology is mature and many of the patents have expired, but GMO has done absolutely nothing to solve world hunger.

      • worik 8 hours ago ago

        Yes.

        We do not need GMO for food, I agree.

        But where is the genetically engineered heart muscle that sits in the engine bay of my car, running on nutrient solution, excreting CO2 and urine, while driving my car with a hydraulic motor?

    • shevy-java 10 hours ago ago

      > As someone who has spent a vast portion of life believing technology would make life better, I've come to the realisation that this idea is a fallacy.

      Technology is not a good-only or evil-only thing. There are use cases that are beneficial and use cases that are not beneficial. The technology by itself isn't what makes things worse. Even many thousands of years ago, humans used weapons to bash in other humans. Remember Ötzi (https://en.wikipedia.org/wiki/%C3%96tzi#Body): he was killed by arrows, most likely shot by someone else, around 3230 BC. Nuclear energy is used as a weapon or as a source for the generation of energy (or rather, transformation of energy). And so on and so forth.

      IMO the biggest question has less to do about technology, but distribution of wealth and possibilities. I think oligarchs need to be impossible; right now they are causing a ton of problems. Technology also creates problems, I agree on that, but I would not subscribe to a "technology makes everything worse". That does not seem to be a realistic assessment.

      • iugtmkbdfil834 10 hours ago ago

        This. But suggest that maybe someone should stop being able to 'create wealth' past their 7th island in French Polynesia, and people go nuts.

    • themafia 10 hours ago ago

      Technology is simply a technique to leverage and extend human desire. It's a tool. It's in the hands of those who control and use it.

      You shouldn't blame technology. You should blame the maniacs that have latched on to it as a way of extending their power. You should blame the government for their failures of regulation. You should blame the media for failing to cover this obvious problem.

      The people who want to subjugate you are the problem.

      • apsurd 10 hours ago ago

        "The problem is them"

        no no, we're not doing that.

        • themafia 5 hours ago ago

          I named three specific groups. You can easily find the bad actors within these groups. There is no "them." There is a highly identifiable set of individuals that take actions which cause major problems in our society. On this there is absolutely no question. In fact there is even good social consensus as to who these people are.

          I get that Hacker News would rather avoid inconvenient issues or simply satirize them, but I couldn't care less.

    • Forgeties79 10 hours ago ago

      > until we collectively redefine and enforce a value system that benefits us all

      Tons of us called for common sense guard rails and a little bit of actual intention as we rolled out LLM’s, but we were all shouted down as “luddites” who were “obstructing progress.”

      We all knew this was coming. It’s been incredibly frustrating knowing how preventable so much of it has been and will continue to be.

      Edit: these responses are absurd. Banning GPU’s…? What are you on about? Who said anything about stopping or banning LLM’s? Did none of you see “guardrails”? “A little bit of actual intention”? Where are you getting these extreme interpretations?

      I’m talking basic regulatory framework stuff. Regulations around disclosure, usage, access, etc. you know, all the stuff we neglected and are now paying for with social media in droves? We have done this song and dance so many times. No one is going to take away your precious robot helper, we’re just saying “maybe we should think about this for more than two seconds and not be completely blinded by dollar signs.” I mean people have literally died in my state because Zuckerberg wants to save a few bucks building his data center.

      It feels like AI evangelists come out the woodwork seething if anybody even implies you shouldn’t be allowed to do literally whatever you want at all times.

      • iugtmkbdfil834 10 hours ago ago

        Sigh, and what guardrails are common sense? Are those the same level of common sense as those advocated for guns ( and narrowed down at every possible opportunity )? Some of us see this tech as possibly revolutionary and thanks to useful individuals calling for muzzling that tech we now have the worst of both worlds: centrally controlled, not really open ( weights are just weights -- though Meta actually deserves some credit here ), and heavily muzzled.

        Clearly, powers that be learned all too well from internet rollout.

        • Forgeties79 7 hours ago ago

          >sigh,

          I stopped reading. No point in engaging this if that’s how you’re kicking things off.

          • CamperBob2 5 hours ago ago

            "Sigh" is indeed a thought-terminating cliche. So is "Common sense."

      • mat_b 10 hours ago ago

        Except that it's not preventable. Technology is always an arms race. If you don't create it, someone else will, and then they'll have the advantage and subjugate you, so you might as well be the one to do it first. Whatever it is that you're trying to prevent, someone is going to do it if it gives them power.

        • Forgeties79 8 hours ago ago

          What’s not preventable? What do you think I am saying, exactly?

          • stevenhuang 3 hours ago ago

            I don't think even you know what you're saying.

      • zzzeek 10 hours ago ago

        It wasn't "preventable" though. How would you prevent what's been happening? Pass a law making GPUs illegal? Just... "convince" everyone that the machine that can write working software, business letters, and good-enough banner and print advertising for nearly free is evil, and just not use it (ask Emily Bender how that's going)? There is no realistic way of stopping any of this from happening. We need a different approach.

        • Forgeties79 8 hours ago ago

          I proposed literally none of those solutions or even suggested them.

          • iugtmkbdfil834 4 hours ago ago

            << common sense guard rails and a little bit of actual intention

            The issue is that you seem to be proposing nothing but platitudes, and when called on it, you did not elaborate but hightailed it to the cloak of the misunderstood defender of sense and sensibility.

    • wotsdat 10 hours ago ago

      go steal someone else's money, damn socialists

  • LightBug1 10 hours ago ago

    That's what's making its employees miserable ????!

    • daveguy 9 hours ago ago

      You'd think the knowledge that they're creating technology to purposely addict children for their attention and data would contribute more to making them miserable.

      • bigfatkitten 9 hours ago ago

        If you choose to apply for a job at somewhere like Meta, you’re clearly not the sort of person who lets your moral compass trouble you.

    • hunterpayne 10 hours ago ago

      Right, Meta's offices are awful places to work. Loud, huge open offices that sound like a cocktail party all day, every day. That would make me depressed.

      • Ifkaluva 9 hours ago ago

        Open office floor plans are most of the tech industry

  • sharts 5 hours ago ago

    Who cares? These folks chose that path. Why should anyone sympathize?

  • swingboy 8 hours ago ago

    I fear it's only a matter of time until someone with nothing to lose does more than throw a molotov at Sama's house.

    • xyst 8 hours ago ago

      who cares? just like the uhc ceo, they will get replaced.

      working class is replaceable. So is the "executive" class. Only the billionaire/epstein class will prevail in this hyper capitalistic society.

      • allthetime 8 hours ago ago

        Imagine other possibilities.

  • Tenoke 8 hours ago ago

    In 2019 I suggested[0] you might reach AGI if you train on computer usage - mouse movement, keypresses, what's on the screen etc. - and it sounds like Meta are kind of trying some form of it.

    0. https://svilentodorov.xyz/blog/human-imitating-task/
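    For the curious, a minimal sketch of what such a computer-usage corpus might look like. The record shape, field names, and example events here are my own illustration, not taken from the linked post: the idea is just that screen states interleaved with mouse and keyboard events, serialized as one JSON record per line, form a sequence you could train a next-action model on.

    ```python
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class UsageEvent:
        # One record in a hypothetical human-imitation dataset:
        # when it happened, what kind of event it was, and its details.
        t: float       # timestamp in seconds from session start
        kind: str      # "screen", "mouse_move", "key_press"
        payload: dict  # event details, e.g. {"x": 412, "y": 230}

    def encode_session(events):
        """Serialize one session of usage events to JSONL, the kind of
        sequence a next-action model could be trained to continue."""
        return "\n".join(json.dumps(asdict(e)) for e in events)

    session = [
        UsageEvent(0.00, "screen", {"hash": "abc123"}),
        UsageEvent(0.05, "mouse_move", {"x": 412, "y": 230}),
        UsageEvent(0.31, "key_press", {"key": "g"}),
    ]
    print(encode_session(session))
    ```

    In practice the hard parts the comment glosses over are capturing the screen state compactly and aligning it with the event stream, but the training data itself can be this simple in shape.
    
    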

    • codemog 8 hours ago ago

      A lot of people had this idea. You’re going to have to take a vague idea to fruition if you want the props you’re looking for.

      • Tenoke 7 hours ago ago

        Can you point me to them? I couldn't find anyone writing a version of that idea back then, I'd be curious to read how others framed it.