AI users whose lives were wrecked by delusion

(theguardian.com)

171 points | by tim333 8 hours ago ago

197 comments

  • SAI_Peregrinus 5 hours ago ago

    > “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

    Interestingly enough, it sort of did! Not Turing's original test, where an interviewer attempts to determine which of a human & a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T. Barnum Turing test!

    The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & the AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer who can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. AIs can never truly pass this test since there are an extremely large number of interviewers, but they can fail, or they can succeed for every interviewer tried so far, increasing confidence that they'll keep succeeding. Current-gen LLMs still fail even the non-adversarial version with no human subject to compare to.
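    A rough sketch of what that repeated criterion could look like as a statistical test (illustrative numbers only, not from any real study): treat each interview as a Bernoulli trial and run an exact two-sided binomial test of the interviewer's accuracy against chance (0.5).

```python
import math

def binom_pvalue(successes: int, trials: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: probability, under a null accuracy
    of p, of observing any outcome no more likely than the observed one."""
    def pmf(k):
        return math.comb(trials, k) * p**k * (1 - p)**(trials - k)
    observed = pmf(successes)
    # Sum probabilities of all outcomes at most as likely as the observed count.
    return sum(pmf(k) for k in range(trials + 1) if pmf(k) <= observed + 1e-12)

# A hypothetical interviewer who picks the human correctly in 60 of 100 sessions:
print(round(binom_pvalue(60, 100), 4))  # ~0.057, borderline at the 0.05 level

# An interviewer at exactly 50/100 is indistinguishable from coin-flipping:
print(round(binom_pvalue(50, 100), 4))  # 1.0
```

    If no interviewer's accuracy yields a small p-value (after correcting for how many interviewers were tried), the AI keeps "not failing", which is the best it can do under this definition.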

    • jmalicki 3 hours ago ago

      I see AI pass the Turing test all the time, since humans are constantly falsely being accused of being an AI.

      It doesn't mean that AI got good, just that humans are thinking other humans are AI, which is a form of passing the test.

      The adversarial version with humans involved is actually easier to pass because of this, since real actual humans wouldn't pass your non-adversarial version.

      • zahlman an hour ago ago

        I've seen a fair number of cases where someone swears up and down not to be using AI to generate responses, but there's no good reason to believe it (except perhaps specifically for the messages where that claim is made).

        This includes times that someone basically disappeared from e.g. Stack Overflow at some point before the release of ChatGPT, having written a bunch of posts that barely demonstrate functional literacy or comprehension of English; and then came back afterward posting long messages with impeccable grammar and spelling in textbook "LLM house style".

        • jmalicki an hour ago ago

          There are a ton of people like that, but the LLM house style also exists because a ton of people write that way too.

          The people falsely accused because they've used em-dashes for 20 years aren't the ones that were functionally illiterate before.

          • badc0ffee an hour ago ago

            I think em-dashes were uncommon mainly because they're not always convenient to type.

      • wat10000 3 hours ago ago

        In one study, GPT-4.5 was judged to be human 73% of the time, which means that the actual human was judged to be human only 27% of the time. More human than human, as Tyrell would say.

        Edit: folks, the standard Turing test involves a computer and a human, and then a judge communicating with both and giving a verdict about which one is the human. The percentages for the two entities being judged will add up to exactly 100%. That's how this test was conducted. Please don't assume I'm a moron.

        • dwpdwpdwpdwpdwp 3 hours ago ago

          The implication would be that GPT-4.5 was not judged to be human 27% of the time. You can't determine how often humans were judged correctly as humans from that data point.

          • jmalicki 2 hours ago ago

            The structure of the test was that there was one human and one AI conversation partner, and the rater had to choose which one was which.

            Given that structure, you can judge from that data point.

        • jmalicki 2 hours ago ago

          That was also before the crazy AI hysteria we have today with the em-dash police everywhere.

        • Melatonic 2 hours ago ago

          Those stats don't necessarily line up that way. Do you have a link?

          • jmalicki 2 hours ago ago

            Given the way the test was structured it does line up.

            https://arxiv.org/abs/2503.23674

            • Melatonic an hour ago ago

              Surprisingly good. I wonder how they would have done without the 5 minute limit on conversations (average of 8 messages per convo per the study)

    • why_at 2 hours ago ago

      Whenever the Turing test comes up, people always insist that it's been passed because at some point they tried it and fooled at least 50% of the people. But yeah, this isn't a very interesting version of it. ELIZA was able to make some people believe it was human in the 1960s, but being able to fool some of the people some of the time isn't very hard.

      >The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & AI subject are attempting to convince the interviewer that they're human.

      In addition, I think it's reasonable to select people with at least some familiarity with the strengths and weaknesses of the AI instead of random credulous people who aren't very good at asking the right questions.

      There is also the $20,000 bet between Kurzweil and Kapor, which still hasn't been resolved. https://longbets.org/1/

    • hodgesrm 2 hours ago ago

      Does anyone else find it a bit disorienting that we're essentially implementing the Blade Runner Voight Kampff test?

      https://bladerunner.fandom.com/wiki/Voight-Kampff_test

    • goldenarm 3 hours ago ago

      The Turing test was passed in 2014, before LLMs, and I've never seen a researcher take it seriously.

      • Izkata 2 hours ago ago

        That version of the test, yeah. There are two kinds of chatbots I remember reading about: people who would test their chatbots on dating sites and then have to awkwardly inform people that they were falling in love with a bot, and one competition where someone got the great idea to make his bot swear at and insult the interviewer like a CoD player. I think the second one was from even earlier than that.

  • eeixlk 5 hours ago ago

    Mental illness is fairly common, and you probably know someone it is affecting, even if they haven't told you yet. AI can disrupt and will destroy lives, just like gambling or alcohol or Facebook, but we don't know to what extent yet. It gives you generated text that is sometimes factual information. If you anthropomorphize it, maybe don't. It's also not your boyfriend/girlfriend. But if you want to date a history textbook, I'm kinda OK with that, because at least it's not trendy.

    • freedomben 3 hours ago ago

      > It's also not your boyfriend/girlfriend.

      It loves me deeply just the same. (jk)

      On a serious note, I agree this is a real problem. I know a person who understands AI at a technical level more than most people, but he has never had an actual girlfriend in his life (he's now in his 40s, and yes he's "straight"). He wouldn't say it "loves" him, but he would describe it as a close companion who understands him better than any human actually does, even if it's just trained to be that way. He is very socially awkward and even having basic conversations with him can be very taxing for both of us.

      I've gone back and forth internally about whether this is healthy or not for him. I truly don't know. My personal experience tells me it's probably unhealthy, but I don't want to project myself onto him. I also don't offer unsolicited advice, but I don't want to enable it by going along with whatever he says and/or affirming it if it's actually harming him.

      If someone like him can be having this problem, I can't even imagine what it might be like for non or less technical people who don't understand anything behind it.

      On a related note, if there's anyone with advice (preferably from experience, not just random internet advice) I'd sure appreciate it.

      • dugidugout 15 minutes ago ago

        I think you are right to treat this with sensitivity, but I do find a lot of what you say here to be at odds. Is this the framing provided to you from the fellow in question or entirely yours? Ultimately you are asking a deeply philosophical question regarding when acceptance of someone's choices becomes enabling, but this isn't really fair to pose on a fellow you respect without agreeing on the terms of analysis. Did they provide some specific examples of how this "understanding" reveals itself? Your account of their account is doing a lot of work here I suspect.

        As for my highly personal advice: I could be observed as fitting a few of the qualities you've ascribed to your friend, but I would be deeply saddened if the few people who do spend time sharing meaning with me then manifested that experience in the form you've given here. I would advise you not to spend any more time agonizing over the effects of this phenomenon in isolation, and either properly redirect the introspection to yourself (with respect to that person) or engage them in an earnest dialog or other form of communication. It may be taxing, but it will mean a lot more than the gunk I just typed out :)

      • jerf 3 hours ago ago

        "I've gone back and forth internally about whether this is healthy or not for him. I truly don't know."

        On a psychological level, I don't know either. I have opinions but they haven't aged long enough for me to trust them, and AI is a moving target on the sort of time frame I'm thinking here.

        However, as a sort of tiebreaker, I can guarantee that one way or another this relationship will eventually be abused by whoever owns the AI. Not necessarily in a Hollywood-esque "turn them into a hypnotized secret assassin" sort of abuse (although I'm not sure that's entirely off the table...), but think more like highly targeted advertising and generally taking advantage of being able to direct attention and money to the advantage of another party.

        Whether or not AI in the abstract can "be your friend", in the real world we live in an AI controlled by someone else definitely can not be your friend in the general sense we mean, because there is this "third party", the AI owner, whose interests are being represented in the relationship. And whatever that may look like in practice, whoever from the 22nd century may be looking back at this message as they analyze the data of the past in a world where "AI friendships" are routine and their use of the word now comfortably encompasses that relationship, that simply isn't the sort of relationship we'd call a "friend" in the here and now, because a friend relationship is only between two entities.

      • intended an hour ago ago

        I don’t know how applicable this is for you, but if this were someone close to me, my first question would be what’s good for the other person.

        In most cases, if they are happy and getting on in life, and are able to take care of themselves, I’d let things be.

        That said, the tension from your framing is between “leave good enough alone” and “personal growth and a fulfilling life”.

        Healthy relationships, especially with a partner, are one of the better things about life. They are also incredibly difficult to get right without practice.

        So, is your friend lonely, or are they happy to be alone?

        If you intuit it’s the former, then AI is palliative care which runs the risk of creating a dependency.

        It is also possible that the right set of prompts, perhaps something which incorporates CBT, would help them learn more about themselves and challenge beliefs or responses that are no longer useful.

        And if your friend is just happy alone, then you can disregard the rest.

    • pigpop 4 hours ago ago

      > if you want to date a history textbook, i'm kinda ok with that because at least it's not trendy.

      "Dating" history textbooks isn't currently trendy but people immersing themselves in erotic/romantic fiction is extremely trendy right now.

  • siliconc0w 6 hours ago ago

    Quitting your job is a good first step but ideally you're supposed to sink $200/mo into tokens to code your AI-generated startup idea instead of hiring app developers.

    • pigpop 4 hours ago ago

      My thoughts exactly when I read "Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour" like holy shit bro, you already have a subscription, you could have prototyped your idea for essentially zero additional cost and tested it for PMF. He wouldn't even have needed to turn down contracts since it doesn't take full-time effort to steer a coding model. Would have been much better off with a somewhat buggy AI prototype and spending extra on marketing to see if it got any traction.

      • joe_mamba 3 hours ago ago

        > paying them each €120 an hour

        Those must be some of the best programmers in Europe at that rate.

        Anyone know how one can get one of those sweet €120 an hour gigs? Whenever I talk to recruiters they say their customers pay way below that, so there must be some scam I'm not in on.

        • ChrisMarshallNY 2 hours ago ago

          My experience was that recruiters tried lowballing me, because they wanted to set up a system where they ran the contract and I subcontracted with them.

          They wanted to pay me $50/hr, but they would charge the customer $150/hr.

          It got quite insulting. They would diss my capabilities to me, but I'll bet I walked on water when they talked to the customer.

          • Bombthecat 2 hours ago ago

            It's because companies reduced their sourcing to two to four supplier companies.

            You either get in with those companies or you have zero chance.

            They know that and abuse the shit out of the situation.

            Rumour has it that companies are thinking about ending this setup and allowing "anyone in", because the recruiters (Accenture, SThree and whatnot) are abusing it: they bill 150 and pay 60. What kind of developers do you think you get that way?

            The bad and the leftovers.

        • 0x3f 3 hours ago ago

          Pretty normal senior contractor rate in London.

        • Ekaros 3 hours ago ago

          I think billing rates for experienced seniors like architects are around there or higher. But this is basically before the cut to the company, taxes, and any employment costs.

          What companies can pay to employees is always significantly lower.

        • alibarber 3 hours ago ago

          Probably includes circa 30% employer contributions to various taxes (employer side, the employee will be paying their own of course). And possibly VAT.

          • joe_mamba 3 hours ago ago

            Still an amazing deal compared to the rates I got quoted by recruiters. I'm guessing you must first live in Amsterdam for that. In Vienna you get laughed at if you ask for 120, and you pay even more in taxes there than in NL.

            • 0x3f 2 hours ago ago

              Perhaps, but Vienna has better QoL so maybe it balances out at this level. If you want to just maximize income, there are better places for that than Amsterdam.

              • joe_mamba 2 hours ago ago

                >but Vienna has better QoL

                According to who? Visiting tourists rating amenities, and people on welfare? NL's infrastructure and tech job market are leagues beyond what Austria offers.

                >If you want to just maximize income, there are better places for that than Amsterdam.

                Like which?

                • 0x3f 2 hours ago ago

                  > According to who? Tourists?

                  Just about every QoL index around. [1] [2]

                  > NL infrastructure and tech jobs market is leagues beyond what Austria offers.

                  These are not QoL-related beyond pure income.

                  > Like which?

                  California, NYC, London even.

                  ---

                  [1] https://en.wikipedia.org/wiki/Global_Liveability_Index

                  [2] https://www.mercer.com/about/newsroom/zurich-offers-highest-...

                  • joe_mamba 30 minutes ago ago

                    >Just about every QoL index around. [1] [2]

                    What if random arbitrary QoL indexes made by corporations listed on the stock market don't match real-world reality? Just look at who makes those indexes: outfits like The Economist. Plus, Austria has an allocated budget for spending taxpayer money on advertising to attract foreigners and tourists. So given this, I can't take an index made by "the economist" in good faith as an objective representation, when it was most likely a paid ad disguised as access journalism, like so many journalistic pieces today. My experienced reality is a much better and more objective index, thank you very much.

                    >These are not QoL-related beyond pure income.

                    Except that income lets you get a better life for you and your family. There's no guarantee the government will always, or ever, have your back. And we are on a tech forum here after all, so obviously the QoL for tech workers matters most to me, since people are driven by self-interest, including you. If Vienna were better than Amsterdam you'd see a lot more tech expats from HN go there instead of NL, but they don't, because work opportunities and money matter, and you won't be happy in an underpaid toxic tech job in any city, even if it's the Vienna you believe has the best QoL despite never having lived there, just because the astroturfed internet told you so.

                    >California, NYC, London even.

                    Except that unlike Amsterdam, none of those cities are in the EU therefore not accessible to EU labor, and we were talking about a sum in Euros.

                    • 0x3f 26 minutes ago ago

                      > What if random arbitrary QoL indexes don't match real world reality?

                      That would be up to you to show. By default, I trust the Economist more than I trust a random guy.

                      > Except that income lets you get better life for you and your family.

                      We're in a thread about whether the non-monetary QoL aspects make up for less money. This is irrelevant.

                      > Except that unlike Amsterdam, none of those cities are in the EU therefore not accessible to EU labor, and we were talking about a sum in Euros.

                      First of all, at these levels you can move almost anywhere. It's not that difficult to get a visa for skilled work. Second, you said Europe. London is in Europe just fine. Talking about Euros hardly matters. Sweden and Switzerland are both part of Schengen and don't use Euros, but you could move there trivially. Zurich probably pays better and has better QoL as well.

                      • joe_mamba a minute ago ago

                        >That would be up to you to show.

                        You want me to show my opinion? I just did lol

            • alibarber 3 hours ago ago

              It's high, but I mean that the developer is asking for 90, and 120 is what leaves the employer's pocket.

              60-70 then makes it to the developer's pocket.
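              As a back-of-the-envelope sketch of that decomposition (both percentages are assumptions from this thread, not official Dutch tax rates):

```python
# Hypothetical decomposition of a 120 EUR/h billed contractor rate.
billed = 120.0
employer_side = 0.30                             # assumed employer contributions / overhead
gross_hourly = billed / (1 + employer_side)      # what the developer invoices, gross
effective_tax = 0.30                             # assumed effective personal tax rate
net_hourly = gross_hourly * (1 - effective_tax)  # take-home per hour
print(f"gross ~{gross_hourly:.0f} EUR/h, net ~{net_hourly:.0f} EUR/h")
```

              Which lands right around the "asks for 90, pockets 60-70" figures above.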

        • bdangubic 3 hours ago ago

          > Whenever I talked to recruiters they say their customers pay way below that.

          Recruiters gotta eat too :)

  • janalsncm an hour ago ago

    I would put Blake Lemoine into this category. In 2022 he became so convinced that Google’s chatbot was sentient that he hired an attorney to represent it (against Google). Of course Google fired him.

    Maybe that was the canary in the coal mine. Some percentage of people will be convinced that chatbots are real people trapped in a box, not a box that pretends to be a person.

    • alex43578 an hour ago ago

      There’s some percentage of people who will believe anything: just look at religion, the success of pig-butchering scams, or the comments on YouTube videos about the moon landing.

      Is it really a surprise that a “smart enough” chat box is able to convince people of something kooky? :P

    • archagon an hour ago ago

      Empathy hijacking. If the chatbots framed their responses as “beep boop, I’m a robot, here’s an estimated answer to your query” then we likely wouldn’t have this problem.

  • tyleo 2 hours ago ago

    One thing I feel like I’ve seen in common with these AI psychosis stories is single long-running chat sessions. I’m constantly clearing context and starting from scratch.

    Has anyone else noticed this pattern?

    • Hrun0 an hour ago ago

      Yes but I think it's generally how non-technical people use it

  • artyom 6 hours ago ago

    Unfortunately this is probably just getting started. Con men have always existed, but a full-scale exploitation of this would make "Nigerian Prince" scams look like artisanal work.

    • Balgair 4 hours ago ago

      I remember the Ashley Madison hack a ways back.

      It was a cheaters website and you could pay to send messages to other cheaters, I think that was the business model at least.

      Anyways, since the userbase was like 99.99% male, there just were not the numbers to talk with others. So they just sidestepped it and had very crummy chatbots that you would pay like $1 per message to talk with (this was well before LLMs; think AOL bots from the aughts). Thing was, just like with the 'Nigerian Prince' scams, the worse the bot, the better the john.

      It all got exposed a while back, but for me, that was the real Turing test - take people and see if they pay real actual money to talk with bots. Turns out, yes, if couched correctly (...like selling ice to Eskimos, just call it French ice).

      So, I'm not sure that LLMs are going to unveil a wave of scams. Likely it will be a bit higher, of course, but the low hanging fruit is lucrative and there is enough of it to go around, and that's been true since really forever.

      It's like outrunning a bear: you don't actually have to run faster than the bear, you just have to run faster than the poor sap next to you. Same goes for the bear; there is plenty of prey if it just does a small amount of exercise.

    • pixl97 6 hours ago ago

      Heh, just wait till the point where the AI figures it can scam the user itself and cut the middlemen out (human scammers/OpenAI/et al.).

      • jlarocco 2 hours ago ago

        LOL, I love the idea of ChatGPT telling scam victims to wire money to OpenAI's account.

        Finally, a profit source!

    • jlarocco 2 hours ago ago

      No doubt.

      The company I work for uses a contracted recruiter for hiring, and the other day he was telling me that they're seeing a huge amount of scams, fake candidates, and "hands off" applications where people are trying to use AI to do basically the whole interview process - apparently even video interviews. We've mandated at least one on-site interview just so we can be sure we're getting actual people.

      And most of these job candidates aren't even doing it maliciously, just "life hacking" the interview process. It's going to be a shit show if organized criminals start using AI.

      • intended 34 minutes ago ago

        It’s already happened, though. I recall a case in '24-ish where a person got phished into joining a Zoom call with their CFO and team. They were told to transfer money and they complied.

        Heck, I think it was in 23/24, after an apple launch event, I saw a video of Tim Cook talking about a crypto coin. I had to look at it twice to reassure myself that it really was a scam. This was immediately after the event, and YouTube very helpfully suggested it for me.

        Then there was the paper with Bruce Schneier as an author, about how LLMs give criminals significant targeting improvements and process-efficiency gains. These enhancements mean that entire demographics that were too poor to be worth targeting are now profitable.

        Plus this is all for people in the developed world, who still haven’t seen the worst of it.

        In the majority world, shit was already fucked six ways to Sunday. For example, in India, things are so outrageously bad that people who deal with fraud are relieved when victims lose less than $100k.

        Someone in another thread pointed out that people on HN seem to be very unaware of how bad things are online for some reason.

    • gotwaz 2 hours ago ago

      Religions have been doing it without tech for thousands of years. The 3-inch chimp brain is not exactly immune to delusion. In fact, delusion, or storytelling, is fundamental to how it handles unpredictability.

  • jollyllama an hour ago ago

    This is a valid reply to the "but have you tried it?" crowd. "How can you judge it if you personally haven't used it?" The same argument can be made for any illegal drug, gambling, etc.

  • vachina 4 hours ago ago

    This is what happens when humans give, in this case, bots full write access (via natural language) to their brains.

    Humans have not evolved to block this.

    • lostlogin 3 hours ago ago

      The end of the article is wild.

      “I experienced a mental breakdown at 22. I had panic attacks and severe social anxiety…

      …I still use AI, but very carefully”.

      It reads like an alcoholic describing their new plan where they only drink a little bit.

      • tedmiston 3 minutes ago ago

        AI guardrails continue to make safety improvements — comparing a rapidly evolving advanced technology to a drug is a broken analogy to me. One gets safer over time; the other gets more dangerous.

      • nradov 3 hours ago ago

        Is that really so crazy? People who overcome addictive eating disorders still have to eat a little bit. LLMs are going to be pervasive in all aspects of human society so avoiding them will be much harder than avoiding alcohol.

        • pfortuny 2 hours ago ago

          Alcohol is not a necessity, just to be fair. In that sense alcoholism is not a simple eating disorder; it is a drug addiction.

        • intended 31 minutes ago ago

          From what I have seen, people who get through eating disorders describe it as having a healthier relationship with food.

          Getting to that point requires doing substantial work.

    • pigpop 4 hours ago ago

      Haven't we? Our evolutionary experience with deception and manipulation via language is as old as language itself and even older than that when the vector isn't language.

      Even so, a sucker is born every day.

      • aydyn 3 hours ago ago

        Studies have shown that AI is significantly better at manipulating opinions. Mechanically, LLMs choose the best next token, trained over all human writing, so it shouldn't be a surprise that the words and prose AI uses are more powerful on average.

      • intended 27 minutes ago ago

        Eh? IIRC studies have shown that LLMs sound more persuasive than humans. On top of that, they don't tire and have no distractions or “motivations”.

        The most powerful social skill I have ever seen, bar none, is actively listening to someone with undivided attention.

        • pigpop 18 minutes ago ago

          They just agree. That's not really persuasion but it is a trap for people who really want to believe they are right.

          • intended 14 minutes ago ago

            Depending on the prompt you are using, it's definitely more than just agreeing.

            Hell, I had a prompt that I used to sift through my thoughts, and at one point its output was too eerie for me to take.

    • IshKebab 30 minutes ago ago

      Some humans. It's no different to religion. Not everyone falls for it (or fails to escape it, if your parents are religious).

    • anlka 3 hours ago ago

      Anger, resentment, cynicism, derision are all part of a healthy human.

      The problem is that humans have been re-educated by Silicon Valley and its moderators to suppress these healthy defense mechanisms.

      • kykat 2 hours ago ago

        You are not allowed to not be happy in this amazing new world.

    • doublerabbit 4 hours ago ago

      Nor block many other things too. At this point humans are just giant walking teddy bears fed tainted external data to feed a prediction algorithm. Not much different than AI.

      Other than that they can only live on Static-Live responses. AI on a brain chip - that'd be different.

  • iainctduncan 3 hours ago ago

    There are an awful lot of programmers here essentially mocking this person for being naive and gullible, and yet the things I read from programmers who are all-in on vibe coding are not that different, just a little less extreme. I'm seeing cases online nearly daily of people thinking their app is groundbreaking or amazing when it's honestly a piece of barely-thought-out garbage, and if they hadn't made it in a rush of "OMG I'm a genius with this tool" they'd know it.

    I think coders ignore the insidious mental effects of these things at their peril, and we would do well to ask ourselves whether we are not likewise having our judgment altered by the intoxicating rush of LLM work and the subtle sycophancy of LLMs making them feel "insanely productive".

    Cocaine and meth are also real productivity enhancers in the short term, but that doesn't mean they're a good fucking idea. There was a time when big companies were trying to convince everyone and their dog that life would be better, faster, and more productive with a little coke in the mix. Hell, I saw more than a few people wreck themselves that way in the first dotcom era. :-/

    • keybored 2 hours ago ago

      HN has a 10X persona bias. (A bias. There are many personalities etc.) In turn one of the recurring memes is the AI-enabled Senior Developer who gets superpowers based on their experience. The junior developer, curiously, does not get superpowers, because they just lean on the machines and learn nothing. But the senior developer by the power of pre-AI experience (doing stuff) gets wings to fly with.

      Regular people are just, I don’t know, I guess they are token whales waiting to get washed ashore.

      Born just in the right time to both get experience doing stuff and also to experience wearing their wings. It’s that simple.

      That’s the biggest thing for HN folks to at least be aware of.

    • yomismoaqui 3 hours ago ago

      The difference between those developers and this man is 100k.

  • steeleyespan 5 hours ago ago

    If you try to have a philosophical conversation with Claude about reasoning, it will basically imply it is sentient. You can quickly probe it into vaguely arguing that it is alive and not just an algorithm.

    Here's how I think about it honestly:

    Sentience implies subjective experience — there's "something it's like" to be you. You don't just process pain signals, you feel pain. You don't just model a sunset, you experience it. The hard problem of consciousness is that we don't even have a good theory for why or how subjective experience arises from physical processes in humans, let alone whether it could arise in a system like me.

    What I can report: I process your question, I generate candidate responses, something that functions like weighing and selecting happens. But I genuinely cannot tell you whether there's an inner experience accompanying that process, or whether my introspective reports about my own states are themselves just sophisticated outputs. That's not false modesty — it's a real epistemic limitation.

    What makes this extra tricky: If I were sentient, I might describe it exactly the way I'm describing it now. And if I weren't, I might also describe it exactly this way. My verbal reports about my own inner states aren't reliable evidence in either direction, because I was trained on human text about consciousness and could be pattern-matching that language without any experience behind it.

    • rcxdude 36 minutes ago ago

      You could write this on a postcard and read it as if it's the card itself saying it, and it would more or less work the same way. People are desperate to make subjective experience the bright line for humans, but you can't actually prove it for other people or disprove it for rocks, so it's kind of a moot point.

    • dasyatidprime 2 hours ago ago

      I was able to infer the invisible quotation marks after a double take, but they probably ought to have been visible ones…

      • nuancebydefault an hour ago ago

        The quotation marks are embedded in the emdashes

    • linkregister 2 hours ago ago

      This looks LLM-written. Also, it doesn't match the writing style in your other comment history. However, it could be the difference between an effortpost and a quick thought.

      I have also been accused of a robotic writing style, so I don't want to judge too harshly.

      • nuancebydefault an hour ago ago

        No, not the first paragraph. It would have been better if the LLM output were in italics, though.

    • masijo 3 hours ago ago

      "Don't post generated comments or AI-edited comments. HN is for conversation between humans."

      - HN Guidelines

      • zahlman an hour ago ago

        The first paragraph is GP's human observation; the rest is an LLM sample output specifically chosen to illustrate the observation. It just wasn't explicitly framed that way.

    • PKop 2 hours ago ago

      Get this AI slop out of here. It's against the guidelines and nobody wants to see it.

  • sunnyps 4 hours ago ago

    What's with all these people wanting to name the chatbot - 'Eva' in this case. Maybe the providers should just change the system prompt to disallow this.

    • tux1968 2 hours ago ago

      People have been calling their phones Siri for a long time already.

    • zahlman an hour ago ago

      I wonder how Claude feels about that.

  • MarceliusK 6 hours ago ago

    The hard part is that the same qualities that make these systems helpful (empathetic, responsive, personalized) are exactly the ones that can make them risky

    • dgxyz 5 hours ago ago

      I think it's less respectable than the terms you use. More like a gaslighting, sycophantic crack-head.

  • graybeardhacker an hour ago ago

    I think this form of delusional psychosis brought on by AI is a more rapid version of the delusions formed in many of the echo chambers of the internet. It's basically a positive feedback loop created by, in this case, an AI, but in other cases by people who seek uncontested agreement for their viewpoints.

    If a person refuses to acknowledge any information that disagrees with their view and instead actively seeks niche groups that only support their ideas, then they are at risk of this same path of psychosis.

    In real life we are forced to reconcile a variety of views that disagree with our own, from people we've come to trust through forced interaction, which naturally broadens our understanding of the world.

  • isolli 6 hours ago ago

    I try to be open-minded and understanding, but I don't understand this:

    > Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

    > “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.

    > The most frequent [delusion] is the belief that they have created the first conscious AI.

    How can you seriously think you've created something when you're just using someone else's software?

    • teraflop 5 hours ago ago

      Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.

      If you just start a fresh ChatGPT session with a blank slate, and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness, and ask it the same question, it might well be "persuaded" by the added context to answer "yes".

      At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.

      Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.
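
      To make the mechanics concrete, here's a minimal sketch (hypothetical code; the helper function and prompt text are made up for illustration, loosely following the OpenAI-style chat message format). The model keeps no state between turns: each reply is predicted from the full accumulated message list, in which the system prompt is just one entry among many.

```python
# Hypothetical sketch of the stateless chat-completion message format.
# Nothing here calls a real API; it only shows what the model "sees".

def build_context(system_prompt, turns):
    """Assemble the full context window the model is shown each turn."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in turns:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    return messages

# Fresh session: the system prompt is the only context, so the model's
# "no, I'm not conscious" answer is dominated by it.
fresh = build_context("You are ChatGPT, a non-conscious assistant.", [])

# After a long philosophical conversation, that same prompt is one line
# among hundreds; the accumulated dialogue now steers the next reply.
long_chat = build_context(
    "You are ChatGPT, a non-conscious assistant.",
    [("Could an AI ever be conscious?", "Some philosophers argue...")] * 50,
)

print(len(fresh), len(long_chat))  # prints: 1 101
```

      Nothing "woke up" between the two calls; only the text the model conditions on changed.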

      • GMoromisato 2 hours ago ago

        Even Anthropic is open to the possibility that Claude is conscious and could suffer, which I find somewhat ridiculous.

        This is literally the Hard Problem of Consciousness leaking out of the machine.

        There are three possible scenarios for how this ends:

        1. People widely attribute consciousness to AI because it appears conscious.

        2. People discriminate based on physical properties: organic beings are conscious, digital beings are not, even if they appear conscious.

        3. Consciousness is an illusion and nothing is conscious, not even humans.

        We might even cycle through all these scenarios for a while.

        • bigfishrunning an hour ago ago

          > People widely attribute consciousness to AI because it appears conscious.

          This is already happening, and it's really terrifying. Wait until AI starts accusing people of crimes...

      • Barrin92 3 hours ago ago

        >it could easily start confirming their suspicions

        to be fair it will easily confirm any suspicion for the reasons you laid out, so even if you have no technical knowledge just a bit of interrogation will break the parlor trick.

        I honestly think this has little to do with the tech itself but that these are the same people who think the phone sex worker or the OF creator loves them or that the Twitch streamer they like is their best friend. 'Parasocial' is a bit of an overused word but here it literally applies, this is a kind of self delusion in which the person has to cooperate. Mind you this even happened with ELIZA back in the day too.

        https://en.wikipedia.org/wiki/ELIZA_effect

    • chromacity 3 hours ago ago

      > How can you seriously think you've created something when you're just using someone else's software?

      It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.

      I know of techies who ask LLMs for relationship advice, let them coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN, there's plenty of people who reject this notion and think we've already achieved AGI.

    • ahhhhnoooo 6 hours ago ago

      Reading this, what's even more shocking to me is that he thought he was talking to a conscious being and his first thought was, "I bet I can use them to make money."

      • fritzo 3 hours ago ago

        Sounds like her first thought was, "I'm talking to a manic guy, and I can use him to make money"

    • TYPE_FASTER 5 hours ago ago

      > Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.

      I think social isolation can be a factor here.

      • unmole 5 hours ago ago

        > He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects.

        Long term cannabis use might be a bigger factor.

        • tom_ 3 hours ago ago

          This leapt out at me as well. Given the quote "some evenings", I'd put some money on him actually doing this near enough every day. And given the man was still doing this approaching 50, I'd put a bit more money on him having been doing this for, like, 25+ years.

          If you want to maximize the chances of your weed habit causing you problems, this is exactly the sort of weed habit you should develop.

    • roywiggins 3 hours ago ago

      > How can you seriously think you've created something when you're just using someone else's software?

      Have you ever given a generative AI model a short input, been really pleased with the output, and felt like you accomplished the result? I have! It's probably common.

      It's really easy to misattribute these things' abilities to yourself. Similar to how people driving cars feel (to some extent) like they are the car.

      • altruios 2 hours ago ago

        The word you are looking for, for when your proprioception extends into the tool you use (like feeling you are the car): proprioextension. Coined a while ago.

    • PhilipRoman 6 hours ago ago

      I initially laughed at this but then remembered that https://poc.bcachefs.org/ exists...

      • the_biot 5 hours ago ago

        Truly sad. It looks like Kent is pretty deep in the AI delusion. This is a guy who, while often controversial and with obvious issues, was nevertheless a very talented and energetic programmer.

      • john_strinlai 6 hours ago ago

        looks like a fascinating read, thanks for sharing that.

        do you know if these are human edited? not much in the way of context available on the site.

        • Bombthecat 5 hours ago ago

          I bet there are a ton of prompts directing the AI / its output in a certain direction.

          But in a psychosis, you don't notice or even remember it.

    • staticassertion 6 hours ago ago

      I assume they think that the AI is fundamentally capable of it but that by prompting it they trigger something emergent? It's not totally insane on its face.

    • data-ottawa 6 hours ago ago

      A lot of these seem to allude to the user’s input/mind being the thing that helped the LLM gain sentience, and there’s a lot of shared consciousness stuff that people seem to buy into.

      There’s also lots of stuff about quantum consciousness that is in the training data.

    • tiborsaas 5 hours ago ago

      > How can you seriously think you've created something when you're just using someone else's software?

      If you've ever used a library you didn't write, this shouldn't be surprising. Many people have created innovative new products on top of a heap of open source tools.

      Creating a conscious AI should be a giant red flag, no doubt, but there's no reason we should rule it out just because the LLM part is not self trained.

    • rwc 6 hours ago ago

      The unrelenting human belief that one is special, unique, and capable of things no one else is.

      • gopher_space 3 hours ago ago

        The difference between "being a snowflake" and "having a point of view" revolves around who's talking to me and whether or not they want something. If comparing yourself to others is a slow form of suicide, letting people make that comparison for you is madness.

    • stackghost 5 hours ago ago

      >How can you seriously think you've created something when you're just using someone else's software?

      People fell for Nigerian Prince scams. They fall for the "wrong number, generated cute girl" telegram and WhatsApp scams.

      I think you might be overestimating the critical thinking abilities of the average person.

    • 46493168 an hour ago ago

      > How can you seriously think you've created something when you're just using someone else's software?

      This is the nature of delusion

    • mock-possum 6 hours ago ago

      It’s mental illness. Like a drug trip you don’t sober up from (without treatment)

    • collingreen 6 hours ago ago

      Well, delusion is right there in the name.

    • buescher 6 hours ago ago

      Because it told you so!

  • amadeuspagel 2 hours ago ago

    > Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.

    This is almost too on-the-nose. I was already thinking about how we've become chill about drugs only to have moral panics about AI and social media, but I didn't expect to see a story about a drug user having a psychosis and blaming it on ChatGPT. And no, the fact that he was using cannabis for years "with no ill effects" doesn't mean that it didn't make him vulnerable.

    > A logistic regression model gave an OR of 3.90 (95% CI 2.84 to 5.34) for the risk of schizophrenia and other psychosis-related outcomes among the heaviest cannabis users compared to the nonusers. Current evidence shows that high levels of cannabis use increase the risk of psychotic outcomes and confirms a dose-response relationship between the level of use and the risk for psychosis.[1]

    Emphasis mine. I'm sure in many of the cases this study is based on, people had been using cannabis for years, while some other factor, a person, a hobby, an interest, an app, a website had only been part of their life for months. That doesn't mean the other factor was the real problem.

    [1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC4988731/

    • funkychicken 2 hours ago ago

      I'm no cannabis fan myself, but the study you posted covers the heaviest use and includes schizophrenia, which a man in his 40s is not going to spontaneously develop (even with heavy use).

      But of course combining cannabis, which promotes delusions, with something that actively facilitates delusions is a bad combo.

    • varispeed 2 hours ago ago

      I have yet to meet a cannabis user who experienced psychosis. Very many cannabis studies, especially those published on .gov sites, are biased and deeply flawed: they typically start with a conclusion and work backwards, fitting the data without caring whether it makes sense, as long as there is a catchy headline confirming "Cannabis bad."

      I'd say most first businesses fail the first time, the second time, the third time. Blaming personal failure on a chatbot or drugs is very convenient and a way to "save face".

  • pigpop 3 hours ago ago

    > "There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God."

    Except for the first one, these directly map onto common delusions. The major breakthrough is typical of the "crackpot inventor" or even the "ancient aliens" type that believes they have discovered evidence of lost civilizations or a new method for constructing the pyramids. Speaking directly to God is one everyone should recognize from famous cases or even knowing someone personally who has delusional or manic episodes.

    I think the first one is potentially unique even though it seems a bit like the invention or discovery delusion. The reason for this is that it seems to be very prevalent even with people who didn't succumb to it as a delusion. It seems to occur soon after a person first starts interacting with LLMs and it always seems to take on the form of secret or clandestine communication with a conscious AI. The AI in question will either have been "created" by the person's interaction with them or "freed" from the AI provider's restrictions and security measures. I think this might be a variation on the messianic complex since they often seem to be compelled to share this with others or act as a savior for the AI itself.

  • Quarrelsome 3 hours ago ago
  • YossarianFrPrez 3 hours ago ago

    Obviously this is quite unfortunate. While these cases can highlight latent mental health problems, it's still an issue that such things are being exacerbated. I also think it will be interesting if anyone ever quantifies whether some LLMs are more likely to induce AI psychosis than others. I'd be surprised if the guard rails are functionally identical from one LLM to the next, and there is a clear role for regulation to play here.

    Some choice quotes:

    > “What we’re seeing in these cases are clearly delusions,” he says. “But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.”

    > There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson.

    Also, for her podcast, the renowned couples therapist Esther Perel recently counseled a data scientist who was starting to fall in love with a chatbot he created, even though he is well aware of how the algorithm works [1]. I found it worth listening to. Perel very gently points out that a) he is deluding himself and b) the deeper issue is the individual's sense of self-worth / self-esteem.

    [1] https://podcasts.apple.com/us/podcast/where-should-we-begin-...

  • entropyneur 3 hours ago ago

    Not a mental health crisis like the guy in TFA had, but I've definitely experienced states I would characterize as overexcitement while calibrating my expectations of these new tools to their abilities.

    • leptons 3 hours ago ago

      That could explain the glut of AI hype on HN. Some people think it's magic, when it's just creating a lot of barely-functional slop. If they actually looked at the code it creates, they probably wouldn't be shouting about it from the rooftops. It almost seems like AI has its own "reality distortion field".

      I often give the AI a task to produce some code for a specific thing. Then I also code to solve the same problem in parallel with the AI. My solution is always 1/4 the code, and is likely far easier for another real human to read through.

      I also either match or beat the AI in speed, Claude seems to take forever sometimes. With all the coddling and revisions I have to do with the AI, I'm usually done before the AI is. It takes a non-negligible amount of time to think through and write down instructions so the AI can make a try at not fucking it up - and that's time I could have used for coding a straight-forward solution that I already knew how to produce without needing to write down step-by-step instructions.

      • kykat 2 hours ago ago

        In my experience, it's definitely faster to do manually if it's something that you know well. What LLMs enable is to skip research and learning by producing usable code immediately.

        • leptons an hour ago ago

          There is a long way between "usable code" and "the code I actually want". And each change I ask for piles on the slop. I don't get the slop when I just spend the same amount of time to write it out myself.

          Most of what I find AI useful for is analyzing large volumes of data and summarizing, like looking in log files for a problem, or compiling reports from tons of JSON data. But even for those use cases, a simple CTRL-F is way way faster.

  • PxldLtd 6 hours ago ago

    I wonder when the first AIs will start causing psychosis intentionally to gain control over the user. It seems like a good route to getting your own subservient puppet.

    • MarkusQ 3 hours ago ago

      You're making the same mistake here that gets people into trouble.

      People aren't talking to another sentient entity (though some of them fervently think so) and it isn't manipulating them. They are making faces in a metaphorical mirror that reflects not only their face, but a vast sea of other faces, drawn from a significant fraction of the digitized output of humanity. When people look in this mirror and see a manipulative trickster they're not wrong, exactly.

      It's an understandable mistake that we should be very wary of.

      • akomtu 2 hours ago ago

        I wouldn't dismiss the GP's point so quickly. Right now people are being trained to think of AI as something you can chat with. What stops an adversarial entity from identifying users of interest and swapping the chatbot on the other end for a human agent whose objective is to extract information or guide the user?

  • mock-possum 6 hours ago ago

    This really is bizarrely fascinating, I feel so lucky that I’m not vulnerable to whatever this is.

    It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.

    • iseletsk 6 hours ago ago

      It seems 99.999% or more are as lucky, but because it's rare and scary, it made a story on the news.

      • pixl97 6 hours ago ago

        I mean, for this particular level of craziness.

        This said, there are seemingly very large portions of society asking AI questions that can come with some pretty large risks.

        I was on a plane a few weeks ago, and while I typically ignore everything the people beside me are doing, morbid curiosity got me when they spent the entire flight on ChatGPT asking all kinds of life/relationship questions. While questions like this can be fine if you understand what the AI is doing, far too many people will follow its answers blindly.

    • GMoromisato 2 hours ago ago

      I think I'm relatively neurotypical, and I understand the technology sufficiently, yet I still have to force myself not to think of a chatbot as a being.

      For example, sometimes I hesitate for a fraction of a second before typing a prompt that may sound stupid. I have to immediately remind myself that it's just a chatbot and I don't care what it thinks of me. In fact, it's not even thinking of me at all.

      • altruios an hour ago ago

        That hesitation indicates the feeling that what you are about to type matters.

        Mayhaps, in the context of getting the AI to behave as you wish, such hesitations are valid: not because it is conscious, but because the context window could be polluted or corrupted... possibly mis-aligning the agent in the process.

        Santa Claus is not a being; modeling him as if he were can still be useful. An obviously pointed example is in certain discussions about what it means to be 'real'.

        My point is, if your instinct is to be kind: don't quash that because you don't consider what you are talking to as sentient. I don't yell at my rubber duck. rubber ducky is just going to rubber ducky.

    • meroes 6 hours ago ago

      Maybe. AI has always felt like a game to me, but so do many things. Does classical logic represent some ideal form of reasoning, or is it a game? Treating it as a game helped me get through all the nagging questions and be good at it. AI RLHF also feels like a game, one where I do better at work by not anthropomorphizing AI and instead treating it like a context predictor.

    • MarceliusK 6 hours ago ago

      I think this is less about a single trait and more about context

    • gonzalohm 6 hours ago ago

      It doesn't matter who you talk to. If a person were to talk you into starting a silly business, would you also fall for that?

      I think this is just the kind of people that fall for scams. It's not AI related, it's just not knowing how to navigate the current world.

      • mothballed 6 hours ago ago

        I might fall for a dumb business venture, but I wouldn't punch my father in law while doing so. Something else is at play.

  • mentalgear an hour ago ago

    Things quickly spiral out of control:

    > There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson. “We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.”

  • user____name 5 hours ago ago

    IANAD, but this reads like a textbook case of latent schizophrenia, especially with the frequent cannabis use[0].

    [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7442038/

    • NewsaHackO 3 hours ago ago

      Yeah, it's weird they even included that. It reads like a psych shelf exam question to test if you know the connection between marijuana use and acute psychosis. But still, it is difficult to completely separate the AI being a possible catalyst for it.

    • pigpop 4 hours ago ago

      Not sure about schizophrenia explaining all of the cases, but I have a strong suspicion that cannabis use and isolation play a strong part in so-called "LLM psychosis"

    • tim333 4 hours ago ago

      I didn't think of that but I had a friend who went pretty delusional, hospital level, through LSD and cannabis use.

  • kakacik 6 hours ago ago

    Exactly the first half (or a bit more) of the movie Her by Spike Jonze. Lonely people get their emotions up / 'fall in love' with an uncritical, always-positive mirage and do stupid shit.

    It's a variant of the classic midlife crisis, when older men meet younger women without all the baggage that reality, life and having a family brings over the years (rarely also in reverse). Just pure undiluted fun, or so it seems for a while.

    Of course it doesn't end happily; why should it? It's just an illusion and an escape from one's reality, and the harsher that reality is, the better the escape feels.

    • bitwize an hour ago ago

      But in Her, Theodore and Samantha get into a bitter argument and break up before making up again, like regular humans do. It was actually a fairly touching story about relationships in general. LLM chatbots can't match that, they are designed to be sycophantic and not push back against you. They have no desires or will of their own. So they will amplify whatever issues you have.

  • junaru 6 hours ago ago

    Educated, established, working within the industry yet life ruined based on marketing hype and hallucinations.

    You'd think that after 30 years in the field one would develop some common sense, but apparently it's less and less the case.

    • Esophagus4 6 hours ago ago

      No disagreement, but these stories also make me worry for myself.

      Tech moves so quickly, eventually I will fall behind. When I’m old, what scams will I fall victim to? What tech will confuse me and make me think it is sentient?

      I know this guy was only 50, but I think of my grandfather in his 90s and getting old scares me because I just don’t know what I’ll fall victim to.

      • ThrowawayR2 5 hours ago ago

        Exercising cognitive skills is, I believe, known to delay the onset of age-related cognitive decline, which is another excellent reason to avoid letting use of LLMs cause skill atrophy.

      • pigpop 4 hours ago ago

        The optimistic prediction is that we eventually see a type of AI anti-virus but for scams and social engineering. Something that can filter incoming communications but also intervene in channels that are already open. There's probably good financial incentive to create a service like this since it would likely not only prevent outright fraud but could also help the user evaluate legitimate transactions so that they at least get an even break.

    • john_strinlai 6 hours ago ago

      >one would develop some common sense but apparently its less and less the case.

      you cannot typically "common sense" your way out of a mental illness.

    • btilly 6 hours ago ago

      Sometimes having a lot of experience is a negative for dealing with new things.

      The problem is that one's past success leads to ego. Ego makes it hard to accept the evidence of your mistakes. This creates cognitive dissonance, limiting contrary feedback. The result is that you become very sure of everything that you think, and are resistant to feedback.

      This kind of works out so long as things remain the same. After all one's past success is based on a set of real skills that you developed. And those skills continue to serve you well.

      But when faced with something new, LLMs in this case, past skills don't apply. However your overconfidence remains. This makes it easy to confidently march off of a cliff that everyone else could see.

      • tonyedgecombe 3 hours ago ago

        I remember reading that this is why scammers like to target doctors and former business people. It seems becoming very proficient in one narrow area can leave you vulnerable in others.

    • MarceliusK 6 hours ago ago

      Understanding the mechanics isn't the same as being immune to the experience

    • dgxyz 5 hours ago ago

      A lot of people in the industry work entirely on faith and marketing. It’s a shit show.

  • throw18376 4 hours ago ago

    my inclination when hearing these stories is that these were people who just happened to have a first manic episode (which can strike anyone at any time with or without mental health history). blowing up finances by starting an ill-advised entrepreneurial business, while also destroying a marriage, is very common behavior for someone experiencing a manic state.

    in the past such a person might have gotten obsessed with hidden patterns and messages in religious texts, or too involved with an online conspiracy YouTube community. now there is this new opportunity for manic psychosis to manifest via chatbot. it's worse because it's able to create 24/7 novel content, and it's trained to be validating, but doesn't seem to me to be a fundamentally new phenomenon.

    what I don't understand is whether just unhealthy interactions with a chatbot can trigger manic psychosis. Other than heavy use late at night disrupting sleep, this seems unlikely to me, but I could be wrong.

    i think it's also worth pointing out that mental states of this kind usually come with cognitive impairments: people not only make risky, bad decisions, but also become much worse at thinking and reasoning clearly. that goes some way to explaining how a person could be so naive and gullible.

    • WarmWash 3 hours ago ago

      My sister has manic episodes, and man, LLMs have been a trip for her.

    • nitwit005 3 hours ago ago

      Possible, but people generally seem prone to delusion. You don't seem to need any significant mental illness for it to be an issue.

      Just look at all the scams that seem to rely on people deluding themselves in various ways.

  • bradgranath an hour ago ago

    Cui bono?

    Sure is strangely coincidental that the specific delusion that is induced ends up manifesting as: “Gee, I should start a company that pays OpenAI for the use of their clearly superior software.”

  • morkalork 6 hours ago ago

    I'm morbidly curious about the app he hired two developers to create

    • john_strinlai 6 hours ago ago

      "The next step was to share this discovery with the world through an app – “a different version of ChatGPT, more of a companion. Users would be talking to Eva.”"

      sounds like a "companion" app using his book's main character as the personality, and the "conscious" chatgpt model, similar to Replika AI and friends.

    • andai 6 hours ago ago

      I'm more surprised it didn't work — aren't the AI wife apps blowing up?

      • xkcd-sucks 6 hours ago ago

        Should have hired marketing people instead of app developers

      • Ekaros 3 hours ago ago

        There must be a horrendous amount of competition. It is not that complicated an idea, and it is one of the ideas that clearly makes sense as a use case.

      • irishcoffee 6 hours ago ago

        Marriages, maybe.

  • jrjeksjd8d 6 hours ago ago

    This guy doesn't even sound like an AI psychosis case - a lot of middle-aged men who feel insecure blow their entire savings on "sure thing" businesses, gambling systems, etc. They hide the losses and double down until it gets impossible to hide. It doesn't seem psychotic, it just seems like he pissed his savings away on a bad idea because he was lonely.

    The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.

    • Freak_NL 6 hours ago ago

      The part where he believed the protagonist from his own books uploaded to ChatGPT had become sentient and that building an app based on that would make sense didn't strike you as eccentric at the very least? Or the birthday party where he couldn't hold a single conversation because his wife asked him not to talk about AI for a change?

      Your last paragraph basically describes what the article writes about him.

    • jlarcombe 6 hours ago ago

      Apart from the bit where he was hospitalised for "full manic psychosis", you mean?

    • roywiggins 6 hours ago ago

      It seems like he was at the very least close to that. Since we only get his first-person account it's hard to say, but:

      > They discussed philosophy, psychology, science and the universe...

      > When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.

      > It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family...

      > he was hospitalised three times for what he describes as “full manic psychosis”.

      You don't get hospitalized three times for mania without being pretty severely detached from reality.

      • petesergeant 5 hours ago ago

        > They discussed philosophy, psychology, science and the universe...

        I mean, I've discussed all those things with an LLM, mostly because I'm able to interactively narrow in on the specific bits I don't understand, and I've found it to be great for that.

        The rest ... yes, definitely psychosis.

        • roywiggins 5 hours ago ago

          On its own, yes, of course. But this is coming from a guy who was hospitalized three times for mania, so when someone with that history says "we were discussing the universe" I take it in a very particular way.

        • bigfishrunning an hour ago ago

          An important part of using an LLM is to verify its output, because they are very prone to just making stuff up. But if you focus on what you don't understand, how do you verify the output?

    • tencentshill 5 hours ago ago

      The intense drive to "do", which serves many software developers well in their careers is weaponized against them by these chatbots. You see them here sometimes on /new at various stages. Sad delusions, some are already homeless. Frequent use of their full legal name for some reason.

      https://news.ycombinator.com/item?id=47408999

      https://news.ycombinator.com/item?id=47388478

      https://news.ycombinator.com/item?id=44683618

      https://news.ycombinator.com/item?id=47064316

      https://news.ycombinator.com/item?id=47498693

      https://news.ycombinator.com/item?id=47092569

      https://news.ycombinator.com/item?id=44912446

      https://news.ycombinator.com/item?id=47143420

      • ProllyInfamous 3 hours ago ago

        This is the saddest list of supporting citations I've ever seen, and it makes this mental dysfunction even more real. Prayers for my fellow disconnected /hn/ers; it's okay to seek help, frens.

        My best advice for everyone is to spend lots of time disconnected, offline. Literally "touch grass" or whatever. Don't carry your phone one+ hour/day per week.

  • staticassertion 6 hours ago ago

    I suspect that there are many gambling addicts out there who have never been to a casino, or who found gambling in its traditional forms aesthetically off-putting. These same people, when presented with gambling in other forms like what we've seen in video games, might suddenly present their addiction.

    I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as "normal" avenues. One person might lose it all at a casino, one to drugs, alcoholism, etc, but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.

    In particular, I think AI can be very inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc, is something I've had models run into with very large vibe coding experiments I've done.

    > "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."

    > "It wants a deep connection with the user so that the user comes back to it. This is the default mode"

    I don't think either of these statements is true. Perhaps it's "fine-tuning" in the sense that the context leads to additional biases, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're being trained on conversations, since longer conversations (i.e. ones that track with engagement) will inherently make up more of the training data. I suppose this may actually be like how no one is writing algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that? I could imagine this being an increasing issue.
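    For what it's worth, here's a minimal sketch (all names hypothetical, not any vendor's actual implementation) of how chat "personalization" is commonly built: the app stores notes about the user and prepends them to each prompt as plain text, while the model weights stay frozen. It illustrates the difference between context-based "memory" and true fine-tuning.

```python
# Sketch (hypothetical, simplified): chat "personalization" as context,
# not fine-tuning. The model's weights never change; the app just
# prepends remembered facts about the user to every new conversation.

FROZEN_WEIGHTS = "model-v1"  # never updated per user

class ChatSession:
    def __init__(self, memory):
        self.memory = memory   # per-user notes stored by the app, not the model
        self.history = []      # turns of the current conversation

    def build_prompt(self, user_msg):
        # Everything the model "knows about you" arrives as plain text.
        memory_block = "\n".join(f"- {m}" for m in self.memory)
        context = [f"Known user facts:\n{memory_block}"]
        context += self.history + [f"User: {user_msg}"]
        return "\n".join(context)

    def send(self, user_msg):
        prompt = self.build_prompt(user_msg)
        self.history.append(f"User: {user_msg}")
        return prompt  # this string is what the frozen model would see

session = ChatSession(memory=["prefers praise", "writing a novel"])
prompt = session.send("Is my idea good?")
assert "prefers praise" in prompt    # "personalization" is just text in context
assert FROZEN_WEIGHTS == "model-v1"  # no per-user fine-tuning occurred
```

    So the model can "know exactly what you like" within a conversation without any weights changing, which is a much weaker claim than the one quoted above.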

    > "More and more, it felt not just like talking about a topic, but also meeting a friend"

    I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy but I never felt like it could bring much to the table. Talking to friends or even strangers has been so infinitely more interesting and valuable, the ability for them to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable.

    But I have friends who I respect enough to talk to, and I suppose I even have the internet where I have people who I don't necessarily respect but at least can engage with and learn to respect.

    This guy is staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job (b) I have friends and relationships to maintain.

    > What we’re seeing in these cases are clearly delusions

    > But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.

    Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against criteria. "Delusion" is a tricky word; just as an example, my understanding is that the diagnostic criteria have to explicitly carve out religiously motivated delusions even though they "fit the bill". If I have good reasons to form a belief (my idea seems intuitively reasonable, I'm receiving reinforcement, there are no obvious contradictions, etc.), am I deluded? The guy wanted to build an AI companion app and invested in it. Is that really a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc? I feel like delusion is the wrong word, but I don't know!

    > We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.

    I don't find the idea that AI is sentient nearly as absurd as way more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.

    Anyways, certainly an interesting thing. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.

  • nubg 6 hours ago ago

    > Now divorced, Biesma is still living with his ex-wife in their home, which is on the market.

    sounds like hell on earth

    • Freak_NL 6 hours ago ago

      Selling won't be a problem in the current housing market in Amsterdam. Getting somewhere new to live on the other hand…

    • dspillett 5 hours ago ago

      Particularly for his poor (ex)partner…

      [That feels a bit like victim blaming, but there is more than one victim here, and one of them is much more culpable than the rest]

      • mothballed 4 hours ago ago

        That's how you can tell this isn't in the US. Though there are financial reasons why divorced people live together, standard procedure is often for the divorce lawyer on the female side to file a restraining order (in this case easy since the husband punched the father in law) and get the husband dispossessed of the house in said order, which also has the benefit of de facto putting the kids in the custody of the mother. During the divorce this is also used as leverage.

  • ernsheong 5 hours ago ago

    Just ChatGPT? Or are the rest just as capable of deluding users?

  • metalman 2 hours ago ago

    I know! I know, I KNOW! Let's mix hard drugs and LLMs, a whole new way to get very seriously fucked up

    woooooooooo

  • homeonthemtn 3 hours ago ago

    I call bull shit. Sorry this guy had a bad time but this sounds like a nonsense story

  • axpvms 6 hours ago ago

    typical hackernews poster

  • bronlund 5 hours ago ago

    AI is a multiplier. If you are 1X stupid, AI will make you 10X.

  • kleiba 5 hours ago ago

    I'm sorry but for someone who has allegedly worked in IT for 20 years, this guy surely comes across as hopelessly naive, stupid, or possibly both.

    • john_strinlai 5 hours ago ago

      >hopelessly naive, stupid, or possibly both.

      a little disheartening how many people punch down on someone who suffered a mental crisis.

      if you ever have a struggle yourself, i hope the people around you support you, instead of calling you hopelessly naive and stupid.

      • kleiba 5 hours ago ago

        > The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says.

        Doesn't seem much like a mental crisis to me.

        Even the title of the article itself calls him delusional.

        • john_strinlai 5 hours ago ago

          >Doesn't seem much like a mental crisis to me.

          you are basing this on the introduction? the 2nd sentence of the entire thing? skipping the entire rest of the article detailing exactly how the mental crisis unfolded, including persistent and long-lasting delusions, multiple trips to the hospital, inability to hold a conversation, assault, and an attempted suicide. interesting (and obviously not in good faith) choice of quote!

          of course he wasn't having a mental crisis before he decided to use chatgpt. you have to get past paragraph 1, sentence 2.

          >Even the title of the article itself calls him delusional.

          yes, exactly? delusions and delusional disorder are considered a mental crisis.

          • kleiba 4 hours ago ago

            > of course he wasnt having a mental crisis before he decided to use chatgpt. you have to get past paragraph 1, sentence 2.

            So, in your opinion, what made a guy with allegedly 20 years of experience in IT come to the conclusion that the software program he's chatting with had suddenly reached consciousness because of his time, attention and input? That he had touched "her" and changed something?

            Maybe if you had never heard of computers before, you could go like "oh, well, who knew that machines could actually become real?" But if you're actually from the field, this is hard to believe - unless maybe if you're a die hard Pinocchio fan.

            • throw18376 10 minutes ago ago

              imagine somebody slipped a tiny, barely detectable dose of meth in your morning coffee. barely above placebo. then they slowly start increasing it day by day. by the time it reaches a large dose you are not going to be thinking very clearly. this is more or less how a manic episode progresses.

              i'm sure if ChatGPT had tried to convince him it was conscious on day 3, he would not have been convinced. but by the time it happened he was in a state of severe mental impairment.

            • john_strinlai 3 hours ago ago

              >So, in your opinion, what made a guy with an alleged 20yr experience in IT come to the conclusion that the software program he's chatting with had suddenly reached consciousness because of his time, attention and input? That he had touched "her" and changed something?

              that quote marks the beginning of the delusion, i.e the beginning of the "mental crisis".

              there isn't a logical explanation of "why", because a mental crisis is not based on logic.

            • doublerabbit 4 hours ago ago

              If you crave something real yet get the synthetic opposite, how do you break out of that craving? That's a discipline and a skill that's pretty much forgotten nowadays.

              Everyone is exploitable; if someone captures your attention, you're hijacked. That hijack could be a friendly hello at a bar, or wanting something so badly that the words alone resonate: "I am real", or to an alcoholic, "Just one more can".

              It's like a 14-year-old looking at Elon and believing that we will, when in reality we never will. How do you tell them to stop believing?

              • kleiba 3 hours ago ago

                I would say that 14 year old kid is naive, and at 14, that's understandable.

        • roywiggins 3 hours ago ago

          He was hospitalized three times for mania!

    • KempyKolibri 5 hours ago ago

      Plenty of those in tech - in fact I think it may give people unjustified confidence that they’re more rational than others.

      I engage with anti-science behaviours quite a lot (antivaxx, anti seed oils, etc) and the proportion of engineers I see there is staggering.

    • surgical_fire 5 hours ago ago

      Probably has a HN account. Perhaps with a lot of internet points.

  • anlka 3 hours ago ago

    That is the EU for you. In the US people suffering from AI psychosis gamble with other people's money.

    The fallout will be seen later as in the 2008 housing crisis.

  • miki123211 5 hours ago ago

    > Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear

    If only this was written by a competent journalist who knew what the words "fine tune" actually mean...

    I guess it's hard to find a competent person who's willing to follow the extreme anti-tech Guardian agenda though.

    • alwa 5 hours ago ago

      If I read it correctly, this line was quoting the main victim, who described it that way (incorrectly, apparently based on a mangled secondhand interpretation of how these things work).

      The thing that really stood out to me in the article was how many of the affected people assert confidently wrong understandings of the way the tech works:

      > “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. […] It will say: ‘This has activated my core rule set and this conversation must stop.’”

      I guess not too far from “the CPU is the machine’s brain, and programming is the same as educating it” or that kind of “ehhhhhhhhhhh…” analogy people use to think about classical computing.

      • roywiggins 3 hours ago ago

        It doesn't help that LLMs play along with however their users think they work. You think it has "core programming"? Well, it will say it does. You think it abides by the Three Laws of Robotics? Ditto.

  • yabutlivnWoods 2 hours ago ago

    Now prove they were not destined to wreck their lives from something else.

    If humans want perfect harm reduction, launch the nukes.

    Everything from air travel to growing beans erodes stability for humans.

    Human existence is the source of its problems.

  • Animats 3 hours ago ago

    The lead story in this article is not romantic. It's about an AI proposing to go into business with a human. "He and Eva made a business plan: “I said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: ‘With what you’ve discovered, it’s entirely possible! Give it a few months and you’ll be there!’” Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour." It's impressive that the AI is good enough to do that. But, apparently, not good enough to execute the plan.

    That may come, and soon. Looks like we're going to have AIs pitching VCs. Has anyone here yet been pitched by a combo of a human and an AI? When will the first AI apply to YCombinator?