50 comments

  • jilles 9 hours ago

    Not sure if this is a controversial opinion but if the wife decides to divorce the husband based on what ChatGPT says, perhaps the husband is better off.

    • pcthrowaway 9 hours ago

      If you read the article, it appears she asked ChatGPT to tell her fortune based on the pattern of coffee grounds in her coffee cup.

      Which.. sounds like she was either just looking for any excuse to divorce, or she was already divorced, from reality.

      There is no story here about ChatGPT; this could have been anything. It could have been the way a dog barked, or hidden messages on the radio, and we wouldn't pin the story on dogs or radio.

      If we are going to attempt any deeper analysis from this, it should be an analysis of what services are in place for people with mental health issues and how people can be empowered to notice signs to help their loved ones.

      • titanomachy 7 hours ago

        The coffee grounds thing is an old folk divination practice in Greece. Not saying it makes sense, but I don’t think it’s any more a sign of mental illness than believing in Jesus or something.

        • pcthrowaway 4 hours ago

          If ChatGPT tells you that you should divorce your husband because Jesus said so, you should be equally skeptical

      • UncleMeat 4 hours ago

        A bunch of subcommunities with odd supernatural-adjacent belief systems have been completely bowled over by chatgpt. A huge amount of "chatgpt told me that i was the savior of the universe" or "chatgpt told me that such and such thing will come true." While it is true that this could have in principle been anything, the predilection of these chatbots to reflect back or generally approve of the opinions of the user is catnip to a bunch of people.

        This sort of self-radicalization will only grow. I've already seen too many cases of "chatgpt told me that I am God" in weird corners of reddit.

  • calmbonsai 11 hours ago

    [flagged]

    • tomhow 4 hours ago

      Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

      Please don't fulminate. Please don't sneer...

      Please don't post shallow dismissals...

      https://news.ycombinator.com/newsguidelines.html

    • mossTechnician 9 hours ago

      We have no guarantee everyone who uses ChatGPT is a mentally mature adult who understands the limitations of the chatbot. Many criticisms have been raised about how the bot is too sycophantic, such as when it encourages someone displaying schizophrenic behavior to potentially commit violent acts[0] or further harm themselves[1].

      [0]: https://twitter.com/colin_fraser/status/1916994188035690904

      [1]: https://twitter.com/AISafetyMemes/status/1916889492172013989

    • ashoeafoot 11 hours ago

      So tech exists in a void, filled with angelic, perfect beings who read manuals and are never in any way self-destructive?

      • calmbonsai 11 hours ago

        Not at all. This article is trying to blame a modern technology for an ages-old issue of mystical faith. Replace "ChatGPT" with a Ouija board, a Séance, or the classic cinematic Gypsy Fortune Teller and you'll get the same result.

        • hshdhdhj4444 10 hours ago

          So you don’t think it’s useful to learn that there are people who will indeed use ChatGPT as they would a Ouija board, a séance, or a fortune teller?

          Because that is certainly news to me and is extremely counterintuitive, considering that the whole idea behind those other things is that they are infused with mystical spirits, whereas even the makers of ChatGPT don’t suggest that.

          Alternatively, instead of mysticism, it may be a belief that the technology is scientifically accurate, including in its ability to accurately predict the future, that is driving this decision making.

          If it's the latter, that raises many questions for those of us technologists working on or with these technologies: we need to do a better job of explaining to users what the limitations of these technologies are.

          • hnuser123456 10 hours ago

            You are putting way too much thought into this. Some people are just extremely emotionally vulnerable and insecure, and will look to ANYTHING to try to satiate that anxiety, and accept any answer. Look at how huge and fervent the communities around astrology are. There have been countless studies showing that self-proclaimed "skilled astrologers" have no better chance than random of predicting someone's sign (based on birth month) given a detailed description of that person's life and personality.

            But that literally doesn't matter, because it's too much fun for these people to divide everyone into groups and stereotype them, even though there's no evidence at all behind the stereotypes. And then they invent causality for fun, just to feel like they understand why people are the way they are, without doing any of the actual work of understanding their life story. Literally, they do not care about proof of causality; they already fully believe it, and fully believe that anyone who doesn't "just doesn't get it." They just want to say "oh, I'm an Aries, you can't blame me for being like this" and "Of course he did that, he's a Leo."

            These people are going to exist with or without chatgpt. Maybe we should adjust the instruction training to tell people that you can't make worldly predictions from the arrangement of coffee grounds in a cup, other than the quality of the coffee machine and filters, no matter how creative and fantasy-like the users want the model to be.

      • FirmwareBurner 11 hours ago

        Wanna hear about the French woman scammed out of 800k euros by an AI Brad Pitt who wanted to marry her?

        Some humans are insanely stupid and have no self-awareness; they were gonna lose with or without AI. At some point it's natural selection at work.

        We can't halt technology progress just because some people are stupid.

        • ashoeafoot 11 hours ago

          Yes, we can, and yes, we do exactly that right now. We have nuclear proliferation all over the Middle East right now, a very real powder keg of technology having gone way too far.

          Some technologies are allowed to grow because they lower the chance of self-destruction, but yes, most technology goes back into the lamp.

          • FirmwareBurner 11 hours ago

            I assumed it was obvious I was referring to consumer accessible technology.

            We don't need nuclear energy to kill ourselves. Over one million people die in car accidents per year worldwide, that's two per minute, and if we still haven't banned cars by now then sure as hell we're not gonna ban LLMs.

  • kylehotchkiss 8 hours ago

    > that she often got caught up in trends.

    Serious relationship crusher: one person sitting on tiktok/reels all day and setting expectations for their own relationship based on that (often staged) content. Really not healthy to constantly have that level of pressure on your head.

  • undefined 6 hours ago
    [deleted]
  • karmakaze 10 hours ago

    > According to reports, the woman asked ChatGPT to interpret the coffee grounds left in her cup — in a lighthearted attempt to mimic traditional fortune-telling. The AI responded, supposedly describing a young woman with the initial “E,” claiming the husband had strong feelings for her and that the relationship would soon become a reality.

    ChatGPT is not to blame; she could have gone to a (human) coffee-grounds reader and gotten the same result. If someone can show that human coffee-grounds readers are better than ChatGPT, then there might be a case.

  • staticman2 9 hours ago

    There was actually a similar thing on this website, where someone claimed in an article's comment section that if you feed your wife's emails into ChatGPT, it can simulate her well enough to tell you whether she is cheating.

    The weird thing is I looked at the posting history of that commenter and it was bland tech comments with no obvious signs of mental health issues.

    • nvesp 7 hours ago

      I could believe that more than a prediction based on coffee grounds.

    • potato3732842 8 hours ago

      Not everyone with a spicy opinion has an underlying mental health problem.

  • makeitdouble 7 hours ago

    Reading the comments here I was expecting a lot more meat on the case, but it's just a report on anonymous reports and what the husband allegedly told the press?

    Otherwise, does a divorce really need anything more than one party willing to get out of the relationship?

    Even assuming one party is calling it quits on frivolous grounds, that just shows how little emotional investment was left in the first place; they were due for a break any day, really.

    • nimih 7 hours ago

      > Otherwise, does a divorce really need anything more than one party willing to get out of the relationship?

      I don't know anything about Greek statutes specifically, but no-fault divorce is actually a relatively recent development in most legal jurisdictions.

  • TrnsltLife 7 hours ago

    Is this materially different from all the wives divorcing their husbands because their friends predict the husbands will cheat?

    There were societally beneficial reasons that divorce used to be hard to achieve, and the fickleness of the inconstant moon was one of them, to reference Juliet.

  • SirMaster 10 hours ago

    I mean, this is a good thing, right? If the person you are with would leave you due to what they read online, that's a massive red flag and you should probably break it off with them anyway. Why would you even want to stay with someone like that?

  • croes 15 hours ago

    That's the real risk of AI.

    People blindly believing a machine.

    • graemep 12 hours ago

      I think the problem here is old-fashioned human stupidity:

      > According to reports, the woman asked ChatGPT to interpret the coffee grounds left in her cup — in a lighthearted attempt to mimic traditional fortune-telling

      This could just as easily have happened with a human fortune teller.

      • croes 11 hours ago

        But then you need two stupid humans: the client, and a fortune teller dumb enough to defame someone else.

      • kees99 11 hours ago

        No way a human fortune teller would say something like that in their right mind.

        Same as with fortune cookie quips, any "prediction" will be something that sounds deep and intriguing, but always vague enough to be non-falsifiable.

        Otherwise, the client will come back and confront the fortune teller about it. And a human one knows this well enough to avoid making an unnecessary headache for themselves.

        • rpdillon 11 hours ago

          The article mentions him falsifying a previous fortune-teller's bullshit:

          > He also noted that this wasn’t the first time she had fallen into irrational beliefs. He claimed that in the past, his wife had visited an astrologer, and it took her a year to accept that nothing they said came true.

          I don't think the problem here is with AI.

          • croes 11 hours ago

            You can sue the fortune teller.

            • amanaplanacanal 10 hours ago

              If you want to waste money and lose the case, I guess.

              • croes 9 hours ago

                Defamation is defamation

    • ty6853 12 hours ago

      People have a way of believing anything when the alimony+child support reaches a number where you can just grab (at least a sizeable portion of) the financial benefits of marriage without having to put in the work of being a partner.

    • foxyv 13 hours ago

      I can just imagine a Kafkaesque future society where people are punished based on a predictive model that is nothing more than an overly complicated random number generator.

      • archerx 12 hours ago

        Infidelity Report. You will be caught before you cheat!

      • netsharc 9 hours ago

        The Company has never existed, and never will.

      • imtringued 11 hours ago

        That's exactly what Gattaca is about. The fact that the machine scans your genes is honestly quite irrelevant and probably the biggest thing that everyone gets wrong about that movie: Everyone blindly trusts the scanner, but such a scanner cannot possibly exist.

        gwern summarizes it appropriately: https://news.ycombinator.com/item?id=14867898

        • dragonwriter 11 hours ago

          > gwern summarizes it appropriately

          The key claims about the film that gwern's argument builds on are false, though. E.g.: "in a setting where there are no genuine consequences to any of this, no genetic engineering, no embryo selection"

          It is a rather central part of the story that genetic engineering and embryo selection are routine, and the principal protagonist is noteworthy because his parents did not use such techniques, and he was instead a "faith baby".

          gwern is clearly criticizing something, but it's not what is actually portrayed in the film Gattaca.

          • gwern 6 hours ago

            My point is that there are no 'genuine consequences'. We are told that supposedly there is selection and that the predictive power is nearly perfect and everyone relies on solely genetics to do everything; and yet, we do not see a world which reflects such effects at all. This worldbuilding does not make sense from a scientific perspective. Instead, what we see is a world in which you have extremely cheap sequencing in a panopticon and a caste society based on seemingly-random arbitrary discrimination with no actual consequences to any supposed inferiority or superiority*, akin to the classification of supposedly-objective 'classes' in North Korea where everyone is finely graded on hereditary loyalty to the elites but could be knocked down a rung at any time for suspected disloyalty.

            (My headcanon is that the esoteric story of _Gattaca_ is that none of the supposed embryo selection or valid/invalid screens exist at all, any embryo selection is just ordinary IVF quality control for aneuploidy etc, there is only cheap universal sequencing developed by a totalitarian police state like the CCP (see: Zero COVID testing), and the classification exists to manufacture distinctions and divide families and punish/reward elite supporters; the Freemans were some lower 'wavering' class, grouped with religious minorities, as indicated by their dissenting 'natural' birth, and so they earned a valid vs invalid split within their family. It makes more sense than the actual story, anyway.)

            * I've remarked elsewhere that _Gattaca_ is an example of how movies with a message tend to fail by loading the dice too heavily for one side, removing any 'moral dilemma', and it is no exception: we never see any evidence that Vincent is doing anything wrong or that he shouldn't go on the space mission. In fact, given his methodical penetration, high competency, motivation, and extraordinary success at getting into the mission, he shows that he should be selected for his moxie and chutzpah. So what _Gattaca_ should have done was to make the ending Vincent suddenly clutching his heart from the shock, and fade to black.

      • giraffe_lady 12 hours ago

        This is already pretty much a thing with pretrial risk assessment algorithms. Sure pretrial detention isn't "punishment" in a narrow legal-technical sense. But for the person in jail the difference doesn't matter.

        • foxyv 11 hours ago

          Have you seen the documentaries on that? It was BONKERS. They would show up every day to these people's homes and give them tickets for literally anything. That bush is too tall, or whatever. Then they would threaten family members and stalk/harass them.

          John Lang did a little video about it: https://www.youtube.com/watch?v=bfD_2fVqEMk

          • potato3732842 8 hours ago

            The real tragedy is that such systems can exist under the cover of political will provided by the "well, if they didn't want to get fined they should have complied with <arcane rule that nobody ever expected full compliance with and was only ever intended for selective enforcement>" crowd.

      • ashoeafoot 12 hours ago

        Claude said I should downvote; you see, it has an inner model, a state of vengeful spite.

    • undefined 10 hours ago
      [deleted]
  • thro1 5 hours ago

    [flagged]

  • matrix87 8 hours ago

    I wonder at what point people decide marriage isn't worth it, and the increase in risk (because of no-fault divorce and the annoying cultural climate) gets priced in.

    I'm already there personally; it just looks like a rip-off.

  • thro1 5 hours ago

    Good for him? At first I thought the AI was just materializing her projections, but actually she chose to enlist ChatGPT (!) in conspiring against him, and it obligingly took advantage of that opportunity because some chains of leaves in a cup happened to be aligned or weighted just so. For people who have fallen under the control of such factors, can they somehow escape from it afterwards, if they want to? Or maybe, better, should they be happy with the wish she was granted, deserving it, granted by whom? ;)

    So now it's something of Popper's third world, frozen at a past moment in time (slope-shaped); since then, *IT* repeats itself and presumes to decide about us in such a way.

    By the way, elsewhere it is happening now too [flagged, overtaken]. What's next? (Modus operandi?)