56 comments

  • 0xC0ncord 18 hours ago

    >Scott Hennessey, the owner of the New South Wales-based Australian Tours and Cruises, which operates Tasmania Tours, told the Australian Broadcasting Corporation (ABC) earlier this month that “our AI has messed up completely.”

    To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!

    No one is admitting it but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it, and no one is paying the price.

    • benjedwards 16 hours ago

      Yes, it's a big problem. I call it "agency laundering" and I first mentioned it in this article last year: https://arstechnica.com/information-technology/2025/08/is-ai...

      Treating AI models as autonomous minds lets companies shift responsibility for tech failures.

      • clarkmoody 15 hours ago

        Wait until your local police force has fully autonomous lethal robots on the streets.

        • TeMPOraL 10 hours ago

          This one isn't actually inevitable in the near term. Lethal robots policing the streets isn't something that can just sneak up on us[0] - it's a pretty clear-cut civic issue affecting everyone, so excepting hardcore autocracies with no vertical accountability[1], the public can push such ideas back indefinitely[2].

          It's hard to "agency launder" a killer robot when it's physically patrolling a public square.

          --

          [0] - Except maybe through privatization of law enforcement, which could be more gradual - think police outsourcing more work to private security companies, which in turn decide to "pioneer innovative solutions to ensure personal safety" by giving weapons to mall security patrol robots and putting them out on the streets - but it'll still be pretty obvious what's happening.

          [1] - Some cursory search suggests this is the correct term for the idea I'm thinking of, which is how much the people in power have to, in practice, take their subjects' reactions into account.

          [2] - Well, at least until armed forces of multiple countries start using autonomous robots as ground infantry, and over the years, normalize this idea in the minds of civilians.

    • flakeoil 16 hours ago

      > No one is admitting it but AI is one of the easiest ways to shift blame.

      Similar to what Facebook, Google, Twitter/X, TikTok, etc. have been doing for a long time with the platform excuse: "We are just a platform. We are not to blame for all this illegal or repugnant content. We do not have the resources to remove it."

    • pjc50 17 hours ago

      There's a book, "The Unaccountability Machine", that HN may be interested in. It takes a much broader approach, covering management systems in general.

      • TeMPOraL 10 hours ago

        That famous Bible verse, "there is nothing new under the sun", comes to mind. Even most of the problems with computers and computer systems - especially distributed ones - and with information processing, and all the problems at the interface layer between those systems and people, are things we've already been dealing with for hundreds of years. For many of them we even developed effective solutions that most people don't realize exist.

        It takes a little frame shift to see this: one has to realize that bureaucracy is a computing system, built on a runtime made of people instead of silicon, storing data on forms and documents, invoking procedure calls through paper shuffling, executing programs written in legalese, as rules and procedures and laws.

        Accountability shifting? "The program won't let me do that" is just a new, more intense flavor of "this is the company/government policy". The underlying goals remain the same - building a reliable system from unreliable parts, a system to realize some goals - while maintaining control of and visibility into it, all without having to personally micromanage every aspect. Introducing computers into bureaucracy didn't change its fundamental nature; making processes more robust and reducing endpoint variation (i.e. the individual autonomy of workers) just makes it scale better.

        Hell, even AI - at least at this point[0] - isn't really a new thing either. Once you allow yourself to anthropomorphize LLMs a bit and realize they are effectively "People on a Chip", it becomes clear what their role in a computing system is, and that we already have experience dealing with their flaky, unreliable nature.

        And from that perspective, it's clear as day that a company blaming AI for a fuckup is just the most recent flavor of shifting blame to a subcontractor.

        --

        [0] - Things will meaningfully change if and when we get to the point of AIs being given moral or legal status as people. Though in all honesty, this wouldn't be a completely new situation either - more like a new take on social and political issues humanity has been dealing with ever since the first two ancient tribes found themselves contesting the same piece of land.

    • yojo 16 hours ago

      It sounds like in this case there was some troll-fueled comeuppance.

      > “We’re not a scam,” he continued. “We’re a married couple trying to do the right thing by people … We are legit, we are real people, we employ sales staff.”

      > Australian Tours and Cruises told CNN Tuesday that “the online hate and damage to our business reputation has been absolutely soul-destroying.”

      This might just be BS, but at face value, this is a mom-and-pop shop that screwed up playing the SEO game and is getting raked over the internet coals.

      Your broader point about blame-washing stands though.

      • ambicapter 16 hours ago

        That's the thing about scammers, they operate in plausibly deniable ways, like covering up malice with incompetence. They make taking things at face value increasingly costly for the aggrieved.

      • scblock 16 hours ago

        No, this is earned. They chose to do this, to publish lies, and have to live with the consequences.

    • stuaxo 16 hours ago

      Commercial enterprises seem designed to launder responsibility; this is perhaps the ultimate version of that system.

    • ehnto 18 hours ago

      I somewhat disagree, because at the end of the day he still has to take responsibility for the fuckup, and that will matter in terms of dollars and reputation. I think this is also why a lot of roles just won't speed up that much; the bottleneck will be verification of outputs, because it is still the human's job on the line.

      An on-the-nose example: if your CEO asked you for a report and you delivered fake data, do you think he would be satisfied with the excuse that the AI got it wrong? Customers are going to feel the same way. AI or human, you (the company, the employee) messed up.

      • caminante 17 hours ago

        > dollars and reputation

        You're not already numb to data breaches and token $0.72 class action payouts that require additional paperwork to claim?

        In this article, these people did zero confirmatory diligence and got an afternoon side trip out of it. There are worse outcomes.

      • add-sub-mul-div 17 hours ago

        > if your CEO asked you for a report, and you delivered fake data, do you think he would be satisfied with the excuse that AI got it wrong?

        He was likely the one who ordered the use of the AI. He won't fire you for mistakes in using it because it's a step on the path towards obsoleting your position altogether or replacing you with fungible minimum wage labor to babysit the AI. These mistakes are an investment in that process.

        He doesn't have to worry about consequences in the short term because all the other companies are making the same mistakes and customers are accepting the slop labor because they have no choice.

    • nicbou 17 hours ago

      I hope that this will result in people paying a premium for human curation and accountability, but I won't hold my breath.

      • TeMPOraL 9 hours ago

        I imagine it's already happening, but not at price points most of us would ever afford.

        I.e. I'm not really going to pay lots of money to, say, 1) find a doctor who does not use AI as part of their work, and 2) legally/contractually enforce that this is the case. However, I can imagine a government agency or a large company contracting out to some think tank or research organization, and paying through the nose for a legally binding guarantee that no AI will be used as part of that work.

  • pjc50 19 hours ago

    New variant on "I followed my satnav blindly and now I'm stuck in the river", except less reliable.

    It is however fraud on the part of the travel company to advertise something that doesn't exist. Another form of externalized cost of AI.

    • buran77 18 hours ago

      > It is however fraud on the part of the travel company to advertise something that doesn't exist

      Just here to point out that from a legal perspective, fraud is deliberate deception.

      In this case a tourist agency outsourced the creation of their marketing material to a company that used AI to produce it, hallucinations included. From the article it doesn't look like either of the two companies advertised the details knowing they were wrong, or had the intent to deceive.

      Posting wrong details on a blog out of carelessness and without deliberate ill intention is no more fraud than using a wrong definition of fraud is fraud.

      • TeMPOraL 9 hours ago

        > Posting wrong details on a blog out of carelessness and without deliberate ill intention is no more fraud than using a wrong definition of fraud is fraud.

        There's a concept of "constructive fraud" for cases where there was no deliberate intent to deceive, but the degree of negligence was so great that the fraudulent-looking outcome can just be treated as fraud.

      • tantalor 17 hours ago

        The standard is to add disclaimers like "AI responses may include mistakes." The chatbot they used to generate that text would have mentioned that.

        Everybody knows AI makes stuff up. It's common knowledge.

        To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.

        Skipping that step, or leaving out the disclaimer, is not carelessness; it is willful misrepresentation.

        • buran77 16 hours ago

          > To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.

          > Skipping that step, or leaving out the disclaimer, is not carelessness; it is willful misrepresentation.

          Couldn't help but notice you gave some very convincing legal advice without any disclaimer that you are not a lawyer, a judge, or an expert on Australian law. Your own litmus test characterizes you as a fraudster. The other mandatory components of fraud (knowledge, intention, damages) don't even apply; you said so yourself.

          Australian law isn't at all weird about this: its definition (simplified) pivots on intentional deception, to obtain gains or to cause loss to others, knowing the outcome.

      • f33d5173 17 hours ago

        There has to be a clause for "willful disregard for the truth", no? Having your lying machine come up with plausible lies for you and publishing them without verification is no better than coming up with the lies yourself. What really protects them from fraud accusations is that these blog posts were just content marketing; they weren't making money off of them directly.

        • buran77 15 hours ago

          Even in civil law, where the evidentiary bar is lower, it's hard to make the case that someone who posted wrong details on a free blog, and didn't make money off it, should cover the damages you incurred by traveling based on that advice alone. Not making any reasonable effort to fact-check cuts both ways.

          This is a matter of contract law between the two companies, but the people who randomly read an internet blog, took everything at face value, and, more importantly, didn't use that travel agency's services can't really claim fraud.

          Just being wrong or making mistakes isn't fraud. Otherwise 99% of people saying something on the internet would be on the hook for damages again and again.

        • direwolf20 16 hours ago

          And using autocomplete to write travel advertisements has to fall under this category?

    • Lerc 18 hours ago

      Seems closer to fraud on the part of the marketing company they outsourced to.

      I doubt they commissioned articles on things that don't exist. If you use AI to perform a task that someone has asked you to do, it should be your responsibility to ensure that it has actually done that thing properly.

    • alpinisme 18 hours ago

      The consequences for wrong AI output need to be a lot higher if we want to limit slop. Of course, there's space for LLMs and their hallucinations to contribute meaningful things, but we need at least a screaming all-caps disclaimer on content that looks like it could be human-generated but wasn't (and absent that disclaimer, or if the disclaimer was insufficiently prominent, false statements should be treated as deliberate fraud).

  • doodpants 17 hours ago

    > “our AI has messed up completely.”

    No, it worked as designed. Generative AI simply creates content of the type that you specify, but has no concept of truth or facts.

    • idopmstuff 15 hours ago

      I find takes like this very strange. Whether or not it gives the correct information, it's clearly not designed to give false information to factual queries.

      The design of it is based on the intention of the people creating it, not the actual outcome, and it's pretty clear from all available information, plus a general understanding of incentives, that it's designed to be as accurate as possible, even if it does make errors.

    • simianwords 15 hours ago

      this is incorrect. it has the concept of truth and facts.

      • usefulcat 15 hours ago

        How is knowing what word is most likely to come next in a series of words remotely the same as having "the concept of truth and facts"?

        • simianwords 14 hours ago

          how would you prove that a human has it?

          • usefulcat 14 hours ago

            Whataboutism is almost never a compelling argument, and this case is no exception.

            ETA:

            To elaborate a bit: based on your response, it seems like you don't think my question is a valid one.

            If you don't think it's a valid question, I'm curious to know why not.

            If you do think it's a valid question, I'm curious to know your answer.

            • simianwords 12 hours ago

              It's not whataboutism; I'm simply asking how you would perform the same test for a human. Then we can see whether it applies to ChatGPT.

              • usefulcat 12 hours ago

                I don't know. What is your answer to my question?

                • simianwords 11 hours ago

                  Knowing which word is likely to come after another is, for me, trivially the concept of knowing truth.

                  Why not? We have optimised for truth, and we are predicting the best words to preserve that optimum.

  • merelysounds 17 hours ago

    In case anyone else is curious, I just entered the following in ChatGPT: "Without searching the internet, do you know how to get to weldborough hot springs?"

    > Yeah—roughly, from general local knowledge (no web searching, promise). I’ll flag where my memory might be fuzzy.

    > Weldborough Hot Springs are in northeast Tasmania, near Weldborough Pass on the Tasman Highway (A3) between Scottsdale and St Helens.

    Screenshot with more: https://postimg.cc/14TqgfN4

  • voidUpdate 18 hours ago

    How often do you have to update your page on "what's in a town" to "compete with the big boys"? Seems like you could just google what's in the town, or visit if you really want to make sure, rather than just asking your favourite LLM "What's there to do in Weldborough"?

    • lm28469 16 hours ago

      > Seems like you could just google what's in the town

      You'll still get an AI-generated answer at the top, followed by 3 AI-generated sponsored blog scams, etc.

    • zwog 17 hours ago

      You probably need to update every now and then because of SEO and such.

    • nicbou 17 hours ago

      The goal is to attract search traffic to your page, so that you can promote your product or your brand. AI is making this a lot cheaper than before because you don't even need to create the content, but it's also killing the overall amount of traffic to all websites.

      If you actually take pride in your work, it's a double whammy of competing with AI slop and losing over half of your traffic to AI summaries.

      Useful independent websites are so cooked.

  • verytrivial 16 hours ago

    I binged ST:TNG before it went away again on Netflix. The more I heard from Data, the more he sounded like where AI should be heading: quick, thorough reasoning, followed by explicit, tagged verification against external ground truth.

    There needs to be a more meta, layered approach to reasoning: different personalities viewing the output with different hats on. "That's a bold claim, champ. Search required." But I guess the current real-time, interactive nature of these systems makes that extra step difficult to justify.
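
    A minimal sketch of what that layered, verify-with-a-different-hat-on loop could look like (every function here is a hypothetical stand-in I made up, not any real model API; the point is the shape of the loop, not the implementation):

      def draft_answer(question: str) -> str:
          # Fast first pass: the quick, Data-style reasoning layer.
          return "The hot springs are near Weldborough on the A3."  # canned example

      def extract_claims(answer: str) -> list[str]:
          # Split the draft into individually checkable factual claims.
          return [s.strip() for s in answer.split(".") if s.strip()]

      def verified(claim: str) -> bool:
          # The critic's hat: check each claim against external ground truth
          # (a gazetteer, database, or search index), never against the
          # model's own output.
          known_facts: set[str] = set()  # stand-in for a real external source
          return claim in known_facts

      def answer_with_verification(question: str) -> str:
          draft = draft_answer(question)
          unchecked = [c for c in extract_claims(draft) if not verified(c)]
          if unchecked:
              # Tag bold claims explicitly instead of shipping them silently.
              return draft + " [UNVERIFIED: " + "; ".join(unchecked) + "]"
          return draft

      print(answer_with_verification("How do I get to Weldborough hot springs?"))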

  • sh3rl0ck 14 hours ago

    My problem with AI support agents is that instead of being good classifiers for my problem, they're now even more non-deterministic in terms of grievance or query redressal. If a listed option is not available, I'd rather talk to a human who would know, or could find out, than to an LLM that will just make something up with no grounding.

  • mettamage 15 hours ago

    This is why I don't really believe in agentic AI.

    Not with the current state of technology. I haven't seen that it works yet. It requires supervision.

    It's funny: back in the day, machine calculations were checked by human computers. But now? Just trust it, bro.

  • metalman 17 hours ago

    Has anyone checked to see if the AI included time coordinates as well? It might be that the AI is misunderstanding our temporal limitations, and if prompted correctly it will provide a handy portal to a time when there will, in fact, be hot springs at the suggested location.

    • testing22321 16 hours ago

      It seems very likely if you go back in time far enough the region was very hot. Something around 4.5 billion years should do it.

  • nephihaha 21 hours ago

    Weldborough seems to have done well out of it either way.

  • jmyeet 16 hours ago

    I love stories like this because there are still allegedly tech-savvy people who will insist that AIs don't lie, don't hallucinate and rarely if ever make errors.

    At the end of the day, LLMs are a statistical approximation or projection.

    A good example of this is how LLMs struggle with multiplication, particularly multiplication of large numbers. It's not just that they make mistakes; it's the nature of the results.

    Tell ChatGPT to multiply 129348723423 and 2987892342424 and it'll probably get it wrong, because that exact question isn't anywhere on Reddit for it to copy. What's interesting is that it'll tend to get the first and last digits correct (more often than not), but the middle is just noise.

    Someone will probably say "this is a solved problem" because somebody, somewhere has added this capability to a given LLM, but I think these kinds of edge cases will constantly expose the fundamental limits of transformers, just like the famous "how many r's in strawberry?" example that did the rounds.
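
    For what it's worth, the arithmetic itself is trivially checkable outside the model; a couple of lines of Python (whose integers are exact and arbitrary-precision) give the ground truth the LLM is approximating one sampled token at a time:

      # Exact bignum arithmetic: no sampling, so no "noise in the middle".
      a = 129348723423
      b = 2987892342424
      print(a * b)  # the digits an LLM tends to fumble are all exact here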

    All this comes up when you tell LLMs to write legal briefs. They completely make up a precedent because they learn what a precedent looks like and generate something similar. Lawyers have been caught submitting fake precedents in court filings due to this.

  • re-thc 18 hours ago

    Australia has drop bears anyhow. Do they exist?

    Seems par for the course.