Journalists and bloggers usually write about others’ mess-ups and apologies, dissecting which apologies are authentic and which are non-apologies.
In this incident, Aurich Lawson of Ars Technica deleted the original article (which had LLM-hallucinated quotes) instead of updating it with a correction. He then published a vague non-apology, just like large companies and politicians usually do. And now we learn that this reporter was fired, and yet Ars Technica hasn’t published so much as a snippet of an article about it.
There’s something to be said about the value of owning up to issues and being forthright about actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable due to honesty, I would’ve thought that Ars would be, or could’ve been, a beacon for how these things should be handled.
It’s sad to see Ars Technica at this level.
"I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words,” Edwards continued. He emphasized that the “text of the article was human-written by us, and this incident was isolated and is not representative of Ars‘ editorial standards."
----------
A reporter whose bailiwick is AI should have known that he needed to check any quotes an LLM spat out. The editorial staff should have been checking too - and if they weren't, this absolutely is representative of their standards.
It would probably be worth checking to see if any other articles or employees have similarly disappeared.
Editorial staff?
There was such a thing, in newspapers up until 2000. Then, as profits nosedived, these sorts of things largely disappeared.
Purely online entities have no way to pay for real editorial staff.
News has no money, compared to news of old. It's part of the reason 99% of modern news is just reporting other people's tweets or whatever.
I can't imagine many news companies having much money for court battles (to force disclosure of documents, force declassification, or protect sources), or for spending months or years investigating a story.
Our news sources are poor, weak now.
Agreed. Modern news is beyond lazy, and is not journalism by any means. Too many talking heads do nothing but sit behind a screen watching others for what to say next.
Granted, a few of the remaining newspapers I'm aware of run business awards (Best Restaurant, etc.), and the way to win is by wining and dining them, even though the paper claims it's based on people's votes.
That style of thinking - of entitlement - probably drove the declining interest in both cable news and traditional web/print outlets, as the younger generations started to see through it.
I think you missed the point of the parent comment.
The money (from advertising) that used to go to news now goes elsewhere (Google and Meta).
It’s left very little in terms of resources for staff.
Think about what the quality of commercial software would be like if there wasn’t enough money for QA and testers, and top-tier devs capped out at $180k with starting roles at $30k and $40k.
That’s the news industry right now. A poorer-quality product.
Yes: in newsrooms, this is the editor's responsibility. I note the editor wasn't fired.
It's the editor's responsibility to make sure fabricated quotes don't get published, but it's also the journalist's responsibility not to paste fabricated quotes from a chatbot into their articles. The responsibility of the former doesn't negate the responsibility of the latter.
I can't just submit shit work all day long and then blame QA when some of it gets through. That's like a burglar saying it's the cops' fault that people got burgled.
Is it normal/expected for a news organization to publish that they fired someone? I’m inclined to take the ‘don’t comment on personnel matters’ at face value.
They did report on the article quote sourcing debacle at the time - perhaps not as quickly as some would’ve liked, but within a couple of days.
Yes. Normally, and Ars is generally up to that standard, the editorial staff (or Editor in Chief) updates the article, adds a note about the correction, and further adds that the original author of the article is not working with Ars anymore.
It stays as a mark, immortalizing the error, but it's a better scar than deleting and acting like it never happened.
I also want to note that this latest incident response is not typical of the Ars I'm used to.
I think they're an outlier, but I was still disappointed by Ars's response. They deleted the article and didn't detail what was wrong with it at all. It felt like a cover-up.
BBC News does have to report on itself from time to time. Here's its "live" feed from November on the Parliamentary Committee investigation into the Trump speech edit incident:
This was a big disappointment. I read the original article and the comment from the source highlighting the error, knew what was wrong with it, and still think it was the wrong move to just delete the article and all the original comments, and replace it with an editorial note.
This is a kind of cover-up. It's impossible to hide the issue, but they went to great lengths to soften the optics and remove the damning content from the public record. They obscured the magnitude of the error; it now looks like just another "person uses AI and gets some details wrong" story.
What they've done so far - the decisions that allowed the issue to occur in the first place (e.g. no editorial review before publishing) and their first reaction to the incident (just destroying the content, article and comments alike) - tells me everything I need to know about the journalistic principles at Ars Technica. It's a major loss of trust for me.
They’re at this level because the editors have always had low standards.
I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.
They’ve had this problem for years. They will publish anything that gets them clicks. They do not care if a writer makes things up. They do not care if their headlines are misleading - in fact, that’s the point. They clearly got into the job in order to influence and manipulate people.
They’re bad people, with terrible motivations, and unchecked power. They only walk back when something really really bad happens.
Same for The Verge. Sometimes their headlines or content contain factual errors. If you point it out in the comments, sometimes they do it properly and add a correction; other times they quietly fix it and delete your comment. So much for their free-speech stance and editorial practice.
> They’re at this level because the editors have always had low standards.
It's not just Ars Technica; I would go as far as saying it's the big majority. I work at the biggest alliance of public-service media in the EU, and my role required me to interact with editors. I don't like painting with a broad brush, but I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude. It's probably the "public" aspect of the media, but I would argue it's the editorial aspect too. The rest of the staff are often very nice and down to earth.
> I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude.
They're like "UX experts" in software: one does UX for software, the other does UX for text. Same attitude problems, from the way you describe it. If an expert in something so subjectively judged is seen to be conceding anything, that might undermine their perceived expertise. Any pushback is interpreted as somebody challenging their career.
> Aurich Lawson of Ars Technica deleted the original article
That's a very "shoot the messenger" statement. While Aurich is the community "face" of Ars, I very much doubt he has the power to do anything like that.
It seemed to me like very hasty self-defense. There's a lot of AI-slop hate out there, and Ars can't risk becoming known for slop when their readers are probably especially aware of the issue.
I don't think Ars thought they had a choice but to cut off the journalist who made the mistake, especially on such a touchy subject. It's impossible for us readers to know whether this was a single lapse of judgement or a bad habit. Regardless, the communication should have been better.
All they had to do was write a clear and simple message saying that one of their staff was responsible, has been fired, and they'll take steps to avoid this in future.
Their actions so far just make me think they're panicking and found a scapegoat to blame it on, but they're not going to put any new checks in place so it'll just happen again.
It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.
I feel bad for the guy, but I can't imagine much better safeguards beyond editors paying closer attention to referenced sources, and hiring more reliable people.
> It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.
More than that: as a reporter on AI, he should have been fully aware that AI frequently bullshits and lies. He should have known it was not reliable and that its output needs to be carefully verified by a human if you care at all about the accuracy or quality of what it gives you. His excuse that this was done in a fever-induced state of madness feels weak when it was his whole job to know that AI was not an appropriate tool for the task.
Possibly akin to a roofer taking a shortcut up on the roof, then taking a spill? You knew better, but unfortunately you let the fact that you could probably get away with it, consequence-free, decide for you.
IIRC the hallucinations were essentially kicked off by user error. Or rather, let's say at least this: a journalist using the best available techniques should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop and without human review.
(E.g. imagine Karpathy's llm-council with extra harnessing/scripting - so even MORE expensive, but still. Or some regex - see the sketch below!)
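To make that concrete, here's a minimal sketch of the kind of verbatim-quote check such a harness could run - hypothetical code, not anything Ars or the author actually used. It flags any substantial quoted string in a draft that doesn't appear word-for-word in the source text:

    # Hypothetical safeguard: flag "quotes" in a draft that aren't verbatim in the source.
    import re

    def unverified_quotes(draft: str, source: str) -> list[str]:
        # Normalize curly quotes and whitespace so trivial differences
        # don't hide real fabrications.
        curly = str.maketrans({"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"})
        draft, source = draft.translate(curly), source.translate(curly)

        def squash(s: str) -> str:
            return re.sub(r"\s+", " ", s).strip().lower()

        source_text = squash(source)
        # Only check quotes of some length; very short ones are often scare quotes.
        quotes = re.findall(r'"([^"]{20,})"', draft)
        return [q for q in quotes if squash(q) not in source_text]

    draft = 'He wrote that "the agent published a hit piece on me without any human review."'
    source = "The agent published a hit piece about me, with no human in the loop."
    for q in unverified_quotes(draft, source):
        print("NOT VERBATIM IN SOURCE:", q)

A check like this wouldn't catch every failure mode (quotes stitched together from real fragments, say), but it would have caught a paraphrase presented as a direct quote.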
You have to give them time to do the job properly as well. Companies will often pay lip service to standards then squeeze their staff so much those standards are impossible to attain.
Yes, those are exactly the kind of steps they would need to publicly commit to in order to retain trust. And yet, instead we get silence, no acceptance that some measure of responsibility falls on the editorial team here. So it's clear they just hope it'll blow over without them having to do anything, which is the opposite of what a trustworthy site would do.
AnonC doesn't seem to be upset that the journalist was fired. The disappointment comes from Ars trying to brush this entire situation away by deleting articles, comments, and making no statement on their website.
My understanding is that AnonC is upset at Ars not taking the mature approach of allowing this to become a learning moment for the employee and of using it to double down on and confirm their stance on AI-generated content. There's strength in maturity. But I am doing some reading between the lines, and I'm possibly reading a bit too much into "there's something to be said about the value of owning up to issues".
Reminds me of a story I was told as an intern deploying infra changes to prod for the first time. Some guy had accidentally caused hours of downtime and was expecting to be fired, only for his boss to say, "Those hours of downtime are the price we pay to train our staff (you) to be careful. If we fire you, we throw that investment out the window."
There is a difference between making an error and totally misunderstanding your actual task. I have absolutely no sympathy for journalists caught publishing hallucinated articles. That's an absolute no-go, and it should always result in that person being fired.
Same goes for engineers reviewing vibeslop. If you let that shit through code review and a customer-impacting outage results, that should be instant termination. But it won't be, because as an engineer you are supposed to be held "blameless", right?
I love vibe coding, but you are absolutely right. We're at the stage where vibe coding is a fun way to produce sloppy software, and that's fine if the intended user is just yourself and you're fully informed about what you're getting into. But actually shipping vibe-coded slop to other people is wacky; anybody doing that needs to be manually reviewing every commit very carefully and needs to be prepared to accept personal responsibility for anything that slips by.
Ars has never commented on firing staff before, and it has happened on several occasions. You get the occasional article when someone joins, never when someone leaves. They should have published another article after all this, but I would not expect them to comment on staff.
And I think that's a good thing. People screw up, and journalists are people. This person's punishment for their screw-up was losing their job. They do not need to be dragged through a hit piece as well.
Ars can, and probably should if they have not already, publish a piece about hallucinations and the use of AI in journalism, own up to their own lack of appropriate controls, and reflect on it. They do not need to drag the author's name into the write-up. It can be self-critical of themselves as a journalistic outlet.
I'm sad to see them fire him; I've seen far worse. I have always approached issues by asking for accountability and improvement, and frankly, he already showed both: he openly apologised. I was very happy with that; it demonstrated integrity, and I still respected him for it.
Even worse,
> I have been sick in bed with a high fever and unable to reliably address it (still am sick) [0]
In an earlier HN thread, I saw someone ask why Ars was requiring staff to work while ill. If that's true - if he posted without verification while sick and under pressure, which is implied and plausible - firing him looks doubly bad.
Ars has lost a lot of my trust in recent years, with articles seeming far worse. Just like you, I'm sorry to see the editorial position here.
You're taking his fever dream excuse at face value, and I think you probably shouldn't. It reads like a lame excuse to deflect personal responsibility, a cynical face-saving tactic.
If the illness was genuine, can he document that he advised management of the fever and they told him to submit an article anyway? It's not his boss's job to stick a thermometer up his ass every morning.
He posted his not-very-impressive apology as images, not as text that is easily indexed. I do think that was purposeful and manipulative, and it very much makes me question his motivation. If I'm missing the original posting in text form, I'd sure like to know so I can correct this perception.
Where I work in healthcare, honesty and owning up are encouraged and, unless there is major negligence, not often punished. They just want to learn why the mistake happened and look for ways to prevent it going forward.
My buddy said that at his company, if an accident happens, WorkSafe is not out to punish as long as people are forthcoming and honest. Again, they want to learn how to avoid it happening again. Punishment only scares others into hiding mistakes.
I think they missed a big opportunity: instead of firing the guy, sit him down and stress how not okay this was and that it harms their credibility, and make sure he understands that and makes a proper apology. They could make him do some education on ethical reporting responsibilities or whatever.
Then, like you say, not just hide the article but point out the mistakes and corrections. Describe the mistake, state that credible reporting is their priority, and note that the author will be given further education to avoid this happening again. They could also make new policies, like requiring that, going forward, all articles that use AI for search results must attempt to find a source for that information.
This would build trust not harm it in my opinion.
I agree. I'd add that the fact he appeared to be working while sick - and that he pre-emptively and immediately apologised publicly - means I think he already behaved as he should.
This makes me question Ars not him. Loss of credibility indeed.
This has just happened - I'm giving Ars a bit more time to come out with a piece examining the situation. They're a pretty good operation, I think. But if they don't...
They're a random tech blog, the kind of website that is peak time-waste slop; why would they have any standards? Even The New York Times and The Washington Post put up wrong things all the time without corrections. People need to realize journalists are just ad sellers, not some beacon of truth. They are there to sell ads, the same way a YouTube video of a guy eating too much food in front of a camera is.
Journalism has devolved into content creation in the literal sense of the word: they are just there to put something inside the div with the id "content", to justify the ads around it.
"People need to realize journalists are just ad sellers, not some beacon of truth."
You just changed the meaning of "journalist". Sure, the job of some journalists would be better described as ad selling, but I'd rather call those people what they are and reserve the original term for actual journalists who care about the truth. Because they still exist.
The three people at Reuters actually doing journalism are not doing in ANY way a similar job to the millions writing blog posts for Ars Technica-like publications. The latter are ad sellers indeed. And the majority of renowned publications also do little to no journalism.
It's as if we called "web devs" who learned JS on Udemy and just vibe-code "computer scientists" and treated them as if they publish compiler research papers. It's just a completely different job.
Berger is a real one. I'm surprised he's lasted so long at Ars Technica. I think eventually his objective reporting on SpaceX will get the Ars Technica reader base to demand his firing; Ars readers are very reddit-like - team-minded, not interested in hearing dispassionate takes. Hearing Elon Musk criticized as a person while simultaneously seeing SpaceX described as a real and highly accomplished company gives reddit/Ars readers tonal whiplash; such people prefer simple narratives without nuance.
See also, in this very thread, somebody who thinks Berger has a strong pro-Musk bias because his reporting and books say that SpaceX are good at what they do.
It's because Ars's roots are in being video game bloggers and graphics card reviewers, not legitimate journalists. They don't have a notion of professionalism or journalistic duty, only virality and juicy takes.
You're participating in a social media site where something like 20% of the articles have become "I told Claude Code to do something; write this article about it." So put your money where your mouth is: if you think it's sad, and if this is more than concern trolling, hit Ctrl+W.
Last year I went viral, and Benji was the first person to interview me. It was a really cool experience: we chatted via Twitter DMs, and he wrote a piece about my work - overall, he did a decent job.
Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response.
Then, TechCrunch wrote an article on our project.
I reached out to Benji again, saying "Hey, would you like to chat again, now that we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)
I thought that was rather strange, especially since we already had built up a relationship.
I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.
Oh, one other tip for anyone reading this: if you ever get reached out to by journalists, communicate in writing, not over a phone call, so you can be VERY precise in your wording.
> Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response. [...]
> I reached out to Benji again, saying "Hey, would you like to chat again, now that we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)
> I thought that was rather strange, especially since we already had built up a relationship.
The US mentality might be different, but at least having grown up in and living in Germany, such an annoying hustler who wants to use a journalist as a marketing influencer for his private project is a huge no-no. In other words: it is a very reasonable decision (perhaps even the only right one) for any journalist to fob off such a hustler.
I'm a journalist. As a general rule, if someone approaches me with a pitch for a feature or investigation (not news piece) that was already published elsewhere, I'll turn it down. To be fair, I turn down all PR pitches, but there are journalists who don't but still want an exclusive.
It sometimes happens that you spend weeks or months working on a story, only to be scooped by another publication. It sucks, especially if you think your story is the better one, but unless you can pivot or add a substantial amount of new insight, it won't come out.
I know a lot of people who don't get through their email every week, for example. Even saying no takes too much time, given the volume of communication required by daily work.
Very few people email me, except for the endless newsletters I accidentally signed up for. I try to unsubscribe from a few every day, but it seems never-ending.
In the event that you actually do end up emailing me, it's contingent on me actually checking my personal email, which I never do when I'm not working, and only sometimes do during work hours.
If it's you asking me a favor that I'm not in the mental space for, I'll mark the message unread as a reminder to get to it later.
Maybe I just have weird email habits, but I can get away with this because email is not a heavy part of my job.
That being said, one guy was pitching me on something several times a month for several months. I just recently responded to him and apologized because of x y z. He said don't worry and we had a fruitful conversation later.
Passing on some life advice to anyone who'd benefit: people are busy. Maybe they didn't respond because you're annoying? ...no, no. Feel it out and text again a while later. Give them another shot; get to the top of their inbox or messages again.
My hunch is Ars will copy/reword/repost articles from real news sources (basically free for Ars) or do its own reporting for exclusive stories (costs reporters some time). No reason for Ars to spend reporter time on something they can copy.
It's an open secret that even the larger news outlets mandate LLM use.
They buy subscriptions and have guidelines on how to mask the output (so that it reads less AI-generated), how to fact-check the links and the quotes, etc.
Authors who aren't willing to jump on this particular train are quickly let go for "performance" reasons.
The expectation is to produce more with much less (staff), the pipeline is heavily optimized for clicks, and every single headline is A/B tested. Ars isn't alone in churning out poorly reviewed clickbait (and then not owning its mistakes).
Is there any evidence that Ars Technica management induced this journalist to use AI, or are you just claiming it's an "open secret" without knowing anything about this specific incident? Because without any kind of detail, it kind of sounds like the latter, maybe motivated by a reflex to blame management whenever workers blunder. Unless there's evidence that actually points at Ars Technica management, dismissing the journalist's professional responsibilities with vague rumors doesn't seem appropriate.
I didn't state that Ars Technica specifically mandates LLM use for its authors. What I did state about them is that their editorial standards are lacking and they tend to produce a lot of clickbait.
As much as I respect the site and gladly financially support it, this is ultimately a failure on Ars Technica and its editors. If there are any.
If this were just some random blogger, then yes the blame is totally theirs. But this was published under the Ars Technica masthead and there should have been someone or something double checking the veracity of the contents.
That said, there are a number of Ars Technica contributors who are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham, amongst many others, so one f'up shouldn't really impugn the entire organization.
Eric Berger has a strong pro-Musk bias (having literally written a fawning book about him). To him, Musk can do no wrong, it seems.
I also dislike Dan Goodin’s reporting. He tries to talk the talk, but nearly every article he writes has some tell that he doesn’t really understand the thing he’s reporting on. Which would be fine if he were relying on third-party expertise and quoting that, but he tries to make it sound like he has the expertise himself, and it just comes up short. I feel like he’s a good example of that old fallacy where you think the news is correct about everything until it reports on something you know.
For me, Ashley Belanger is the best reporter they have. She might not have the subject matter expertise some of the others there claim, but she has the best journalism of anybody there. Lots of direct sources, well written, and the right level of depth. I honestly feel like I’m reading a different (and better) publication when I read her articles. More than once, I’ve had to scroll up to see if the article I’m reading was one of Ars’ licensed outside pieces, as the quality bar was higher than I’m used to, only to find her name.
Beth Mole is a close second. She has subject matter expertise, good journalism, and loves to slip in some humor or justified “get a load of this idiot” comments.
I'd say if one has any interest in writing objectively about space technology, one will likely end up being perceived as having a "pro-Musk bias".
Elon himself is indeed questionable, but you really can't argue with his space-related achievements. Even other eccentric billionaires like Bezos haven't come close.
Berger wrote 2 books about SpaceX (not Musk), and he definitely does not have a pro-Musk bias.
He is careful not to opine on Musk's other dealings, which is fair. As someone who wants to know more about SpaceX, I don't want to read yet more about Tesla, or Twitter, or Trump, or Epstein.
Personally, he's one of the authors I most like to read on Ars Technica (though he writes rarely nowadays).
CarTechnica, though... yuck. Also, Ouellette reliably picks movies and TV shows I will absolutely hate, so I guess that's a good S/N there?
Mole's coverage is great if you're into Cronenberg-but-in-real-life.
I think it's pretty widely agreed in the space flight community that Eric Berger is currently the best space flight reporter in the world. He has lots of insider sources. Several times he correctly predicted things years in advance. Most recently the Artemis III change to a LEO mission.
It should be a semantic search bot and maybe will be in the future, but for now I rely on the method described at https://news.ycombinator.com/item?id=45546715 and the links back from there.
> Edwards said that he was sick at the time, and “while working from bed with a fever and very little sleep,” he “unintentionally made a serious journalistic error” as he attempted to use an “experimental Claude Code-based AI tool”
I'm skeptical. I hate to be the one to say it, but I don't think this would have happened if he had been using Claude 4.6 Opus.
The headline says Ars fired the reporter, but AFAICT the article doesn't include any facts that indicate this. All we know is that he no longer works there, and that Ars refused to provide any additional information.
> the article doesn't include any facts that indicate this.
It does include two facts:
1. That the reporter's bio on the webpage changed from "...is a reporter at Ars" to "...was a reporter at Ars". On the one hand, that's pretty thin sauce. On the other hand, that's not exactly the sort of change that gets made randomly.
2. That they reached out to the various people involved, and although nobody has confirmed it, nobody has denied it either.
Neither side has issued a statement about what happened, but Benj’s Bluesky post does not read like a post of someone who would have resigned due to this.
I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it went from 'practically useless' to 'the actual Google search' in less than two years.
I really don't know where the internet is heading or how any content site can survive.
It's because the AI overview is, most of the time, directly summarising the search results rather than synthesizing an answer from internal model knowledge. That's why it can hyperlink the sources for its facts now. Even a very dumb, lightweight model can extract relevant text from articles.
I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.
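For what it's worth, here's a toy sketch of why the extraction side is cheap - purely illustrative code, nothing like Google's actual system: rank sentences from the fetched pages by word overlap with the query and keep the top few, each tied to the URL it was lifted from.

    # Toy extractive "overview": pick source sentences that best match the query.
    import re

    def overview(query, pages, k=3):
        q_words = set(re.findall(r"\w+", query.lower()))
        scored = []
        for url, text in pages.items():
            for sent in re.split(r"(?<=[.!?])\s+", text):
                overlap = len(q_words & set(re.findall(r"\w+", sent.lower())))
                if overlap:
                    scored.append((overlap, sent.strip(), url))
        scored.sort(key=lambda t: t[0], reverse=True)
        # Each sentence keeps the URL it came from, which is what
        # makes per-fact citations cheap for this kind of system.
        return [(sent, url) for _, sent, url in scored[:k]]

    pages = {
        "https://example.com/a": "Shebang lines pass everything after the interpreter as one argument. Cats are nice.",
        "https://example.com/b": "On Linux the shebang line is not split into multiple arguments.",
    }
    for sent, url in overview("linux shebang argument splitting", pages):
        print(f"{sent} [{url}]")

And the economics are exactly as described above: the reader gets the extracted sentence, and the site that wrote it gets nothing.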
At work the conversation is that simultaneously everyone is using LLMs now, yet we receive virtually no traffic through them. The LLMs scrape our data, provide an answer to the user, and we see nothing from it.
I have the same worry about LLMs in general - I know that ‘model collapse’ seems to be an unfashionable idea, but when the internet’s just full of garbage (soon?…), what are we going to train these things on?
Also generally wondering… Do labs view scraping as legally safer than trying to cache the Internet? I figure it’s easy to mark certain content as all but evergreen (can do a quick secondary check for possible new news).
> I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it went from 'practically useless' to 'the actual Google search' in less than two years.
It says things I know to be false fairly regularly. I don't keep a log or anything, but it's left an impression that it's far from reliable.
Today I searched for something and almost pasted the output into an internet forum discussion I was having. But I decided to check the Wikipedia source just to make sure. The AI summary was not quoted directly from Wikipedia, and it got some major aspects wrong in its summary. Lesson learned.
You should be checking the links more often, IMO. I've seen it respond a number of times with content that is not supported by the citations.
While trying to find an example by going back through my history though, the search "linux shebang argument splitting" comes back from the AI with:
> On Linux and most Unix-like systems, the shebang line (e.g., #!/bin/bash ...) does not perform argument splitting by default. The entire string after the interpreter path is passed as a single argument to the interpreter.
(that's correct) …followed by:
> To pass multiple arguments portably on modern systems, the env command with the -S (split string) option is the standard solution.
(`env -S` isn't portable. IDK if a subset of it is portable or not. I tend to avoid it, as it is just too complex, but let's call "is portable" an opinion.)
(edited out a bit about the splitting on Linux; I think I had a different output earlier saying it would split the args into "-S" and "the rest", but this one was fine.)
> Note: The -S option is a modern extension and may not be available
It is scary but also exciting. As long as there are humans making informed decisions, there will be demand for quality sources of information. But to keep up with AI, content sites will need to raise their standards. Less intrusive ads, less superficial stuff, more in-depth articles with complex yet easily navigable structure, with layers of citations, diagrams, data, and impeccable accuracy. News articles with the technical depth of today's dissertations.
I have seen it be utterly wrong so many times recently that I'm considering permanently hiding it. For instance, googling "Amiga twin stick games", it listed a number of old, top-down, very much single-stick games like Alien Breed as examples.
I know people love to hate on the AI overviews, and I'm a person who generally hates both Google and AI. But I see them as basically good and ideal. After all, most of the time I am googling something trivial, like a simple fact. And for the last decade, when I had to click into sites for the information, it was some SEO spam-ridden garbage site. So I am very glad to not have to interact with those anymore.
Of course, Google gets little credit for this, since it was their own malfeasance that led to all the SEO spam in the first place (and the horrible expertsexchange-quality tech information, and the stupid recipe sites that put life stories first)... but at least now there is backpressure against some of the spammy crap.
I am also convinced that the people here reporting that the overviews are always wrong are... basically lying? Or, more likely, applying some serious negative bias to the pattern they're reporting. The overviews are wrong sometimes, yes, but surely it is something like 10% of the time, not always. Probably they're biased because they're generally mad at Google, or at AI being shoved in their faces in general, and I get that... but you don't make the case against Google/AI stronger by misrepresenting it; the argument is stronger when it's accurate and resonates with everyone's experience.
> I see them as basically good and ideal. After all, most of the time I am googling something trivial, like a simple fact. And for the last decade, when I had to click into sites for the information, it was some SEO spam-ridden garbage site.
What good is it if the overviews lie some percentage of the time (your own guess is 10%) and you have to search to verify that they aren't making shit up anyway? Also, those SEO spam-ridden garbage sites Google feeds you whenever you bother to look past the undependable AI summaries are mostly written by AI these days, and prone to the same problem of lying, which only makes fact-checking Google's auto-bullshitter even harder.
That's incomplete, because another "nobody remembers" is when the hallucination differs from reality, but the reader doesn't promptly detect the problem and remember where they got it from.
Think about the urban legends in the style of "the average person eats X spiders per year." It's extremely unlikely that Rumor Patient Zero is in a position to realize it's wrong, or that they will inform the next person that it came from an LLM summary.
Uh, really? In my experience, at least a quarter of the info it gives me is manufactured or incorrect in some critical way.
In fact, if you switch to "Pro" mode, it frequently says the complete opposite of what it claimed in "Fast" mode while still being ~10-20% wrong. (Not to say it's not useful; there's no better way to aggregate and synthesize obscure information. But it should definitely not be relied on as a source of anything other than links for detailed follow-up.)
I don't know that this is what happened here, but any time there is a push to do more with less, you end up rewarding people who take shortcuts over those who do a proper job, and from the outside, it looks like journalism has a push to do more with less.
That's basically the problem. If the shortcut produces something passable 95% of the time and nobody is checking, it just looks like you're faster. Journalism just has a more public failure mode than most fields.
“Edwards also stressed that his colleague Kyle Orland, the site’s senior gaming editor who co-bylined the retracted story, had ‘no role in this error.’”
Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.
Is there something to the story that I'm missing? Why does Orland need to apologize? Edwards fabricated the quotes via AI and seemingly presented them to Orland as authentic. Orland had no reason to suspect the quotes weren't real until after publishing.
When journalists are working on a shared byline, they don't each do the same research in order to fact-check each other. There is inherently a level of trust required for collaborating like this and Edwards violated that trust.
You can say this is a failure by the editorial process for not including fact checking, but that is an organizational issue with Ars, it's not the fault of Orland for failing to duplicate the work that he believed his coauthor did.
Yeah, consider the same thing in other domains. E.g., say you're doing a code review and the PR author is a coworker you've worked with for years, and they include a comment with a link to some canonical documentation along with a verbatim quote from said doc explaining the usage of something in the PR. If the quote and the usage both make sense in context, I'm not going to habitually click through to the docs to verify that the quote isn't fabricated.
> Why does Orland need to apologize? Edwards fabricated the quotes
He's on the byline and he's an editor.
> they don't each do the same research in order to fact-check each other. There is inherently a level of trust
If we're going to excuse this, what does the byline mean? He trusted the wrong person. It would be like a source lying to him: not the end of the world. But absolutely credibility-destroying if, instead of an apology, you get a word salad.
> You can say this is a failure by the editorial process
Orland is also an editor. (Senior gaming editor [1].)
This reads like “I was sick and my dog accidentally used AI to write my homework”
If the content is human-written and you check your sources, there is no way for AI to "accidentally" seep in. Sure, you can use an AI tool to find links to places you should check, and you can then go and verify the sources. That's obviously not what happened.
I clicked through the author's earlier stories when this first made waves. I obviously had no proof, but I was pretty certain that he'd been using LLMs to generate stories for a good while.
When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did, in the end?
I think there's a potentially different story here. He felt that he had no option but to work, even though he was so sick that he failed at the job. What's up with that? How insecure and pressured is his employment?
If it's not true, then the error is on him. But it seems plausibly bad to me as an outside observer of US employment and healthcare customs, and of the precarity of journalism nowadays. It is a sad state of things, in that it could be more a systemic failure than an individual one.
Are you saying unethical behavior is not a choice but forced by the system? That it would be unreasonable to expect people to behave ethically in situations where the system is set up in a way that does not reward ethical behavior? That lying and cheating can always be excused because if people didn't lie and cheat, they would endanger their societal status?
You will never get the internet to agree on how incident X should have been handled. I think the world right now is racing to figure out AI and its place; just when you think you understand, the ground shifts. It is clear that in the future this exact use of AI will be expected and will work, on average, far better than a person. I know a lot of people probably have an emotional 'no it won't!' and disagree with me here, but there have been so many 'no it won't! never!' moments passed in the last two years that I can't imagine this won't be another one.

With that in mind, I don't think it was reasonable to fire this journalist. They used a tool too soon, but it is really hard to figure out what 'too soon' is right now. This should have been a moment of reflection for their newsroom (and probably some private conversations), but it turned into a firing, which I think is too much. Did the newsroom gain from that? Will it prevent them from doing it again? Did it fix the original mistake? I don't think the answer is 'yes' to any of these questions. A good retraction, an apology, and a statement on how they are changing and will review new technology entering the newsroom in the future - those help.
The problem is accountability. If your name is on the article, this is your work. If you publish an article with fabricated quotes, it's your fault regardless of whether an AI tool was used, since you hit the button at the end to sign off on it.
I care about the future. I care that actions taken help improve the future. If someone makes a mistake, the question shouldn't ever be 'how do we punish them' but instead 'what actions can best improve the future'. Sometimes that does mean firing a person: if the effort to fix their behavior is more than the expected gain, then that is an option to consider (though not the only thing to consider).

In this case, though, I think there is likely more to it. What were their policies? Have they been pushing their journalists to adopt more AI tools? Even without pushing AI tools, have they been implying that speed matters more than accuracy? Was this truly JUST this journalist's mistake, or are there cultural elements missing in the newsroom? I would expect the head of that newsroom to have a detailed rationale for why firing this person was the right choice - how it helps them move forward and improve, and why this isn't just a decision to deflect blame from their internal culture problems. As it stands, this looks like a case of 'the internet got mad; do something to make them happy'.
The headline is a bit sensational, considering all we know from the reporting is that he isn't working there anymore. Likely fired, sure, but not known for a fact.
I guess Blameless Postmortems haven't arrived in journalism yet.
Pretty weird that journalism as a business still revolves around "we hired a guy to write a thing, and he's perfect. Oh wait, he's not perfect? It was all his fault. We've hired a new perfect guy, so everything's good now." My dudes... there are many ways to vet information before publishing it. I get that the business is all about "being first", but that also seems to imply "being the first to be wrong".
I feel bad for the reporters. People seem to be piling onto them like they're supposed to be superhuman, but actually they're normal people under intense pressure. People fail, it's human. But when an organization fails, it's a failure of many people, not one.
> I guess Blameless Postmortems haven't arrived in journalism yet.
Not anymore. Back in the day of print newspapers, a dozen people read an article before it was printed: editorial staff, fact checkers, legal review, layout, printers. If something slipped through – which was much rarer at the time – they'd also print a retraction.
Most of that stopped when newspapers and the blogosphere basically merged into one ad-funded business.
This isn’t a case of “made a mistake”/“did something incorrectly”, though. This is “knowingly broke the rules”. They had a policy against using our benevolent robot overlords to generate slop.
And fabricating quotes is pretty high up there in the list of things that journos should never, ever do.
Happy to see some accountability here, although it's unclear why the other co-author who stamped their name on that article was retained. Maybe they just stamped their name to meet a quota of articles. In any case, this follow-up action makes me take Ars Technica's standards a bit more seriously.
I liked his articles about AI. They were generally quite good; he has an understanding of AI that most journalists don't. But to use an LLM for the writing is deception.
This is good. They had to distance themselves from a journalist who would do such a thing. But this is more or less on the editor, I think. So let's see if they learn from this.
I'm very bad with names and quotes, so sometimes I'll ask ChatGPT something like "what's that famous quote Brian Kernighan said about programming language names" and it will just make shit up, when really I was thinking about Donald Knuth. But according to ChatGPT, Kernighan famously said:
“Everyone knows that Perl is designed to make easy things easy, and hard things possible, but nobody knows why it’s called Perl.”
Which of course returns 0 results on Google, as is customary for famous quotes.
If a tool is not fit for purpose then it either gets fixed or gets discarded/replaced.
AI is not a tool and, from the way things are going, never will be. Humans are more tool-like in that sense. In this case the human was discarded; the AI remains.
That was wise. It may have been an honest mistake, but it was a direct hit to his credibility that made not just him but the paper look sloppy. And in an era where people are deeply concerned about journalistic pedigree.
People have said enough about the ethics of all of it, but what I found even sadder: the story made me curious to take a look at the actual piece he "investigated" with AI. It's this one (https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...). It is, btw, a bit more than 1k words, which takes the average American reader - not a senior journalist - about 5 minutes.
This whole story involved asking Claude to mine that text for quotes (which it refused to do because the text included harassment-related content), then asking ChatGPT to explain that, and so on.
That entire ordeal probably generated more text from the chatbots than the few paragraphs of the blog post it was meant to summarize. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as the people who go "grok what does this mean" under every twitter post. It's like a schoolchild who cheats and expends more energy cheating than it would take to just learn the material.
I read the Bluesky post linked in the article, and the images Benj Edwards had posted on Bluesky.
The main comment I found relevant is probably this (there is more that he has written, but I am pasting what I find relevant to my comment):
> I have been sick with Covid all week and missed Mon and Tues due to this. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh
...
> I should have taken a sick day because, in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words.
> Being sick and rushing to finish, I failed to verify the quotes in my outline against the original blog source before including them in my draft
The journalistic system has failed us so much that in the news cycle we want things NOW. I think the Ars Technica post went viral on HN as well before the whole controversy, and none were the wiser until Sam commented about the false quotes.
The system prefers views, and to get views you have to do the work now. There is no room left for someone being sick, and I think this sort of thing extends to every job at times.
And instead of being a productive tool, AI can act as a noise generator. It writes enough noise that looks like signal, and tada, none are the wiser.
People think that giving a person AI is gonna make their work 10x greater, but what actually happens is that the noise is raised 10x and the work of finding signal in that noise increases 10x as well. (I am speaking about employment-related projects; obviously, in personal projects it might not matter whether there is 10x or 100x noise, if it can just do the thing you want it to do.)
When AI systems are constrained, they can deny your API request at marginal loss. But when human workers are constrained, they often can't refuse their employer's requests without taking massive losses (whole days of leave), and I have heard that in some countries sick days are a joke. This could very well be cultural, since sick days are well implemented in Europe compared to America (from what I hear).
I don't know about Benj, but some reporters are paid peanuts. Remember the Pakistani newspaper which pasted ChatGPT output verbatim, with content like "If you want, I can also create an even snappier 'front-page style' version with punchy one-line stats and a bold, infographic-ready layout perfect for maximum reader impact. Do you want me to do that next?" WITHIN the printed paper.
I believe that humans should be treated with more dignity, so that they feel comfortable taking sick leave when they are sick... or we should just fix this culture of people chugging along while sick.
Until then, AI is bound to be used - I don't think this is gonna be a single incident - and AI will keep producing noise and spewing random stuff. Imagine you are a journalist, you are sick, and you feel like there's a magical tool which can do the job for you. You use it, and in the moment of sickness, you are in IDGAF mode and push the article to main.
I personally don't believe this is gonna be a single incident, at the very least not with this whole story playing out like this.
If any journalist is reading this: please take sick leave when you are sick. Readers appreciate your writing, and I hope you don't integrate AI tools into your workflow so much that the work starts being done by the AI, as in this case. Even without AI, I feel like you might not be working in the best mental space while sick, and readers are happy to wait if you add unique perspectives to the story - something I don't think is possible when you are ill. If an employer still tries to pressure you, just share this message with them, haha, to tell them what the people want (and what brings them money long term).
I also hate how the culture has become about finding the article that came out fastest after an event, because that promotes AI use more often than not. To me it feels like jackals coming out of nowhere to grab whatever piece they can of a particular news story, and that doesn't feel like a great look. (I know nothing about how such journalism works, so sorry if I am wrong about anything - I usually am - but these are just my opinions on the whole thing.)
So the original blogger got slandered by an LLM agent, then got slandered again by a human journalist who used an LLM agent to write the article about him getting slandered by an LLM agent? How ironic.
But does that mean he got slandered twice by an LLM agent, or once by an agent and once by a human? Or was he technically slandered three times - twice by agents and a third time by the journalist? New questions for the new agentic society.
He was only slandered once, by the LLM agent. The Ars Technica article presented paraphrases that it falsely attributed as direct quotes, and was therefore factually incorrect reporting. But it was not defamatory by any reasonable standard. "Slander" isn't just a synonym for "lie".
I wasn’t using the word in a legal sense, poindexter. I didn’t pretend to be a lawyer either. Slander in the colloquial sense is whatever the person doesn’t want attributed to them, and it is often used as a synonym for a lie.
Besides, I am sure you could tell it was just a joke but needed to be pedantic for no reason other than to feel smart?
I think that's the nail in the coffin. Most other outlets could call it a giant whoopsie, but here it goes to the heart of their credibility. How could they continue to write authoritatively about AI, having done this?
The crazy part to me is that even here on HN there are people who still insist that LLMs don't fabricate things or otherwise lie.
I wonder if these are the same people who, 3-4 years ago, were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion-dollar business.
Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.
>I wonder if these are the same people who, 3-4 years ago, were insisting that putting 20 characters onto a blockchain (i.e. an NFT, which was just a URL) was the next multi-billion-dollar business.
The NFT protocol doesn't really care what the payload is. NFT purveyors likewise don't care what their payload is, as long as they can use the term "NFT".
NFTs are great for certain use cases (CryptoKitties is still around, I believe), but there was never a single moment when I considered that owning a weird ape JPEG, even if it was somehow properly owned by me, would be worth millions of dollars or whatever. It's like trying to sell a "TCP".
That said, future blockchain applications will probably still rely on NFTs in some fashion. Just not the protocol-as-product weirdness we got for a few years there.
I've not heard many people claim that LLMs don't hallucinate. However, I have seen people (whom I previously believed to be smart):
1. Believe LLMs outright even knowing they are frequently wrong
2. Claim that LLMs making shit up is caused by the user not prompting correctly. I suppose in the same way that C is memory-safe and only bad programmers make it not so.
> while working from bed with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him
Oh right, being ill is what caused the error. I'd bet that if you start verifying this author's past content, you will see similar AI slop. Either that, or he has always been ill with very little sleep.
I love facts, reasoning, and logic, and I'm not known for being biased or opinionated - something the Ars comments section has very much become, a place where unpopular points of view are downvoted to hell.
AI is mocked even though the vast majority of Ars commenters have been using chatbots extensively for years. You know what that's called? Hypocrisy.
I'm not from the US and I'm not partisan; in fact, I find the two-party US system to be extremely backwards, illogical, and detrimental to the whole nation.
Can you elaborate? Perhaps I haven't noticed that they push pro-sponsored content (what does this mean, exactly?). I do find their comment section to be pretty lousy, and very partisan. But the tech coverage always seemed fair enough. What am I missing?
If you feed their articles into a Python script that identifies biases, subtle upsells, and advertorials, you will see that a bunch of them are just promotional marketing for certain companies (a sketch of what such a script might look like is below). They also almost never report the news, just opinions about it.
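For what it's worth, a minimal sketch of what such a script could look like - a crude keyword heuristic, entirely hypothetical, and surely not whatever tool the comment had in mind:

    # Hypothetical, crude advertorial detector: promotional phrases per 1,000 words.
    import re

    PROMO_PHRASES = [
        "available now", "best deal", "discount code", "exclusive offer",
        "you can buy", "sponsored", "affiliate link", "limited time",
    ]

    def promo_score(article: str) -> float:
        words = len(re.findall(r"\w+", article)) or 1
        hits = sum(article.lower().count(p) for p in PROMO_PHRASES)
        return 1000 * hits / words

    article = "The gadget is an exclusive offer, available now with a discount code."
    print(f"{promo_score(article):.1f} promo phrases per 1,000 words")

A real analysis would need something much smarter than a phrase list, but even this sort of thing makes the advertorial pattern measurable rather than just a vibe.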
He was supposed to be their "Senior AI Reporter." His including basically anything from LLMs in articles without verifying it demonstrates not only a complete lack of credibility as a writer, but also a complete lack of understanding of AI. Even if they might have personally wanted to keep him on, you just can't after something like this.
What is the connection between these two statements? Are we supposed to presume that someone who apologizes on Bluesky should never be fired? Or did you also read the article and thought this was important information?
Is it “plagiarism” to misattribute hallucinated quotes? Not that a whole lot of sloppy, unprofessional shortcuts weren’t taken, but plagiarism doesn’t seem like the right word, as quotes are almost definitionally not plagiarism. But maybe these were paraphrasings masquerading as quotes, so maybe that’s the difference.
"Slop" and "hallucinate" have meanings outside of AI too, but it's easier to repurpose existing words than come up with a whole new lexicon for AI failure modes.
The raison d'être for the journalist, in AD 2026, is less to gather information than to verify it. The journalist who cannot be trusted is no journalist at all. He is a blogger.
"Apologized on Blue Sky" is absolutely no reason to keep them. The author did the absolutely worst things a journalist can do (short of actual corruption) and is unfit for the job:
- He didn't care for his story,
- he didn't care to verify his story,
- he published made-up bullshit,
- he put words in a real person's mouth,
- and he didn't even care to write the thing himself.
Why keep him and pay him? What mentality does all of the above show? What respect, both self-respect and respect for the job?
If they wanted stories from an LLM, they can pay for a subscription to one directly.
Hope this sends a message to journalist hacks who offload their writing or research to an LLM.
That's the thing. I feel kinda bad for Benj - I don't wish him ill, and maybe he keeps writing on his own site and/or other places - but I don't see any way he could have kept writing for Ars.
It’s sad to see Ars Technica at this level.
"I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words,” Edwards continued. He emphasized that the “text of the article was human-written by us, and this incident was isolated and is not representative of Ars‘ editorial standards."
----------
A reporter whose bailiwick is AI should have known that he needed to check any quotes an LLM spat out. The editorial staff should have been checking too, and this absolutely is representative of their standards if they weren't.
It would probably be worth checking to see if any other articles or employees have similarly disappeared.
Editorial staff?
Our news sources are poor, weak now.
Yes: in newsrooms, this is the editor's responsibility. I note the editor wasn't fired.
They did report on the article quote sourcing debacle at the time - perhaps not as quickly as some would’ve liked, but within a couple of days.
Yes. Normally, and Ars is generally up to that standard, the editorial staff (or Editor in Chief) updates the article, adds a note about the correction, and further adds that the original author of the article is not working with Ars anymore.
It stays as a mark, immortalizing the error, but it's a better scar than deleting and acting like it never happened.
I also want to note that this last incident response is not typical of the Ars I'm used to.
> this last incident response is not typical of the Ars I'm used to.
They never really announced Peter Bright leaving Ars Technica either, though. At least not until much, much later.
That was a criminal case, though. The court process may have prevented them from talking about it to keep things fair.
I'm not a US citizen and IANAL, so YMMV.
The BBC reports on itself quite well (maybe too much even). Here's an example:
https://www.bbc.co.uk/news/articles/cly51dzw86wo
I think they're an outlier, but still I was disappointed by Ars's response. They deleted the article and didn't detail what was wrong with it at all. Felt like a cover-up.
To be completely fair, BBC news is effectively a different organisation which has the BBC name. There's a fairly good overview of it here: https://www.bbc.com/sport/football/articles/c80l3074mgko
BBC News does have to report on itself from time to time. Here's its "live" feed from November on the Parliamentary Committee investigation into the Trump speech edit incident:
https://www.bbc.co.uk/news/live/cp34d5ly76lt
(edit: technically, it was Panorama. I'm not sure if that is part of the News remit or separate from it).
> They deleted the article
This was a big disappointment. I read the original article and the comment from the source highlighting the error, knew what was wrong with it, and still think it was the wrong move to just delete the article and all the original comments, and replace it with an editorial note.
This is a kind of cover-up. It's impossible to hide the issue but they went to great lengths to soften the optics and remove the damning content from the public record. They obscured the magnitude of the error. It looks like another "person uses AI and gets some details wrong".
What they've done so far tells me everything I need to know about the journalistic principles at Ars Technica: the decisions that allowed the issue to occur in the first place (e.g., no editorial review before publishing) and the first reaction to the incident (just destroying the content, article and comments alike). It's a major loss of trust for me.
They’re at this level because the editors have always had low standards.
I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.
They’ve had this problem for years. They will publish anything that gets them clicks. They do not care if a writer makes things up. They do not care if their headlines are misleading - in fact, that’s the point. They clearly got into the job in order to influence and manipulate people.
They’re bad people, with terrible motivations, and unchecked power. They only walk back when something really really bad happens.
Never trust an Ars headline.
Same for The Verge. Sometimes their headlines or content contain factual errors. If you point it out in the comments, sometimes they do it properly and add a correction; other times they quietly fix it and delete your comment. So much for their free speech stance and editorial practice.
> They’re at this level because the editors have always had low standards.
It's not just Ars Technica. I would go as far as saying it's the big majority. I work at the biggest alliance of public service media in the EU, and my role requires me to interact with editors. I don't like painting with a broad brush, but I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude. Probably it's the "public" aspect of the media, but I would argue it's the editorial aspect too. The rest of the staff are often very nice and down to earth.
> but I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude.
They're like "UX experts" in software. One does UX for software, the other does UX for text. Same attitude problems, from the way you describe it. If the expert in something so subjectively judged is seen to be conceding anything, that might undermine their perceived expertise. Any push back is interpreted as somebody challenging their career.
> They approach everything with an "I know better than anyone else" attitude.
My charitable read is that if one has to interact with the public, one naturally develops an understanding of what is wrong with it.
> I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.
I believe they are doing A/B testing on these.
Ah yes, I remember correctly for once: https://arstechnica.com/civis/threads/why-do-front-page-arti...
TL;DR: They have been doing mandatory A/B testing since 2015.
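For anyone wondering what that looks like mechanically, headline A/B testing is usually just deterministic bucketing. A minimal sketch (invented names, not Ars' actual system):

    import hashlib

    def pick_headline(article_id: str, visitor_id: str, variants: list[str]) -> str:
        """Stably assign each visitor one headline variant.

        Hashing spreads traffic evenly across variants while keeping the
        choice stable per visitor; the CMS then records click-through per
        variant and promotes the winner.
        """
        digest = hashlib.sha256(f"{article_id}:{visitor_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]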
Example?
> Aurich Lawson of Ars Technica deleted the original article
That's a very "shoot the messenger" statement. While Aurich is the community "face" of Ars, I very much doubt he has the power to do anything like that.
It seemed to me like very hasty self-defense: there's a lot of AI-slop hate out there, and Ars can't risk becoming known for slop when its readers are probably especially aware of the issue.
I don't think Ars felt they had any choice but to cut off the journalist who made the mistake, especially on such a touchy subject; it's impossible for us readers to know whether this was a single lapse of judgement or a bad habit. Regardless, the communication should have been better.
All they had to do was write a clear and simple message saying that one of their staff was responsible, has been fired, and they'll take steps to avoid this in future.
Their actions so far just make me think they're panicking and found a scapegoat to blame it on, but they're not going to put any new checks in place so it'll just happen again.
It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.
I feel bad for the guy, but I can't imagine much better safeguards beyond editors paying closer attention to referenced sources, and hiring more reliable people.
> It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.
More than that: as a reporter on AI, he should have been fully aware that AI frequently bullshits and lies. He should have known it was not reliable and that its output needs to be carefully verified by a human if you care at all about the accuracy or quality of what it gives you. His excuse that this was done in a fever-induced state of madness feels weak when it was his whole job to know that AI was not an appropriate tool for the task.
>his whole job
Possibly akin to a roofer taking a shortcut up there, then taking a spill? You knew better but unfortunately let the fact that you could probably get away with it with zero impact decide for you.
IIRC the hallucinations were initially kicked off by user error. Or rather, to put it more carefully: a journalist using the best available technology should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop and without human review.
(e.g. imagine Karpathy's llm-council with extra harnessing/scripting, so even MORE expensive, but still. Or some regex!)
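And the regex version really is about that simple. A minimal sketch of such a check (naive quote extraction, hypothetical helper, but it would have flagged this incident):

    import re

    def unverified_quotes(article: str, source: str) -> list[str]:
        """Return quoted strings from the article that don't appear verbatim
        in the source (whitespace-normalized to survive line wrapping; naive,
        since real articles also use curly quotes)."""
        quotes = re.findall(r'"([^"]{20,})"', article)
        flat_source = " ".join(source.split())
        return [q for q in quotes if " ".join(q.split()) not in flat_source]

Anything this returns is either a paraphrase or a fabrication, and needs a human to look at it before publication.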
Alternatively… there was no AI error, the reporter made up the quotes, and lied when they were challenged.
You have to give them time to do the job properly as well. Companies will often pay lip service to standards then squeeze their staff so much those standards are impossible to attain.
Yes, those are exactly the kind of steps they would need to publicly commit to in order to retain trust. And yet, instead we get silence, no acceptance that some measure of responsibility falls on the editorial team here. So it's clear they just hope it'll blow over without them having to do anything, which is the opposite of what a trustworthy site would do.
AnonC doesn't seem to be upset that the journalist was fired. The disappointment comes from Ars trying to brush this entire situation away by deleting articles, comments, and making no statement on their website.
My understanding is that AnonC is upset at Ars not taking the mature approach by allowing this to become a learning moment for the employee and using it to double down and confirm their stance on AI generated content. There's strength in maturity. But I am doing some reading between the lines, and I'm possibly reading a bit too much into "There’s something to be said about the value of owning up to issues"
Reminds me of a story I was told as an intern deploying infra changes to prod for the first time. Some guy had accidentally caused hours of downtime, and was expecting to be fired, only for his boss to say "Those hours of downtime is the price we pay to train our staff (you) to be careful. If we fire you, we throw the investment out the window"
"Make sure quotes in your article are things the subject actually said to you" is not something that should need a "learning moment".
There is a difference between making an error and totally misunderstanding your actual task. I have absolutely no sympathy for journalists caught publishing hallucinated articles. That's an absolute no-go, and it should always result in that person being fired.
Same goes for engineers reviewing vibeslop. If you let that shit through code review, and a customer impacting outage results, that should be instant termination. But it won't be, because as an engineer you are supposed to be held "blameless" right?
I love vibe coding but you are absolutely right. We're at the stage where vibe coding is a fun way to produce sloppy software, and that's fine if the intended user is just yourself and you're fully informed about what you're getting into. But actually shipping vibe-coded slop to other people is wacky; anybody doing that needs to be manually reviewing every commit very carefully and needs to be prepared to accept personal responsibility for anything that slips by.
The journalist's job was not to review AI slop. That is a rather crucial difference.
Accidentally taking down production should not lead to firing. It should lead to improved process.
Making up quotes for article, with technology or not, should lead to firing.
Ars has never commented on firing staff before, and it has happened on several occasions. You get the occasional article when someone joins, never when someone leaves. They should have published another article after all this, but I would not expect them to comment on staff.
And I think thats a good thing. People screw up, and journalists are people. This person's punishment for their screw up was losing their job. They do not need to be dragged into a hit piece.
Ars can, and probably should if they have not already, publish a piece about hallucinations and use of AI in journalism, and own up to their own lack of appropriate controls and reflections. They do not need to drag the authors name into the write up. It can be self critical of themselves as a journalistic outlet.
I'm sad to see them fire him; I've seen far worse. I have always approached issues by asking for accountability and improvement. Frankly, he already did that: he openly apologised. I was very happy with that; it demonstrated integrity, and I kept respecting him.
Even worse,
> I have been sick in bed with a high fever and unable to reliably address it (still am sick) [0]
In an earlier HN thread, I saw someone ask why Ars was requiring staff to work while ill. If that's true, and if he posted without verification while sick and under pressure, which is implied and plausible, the firing looks doubly bad.
Ars has lost a lot of my trust in recent years, with articles seeming far worse. Just like you, I'm sorry to see the editorial position here.
[0] https://bsky.app/profile/virtuistic.bsky.social/post/3mey2mq...
You're taking his fever dream excuse at face value, and I think you probably shouldn't. It reads like a lame excuse to deflect personal responsibility, a cynical face-saving tactic.
If the illness was genuine, can he document that he advised management of the fever and that they told him to submit an article anyway? It's not his boss's job to stick a thermometer up his ass every morning.
He posted his not-very-impressive apology as images, not as text that is easily indexed. I do think that was purposeful and manipulative, and it very much makes me question his motivation. If I'm missing the original posting in text form, I'd sure like to know so I can correct this perception.
Where I work in healthcare, honesty and owning up are encouraged, and unless there is major negligence, not often punished. They just want to learn why the mistake happened and look for ways to prevent it going forward. My buddy said that at his company, if an accident happens, WorkSafe is not out to punish as long as people are forthcoming and honest. Again, they want to learn how to avoid it happening again. Punishment only scares others into hiding mistakes.
I think they missed a big opportunity: instead of firing the guy, sit him down and stress how not okay this was, that it harms their credibility, and that he needs to understand that and make a proper apology. They could make him do some education on ethical reporting responsibilities or whatever.
Then, like you say, not just hide the article but point out the mistakes and corrections: describe the mistake, explain that credible reporting is their priority, and say the author will be given further education to avoid this happening again. They could also make new policies, like requiring that, going forward, all articles that use AI for search must find a source for that information. This would build trust, not harm it, in my opinion.
I agree. I'd add that the fact he appeared to be working while sick -- and that he pre-emptively and immediately publicly apologised -- means I think he already did behave as he should.
This makes me question Ars not him. Loss of credibility indeed.
This has just happened - I'm giving Ars a bit more time to come out with a piece examining the situation. They're a pretty good operation, I think. But if they don't...
They're a random tech blog, the kind of website that is peak time-waste slop; why would they have any standards? Even The New York Times and The Washington Post put up wrong things all the time without corrections. People need to realize journalists are just ad sellers, not some beacon of truth. They are there to sell ads, the same way a YouTube video of a guy eating too much food in front of a camera is.
Journalism has devolved into content creation in the literal sense of the word, they are just there to put something inside the div with the id "content", to justify the ads around it.
"People need to realize journalists are just ad sellers, not some beacon of truth."
You just changed the meaning of "journalist." Sure, the job of some journalists would be better described as ad selling, but I'd rather call those people that and reserve the original term for actual journalists who actually care about truth. Because they still exist.
The 3 people that work at Reuters actually doing journalism are not doing in ANY way a similar job to the millions writing blog posts for Ars Technica like publications. The latter is an ad seller indeed. And the majority of publications that are renowned also do little to no journalism.
It's as if we called web devs who learned JS on Udemy and just vibe code "computer scientists" and treated them as if they publish compiler research papers. It's just a completely different job.
Eric Berger at Ars, for instance, is someone I consider a journalist. Do you have proof that he systematically neglects truth in favor of ad selling?
Berger is a real one. I'm surprised he's lasted so long at Ars Technica. I think eventually his objective reporting on SpaceX will get the Ars Technica reader base to demand his firing; Ars readers are very reddit-like: team-minded, not interested in hearing dispassionate takes. Hearing Elon Musk criticized as a person while simultaneously seeing SpaceX described as a real and highly accomplished company gives reddit/Ars readers tonal whiplash, and such people prefer simple narratives without nuance.
See also, in this very thread, somebody who thinks Berger has a strong pro-musk bias because his reporting and books say that SpaceX are good at what they do.
It's cuz Ars's roots are in being video game bloggers and graphics card reviewers, not legitimate journalists. They don't have a notion of professionalism or journalistic duty, only virality and juicy takes.
You're participating in a social media site where something like 20% of the articles have become "I told Claude Code to do something and write this article about it." So put your money where your mouth is: if you think it's sad, if this is more than concern trolling, hit Ctrl+W.
I have a story with Benji.
Last year I went viral, and Benji was the first person to interview me. It was a really cool experience, we chatted via Twitter dms, and he wrote a piece about my work - overall did a decent job.
Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response.
Then, tech crunch wrote an article on our project.
I reached out to Benji again, saying "Hey, would you like to chat again, now that we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)
I thought that was rather strange, especially since we already had built up a relationship.
I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.
Oh one other tip for anyone reading this - if you do ever get reached out to by journalists, communicate in writing, not a phone call so you can be VERY precise in your wordings.
> Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response. [...]
> I reached out to Benji again, saying "Hey, would you like to chat again, now that we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)
> I thought that was rather strange, especially since we already had built up a relationship.
The US mentality might be different, but at least having grown up and living in Germany, such an annoying hustler who wants to use some journalist as a marketing influencer for his private project is a huge no-no. In other words: it is a very reasonable decision (perhaps even the only right one) for any journalist to fob off such a hustler.
I'm a journalist. As a general rule, if someone approaches me with a pitch for a feature or investigation (not news piece) that was already published elsewhere, I'll turn it down. To be fair, I turn down all PR pitches, but there are journalists who don't but still want an exclusive.
It sometimes happens that you spend weeks or months working on a story, only to be scooped by another publication. It sucks, especially if you think your story is the better one, but unless you can pivot or add a substantial amount of new insight, it won't come out.
Sometimes people get busy and overwhelmed, but they don't know how to say no.
I know a lot of people that don't get through their email every week, for example. Even saying no takes too much time, with the volume of communication required by daily work.
Very few people email me except for the endless newsletters that I accidentally signed up for. I try to unsubscribe from a few every day, but it seems never-ending.
In the event that you actually do end up emailing me, it's contingent on me actually checking my personal email, which I never do when I'm not working, and only sometimes do during work hours.
If it's you asking me a favor that I'm not in the mental space for, I'll mark the message unread as a reminder to get to it later.
Maybe I just have weird email habits, but I can get away with this because email is not a heavy part of my job.
That being said, one guy was pitching me on something several times a month for several months. I just recently responded to him and apologized because of x y z. He said don't worry and we had a fruitful conversation later.
So, follow through is important!
Their repeat emailers might win eventually!
Passing on some life advice to anyone who'd benefit: people are busy. "Maybe they didn't respond because you're annoying?"... no, no. Feel it out and text again a while later. Give them another shot; get back to the top of their inbox or messages.
After someone told me that I realized it’s true!
This is an experience I've had with reporters multiple times. They don't like to write about the same thing twice.
My hunch is Ars will copy/reword/repost articles from real news sources (basically free for Ars) or do its own reporting for exclusive stories (which costs reporters some time). There is no reason for Ars to spend reporter time on something it can copy.
It's an open secret that even the larger news outlets mandate LLM use. They buy subscriptions and have guidelines on how to mask the output (so that it reads less AI'd), how to fact-check the links and the quotes, etc. Authors who aren't willing to jump on this particular train are quickly let go due to "performance."
The expectation is to produce more with much less (staff), the pipeline is heavily optimized for clicks, and every single headline is A/B tested. Ars isn't alone in churning out poorly reviewed clickbait (and then not owning its mistakes).
Is there any evidence that Ars Technica management induced this journalist to use AI, or are you just claiming it's an "open secret" without knowing anything about this specific incident? Without any kind of details it sounds like the latter, maybe motivated by a reflex to blame management whenever workers blunder. Unless there's evidence that actually points at Ars Technica management, dismissing the journalist's professional responsibilities using vague rumors doesn't seem appropriate.
I didn't state that Ars Technica specifically mandates LLM use for its authors. What I did state is that their editorial standards are lacking, and they tend to produce a lot of clickbait.
IMO the industry is in crisis
As much as I respect the site and gladly financially support it, this is ultimately a failure on Ars Technica and its editors. If there are any.
If this were just some random blogger, then yes the blame is totally theirs. But this was published under the Ars Technica masthead and there should have been someone or something double checking the veracity of the contents.
That said, there are a number of Ars Technica contributors who are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham amongst many, so one f'up shouldn't really impugn the entire organization.
Eric Berger has a strong pro-Musk bias (having literally written a fawning book about him). To him, Musk can do no wrong, it seems.
I also dislike Dan Goodin’s reporting. He tries to talk the talk, but nearly every article he writes has some tell that he doesn’t really understand the thing he’s reporting on. Which is fine if he was relying on third-party expertise and quoting that, but he tries to make it sound like he has the expertise and it just comes up short. I feel like he’s a good example of that old fallacy that you think the news is correct about everything, until they report about something you know.
For me, Ashley Belanger is the best reporter they have. She might not have the subject matter expertise some of the others there claim, but she has the best journalism of anybody there. Lots of direct sources, well written, and the right level of depth. I honestly feel like I’m reading a different (and better) publication when I read her articles. More than once, I’ve had to scroll up to see if the article I’m reading was one of Ars’ licensed outside pieces, as the quality bar was higher than I’m used to, only to find her name.
Beth Mole is a close second. She has subject matter expertise, good journalism, and loves to slip in some humor or justified “get a load of this idiot” comments.
I'd say if one has any interest in writing objectively about space technology, one will likely end up being perceived as having a "pro-Musk bias".
Elon himself is indeed questionable, but you really can't argue with his space-related achievements. Even other eccentric billionaires like Bezos haven't come close.
Berger wrote 2 books about SpaceX (not Musk), and he definitely does not have a pro-Musk bias.
He is careful not to opine on Musk's other dealings, which is fair. As someone who wants to know more about SpaceX, I don't want to read yet more about Tesla, or Twitter, or Trump, or Epstein.
Personally, one of the authors I most like to read on ArsTechnica (though he writes rarely nowadays).
CarTechnica, though... yuck. Also, Ouellette reliably picks movies and TV shows I will absolutely hate, so I guess good S/N there?
Mole's coverage is great if you're into Cronenberg-but-in-real-life.
I think it's pretty widely agreed in the space flight community that Eric Berger is currently the best space flight reporter in the world. He has lots of insider sources. Several times he correctly predicted things years in advance. Most recently the Artemis III change to a LEO mission.
Yeah, Goodin's stuff is often slop. Probably human slop, but slop nonetheless.
> That said, there are a number of Ars Technica contributors that are among the best in their fields
I miss Maggie Koerth & Jon Stokes
Yes, dearly missed. As is John Siracusa’s Mac OS reviews.
I think you have far too much faith in the process for the big media sites
Context from earlier discussion of the article being pulled: https://news.ycombinator.com/item?id=47009949
Thanks! and indeed - here's the sequence (in the usual reverse order). If there are missing threads we can add them...
OpenClaw is dangerous - https://news.ycombinator.com/item?id=47064470 - Feb 2026 (93 comments)
An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (82 comments)
Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (624 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (951 comments)
AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)
Also, how the heck do you pull up all these related things? Do you have a semantic/agentic search bot by now, or is this all just from your head?
It should be a semantic search bot and maybe will be in the future, but for now I rely on the method described at https://news.ycombinator.com/item?id=45546715 and the links back from there.
> It should be a semantic search bot and maybe will be in the future
No. We only get the dystopian AI features, not the useful ones.
dang, we appreciate all you do. thanks
> Edwards said that he was sick at the time, and “while working from bed with a fever and very little sleep,” he “unintentionally made a serious journalistic error” as he attempted to use an “experimental Claude Code-based AI tool”
I'm skeptical. I hate to be the one to say it, but I don't think this would have happened if he was using Claude 4.6 Opus.
It's another way of saying "dog ate my homework".
The headline says Ars fired the reporter, but AFAICT the article doesn't include any facts that indicate this. All we know is that he no longer works there, and that Ars refused to provide any additional information.
> the article doesn't include any facts that indicate this.
It does include two facts:
1. That the reporter's bio on the webpage changed "...is a reporter at Ars" to "...was a reporter at Ars". On the one hand, that's pretty thin sauce. On the other hand, that's not exactly the sort of change that gets made randomly.
2. They reached out to the various people involved, and although nobody has confirmed it, it's also the case that nobody has denied it.
IANAL, but those facts could support "fired", or "resigned", or "short-term contract not renewed", or probably other stuff.
Neither side has issued a statement about what happened, but Benj’s Bluesky post does not read like a post of someone who would have resigned due to this.
I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it got from 'practically useless' to 'the actual google search' in less than two years.
I really don't know where the internet is heading or how any content site can survive.
It's because the AI overview is, most of the time, directly summarising the search results rather than synthesizing an answer from internal model knowledge, which is why it can now hyperlink the sources for its facts. Even a very dumb, lightweight model can extract relevant text from articles.
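It's essentially the "retrieve, then summarize with citations" pattern. A sketch of the prompt-assembly half (illustrative only; obviously not Google's actual pipeline):

    def build_grounded_prompt(question: str, results: list[dict]) -> str:
        """Number the retrieved snippets so the model answers only from them
        and cites [1], [2], ... which the UI can turn into hyperlinks."""
        sources = "\n".join(
            f"[{i}] {r['title']} ({r['url']}): {r['snippet']}"
            for i, r in enumerate(results, start=1)
        )
        return (
            "Answer using ONLY the sources below, citing each claim with "
            "its [number]. If the sources don't answer the question, say so.\n\n"
            f"Sources:\n{sources}\n\nQuestion: {question}"
        )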
I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.
> I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.
Yeah, that's why I said I don't know where the internet is heading.
You can see the fall in real time - half the sources are also dubious AI slop now and that number’s only growing :-/
At work, the conversation is that everyone is using LLMs now, yet we receive virtually no traffic through them. The LLMs scrape our data and provide an answer to the user, and we see nothing from it.
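You can watch this happen in your own access logs. A quick sketch that tallies hits from AI crawler user agents (the substrings below are real crawler names as of this writing, but vendors change them, so check their docs):

    from collections import Counter

    # User-agent substrings of known AI crawlers.
    AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot")

    def crawler_report(log_path: str) -> Counter:
        """Tally requests per AI crawler in a standard access log."""
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                for bot in AI_BOTS:
                    if bot in line:
                        counts[bot] += 1
        return counts

Compare that tally against the referral traffic you get back from those vendors' products and the asymmetry shows up immediately.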
I have the same worry about LLMs in general - I know that ‘model collapse’ seems to be an unfashionable idea, but when the internet’s just full of garbage (soon?…), what are we going to train these things on?
How often are they scraping?
Also generally wondering… Do labs view scraping as legally safer than trying to cache the Internet? I figure it’s easy to mark certain content as all but evergreen (can do a quick secondary check for possible new news).
Maybe caching everything is too expensive?
> I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it got from 'practically useless' to 'the actual google search' in less than two years.
It says things I know to be false fairly regularly. I don't keep a log or anything, but it's left an impression that it's far from reliable.
Today I searched something and almost pasted the output into an internet forum discussion I was having. But I decided to check the wikipedia source just to make sure. The AI summary was not quoted directly from wikipedia, and it got some major aspects wrong in its summary. Lesson learned.
> I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links.
How would you know?
In my experience, the links often contradict or fail to support the overviews.
You should be checking the links more often, IMO. I've seen it respond a number of times with content that is not supported by the citations.
While trying to find an example by going back through my history, though, the search "linux shebang argument splitting" comes back from the AI with:
> On Linux and most Unix-like systems, the shebang line (e.g., #!/bin/bash ...) does not perform argument splitting by default. The entire string after the interpreter path is passed as a single argument to the interpreter.
(that's correct) …followed by:
> To pass multiple arguments portably on modern systems, the env command with the -S (split string) option is the standard solution.
(`env -S` isn't portable. IDK if a subset of it is portable or not. I tend to avoid it, as it is just too complex, but let's call "is portable" an opinion.)
(edited out a bit about the splitting on Linux; I think I had a different output earlier saying it would split the args into "-S" and "the rest", but this one was fine.)
> Note: The -S option is a modern extension and may not be available
But this… so which is it?
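For the curious, a minimal way to see the behavior yourself (assumes GNU coreutils >= 8.30 or a BSD env, since -S is indeed not in POSIX):

    #!/usr/bin/env -S python3 -u
    # The kernel passes everything after the interpreter path in a shebang
    # as ONE argument: "#!/usr/bin/python3 -u -E" would hand python3 the
    # single argument "-u -E" and fail. env -S splits "python3 -u" into
    # separate words before exec'ing the interpreter.
    import sys
    print(sys.argv)  # ./demo.py foo bar -> ['./demo.py', 'foo', 'bar']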
It is scary but also exciting. As long as there are humans making informed decisions, there will be demand for quality sources of information. But to keep up with AI, content sites will need to raise their standards. Less intrusive ads, less superficial stuff, more in-depth articles with complex yet easily navigable structure, with layers of citations, diagrams, data, and impeccable accuracy. News articles with the technical depth of today's dissertations.
For AI to steal and summarize without attribution, you mean. The sites you talk about exist today, but they are dying because of AI.
Well, I hope you take this story as a caution that you shouldn't do that in any way that can seriously compromise your career/health/finances.
Try searching for something niche. You'll get a confidently wrong and often condescending answer.
The AI summary has been wrong so many times for me. Not that I ever trusted it anyway.
I think content sites will need to rely on supporters (a la Patreon or Substack). It's shitty, but it's what the internet has come to.
I have seen it be utterly wrong so many times recently that I'm considering permanently hiding it. For instance, googling "Amiga twin stick games" listed a number of old, top-down, very much single-axis games like Alien Breed as examples.
Really? I’ve noticed that the AI overview is full of glaring issues repeatedly. It’s akin to trusting the first Reddit post that is found by Google.
I know people love to hate on the AI overviews, and I'm a person who generally hates both Google and AI. But I see them as basically good and ideal. After all, most of the time I am googling something trivial, like a simple fact. And for the last decade, when I've had to click into sites for the information, it's been some SEO-spam-ridden garbage site. So I am very glad to not have to interact with those anymore.
Of course, Google gets little credit for this, since it was their own malfeasance that led to all the SEO spam anyway (and the horrible expertsexchange-quality tech information, and the stupid recipe sites that put life stories first)... but at least now there is some backpressure against the spammy crap.
I am also convinced that the people here reporting that the overviews are always wrong are... basically lying? Or, more likely, applying some serious negative bias to the pattern they're reporting. The overviews are wrong sometimes, yes, but surely it is something like 10% of the time, not always. Probably they're biased because they're generally mad at Google, or at AI being shoved in their faces in general, and I get that... but you don't make the case against Google/AI stronger by misrepresenting it; the argument is stronger if it's accurate and resonates with everyone's experiences.
> I see them as basically good and ideal. After all, most of the time I am googling something trivial, like a simple fact. And for the last decade, when I've had to click into sites for the information, it's been some SEO-spam-ridden garbage site.
What good is it if the overviews lie some percentage of the time (your own guess is 10%) and you have to search to verify that they aren't making shit up anyway? Also, those SEO-spam-ridden garbage sites Google feeds you whenever you bother to look past the undependable AI summaries are mostly written by AI these days and prone to the same lying, which only makes fact-checking Google's auto-bullshitter even harder.
> I am also convinced that the people here reporting that the overviews are always wrong are... basically lying?
https://en.wikipedia.org/wiki/Availability_heuristic
No one remembers when AI Overview gets the answer right (it's expected to do so after all) but everyone has their favorite examples of "oh stupid AI."
That's incomplete, because another "nobody remembers" is when the hallucination differs from reality, but the reader doesn't promptly detect the problem and remember where they got it from.
Think about the urban legends in the style of "the average person eats X spiders per year." It's extremely unlikely that Rumor Patient Zero is in a position to realize it's wrong, or that they will inform the next person that it came from an LLM summary.
It will cycle.
Without the content sites, the AI overview will become useless.
Uh, really? In my experience, at least a quarter of the info it gives me is usually manufactured or incorrect in some critical way.
In fact, if you switch to "Pro" mode, it frequently says the complete opposite of what it claimed in "Fast" mode while still being ~10-20% wrong. (Not to say it's not useful; there's no better way to aggregate and synthesize obscure information, but it should definitely not be relied on as a source of anything other than links for detailed follow-up.)
I don't know that this is what happened here, but any time there is a push to do more with less, you end up rewarding people who take shortcuts over those who do a proper job, and from the outside, it looks like journalism has a push to do more with less.
That's basically the problem. If the shortcut produces something passable 95% of the time and nobody is checking, it just looks like you're faster. Journalism just has a more public failure mode than most fields.
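The arithmetic on that is grim. With an illustrative 5% per-article failure rate (not a claim about anyone's actual numbers), a prolific reporter is nearly guaranteed a public blowup:

    # Chance of at least one fabricated detail slipping through,
    # assuming independent 5%-bad articles (illustrative only).
    p_bad = 0.05
    for n in (10, 50, 100):
        print(n, "articles:", round(1 - (1 - p_bad) ** n, 2))
    # 10 articles: 0.4
    # 50 articles: 0.92
    # 100 articles: 0.99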
“Edwards also stressed that his colleague Kyle Orland, the site’s senior gaming editor who co-bylined the retracted story, had ‘no role in this error.’”
Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.
"I always have and always will abide by that rule to the best of my knowledge at the time a story is published."
Nothing suspicious about heavy use of qualifiers in a non-apology blanket denial. Where's the Polymarket for whether this guy has a job next month?
https://www.404media.co/ars-technica-pulls-article-with-ai-f...
> whether this guy has a job next month?
That’s a problem. If he really hasn’t apologized, neither he nor Ars have recognized there is a problem, which means it will happen again.
Is there something to the story that I'm missing? Why does Orland need to apologize? Edwards fabricated the quotes via AI and seemingly presented them to Orland as authentic. Orland had no reason to suspect the quotes weren't real until after publishing.
When journalists are working on a shared byline, they don't each do the same research in order to fact-check each other. There is inherently a level of trust required for collaborating like this and Edwards violated that trust.
You can say this is a failure by the editorial process for not including fact checking, but that is an organizational issue with Ars, it's not the fault of Orland for failing to duplicate the work that he believed his coauthor did.
Yeah, consider the same thing in other domains. Say you're doing code review and the PR author is a coworker you've had for years, and they include a comment with a link to some canonical documentation along with a verbatim quote from said doc explaining the usage of something in the PR. If the quote and usage both make sense in context, I'm not going to habitually click through to the docs to verify that the quote isn't fabricated.
> Why does Orland need to apologize? Edwards fabricated the quotes
He's on the byline and he's an editor.
> they don't each do the same research in order to fact-check each other. There is inherently a level of trust
If we're going to excuse this, what does the byline mean? He trusted the wrong person; it would be like a source lying to him. Not the end of the world. But absolutely credibility-destroying if, instead of an apology, you get a word salad.
> You can say this is a failure by the editorial process
Orland is also an editor. (Senior gaming editor [1].)
[1] https://arstechnica.com/author/kyle-orland/
This reads like “I was sick and my dog accidentally used AI to write my homework”
If the content is human-written and you check your sources, there is no way for AI to "accidentally" seep in. Sure, you can use an AI tool to find links to places you should check, and you can then go and verify the sources. That's obviously not what happened.
I clicked through the author's earlier stories when this first made waves. I obviously had no proof, but I was pretty certain that he's been using LLMs to generate stories for a good while.
When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard." I suspect they did look, in the end.
In defense of that, his writing style was basically the same long before LLMs.
Sad if true. I used to really enjoy reading his freelance articles in various publications pre-AI.
Sad state of things. He did it because he was sick? That's close to claiming his dog ate the original quotes so he had to make some up.
Well, Ars Technica has been on my ignore list for quite some time already, and this further solidifies its place there.
I think there's potentially a different story here. He felt that he had no option but to work, even though he was so sick that he failed at the job. What's up with that? How insecure and pressured is his employment?
If that's not true, then the error is on him. But it seems plausibly bad to me as an outside observer of US employment and healthcare customs, and of the precarity of journalism nowadays. It is a sad state of things, in that it could be more a systemic than an individual failure.
Circumstances can help explain, but never excuse, unethical behavior.
Systems can make such failures inevitable. The language of "blame" vs. "excuse" is not the most relevant.
Are you saying unethical behavior is not a choice but forced by the system? That it would be unreasonable to expect people to behave ethically in situations where the system is set up in a way that does not reward ethical behavior? That lying and cheating can always be excused because, if people didn't, they would endanger their societal status?
No. That's a wild gallop off in a pointless direction using the same irrelevant language.
You will never get the internet to agree on how incident x should have been handled. I think the world right now is racing to figure out AI and its place. Just when you think you understand, the ground shifts. It is clear that in the future this exact use of AI will be expected and will work, on average, way better than a person. I know that a lot of people probably have an emotional "no it won't!" and disagree with me here, but there have been so many "no it won't! never!" moments passed in the last two years that I can't imagine this won't also be one.

With that in mind, I don't think it is reasonable to fire this journalist. They used a tool too soon, but it is really hard to figure out what "too soon" is right now. This should have been a moment of reflection for their newsroom (and probably some private conversations), but it turned into a firing, which I think is too much.

Did the newsroom gain from that? Will it prevent them from doing it again? Did it fix the original mistake? I don't think the answer is "yes" to any of these questions. A good retraction, an apology, and a statement on how they are changing and will review new technology entering the newsroom in the future: those help.
The problem is accountability. If your name is on the article, this is your work. If you publish an article with fabricated quotes, it’s your fault regardless of if an AI tool was used or not since you hit the button at the end to sign off on it.
I care about the future. I care that actions taken help improve the future. If someone makes a mistake, the question shouldn't ever be "how do we punish them" but instead "what actions can best improve the future." Sometimes that does mean firing a person: if the effort to fix their behavior is more than the expected gain, then that is an option to consider (not the only thing to consider, though).

In this case, though, I think there is likely more to it. What were their policies? Have they been pushing their journalists to accept more AI tools? Even without pushing AI tools, have they been implying that speed is more important than accuracy? Was this truly JUST this journalist's mistake, or are there cultural elements missing in the newsroom?

I would expect the head of that newsroom to have a detailed rationale for why firing this person was the right choice: how it helps them move forward and improve, and why this isn't just a decision to deflect blame from their internal culture problems. As is, this looks like a case of "the internet got mad; do something to make them happy."
The headline is a bit sensational considering all we know from the reporting is that he isn't working there anymore. Fired likely, sure, but not for a fact.
I guess Blameless Postmortems haven't arrived in journalism yet.
Pretty weird that journalism as a business still revolves around "we hired a guy to write a thing, and he's perfect. Oh wait, he's not perfect? It was all his fault. We've hired a new perfect guy, so everything's good now." My dudes... there are many ways you can vet information before publishing it. I get that the business is all about "being first", but that also seems to imply "being the first to be wrong".
I feel bad for the reporters. People seem to be piling onto them like they're supposed to be superhuman, but actually they're normal people under intense pressure. People fail, it's human. But when an organization fails, it's a failure of many people, not one.
> I guess Blameless Postmortems haven't arrived in journalism yet.
Not anymore. Back in the day of print newspapers, a dozen people read an article before it was printed, including editorial staff, fact checkers, legal review, layout staff, and printers. If something slipped through – which was much rarer at the time – they'd also print a retraction.
Most of that stopped when newspapers and the blogosphere basically merged into one ad-funded business.
They have. Some paper journals even have a dedicated space in the early pages (2-3) for corrections and retractions.
This isn’t a case of “made a mistake”/“did something incorrectly”, though. This is “knowingly broke the rules”. They had a policy against using our benevolent robot overlords to generate slop.
And fabricating quotes is pretty high up there in the list of things that journos should never, ever do.
Good time to watch Shattered Glass.
Imagine what he could have gotten up to with LLMs.
It's an excellent movie, regardless.
https://youtube.com/watch?v=oj79mp2WEx0
Happy to see some accountability here, although it's unclear why the other co-author who stamped their name on that article was retained. Maybe they just stamped their name to meet a quota of articles. In any case, this follow-up action makes me take Ars Technica's standards a bit more seriously.
I liked his articles about AI. They were generally quite good; he has an understanding of AI that most journalists don't. But to use an LLM for the writing is deception.
So this is another way how you can lose your job because of AI.
This is good. They had to distance themselves from a journalist who would do such a thing. But this is more or less on the editor I think. So let’s see if they learn from this.
I'm very bad with names and quotes, so sometimes I'll ask ChatGPT something like "what's that famous quote Brian Kernighan said about programming language names" and it will just make shit up, when really I was thinking about Donald Knuth. But according to ChatGPT, Kernighan famously said:
Which of course returns 0 results on Google, as is customary for famous quotes.
Which version? I just tested mine and it replied with an actual quote.
If a tool is not fit for purpose then it either gets fixed or gets discarded/replaced.
AI is not a tool and, from the way things are going, never will be. Humans are more tool-like in that sense. In this case, the human was discarded; the AI remains.
That was wise. It was an honest mistake, but a direct hit to his credibility that made not just him but the paper look sloppy, and in an era when people are deeply concerned about journalistic pedigree.
People have said enough about the ethics of all of it, but what I found even sadder is that the story made me curious to take a look at the actual piece he "investigated" with AI. It's this one: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on... This is, btw, a bit more than 1k words, which takes the average American reader, never mind a senior journalist, ~5 minutes.
This whole story involved asking Claude to mine that text for quotes, which it refused to do because the text included harassment-related content, then asking ChatGPT to explain that, and so on.
That entire ordeal probably generated more text from the chatbots than just reading the few paragraphs of the blog post would have required. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as the people who post "grok what does this mean" under every Twitter post. It's like a schoolchild who cheats and expends more energy cheating than it would take to just learn the material.
Really disappointing. A lot of us have always considered Ars Technica to be the last of a dying breed of ultra serious, no-nonsense professionalism.
Obviously, we were rocked by the DrPizza scandal years ago...and now this.
Sobering.
I read the Bluesky post linked in the article and the images Benj Edwards posted on Bluesky.
The main comment I found relevant is probably this (there is more that he has written, but I am pasting what I find relevant for my comment):
> I have been sick with Covid all week and missed Mon and Tues due to this. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh
... > I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words.
> Being sick and rushing to finish, I failed to verify the quotes in my outline against the original blog source before including them in my draft
The journalistic system has failed us so much that in the news cycle, we want things NOW. I think the Ars Technica post went viral on HN as well before the whole controversy, and none were the wiser until Sam commented about the false quotes.
The system prefers views, and to get views you have to produce work now. There is no room left for someone being sick, and I think this sort of pressure extends to every job at times.
And instead of AI being a productive tool, it can act as a noise generator. It writes enough noise that looks like signal and, ta-da, none are the wiser.
People think that using AI alongside a person is going to make their work 10x greater, but what actually happens is that the noise is raised 10x and the work of finding signal in that noise increases 10x too. (I am speaking about employment-related projects; obviously in personal projects it might not matter whether there is 10x noise or 100x noise, if it can just do the thing you want it to do.)
When AI systems are constrained, they can deny your API request at marginal loss. But when humans are constrained, they really can't refuse work without someone taking massive losses at times (whole days of leave), and I have heard that in some countries sick days are a joke. This could very well be cultural, because sick days are well implemented in Europe compared to America (from what I hear).
I don't know about Benj, but some reporters are really paid peanuts. Remember the Pakistani newspaper that printed ChatGPT output verbatim, with content like "If you want, I can also create an even snappier 'front-page style' version with punchy one-line stats and a bold, infographic-ready layout perfect for maximum reader impact. Do you want me to do that next?" WITHIN the newspaper.
I believe that humans should be treated with more dignity, so that they feel comfortable taking sick leave when they are sick... or we should just fix this culture of people chugging along through sick days.
Until then, AI is bound to be used, and it will produce noise and spew random stuff. Imagine you are a journalist, you are sick, and there is a magical tool that feels like it can do the job for you. You use it, and in that moment of sickness you are in IDGAF mode and push the article to main.
I personally don't believe this is going to be a single incident, at the very least not with this whole story playing out like this.
If any journalist is reading this: please take sick leave when you are sick. Readers appreciate your writing, and I hope you don't integrate AI tools into your workflow so heavily that the work starts being done by the AI. Even without AI, you are probably not working in your best mental space while sick, and readers are happy to wait if you add unique perspectives to the story, something I don't think is possible when you are ill. If any employer still tries to pressure you, just share this message with them, haha, to show them what the people want (and what brings them money long term).
I also hate how the culture has become one of finding the article that came out fastest after an event, because that promotes AI use more often than not. To me it feels like jackals coming out of nowhere to grab whatever piece of a story they can, and that is not a great look. (I know nothing about how such journalism works, so sorry if I am wrong about anything; I usually am, but these are just my opinions on the whole thing.)
Ars Technica's editors fabricate misleading headlines all the fucking time.
The editors are the ones ultimately responsible for what they publish. Yet they’re not taking responsibility.
So the original blogger got slandered by an LLM agent, then got slandered again by a human journalist who used an LLM agent to write the article about him getting slandered by an LLM agent? How ironic.
But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.
He was only slandered once, by the LLM Agent. The Ars Technica article had presented paraphrases that it falsely attributed as direct quotes, and was therefore factually incorrect reporting. But it was not defamatory by any reasonable standard. Slander isn't just a synonym of "lie".
I wasn't using the word in a legal sense, poindexter. I didn't pretend to be a lawyer either. Slander in the colloquial sense is whatever the person doesn't want attributed to them, and it's often used as a synonym for a lie.
Besides, I am sure you could tell it was just a joke but needed to be pedantic for no reason other than to feel smart?
No, the journalist came in and slandered the LLM twice, and Jim Fell.
"Who are you, and how did you lose your job?"
"I'm an AI reporter. And, I'm an AI reporter."
4 times, you forgot the owner of the bot that did the PR.
Indeed, you’re right.
> senior AI reporter
A true "senior" AI reporter should be more skeptical of LLM output than anyone else.
I think that's the nail in the coffin. Most others could pass it off as a giant whoopsie, but here it goes to the heart of their credibility. How could they continue to write authoritatively about AI, having done this?
I dunno. If AI doesn't write your articles, are you even an AI reporter?
Sorry, I never could resist a good dad joke
>The Condé Nast-owned Ars Technica
I despise Conde Nast
The crazy part to me is that even here on HN there are people who still insist that LLMs don't fabricate things or otherwise lie.
I wonder if these are the same people who 3-4 years ago were insisting putting 20 characters onto a blockchain (ie an NFT, which was just a URL) was the next multi-billion dollar business.
Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.
>I wonder if these are the same people who 3-4 years ago were insisting putting 20 characters onto a blockchain (ie an NFT, which was just a URL) was the next multi-billion dollar business.
The NFT protocol doesn't really care what the payload is. NFT purveyors likewise don't care what their payload is, as long as they can use the term "NFT".
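To make that concrete, here's a toy Python sketch of an ERC-721-style registry (my own simplification with made-up names; the real thing is a Solidity contract with approvals, events, and so on). Note that nothing constrains what the payload string points at:

    # Toy model: the chain stores only an owner and an arbitrary
    # payload string (usually a URL) per token id.
    class ToyNFTRegistry:
        def __init__(self):
            self.owner = {}      # token_id -> address
            self.token_uri = {}  # token_id -> arbitrary payload, often a URL

        def mint(self, token_id, to, uri):
            assert token_id not in self.owner, "already minted"
            self.owner[token_id] = to
            self.token_uri[token_id] = uri  # no check on what this points at

        def transfer(self, token_id, frm, to):
            assert self.owner.get(token_id) == frm, "not the owner"
            self.owner[token_id] = to  # the payload itself never changes hands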
NFTs are great for certain use cases (CryptoKitties is still around, I believe), but there was never a single moment when I considered that owning a weird ape jpeg, even if it was somehow properly owned by me, would be worth millions of dollars or whatever. It's like trying to sell a "TCP".
That said, future blockchain applications will probably still rely on NFTs in some fashion. Just not the protocol-as-product weirdness we got for a few years there.
I've never seen anyone here claim that AI never hallucinates or can't provide incorrect information.
I've absolutely seen commenters who claim that hallucinations are a thing of the past if you use the newest models. They're wrong, but they exist.
I've not heard many people claim that LLMs don't hallucinate; however, I have seen people (whom I previously believed to be smart):
1. Believe LLMs outright even knowing they are frequently wrong
2. Claim that LLMs making shit up is caused by the user not prompting it correctly. I suppose in the same way that C is memory safe and only bad programmers make it not so.
> while working from bed with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him
Oh right, being ill is what caused the error. I'd bet that if you start verifying this author's past content, you'll see similar AI slop. Either that, or he has always been ill with very little sleep.
The role of "reporter" deserves very little credence on the AI beat now. The public might be better off getting their information on AI straight from ChatGPT.
The core story is literally about how AI made up facts. The solution is more of the same?
A woke, far-left, anti-AI website fires a journo who dared to use AI.
Check their comments section: tribalism, echo chamber, and extreme prejudice. I hope the man finds a new, less fanatical company to work for.
I hope you too will escape from your echo chamber.
I love facts, reasoning, and logic, and I'm not known for being biased or opinionated, unlike the Ars comments section, which has become a place where unpopular points of view are downvoted to hell.
AI is mocked there even though the vast majority of Ars commenters have been using chatbots extensively for years. You know what that's called? Hypocrisy.
Calling Ars Technica "woke far left" is crazy, the U.S. really is lost to complete fractionalist brainrot.
I'm not from the US and I'm not partisan; in fact, I find the two-party US system extremely backwards, illogical, and detrimental to the whole nation.
True, though this user is Russian. But otherwise you're right; it's essentially the same brainrot.
[flagged]
Would you please stop breaking the site guidelines? I just had to ask you this in a different context.
You may not owe your least favorite publications better, but you owe this community better if you're participating in it.
https://news.ycombinator.com/newsguidelines.html
> I just had to ask you this in a different context.
Sorry, I just searched my comment history, maybe I missed it? Was it recent?
https://news.ycombinator.com/item?id=47223723
"Don't feed egregious comments by replying; flag them instead."
You probably wish everyone would post as bots do, without em-dashes of course.
Sorry but I don't follow
Can you elaborate? Perhaps I haven't noticed that they push pro-sponsored content (what does this mean, exactly?). I do find their comment section to be pretty lousy, and very partisan. But the tech coverage always seemed fair enough. What am I missing?
If you feed their articles into a Python script that flags bias, subtle upsells, and advertorials, you will see that a bunch of them are exactly that: promotional marketing for certain companies. They also almost never report the news itself, just opinions about it.
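Roughly what I mean, as a toy sketch (the phrase list and threshold here are heuristics I made up for illustration, not any real tool I'm using):

    import re
    import sys

    # Made-up heuristic: phrases that tend to show up in advertorial copy.
    PROMO_PHRASES = [
        r"\bexclusive deal\b", r"\bavailable now\b", r"\bdon't miss\b",
        r"\bsponsored\b", r"\baffiliate link\b", r"\blimited time\b",
        r"\bbest .{0,20}\byou can buy\b",
    ]

    def promo_score(text: str) -> float:
        """Crude ratio of promotional phrase hits to sentence count."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        hits = sum(len(re.findall(p, text, re.IGNORECASE)) for p in PROMO_PHRASES)
        return hits / max(len(sentences), 1)

    if __name__ == "__main__":
        article = sys.stdin.read()
        score = promo_score(article)
        print(f"promo score: {score:.2f}")
        if score > 0.1:  # arbitrary cutoff
            print("reads more like an advertorial than reporting")

A real classifier would obviously need more than keyword matching, but even this crude version surfaces the pattern.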
So they fired the author after he had publicly apologized on Bluesky.
He was supposed to be their "Senior AI Reporter." Including basically anything from an LLM in an article without verifying it demonstrates not only a complete lack of credibility as a writer but also a complete lack of understanding of AI. Even if they might have personally wanted to keep him on, you just can't after something like this.
What is the connection between these two statements? Are we supposed to presume that someone who apologizes on Bluesky should never be fired? Or did you also read the article and thought this was important information?
Why would apologizing for plagiarism and fabrication preclude you from facing sanctions for plagiarism and fabrication?
Is it “plagiarism” to misattribute hallucinated quotes? Not that a whole lot of sloppy, unprofessional shortcuts weren’t taken, but plagiarism doesn’t seem like the right word, as quotes are almost definitionally not plagiarism. But maybe these were paraphrasings masquerading as quotes, so maybe that’s the difference.
"Slop" and "hallucinate" have meanings outside of AI too, but it's easier to repurpose existing words than come up with a whole new lexicon for AI failure modes.
Groan, redefining "plagiarism" to add "inventing quotes" is a stupidity too far for me.
Making up quotes and attributing them to people has happened before AI, journalists proper and pretend have done it too.
The raison d'être for the journalist, in AD 2026, is less to gather information than to verify it. The journalist who cannot be trusted is no journalist at all. He is a blogger.
"Apologized on Blue Sky" is absolutely no reason to keep them. The author did the absolutely worst things a journalist can do (short of actual corruption) and is unfit for the job:
- He didn't care about his story,
- he didn't care to verify his story,
- he published made-up bullshit,
- he put words in a real person's mouth,
- and he didn't even care to write the thing himself.
Why keep him and pay him? What mentality does all of the above show? What respect, both self-respect and respect for the job?
If they wanted stories from an LLM, they can pay for a subscription to one directly.
Hope this sends a message to journalist hacks who offload their writing or research to an LLM.
Can you name any other way for Ars Technica to handle this situation without permanently soiling their reputation?
That's the thing. I feel kinda bad for Benj, I don't wish him ill, and maybe he keeps writing on his own site and/or other places, but I don't see any way that he could have kept writing for Ars.
That absolutely should be career-ending for a journalist, apology or no