Late last year I tried asking ChatGPT to summarize a collection of 10 researchers' views/findings on a topic and provide representative quotes. It initially looked plausible, but when I checked the links, the quotes were from clearly AI-generated summaries of actual interviews. The paraphrasing was also plausible but subtly and profoundly incorrect.
I haven't tested this again on the latest models though, so not sure if there's been an improvement.
That's more or less how it works. To actually have the system carry out your intention, it would have to use significant hardware resources (and even then, who knows if it would actually work). Alternatively, you would need to break the work up into chunks that don't overwhelm the hardware the system allocates to you.
A lot of people don't realize this because the work they're having the AI do doesn't need to be true or false. It just has to output media that seems like it fits. The system probably took many shortcuts to keep resource use low while outputting something plausible but false.
And frankly, this is sort of fine as long as you know what it's doing and what the limitations are. Hypothetically, if you broke the task up into multiple steps that the system can actually ingest properly, it might reduce the overall time the task took, maybe even significantly, but not down to one prompt.
ChatGPT is horrible overall, for sure, but how exactly did you ask it to summarize, and what model was it exactly?
(I'm not saying "you're holding it wrong"; I'm asking "how were you holding it?"
Did you tell it to pull in the sources, did it do so automatically, or were you working from just the base weights?)
> Not Sweden, but one Swedish startup.
Just as an aside, jumping off this sentence from the article: I am far less tolerant of the practice of naming countries of origin or general locales, rather than specific organizations, in headlines and stories.
Name the organization, and if you want, note in the body where they’re from/located/operating as it pertains to the organization. For that matter, if you can offer information on the specific locale (Sweden is a big place, after all), you should also do that, unless it really is something more national/international.
There are even arguments for doing this in cases where an actual state entity did something.
"The US did X." The president? The Senate? A federal or municipal body? Etc.
But there are arguments against. If "The US bans automatic rifles," then to some extent it's clear which part of the US did it; to another extent, it doesn't matter; and to yet another extent, the part of the country that did the thing represents the whole country, whether by corporatization or democratization.
In history it's very common to say a country did a thing: "Germany invaded Poland," "Argentina signed the Roca-Runciman pact," and so on. Possibly because (in addition to the reasons stated above) information needs to be compressed more for the past; we have less space and priority for details of the past than we do for the present, a kind of cold-hot storage mechanism.
You always have to check your sources because citation laundering is a thing[0].
In addition, most mainstream[1] journalists cite sources more liberally than a scientist should, so the source might not say what the journalist reports. The Atlantic has a bit on Waymo’s poor detection of minorities[2], for example.
0: https://wiki.roshangeorge.dev/w/Blog/2026-01-17/Citogenesis
1: Some independent reporters like Matt Yglesias are more rigorous, though their direct reporting can still be bogus
2: https://www.theargumentmag.com/p/no-waymos-arent-racist
I remember when the "AI feeding AI" issue was raised some time ago; we were told that OpenAI and co were "doing something about it". I've heard nothing since then, though.
What was/is being done about it?
People like to blame social media for this kind of bullshit, but social media is just the vector.
Just this week I read a "study" because someone on social media claimed it was made by (public, famous) universities A, B, and C and reported, as an effect, a 30% increase in revenue for the companies that participated in the experiment.
The "study" was commissioned by an interest group (bad sign). It was conducted by people associated with said universities (I didn't check their credentials), and it did report the 30% revenue increase in its headline.
Said study was about an experiment that ran for a few months. During those months, revenue was flat (which could be considered good enough for the cause). The 30% was the revenue of this period compared against the same period the previous year. So somehow the experiment affected the companies retroactively! Not to mention that the researchers were able to find a group of companies that were, on average, growing 30% YoY. Surprising indeed.
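To make the arithmetic concrete, here's a tiny sketch with entirely made-up numbers: a cohort that was already growing ~30% year over year will show a "+30%" headline figure against the prior year even if its revenue stays completely flat for the whole experiment window.

    # Toy numbers (entirely made up) showing how revenue can be flat during
    # the experiment yet still read as "+30%" against the same period last year.
    prior_year_period = 100.0      # revenue in the comparison period a year earlier
    pre_existing_growth = 0.30     # the cohort was already growing ~30% YoY

    # By the time the experiment starts, revenue has already risen to ~130.
    revenue_at_start = prior_year_period * (1 + pre_existing_growth)

    # Revenue stays flat for the entire experiment window...
    revenue_during_experiment = revenue_at_start

    # ...yet the headline year-over-year comparison still shows +30%.
    yoy_change = revenue_during_experiment / prior_year_period - 1
    print(f"Headline YoY change: {yoy_change:.0%}")  # 30%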
So even if you check your sources, it may still be bullshit science or bullshit reporting from well-credentialed sources.
Why not link the study?
Damned if you do, damned if you don't. Spreading fake news isn't great. People also want you to prove what you say is true.
This is spot on. I think anyone who pays attention has probably run into the same issues themselves. Keeping a chain of custody of information sources was hard before AI, and now it doesn't even f$#$%% matter.
"Links No Longer Mean Credibility" - did they used to? I mean, I mostly agree with this article but a person could have written this about the internet. I remember people linking to all sorts of random web pages and using that as a source of credibility.
Ironically, 'source checking' is something AI is quite good at.
There's nuance to that. An LLM is quite capable of suggesting relevant reading, given the context. Especially when the context is broad enough that there's enough training data.
"Find me research on code reviews, their size, and quality" would give you more than enough reading. Yet, if you start with a claim, like "Longer PRs mean worse defect detection," the relevant data points fall to few enough for AI to start hallucinating.
You get "something, something, PR length, defect detection, IDK, I don't read research papers." Such output is fine as long as the author cares to validate it.
Skip the second step, and you might be good if you ask about something generic, like "What's the Slack story?" or "How did Blockbuster go bust?" Ask about some specific details, though, and you're bound to end up with made-up stuff that sounds just about right, while it's actually wrong.
Checking is different from finding, though. Source checking means just "verify that this information is actually present in that document". Much harder to hallucinate in this case.
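As a rough illustration of the difference, here's a minimal sketch of the "checking" mode. ask_llm is a hypothetical stand-in for whatever model API you use, and the prompt wording is only illustrative; the point is that the model is asked to quote from a document you fetched yourself rather than to recall a source from its weights.

    import requests

    def ask_llm(prompt: str) -> str:
        """Hypothetical helper: call whatever LLM API you use, return its text reply."""
        raise NotImplementedError

    def claim_is_supported(claim: str, source_url: str) -> str:
        # Fetch the actual document so the model is grounded in its text,
        # not in whatever its weights associate with the topic.
        document = requests.get(source_url, timeout=30).text
        prompt = (
            "Below is a claim and the full text of a document.\n"
            "Answer only from the document. Quote the exact passage that supports "
            "the claim, or reply 'NOT FOUND' if no such passage exists.\n\n"
            f"Claim: {claim}\n\nDocument:\n{document}"
        )
        return ask_llm(prompt)

The model can still misread a passage, so you'd spot-check the quoted text against the page, but at least there's no room for it to invent a URL.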
I mostly disagree with this. You can request sources, you can ask it to check, but no LLM I have used can do this correctly more than 50-75% of the time, and some of the major models are extremely bad at this: giving broken links 90% of the time, incapable of giving actual links rather than search engine links, etc. Constant supervision and repetition of requests can sometimes get results, but it is exhausting. The "sources" it finds are often Reddit posts or other questionable secondary or tertiary sources, not actual original sources.
Have we forgotten how bad LLMs were at citing sources when they first came out? So, we had to build a lot of structure (harness engineering) and frontier labs had to do specific training to try to compensate for this.
So, LLMs are inherently bad at citing sources. A lot of effort has been put in to improve this behavior, but it's compensating for an inherent flaw.
Huh? Oh! Were they still treating the LLM as an "oracle box"/online chatbot at the time? (as opposed to a more agentic workflow?)
If they weren't, ignore the following, and please tell me what else was going wrong (and with what models and harnesses!).
Model weights are like Wikipedia: a nice starting point, but they should never be referenced directly. You need to have your agent actually go out onto the internet and do the research. Then the actual references will be in your agent's actual context (memory), so it'd at least be rather more surprising if it didn't cite correctly.
I do realize there are still corner cases even in the best setups, though; so a final crosscheck sweep is never not a good idea.
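For what it's worth, here's a rough sketch of the shape I mean, with web_search and ask_llm as hypothetical stand-ins for whatever search and model APIs your harness provides (not any specific product's API). The references the answer can cite are exactly the pages the agent fetched, and the final sweep flags anything cited that was never retrieved.

    import re
    import requests

    def web_search(query: str) -> list[str]:
        """Hypothetical: return result URLs from whatever search API you use."""
        raise NotImplementedError

    def ask_llm(prompt: str) -> str:
        """Hypothetical: call your model of choice, return its text reply."""
        raise NotImplementedError

    def research(question: str, max_sources: int = 5) -> str:
        # Go out onto the internet first, so the references actually live in context.
        urls = web_search(question)[:max_sources]
        pages = {url: requests.get(url, timeout=30).text[:20_000] for url in urls}

        sources = "\n\n".join(f"[{url}]\n{text}" for url, text in pages.items())
        answer = ask_llm(
            "Answer the question using only the sources below, citing the URL of "
            "every source you rely on. If they don't answer it, say so.\n\n"
            f"Question: {question}\n\nSources:\n{sources}"
        )

        # Final crosscheck sweep: flag any link in the answer that was never fetched.
        for link in re.findall(r"https?://\S+", answer):
            if link.rstrip(".,)") not in pages:
                print(f"WARNING: cites a URL that was never retrieved: {link}")
        return answer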
I disagree. It is a bullshit machine all the way to the core. LLMs in my world fail to cite full sources and consistently conclude with guesses as facts. It does this much more than an average journalist or reporter would. Only when you double-check it will it then apologize and correct itself.
Judging by the number of scientific papers that have been outed as AI-generated precisely because the AI hallucinated sources, it's not.
Citation needed, please
Personal experience? You ask it for the name of the paper referenced. You google that paper (for some reason it's not great at going out and acquiring the paper itself). You then upload the PDF and, if the claim isn't quickly findable via ^F, ask it whether the paper supports the assertion. You go read, ask it clarifying questions about hazard ratios, what they controlled for, etc.
AI is quite good when grounded in a source.
When Bard (now known as Gemini) first came out in Europe, I think mid 2023, I tested it out. AI search was still a new thing in those days, and I was excited to see what Google's solution would be like. I had high hopes.
I asked it a question I knew the answer to. It searched the web, and told me the opposite of the truth. (Not nonsense, but a logical inversion of the actual fact. A common failure mode with earlier LLMs.)
Puzzled, I checked the sources. It cited two. Both AI SEO slop.
Bizarrely, I Googled it myself and couldn't even find those pages on Google. Maybe it was using a different search engine? ;)
Fundamentally, yes, it is a different "search engine."
BTW, as critical as I can be of AI, the argument that something didn't work 3 years ago, so it must be crap, doesn't work in this context. 3 years ago, AI could barely generate several lines of consistent code. Now it generates working apps from a prompt (how good the code is, is another discussion, but still).
I guess 3 years ago, Gemini couldn't tell how many r's are in the word refrigerator.
Same for research. At some point, I switched from ChatGPT and Gemini to Perplexity as it promised AI-powered search. It worked visibly better. Until it didn't, as GPT and Gemini models made a leap.
Back to the point, as long as we understand that, for now, it's all just a probabilistic machine generating the most likely output, no one should expect bulletproof answers. Search was/is way more deterministic than LLMs.
From my personal experience, ChatGPT doesn't fail at the fringe either. I would really like reproducible errors, because I tend to trust this kind of usage almost completely.
> Links No Longer Mean Credibility
They never did!
Ultimate credibility? Sure, they never did. Yet the whole thing Google was built upon was using links as tokens of credibility.
You'd assume an outgoing link from a CNN website has more credibility than one from an anonymous blog. That is, I reckon, still true. Although the credibility either link conveys is degrading. Again, it has been so since we started playing the game of SEO, yet AI-generated content in this context is basically a weapon of mass destruction. The deterioration has sped up dramatically.
Facebook, ever the wasteland of bullshit and scams, has gotten even more bullshit and scammy in the AI era.
I have found the single best way to avoid being pissed off by this shit is to just avoid Facebook. It dramatically cuts down on the amount I am exposed to.
I also run with adblockers, and consume news via brutalist.report, which also helps. (I avoid the Fox News section at the bottom)
Not just Facebook; also make sure to avoid TikTok, Instagram, and YouTube, along with YouTube Shorts. Much of what's there is nothing but fake AI content, and these days people are using AI to create fake profiles of good-looking, cute girls doing impossible things or showing off their bodies, and so on. At least 50% of what you see on your feed should be considered AI-generated content.
I would say save your time and energy, and invest that into something else - forget all this social media.
My favorite is the 83-year-old AI grandma on youtube giving retirement advice. They put her in various settings and she looks and sounds very real.
The only obvious tell is the eyes don't track right. But once they fix that, it's really going to be hard to know.
All the comments are how great her advice is, etc.
Every video has a link to a book she "wrote" on amazon. I didn't waste my time trying to figure out what the scam is.
I don't do TikTok. My instagram feed doesn't seem too bad by comparison.
YouTube Shorts also seem OK for me, but of course the amount of AI content is definitely elevated compared to the regular videos recommended to me.
Lastly:
> I would say save your time and energy, and invest that into something else - forget all this social media.
Agree. The promise of social media hasn't worked out. It was nice during the early Netflix streaming days, but has gotten progressively worse since then.
>A clearly AI-generated image didn’t help the credibility (a three-legged crow is quite telling)
Actually I checked some sources, and I found some for three-legged crows:
https://en.wikipedia.org/wiki/Kojiki#The_Nakatsumaki_(%E4%B8...
https://en.wikipedia.org/wiki/Three-legged_crow#/media/File:...
https://en.wikipedia.org/wiki/File:Douze_emblemes_des_rites_...
https://en.wikipedia.org/wiki/File:Chengdu_2007_341.jpg
And by refuting this article, I thereby prove that which it sought to refute.
It's amazing that people think Snopes or other "fact-checkers" are reliable sources of information and represent ultimate truth, as if they're immune to bias and don't receive funding from people / organizations with their own agendas.
Snopes (like anywhere) is only as reliable as its track record of collecting firsthand sources and accurately reporting on their contents.
Which is to say: pretty good so far, in their case. For the future? Who knows. But they've done well up to now, at least.
Actually no, their track record is not great: https://en.wikipedia.org/wiki/Snopes#2010s
They are generally quite good, and they provide ample background info for you to replicate (or repudiate) their findings on your own if you're so inclined.
What's amazing is that people think Snopes or other fact-checkers are automatically wrong. I assume this comes from people who make a habit of believing bullshit and can't handle being corrected.
When there is no independent media, it's not difficult to find sources that back up the lies that Snopes and other fact-checkers peddle.
https://fair.org/home/the-digital-media-oligarchy-who-owns-o...
https://swprs.org/the-american-empire-and-its-media/
Um, is Snopes wrong about the city-cleaning crows, though? As that was the context of the original post. Which, by the way, doesn't say "Go, trust Snopes with everything; they can't be wrong!"
> Ops, the link doesn’t lead to the study, but to another article. But that article, in turn, has a link of its own. Which leads to yet another article that doesn’t even mention the study anymore.
This is a common, infuriating practice: it provides a veneer of authoritativeness and credibility to newspaper articles, and who is ever going to click on the links that support those very cogent claims? Nobody, of course, so they just link to another article with more vague claims, and at each further level of depth your willingness to verify that information evaporates at the same rate as the information itself.
But hey, in the meantime the author has managed to sneak in that "scientists have found," and if you don't believe it you must be anti-science.
Incidentally, highlighting this abuse (together with a bunch of other quality and fact checking) would be a great use of AI in online news publishing.
That thought crossed my mind. However, for such a product to work, there would have to be a human in the loop. With data-starved edge cases, which are many in the fact-checking landscape, it would be relatively easy for an LLM to make stuff up or mislabel the context (which it inherently does not understand).
Also, thorough validation would cost a ton in tokens. So it would be expensive both on the tech side (AI bills) and in labor. Now, in whose interest would it be to fund such a product? I don't see too many takers...
The way it works, has always worked, and should always work, is that you read some information on article X, and you just quote it as verbatim as linguistically possible. Nobody, literally nobody, reading your articles wants you to paraphrase it. If you don't quote things verbatim, you are doing it wrong. If you don't have sources, you are making it up. Your job is essentially just putting N+1 quotes from other articles into your article, and nobody wants you to do more or less than this.
People just have a fundamentally misguided idea of what they should be doing. You just quote. That is all. Nobody needs your originality as a writer. They just want the quotes, the sources, and, optionally, a synthesis, conclusion, and summary. That is the "work" you need to do and if you do just this, that is enough value already even if it feels like plagiarism by people who don't know what the word plagiarism means.
Everyone knows the information came from somewhere. Where did you read it? We all know you didn't just wake up one day and remember it from a past life or something like that. Why are you trying to pretend you just know it? Where are the links? Where are the screenshots?
If you are giving people the sources that you should have been giving all along, in the correct way, then you don't need to "check your sources." Because "your sources" are literally where you got that information from in the first place, so you have already checked them.
Hence, if ChatGPT gave you a source, then your source isn't the source that ChatGPT gave you, because you didn't read it. Your source is, literally, ChatGPT. You should be writing "Smith, 2015, as cited by ChatGPT." Because you didn't read Smith, 2015. You read ChatGPT!
Also relevant: the derision and mockery directed at JD Vance as a “couch fucker”, even used by John Oliver.
I read “Hillbilly Elegy” and wondered why it wasn’t in there. Snopes cleared it up in a matter of minutes. Whether he sues people into oblivion is his prerogative, but it’s a fascinating case study that we are, indeed, living in a Post-Truth environment.
There was a time, in the early to mid 2010s, when the phrase "Fake News" was almost exclusively used by people in publishing to talk about a very real rise in editorial disruption as news readers shifted from being desktop and homepage-driven to mobile and facebook-driven.
And then, one day, the politicians started saying it...
Oliver in that clip literally calls the couch-fucking thing "the fun kind of misinformation". He's not suggesting it's true.
Interesting that you focus on John Oliver's bit considering that it came up in the context of JD Vance doubling down on the whole "they're eating the cats and dogs thing".
https://youtu.be/NtRPLCso0Sw?t=14m09s
Makes me believe that you're really not commenting in good faith here.
Tucker Carlson set the precedent when he was sued for libel by Karen McDougal and won, because Fox News lawyers successfully argued he wasn't a reporter and no reasonable person would believe he's stating facts.
Unless he's repeating Trump's lies, then 77M people apparently believe it.
> The challenged statement was an obvious exaggeration, cushioned within an undisputed news story. The statement could not reasonably be understood to imply an assertion of objective fact, and therefore, did not amount to defamation.
"Rachel Maddow Wins in 9th Circuit; OAN Loses Appeal in Defamation Case"
https://timesofsandiego.com/business/2021/08/17/rachel-maddo...
-----
> Maddow’s show is different than a typical news segment where anchors inform viewers about the daily news. The point of Maddow’s show is for her to provide the news but also to offer her opinions as to that news. Therefore, the Court finds that the medium of the alleged defamatory statement makes it more likely that a reasonable viewer would not conclude that the contested statement implies an assertion of objective fact.
https://timesofsandiego.com/wp-content/uploads/2020/05/MADDO...
-----
The statement in question: "[...T]he most obsequiously pro-Trump right wing news outlet in America really literally is paid Russian propaganda."
Did anyone actually believe that was anything more than a joke? It was a disgusting and weird thing to suggest about a disgusting and weird guy, and highly immature, but it's only libel if it's presented as being true.
In only slightly more isolated regions of the internet than here, it used to be that whenever this story came up, among the dozens or more credulous comments with varying attempts at humor, there was at least one person who commented that it was fake, often with citations. As time went on, the number of comments per instance dwindled, and the lone debunking comment appeared more and more rarely. Now I never see it, even though the story still occasionally appears with a handful of credulous comments. So I would assume that yes, many people believe it.
I earnestly did not know that it wasn't "true" or was an exaggeration or anything.
Glad to learn, but, that's zero percent of the reason anyone shouldn't like the guy, zero percent of the reason he's a bad person, etc.
Who cares if someone fucks couches? Apparently that kind of stuff doesn't end your political career anymore anyway.
It's not like he shot a puppy!
You're getting downvotes because the target of this particular lie was a known liar, so people probably feel like it's some sort of poetic justice (or they know it's just in-kind retaliation and are cathartically satisfied by it).
I don't think the right answer to widespread disinformation campaigns is retaliatory disinformation campaigns (even if they're couched – pun not intended – in a just-barely-thin-enough veil of "wink wink we know this is a joke").
The right answer is to create systems and measures that actually limit disinformation.
I’m with you. The net effect actually is something akin to honking one’s horn at a guy who honked at you. You think you’re giving him a taste of his own medicine, but walking by I only see two people honking their horn and I’d ideally prefer not to be around the horn honkers since they’re unpleasant.
Purveyors of post-truth lies don’t turn around and sue people. They just peddle more lies; this is the kind of environment scum like the Vances live for.