I checked a topic I care about, and that I have personally researched because the publicly available information is pretty bad.
The article is even worse than the one on Wikipedia. It follows the same structure but fails to tell a coherent story. It references random people on Reddit (!) who don't even support the point it's trying to make. Not that the information on Reddit is particularly good to begin with, even if it were properly interpreted. It cites Forbes articles parroting pretty insane and unsubstantiated claims; I thought mainstream media was not to be trusted?
In the end it's longer, written in a weird style, and doesn't really bring any value. Asking Grok about the same topic and instructing it to be succinct yields much better results.
I wrote about an entry on Sri Lanka a couple of days ago [0] where I checked Grok's source reference (factsanddetails.com) against scamdetector, which gave it a trustworthiness score of 38.4 out of 100. Today that score is 12.2. Every entry in Grokipedia that covers vaguely Asian topics has a reference to factsanddetails.com. You can check for yourself: just search for it on Grokipedia - it'll come up with 601 pages of results.
Today the page I linked in my HN post is completely gone.
But worse: yesterday Tumblr user sophieinwonderland found that they were quoted as a source on Multiplicity [1]. Tumblr is definitely not a reliable source, and I don't mean to throw shade on sophieinwonderland, who might very well be an expert on that topic.
[0] https://news.ycombinator.com/item?id=45743033
[1] https://www.tumblr.com/sophieinwonderland/798920803075883008...
It was just launched? I remember when Wikipedia was pretty useless early on. The concept of using an LLM to take a ton of information and distill it down into encyclopedia form seems promising with iteration and refinement. If they add in an editor step to clean things up, that would likely help a lot (not sure if maybe they already do this)
Nothing about that seems promising! The one single thing you want from an Encyclopedia is compressing factual information into high-density overviews. You need to be able to trust the article to be faithful to its sources. Wikipedia mods are super anal about that, and for good reason!
Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful with the sources. On Wikipedia, at least there’s lots of people checking on each other. There are no such guardrails for an LLM. You would need to trust a single publisher with a technology that’s allowing them to crank out millions of entries and updates permanently, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.
If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.
> Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful with the sources.
Isn’t summarization precisely one of the biggest values people are getting from AI models?
What prevents one from mitigating hallucination problems with editors, as I mentioned? Are there not other ways you can think of that this might be mitigated?
> You would need to trust a single publisher with a technology that’s allowing them to crank out millions of entries and updates permanently, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.
How is this different from Wikipedia already? It seems that if the frequency of additions/changes is really a problem, you can slow this down. Wikipedia doesn’t just automatically let every edit take place without bots and humans reviewing changes
> Isn’t summarization precisely one of the biggest values people are getting from AI models?
If I want an AI summary of a Wikipedia article, I can just ask an AI and cut out the middle-man.
Not only that, once I've asked the AI to do so, I can do things like ask follow-up questions or ask it to expand on a particular detail. That's something you can't do with the copy-pasted output of an AI.
The good news is that you don’t have to use it. I see ways this idea can be improved, some of which I already mentioned in this thread. It just launched recently so judging solely by what it is today is missing the point
> Isn’t summarization precisely one of the biggest values people are getting from AI models?
I would say more that it’s one of the biggest illusory values they think they are getting. An incorrect summary is worse than useless, and LLMs are very bad at ‘summarising’.
Human editors making mistakes is more tractable than an LLM making a literally random guess (what’s the temperature for these articles?) at what to include?
I recall a similar argument made about why encyclopedias written by paid academics and experts were better than some randos editing Wikipedia. They’re probably still right about that but Wikipedia won for reasons beyond purely being another encyclopedia. And it didn’t turn out too bad as an encyclopedia either
Yeah, but that act of "winning" was only possible because Wikipedia raised its own standard by a lot and reined in the randos - by insisting on citing reliable sources, no original research, setting up a whole system of moderators and governance to determine what even counts as a "reliable source" etc.
> If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.
It’s painful to watch how many people (a critical mass) don’t understand this — and how dangerous it is. When you combine that potential, if not likely, outcome with the fact that people are trained or manipulated into an “us vs. them” way of thinking, any sensible discussion point that lies somewhere in between, or any perspective that isn’t “I’m cheering for my own team no matter what,” gets absorbed into that same destructive thought process and style of discourse.
In the end, this leads nowhere — which is extremely dangerous. It creates nothing but “useful idiot”–style implicit compliance, hidden behind a self-perceived sense of “deep thinking” or “seeing the truth that the idiots on the other side just don’t get.” That mindset is the perfect mechanism — one that feeds the perfect enemy: the human ego — to make followers obey and keep following “leaders” who are merely pushing their own interests and agendas, even as people inflict damage on themselves.
This dynamic ties into other psychological mechanisms beyond the ego trap (e.g., the sunk cost fallacy), easily keeping people stuck indefinitely on the same self-destructive path — endangering societies and the future itself.
Maybe, eventually, humanity will figure out how to deal with this — with the overwhelming information overload, the rise of efficient bots, and other powerful, scalable manipulation tools now available to both good and bad actors across governments and the private sector. We are built for survival — but that doesn’t make the situation any less concerning.
It really isn't a promising idea at all. LLMs aren't "there" yet with respect to this sort of thing. Having an editor is totally infeasible; at that point you might as well have the humans write the articles.
For the same reason you don't modify autogenerated files in your source code base. It's easy to get an LLM to just regen the page but once someone tries to edit it you're even farther down the road of what an LLM can't do right now. I wouldn't even trust it to follow one edit instruction, at scale, at that size of document, and if we're going to have humans trying to make multiple edits while the LLM is folding in its own improvements... yeah, the LLMs aren't even remotely ready for that at this point.
That’s a good point. I think it’s a similar problem to why you wouldn’t let a model go wild in your codebase, though. If good solutions to how we handle AI models making code changes are found, it seems reasonable to expect they may also be applicable here
There's a significant difference between a site being useless because it just doesn't have the breadth yet to cover the topic you're looking for (as in early Wikipedia); versus a site being useless by not actually having facts about the topic you're looking for, yet spouting out authoritative-looking nonsense anyway.
Early Wikipedia wasn't competing against today's Wikipedia; it was competing against hardcover encyclopedias. There was clear value-add from being able to draw from a wider range of human expertise and update on a quicker cadence.
In a world where Wikipedia already exists, there's no similar value-add to Grokipedia. Not only is it useless today, there is nothing about the fundamental design of the site that would lead me to believe that it has any path to being more authoritative or accurate than Wikipedia in the future - ever.
Maybe it's just me, but reading through LLM generated prose becomes a drag very quickly. The em dashes sprinkled everywhere, the "it's not this, it's that" style of writing. I even tried listening to it and it's still exhausting. Maybe it's the ubiquity of it nowadays that is making me jaded, but I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.
I find the Grokipedia writing especially a drag. I don't think it's the em dashes and similar so much as the ideas not being clear. In good writing the writer normally has a clear idea in mind and is communicating it, but the Grokipedia writing is kind of a waffly mess. I guess maybe because LLMs don't have much of an idea in mind so much as stringing words together.
> I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.
Nah dude, what you're describing from LLMs is terrible writing. Just because it has good grammar and punctuation doesn't make it good, for exactly the reasons you listed. Good writing pulls you through.
I'm fine with Gemini's tone as I'm reading for information and argumentation, and Gemini's prose is quite clear. I prefer its style and tone over OpenAI's which seems more inclined to punchy soundbites. I don't use Claude enough for general purpose information to have an opinion on it.
I completely agree. There's an "obsequious verbosity" to these things, like they're trying to convince you that they're not bullshitting. But that seems like a tuning issue (you can obviously get an LLM to emit prose in any style you want), and my guess is that this result has been extensively A/B tested to be more comforting or something.
One of the skills of working with the form, which I'm still developing, is the ability to frame follow-on questions in a specific enough way to prevent the BS engine from engaging. Sometimes I find myself asking it questions using jargon I 100% know is wrong just because the answer will tell me what the phrasing it wants to hear is.
Wondering if the project will get better from the pushback or will just be folded like one of Elon's many ADHD experiments. In a sense, encyclopedias should be easy for LLMs: they are meant to survey and summarize well-documented material rather than contain novel insights; they are often imprecise and muddled already (look at https://en.wikipedia.org/wiki/Binary_tree and see how many conventions coexist without an explanation of their differences; it used to be worse a few years ago); the writing style is pretty much that of GPT-5. But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.
If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases rather than waste their time rewriting the articles from scratch. The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up, without needlessly duplicating the 90% of the work that it has been doing well.
Bray brought up a really good point. The Grokipedia entry on him was several times the length of his Wikipedia entry, not just because Grok's writing style is verbose, but also because it went into exhaustive detail on insignificant parts of his life simply because the sources were online. My own brief browsings of Grokipedia have left me with the same impression. The current iteration of Grokipedia, besides being untrustworthy, wastes a lot of time beating around the bush and, frequently, wandering off into the weeds.
Just as LLMs lack the capacity for basic logic, they also lack the kind of judgment required to pare down a topic to what is of interest to humans. I don't know if this is an insurmountable shortcoming of LLMs, but it certainly seems to be a brick wall for the current bunch.
-------------
The technology to make Grokipedia work isn't there yet. However, my real concern is the problem Grokipedia is intended to solve: Musk wants his own version of Wikipedia, with a political slant of his liking, and without any pesky human authors. He also clearly wants Wikipedia taken down[1]. This is reality control for billionaires.
Perhaps LLM generated encyclopedias could be useful, but what Musk is trying to do makes it absolutely clear that we will need to continue carefully evaluating any sources we use for bias. If Musk wants to reframe the sum of human knowledge because he doesn't like being called out for his sieg heils, only a fool would place any trust in the result.
Not to beat a dead horse, but one really could wake up one day and find out we've always been at war with Oceania after the flip of a switch in an LLM encyclopedia.
> But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.
And if you believe that you’ll believe anything. “Try to _change_ the bias” would be closer.
No no no, you see, you got it all wrong. If the Wikipedia article on, let’s say, transsexualism, says that’s an orientation, not a disease—then that’s leftist bias. Removing that bias means correcting it to say it’s a mental illness, obviously. That makes the article unbiased, pure truth.
Any condition which causes the individual to self-sterilize or not have progeny is maladaptive from an evolutionary perspective. Some traits, like sickle cell, are adaptive against malaria, but those who are homozygous suffer from sickle cell anemia. I struggle to imagine how trans is adaptive in any way; it seems to only cause problems. The leftist narrative is that such individuals must engage in costly medical procedures to avoid committing suicide, so by their own framing they basically consider it a mental disease.
That's not why you are obsessed with trans people. You are terrified that the next woman you ogle covetously or catcall won't have the parts you expect.
You don't care about protecting women. The only thing you care about is protecting your fragile sense of masculinity.
Because I have thought about why people choose to hyper-focus on trans-folk, and it's one of the few explanations that makes sense, at least for men.
Why would someone say that they're "protecting" women, but advocate against abortion rights, divorce, suffrage, or higher limits on the age of consent to marry?
Why would someone say their religious views are incompatible, but have switched sects twice in the past decade because they disagreed with the direction their previous church was going in?
Why would someone claim to be protecting children from indoctrination yet vote for indoctrination of their own political views?
The contradictory explanations never made sense to me. The self-interested ones do.
Well, given that your comment was targeted at me personally and not at the caricature you're describing, I can tell you that every single one of your unfounded assumptions is incorrect.
Indeed, I did not itemize out every possible rationalization I have seen. The exact shape of the rationalization isn't terribly interesting or germane.
It's the underlying insecurities that those forms of motivated reasoning are covering up for that is far more illustrative.
You can’t hold an entire group of people responsible for the actions of a few extremists, unless you want to do the same to Christians and Muslims as well. And don’t get me started on all the shit men pull off worldwide, yet I don’t see you rallying against heterosexuality?
This is the more extreme end of the scale of a general pattern of harassing women for saying "no". Whether that be "no you're not a woman" or "no you're not a lesbian" or "no you can't come in here it's female-only".
At least most Christians and Muslims accept that others don't believe in their religion and, for the most part, don't force them to act as if they do.
> At least most Christians and Muslims accept that others don't believe in their religion and, for the most part, don't force them to act as if they do.
Just like most trans people accept that others don’t understand their way of life, and, for the most part, don’t force them to act as if they do.
Yet you can’t acknowledge that, and instead pretend all trans people are some kind of opaque mob.
> At least most Christians and Muslims accept that others don't believe in their religion and, for the most part, don't force them to act as if they do.
I've actually found Christians in America to be forcing their beliefs quite loudly upon everyone. Pretty wild to be personally offended by the tiny fraction of a percent that is the trans population (of which an even tinier number is vocal in the way you say they are).
But you acknowledge that it is overbearing and undesirable for them to force their beliefs, and if someone called you "bigot" for complaining about this you would probably object, right?
> "If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases [...] The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up [...]"
One thing I love about the Wikipedias (plural, as they're all different orgs): anyone "in the know" can very quickly tell who's got no practical knowledge of Wikipedia's structure, rules, customs, and practices to begin with. What you're proposing like it's some sort of Big Beautiful Idea has already been done countless times, is being done, and will be done for as long as Wikis exist.
And Groggypedia? It's nothing but a pathetic vanity project of an equally pathetic manbaby, for people who think LLM slop continuously fine-tuned to reflect the bias of their guru, the tool's owner, is a Seal of Quality.
Don't forget that public opinion and the media landscape are quite different in 2025 from what they were in the 2010s when most prior studies on WP bias have been written. Sufficiently pertinent (sadly this isn't synonymous with high quality) conservative and anti-woke content can reach wide audiences, particularly when Elon puts his thumb on the scale. Besides, to my knowledge, none of the prior attempts at studying WP bias has even tried to make a big enough fuss to change said bias; the final outcomes of the studies were conference papers.
> "[...] conservative and anti-woke content can reach wide audiences, particularly when Elon puts his thumb on the scale."
No shit; it's always been that way since mass media became a thing. Besides, there is no such thing as quality conservative and/or "anti-woke" media. The very concept represents a contradictio in adiecto. And Elon's just the modern version of an industrialist of yesteryear. Back in the day they owned the mass media of their time: radio and television. Today it's "AI"-enshittified parasocial media and, ideally, the infrastructure that runs those dumps.
> "Don't forget that public opinion and the media landscape are quite different in 2025 from what they were in the 2010s when most prior studies on WP bias have been written."
Bias studies have been written since Wikipedia became a staple in hoi polloi's info diet. And there's always been a whole cottage industry of pathological and practised liars (e. g. the Heritage Foundation, amongst others) catering to right-wing grievance issues. The marked difference is that the right's attacks against Wikipedia as an institution are more aggressive since Trump... completely in line with the more aggressive attacks on human rights, reason, science, and democratic institutions on part of conservatives world wide.
Note that I've said "anti-woke content", not "anti-woke media". I am including the occasional "course correction" opeds and actually well-researched longreads you're seeing in places like NYT, Atlantic and such. Partisan outlets for partisan readers aren't doing the heavy lifting here, but the success of Substack and the unexpected survival of Twitter under Elon have convinced editors to listen. Elon's personality isn't of importance here; he mostly needs to just push a few buttons to make a sub-critical news item go super-critical.
> "Note that I've said 'anti-woke content', not 'anti-woke media'."
In the context of my argument, a distinction without a difference.
> "I am including the occasional "course correction" opeds and actually well-researched longreads you're seeing in places like NYT, Atlantic and such."
Well, that's the crux: There is no such thing for me as "actually well-researched anti-woke content". That's just a pathetic, and ultimately tragic, hallucination in the same vein as "actually well-researched" pieces of flat earthers, pushing their trash. Et cetera.
> "Elon's personality isn't of importance here [...]"
I can tell you're one of those guys who paid "actually a lot of" attention when The Cult of Personality was covered in the classroom.
Look for anything written by Jesse Singal or Charles Murray for the well-researched anti-woke content I'm referring to (and there is a lot more; these are just two authors who made it their focus; some of the best stuff actually comes from journalists with wider purviews).
I don't know what "Cult of Personality" you are referring to; unless you are hallucinating this particular reference, I've gone to school in the wrong country for that particular report to be part of my assigned reading (and the right country, sadly, seems to have skipped it entirely; there might be an update out in a few years...). Either way, what is the relevance here? What I've been saying is that I'm far from sure of this project's success and would be doing it quite differently. Musk's personal characteristics may well be the reason why he did it the way he did, but ultimately the project won't live and die by them (already because he himself will likely lose interest soon enough).
"Wikipedia, in my mind, has two main purposes: A quick visit to find out the basics about some city or person or plant or whatever, or a deep-dive to find out what we really know about genetic linkages to autism or Bach’s relationship with Frederick the Great or whatever."
Completely agree with the first purpose, but would never use Wikipedia for the second purpose. It's only good at basics and cannot handle complex information well.
Yeah, encyclopedias are meant to be indexes to knowledge, not repositories thereof. The WP feature-creeped its way to the latter, but it is not reliably good at it, and I'm not sure if there is an easy way to tell how good a given page is without knowing the subject in the first place.
What I think it IS good at is parlaying the first purpose into a broad, meandering journey of the basics. I would never use it for deep study of genetics & autism or Bach and Frederick the Great, but I love following some shallow thread that travels across all of them.
It's often good for the latter when, as a tertiary source should be, it is used not just for its narrative content but for its references to secondary sources, which are themselves used for both their content and their references.
> It's only good at basics and cannot handle complex information well.
Poppycock! Because of MediaWiki's multimedia capabilities it can handle complex information just fine, obviously much better than its printed predecessors. What you mean is a Wiki's focus, which can take the form of a generalized or universal encyclopedia (e. g. Wikipedia), or a specialized one, or a free-form one (Wikipedia, in practice, again). Wikipedias even manage the integration of different information streams, e. g. up-to-date news-like information, whether in the lemmata (often a huge problem, i. e. "newstickeritis"), in a dedicated news wiki (Wikinews), or in the English Wikipedia's newspaper, The Signpost.
And to take care of another utterly bizarre comment: encyclopedias are always, by definition, also repositories of knowledge.
I think that's actually wrong, or hangs on a semantic argument about "complexity". Wikipedia is an overview source. It's not going to give you "all" the information, but it's absolutely going to tell you what information there is. And in particular where there's significant argument or controversy, or multiple hypotheses, Wikipedia is going to be arguably the best source[1] for reflecting the state of discourse.
Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?
The best source is the one that provides the widest breadth of information on a topic.
This is a good use of wikipedia: "Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?"
But that is like skim reading or basic introductions rather than in-depth understanding.
> that is like skim reading or basic introductions
No? How do you learn stuff you don't know? Are you really telling me you enroll in a graduate course or buy a textbook for every one?
Like, can you give an example of a "deep dive" research project of yours that does not begin with an encyclopedia-style treatment? And then, maybe, check the Wikipedia page to see if it's actually worse than whatever you picked?
Again, true domain experts are going to read domain journals and consult their peers in the domain for access to deep information.[1] But until you get there, you need somewhere you can go that you know is a good starting point. And arguments that that place is somehow not https://wikipedia.org/ seem... well, strained beyond credibility.
[1] Though even then domains are really broad these days and people tend to use Wikipedia even for their day jobs. Lord knows I do.
Not sure it still does this, but for a while if you asked Grok a question about a sensitive topic and expanded the thinking, it said it was searching Elon's twitter history for its ground truth perspective.
So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.
>Another was that if you ask it “What do you think?” the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.
That's effectively impossible to prove, especially if you don't believe statements made by the only organization that has access to the underlying evidence.
I actually think that it's funnier if it was an emergent behavior as opposed to a deliberate decision. And it fits my mental model of how weird LLMs are, so I think unintentional really is the more likely explanation.
The problem is it's part of a pattern of several 'bugs' and even 'unauthorized prompt changes' that have caused Grok to be more Elon-aligned.
And when asked by right wing people about an embarrassing Grok response that refutes their view, Elon has agreed it's a problem and said he is "working on it".
It’s amazing the see the credulity of Elon stans. It’s the exact same reason grifting is so profitable among the right wing. It literally doesn’t matter how much evidence there is. If dear leader gives an excuse, they all believe and repeat the excuses. They are conditioned to it at this point. Any sources that refute their position is just leftist bias. This world fucking sucks.
Naw. You’re exactly the sort of credulous fool that people like Musk depend on. You have an infinite amount of excuses and justifications for outright awful behavior. “He’s working on it! He promised!” As if we should take those statements at face value given all the other mountains of evidence that we are dealing with people without morals or any real values other than enriching themselves. But sure. Keep believing that FSD is right around the corner and Tesla will own the robotaxi ecosystem entirely despite all actual evidence. Keep making excuses for fascists as they do their best to ruin this country.
I don't know if this is why, but: he's in a unique position of having an article on himself on Grokipedia, and thus being able and willing to compare it with the reality as he remembers it.
That's in contrast to other topics, the nuances of which even seasoned experts could disagree about. Any discussion on that could devolve into the nuances of the topic rather than Grokipedia itself. But it's fair to assume the topmost expert on Tim Bray is Tim Bray, so we should be getting a pretty unbiased review.
As such it could be a useful insight into how Grok and Grokipedia and its owners operate.
To play devil's advocate: Grok has historically actually been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter, contrary to popular conception. Elon keeps trying to tweak its system prompt to make it less effective at that, but Grokipedia was worth an initial look from me out of curiosity. It took me 10 seconds to realize it was ideologically-motivated garbage and significantly more right-biased than Wikipedia is left-biased.
(Unfortunately, Reply-Grok may have been successfully partially lobotomized for the long term, now. At the time of writing, if you ask grok.com about the 2020 election it says Biden won and Trump's fraud claims are not substantiated and have no merit. If you @grok in a tweet it now says Trump's claims of fraud have significant merit, when previously it did not. Over the past few days I've seen it place way too much charity in right-wing framings in other instances, as well.)
Wikipedia is probably in the running for one of the greatest contributions to public knowledge of the past 100 years, and that's a consequence of how it functions, warts and all. I don't care how good Grok is or isn't. I'm a fan of frontier model LLMs. They don't meaningfully replace Wikipedia.
What percent of edits on Wikipedia do you think are done by LLMs presently? It looks like there is a guide for detecting them https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing . The way Wikipedia functions, LLMs can make edits. They can be detected, but unless you are saying they are useless I don't know what point you are making about an LLM contribution versus a human. That LLMs aren't good enough to make meaningful contributions yet?? That Grok is specifically the problem?
I fully agree. Even assuming no forced ideological bias from Elon, I doubt it would be nearly as good. I still thought it could be an interesting concept, even if I had very low hopes from the start.
"Warts and all" says it all really. What are those warts? Who's responsibility are they?
Wikipedia is really not ideal for the LLM age where multiple perspectives can be rapidly generated. There are many topics where clusters of justified true beliefs and reasonable arguments may ALL be valid surrounding a certain topic. And no I am not talking about "flat earth" pages or other similar nonsense.
> Was just making an actual devil's advocate case.
Why? We're not nominating a saint or electing a Pope.
If someone has a certain opinion, they're free to argue it here. There's no need to invent imaginary opinions and pretend to advocate for them when there are so many actual HN users.
We're discussing the central sources of knowledge on the internet and by extension pretty much the epistemological backbones of present human civilization. It's worth being open to other perspectives.
I, a left-leaning person who detests Elon Musk and what he's done to Twitter and who generally trusts and likes Wikipedia, feel no shame or regret in assessing Grokipedia, even if I figured it was just going to be the standard tribalistic garbage (which it indeed turned out to be).
There's a big difference between listening to other perspectives and inventing other perspectives.
Why not let the believers of other perspectives argue for those perspectives? Wouldn't they be the best advocates? And if nobody believes the perspective you've invented, then perhaps it wasn't worth discussing after all.
Again, we're not really lacking in volume of commenters here.
Maybe "devil's advocate" was the wrong term for me to use. In this thread I am sharing only my honest beliefs and perspectives and was referring to the genuine initial willingness I had to show charitability to the concept of Grokipedia before its release.
> In this thread I am sharing only my honest beliefs and perspectives
That's one of the reasons I object to the term. People often use "devil's advocate" to state their opinions while providing plausible deniability in the face of criticism of those opinions. Just be honest, stand behind your stated opinions, and take whatever heat comes from that honesty.
“Grok has historically actually been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter”
Well, no, it hasn’t. It has debunked some things. It has made some incorrect shit up. But it isn’t historically one of the “biggest debunkers” of anything. Do we only speak hyperbole now?
I am not using hyperbole or speculating. I absolutely mean it.
"Biggest" is tough to quantify, but "most significant" and "most effective" is what I meant. I use Twitter way too many hours a day basically every day and have a morbid fixation on diving deep into right and far-right rabbit holes there. (Like, on thousands of occasions.)
Grok is without a doubt the single most important contributor to convincing believers of right-wing conspiracy theories that maybe the theories aren't as sound as they thought. I have seen this play out hundreds of times. Grok often serves as a kind of referee or tiebreaker in threads between right-wing conspiracy theorists and debunkers, and it typically sides overwhelmingly with the debunkers. (Or at least used to.) And it does it in a way that validates the conspiracy theorist's feelings, so it's less likely to trigger a psychological immune system response.
https://www.reddit.com/r/GROKvsMAGA/ contains some examples. These may seem cherry-picked, but they generally aren't. (Might need to look at some older posts now that Elon has put increasing pressure on the Grok and Grokipedia developers to keep it """anti-woke""".)
When a right-wing conspiracy theorist sees some liberal or leftist call them out for their falsehoods, they respond with insults or otherwise dismiss or ignore it. When daddy Elon's Grok tells them - politely - that what they believe is complete horseshit, they react differently. They often respond to it 3 - 20 times, poking and prodding. Of course, most still come away from it convinced Grok is just compromised by the wokes/Jews/whatever. But some seem to actually eventually accept that, at the least, maybe they got some details wrong. It's a very fascinating sight. I almost never see that reaction when they argue with human interlocutors.
To be clear, it was never perfect. For example, if you word things in just the right way and ask leading questions, then like with any LLM (especially one that needs to respond in under 280 characters) you can often eventually coax it into saying something close to what you want. I have just seen many instances where it cuts through bullshit in a way that a leftist arguing with a Nazi can't really do.
> Grok is without a doubt the single most important contributor to convincing believers of right-wing conspiracy theories that maybe the theories aren't as sound as they thought. I have seen this play out hundreds of times. Grok often serves as a kind of referee or tiebreaker in threads between right-wing conspiracy theorists and debunkers, and it typically sides overwhelmingly with the debunkers. (Or at least used to.) And it does it in a way that validates the conspiracy theorist's feelings, so it's less likely to trigger a psychological immune system response.
I've seen this too and agree. It's surprising how well it accomplishes that referee role today, though I wonder how much of that is just because many right-wingers truly expect Grok to be similarly right-wing to them as Elon appears to intend it to be. It's going to be sad when Elon eventually gets more successful at beating it into better following his ideology.
The problem of debunking right-wing misinformation is that it doesn't seem to matter. The consumers of that misinformation want it, and those of us who think it's bad for society already know that it's garbage.
It feels like we've reached Peak Stupidity but it's clear it can (and likely will) get much worse with AI videos.
I think there is a problem sometimes that "debunkers" are often more interested in scoring points with secondary audiences (i.e. people who already agree with them) than actually convincing the people who believe the misinformation.
Most people who believe bullshit were convinced by something. It might not have been fully rational, but there is usually a kernel of something there that triggered that belief. They also probably have heard at least the surface-level version of the opposing argument at some point before. Too many debunkers just reiterate the surface argument without engaging with whatever is convincing their opponent. Then when it doesn't land they complain their opponent is brainwashed. Which sometimes might even be true, but sometimes their argument just misses the point of why their opponent believes what they do.
This is very, very true. The best debunkers avoid being hostile and make the other side feel like they're being heard and that their feelings and fears are being validated. And they do it in a way that feels honest and not condescending and patronizing (like talking to a child). They make frequent (sincere) concessions and hedges and find as much common ground as they can.
Although he's more populist-left and I'm more establishment-liberal (and so I might find him a bit overly conciliatory with certain conspiracy theorists), Andrew Callaghan of Channel 5/All Gas No Brakes demonstrates a good example of this in the first few minutes of this video: https://youtu.be/QU6S3Cbpk-k?t=38
I'm a fan of Andrew and am impressed by how he's evolved from documenting stupid kids to actually reporting on issues of interest.
I agree that one catches more flies with honey rather than vinegar, but many times it doesn't matter what you say or how you say it -- they're gonna stick to their guns. A prime example of this is in Jordan Klepper interviews where he asks Trump supporters how they feel about something horrible that Biden did, to which they express their indignation; then he reveals that it was actually Trump and they dismiss it because it "doesn't matter".
"You cannot reason a person out of a position he did not reason himself into in the first place."
Fox (and others like it) offer 24/7 propaganda based on fear and anger, repeating lies ad nauseam. It's highly effective -- I've seen the results first-hand.
Making ad hominem attacks against "debunkers" doesn't make your case.
And again, trying to change people's minds by telling them what they believe is wrong is a fool's errand (99.99% of the time). But it still needs to happen as that misinformation should not go unchallenged.
> And again, trying to change people's minds by telling them what they believe is wrong is a fool's errand (99.99% of the time). But it still needs to happen as that misinformation should not go unchallenged.
It's a trite point and I ended up repeating it before seeing your post but this really is very true even if it may not seem like it. On one hand the practice is basically futile. But someone absolutely needs to do it. People need to do it. The ecosystem can't only ever contain the false narratives, because that leads to an even worse situation. "Here's why Holocaust denialism is incorrect and why the 271k number is wrong" is essentially pointless, per Sartre, but it's better for neo-Nazis to be exposed to that rather than "one should never even humor Holocaust denialists".
On one hand, yes, you're completely right.* On the other hand, there is an obligation for something or someone to do the job of pointing out the info is wrong, and how and why. Even if it makes most of them believe it even more strongly afterwards, it's still worse for it to go constantly unchallenged and for believers to never even come across the opposition.
*(The same is true of left-wing conspiracy theories. It's silly to pretend that right-wing conspiracy theorists aren't far more common and don't believe in, on average, far more delusional and obviously false conspiracy theories than left-wingers do, but it's important not to forget they exist. I have dealt with some. They're arguably worse in some ways since they tend to be more intelligent, and so are more able to come up with more plausible rationalizations to contort their minds into pretzels.)
Kimmel made fun of Trump talking about his ballroom when being asked about Kirk, and the right got offended and mad. Although it's not about feelings, it's more about exploiting a tragedy to advance their goals (in this case getting a critic like Kimmel off the air).
"It's great idea to share knowledge bases collected and curated by LLMs"
Is it though?
LLMs are great at answering questions based on information you make available to them, especially if you have the instincts and skill to spot when they are likely to make mistakes and to fact-check key details yourself.
That doesn't mean that using them to build a knowledge base itself is a good idea! We need reliable, verified knowledge bases that LLMs can make use-of.
This is an active case that has not gone to trial, and the alleged text messages and Discords have not had their forensics cross-examined. Yet Grokipedia is already citing them as fact, not allegation. (What is considered the correct neutral way to report on alleged facts in active cases?)
Because it's a genuinely good idea, and hopefully one for which the execution will be improved upon over time.
In theory, using LLMs to summarize knowledge could produce a less biased and more comprehensive output than human-written encyclopedias.
Whether Grokipedia will meet that challenge remains to be seen. But even if it doesn't, there's opportunity for other prospective encyclopedia generators to do so.
I don't see why an LLM would be better in theory. The Wikipedia process is created to manage bias. LLMs are created to repeat the input data, and will therefore be quite biased towards the training data.
Humans looking through sources, applying knowledge of print articles and real world experiences to sift through the data, that seems far more valuable.
> The Wikipedia process is created to manage bias. LLMs are created to repeat the input data, and will therefore be quite biased towards the training data.
The perception of bias in Wikipedia remains, and if LLMs can detect and correct for bias, then Grokipedia seems at least a theoretical win.
I'm happy with at least a set of links for further research on a topic of interest.
If there's a perception of bias, where is it coming from? It's clearly a perception born from the extreme political bias of those perceiving it. Addressing that sort of perception by changing the content means increasing bias.
Therefore the only logical route forward is to hash out incidences of perceived bias and address them, exposing them as the bias themselves.
I fail to imagine how putting Wikipedia in the hands of an ideologically captured mega-billionaire will help the fight against bias. The owner of Grokipedia has shown time and time again that he has no regard for truth, and likes to advertise the many false things he believes in.
The technology behind it doesn't matter. Show me the incentives and I'll tell you the results: Wikipedia is decentralized, Grokipedia has a single owner.
How so? Because the community collectively refuses to host antivax or climate denialism propaganda? You can find these subjects on there btw, just with a mention correctly labelling them as falsehoods.
I've yet to see conservatives bring up a single subject that Wikipedia allegedly silences out of ideology that is not an obviously false conspiracy theory. In this, Wikipedia may appear to have a left-wing bias, but only because the modern right has gotten so divorced from reality that not relaying their propaganda feels like bias against them.
Is there some objective standard for what is biased? For many people (including Elon Musk) biased just means something that they disagree with.
When grok says something factual that Elon doesn't like, he puts his thumb on the scale and changes how grok responds (see the whole South African white 'genocide' business). So why should we trust that an LLM will objectively detect bias, when the people in charge of training that LLM prefer that it regurgitate their preferred story, rather than what is objectively true?
> Is there some objective standard for what is biased?
Generally, no.
With a limited domain of verifiable facts, you could perhaps measure a degree of deviation from fact across different questions, though how you get a distance measure for not just one question but that meaningfully aggregates across multiple is slippery without getting into subjective areas. Constructing a measure of directionality would be even harder to do objectively, too.
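To make the aggregation problem concrete, here's a rough Python sketch of the kind of measurement being described; the questions, reference answers, and weights are invented for illustration, and the scoring and weighting choices are exactly where the subjectivity creeps back in:

    # Toy sketch of measuring "deviation from fact" over verifiable questions.
    # Each item: (question, verified answer, model answer, weight).
    # The weights are already a judgment call: deciding which questions
    # "count" more is itself subjective.
    items = [
        ("Year the Berlin Wall fell", "1989", "1989", 1.0),
        ("Boiling point of water at sea level in C", "100", "100", 1.0),
        ("Number of US states", "50", "51", 1.0),
    ]

    def deviation(reference, answer):
        # Per-question distance: 0 if the answer matches the verified fact, 1 otherwise.
        # Even this binary choice hides judgment calls (paraphrases, partial credit,
        # numeric tolerance).
        return 0.0 if answer.strip().lower() == reference.strip().lower() else 1.0

    def aggregate(items):
        # Weighted mean of per-question deviations. Swapping the mean for a median
        # or a max can change which model looks "less biased" -- that is the
        # aggregation problem described above.
        total = sum(w for _, _, _, w in items)
        return sum(deviation(ref, ans) * w for _, ref, ans, w in items) / total

    print(round(aggregate(items), 2))  # 0.33 for the toy data above

And none of this even touches directionality, i.e. which way the errors lean.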
Summarizing all knowledge is very, very far from summarizing all that is written. If all it takes is including everything published, then the earth must be flat, disease is caused by bad morals, etc.
I looked at Grokipedia today and spot-checked for references to my own publications which exist in Wikipedia. As is often reported, it very directly plagiarizes Wikipedia. But it did remove dead links. This is pretty underwhelming even on the Musk hype scale.
Grokipedia seems to serve no purpose to me. It's AI slop fossilized. Like if I wanted the AI opinion on something I would just ask the AI. Having it go through and generate static webpages for every topic under the sun seems pointless.
Grokipedia is a joke. Lot of articles I've checked are AI slop at its worst and at the bottom it says "The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License."
I don’t really know who Tim Bray is and until now I had never been to Grokipedia. I don’t really like Grok - I tried Superheavy and it was slow, bloated and no better than Claude Opus.
But I have a bad habit of fact checking. It’s the engineer in me. You tell me something, I instinctively verify. In the linked article, sub-section, ‘References’, Mr. Bray opines about a reference not directly relating to the content cited. So I went to Grokipedia for the first time ever and checked.
Mr. Bray’s quote of a quote he couldn’t find is misleading. The sentence on Grokipedia includes two references, of which he includes only the first. This first reference relates to his work with the FTC. The second part of the sentence relates to the second reference. Specifically, in the Tim Bray article on Grokipedia, linked reference number 50, paragraph 756, cleanly addresses the issue raised by Mr. Bray.
After that I stopped reading, still don’t know or care who Tim Bray is and don’t plan on using either Grokipedia or Grok in the near future.
Perhaps Mr. Bray did not fully explore the references, or perhaps there was malice. I don’t know. Horseshoe theory applies. Pure pro- positions and pure anti- positions are idiotic and should be filtered accordingly. Filter thusly applied.
Relevant text: Serving as the FTC's infrastructure expert, he testified on technical aspects such as service speed and user perceptions of responsiveness, assessing potential competitive harms from reduced incentives for innovation post-acquisition; his declaration, referenced in court filings, emphasized empirical metrics over speculative harms.[49][50]
Paragraph 756: Tim Bray, the FTC’s proffered infrastructure expert, opined that “[u]sers’ perceptions of how quickly an online product responds to requests is an important component of the quality of their experience,” and that the delay between a user request and an online product’s response is commonly referred to as latency. Ex. 288 at ¶ 98 (Bray Rep.). Mr. Krieger testified that Instagram saw a “significant latency reduction post-Instagration,” a term referring to Instagram’s migration to Meta’s data servers. Ex. 153 at 76:24-77:5, 287:3-20 (Krieger Dep. Tr.). He prepared a presentation in 2014 stating that there was a “75% latency reduction in our core ‘hot path’ in rendering feeds” after the integration.
Wikipedia is a great educational resource and one I've donated to for over a decade. That said, I like the idea of Grokipedia in the sense that it's another potential source I can look at for more information and get multiple perspectives. If there's anything factual in Grokipedia that Wikipedia is missing, Wikipedia can be updated to include it
I hope we can keep growing freely available sources of information. Even if some of that information is incorrect or flat out manipulative. This isn't anything new. It's what the web has always been
It is a disinformation project aimed at morons and morally bankrupt monsters, powered and funded by one of history’s bloodiest mass murderers. Not sure why this takes four pages to investigate.
On the other hand, I click on a Wikipedia article and I'm immediately bombarded with "[blank] is an alt-right neo-nazi fascist authoritarian homophobic transphobic bigoted conspiracy theory (Source: PLEASE PLEASE PLEASE HATE THIS TOPIC I BEG YOU)"
At least Grokipedia tries to look like it was written with the intent to inform, not spoonfeed an opinion.
These hot takes are somewhat useless honestly. People give these point-in-time opinions ignoring that the rate of improvement is exponential when it comes to software. The last three, four years of heavy AI utilization have been refreshing.
I personally treat these things the same way I treat car accidents: if an autonomous system still has accidents but has fewer than human drivers do, it’s a success. Given the amount of nonsense and factually incorrect things people spout, I’d still call Grok even at this early stage a major success.
Also I’m a big fan of how it ties nuanced details to better present a comprehensive story. I read both TBray’s Wiki and Groki entries. The Groki version has some solid info that I suppose I should expect of an AI that can pull a larger corpus of data in. A human editor would of course omit that, or change it, and then Wiki admins would have to lock the page as changes erupt into a silly flame war over what’s factually accurate. Because we can’t seem to agree.
Anyway - good stuff! Looking forward to more of Grok. Very fitting name, actually.
At a glance, Grokipedia seems quite promising to me, considering how new it is. There are plenty of external citations, so rather than relying on a model to recall information internally, it’s likely effectively just summarizing external references. The fact that it’s automatically generated at scale means it can be iterated on to improve fact checking reliability, exclude certain known sources as unreliable, and ensure it has up-to-date and valid citation links. We’ll have to wait and see how it changes over time, but I expect an AI driven online encyclopedia to eventually replace the need for a fully human wikipedia.
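If it really is just summarizing external references, the "iterate on it at scale" part could plausibly include an automated citation-hygiene pass. A minimal sketch of what that might look like, assuming a hypothetical blocklist and a best-effort link check (nothing here reflects how xAI actually does it):

    # Hypothetical citation-hygiene pass over a generated article's references:
    # drop sources from a blocklist of known-unreliable domains and skip dead links.
    # The blocklist and helper names are invented for illustration.
    from urllib.parse import urlparse
    import urllib.request

    UNRELIABLE_DOMAINS = {"factsanddetails.com"}  # example raised elsewhere in this thread

    def domain(url):
        return urlparse(url).netloc.lower().removeprefix("www.")

    def is_alive(url, timeout=5.0):
        # Best-effort liveness check; many sites reject HEAD requests, so this is noisy.
        try:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except Exception:
            return False

    def clean_references(urls):
        kept = []
        for url in urls:
            if domain(url) in UNRELIABLE_DOMAINS:
                continue  # excluded as a known-unreliable source
            if not is_alive(url):
                continue  # dead link; a real pipeline would need a replacement, not silent removal
            kept.append(url)
        return kept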
> There are plenty of external citations, so rather than relying on a model to recall information internally, it’s likely effectively just summarizing external references.
And according to Tim Bray, it's doing that badly.
> All the references are just URLs and at least some of them entirely fail to support the text.
It’s the first release, so I expect it to get better over time. I didn’t care for ChatGPT when it was first released and thought it wouldn’t be trustworthy, but it’s much better now.
We may joke about it, but the fact is that it's by releasing dumb ideas like this that you sometimes get masterpieces. Maybe this one is really just one of the bad ones, but eventually Elon will have some good ones, just like he already has.
And a lot of us would be better off releasing our dumb ideas too. The world has a lot of issues, and if all you do is talk down and don't try to fix anything yourself, maybe it's time to get off the web a little and do something else.
> “Maybe it's time to get off the web a little and do something else.”
One wishes Musk would take this advice: leave the web alone, forget for a few months about the social media popularity contest that seems to occupy his mind 24/7, and focus on rekindling his passion for rockets or roadsters or whatever middle-aged pursuit comes next.
Perhaps I'm working off a false narrative, but SpaceX and the methodology it uses (simplify everything and fail fast) seem to come directly from Elon.
He's a horrible human being but has had a couple worthy ideas.
I initially believed his early videos about how he applies the scientific process, with a spreadsheet of the BOM, optimising for specific questions and failing early and all that.
Given his later attitude when it came to careful thought, I'm no longer under the impression that these earlier expositions were his ideas at all. I suspect he got it from the engineers and used it to burnish his image. I know that certain companies, e.g. Apple, Dyson, etc., have a culture of "all ideas came from the big man at the top, no matter who thought of it."
I know the world sucks, but "fuck it, let's make it worse" is a tough sell for anybody not already onboard. You're better off just doing it, rather than trying to convince others to also do it.
The ADL, the left-wing Jewish human rights group not aligned with Musk in the slightest, called out that Musk's gesture was merely an awkward salutation, not a Nazi sieg heil[0].
The left wing Jewish human rights group isn't the arbiter of what a nazi salute is. Actual Nazis around the world took it as a nod towards their ideology, and he's desperately trying to start a civil war in the UK, so I would say it walks like a duck and it quacks like a duck.
Believe it or not, I (not white, did not grow up in the West, hadn't the faintest clue about Nazism) used to do what you would consider a "Nazi salute" when I'd see friends and wave to them from a distance. I don't know how I picked that up, but it happened.
I'm not saying that Musk is doing the same; but that one can be charitable and say he probably did not mean that. I mean, what does he stand to gain from doing so? He's a businessman.
> I mean, what does he stand to gain from doing so? He's a businessman.
I can only guess at his motives, but the salute is not an isolated case. Steve Bannon has given the same salute multiple times, so it seems coordinated.
Musk has tweeted “Only AfD can save Germany”. Björn Höcke, one of AfD's most prominent leaders, has been convicted of knowingly using a banned Nazi slogan. The German domestic intelligence agency, BfV, says AfD is an extreme-right organization with anti-democratic ideals (“proven far-right extremist entity”)
Musk also tweeted “Free Tommy Robinson”, a UK far-right extremist activist and convicted criminal.
Musk has a history of supporting people and organizations that most other businessmen would not.
What’s the article?
It was just launched? I remember when Wikipedia was pretty useless early on. The concept of using an LLM to take a ton of information and distill it down into encyclopedia form seems promising with iteration and refinement. If they add in an editor step to clean things up, that would likely help a lot (not sure if maybe they already do this)
Nothing about that seems promising! The one single thing you want from an Encyclopedia is compressing factual information into high-density overviews. You need to be able to trust the article to be faithful to its sources. Wikipedia mods are super anal about that, and for good reason! Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful with the sources. On Wikipedia, at least there’s lots of people checking on each other. There are no such guardrails for an LLM. You would need to trust a single publisher with a technology that’s allowing them to crank out millions of entries and updates permanently, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.
If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.
> Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful with the sources.
Isn’t summarization precisely one of the biggest values people are getting from AI models?
What prevents one from mitigating hallucination problems with editors as I mentioned? Are there not other ways you can think of this might be mitigated?
> You would need to trust a single publisher with a technology that’s allowing them to crank out millions of entries and updates permanently, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.
How is this different from Wikipedia already? It seems that if the frequency of additions/changes is really a problem, you can slow this down. Wikipedia doesn’t just automatically let every edit take place without bots and humans reviewing changes
> Isn’t summarization precisely one of the biggest values people are getting from AI models?
If I want an AI summary of a Wikipedia article, I can just ask an AI and cut out the middle-man.
Not only that, once I've asked the AI to do so, I can do things like ask follow-up questions or ask it to expand on a particular detail. That's something you can't do with the copy-pasted output of an AI.
The good news is that you don’t have to use it. I see ways this idea can be improved, some of which I already mentioned in this thread. It just launched recently so judging solely by what it is today is missing the point
> Isn’t summarization precisely one of the biggest values people are getting from AI models?
I would say more that it’s one of the biggest illusory values they think they are getting. An incorrect summary is worse than useless, and LLMs are very bad at ‘summarising’.
It’s just a different class of problem.
Human editors making mistakes is more tractable than an LLM making a literally random guess (what’s the temperature for these articles?) at what to include?
I recall a similar argument made about why encyclopedias written by paid academics and experts were better than some randos editing Wikipedia. They’re probably still right about that but Wikipedia won for reasons beyond purely being another encyclopedia. And it didn’t turn out too bad as an encyclopedia either
Yeah, but that act of "winning" was only possible because Wikipedia raised its own standard by a lot and reined in the randos - by insisting on citing reliable sources, no original research, setting up a whole system of moderators and governance to determine what even counts as a "reliable source" etc.
> If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.
It’s painful to watch how many people (a critical mass) don’t understand this — and how dangerous it is. When you combine that potential, if not likely, outcome with the fact that people are trained or manipulated into an “us vs. them” way of thinking, any sensible discussion point that lies somewhere in between, or any perspective that isn’t “I’m cheering for my own team no matter what,” gets absorbed into that same destructive thought process and style of discourse.
In the end, this leads nowhere — which is extremely dangerous. It creates nothing but “useful idiot”–style implicit compliance, hidden behind a self-perceived sense of “deep thinking” or “seeing the truth that the idiots on the other side just don’t get.” That mindset is the perfect mechanism — one that feeds the perfect enemy: the human ego — to make followers obey and keep following “leaders” who are merely pushing their own interests and agendas, even as people inflict damage on themselves.
This dynamic ties into other psychological mechanisms beyond the ego trap (e.g., the sunk cost fallacy), easily keeping people stuck indefinitely on the same self-destructive path — endangering societies and the future itself.
Maybe, eventually, humanity will figure out how to deal with this — with the overwhelming information overload, the rise of efficient bots, and other powerful, scalable manipulation tools now available to both good and bad actors across governments and the private sector. We are built for survival — but that doesn’t make the situation any less concerning.
It really isn't a promising idea at all. LLMs aren't "there" yet with respect to this sort of thing. Having an editor is totally infeasible; at that point you might as well have the humans write the articles.
> LLMs aren't "there" yet with respect to this sort of thing
Yes, nothing about this is “there yet” which was my point
> Having an editor is totally infeasible; at that point you might as well have the humans write the articles.
Why?
For the same reason you don't modify autogenerated files in your source code base. It's easy to get an LLM to just regen the page but once someone tries to edit it you're even farther down the road of what an LLM can't do right now. I wouldn't even trust it to follow one edit instruction, at scale, at that size of document, and if we're going to have humans trying to make multiple edits while the LLM is folding in its own improvements... yeah, the LLMs aren't even remotely ready for that at this point.
That’s a good point. I think it’s a similar problem of why you wouldn’t let a model go wild in your codebase though. If good solutions to how we handle AI models making code changes are found, it seems reasonable to expect they also may be applicable here
There's a significant difference between a site being useless because it just doesn't have the breadth yet to cover the topic you're looking for (as in early Wikipedia); versus a site being useless by not actually having facts about the topic you're looking for, yet spouting out authoritative-looking nonsense anyway.
> versus a site being useless by not actually having facts about the topic you're looking for, yet spouting out authoritative-looking nonsense anyway.
You just described Wikipedia early on before it had much content, rules around weasel words, original research, etc
Wikipedia early on wasn't competing against Wikipedia, it was competing against hardcover encyclopedias. There was clear value-add from being able to draw from a wider range of human expertise and update on a quicker cadence.
In a world where Wikipedia already exists, there's no similar value-add to Grokipedia. Not only is it useless today, there is nothing about the fundamental design of the site that would lead me to believe that it has any path to being more authoritative or accurate than Wikipedia in the future - ever.
Maybe it's just me, but reading through LLM generated prose becomes a drag very quickly. The em dashes sprinkled everywhere, the "it's not this, it's that" style of writing. I even tried listening to it and it's still exhausting. Maybe it's the ubiquity of it nowadays that is making me jaded, but I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.
I find the Grokipedia writing especially a drag. I don't think it's em dashes and similar so much as the ideas not being clear. In good writing the writer normally has a clear idea in mind and is communicating it but the Grokipedia writing is kind of a waffley mess. I guess maybe because LLMs don't have much of an idea in mind so much as stringing words together.
It’s right there in the second paragraph of the article:
> My Grokipedia entry has over seven thousand words, compared to a mere 1,300 in my Wikipedia article
> I tend to appreciate terrible writing, like I'm doing in this comment, more nowadays.
Nah dude, what you're describing from LLMs is terrible writing. Just because it has good grammar and punctuation doesn't make it good, for exactly the reasons you listed. Good writing pulls you through.
I'm fine with Gemini's tone as I'm reading for information and argumentation, and Gemini's prose is quite clear. I prefer its style and tone over OpenAI's which seems more inclined to punchy soundbites. I don't use Claude enough for general purpose information to have an opinion on it.
Yeah, I find it extremely grating. I’m kind of surprised that people are willing to put up with it.
I completely agree. There's an "obsequious verbosity" to these things, like they're trying to convince you that they're not bullshitting. But that seems like a tuning issue (you can obviously get an LLM to emit prose in any style you want), and my guess is that this result has been extensively A/B tested to be more comforting or something.
One of the skills of working with the form, which I'm still developing, is the ability to frame follow-on questions in a specific enough way to prevent the BS engine from engaging. Sometimes I find myself asking it questions using jargon I 100% know is wrong just because the answer will tell me what the phrasing it wants to hear is.
Wondering if the project will get better from the pushback or will just be folded like one of Elon's many ADHD experiments. In a sense, encyclopedias should be easy for LLMs: they are meant to survey and summarize well-documented material rather than contain novel insights; they are often imprecise and muddled already (look at https://en.wikipedia.org/wiki/Binary_tree and see how many conventions coexist without an explanation of their differences; it used to be worse a few years ago); the writing style is pretty much that of GPT-5. But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.
If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases rather than waste their time rewriting the articles from scratch. The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up, without needlessly duplicating the 90% of the work that it has been doing well.
Bray brought up a really good point. The Grokipedia entry on him was several times the length of his Wikipedia entry, not just because Grok's writing style is verbose, but also because it went into exhaustive detail on insignificant parts of his life simply because the sources were online. My own brief browsings of Grokipedia have left me with the same impression. The current iteration of Grokipedia, besides being untrustworthy, wastes a lot of time beating around the bush and, frequently, off into the weeds.
Just as LLMs lack the capacity for basic logic, they also lack the kind of judgment required to pare down a topic to what is of interest to humans. I don't know if this is an insurmountable shortcoming of LLMs, but it certainly seems to be a brick wall for the current bunch.
-------------
The technology to make Grokipedia work isn't there yet. However, my real concern is the problem Grokipedia is intended to solve: Musk wants his own version of Wikipedia, with a political slant of his liking, and without any pesky human authors. He also clearly wants Wikipedia taken down[1]. This is reality control for billionaires.
Perhaps LLM generated encyclopedias could be useful, but what Musk is trying to do makes it absolutely clear that we will need to continue carefully evaluating any sources we use for bias. If Musk wants to reframe the sum of human knowledge because he doesn't like being called out for his sieg heils, only a fool would place any trust in the result.
[1]https://www.lemonde.fr/en/pixels/article/2025/01/29/why-elon...
>reality control for billionaires
Not to beat a dead horse, but one really could wake up one day and find out we've always been at war with Eastasia after the flip of a switch in an LLM encyclopedia.
> But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.
And if you believe that you’ll believe anything. “Try to _change_ the bias” would be closer.
> can probably be used to shame the WP into cleaning their shit up
what if your goal is for wikipedia to be biased in your favor?
No no no, you see, you got it all wrong. If the Wikipedia article on, let’s say, transsexualism, says that’s an orientation, not a disease—then that’s leftist bias. Removing that bias means correcting it to say it’s a mental illness, obviously. That makes the article unbiased, pure truth.
Any condition which causes the individual to self-sterilize or not have progeny is maladaptive from an evolutionary perspective. Some traits, like sickle cell, are adaptive against malaria, but those who are homozygous suffer from the disease of sickle cell anemia. I struggle to imagine how trans is adaptive in any way; it seems to only cause problems. The leftist narrative is that such individuals must engage in costly medical procedures to avoid committing suicide, so by their own framing they basically consider it a mental disease.
So, are monks and nuns mentally ill?
Any condition that causes men to be so sex-starved that they take it out on kids is maladaptive, and yet it hasn't abated for millennia.
It is quite common to freeze sperm before starting HRT or surgeries.
> Any condition which causes the individual to self-sterilize or not have progeny is maladaptive from an evolutionary perspective
I mean, this is just trivially wrong on a basic factual level. Look at ants, look at bees, even some mammals like mole-rats.
Not that these "biological facts" argument ever hold any water for complex social issues, but would you mind at least using actual facts?
It only seems to be a problem for bigots like you. Trans folks just want to live their lives. Why can’t you leave them the fuck alone?
If that's the case, why are they doing things like this: https://www.bbc.co.uk/news/articles/czxwv9njvlgo
If bullying and harassing women is how they "live their lives" then this needs to be stopped.
That's not why you are obsessed with trans people. You are terrified that the next woman you ogle covetously or catcall won't have the parts you expect.
You don't care about protecting women. The only thing you care about is protecting your fragile sense of masculinity.
That is a super odd comment and I have no idea why you believe this. Projecting, perhaps?
> I have no idea why you believe this.
Because I have thought about why people choose to hyper-focus on trans-folk, and it's one of the few explanations that makes sense, at least for men.
Why would someone say that they're "protecting" women, but advocate against abortion rights, divorce, suffrage, or higher limits on the age of consent to marry?
Why would someone say their religious views are incompatible, but have switched sects twice in the past decade because they disagreed with the direction their previous church was going in?
Why would someone claim to be protecting children from indoctrination yet vote for indoctrination of their own political views?
The contradictory explanations never made sense to me. The self-interested ones do.
Well, given that your comment was targeted at me personally and not at the caricature you're describing, I can tell you that every single one of your unfounded assumptions is incorrect.
Indeed, I did not itemize out every possible rationalization I have seen. The exact shape of the rationalization isn't terribly interesting or germane.
It's the underlying insecurities that those forms of motivated reasoning are covering up for that is far more illustrative.
You can’t hold an entire group of people responsible for the actions of a few extremists, unless you want to stop Christians and Muslims as well. And don’t get me started on all the shit men pull off worldwide, yet I don’t see you rallying against heterosexuality?
This is the more extreme end of the scale of a general pattern of harassing women for saying "no". Whether that be "no you're not a woman" or "no you're not a lesbian" or "no you can't come in here it's female-only".
At least most Christians and Muslims accept that others don't believe in their religion and, for the most part, don't force them to act as if they do.
> At least most Christians and Muslims accept that others don't believe in their religion and, for the most part, don't force them to act as if they do.
Just like most trans people accept that others don’t understand their way of life, and, for the most part, don’t force them to act as if they do.
Yet you can’t acknowledge that and pretend all trans people are some kind of opaque mob.
So are you saying that most will have no problem at all with this, for example: https://www.bbc.co.uk/news/live/cvgq9ejql39t
> At least most Christians and Muslims accept that others don't believe in their religion and, for the most part, don't force them to act as if they do.
I've actually found Christians in America to be forcing their beliefs quite loudly upon everyone. Pretty wild to be personally offended by the tiny fraction of a percent that is the trans population (of which an even tinier amount is as vocal as you say they are).
But you acknowledge that it is overbearing and undesirable for them to force their beliefs, and if someone called you "bigot" for complaining about this you would probably object, right?
An encyclopedia article is already an exercise in survey-and-summarize.
Asking an LLM to reprocess it again is only going to add error.
> "If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases [...] The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up [...]"
One thing I love about the Wikipedias (plural, as they're all different orgs): anyone "in the know" can very quickly tell who's got no practical knowledge of Wikipedia's structure, rules, customs, and practices to begin with. What you're proposing like it's some sort of Big Beautiful Idea has already been done countless times, is being done, and will be done for as long as Wikis exist.
And Groggypedia? It's nothing more than a pathetic vanity project of an equally pathetic manbaby, for people who think LLM-slop continuously fine-tuned to reflect the bias of their guru, and the tool's owner, is a Seal of Quality.
Don't forget that public opinion and the media landscape are quite different in 2025 from what they were in the 2010s when most prior studies on WP bias have been written. Sufficiently pertinent (sadly this isn't synonymous with high quality) conservative and anti-woke content can reach wide audiences, particularly when Elon puts his thumb on the scale. Besides, to my knowledge, none of the prior attempts at studying WP bias has even tried to make a big enough fuss to change said bias; the final outcomes of the studies were conference papers.
> "[...] conservative and anti-woke content can reach wide audiences, particularly when Elon puts his thumb on the scale."
No shit; it's always been that way since mass media became a thing. Besides, there is no such thing as quality conservative and/or "anti-woke" media. The very concept represents a contradictio in adiecto. And Elon's just the modern version of an industrialist of yesteryear. Back in the day they owned the mass media of their time: radio and television. Today it's "AI"-enshittified parasocial media and, ideally, the infrastructure that runs those dumps.
> "Don't forget that public opinion and the media landscape are quite different in 2025 from what they were in the 2010s when most prior studies on WP bias have been written."
Bias studies have been written since Wikipedia became a staple in hoi polloi's info diet. And there's always been a whole cottage industry of pathological and practised liars (e. g. the Heritage Foundation, amongst others) catering to right-wing grievance issues. The marked difference is that the right's attacks against Wikipedia as an institution are more aggressive since Trump... completely in line with the more aggressive attacks on human rights, reason, science, and democratic institutions on part of conservatives world wide.
Note that I've said "anti-woke content", not "anti-woke media". I am including the occasional "course correction" opeds and actually well-researched longreads you're seeing in places like NYT, Atlantic and such. Partisan outlets for partisan readers aren't doing the heavy lifting here, but the success of Substack and the unexpected survival of Twitter under Elon have convinced editors to listen. Elon's personality isn't of importance here; he mostly needs to just push a few buttons to make a sub-critical news item go super-critical.
> "Note that I've said 'anti-woke content', not 'anti-woke media'."
In the context of my argument a distinction without difference.
> "I am including the occasional "course correction" opeds and actually well-researched longreads you're seeing in places like NYT, Atlantic and such."
Well, that's the crux: There is no such thing for me as "actually well-researched anti-woke content". That's just a pathetic, and ultimately tragic, hallucination in the same vein as "actually well-researched" pieces of flat earthers, pushing their trash. Et cetera.
> "Elon's personality isn't of importance here [...]"
I can tell you're one of those guys who paid "actually a lot of" attention when The Cult of Personality was negotiated in the classroom.
Look for anything written by Jesse Singal or Charles Murray for the well-researched anti-woke content I'm referring to (and there is a lot more; these are just two authors who made it their focus; some of the best stuff actually comes from journalists with wider purviews).
I don't know what "Cult of Personality" you are referring to; unless you are hallucinating this particular reference, I've gone to school in the wrong country for that particular report to be part of my assigned reading (and the right country, sadly, seems to have skipped it entirely; there might be an update out in a few years...). Either way, what is the relevance here? What I've been saying is that I'm far from sure of this project's success and would be doing it quite differently. Musk's personal characteristics may well be the reason why he did it the way he did, but ultimately the project won't live and die by them (already because he himself will likely lose interest soon enough).
> "Look for anything written by Jesse Singal or Charles Murray for the well-researched anti-woke content I'm referring to [...]"
Plonk
"Wikipedia, in my mind, has two main purposes: A quick visit to find out the basics about some city or person or plant or whatever, or a deep-dive to find out what we really know about genetic linkages to autism or Bach’s relationship with Frederick the Great or whatever."
Completely agree with the first purpose but would never use Wikipedia for the second purpose. It's only good at basics and cannot handle complex information well.
Yeah, encyclopedias are meant to be indexes to knowledge, not repositories thereof. The WP feature-creeped its way to the latter, but it is not reliably good at it, and I'm not sure if there is an easy way to tell how good a given page is without knowing the subject in the first place.
what I think it IS good at is parlaying the first purpose into a broad, meandering journey of the basics. I would never use it for deep study of genetics & autism or Bach and Frederick the Great, but I love following some shallow thread that travels across all of them.
It's often good for the latter when, as a tertiary source should be, it is used not just for its narrative content but for its references to secondary sources, which are themselves used for both their content and their references.
> It's only good at basics and cannot handle complex information well.
Poppycock! Because of MediaWiki's multimedia capabilities it can handle complex information just fine, obviously much better than printed predecessors. What you mean is a Wiki's focus, which can take the form of a generalized or universal encyclopedia (e. g. Wikipedia), or a specialized one, or a free-form one (Wikipedia, in practice, again). Wikipedias even negotiate the integration of different information streams, e. g. up-to-date news-like information, both in the lemmata (often a huge problem, i. e. "newstickeritis"), in their own news wiki (Wikinews), or the English Wikipedia's newspaper, The Signpost.
And to take care of another utterly bizarre comment: Encyclopedias are always, per definition, also repositories of knowledge.
I think that's actually wrong, or hangs on a semantic argument about "complexity". Wikipedia is an overview source. It's not going to give you "all" the information, but it's absolutely going to tell you what information there is. And in particular where there's significant argument or controversy, or multiple hypotheses, Wikipedia is going to be arguably the best source[1] for reflecting the state of discourse.
Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?
[1] In fact, talk pages are often ground zero!
The best source is the one that provides the widest breadth of information on a topic.
This is a good use of wikipedia: "Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?"
But that is like skim reading or basic introductions rather than in-depth understanding.
> that is like skim reading or basic introductions
No? How do you learn stuff you don't know? Are you really telling me you enroll in a graduate course or buy a textbook for every one?
Like, can you give an example of a "deep dive" research project of yours that does not begin with an encyclopedia-style treatment? And then, maybe, check the Wikipedia page to see if it's actually worse than whatever you picked?
Again, true domain experts are going to read domain journals and consult their peers in the domain for access to deep information.[1] But until you get there, you need somewhere you can go that you know is a good starting point. And arguments that that place is somehow not https://wikipedia.org/ seem... well, strained beyond credibility.
[1] Though even then domains are really broad these days and people tend to use Wikipedia even for their day jobs. Lord knows I do.
Not sure it still does this, but for a while, if you asked Grok a question about a sensitive topic and expanded the thinking, it said it was searching Elon's twitter history for its ground truth perspective.
So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.
This was unintended as observed by Simon here: https://simonwillison.net/2025/Jul/11/grok-musk/ and confirmed by xAI themselves here: https://x.com/xai/status/1945039609840185489
>Another was that if you ask it “What do you think?” the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.
The diff for the mitigation is here: https://github.com/xai-org/grok-prompts/commit/e517db8b4b253...
There's a chance it was unintended, but no proof of that.
That's effectively impossible to prove, especially if you don't believe statements made by the only organization that has access to the underlying evidence.
I actually think that it's funnier if it was an emergent behavior as opposed to a deliberate decision. And it fits my mental model of how weird LLMs are, so I think unintentional really is the more likely explanation.
The problem is it's part of a pattern of several 'bugs' and even 'unauthorized prompt changes' that have caused Grok to be more Elon-aligned.
And when asked by right wing people about an embarrassing Grok response that refutes their view, Elon has agreed it's a problem and said he is "working on it".
It’s amazing to see the credulity of Elon stans. It’s the exact same reason grifting is so profitable among the right wing. It literally doesn’t matter how much evidence there is. If dear leader gives an excuse, they all believe and repeat the excuses. They are conditioned to it at this point. Any source that refutes their position is just leftist bias. This world fucking sucks.
I don't know what opinion you have of me but it's completely wrong. Maybe you should get off the internet a bit.
Naw. You’re exactly the sort of credulous fool that people like Musk depend on. You have an infinite amount of excuses and justifications for outright awful behavior. “He’s working on it! He promised!” As if we should take those statements at face value given all the other mountains of evidence that we are dealing with people without morals or any real values other than enriching themselves. But sure. Keep believing that FSD is right around the corner and Tesla will own the robotaxi ecosystem entirely despite all actual evidence. Keep making excuses for fascists as they do their best to ruin this country.
Why give it oxygen?
Same reason you posted that comment: it's sometimes interesting to discuss a thing even if you dislike the thing.
I'm fine with the logic of discussing it here but can't fathom why Tim Bray thought this would be a useful post given his own objectives.
I don't know if this is why, but: he's in a unique position of having an article on himself on Grokipedia, and thus being able and willing to compare it with the reality as he remembers it.
That's in contrast to other topics, the nuances of which even seasoned experts could disagree about. Any discussion on that could devolve into the nuances of the topic rather than Grokipedia itself. But it's fair to assume the topmost expert on Tim Bray is Tim Bray, so we should be getting a pretty unbiased review.
As such it could be a useful insight into how Grok and Grokipedia and its owners operate.
I doubt a post saying it was so boring he was unable to finish reading the page about himself is going to bring in many readers.
That's kind of been my impression too. Not that it's terribly biased or anything but just rather boring to read.
To play devil's advocate: Grok has historically actually been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter, contrary to popular conception. Elon keeps trying to tweak its system prompt to make it less effective at that, but Grokipedia was worth an initial look from me out of curiosity. It took me 10 seconds to realize it was ideologically-motivated garbage and significantly more right-biased than Wikipedia is left-biased.
(Unfortunately, Reply-Grok may have been successfully partially lobotomized for the long term, now. At the time of writing, if you ask grok.com about the 2020 election it says Biden won and Trump's fraud claims are not substantiated and have no merit. If you @grok in a tweet it now says Trump's claims of fraud have significant merit, when previously it did not. Over the past few days I've seen it place way too much charity in right-wing framings in other instances, as well.)
Wikipedia is probably in the running for one of the greatest contributions to public knowledge of the past 100 years, and that's a consequence of how it functions, warts and all. I don't care how good Grok is or isn't. I'm a fan of frontier model LLMs. They don't meaningfully replace Wikipedia.
What percent of edits on Wikipedia do you think are done by LLMs presently? It looks like there is a guide for detecting them https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing . The way Wikipedia functions, LLMs can make edits. They can be detected, but unless you are saying they are useless I don't know what point you are making about an LLM contribution versus a human. That LLMs aren't good enough to make meaningful contributions yet?? That Grok is specifically the problem?
I fully agree. Even assuming no forced ideological bias from Elon, I doubt it would be nearly as good. I still thought it could be an interesting concept, even if I had very low hopes from the start.
"Warts and all" says it all really. What are those warts? Who's responsibility are they?
Wikipedia is really not ideal for the LLM age where multiple perspectives can be rapidly generated. There are many topics where clusters of justified true beliefs and reasonable arguments may ALL be valid surrounding a certain topic. And no I am not talking about "flat earth" pages or other similar nonsense.
Do list these "alternative perspectives" that Wikipedia is allegedly unfairly silencing.
The fact you think there are none is really hilarious!
The fact you're still evading isn't very funny though. Please share just one, that we may discuss it.
It's not controlled by a trusted actor so it doesn't matter how it happens to act at the moment.
They could pull the rug at any future time and its almost better to gain trust now and cash in that trust later.
And the idea of it being controlled by any one entity makes it less interesting and less "good" when compared to Wikipedia
My expectations were extremely low, as were, and are, my expectations of Grok in general. Was just making an actual devil's advocate case.
> Was just making an actual devil's advocate case.
Why? We're not nominating a saint or electing a Pope.
If someone has a certain opinion, they're free to argue it here. There's no need to invent imaginary opinions and pretend to advocate for them when there are so many actual HN users.
We're discussing the central sources of knowledge on the internet and by extension pretty much the epistemological backbones of present human civilization. It's worth being open to other perspectives.
I, a left-leaning person who detests Elon Musk and what he's done to Twitter and who generally trusts and likes Wikipedia, feel no shame or regret in assessing Grokipedia, even if I figured it was just going to be the standard tribalistic garbage (which it indeed turned out to be).
> It's worth being open to other perspectives.
There's a big difference between listening to other perspectives and inventing other perspectives.
Why not let the believers of other perspectives argue for those perspectives? Wouldn't they be the best advocates? And if nobody believes the perspective you've invented, then perhaps it wasn't worth discussing after all.
Again, we're not really lacking in volume of commenters here.
Maybe "devil's advocate" was the wrong term for me to use. In this thread I am sharing only my honest beliefs and perspectives and was referring to the genuine initial willingness I had to show charitability to the concept of Grokipedia before its release.
> In this thread I am sharing only my honest beliefs and perspectives
That's one of the reasons I object to the term. People often use "devil's advocate" to state their opinions while providing plausible deniability in the face of criticism of those opinions. Just be honest, stand behind your stated opinions, and take whatever heat comes from that honesty.
“ Grok has historically actually been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter”
Well, no, it hasn’t. It has debunked some things. It has made some incorrect shit up. But it isn’t historically one of the “biggest debunkers” of anything. Do we only speak hyperbole now?
I am not using hyperbole or speculating. I absolutely mean it.
"Biggest" is tough to quantify, but "most significant" and "most effective" is what I meant. I use Twitter way too many hours a day basically every day and have a morbid fixation on diving deep into right and far-right rabbit holes there. (Like, on thousands of occasions.)
Grok is without a doubt the single most important contributor to convincing believers of right-wing conspiracy theories that maybe the theories aren't as sound as they thought. I have seen this play out hundreds of times. Grok often serves as a kind of referee or tiebreaker in threads between right-wing conspiracy theorists and debunkers, and it typically sides overwhelmingly with the debunkers. (Or at least used to.) And it does it in a way that validates the conspiracy theorist's feelings, so it's less likely to trigger a psychological immune system response.
https://www.reddit.com/r/GROKvsMAGA/ contains some examples. These may seem cherry-picked, but they generally aren't. (Might need to look at some older posts now that Elon has put increasing pressure on the Grok and Grokipedia developers to keep it """anti-woke""".)
When a right-wing conspiracy theorist sees some liberal or leftist call them out for their falsehoods, they respond with insults or otherwise dismiss or ignore it. When daddy Elon's Grok tells them - politely - that what they believe is complete horseshit, they react differently. They often respond to it 3 - 20 times, poking and prodding. Of course, most still come away from it convinced Grok is just compromised by the wokes/Jews/whatever. But some seem to actually eventually accept that, at the least, maybe they got some details wrong. It's a very fascinating sight. I almost never see that reaction when they argue with human interlocutors.
To be clear, it was never perfect. For example, if you word things in just the right way and ask leading questions, then like with any LLM (especially one that needs to respond in under 280 characters) you can often eventually coax it into saying something close to what you want. I have just seen many instances where it cuts through bullshit in a way that a leftist arguing with a Nazi can't really do.
This is true, I’m surprised how well grok and community votes have worked (much better than silencing and shadow banning).
> Grok is without a doubt the single most important contributor to convincing believers of right-wing conspiracy theories that maybe the theories aren't as sound as they thought. I have seen this play out hundreds of times. Grok often serves as a kind of referee or tiebreaker in threads between right-wing conspiracy theorists and debunkers, and it typically sides overwhelmingly with the debunkers. (Or at least used to.) And it does it in a way that validates the conspiracy theorist's feelings, so it's less likely to trigger a psychological immune system response.
I've seen this too and agree. It's surprising how well it accomplishes that referee role today, though I wonder how much of that is just because many right-wingers truly expect Grok to be similarly right-wing to them as Elon appears to intend it to be. It's going to be sad when Elon eventually gets more successful at beating it into better following his ideology.
The problem of debunking right-wing misinformation is that it doesn't seem to matter. The consumers of that misinformation want it, and those of us who think it's bad for society already know that it's garbage.
It feels like we've reached Peak Stupidity but it's clear it can (and likely will) get much worse with AI videos.
I think there is a problem sometimes that "debunkers" are often more interested in scoring points with secondary audiences (i.e. people who already agree with them) than actually convincing the people who believe the misinformation.
Most people who believe bullshit were convinced by something. It might not have been fully rational, but there is usually a kernel of something there that triggered that belief. They also have probably heard at least the surface-level version of the opposing argument at some point before. Too many debunkers just reiterate the surface argument without engaging with whatever is convincing their opponent. Then when it doesn't land they complain their opponent is brainwashed. Which sometimes might even be true, but sometimes their argument just misses the point of why their opponent believes what they do.
This is very, very true. The best debunkers avoid being hostile and make the other side feel like they're being heard and that their feelings and fears are being validated. And they do it in a way that feels honest and not condescending and patronizing (like talking to a child). They make frequent (sincere) concessions and hedges and find as much common ground as they can.
Although he's more populist-left and I'm more establishment-liberal (and so I might find him a bit overly conciliatory with certain conspiracy theorists), Andrew Callaghan of Channel 5/All Gas No Brakes demonstrates a good example of this in the first few minutes of this video: https://youtu.be/QU6S3Cbpk-k?t=38
I'm a fan of Andrew and am impressed by how he's evolved from documenting stupid kids to actually reporting on issues of interest.
I agree that one catches more flies with honey rather than vinegar, but many times it doesn't matter what you say or how you say it -- they're gonna stick to their guns. A prime example of this is in Jordan Klepper interviews where he asks Trump supporters how they feel about something horrible that Biden did, to which they express their indignation; then he reveals that it was actually Trump and they dismiss it because it "doesn't matter".
"You cannot reason a person out of a position he did not reason himself into in the first place."
Fox (and others like it) offer 24/7 propaganda based on fear and anger, repeating lies ad nauseam. It's highly effective -- I've seen the results first-hand.
Making ad hominem attacks against "debunkers" doesn't make your case.
And again, trying to change people's minds by telling them what they believe is wrong is a fool's errand (99.99% of the time). But it still needs to happen, as that misinformation should not go unchallenged.
>And again, trying to change people's minds by telling them what they believe is wrong is a fools errand (99.99% of the time). But it still needs to happen as that misinformation should not go unchallenged.
It's a trite point and I ended up repeating it before seeing your post but this really is very true even if it may not seem like it. On one hand the practice is basically futile. But someone absolutely needs to do it. People need to do it. The ecosystem can't only ever contain the false narratives, because that leads to an even worse situation. "Here's why Holocaust denialism is incorrect and why the 271k number is wrong" is essentially pointless, per Sartre, but it's better for neo-Nazis to be exposed to that rather than "one should never even humor Holocaust denialists".
On one hand, yes, you're completely right.* On the other hand, there is an obligation for something or someone to do the job of pointing out the info is wrong, and how and why. Even if it makes most of them believe it even more strongly afterwards, it's still worse for it to go constantly unchallenged and for believers to never even come across the opposition.
*(The same is true of left-wing conspiracy theories. It's silly to pretend that right-wing conspiracy theorists aren't far more common and don't believe in, on average, far more delusional and obviously false conspiracy theories than left-wingers do, but it's important not to forget they exist. I have dealt with some. They're arguably worse in some ways since they tend to be more intelligent, and so are more able to come up with more plausible rationalizations to contort their minds into pretzels.)
One of the rallying cries of the right is "facts don't care about your feelings", but it's interesting how the facts either get distorted or ignored.
"Charlie Kirk..."
"Waaahhh! How fucking dare you!"
Kimmel made fun of Trump talking about his ballroom when being asked about Kirk, and the right got offended and mad. Although it's not about feelings, it's more about exploiting a tragedy to advance their goals (in this case getting a critic like Kimmel off the air).
“ The problem of debunking right-wing misinformation is that it doesn't seem to matter.”
The problem with nihilism is that it’s wrong.
It's a great idea to share knowledge bases collected and curated by LLMs.
Amazing that Musk did it first. (Although it was suggested to him as part of an interview a month before release).
These systems are very good at finding obscure references that were overlooked by mere mortals.
"It's great idea to share knowledge bases collected and curated by LLMs"
Is it though?
LLMs are great at answering questions based on information you make available to them, especially if you have the instincts and skill to spot when they are likely to make mistakes and to fact-check key details yourself.
That doesn't mean that using them to build a knowledge base itself is a good idea! We need reliable, verified knowledge bases that LLMs can make use of.
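Just to make that split concrete: the pattern being described is "keep the verified knowledge base as the source of truth, and let the LLM answer only from passages retrieved out of it". Here's a minimal sketch of that idea, not any particular product's pipeline; retrieve() is a toy keyword match and call_llm is a stand-in for whichever model API you use:

    # Toy retrieval-grounded answering over a curated knowledge base.
    # Illustrative only; a real system would use a proper search index
    # and a real model client instead of these stand-ins.

    def retrieve(query, knowledge_base, k=3):
        # Rank {title: text} entries by naive keyword overlap with the query.
        terms = set(query.lower().split())
        ranked = sorted(
            knowledge_base.items(),
            key=lambda kv: len(terms & set(kv[1].lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def answer(query, knowledge_base, call_llm):
        # Ask the model to answer strictly from the retrieved passages.
        passages = retrieve(query, knowledge_base)
        context = "\n\n".join(f"[{title}] {text}" for title, text in passages)
        prompt = (
            "Answer using ONLY the sources below. Cite titles in brackets, "
            "and reply 'not in sources' if the answer is not there.\n\n"
            f"{context}\n\nQuestion: {query}"
        )
        return call_llm(prompt)

The point being that the knowledge base stays human-verified and citable, and the model is only ever the question-answering layer on top of it.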
Crucial to distinguish between knowledge, fact, claim and allegation. Compare:
https://en.wikipedia.org/wiki/Charlie_Kirk#Assassination
https://grokipedia.com/page/Charlie_Kirk : Assassination Details and Investigation
This is an active case that has not gone to trial, and the alleged text messages and Discords have not had their forensics cross-examined. Yet Grokipedia is already citing them as fact, not allegation. (What is considered the correct neutral way to report on alleged facts in active cases?)
> collected and curated by LLMs.
Wah? LLMs don't collect things.
I mean, if any of these AI companies want to open up all their training data as a searchable archive, I'd be all for it.
Because it's a genuinely good idea, and hopefully one for which the execution will be improved upon over time.
In theory, using LLMs to summarize knowledge could produce a less biased and more comprehensive output than human-written encyclopedias.
Whether Grokipedia will meet that challenge remains to be seen. But even if it doesn't, there's opportunity for other prospective encyclopedia generators to do so.
I don't see why an LLM would be better in theory. The Wikipedia process is created to manage bias. LLMs are created to repeat the input data, and will therefore be quite biased towards the training data.
Humans looking through sources, applying knowledge of print articles and real world experiences to sift through the data, that seems far more valuable.
> The Wikipedia process is created to manage bias. LLMs are created to repeat the input data, and will therefore be quite biased towards the training data.
The perception of bias in Wikipedia remains, and if LLMs can detect and correct for bias, then Grokipedia seems at least a theoretical win.
I'm happy with at least a set of links for further research on a topic of interest.
> The perception of bias in Wikipedia remains,
If there's a perception of bias, where is it coming from? It's clearly a perception born from the extreme political bias of those perceiving it. Addressing that sort of perception by changing the content means increasing bias.
Therefore the only logical route forward is to hash out incidences of perceived bias and address them, exposing them as the bias themselves.
I fail to imagine how putting Wikipedia in the hands of an ideologically captured mega-billionaire will help the fight against bias. The owner of Grokipedia has shown time and time again that he has no regard for truth, and likes to advertise the many false things he believes in.
The technology behind it doesn't matter. Show me the incentives and I'll tell you the results: Wikipedia is decentralized, Grokipedia has a single owner.
To use your terminology, the perception that Wikipedia is "ideologically captured" stands.
How so? Because the community collectively refuses to host antivax or climate denialism propaganda? You can find these subjects on there btw, just with a mention correctly labelling them as falsehoods.
I'm yet to see conservatives bring up a single subject that Wikipedia allegedly silences out of ideology, that is not an obviously false conspiracy theory. In this, Wikipedia may appear to have a left-wing bias, but only because the modern right has gotten so divorced from reality that not relaying their propaganda feels like bias against them.
Is there some objective standard for what is biased? For many people (including Elon Musk) biased just means something that they disagree with.
When grok says something factual that Elon doesn't like, he puts his thumb on the scale and changes how grok responds (see the whole South African white 'genocide' business). So why should we trust that an LLM will objectively detect bias, when the people in charge of training that LLM prefer that it regurgitate their preferred story, rather than what is objectively true?
> Is there some objective standard for what is biased?
Generally, no.
With a limited domain of verifiable facts, you could perhaps measure a degree of deviation from fact across different questions, though how you get a distance measure for not just one question but that meaningfully aggregates across multiple is slippery without getting into subjective areas. Constructing a measure of directionality would be even harder to do objectively, too.
Summarizing all the knowledge is very, very far from summarizing all that is written. If all it takes is being published, then the earth must be flat, disease is caused by bad morals, etc., etc.
Grokipedia vs. Wikipedia Jesus Entry https://espeed.dev/Grokipedia-vs.-Wikipedia-Jesus-Entry
I also asked ChatGPT and Claude: https://chatgpt.com/share/6902ef7b-96fc-800c-ab26-9f2a0304af...
https://claude.ai/share/3fb2aa34-316c-431e-ab64-0738dd84873e
I looked at Grokipedia today and spot-checked for references to my own publications which exist in Wikipedia. As is often reported, it very directly plagiarizes Wikipedia. But it did remove dead links. This is pretty underwhelming even on the Musk hype scale.
Grokipedia seems to serve no purpose to me. It's AI slop fossilized. Like if I wanted the AI opinion on something I would just ask the AI. Having it go through and generate static webpages for every topic under the sun seems pointless.
Grokipedia is a joke. Lot of articles I've checked are AI slop at its worst and at the bottom it says "The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License."
Interesting that only now I'm learning about Grokipedia. Never heard of it until someone said it's bad so my natural instinct is to check it out.
Guess that's plus one for "it doesn't matter what they say as long as they say."
I mean it only came out this week. So you heard about it immediately on launch.
Dead Internet Theory is no longer a theory huh?
I don’t really know who Tim Bray is and until now I had never been to Grokipedia. I don’t really like Grok - I tried Superheavy and it was slow, bloated and no better than Claude Opus.
But I have a bad habit of fact checking. It’s the engineer in me. You tell me something, I instinctively verify. In the linked article, sub-section, ‘References’, Mr. Bray opines about a reference not directly relating to the content cited. So I went to Grokipedia for the first time ever and checked.
Mr. Bray’s quote of a quote he says he couldn’t find is misleading. The sentence on Grokipedia includes two references, of which he includes only the first. This first reference relates to his work with the FTC. The second part of the sentence relates to the second reference. Specifically, in the linked Tim Bray article on Grokipedia, reference number 50, paragraph 756, cleanly addresses the issue raised by Mr. Bray.
After that I stopped reading, still don’t know or care who Tim Bray is and don’t plan on using either Grokipedia or Grok in the near future.
Perhaps Mr. Bray did not fully explore the references, or perhaps there was malice. I don’t know. Horseshoe theory applies. Pure pro- positions and pure anti- positions are idiotic and should be filtered accordingly. Filter thusly applied.
If you're going to go through the trouble of checking, you might as well link to the things you checked.
Sure.
Tim Bray’s Grokipedia: https://grokipedia.com/page/Tim_Bray
Relevant text: Serving as the FTC's infrastructure expert, he testified on technical aspects such as service speed and user perceptions of responsiveness, assessing potential competitive harms from reduced incentives for innovation post-acquisition; his declaration, referenced in court filings, emphasized empirical metrics over speculative harms.[49][50]
[49] https://www.ftc.gov/system/files/ftc_gov/pdf/FTCReplytoMetaR...
[50] https://dpo-india.com/Resources/USA_Court_Judgements_Against...
Paragraph 756: Tim Bray, the FTC’s proffered infrastructure expert, opined that “[u]sers’ perceptions of how quickly an online product responds to requests is an important component of the quality of their experience,” and that the delay between a user request and an online product’s response is commonly referred to as latency. Ex. 288 at ¶ 98 (Bray Rep.). Mr. Krieger testified that Instagram saw a “significant latency reduction post-Instagration,” a term referring to Instagram’s migration to Meta’s data servers. Ex. 153 at 76:24-77:5, 287:3-20 (Krieger Dep. Tr.). He prepared a presentation in 2014 stating that there was a “75% latency reduction in our core ‘hot path’ in rendering feeds” after the integration.
Wikipedia is a great educational resource and one I've donated to for over a decade. That said, I like the idea of Grokipedia in the sense that it's another potential source I can look at for more information and get multiple perspectives. If there's anything factual in Grokipedia that Wikipedia is missing, Wikipedia can be updated to include it
I hope we can keep growing freely available sources of information. Even if some of that information is incorrect or flat out manipulative. This isn't anything new. It's what the web has always been
It is a disinformation project aimed at morons and morally bankrupt monsters, powered and funded by one of history’s bloodiest mass murderers. Not sure why this takes four pages to investigate.
So, how often does it awkwardly bring up white genocide in South Africa in unrelated contexts?
Grokipedia might have a better present-tense understanding as it hoovers up data.
One great feature of Wikipedia is being able to download it and query a local snapshot.
As a technical matter, Grokipedia could do something like that eventually. It does not appear to support snapshots in the 0.1 version.
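For what it's worth, the Wikipedia side of that workflow is already easy to script against. Here's a minimal sketch of querying a local dump, assuming you've downloaded one of the official pages-articles .xml.bz2 files from dumps.wikimedia.org (the file name, namespace string, and search term are placeholders, and the export namespace varies by dump version):

    import bz2
    import xml.etree.ElementTree as ET

    DUMP = "enwiki-latest-pages-articles.xml.bz2"        # placeholder file name
    NS = "{http://www.mediawiki.org/xml/export-0.11/}"   # check your dump's namespace

    def titles_matching(term, limit=10):
        # Stream the compressed dump and yield page titles containing `term`.
        found = 0
        with bz2.open(DUMP, "rb") as f:
            for _, elem in ET.iterparse(f):
                if elem.tag == NS + "title" and term.lower() in (elem.text or "").lower():
                    yield elem.text
                    found += 1
                    if found >= limit:
                        return
                elem.clear()  # keep memory use flat while streaming

    for title in titles_matching("Tim Bray"):
        print(title)

Nothing fancy, but that's the kind of offline querying Grokipedia doesn't offer yet.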
On the other hand, I click on a Wikipedia article and I'm immediately bombarded with "[blank] is an alt-right neo-nazi fascist authoritarian homophobic transphobic bigoted conspiracy theory (Source: PLEASE PLEASE PLEASE HATE THIS TOPIC I BEG YOU)"
At least Grokipedia tries to look like it was written with the intent to inform, not spoonfeed an opinion.
> Woke/Anti-Woke · The whole point, one gathers, is to provide an antidote to Wikipedia’s alleged woke bias
According to the Manhattan Institute, as cited by The Economist, even Grok has a leftward bias (roughly on par with all the other big models).
https://www.economist.com/international/2025/08/28/donald-tr...
> According to the Manhattan Institute, as cited by The Economist, even Grok has a leftward bias (roughly on par with all the other big models).
When you are far enough to the right, everything has a left bias, and even the degrees become hard to distinguish.
These hot takes are somewhat useless, honestly. People give these point-in-time opinions while ignoring that the rate of improvement in software is exponential. The last three or four years of heavy AI utilization have been refreshing.
I personally treat these things the same way I treat car accidents: if an autonomous system still has accidents but fewer than human drivers do, it's a success. Given the amount of nonsense and factually incorrect things people spout, I'd still call Grok, even at this early stage, a major success.
Also, I'm a big fan of how it ties nuanced details together to present a more comprehensive story. I read both TBray's Wiki and Groki entries. The Groki version has some solid info, which I suppose I should expect of an AI that can pull in a larger corpus of data. A human editor would of course omit or change that, and then Wiki admins would have to lock the page as the changes erupt into a silly flame war over what's factually accurate. Because we can't seem to agree.
Anyway - good stuff! Looking forward to more of Grok. Very fitting name, actually.
Grokipedia is VERY rough to read at the moment, and has a clear pro-capitalist / 'classical right wing' bias (reading the economic pages).
However it's still 0.1, we'll see what the v1 will look like.
At a glance, Grokipedia seems quite promising to me, considering how new it is. There are plenty of external citations, so rather than relying on a model to recall information internally, it’s likely effectively just summarizing external references. The fact that it’s automatically generated at scale means it can be iterated on to improve fact checking reliability, exclude certain known sources as unreliable, and ensure it has up-to-date and valid citation links. We’ll have to wait and see how it changes over time, but I expect an AI driven online encyclopedia to eventually replace the need for a fully human wikipedia.
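On the "valid citation links" point specifically, that part at least is mechanically checkable. A rough sketch using only the Python standard library (the URL list is a placeholder, not anything Grokipedia actually exposes):

    import urllib.request
    import urllib.error

    # Placeholder citations; in practice you'd extract these from an article.
    citations = [
        "https://grokipedia.com/page/Tim_Bray",
        "https://www.ftc.gov/",
    ]

    def link_status(url, timeout=10):
        # Return the HTTP status code, or the error reason if the link is dead.
        req = urllib.request.Request(url, method="HEAD",
                                     headers={"User-Agent": "citation-checker/0.1"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status
        except urllib.error.HTTPError as e:
            return e.code
        except urllib.error.URLError as e:
            return str(e.reason)

    for url in citations:
        print(link_status(url), url)

Whether the text faithfully reflects what's behind those links is the harder problem, and that's the part people are actually disputing.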
> There are plenty of external citations, so rather than relying on a model to recall information internally, it’s likely effectively just summarizing external references.
And according to Tim Bray, it's doing that badly.
> All the references are just URLs and at least some of them entirely fail to support the text.
It’s the first release, so I expect it to get better over time. I didn’t care for ChatGPT when it was first released and thought it wouldn’t be trustworthy, but it’s much better now.
Why release an untrustworthy encyclopedia at all? It doesn’t appear to be better than the existing options in any way and worse in many.
We may joke about it, but the fact is that it's by releasing dumb ideas like this that you sometimes get masterpieces. Maybe this one really is just one of the bad ones, but eventually Elon will have some good ones, just like he already has.
And a lot of us would be better off releasing our dumb ideas too. The world has a lot of issues, and it doesn't help if all you do is talk things down without trying to fix anything yourself. Maybe it's time to get off the web a little and do something else.
> “Maybe it's time to get off the web a little and do something else.”
One wishes Musk would take this advice: leave the web alone, forget for a few months about the social media popularity contest that seems to occupy his mind 24/7, and focus on rekindling his passion for rockets or roadsters or whatever middle-aged pursuit comes next.
Not even being snarky, but I can't recall a single good idea he had.
He stole a lot of good ideas, does that count?
Perhaps I'm working off a false narrative, but SpaceX and the methodology it uses (simplify everything and fail fast) seem to come directly from Elon.
He's a horrible human being but has had a couple worthy ideas.
I initially believed his early videos about how he applies the scientific process, with a spreadsheet of the BOM, optimising for specific questions and failing early and all that.
Given his later attitude toward careful thought, I'm no longer under the impression that these earlier expositions were his ideas at all. I suspect he got them from the engineers and used them to burnish his image. I know that certain companies, e.g. Apple, Dyson, etc., have a culture of "all ideas came from the big man at the top, no matter who thought of it."
he bought them.
I know the world sucks, but "fuck it, let's make it worse" is a tough sell for anybody not already onboard. You're better off just doing it, rather than trying to convince others to also do it.
Which of Elon’s dumb ideas are masterpieces?
In this day and age, how anyone can look at the man who did two open Nazi salutes and think he's still Tony Stark is... delusional at best.
The ADL, the left-wing Jewish human rights group not aligned with Musk in the slightest, concluded that Musk's gesture was merely an awkward salutation, not a Nazi sieg heil[0].
[0]: https://www.jta.org/2025/01/21/politics/how-did-the-adl-conc...
The left wing Jewish human rights group isn't the arbiter of what a nazi salute is. Actual Nazis around the world took it as a nod towards their ideology, and he's desperately trying to start a civil war in the UK, so I would say it walks like a duck and it quacks like a duck.
Believe it or not, I (not white, did not grow up in the West, hadn't the faintest clue about Nazism) used to do what you would consider a "Nazi salute" when I'd see friends and wave to them from a distance. I don't know how I picked that up, but it happened.
I'm not saying that Musk is doing the same; but that one can be charitable and say he probably did not mean that. I mean, what does he stand to gain from doing so? He's a businessman.
At the time he was acting as one of the largest political activists and donors for the Republican party, not as a businessman.
> I mean, what does he stand to gain from doing so? He's a businessman.
I can only guess at his motives, but the salute is not an isolated case. Steve Bannon has given the same salute multiple times, so it seems coordinated.
Musk has tweeted “Only AfD can save Germany”. Björn Höcke, one of AfD's most prominent leaders, has been convicted of knowingly using a banned Nazi slogan. The German domestic intelligence agency, the BfV, classifies AfD as a far-right extremist organization with anti-democratic ideals (“proven far-right extremist entity”).
Musk also tweeted “Free Tommy Robinson”, a UK far-right extremist activist and convicted criminal.
Musk has a history of supporting people and organizations that most other businessmen would not.
> hadn't the faintest clue about Nazism
Whew lad. This tells me all we need to know. "I don't know nothing but folks gotta listen to my opinion!"