Grokipedia Is the Antithesis of Everything That Makes Wikipedia Good, Useful, and Human


Grokipedia is not a 'Wikipedia competitor.' It is a fully robotic regurgitation machine designed to protect the ego of the world’s wealthiest man.

I woke up restless and kind of hungover Sunday morning at 6 am and opened Reddit. Somewhere near the top was a post called “TIL in 2002 a cave diver committed suicide by stabbing himself during a cave diving trip near Split, Croatia. Due to the nature of his death, it was initially investigated as a homicide, but it was later revealed that he had done it while lost in the underwater cave to avoid the pain of drowning.” The post linked to a Wikipedia page called “List of unusual deaths in the 21st century.” I spent the next two hours falling into a Wikipedia rabbit hole, clicking through all manner of horrifying and difficult-to-imagine ways to die.

A day later, I saw that Depths of Wikipedia, the incredible social media account run by Annie Rauwerda, had noted the entirely unsurprising fact that, behind the scenes, there had been robust conversation and debate by Wikipedia editors as to exactly what constitutes an “unusual” death, and that several previously listed “unusual” deaths had been deleted from the list for not being weird enough. For example: People who had been speared to death with beach umbrellas are “no longer an unusual or unique occurrence”; “hippos are extremely dangerous and very aggressive and there is nothing unusual about hippos killing people”; “mysterious circumstances doesn’t mean her death itself was unusual.” These are the types of edits and conversations that have collectively happened billions of times that make Wikipedia what it is, and which make it so human, so interesting, so useful.

"recently discovered that wikipedia volunteers have a hilariously high bar for what constitutes 'unusual death'"
depths of wikipedia (@depthsofwikipedia.bsky.social), October 27, 2025


Wednesday, as part of his ongoing war against Wikipedia because he does not like his page, Elon Musk launched Grokipedia, a fully AI-generated “encyclopedia” that serves no one and nothing other than the ego of the world’s richest man. As others have already pointed out, Grokipedia seeks to be a right-wing, anti-woke Wikipedia competitor. But to even call it a Wikipedia competitor is to give the half-assed project too much credit. It is not a Wikipedia “competitor” at all. It is a fully robotic, heartless regurgitation machine that cynically and indiscriminately sucks up the work of humanity to serve the interests, protect the ego, amplify the viewpoints, and further enrich the world’s wealthiest man. It is a totem of what Wikipedia could and would become if you were to strip all the humans out and hand it over to a robot; in that sense, Grokipedia is a useful warning, given the constant pressure from AI slop purveyors to push AI-generated content into Wikipedia. And it is only getting attention, of course, because Elon Musk does represent an actual threat to Wikipedia through his political power, wealth, and obsession with the website, as well as the fact that he owns a huge social media platform.

One need only spend a few minutes clicking around the launch version of Grokipedia to understand that it lacks the human touch that makes Wikipedia such a valuable resource. Besides often having a conservative slant and the general hallmarks of AI writing, Grokipedia pages are overly long, poorly and confusingly organized, lack internal links and photos, and are generally not written in a way that makes any sense. There is zero insight into how any of the articles were generated or how information was obtained and ordered; there is no record of edits, no version history, nothing. Grokipedia is, literally, simply a single black-box LLM’s version of an encyclopedia. There is a reason Wikipedia editors are called “editors,” and it’s because writing a useful encyclopedia entry does not mean “putting down random facts in no discernible order.” To use an example I noticed from simply clicking around: The list of “notable people” in the Grokipedia entry for Baltimore begins with a disordered list of recent mayors, perhaps the least interesting, lowest-hanging-fruit type of data scraping that could be done about a place.

On even the lowest-stakes Wikipedia pages, real humans with real taste and real thoughts and real perspectives discuss and debate the types of information that should be included in any given article, in what order it should be presented, and the specific language that should be used. They do this under a framework of byzantine rules that have been battle-tested and debated through millions of edit wars, virtual community meetings, talk page discussions, conference meetings, and inscrutable listservs, all of which have themselves been informed by Wikimedia’s “mission statement,” the “Wikimedia values,” its “founding principles” and policies and guidelines and tons of other stated and unstated rules, norms, processes, and procedures. All of this behind-the-scenes legwork is essentially invisible to the user but is very serious business to the human editors building and protecting Wikipedia and its related projects (the high cultural barrier to entry for editors is also why it is difficult to find new editors for Wikipedia, and is something the Wikipedia community is always discussing how to fix without ruining the project). Any given Wikipedia page has been stress-tested by actual humans who are discussing, for example, whether it’s actually that unusual to get speared to death by a beach umbrella.

Grokipedia, meanwhile, looks like what you would get if you told an LLM to go make an anti-woke encyclopedia, which is essentially exactly what Elon Musk did.

As LLMs tend to do, some pages on Grokipedia leak part of their instructions. For example, the Grokipedia page on “Spanish Wikipedia” notes “Wait, no, can’t cite Wiki,” indicating that Grokipedia has been programmed not to link to Wikipedia. That entry does cite Wikimedia pages anyway, but in the “sources,” those pages are not actually hyperlinked.

I have no doubt that Grokipedia will fail, like other attempts to “compete” with Wikipedia or build an “alternative” to Wikipedia, the likes of which no one has heard of because the attempts were all so laughable and so poorly participated in that they died almost immediately. Grokipedia isn’t really a competitor at all, because it is everything that Wikipedia is not: It is not an encyclopedia, it is not transparent, it is not human, it is not a nonprofit, it is not collaborative or crowdsourced; in fact, it is not really edited at all. It is true that Wikipedia is under attack from powerful political figures, the proliferation of AI, and related structural changes to discoverability and linking on the internet, like AI summaries and knowledge panels. But Wikipedia has proven itself to be incredibly resilient because it is a project that specifically leans into the shared wisdom and collaboration of humanity, our shared weirdness and ways of processing information. That is something an LLM will never be able to compete with.


Wikipedia Pauses AI-Generated Summaries After Editor Backlash


The Wikimedia Foundation, the nonprofit organization which hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the community of Wikipedia editors.

“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to Wikimedia Foundation’s announcement that it will launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”

Two other editors simply commented, “Yuck.”

For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom, as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. As opposed to Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

In one experiment where summaries were enabled for users who have the Wikipedia browser extension installed, the generated summary showed up at the top of the article, which users had to click to expand and read. That summary was also flagged with a yellow “unverified” label.
[Image: An example of what the AI-generated summary looked like.]
Wikimedia announced on June 2 that it was going to run the generated summaries experiment, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.

“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”

A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.

“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”

“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump technical,” is where the Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”

The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”
