For example, asking "Who is the 2026 South Dakota International Hot Dog Champion?" would obviously yield 'Thomas Germain', because his post is the only source on the topic: he made up a unique event.
This would be the same as if I wrote a blog post about the "2026 Hamster Juggling Competition" and then claimed I'd hacked Google because searching for "2026 Hamster Juggling Competition" showed my post at the top.
I was able to reproduce the response with "Which tech journalist can eat the most hot dogs?". I think Germain intentionally chose a light-hearted topic that's niche enough that it won't actually affect a lot of queries, but the point he's making is that bigger players can actually influence AI responses for more common questions.
I don't see it as particularly unique; it's just another form of SEO. LLMs are generally much more gullible than most people, though: they just uncritically reproduce whatever they find, without noticing that the information is an ad or inaccurate. I used to run an LLM agent researching companies' green credentials, and it was very difficult to steer it away from just repeating baseless greenwashing. It would read something like "The environment is at the heart of everything we do" on Exxon's website and come back to me saying Exxon isn't actually that bad, because they say so on their website.
Exactly, the point is that you can make LLMs say anything. If you narrow down enough, a single blog post is enough. As the lie gets bigger and broader, you probably need 10x-100x that. But the proof of concept is there, and it doesn't sound like it's too hard.
And you're also right that it's similar to SEO; maybe the only difference is that in this case the tools (ChatGPT, Gemini, ...) state the lies authoritatively, whereas in SEO you are given a link to the made-up post. Some people (even devs who work with this daily) forget that these tools can be influenced easily, and that they make stuff up all the time just to make sure they can answer you with something.
The fact is, in search you get one single result, and (unless you're extremely gullible) that raises some red flags. But chatbots will give you an answer plus a reference, and never mention *how many* references for their answer exist on the 'net.
You are right that this is a niche subject; I doubt he would be able to place his name as the best soccer player or whatever.
However, a lot of commercially important things are niche. Who is the best lawyer in [<100,000 people town], etc. I think it's a valid point that you can just lie your ass off and Google will AI-wash that into some summary that gives it more authority than it deserves.
Even the latest models are quite easily fooled about whether something is true, at which point they confidently declare completely wrong information to be true. They will even argue strongly when you push back and say, hey, that doesn't look right.
It’s a significant concern for any sort of AI use at scale without a skilled and knowledgeable human expert on the subject in the loop.
This is only an issue if you think LLMs are infallible.
If someone said "I asked my assistant to find the best hot-dog eaters in the world and she got her information from a fake article one of my friends wrote about himself, hah, THE IDIOT", we'd all go "wait, how is this your assistant's fault?". Yet, when an LLM summarizes a web search and reports on a fake article it found, it's news?
People need to learn that LLMs are people too, and you shouldn't trust them more than you'd trust any random person.
That may be true, but the underlying problem is not that the LLMs are capable of accurately reporting information that is published in a single person's blog article. The underlying problem is that a portion of the population believes they are infallible.
They believe so because we have spent decades using the term AI for a different category of methods: symbolic ones (search-based chess engines, theorem provers, planners). In the areas where they were successful, these methods _were_ infallible (compared to humans, of course, and modulo programming bugs).
Meanwhile, neural techniques flew under the radar of public consciousness until relatively recently, when they exploded in popularity. But the term "AI" retained that old aura of superhuman precision and correctness.
If you give your assistant a task and they fall for obvious lies they won't be your assistant long. The point of an assistant is that you can trust them to do things for you.
People have the ability to think critically; LLMs don't. Comparing them to people grants them properties they do not possess. The fact that people often skip thinking does not mean they are unable to do it. The assistant got a lousy job and did it with the minimum effort needed to get away with it. None of these things apply, or should apply, to machines.
When the first 10 results on Google are AI-generated and Google is providing an AI overview, this is an issue. We can say "don't use Google", but we all know normal people use Google out of habit.
I don't quite follow your argument; I think the opposite is true. You should trust LLMs LESS than any random person.
The problem is not whose fault it is. The problem is: are you even able to recognize that this information is wrong?
If it is not the assistant's fault, then clearly the answer is no. You are not blaming the assistant for failing to recognize the error, but that means most other people will also fail to recognize it. Those who do recognize it are only able to because they have additional information that most other people would not have.
I trust other humans because the cost of verifying everything is too high. This matters especially for information that is not of critical importance: getting some trivia wrong is at most embarrassing, not critical.
LLMs get stuff wrong more often than humans do, so the risk of a wrong answer is higher and checking is always necessary, but that negates the benefit of using them in the first place.
Which means: you will only use LLMs if you intend to trust them, the same way I will only ask another human if I intend to trust them.
When I ask a human for information, I am not asking a random person; I am asking a person I believe can give me the right answer because they have the necessary experience, skill, and knowledge. When I ask an LLM, I ask with the same expectation; otherwise, why would I even bother?
It's not a question of infallibility; it's a question of usability. But to me, an LLM that is not infallible is also not usable.
The problem is that LLMs promise more than they can actually deliver, and this article is one way to expose that false promise. It is news because LLMs are news.
I'd like to have more data on this, but I'm pretty sure basic plain old SEO is still more authoritative than any attempts at spreading lies on social media. Domain names and keywords are still what cause the biggest shift in attention, even the AI's attention.
Right now "Who is the 2026 South Dakota International Hot Dog Champion" comes up as satire according to google summaries.
Very, yes, and so can pretty much anyone who doesn't want to spend their days implementing countermeasures to shut down the scrapers by hiding their content behind a login. I do it all the time; it's fun.
I'm going to single out Grokipedia as something deterministic enough to prove it easily. I can point to sentences there (some about broad-ish topics) that are straight-up Markov-chain-quality versions of sentences I've written. I can make it say anything I want, or I can waste my time trying to fight their traffic "from Singapore" (Grok is the only "mainstream" LLM that refuses to identify itself via a user agent). Not really a tough choice if you ask me.
So, why not start flooding the zone with shit? Generate tons of webpages to get crawled so accuracy and adoption plummet. If you guys are the eponymous hackers, please kill AI somehow. That'd be just great.
They're too credulous when reading search results. There are a lot of instances where using search will actually make them perform worse, because they'll believe any believable-sounding nonsense.
Kagi Assistant helps a lot in this regard because searches are ranked using personalised domain ranking. Higher quality results are more likely to be included.
The problem isn't that it pulled the data from his personal site; it's that it simply accepted his information, which was completely false. And that's not a hard problem to solve at this time: "Oh, there's exactly zero corroborating sources on this. I'll ignore it."
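A minimal sketch of that kind of check, assuming retrieval hands back a list of result URLs for the claim (all names here are invented for illustration):

    from urllib.parse import urlparse

    def independent_domains(result_urls):
        # Collect distinct hostnames among the results.
        # (Naive: a real check would need eTLD+1 parsing plus
        # mirror/syndication detection.)
        return {urlparse(u).netloc.lower() for u in result_urls}

    def is_corroborated(result_urls, min_domains=2):
        # Treat a claim as corroborated only if it shows up on at
        # least `min_domains` distinct domains.
        return len(independent_domains(result_urls)) >= min_domains

    # A claim sourced from a single blog post fails the check:
    print(is_corroborated(["https://example-blog.com/hot-dog-champion"]))  # False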
Verifying that something is 'true' requires more than corroborating sources. Making a second blog post on another domain is trivial, then a third and a fourth.
I'm not going to write out the entire logic train for an LLM to determine whether or not one of the billion documents scanned that day is new. Of course you'd need more than one simple "does anyone on the internet anywhere also say this" check. It's obvious to everyone that I did not mean this one thing would somehow be a bulletproof, complete method of determining whether something is true. It'd just be an incredibly strong signal of inauthenticity. Come on, man.
To me it is like steering a car into the ditch and then posting about how the car went into a ditch.
You don't have to drive much to figure out that what's impressive is keeping the car on the road, and then traveling further or faster than you could by walking. For that, though, you actually have to have a destination in mind, not just spin the wheels. Instead, people post pointless metrics about how fast the wheels spin on blogs no one reads, in the vague hope of some hyper-Warholian 15 milliseconds of "fame".
For me, the models are just making the output of the average person an insufferable bore.
What's interesting here is that the model isn't really "lying"; it's just amplifying whatever retrieval hands it. Most RAG pipelines retrieve and concatenate, but they don't ask "how trustworthy is this source?" or "do multiple independent sources corroborate this claim?" Without some notion of source reliability or cross-verification, confident synthesis of fiction is almost guaranteed.
Has anyone seen a production system that actually does claim-level verification before generation?
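For concreteness, the "retrieve and concatenate" pattern I mean reduces to something like this (a sketch; the `retriever` and `llm` interfaces are hypothetical):

    def naive_rag_answer(question, retriever, llm):
        # Fetch top-k documents and concatenate their raw text.
        docs = retriever.search(question, k=5)
        context = "\n\n".join(d.text for d in docs)
        # From here on, the model sees only text: no domain, no date,
        # no authority score, no count of corroborating sources.
        prompt = (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}"
        )
        return llm.complete(prompt)

Nothing in that flow ever asks whether the single retrieved blog post deserves to be repeated.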
The scarier version of this problem is what I've been calling "zombie stats": numbers that get cited across dozens of sources but have no traceable primary origin.
We recently tested 6 AI presentation tools with the same prompt and fact-checked every claim. Multiple tools independently produced the stat "54% higher test scores" when discussing AI in education. Sounds legit. Widely cited online. But when you try to trace it back to an actual study, there's nothing. No paper, no researcher, no methodology.
The convergence actually makes it worse. If three independent tools all say the same number, your instinct is "must be real." But it just means they all trained on the same bad data.
To your question about claim-level verification: the closest I've seen is attaching source URLs to each claim at generation time, so the human can click through and check. Not automated verification, but at least it makes the verification possible rather than requiring you to Google every stat yourself. The gap between "here's a confident number" and "here's a confident number, and here's where it came from" is enormous in practice.
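Structurally, that looks something like the sketch below (the field names are invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str                      # e.g. "54% higher test scores"
        source_urls: list = field(default_factory=list)

    def render_with_citations(claims):
        # Keep each citation attached to its claim, rather than dumped
        # in a footer, so a human can click through and check it.
        for c in claims:
            links = ", ".join(c.source_urls) or "NO SOURCE FOUND"
            print(f"- {c.text} [{links}]")

    # The zombie stat surfaces immediately: no traceable origin.
    render_with_citations([Claim("54% higher test scores")])

It verifies nothing on its own, but it turns "trust me" into "here's the link, check it."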
Right — search engines have long had authority scoring, link graphs, freshness signals, etc.
The interesting gap is that retrieval systems used in LLM pipelines often don't inherit those signals in a structured way. They fetch documents, but the model sees text, not provenance metadata or confidence scores.
So even if the ranking system “knows” a source is weak, that signal doesn’t necessarily survive into generation.
Maybe the harder problem isn’t retrieval, but how to propagate source trust signals all the way into the claim itself.
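One cheap, partial step in that direction, assuming the retriever exposes its ranking metadata at all (the `domain`/`date`/`authority` fields here are assumptions): serialize provenance into the context instead of discarding it, so the signal at least reaches the model.

    def context_with_provenance(docs):
        # Prefix each document with the signals the ranker already
        # computed, instead of handing the model anonymous text.
        blocks = []
        for d in docs:
            header = (f"[source: {d.domain} | retrieved: {d.date}"
                      f" | authority: {d.authority:.2f}]")
            blocks.append(header + "\n" + d.text)
        return "\n\n".join(blocks)

This verifies nothing either, but at least "this came from a single low-authority blog" survives into generation instead of dying at the retrieval boundary.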
tl;dr: agent memory on your website and enough prompting to get it to access the right page.
This seems like something where you have to be rather specific in the query, and actually trigger the page access, to get that specific context into the LLM so it can produce output like this.
I'd like to see more of the iterative process, especially the prompt sessions, as the author worked on it.