4/20/2026 at 4:20:47 PM
Gah, the writing on this is so painful to read; it feels like it was most likely written by an LLM. The style is so unclear that it's hard to figure out one of the key points: it mentions that Gemini doesn't use a distinct user-agent for its grounding, but it never says whether Gemini actually hit the endpoint during the test. It kind of implies that with "Silence from Google is not evidence of no fetch." Uh, if no requests are coming in live, that means no fetch; it's using a cache of your site.
It makes a difference whether it fetches a page live or uses a cached copy from a previous crawl; that tells you how up-to-date the answers will be when people ask Gemini questions about your website. But I guess the LLM writing this article just wanted to make things sound punchy and impressive, not actually communicate useful information.
Anyhow, LLM marketing spam from an LLM marketing spam company. Bleh.
by lambda
4/20/2026 at 5:54:09 PM
I haven't seen an LLM write this poorly yet (at least not passed off as good writing). This reads more like a person who used AI to organize things but didn't want it to seem AI-written, so they rewrote it themselves. I think the problem here is just a genuinely unskilled author, likely not a native English speaker, judging by some of the awkward phrasing.
by stronglikedan
4/20/2026 at 4:35:40 PM
I had to quit after a couple of paragraphs; I can't read such AI slop anymore :(
by anygivnthursday
4/20/2026 at 4:38:13 PM
I did use AI to organize my ideas, but I didn't think it was that bad; I'll revise it and make it easier to read. Anyway, in my test I saw zero requests from any Google UA after multiple Gemini and AI Mode prompts that should have triggered grounding, so the working interpretation is that Gemini served from its own index/cache rather than doing a live provider-side fetch. The original phrasing was fuzzier than it should have been.
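For what it's worth, the log check described above can be sketched roughly like this: scan access-log lines for known Google crawler/fetcher user-agent substrings and see if anything turns up during the test window. The sample log lines and the exact UA marker list here are illustrative assumptions, not values from the original test.

```python
# Illustrative sketch: flag access-log lines whose user-agent mentions
# a known Google crawler/fetcher token. Marker list and sample log
# lines are assumptions for demonstration, not data from the test.

GOOGLE_UA_MARKERS = [
    "Googlebot",        # classic search crawler
    "GoogleOther",      # generic Google fetcher
    "Google-Extended",  # AI-related control token
]

def google_hits(log_lines):
    """Return the log lines whose UA field mentions any Google marker."""
    return [line for line in log_lines
            if any(marker in line for marker in GOOGLE_UA_MARKERS)]

sample_log = [
    '1.2.3.4 - - [20/Apr/2026] "GET / HTTP/1.1" 200 '
    '"Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '5.6.7.8 - - [20/Apr/2026] "GET /post HTTP/1.1" 200 '
    '"Mozilla/5.0 (X11; Linux x86_64)"',
]

print(google_hits(sample_log))  # only the Googlebot line matches
```

An empty result over the test window is what "zero requests from any Google UA" means here; whether that proves a cache hit rather than a fetch via some unlisted UA is exactly the ambiguity being debated.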
by startages
4/20/2026 at 7:07:55 PM
If this weren't on HN I wouldn't have given this more than a few seconds of reading before switching away. Some examples of phrasing that trigger me:

> attributing hits was a grep, not a guess
> values below are copied from the probe's log file, not paraphrased
> a User-agent: Claude-User disallow is the live control
> Only Claude-User is the user-initiated retrieval signal
I could go on and on but I won't. Phrasing aside, the text is too structured with many sections and subsections when the intent was clearly more narrative. "I was curious about X and did Y and I am going to tell you about it."
Signals that suggest a human who cares would be: use of the first-person; demonstrated curiosity, humility, and uncertainty; inline hyperlinks; and any kind of personality or opinion.
"Idiolect" is both subtle and distinct: the choice of vocabulary, grammar, phrasing and colloquial metaphors will vary in kind and frequency for everyone like an intellectual signature. You can sometimes tell if someone has been reading too much of a particular author recently just because of the way the author's choice of vocabulary bleeds into their own speech patterns. Sometimes it's a permanent influence.
I wonder if reading so much LLM stuff lately has affected my idiolect and that I write (or worse, think) more machine-like than before...
by zenoprax
4/21/2026 at 10:32:20 AM
> I wonder if reading so much LLM stuff lately has affected my idiolect and that I write (or worse, think) more machine-like than before...

Totally off topic ofc, but I always get triggered by the claim that LLMs are "machine-like". I'm aware it's a total pet peeve and a lil irrational, but "machine-like" would imply to me that it's thinking like a machine, which in turn implies machine intelligence - which in turn implies they're doing something they aren't.
I'm not trying to undersell their capabilities. Used well, they're able to do a lot of things. But the way they achieve it is by mimicking human dialogue and rhetorical processes. That's, in my opinion, anything but machine intelligence. I struggle to find an applicable word for it, though.
by ffsm8
4/20/2026 at 4:57:35 PM
Sometimes when we point at the moon, people prefer to discuss the finger at length. Don't worry.
by realo
4/20/2026 at 5:08:36 PM
If you point six index fingers and a bifurcated thumb at the moon, then many people will worry.
by bigyabai
4/20/2026 at 7:27:10 PM
[flagged]
by worik