4/15/2026 at 12:11:40 PM
Is it me, or does the article sound like LLM output? The pattern "It's not mere X — it's Y" occurs like 4 times in the text :v
by temp7000
4/15/2026 at 2:18:42 PM
I can't believe you'd impugn the high moral standards of "gizmoweek dot com".
by Andrex
4/15/2026 at 2:07:13 PM
I don't care if it's written by an LLM. The problem with the article is the complete lack of detail. No benchmarks for the iPhone-capable models. No details whatsoever.
Human or LLM - the article is a whole lot of nothing.
by BeetleB
4/15/2026 at 2:13:55 PM
Funnily enough, to me these aphorisms (?) sound almost like the replicant test in Blade Runner. Like these are the unit bit of "nudging".
by doliveira
4/15/2026 at 6:56:36 PM
LLM, recite your baseline: "It's not just X – it's Y." Slop. "You're absolutely right!" Slop. "And this is key –" Slop. "This is a nuanced topic." Slop.
by nozzlegear
4/16/2026 at 7:29:43 PM
The problem is not authorship. It's the lack of substance.
by nextaccountic
4/17/2026 at 8:32:37 AM
This is just prompting an LLM and dumping the output on the site (which is clearly what is happening here; all the articles show the same signs of AI output: no human writing, no style, as far as I can tell). If this is the level of care that goes into news articles, then we're doomed. What will ultimately happen is that AI summarizes AI articles, which were summarized from another AI article, which was summarized from yet another AI article, .. and after enough rewriting all the facts will be gone from the articles. I don't care to read this slop, and I'm shocked people are so readily accepting this new state of affairs.
by dax_
4/15/2026 at 3:32:41 PM
This article is all fluff because it's really just marketing. If they mentioned that a 4B model on an iPhone 16 drains 15% of the battery on a single long prompt and triggers hard thermal throttling after 20 seconds, nobody would be clicking on headlines about "commercial viability", fwiw
by veunes
4/15/2026 at 3:57:43 PM
I ran several Gemma 4 quants on my 24 GB Mac mini, and with proper context-size tuning they're quick enough, I guess, but I would really love to see them working well on an iPhone with 2-3 GB of RAM...
by Domenic_S
4/15/2026 at 1:06:22 PM
Ran it through Claude, Grok, whatever... For me, they all flagged issues (no sources, punchy phrases with repetition, ...) with these content farms. My favorite: they couldn't even prove the author is a real person. They all found no record!
by caminante
4/15/2026 at 1:16:50 PM
As someone said, we live in a strange but amazing era: although it has never been easier to be deceived, it's _also_ much easier to uncover said deception, especially on the internet.
by itissid
4/15/2026 at 2:21:38 PM
Or at least think you've uncovered deception. It's not clear to me yet that any of these "AI detectors" are reliable, and if they are, it's just an arms race.
by ryandvm
4/15/2026 at 2:18:32 PM
It's much faster and simpler to assume everything on the internet is crooked.
by walthamstow
4/15/2026 at 12:13:28 PM
> :v
I guess I found the millennial. I haven't seen that in so long!
by figmert
4/15/2026 at 12:30:18 PM
:<
by Den_VR
4/15/2026 at 1:38:59 PM
:')
by neals
4/15/2026 at 2:19:02 PM
>_>
by Andrex
4/15/2026 at 3:47:37 PM
\o/
by xiconfjs
4/15/2026 at 2:13:58 PM
Analog emojis FTW
by yangm97
4/17/2026 at 6:37:23 PM
Neither analog nor emojis. An analog emoji would just be a picture printed on paper.
by Gormo
4/18/2026 at 1:12:46 PM
¬_¬
by yangm97
4/15/2026 at 7:35:02 PM
(╯°□°)╯︵ ┻━┻
by mannycalavera42
4/15/2026 at 3:47:09 PM
¯\_(ツ)_/¯
by Melatonic
4/15/2026 at 3:20:59 PM
It is like the AI is training us to avoid certain language patterns. I rebel at the hostage-taking of weak language: for strong language is next.
by altruios
4/15/2026 at 3:44:53 PM
The mighty semicolon prepares for its return!
by Melatonic
4/15/2026 at 12:19:47 PM
An AI slop pattern so widespread it’s now referred to as “it’s not pee pee it’s poo poo”.
by mtremsal
4/15/2026 at 3:41:42 PM
It's not just a widespread pattern –––––––––––––––– it's a sign of things to come.
by lynndotpy
4/15/2026 at 3:58:43 PM
You didn't just nail it ------------ you cut to the core of the issue.
by Domenic_S
4/15/2026 at 2:14:50 PM
I haven't heard that—that's good.
by Cider9986
4/15/2026 at 4:19:38 PM
It does in fact sound like LLM output.
by odo1242
4/15/2026 at 4:03:28 PM
Smells like slop to me; looks like the site exists solely to garner search hits.
by wtyvn
4/15/2026 at 12:13:55 PM
You would be correct. Ran the article through GPTZero: 100% AI.
by kbouw
4/15/2026 at 12:45:13 PM
These detectors are a scam falsely flagging non-native English speakers: https://plagiarismcheckerai.app/ai-detector-false-positives-...
At this point, relying on their judgement is beyond folly.
by subscribed
4/15/2026 at 1:18:28 PM
It's both ironic and confusing that this website itself promotes an AI detector.
by cubefox
4/17/2026 at 9:13:57 AM
Yeah, I admit I lazily chose one of the first results reporting on this study instead of the best one, so the irony is not lost on me. Sorry for making you snort and shake your head in amusement :D
by subscribed
4/15/2026 at 12:23:54 PM
https://redd.it/13mft8sby xd1936
4/15/2026 at 1:07:22 PM
User-friendly old Reddit link: https://old.reddit.com/r/ChatGPT/comments/13mft8s/apparently...
by rationalist
4/15/2026 at 12:22:19 PM
Would not trust any of these tools in the slightest.
by 71bw
4/15/2026 at 12:35:01 PM
AI detectors that use text as a basis are not real. It is fundamentally impossible for them to exist.
by devmor
4/15/2026 at 1:40:29 PM
Huh? LLM output doesn't have the variety of human output, since they operate in a fixed fashion: statistical inference followed by formulaic sampling.
Additionally, the statistics used by LLMs are going to be similar across different LLMs, since at scale it's just "the statistics of the internet".
Human output has much more variety, partly because we're individuals with our own reading/writing histories (which we're drawing upon when writing), and partly because we're not so formulaic in the way we generate. Individuals have their own writing styles and vocabulary, and one can identify specific authors to a reasonable degree of accuracy based on this.
It's a bit like detecting cheating in a chess tournament. If an unusually high percentage of a player's moves are optimal computer moves, then there is a high likelihood that they were computer generated. Computers and humans don't pick moves in the same way, and humans don't have the computational power to always find "optimal" moves.
Similarly with the "AI detectors" used to detect if kids are using AI to write their homework essays, or to detect if blog posts are AI generated ... if an unusually high percentage of words are predictable by what came before (the way LLMs work), and if those statistics match that of an LLM, then there is an extremely high chance that it was written by an LLM.
Can you ever be 100% sure? Maybe not, but in reality human written text is never going to have such statistical regularity, and such an LLM statistical signature, that an AI detector gives it more than a 10-20% confidence of being AI, so when the detector says it's 80%+ confident something was AI generated, that effectively means 100%. There is of course also content that is part human part AI (human used LLM to fix up their writing), which may score somewhere in the middle.
by HarHarVeryFunny
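The "unusually high percentage of words predictable by what came before" test described above can be sketched with a toy bigram model standing in for a real LLM. This is an illustrative sketch only; the corpus, model, and scoring function are made-up assumptions, not any actual detector's implementation:

```python
from collections import defaultdict, Counter

def train_bigram(tokens):
    """Count next-token frequencies for each token in a reference corpus."""
    model = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def predictability(model, tokens):
    """Fraction of tokens matching the model's single most likely continuation."""
    hits = total = 0
    for a, b in zip(tokens, tokens[1:]):
        if model[a]:
            total += 1
            if b == model[a].most_common(1)[0][0]:
                hits += 1
    return hits / total if total else 0.0

corpus = "the cat sat on the mat the cat sat on the rug".split()
model = train_bigram(corpus)

formulaic = "the cat sat on the mat".split()   # follows the corpus statistics
novel = "the dog slept under a tree".split()   # mostly unseen continuations

print(predictability(model, formulaic) > predictability(model, novel))  # True
```

A real detector plays the same game with an LLM's next-token probabilities instead of bigram counts: text whose words are consistently the model's top predictions scores as likely machine-generated.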
4/15/2026 at 3:20:43 PM
> LLM output doesn't have the variety of human output, since they operate in fixed fashion - statistical inference followed by formulaic sampling.
This is the wrong thing to look at; your chess analogy is much stronger, and the detection method is similar (if you can figure out a prompt that generates something close to the content, it almost certainly isn't of human origin).
But as to why the thing I'm quoting doesn't work: if you took, say, web comic author Darren "Gav" Bleuel, put him through a sci-fi mass-duplication incident that makes 950 million of him, and had them all talking and writing all over the internet, people would very quickly learn to recognise the style, which would have very little variety because they'd all be forks of the same person.
Indeed, LLMs are very good at presenting other styles than their defaults, better at this than most humans, and what gives away LLMs is that (1) very few people bother to ask them to act other than their defaults, and (2) all the different models, being trained in similar ways on similar data with similar architectures, are inherently similar to each other.
by ben_w
4/15/2026 at 8:31:44 PM
An LLM is just a computer function that predicts the next word based on the input you give it. It doesn't make any difference what the input is (e.g. "please respond in style X"): the function doesn't change, and the statistical signature of how it works will still be there.
If you don't believe me, try it for yourself. Ask an AI to generate some text and give it to the AI detector below (paste your text, then click on scan). Now ask the AI to generate in a different style and see if it causes the detector to fail.
by HarHarVeryFunny
4/16/2026 at 8:02:40 AM
I can't use that linked app; it paywalls immediately. Unlike the person you were replying to here[0], I do not claim that this is impossible:
An LLM is indeed just a computer function that does stats. And our brains are just electrochemistry that does stats. This is why stylometric analysis of human writing is a thing.
In my previous experience, tools such as the one you've linked to used to be quite poor. I assume they're better since then, but then again so are the models.
by ben_w
4/16/2026 at 2:27:35 PM
> I assume they're better since then, but then again so are the models.
Yes, but "better" means different things for each of these.
Detectors are trying to get better at distinguishing human from LLM-generated text.
LLMs are being improved to generate more useful (and benchmark maxxing) outputs, not to attempt to avoid detection.
LLMs are in fact explicitly trained to be as predictable as possible. The training goal is to minimize continuation prediction errors, which means they are in effect being trained to generate output where each word can be predicted by what came before it (which we can contrast to a human who tries to spice it up and keep it interesting by not being too predictable!).
RL post-training, which is especially used for computer code and math, is going to change this word-by-word predictability (detectability) a bit since the focus is now on a longer term goal rather than next word, but to some extent you could also view it as just steering/narrowing the output of the model towards that goal, not totally overriding the next-word statistics.
I don't know if there are AI detectors specifically trained to detect AI code rather than prose, but I'd expect that is more difficult to do, both because of the RL factor, and because computer code is so predictable in the first place - adhering to rigid syntax etc.
by HarHarVeryFunny
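The training objective described above (minimizing continuation prediction error) can be made concrete with a one-step cross-entropy calculation. The distributions below are made-up numbers purely for illustration:

```python
from math import log

def next_token_loss(probs, target):
    """Cross-entropy for a single prediction step: -log P(target)."""
    return -log(probs[target])

# Hypothetical model distributions over the word following "It's not just X,"
confident = {"it's": 0.9, "a": 0.05, "the": 0.05}                    # formulaic
spread = {"it's": 0.2, "a": 0.2, "the": 0.2, "we": 0.2, "so": 0.2}   # varied

# Training lowers loss, i.e. pushes the model toward the more
# predictable (confident) distribution over continuations.
print(next_token_loss(confident, "it's") < next_token_loss(spread, "it's"))  # True
```

This is exactly the predictability that detectors then look for: text generated by sampling from such sharpened distributions scores as unusually low-loss under a similar model.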
4/15/2026 at 3:56:21 PM
What if the prompt includes, "Produce output that doesn't sound like an AI generated it."?
by newsoftheday
4/15/2026 at 6:05:47 PM
I got curious and tried: https://claude.ai/share/3af7bd6a-15f8-4533-9dc3-a44adef255b3
by js8
4/16/2026 at 8:07:12 AM
Basically the same on ChatGPT. DeepSeek managed to generate output rather than meta-discussion about how to generate output.
by ben_w
4/15/2026 at 9:06:58 PM
That's actually interesting, thanks. It's like AI is tattling on itself.
by newsoftheday
4/15/2026 at 10:26:11 PM
A human can easily produce output that looks like anything an LLM can produce, therefore an LLM detector that can say "this is 100% written by AI" cannot exist. It's really that simple.
> Can you ever be 100% sure? Maybe not
The commenter I was replying to claimed exactly this. Their AI detector showed that the text was "100%" AI generated.
by devmor
4/16/2026 at 2:47:44 AM
I was just expressing some caution. Saying you are 100% certain of anything when the evidence is statistical seems a bit too certain, especially if it was just from a short text sample.
Compare to flipping a coin, counting heads vs tails, and trying to assess whether it's a fair or biased coin. After 1000 flips, if the split isn't close to 50/50 you would rightfully be suspicious, and if it was 10/90 you should be almost certain it's biased. But you can never be 100% sure.
by HarHarVeryFunny
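The coin-flip intuition above can be checked with an exact binomial tail probability. A small sketch; `tail_prob` is an illustrative helper written for this example, not a library function:

```python
from math import comb

def tail_prob(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more heads by luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Even a modest-looking 600/1000 split is astronomically unlikely for a fair
# coin (roughly on the order of 1e-10), yet no finite count of flips ever
# drives the probability exactly to zero: hence "almost certain", never 100%.
print(tail_prob(1000, 600))
```

The same logic underpins the detector confidence numbers discussed earlier: a strong statistical signature justifies near-certainty, but never literal certainty.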
4/15/2026 at 3:17:18 PM
[dead]
by goodmythical
4/15/2026 at 1:10:26 PM
[flagged]
by watsonL1F7