2/26/2026 at 1:57:31 AM
Is there an article about this written by a human? The “it’s not X. It’s Y” is too distracting.
by Infernal
2/26/2026 at 3:31:56 AM
For me it's the constant feel of everything being "exciting" while no real information is actually conveyed. It's a common tactic of both AI and clickbaity articles. There's no hard evidence here, just hearsay. Nothing really to report until there's more information. I don't want drama in reporting, I want facts. But I guess I'm an outlier, which is how we got both this AI style and the clickbait it was trained on...
It also doesn't help that all the title graphics have the same dramatic feeling and are certainly AI generated.
by godelski
2/26/2026 at 2:28:56 PM
> For me it's the constant feel of everything being "exciting" while no real information is actually conveyed. It's a common tactic of both AI and clickbaity articles.
Yes! You put your finger on what bugs me about "no LLM" rules. It's not that LLM writing is uniquely bad; it's that it tends towards the low-quality, clickbait-prone writing we already see everywhere. Banning LLM content is redundant.
Side note, I'd guess LLMs don't tend towards vapid writing just because of clickbaity training material. Rather, it's more fundamental. Writing well takes effort and energy, and LLMs seem to avoid effort just like humans do. Emotion-based reasoning in humans is itself a heuristic system favored by evolution. Thinking is expensive. Emotional slop is cheap.
by elcritch
2/26/2026 at 3:33:43 PM
Here's why no LLM rules make sense:
Imagine you know a guy named Patel. He pirated every movie ever made and is a prolific writer. So prolific, in fact, that he has a blog, called "Patel's Log." On this blog is a review of every movie ever made.
At first, you think that's neat. It's not exactly a book of all knowledge, but it's a significant human achievement, perhaps even historic.
Things take a turn for the worse when you're reading a review in the Times. You recognize Patel's distinctive style, and call him up to ask if the Times stole his post. He says that a Times columnist asked for his opinion, and he sent them a link. It turns out the columnist copied his blog post verbatim, but he says he can't complain without being inconsistent, since he pirated every movie ever made.
You find this humorous, until you recognize his style in the Atlantic, then the Post. Eventually you're disappointed when the Ebert staff publish an opinion piece in favor of Patel's Log matching (PatelLM), and you're forced to wonder if that's what Ebert would have thought.
Your boss sends you copy-pasted PatelLM content in a morning Slack message about a movie she watched over the weekend. Your friends quote Patel's Log verbatim on Discord. Hollywood starts using PatelLM to indirectly plagiarize other movies. Soon, Patel's posts begin to echo each other as the supply of novel perspectives is overwhelmed by PatelLM. Film criticism becomes a desiccated corpse, filled with plastic and presented in a glass case with a pin through its heart. Thought is dead. There is only Patel.
by whatshisface
2/28/2026 at 12:27:39 PM
> It turns out the columnist copied his blog post verbatim, but he says he can't complain without being inconsistent, since he pirated every movie ever made.
Copyright laws should be applied to LLMs and their users just like to anyone else. If they verbatim reproduce a post (or near enough), then it should be a copyright violation.
> You find this humorous, until you recognize his style in the Atlantic - then the Post.
There's nothing inherently wrong with humans or LLMs learning to mimic someone's style. This is actually the basis for styles, genres, etc. Whole trends in the arts are just people copying others' styles, sometimes with little improvements.
> Hollywood starts using PatelLM to indirectly plagiarize other movies. Soon, Patel's posts begin to echo each other as the supply of novel perspectives is overwhelmed by PatelLM. Film criticism becomes a desiccated corpse, filled with plastic and presented in a glass case with a pin through its heart. Thought is dead. There is only Patel.
How exactly is this different than what Hollywood did pre-LLMs for the last decade or two? LLMs didn't cause the homogenization of culture. Corporate Hollywood and the internet did that.
by elcritch
2/26/2026 at 7:29:45 PM
I've had a similar complaint about publishing in machine learning conferences. They're putting in these "no LLM" rules, but those are just idiotic. Proving LLM usage is really, really difficult, but (one of) the underlying problems has always been bad reviews, or low quality reviews. So why write an LLM rule? Why not tackle the problem more directly?
I don't care if people use LLMs, I care about generating slop. The two correlate, but by concentrating on the LLM part you just let the problem continue. It's extremely frustrating. Slop is slop. Doesn't matter how it is generated or by whom. Doesn't matter if you dress it up and put lipstick on a pig. Slop is slop.
by godelski
2/26/2026 at 1:58:53 AM
Waiting for the only human I trust in this space to report on this:
by consumer451
2/26/2026 at 2:58:06 AM
You could also check Matt Levine's Money Stuff at Bloomberg. He is quite well known on HN. The way he writes, plus his great knowledge and no BS, makes him the favorite (and only) journalist I follow.
Edit: actually someone already found his article and posted it: https://news.ycombinator.com/item?id=47160848
by bdelmas
2/26/2026 at 4:04:24 AM
Thanks, but the journalist I linked has been threatened multiple times, and yet she kept trucking through. I rarely say "avoid MSM," but in this case, in 2026, I would personally recommend avoiding your MSM recommendation.
No hard feelings.
by consumer451
2/26/2026 at 2:12:05 AM
+1 on that. Thanks!
I'm sure she'll be right on it...
by ChrisMarshallNY
2/26/2026 at 2:30:58 AM
She's far braver than most of us. I self-censor all the time on this website.
Havel's greengrocer, placing the sign.
Carney at Davos, his eyes uncovered.
by consumer451
2/26/2026 at 2:03:41 AM
Matt Levine has a take: https://www.bloomberg.com/opinion/newsletters/2026-02-24/ai-...
> Look, I am sorry. But if you go to Jump Trading and Jane Street and say “hello, I have an unregulated poorly designed mechanism that could lead to $50 billion of market value collapsing overnight, would you like to trade with me,” they are going to say yes, but their eyes are going to light up, you know? If at Time 0 you give them an extremely gameable system that can produce billions of dollars of profit, at Time 10 your system is going to be a smoking wreckage and they are going to have billions of dollars of profit. That’s their whole job, you know? I couldn’t tell you in advance what all the intermediate steps will be, and in fact in hindsight I cannot tell you what the intermediate steps actually were, how Jump and Jane Street made money off the collapse of Terra. But as a heuristic, I mean, come on. Terra was like “hello we have a balloon full of money, here is a pin, dooooooon’t pop the balloon.” Guess what!
by necubi
2/26/2026 at 3:50:38 AM
How about a non-paywalled link? archive.is seems to be having issues today.
by bsder
2/26/2026 at 5:08:08 AM
I have the newsletter in email and there wasn't much more than what was posted above, other than quoting from this article:
https://www.wsj.com/finance/currencies/jane-street-accused-o...
That point was the crux of Matt Levine's argument: Terra and Luna were unregulated and easy-to-game securities. So you can't complain when the smartest people on Wall Street figured out how to pop the balloon in their favor -- (not ai emdash) particularly when it's their job.
I will quote the first few paragraphs leading up to it though:
>The basic story of Terra is:
>Terra was a big crypto project, led by a company called Terraform Labs and a guy named Do Kwon, which at its peak had a market value of about $50 billion.
>It had a token, the currency of its blockchain, called Luna, which at its peak traded at almost $120 per token. It also had an algorithmic stablecoin, TerraUSD, whose mechanism was that it could always be redeemed for $1 worth of Luna.
>That’s a bad idea! The problem, which was extremely obvious and which everyone knew about, was that, if people lost confidence in Luna, there would be a death spiral: People would redeem TerraUSD for Luna and sell the Luna, which would drive down the price of Luna, which would lead to more redemptions, which would create even more Luna, until Luna was trading at a tiny fraction of a penny and every TerraUSD would be redeemed for millions of them.
>In May 2022 that very much happened. Terra collapsed, people lost a lot of money and Do Kwon got 15 years in prison for fraud.
>At its peak, though, Terra was a pretty big crypto project, and it had various dealings with some very smart and somewhat sharky trading firms like Jump Trading and Jane Street.
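The redemption mechanism quoted above can be sketched as a toy simulation. Everything here is invented for illustration (the panic fraction, the elasticity, the starting supplies, the function name `simulate_spiral`); it is not calibrated to the actual May 2022 collapse, it just shows why a panic premium turns a peg into a spiral:

```python
# Toy model of the death spiral described above. TerraUSD (UST) is always
# redeemable for $1 worth of Luna, so every redemption mints new Luna.
# If panic selling pushes the price down faster than the supply grows
# (elasticity > 1), each round mints even more Luna than the last.
# All parameters are invented for illustration, not historical data.

def simulate_spiral(ust_supply, luna_price, luna_supply,
                    panic=0.25, elasticity=1.5, rounds=12):
    history = []
    for _ in range(rounds):
        redeemed_usd = ust_supply * panic      # dollars of UST redeemed this round
        minted = redeemed_usd / luna_price     # each UST mints $1 worth of Luna
        new_supply = luna_supply + minted
        # Dump pressure: the price falls faster than pure dilution would imply.
        luna_price *= (luna_supply / new_supply) ** elasticity
        luna_supply = new_supply
        ust_supply -= redeemed_usd
        history.append((luna_price, luna_supply))
    return history

hist = simulate_spiral(ust_supply=18e9, luna_price=80.0, luna_supply=350e6)
```

Run it and Luna's price only ever falls while its supply only ever grows: each redemption round lowers the price, which makes the next round mint more tokens, which lowers the price further. With elasticity exactly 1 the market cap would be preserved each round; the spiral comes entirely from the panic premium.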
by bb88
2/26/2026 at 9:00:13 AM
“Can’t complain” doesn’t make it legal. I had this argument a number of times with cryptobros back then; “if it’s on the chain it’s fair game” was something I heard quite often. Just, no. Just because some code allows you to get away with something doesn’t make it not illegal[1].
The thing is, you or I don’t get to say what is or isn’t a market that is covered by market abuse laws. Regulators do, and while it’s true to say none of the relevant regulators had stepped up and conclusively shown these markets were under their jurisdiction, they had repeatedly said they were looking into them and given hints they felt they had jurisdiction. Heck, I was in a meeting with Kevin Warsh around 2014 or so[2] where he asked about bitcoin, so it’s clear the Fed was at least looking into crypto at that time, long before they made public comment. ISTR talking to the CFTC at the same time and they asked about it too.
So “unregulated” in this context doesn’t mean “not covered by regulation” it means “regulatory status extremely uncertain”. If you want to go in with a very aggressive strategy you’re taking some risk that regulators will post facto go after you because they do that a lot in conventional markets.
[1] Market abuse in this case, but it’s obviously the case in cybersecurity also.
[2] This isn’t some kind of weird boast btw, cbankers and regulators meet with people from industry all the time as part of their normal information-gathering process and he met with a group of us who were working with some bank on detecting things like market abuse. He had some sort of academic position at Stanford at the time iirc looking into various types of bank regulation, but he was still plugged into the fed governors because he had only just left that.
by seanhunter
2/26/2026 at 5:23:57 PM
> I had this argument a number of times with cryptobros at the time “if it’s on the chain it’s fair game” I heard quite often. Just, no. Just because some code allows you to get away with something doesn’t make it not illegal[1]
But that is/was the cryptobros' argument: Code is Law! And now, instead of fixing the algos, they're going right to suing each other, just like TradFi with TradLaw.
Crypto is the future!
by bb88
2/26/2026 at 2:07:58 AM
This this this so much. Thanks for pointing it out.
> Ten minutes is not a coincidence. It is a trade.
by nubg
2/26/2026 at 2:21:52 AM
Wow, that's... considerably worse than typical output nowadays.
by zahlman
2/26/2026 at 1:59:21 AM
https://www.wsj.com/finance/currencies/jane-street-accused-o... is a pretty normal one
by jlund-molfese
2/26/2026 at 3:39:36 AM
“It’s not about human writing. It’s about the message.”
AI is turning the entire web into LinkedIn itisnotaboutism.
by manoDev
by manoDev
2/26/2026 at 2:38:37 AM
Where is "it's not X. It's Y"? I didn't notice it.by tzs
2/26/2026 at 2:53:50 AM
The negative parallelism pattern is broader than the literal phrasing "not X, but Y". Here are some examples from the article:
- "A new lawsuit doesn’t just revisit the $40 billion Terra-Luna meltdown; it questions whether..."
- "Ten minutes is not a coincidence. It is a trade."
- "It reads less like a rescue offer and more like a firm positioning itself..."
- "These are not isolated; they are part of Snyder’s broader efforts..."
- "Not just as bystanders, but as alleged participants..."
by duskwuff
2/26/2026 at 3:29:39 AM
Math papers using LLMs: It's not true, it's false
by energy123
2/26/2026 at 3:02:13 AM
Thanks. I probably didn't notice them because they don't seem at all unnatural.
by tzs
2/26/2026 at 3:47:54 AM
Not seeming unnatural is literally what the LLM is trained for, but it's pretty interesting how little sense these phrases make when you dig in. It goes to show how little attention we pay normally, and/or how much weight we put on text seeming natural.
"A new lawsuit doesn’t just revisit the $40 billion Terra-Luna meltdown; it questions whether..." -- the purpose of a lawsuit is to question something (by making an allegation); you don't sue someone to "revisit".
"Ten minutes is not a coincidence. It is a trade." So is an hour, or thirty seconds, or...?
"Not just as bystanders, but as alleged participants" -- the "just" doesn't make sense; participants aren't bystanders.
Of the list, only "It reads less like a rescue offer" and "These are not isolated; they are part of Snyder’s broader efforts..." make any sense in context.
by wzdd
2/26/2026 at 8:39:52 AM
> Not seeming unnatural is literally what the LLM is trained to be, but it's pretty interesting how little sense they make when you dig in.
It's as if they're optimizing for all the surface-level indicators of well-formed, meaningful thought, without the actual substance to back that up.
by duskwuff
2/26/2026 at 2:52:05 AM
The awful graphic at the top is certainly not made by a human.
by cynicalkane