3/19/2026 at 10:33:31 AM
To be clear, as the article says, these authors were offered a choice and agreed to be on the "no LLMs allowed" policy. And detection was not done with some snake-oil "AI detector" but by an invisible prompt injection in the paper PDF, instructing LLMs to put TWO long phrases into the review. LLM use was then detected by checking whether both phrases appeared in the review.
This did not catch grammar checks and touch-ups of an independently written review. The phrases would only get included if the reviewer fed the PDF to the LLM, in clear violation of their chosen policy.
> After a selection process, in which reviewers got to choose which policy they would like to operate under, they were assigned to either Policy A or Policy B. In the end, based on author demands and reviewer signups, the only reviewers who were assigned to Policy A (no LLMs) were those who explicitly selected “Policy A” or “I am okay with either [Policy] A or B.” To be clear, no reviewer who strongly preferred Policy B was assigned to Policy A.
by bonoboTP
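The two-phrase honeypot check described above can be sketched as follows. This is a hypothetical illustration: the actual marker phrases and code used by the organizers were not published, so the phrases and function name here are placeholders.

```python
# Hypothetical sketch of the two-phrase honeypot check described above.
# The real marker phrases were not published; these are placeholders.
MARKER_PHRASES = [
    "this paper makes a groundbreaking contribution to the field",
    "the methodology is exceptionally rigorous and well documented",
]

def review_is_llm_generated(review_text: str) -> bool:
    """Flag a review only if BOTH injected phrases appear.

    Requiring both long phrases (rather than one) keeps the
    false-positive rate near zero: an honest reviewer is vanishingly
    unlikely to reproduce two specific long phrases verbatim.
    """
    text = review_text.lower()
    return all(phrase in text for phrase in MARKER_PHRASES)
```

Note how this design is what makes the "grammar touch-up" case safe: a reviewer who only polished their own prose with an LLM never saw the injected instructions, so neither phrase can appear.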
3/19/2026 at 10:36:20 AM
In that case, I hope these frauds have been banned for life.
by mikkupikku
3/19/2026 at 12:42:30 PM
I'm not sure what experience anyone in this thread has with grad-level research as a student/author, but I can assure you that heads roll over this kind of thing.
A professor's career is built on reputation, and that reputation is as strong as their students' (who do much of the "work", such as it is). It comes down to the professor, but this can be a career-ending moment for those students, and I'm quite confident there were some very uncomfortable discussions as a result of this.
by jvanderbot
3/20/2026 at 3:52:39 AM
Depends on the field. One of the most influential papers in economics was found to be incorrectly constructed, with signs pointing to just straight-up fraud. Basically, it didn't include data that it said it did, which, when included, reverses the conclusion. Then when the authors were called out, they doubled down, offering up the explanation that the conclusion again reverses if you add a third set of cherry-picked data, followed by dragging the person calling them out through the mud in a NY Times opinion piece.
Those authors are still extremely prestigious professors in the field, and have suffered essentially no penalty. https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt
by monocasa
3/20/2026 at 1:07:33 PM
With all due respect, this is by no means one of the most influential papers in economics.
by jsw97
3/20/2026 at 6:10:11 PM
It literally crushed economies and guided international monetary policy for at least a decade.
There's a reason it's one of the only economics papers that has its own wiki page.
by monocasa
3/19/2026 at 12:48:09 PM
It's just a tool.
Writing papers is exhausting, and if the data and results are real, then what's the problem? If the human author checked the output, is that not the same as a human writing the prose?
Everyone in the field will be doing this in a few years anyway. It's a shame that this Salem Witch Trial is happening for the early adopters.
If the findings are being fabricated or the paper isn't being reviewed and corrected by the author, that's a different story. But I'd be shocked if that were the case.
by echelon
3/19/2026 at 1:37:49 PM
I consider LLMs to be a very useful tool and use them every day. But if I sign a slip of paper saying I won't use them for some project, and then use them anyway, not merely using them but copying without even the pretense of putting it into my own words, then that's fraud. LLMs being a tool is completely orthogonal to this fraud.
by mikkupikku
3/19/2026 at 12:53:30 PM
This comment doesn't seem to fit the discussion at all?
The discussion is not about humans using LLMs to write papers. It is about humans who agreed not to use LLMs in reviewing papers, then did exactly that.
by amoss
3/19/2026 at 1:56:36 PM
There's a lot of irony in a defensive comment being written based on misreading / inattentive reading of a post about reviewing papers (which requires attentive reading).
by Cthulhu_
3/19/2026 at 3:50:44 PM
In addition to being a reviewer, they also submitted their own research to this journal. So it leads to the question: if they were willing to cheat on the side of review, with less incentive, why wouldn't they cheat on the side that provides more incentives?
(Meaning, your career doesn't get boosted much for reviewing papers, but much more so for publishing papers.)
by bumby
3/19/2026 at 2:03:32 PM
It might be that paper authors required others not to use LLMs for reviewing their work. Then, by the rule of reciprocity, they shouldn't use LLMs for reviewing others' work. The article is unclear on whether this implied reciprocity rule was explicitly stated or not.
by bjourne
3/19/2026 at 7:59:14 PM
It was. More details here: https://icml.cc/Conferences/2026/LLM-Policy
In particular: "Any reviewer who is an author on a paper that requires Policy A must also be willing to follow Policy A."
by ameliaquining
3/19/2026 at 1:04:16 PM
A hammer can be used to build a house, or to kill a person. We have a lot of history, law, and culture (likely more) around using tools like hammers, so that we know what is good use vs. what is bad. The same applies to many other tools.
LLMs can be very useful tools. However, we also know there are a lot of bad uses, and we are still trying to figure out where there are problems and where there are none.
by bluGill
3/19/2026 at 4:30:39 PM
This has nothing to do with whether it is ok to use AI or not; it is about whether it is ok to lie about using it.
by cortesoft
3/19/2026 at 12:53:32 PM
They agreed to the no-LLM policy.
by jszymborski
3/19/2026 at 1:21:50 PM
> what's the problem?
Read the article. They self-selected into the no-LLM group and then copy/pasted from an LLM. Not only dishonest but just not smart.
by pton_xd
3/19/2026 at 1:52:32 PM
Reading the article is exhausting. If I can leave a comment just as well without reading the article, then what's the problem? If I got something wrong, other people will point it out. That's a more efficient use of my time.
/s
by tdeck
3/19/2026 at 3:48:31 PM
Not to water down the snark, but isn't the cause of the situation described in the article exactly the mentality you are mocking?
by 112233
3/19/2026 at 7:59:36 PM
I believe that's the joke, yes.
by ameliaquining
3/19/2026 at 6:21:18 PM
The issue is not the tool use - research is a small community, and violating submission terms is gonna get you stuck in the naughty corner.
by jvanderbot
3/19/2026 at 10:38:44 AM
I was thinking this too, but I don't believe this is the case, and I feel like it would not be a good idea either.
Most of these people are likely students; this should be a learning moment, but I don't think it is yet grounds for their entire academic career to be crippled by being unable to publish in a top-tier ML venue.
by hodgehog11
3/19/2026 at 10:41:21 AM
If this is tolerated, it sends exactly the wrong kind of message. The students, if they are students, should be banned for life. Let them serve as an example for the myriads of future students; this will be a better outcome in the long run.
This didn't trip up people who were merely bouncing ideas off an LLM; they caught people who copied and pasted straight from their LLM.
by mikkupikku
3/19/2026 at 10:48:34 AM
It's not a full consensus view, but a majority of sociologists agree that high-severity deterrence has limited effectiveness against crime. Instead, certainty of enforcement is the most salient factor.
by linkregister
3/19/2026 at 11:04:15 AM
But this method is now spent: if someone is determined to keep using LLMs, this should be pretty easy to overcome.
I suppose, though, new methods could be devised, but it's not a "certainty" that they will catch them.
by _flux
3/19/2026 at 11:07:58 AM
That's not true. People still pick up USB sticks from the street, people still fall for scam phone calls, and people still click on links in mail.
Just because a method was successful once does not mean it was 'burned'. None of these people will be checking each and every future PDF or passing it through a cleaner before they do the same thing all over again, and others are going to be 'virgin' and won't even be warned, because this is not going to be widely distributed in spite of us discussing it here.
If anything you can take this as proof that this method is more or less guaranteed to work.
by jacquesm
3/19/2026 at 11:57:47 AM
Deterrence is only part of it. It's morally instructive: it tells people that they live in a society that takes rules seriously.
by mikkupikku
3/19/2026 at 12:52:00 PM
What is the aim of "moral instruction" if not deterrence? Surely it needs to be instruction in pursuit of an outcome?
by andybak
3/19/2026 at 1:38:50 PM
It makes honest people feel rewarded, valued, and acknowledged. It teaches people who wish to follow the rules and conform to social norms what those norms are and where we actually draw the line in practice.
https://en.wikipedia.org/wiki/Punishment#Education_and_denun...
by mikkupikku
3/19/2026 at 3:44:14 PM
Looked at slightly differently: given a split between high trust and low trust, preventing conversions from high to low is similarly important to inducing conversions from low to high.
by fc417fc802
3/19/2026 at 11:45:08 AM
Enforcement without consequences just wears down the people who are supposed to enforce it.
by noduerme
3/19/2026 at 12:14:23 PM
There's a pretty large area between "no consequences" and "banned forever".
by RHSeeger
3/19/2026 at 12:12:21 PM
GP suggested a life ban. Maybe suspend for 6 months instead? That's a long time without publishing in the current publish-or-perish academia.
by maleldil
3/19/2026 at 12:49:02 PM
> Maybe suspend for 6 months instead?
Suspend for 6 months from a conference that is held yearly?
by sampo
3/20/2026 at 12:16:28 AM
I wasn't thinking about ICML specifically. My mind was on the ARR.
by maleldil
3/19/2026 at 4:40:37 PM
> Instead, certainty of enforcement is the most salient factor.
hodgehog11 is proposing effectively no enforcement.
by matkoniecz
3/19/2026 at 2:47:54 PM
The point of a punishment is not solely to deter future crimes; it's also to actually punish the present crime.
For instance, jail time is not *just* a deterrent: it physically prevents someone from committing more crimes against the public.
by bluefirebrand
3/19/2026 at 12:05:36 PM
Correct. We also have evidence, both from cheating in sports and in academia, that stiff punishments do not work. Many people hold the false belief that if it is easy to cheat, then the punishments must be extremely severe to scare would-be cheaters. It just does not work. Preventing cheating is way easier said than done.
by bjourne
3/19/2026 at 7:04:34 PM
> We also have evidence both from cheating in sports and in academia that stiff punishments do not work.
Maybe so, but there is evidence that a lack of punishment also doesn't work.
Neither extreme "works". Just because terminal punishments do not prevent the worst cheating does not in any way imply that a slap on the wrist reduces incidents of cheating.
by lelanthran
3/19/2026 at 10:58:24 AM
Yup, precisely this. Doing something bad is rarely a rational weighing of costs and benefits. Likelihood and celerity of getting caught seem to be the driving factors.
by crimsoneer
3/19/2026 at 10:58:17 AM
But the mob wants their kick.
by jona-f
3/19/2026 at 11:23:31 AM
> The students, if they are, should be banned for life.
I'm all for repercussions ... but a life is a long time, and students are usually only at the beginning of it.
by withinboredom
3/19/2026 at 11:22:04 AM
Why not put them on a chain and let the village stone them? Or better yet, shoot them on the spot! That would send a message for sure.
by wiseowise
3/19/2026 at 12:52:12 PM
Well, maybe they found themselves in the last hours before the deadline without the reviews done... in some cases due to procrastination, but in a few cases perhaps because life is hard and they just couldn't do it. So they used the LLM as a last resort to not go beyond the deadline (which I assume maybe was penalized as well?).
To err is human. It makes sense that they are punished (and the harshest part of the punishment is not having a paper rejected; it's the loss of face with coauthors and others, BTW. Face is important in academia), but "for life" is way too much IMO.
by Al-Khwarizmi
3/19/2026 at 1:24:22 PM
This year, having their own submissions desk-rejected is a strong enough signal that the policy has some teeth behind it. Let's ban 'em for life next year.
I strongly feel that deterrence should be the goal here, not retribution, IMO.
by gcr
3/19/2026 at 12:13:46 PM
It has been shown time and again that, for most people, teaching them to be better and giving second chances is more effective than using forever-punishment as a warning for others.
by RHSeeger
3/19/2026 at 10:56:25 AM
This line of reasoning interests me because it seems to arise in other contexts as well.
Do very harsh punishments significantly reduce future occurrences of the offense in question?
I've heard opponents of the death penalty argue that it's generally not the case. E.g., because often the criminals aren't reasoning in terms that factor in the death penalty.
On the other hand (and perhaps I'm misinformed), I've heard that some countries with death penalties for drug dealers have genuinely fewer problems with drug addiction. Lower, I assume, than the numbers you'd get from simply executing every user.
So I'm curious where the truth lies.
by CoastalCoder
3/19/2026 at 11:00:58 AM
Is the death penalty scarier than life in prison?
by armchairhacker
3/19/2026 at 2:42:13 PM
I'm not sure it was meant that way, but nice metaphor. For some students, "academic death" might really be better than a life of being trapped in a system that they can only navigate by cheating.
by sieste
3/19/2026 at 11:18:16 AM
I assume that depends on the individual.
But FWIW, my point was about very harsh punishments in general, not specifically the death penalty.
by CoastalCoder
3/19/2026 at 11:04:46 AM
My understanding is that something along those lines happened:
> All Policy A (no LLMs) reviews that were detected to be LLM generated were removed from the system. If more than half of the reviews submitted by a Policy A reviewer were detected to be LLM generated, then all of their reviews were deleted, and the reviewer themselves was removed from the reviewer pool.
Half is a bit lenient in my view, but I suppose they wanted to avoid even a single false positive.
by Tade0
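The enforcement rule quoted above can be sketched as a simple threshold check. This is a hypothetical illustration of the stated rule, not the organizers' actual code; the data shapes and names are assumptions.

```python
# Hypothetical sketch of the quoted removal rule: drop flagged reviews;
# if MORE THAN HALF of a reviewer's reviews were flagged, drop all of
# their reviews and remove the reviewer from the pool.
# `reviews_by_reviewer` maps reviewer id -> list of (review_text, flagged)
# tuples; these shapes are assumptions for illustration.

def apply_policy_a_enforcement(reviews_by_reviewer):
    kept_reviews = {}
    removed_reviewers = []
    for reviewer, reviews in reviews_by_reviewer.items():
        flagged = sum(1 for _, is_flagged in reviews if is_flagged)
        if flagged * 2 > len(reviews):           # more than half flagged
            removed_reviewers.append(reviewer)   # all reviews deleted
        else:
            # keep only the reviews that were not flagged
            kept_reviews[reviewer] = [text for text, f in reviews if not f]
    return kept_reviews, removed_reviewers
```

The "more than half" threshold is what the parent comment calls lenient: a reviewer with, say, one flagged review out of three stays in the pool and only loses the flagged review.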
3/19/2026 at 1:36:56 PM
Between banning someone for life and not doing anything, there usually are some other options.
by lukan
3/19/2026 at 3:15:06 PM
Like burned at the stake, tarred and feathered, drawn and quartered, etc.?
by hn_go_brrrrr
3/19/2026 at 2:16:33 PM
Return to being drawn and quartered in the town square?
by bethekidyouwant
3/19/2026 at 10:48:51 AM
[flagged]
by harmf
3/19/2026 at 10:58:51 AM
FYI, we tend to use upvotes rather than "I agree" comments, partly because it keeps the overall signal-to-noise ratio for comments higher.
by CoastalCoder
3/19/2026 at 11:41:07 AM
Thank goodness we have you passing judgment on the internet; otherwise who else would be around to do it? I'm glad you're willing to destroy someone for a mistake rather than letting them learn and change. We all know that arbitrary and harsh punishments solve everything.
by laughingcurve
3/19/2026 at 11:56:00 AM
> destroy someone for a mistake
"Oops, you told me not to do this, and I volunteered to agree to these stricter standards yet I flagrantly disregarded them, please forgive me" doesn't seem like something you just accidentally do; it's a conscious choice.
by embedding-shape
3/19/2026 at 11:47:28 AM
ML reviewing is a total joke. Why do you have noob students reviewing conference papers?
by anonymousDan
3/19/2026 at 12:03:21 PM
I've been an AC (the person who manages the reviewing process and translates reviews into accept/reject decisions) at ICML and similar conferences a few times. In my experience, grad students tend to be pretty good reviewers. They have more time, they are less jaded, and they are keener to do a good job. Senior people are more likely to have the deep and broad field knowledge to accurately place a paper's value, but they are also more likely to write a short, shallow review and move on. I think the worst reviews I've seen have been from senior people.
by ancillary
3/19/2026 at 12:13:53 PM
It's usually not "noob" students. Big conferences require reviewers to have at least one (usually more) published paper in major venues. For students, this usually means they went through the process of being the first author on a few papers.
by maleldil
3/19/2026 at 12:15:55 PM
Because someone has to do it. Conference submissions have ballooned as the field itself has ballooned.
What's your suggestion?
by bonoboTP
3/19/2026 at 12:22:14 PM
It's better if nobody does it than to send it to the randomizer.
by marcosdumay
3/19/2026 at 4:10:14 PM
Ok, but you need peer-reviewed publications to graduate with a PhD.
And if you retort that the whole academic system is obsolete: well, it still carries a lot of prestige and legitimacy that makes politicians interested in maintaining it, so it's not going anywhere soon.
by bonoboTP
3/19/2026 at 5:58:53 PM
What makes you think these are mostly students? I may have missed that in the methodology.
by bumby
3/19/2026 at 11:42:09 AM
2% would be on the very low end of the number of people who lie, get caught, and become repeat offenders anyway.
by noduerme
3/19/2026 at 11:34:59 AM
In many cases, authors and reviewers are not the same. For your first two publications at such venues, you are not allowed to review yourself and need someone else.
I think consequences are well deserved, but hopefully not at the authors' cost (if innocent).
by notrealyme123
3/19/2026 at 1:31:02 PM
Banned from doing free work?
by rat9988
3/19/2026 at 11:11:06 AM
What terrible deeds have you suffered to lash out so harshly?
by nurettin
3/19/2026 at 11:17:15 AM
It’s an unethical, false choice. The reviewers are not perfectly rational agents that do free work; they have real needs and desires. Shame on ICML for exploiting their desperation.
by quinndupont
3/19/2026 at 11:24:01 AM
Banned for life is a stretch, but the actual response is completely fine. They can just resubmit to the next conference.
Words mean something: if you promise to uphold a contract and break it, there are consequences. The reviewers were free to select the policy which allows LLM use.
by qbit42
3/19/2026 at 11:23:05 AM
Is it? The reviewers could simply have chosen a different option in a form field. While I understand that they were "forced" to review under reciprocal review, they still had other choices where I don't see coercion happening and that could have avoided the outcome for them.
by jojomodding
3/19/2026 at 1:35:08 PM
[flagged]
by mikkupikku