4/15/2026 at 6:53:05 PM
The article seems quite editorialized, shifting between describing "large-scale AI models" and "neural network-based approaches". The underlying paper is more precise: it compares against LUAR, a 2021 method based on BERT-style embeddings (i.e. a model with 82M parameters, roughly 0.2% the size of, e.g., the recent open-source Gemma models). I don't fault the paper's authors at all for this; their method is interesting and more interpretable! But you can check the publication history: the paper was originally uploaded in 2024: https://arxiv.org/abs/2403.08462
A good example of why some folks are bearish on journals.
"AI bad" seems to sell in some circles, and while there are many level-headed criticisms to be made of current AI fads, I don't think this qualifies.
by spindump8930
4/16/2026 at 2:02:01 AM
I don't see it. Seems even-keeled for the most part. Not a polemic.

"Researchers found that a relatively simple, linguistically grounded method can perform as well as - and in some cases better than - complex artificial intelligence systems in identifying authorship.
The study suggests that increasingly sophisticated AI is not always necessary for high-performing writing analysis, particularly when methods are designed around established principles of how language works."
by adi_kurian
4/15/2026 at 7:21:03 PM
Are you prepared to demonstrate a superior result with models newer than those available when the research was done? Can you suggest a candidate experiment design to test your hypothesis?

by throwanem