3/19/2026 at 4:08:08 PM
Papercuts like this are why I moved away from macOS. I will say, I don't love the use of LLMs to write these bug reports. It's probably fine if reviewed, but at least review for things like "worked on macOS 25", which obviously didn't exist. If that wasn't caught, how sure are you that the rest of the report is accurate? We all want the bugs fixed, but people are going to start throwing out the obviously LLM-written reports rather than have to validate each claim, since the author probably didn't.
by mrbuttons454
3/19/2026 at 5:10:44 PM
It's my strong belief that using AI in any capacity which does not state upfront "the following content was generated by artificial intelligence" is never acceptable. In most situations, allowing an AI to wield your name gives off the scent of "My time is more valuable than yours, so I've automated writing to you." It is quite disgraceful. If your use case would be materially harmed by an upfront disclosure of AI-generated content, then you need to take a good, hard think about what that means for what you're doing (then again, maybe you're not interested in thinking anymore, and that's how you got to this point in your life).
by 827a
3/19/2026 at 5:30:25 PM
It’s good-faith arbitrage. Until everyone automatically suspects everything of being LLM-generated and there is zero trust, anyone doing this is eroding the good faith that lets them get away with it in the first place.
by misnome
3/19/2026 at 10:32:53 PM
How far do you take this policy?
1. Am I allowed to ask an AI to proofread a draft for grammatical errors?
2. Am I allowed to ask an AI to proofread a draft for technical errors?
3. In both #1 and #2, am I allowed to ask the AI to suggest revisions, or is it only allowed to point out what's wrong and why?
4. If I write a sentence like "Lucy's laughter ___ her underlying anxiety" and I'm having trouble coming up with the right word to fill in the blank, can I give the sentence to an AI and ask it for a list of possible options?
5. While brainstorming, can I use an AI as a souped up rubber duck before I begin writing?
by Wowfunhappy
3/19/2026 at 11:50:28 PM
In general, I think those use cases are fine. But... AI-generated content is a slippery slope. Someone earlier today asked me to "review" a 50-page document they had completely generated with AI yet obviously not reviewed themselves. It is embarrassing.
by icedchai
3/20/2026 at 2:29:54 AM
This happened to me recently at work. I just ignored the request, but I was tempted to feed it to Copilot and just send them the response.
by crimsontech
3/20/2026 at 2:49:24 AM
Sending someone something that'll take them longer to read than it took you to write is taking the piss. I think that's a good rule of thumb for AI-generated output.
by deanishe
3/19/2026 at 11:23:47 PM
Not the parent commenter, but my answer is "no to all".
by Halian
3/19/2026 at 11:37:47 PM
...then I guess my next question would be: why? How do you feel about spellcheck? Should mobile users turn off autocorrect unless they disclose that it's turned on? I don't really understand your philosophy if you're opposed to an LLM pointing out when someone got the tense wrong.
by Wowfunhappy
3/20/2026 at 1:17:58 AM
Who said anything about spellcheck?
by throwaway290
3/20/2026 at 2:03:12 AM
GP said they weren't okay with someone using an AI to check for grammatical errors. If they would be okay with using software to check for spelling errors, I'd be interested to know why they're making that distinction. And I'd like to know what they think of autocorrect, which at least on the iPhone uses an on-device LLM nowadays.
by Wowfunhappy
3/20/2026 at 6:42:49 AM
"AI" can mean anything with machine learning; a spellchecker can use some sort of machine learning too. But what people mean when they say "AI" is an LLM chatbot. A spellchecker highlights mistakes; it doesn't offer to rewrite the text arbitrarily like an LLM chatbot does. So I totally understand how you can be for one and not the other.
By the way, autocorrect on the iPhone got worse recently; a bunch of times it "corrected" a word to the wrong one for me.
by throwaway290
3/19/2026 at 9:03:21 PM
I agree with you when you are talking with a human in good faith. I disagree when it comes to large corporations and government officials. Oftentimes there's a lot of red tape you have to get through, creating documents that nobody on their side is actually reading. Usually this is just to discourage people from completing the action they are trying to accomplish. LLM-generated content has gotten me back improperly held taxes and generated multiple extension requests where the receiver just had to check a box that they got it.
by Larrikin
3/20/2026 at 1:03:40 AM
The position I can most simply reduce my beliefs to is "it should have taken you longer to write something than the person receiving it is going to spend reading it."
That position still fits your scenario: if they're not actually caring enough to read it, then you don't need to care enough to write it. But for something like this, targeted at a technical audience, it's a higher bar.
Also, of course, the accuracy of the writing is relevant in both cases, which is something LLMs are absolutely worse at than humans. As noted in some of the comments here, this article had the LLM hallucinate the existence of macOS 25, a mistake no human would have made while writing such an article entirely by hand.
by wolrah
3/19/2026 at 8:58:58 PM
> state "the following content was generated by artificial intelligence"
"… but reviewed by a human / me for accuracy."
by alsetmusic
3/19/2026 at 6:15:05 PM
[flagged]
by brookst
3/19/2026 at 6:58:45 PM
I am genuinely curious whether you are trolling, or putting that forward as a genuine argument? Trivially, it's the difference between medium and message/content.
On one axis, whether a message is spoken, written via pen or typewriter or word processor, sent electronically, faxed, mailed, etc., it is fundamentally a communication from one human being to another, even if the medium/mechanics differ.
The other axis is actual content - genuine human interaction, intent, message and connection, vs a result of a prompt.
by NikolaNovak
3/19/2026 at 6:35:41 PM
> Same thing as using a word processor and printer rather than handwriting a note. Inexcusable.
There is no confusion, when in receipt of something written using a word processor, that it was so written, and people are free to respond accordingly (though, of course, most of us don't care). There is no such certainty with products generated by AI, so it is appropriate to disclose it responsibly.
by JadeNB
3/19/2026 at 4:25:35 PM
I'm used to papercuts on every OS, but at least with a Linux box I can roll it back. Usually it's as easy as picking the previous boot menu entry (with NixOS, the whole system rolls back that way). I find macOS acceptable enough for my laptop, but I'm doing most of my real work in Linux containers anyway.
by chuckadams
3/19/2026 at 9:07:25 PM
Every OS has papercuts like this. Want me to write a story about Linux or Windows that is equally painful? Pick your poison... I've dealt with it all.
by st3fan
3/19/2026 at 11:22:10 PM
Sure, when was the last time a Linux update overwrote your DNS resolver settings?
by bigyabai
3/19/2026 at 11:54:34 PM
A trivial search can find examples of Linux updates breaking all sorts of things, like wifi and other drivers/firmware.
by hombre_fatal
3/20/2026 at 2:57:06 AM
Today, literally today. My Pi-hole updated, broke its own DNS, stopped working, and broke DNS for every device on my network.
by CamJN
3/20/2026 at 7:36:42 AM
PiHole is a Linux kernel now?
by arcfour
3/20/2026 at 6:47:02 AM
Pi-hole's not an OS; it's an application/service.
by velocity3230
3/19/2026 at 11:30:09 PM
A month ago my Debian 13 laptop, out of the complete blue, decided it wanted zero DNS. Might have been an update that did it; I am not sure.
I was unable to resolve it and ended up reinstalling.
by jackvalentine
3/20/2026 at 3:47:01 AM
You must be using some flavor that doesn't have systemd, at least. Or a Linux from a decade ago.
by askbjoernhansen
3/20/2026 at 12:56:34 AM
Doesn't systemd fuck with resolv.conf?
Also, I have had many system updates that broke my X11 config.
by pdntspa
3/19/2026 at 5:48:58 PM
> Papercuts like this are why I moved away from macOS.
It's been this way for decades. Microsoft was known for preserving backwards compatibility, while Apple was known for being willing to break stuff.
The differences aren't that extreme in reality: Microsoft breaks stuff more than it used to, while Apple has become comparatively more conservative than once upon a time.
by rectang
3/20/2026 at 5:18:05 AM
> but people are going to start throwing out the obviously LLM written reports rather than have to validate each claim
So nothing will change, as Apple is renowned for throwing out reports?
by eviks
by eviks
3/19/2026 at 4:13:15 PM
Yes, for the time being the final report should probably come from us (but there's endless opportunity along the way to clarify thinking and understand industry-standard terms).
by Barbing
3/19/2026 at 4:11:18 PM
Using LLMs for any kind of writing is unethical, with the narrow exception of translation. If you didn't take the time to compose your words thoughtfully then you aren't owed the time to read them.
by duped
3/19/2026 at 4:22:15 PM
There is a huge difference between using an LLM and just blindly dumping its output on someone verbatim. I think it's fine to have an LLM write a first or second draft of something, then go through and reword most of it to be in your own voice.
by dec0dedab0de
3/19/2026 at 6:02:40 PM
If one is trying to avoid plagiarism, starting with an AI draft and polishing it to avoid signs of its true origins is not a good method.
by oasisbob
3/19/2026 at 4:28:43 PM
At this point I really think it's better to read broken English than have to read some clanker slop. It immediately makes me want to just ignore whatever text I'm reading; it's just a waste of time.
by r_lee
3/19/2026 at 4:49:46 PM
I do wonder; we had pretty good (by some measure of good) machine translations before LLMs. Better still, the artifacts in the old models were easily recognized as machine translation errors, and the mistranslation artifacts broke spectacularly; sometimes you could even see the source in the translation, and your brain could guess the intended meaning through the error. With LLMs this is less clear: you don't get the old-school artifacts; instead you get hallucinations, and very subtle errors that completely alter the meaning while leaving the sentence intact enough that your reader might not know this is a machine translation error.
by runarberg
3/19/2026 at 5:27:44 PM
And not just artifacts/hallucinations; the worst thing about it is the fact that it's basically "perfect" English, perfect formatting, which makes it just look like grey slop, since it all sounds the same and it's hard to distinguish between the slop articles/comments/PRs/whatever.
And it will also "clean up" the text to the point where important nuances and tangents get removed/transformed into some perfect literature where it loses its meaning and/or significance.
by r_lee
3/19/2026 at 4:54:08 PM
The LLM presents a perverse incentive here: it is used for perceived efficiency gains, most of which would be consumed by the act of rewriting and redrafting. The alienness of the thoughts in the document is also non-conducive to this; reading a long document about something you think you know but did not write is exhausting and mentally painful. This is why code review has such relatively poor results. Quite frankly, while having an LLM draft and rewriting it would be okay, I do not believe it is reasonable to expect that to ever happen. It will either be like high school paper plagiarism (just change around some of the sentences and rephrase it, bro), or it will simply not even get that much. It is unreasonable, given what we know about human psychology, to expect that "human rewrites of LLM drafts", at the level where the human contributes something, are maintainable and scalable; most people psychologically can't put in that effort.
by GauntletWizard
3/19/2026 at 5:40:39 PM
> The LLM presents a perverse incentive here - It is used for perceived efficiency gains, most of which would be consumed by the act of rewriting and redrafting.
It might give efficiency gains for the writer, but the reader has to read the slop and try to guess at what it was intending to communicate and weed out "hallucinations". That's a big loss of efficiency for the reader.
by leptons
3/20/2026 at 4:06:15 AM
I completely agree: the efficiency gains are purely from a selfish standpoint.
by GauntletWizard
3/20/2026 at 6:14:10 AM
I just can't square that the same people who complained left and right about "code smells" are the same ones shitting out slop code and proud they shipped 50k lines of code in a week. It's going to be a maintenance nightmare for someone else. I'm not sure how anyone coming in is going to learn a codebase written by LLMs when it's 10x more code than is reasonably needed to solve the problem.
by leptons
3/19/2026 at 5:44:59 PM
I don't think that's fine. I think that's an example of why using LLMs to write is unethical and creates no value. The purpose of written language is to express your thoughts or ideas to others. If you're synthesizing text and then refining it, you're not engaging in that practice.
by duped
3/19/2026 at 5:08:37 PM
Using an LLM is perfect for writing documentation, which is something I've always had problems with.
by rebolek
3/19/2026 at 5:45:20 PM
As someone who has dealt with projects with AI-generated documentation... I can't really say I agree. Good documentation is terse, efficiently communicating the essential details. AI output is soooooooo damn verbose. What should've been a paragraph becomes a giant markdown file. I like reading human-written documentation, but AI-slop documentation is so tedious I just bounce right off.
Plus, when someone wrote the documentation, I can ask the author about details and they'll probably know, since they had enough domain expertise and knowledge of the code to explain anything that might be missing. I can't trust you to know anything about the code you had an AI generate and then had an AI write documentation for.
Then there's the accuracy issue. Any documentation can be inaccurate and can obviously get outdated with time, but at least with human-authored documentation, I can be confident that the content at some point matched a person's best understanding of the topic. With AI, no understanding is involved; it's just probabilistically generated text, and we've all hopefully seen LLMs generate plausible-sounding but completely wrong text enough to somewhat doubt their output.
by mort96
3/20/2026 at 2:00:14 PM
Probabilistically generated text is light years better than my human-generated mess. I know my limits, and documentation is one of them.
by rebolek
3/19/2026 at 6:20:26 PM
Classic perfect/good. The choice is not usually “have humans write amazing top-notch documentation, or use an LLM”.
The choice is usually “have sparse, incomplete, out-of-date documentation… or use an LLM”.
by brookst
3/19/2026 at 6:58:01 PM
And my claim is that the latter is better.
by mort96
3/19/2026 at 7:17:55 PM
Cool, so just ignore documentation then. Problem solved for everyone.
by brookst
3/19/2026 at 10:13:06 PM
I don't see how that solves anything.
by mort96
3/20/2026 at 2:40:05 AM
We wouldn't have these silly arguments?
by dare944
3/20/2026 at 12:00:49 PM
Gah, hopefully the meaning was clear from context, but I just realized I said "latter" when I meant "former". Inconsistent human documentation is better than miles upon miles of AI-slop documentation.
by mort96
3/20/2026 at 12:09:51 AM
Given that people have access to LLMs themselves, publishing their output in lieu of good documentation (no matter how sparse) seems like it’s mostly downside.
by systoll
3/19/2026 at 5:46:39 PM
This immediately invalidates a software or technical project for me. The value of documentation isn't the output alone, but the act of documenting it by a person or people who understand it.
I have done a lot of technical writing in my career, and documenting things is exactly where you run into the worst design problems before they go live.
by duped
3/19/2026 at 5:21:30 PM
I disagree with the downvotes, but let me put it differently: if you don't understand, haven't reviewed, and aren't ready to own all of the LLM output (the thoughtful part), then you aren't owed the time to read it. If you didn't try to rein in the verbose slop that's the default for LLMs, I don't want to read it.
Maybe the poster is running a local LLM... you’d think that a SOTA model would have surmised that an overnight macOS upgrade can only be a minor version.
by yearolinuxdsktp
3/19/2026 at 4:16:47 PM
[flagged]
by eru
3/19/2026 at 7:48:15 PM
Agreed, which is why I didn't bother reading this comment before downvoting it. If you think that you were owed some other behavior from me despite not paying me for it, feel free to elaborate; for example, you could acknowledge that there exists an implicit social contract when it comes to basic human communication.
by kibwen
3/19/2026 at 4:41:12 PM
> If you didn't take the time to compose your words thoughtfully then you aren't owed the time to read them.
Apply this argument to code, to art, to law, to medicine.
It fails spectacularly.
Blaming the tool for the failure of the person is how you get outrageous arguments that photography can't be art, or that use of Photoshop makes something not art...
Do you blame the hammer or the nail gun when the house falls down, or is it the fault of the person who built it?
If you don't know what you're doing, it isn't the tool's fault.
by zer00eyz
3/19/2026 at 5:02:29 PM
I of course expect my lawyer and doctor to thoughtfully apply their knowledge to help me. Why should they be any different?
by abenga
3/19/2026 at 7:36:38 PM
“compose thoughtfully” != layman terminology
Lawyers thoughtfully write laws that other lawyers understand. I’m not sure why that’s confusing.
by lurking_swe
3/19/2026 at 5:41:12 PM
I do apply it to those, and I don't see how it "fails" at anything. Presenting synthesized words as original thought isn't using a tool; it's laziness at best.
by duped
3/19/2026 at 5:05:06 PM
That's very elitist and unfair to people who previously struggled to form their words but now have a better chance at doing so.
by wyufro
3/19/2026 at 5:40:37 PM
An elitist attitude towards plagiarists is common.
by bigyabai
3/19/2026 at 6:17:11 PM
Also elitist attitudes towards people for whom English isn’t a native language, elitist attitudes towards people with dyslexia and other conditions that make writing difficult, and elitist attitudes towards people with lower education levels.
by brookst
3/19/2026 at 6:54:38 PM
The BBC used to encourage its announcers to use Received Pronunciation, which was associated with high social class. The solution to this form of elitism was not to make everyone speak RP, but to encourage non-RP accents, which are more common in the modern BBC.
Your comment seems elitist by encouraging the use of artifice to fit better into an elitist world, rather than breaking down elitism.
by eesmith
3/19/2026 at 5:40:02 PM
I disagree, because those aren't their words.
by duped
3/19/2026 at 6:18:35 PM
Do we care about words or thoughts? Many folks are more interested in semantic meaning than character sequences. To each their own, of course.
by brookst
3/19/2026 at 7:57:29 PM
One problem I see with the broader use of LLMs these days is the death of literacy.
For example, you chose to read my response and attack the vocabulary as if that was the point I was trying to make. This is a misunderstanding. I am purposefully reusing the word choice of the comment I'm replying to.
I was trying to very concisely point out that if an LLM is generating your writing it is not your words or your thoughts that you're trying to communicate.
by duped
3/19/2026 at 6:25:57 PM
How'd you learn to write?
by CamperBob2
3/20/2026 at 6:28:22 PM
[dead]
by sieabahlpark
3/20/2026 at 1:10:15 AM
[dead]
by 2postsperday