3/17/2026 at 4:02:00 PM
Taking the question of whether this would be a useful addition to Node.js core aside, it must be noted that this 19k LoC PR was mostly generated by Claude Code and manually reviewed by the submitter, which in my opinion is against the spirit of the project and directly violates the terms of the Developer's Certificate of Origin set in the project's CONTRIBUTING.md
by indutny
3/17/2026 at 5:07:02 PM
Worth noting that mcollina is a member of the Node.js Technical Steering Committee
by mixologic
3/17/2026 at 10:30:39 PM
yes, this. if there's anyone i would trust in exploring these avenues, it's him and the maintainers doing god's work in the nodejs repo these past few years.
by kartaka83838
3/17/2026 at 5:40:31 PM
We call it a slip slop at work, it's ok to slip some slop if it's "our" slop :-)
by everlier
3/17/2026 at 7:59:17 PM
> I pointed the AI at the tedious parts, the stuff that makes a 14k-line PR possible but no human wants to hand-write: implementing every fs method variant (sync, callback, promises), wiring up test coverage, and generating docs.
Is it slop if it is carefully calculated? I tire of hearing people use slop to mean anything AI, even when it is carefully reviewed.
by giancarlostoro
3/17/2026 at 8:40:27 PM
Was 14k lines carefully reviewed? Seems unlikely.
by grey-area
3/17/2026 at 10:10:52 PM
Considering the many hundreds of technical comments over at the PR (https://github.com/nodejs/node/pull/61478), the 8 reviewers thanked by name in the article, and the stellar reputations of those involved, seems likely.
by joshkel
3/17/2026 at 10:43:28 PM
My mistake, 19k lines. At 2 mins per line that's (19000*2)/60/7 ≈ 90 7-hour days to review it all; are you sure it was all read? I mean, they couldn't be bothered to write it, so what are the chances they read it all?
For someone's website or one business maybe the risk is worth it, but for a widely used software project that many others build on, it is horrifying to see that much plausible code generated by an LLM.
by grey-area
3/18/2026 at 12:29:33 AM
When you review code, do you spend 2 minutes per line? That seems like a huge exaggeration of the effort required.
by pull_my_finger
3/18/2026 at 1:49:08 AM
I probably review about 1k LoC worth of PRs / day from my coworkers. It certainly doesn't take me 33 hours (!!) to do so, so I must be one of those rockstar 10x superhero ninja engineers I keep hearing about.
by seattle_spring
3/18/2026 at 7:39:20 AM
Are your coworkers producing the code using LLMs? And what level of trust do you place in them?
by dirkc
3/18/2026 at 10:20:49 AM
For half my coworkers, their LLM code is better than their code.
by ThunderSizzle
3/19/2026 at 9:06:57 PM
That’s depressing. For 80% of my coworkers, their LLM code is horrible. Only the seniors seem to use it well and not just spit out garbage.
by girvo
3/19/2026 at 11:45:21 PM
I think that goes back to whether they are programmers vs engineers.
Engineers will focus on the professionalism of the end product, even if they used AI to generate most of the product.
And I'm not going by "title", but by mindset. Most of my fellow engineers are not - they are just programmers - as in, they don't care about the non-coding part of the job at all.
by ThunderSizzle
3/18/2026 at 6:47:27 AM
Depends - if it is from a human I find I can trust it a lot more. If it is large blobs from LLMs I find it takes more effort. But it was just a guess at an average to give an estimate of the effort required. I'd hope they spent more than 2 mins on some of the more complex bits.
Are you genuinely confident in a framework project that lands 19k LoC generated PRs in one go? I'd worry about hidden security footguns if nothing else, and a lot of people use this for their apps. Thankfully I don't use it, but if I did I'd find this really troubling.
It also has security implications - if this is normalised in node.js it would be very easy to slip deniable exploits into large PRs. It is IMO almost impossible to properly review a PR that big for security and correctness.
by grey-area
3/18/2026 at 1:49:58 AM
> I mean they couldn’t be bothered to write it, so what are the chances they read it all?
What kind of logic is this?
by weird-eye-issue
3/18/2026 at 6:27:02 AM
It’s much harder to read code carefully than to write it. Particularly code generated by LLMs, which is mostly correct but then sometimes awful.
by grey-area
3/18/2026 at 1:14:19 PM
usually yes, but that's why there are tests, and there's a long road before people start depending on this code (if ever). people will try it, test it, report bugs, etc.
and it's not like super carefully written code is magically perfect. we know that djb can release things that are close to that, but almost nobody is like him at all!
by pas
3/18/2026 at 2:15:34 PM
The PR has been open for 3 months, and all the reviewers involved have actually read the whole code and are experts on the matter.
by ovflowd
3/17/2026 at 10:26:04 PM
[flagged]
by keeganpoppen
3/18/2026 at 2:05:36 AM
I carefully review far more than 14k LoC a week… I’m sure many here do. Certainly the language you write in will greatly bloat those numbers though, and Node in particular can be fairly boilerplate heavy.
by vinnymac
3/17/2026 at 9:03:35 PM
Pain is a signal. Even if the trick is not minding, it's still inadvisable to burn your hand on an open flame. The pain is there to help you not get hurt.
I do not think it is wise to brag that your solution to a problem is extremely painful but that you were impervious to all the pain. Others will still feel it. This code takes bandwidth to host and space on devices, and for maintainers it permanently doubles the work associated with evolving the filesystem APIs. If someone else comes along with the same kind of thinking they might just double those doubled costs, and someone else might 8x them, all because nobody could feel the pain they were passing on to others.
by conartist6
3/17/2026 at 10:24:20 PM
I don't see it to be such a pain.
> Bundle a full application into a Single Executable.
Embed a zip file into the executable, or something. Node sort of supports this since v25, see --build-sea. Bun and Deno have supported this for longer.
> Run tests without touching the disk.
This must be left to the host system to decide. Maybe I want them to touch the disk and leave traces useful for debugging. I'd go with tmpfile / tmpdir; whoever cares knows to mount them as tmpfs, which sits in RAM. (Or a ramdisk under Windows.)
> Sandbox a tenant’s file access. In a multi-tenant platform, you need to confine each tenant to a directory without them escaping
This looks like the wrong tool, again. Run your Node app in a container (like you are already doing), and mount every tenant's directory as a separate mount point into your container. (Similar with BSD jails.) This seems like the only problem that is not trivial to solve without a "VFS", but I'm not very certain that such a VFS would be as well-audited as Docker, or nsenter and unshare. The amount of work necessary for implementing that is too much for the niche benefit it would provide.
> Load code generated at runtime.
See tmpfs for a trivial answer. For a less trivial answer, I don't see how Node's code loader is bound to a filesystem. If it can import via https, just use ESM loader hooks and register() your loader, assuming you're running Node ≥ 20.6.
by nine_k
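For the last point, here is a minimal sketch of loading runtime-generated code without touching the disk. It uses a `data:` URL, which Node's ESM loader accepts out of the box; for more control (virtual paths, caching), the `module.register()` resolve/load hooks mentioned above are the Node ≥ 20.6 mechanism. This is an illustration, not part of the PR under discussion.

```javascript
// Sketch: import runtime-generated ESM source without touching the disk.
// Node can import a data: URL directly, no virtual filesystem needed.
const source = 'export const answer = 6 * 7;';
const url = 'data:text/javascript,' + encodeURIComponent(source);

(async () => {
  const mod = await import(url);
  console.log(mod.answer); // 42
})();
```

The same idea scales up with a `module.register()` loader whose `load()` hook returns `{ format: 'module', source, shortCircuit: true }` for a custom URL scheme.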
3/17/2026 at 4:33:25 PM
Large PRs could follow the practices that the Linux kernel dev lists follow. Sometimes large subsystem changes are carried separately for a while by the submitter for testing and maintenance before being accepted: in theory, reviewed, and if ready, then merged.
While the large code changes were maintained, they were often split up into a set of semantically meaningful commits for purposes of review and maintenance.
With AI blowing up the line counts on PRs, it's a skill set that more developers need to mature. It's good for their own review to take the mass changes, ask themselves how would they want to systematically review it in parts, then split the PR up into meaningful commits: e.g. interfaces, docs, subsets of changed implementations, etc.
by digikata
3/17/2026 at 5:59:02 PM
Nobody wants to review AI-generated code (unless we are paid for doing so). Open source is fun, that's why people do it for free... adding AI to the mix is just insulting to some, and boring to others.
Like, why on earth would I spend hours reviewing your PR that you/Claude took 5 minutes to write? I couldn't care less if it improves (best case scenario) my open source codebase; I simply don't enjoy the imbalance.
by dakiol
3/17/2026 at 11:58:00 PM
> Like, why on earth would I spend hours reviewing your PR that you/Claude took 5 minutes to write?
If the PR does what it says it does, why does it actually matter if it took 2 weeks or 2 minutes to put together, given that it's the equivalent level of quality on review?
by mpyne
3/18/2026 at 1:04:58 AM
“It works” is the bare minimum. Software is maintained for decades and should have a higher bar of quality.
by TingPing
3/18/2026 at 10:38:18 PM
> given that it's the equivalent level of quality on review?
by mpyne
3/19/2026 at 7:27:10 PM
One reason: if it takes 2 minutes to put together a PR, then you'll get an avalanche of contributions which you have no time to review. Sure, I can put AI in front to do the review, but then what's the point of my having an open source project?
by dakiol
3/20/2026 at 2:27:36 AM
> but then what's the point of my having an open source project?
For some people, the point was precisely to improve the software available to the global commons through a thriving and active open source effort. "Too many people are giving me too many high-quality PRs to review" is hardly something to complain about, even if you have to just pick them randomly to fit them in the time you have without AI (or other committers) to help review.
If your idea of open source is just to share the code you wanted to work on and ignore contributions, you can do that too. SQLite does that, after all.
by mpyne
3/18/2026 at 5:08:01 AM
> If the PR does what it says it does, why does it actually matter if it took 2 weeks or 2 minutes to put together, given that it's the equivalent level of quality on review?
You're right that the issue isn't how many minutes it took. The issue is that it's slop. Reviewing thousands of lines of crappy code is unpleasant whether they were autogenerated or painstakingly handcrafted. (Of course, few humans have the patience and resistance to learning to generate the amount of terrible code that AIs do routinely.)
by lmm
3/17/2026 at 10:43:03 PM
I get the frustration but I think this take only holds if you assume AI generated code is inherently worse. If someone uses Claude to scaffold the boilerplate and then actually goes through it properly, the end result is the same code you would have written by hand, just faster. The real problem is when people submit 14k lines they clearly did not read through. But that is a review process problem, not an AI problem. Bad PRs existed long before AI.
by hackemmy
3/17/2026 at 11:10:22 PM
I resonate with OP a lot, and in my opinion, it's not about the code quality. It's about the effort that was put in, like in each LoC. I can't quite put it in words, but, like, the art comparison works quite well. If someone generates a painting with Gemini, it makes it somewhat heartless. It may still be good and bring the project forward (in the case of this PR), but it has lost every emotional value.
I would probably never be able to review this kind of code in open source projects without any financial compensation, because of that reason. Not because I don't like LLMs, don't use LLMs, or think their code is of bad quality. But while without LLMs I knew there was a person who sat down and wrote all this in painstaking work, now I know that he or she barely steered a robot that wrote it. It may still be good work, and the steering and prompting is still work and requires skill, but I would not feel any emotional value in this code, and it would make it A LOT harder to gather motivation to review it. Interestingly, when I think about it, I realize that I would inherently have motivation to find out how the developer prompted the agent.
Like, you know, when I see a wooden statue of which I know it was designed and carved by someone in months of work, I could appreciate every single edge of the wood much more than if there's a statue that was designed by someone but carved by some kind of wooden CNC machine. It may be same statue and the same or even better quality, and it was still skillful work, but I lose my connection to it.
Can't quite pinpoint it, but for me, it seems, the human aspect is really important here, at least when it's about passion and motivation.
Maybe that made some sense, idk. I just wrote out of my ass.
by wobfan
3/18/2026 at 5:09:13 AM
Yes and no. Previously when someone submitted a 14k line PR you could be assured that they'd at least put a significant amount of time and effort into it, and the result was usually a certain floor on the quality level. Now that's no longer true.
by lmm
3/17/2026 at 10:18:25 PM
In theory, because the code being added is introducing a feature so compelling that it is worth it. In practice, that's rarely the case.
My personal approach to open source is more or less that when I need a piece of software to exist that does not, and there is no good reason to keep it private, it becomes open source. I don't do it for fun, I do it because I need it and might as well share it. If someone sends me a patch that enhances my use case, I will work with them to incorporate it. If they send me a patch that only benefits them, it becomes a calculus of how much effort it would take for me to review it. If the effort is high, my advice is to fork the project or make it easier for me to review. Granted, I don't maintain huge or vital projects, but that's precisely why: I don't need yet another programming language or runtime to exist and I wouldn't want to work on one for fun.
by IgorPartola
3/17/2026 at 8:23:56 PM
Why do you care how much effort it took the engineer to make it? If there was a huge amount of tedium that they used Claude Code for, then reviewed and cleaned up so that it's indistinguishable from whatever you'd expect from a human, what's it to you?
Not everyone has the same motivations. I've done open source for fun, I've done it to unblock something at work, I've done it to fix something that annoys me.
If your project is gaining useful functionality, that seems like a win.
by tyre
3/17/2026 at 8:40:13 PM
Because sometimes programming is an art and we want people to do it as if it was something they cared about. I play chess and this is a bit like that. Why do I play against humans? Because I want to face another person like me and see what strategies they can come up with.
Of course any chess bot is going to play better, but that's not the point.
by gonzalohm
3/17/2026 at 10:19:03 PM
What about the other times?
by IgorPartola
3/17/2026 at 10:36:07 PM
I don't think node virtual filesystems is anything like chess.
by madeofpalk
3/20/2026 at 2:10:08 PM
Solving problems is not like chess? I want to use my brain; not sure why that's so complicated to understand.
by gonzalohm
3/17/2026 at 10:44:18 PM
[flagged]
by UqWBcuFx6NV4r
3/17/2026 at 10:58:13 PM
TIL that when I do anything that makes society label me as a "developer", I am not allowed to enjoy it, or feel about it in any way, as it's now a job, entirely neutral in nature, and I gotta do it, whether I hate or enjoy it - no attached emotions allowed.
by wobfan
3/17/2026 at 11:21:41 PM
Ignore the mercenaries. Here they are legion.
As for us (aspiring) craftsmen, there are dozens of us! Dozens!
by paulryanrogers
3/18/2026 at 5:12:58 AM
> Why do you care how much effort it took the engineer to make it?
Because they're implicitly asking me to put in effort as a reviewer. Pretending that they put more effort in than they have is extremely rude, and intentionally or not, generating a large volume of code amounts to misleading your potential reviewers.
> If there was a huge amount of tedium that they used Claude Code for, then reviewed and cleaned up so that it’s indistinguishable from whatever you’d expect from a human; what’s it to you?
They never do though. These kind of imaginary good AI-based workflows are a "real communism has never been tried" thing.
> If your project is gaining useful functionality, that seems like a win.
Lines of code impose a maintenance cost, and that goes triple when the code quality is low (as is always the case for actually existing AI-generated code). The cost is probably higher than the benefit.
by lmm
3/18/2026 at 4:08:33 AM
I hate being paid to review AI slop.
by Gigachad
3/17/2026 at 4:45:36 PM
> With AI blowing up the line counts on PRs,
Well, the process you’re describing is mature and intentionally slows things down. The LLM push has almost the opposite philosophy. Everyone talks about going faster and no one believes it is about higher quality.
by goalieca
3/17/2026 at 5:06:56 PM
Go slow to go fast. Breaking up the PR this way also allows later humans and AI alike to understand the codebase. Slowing down the PR process with standards lets the project move faster overall.
If there is some bug that slips by review, having the PR broken down semantically allows quicker analysis and recovery later, for one. Even if you have AI reviewing new Node.js releases to decide whether you want to take in the new version, the commit log will be more analyzable by the AI with semantic commits.
Treating the code as throwaway is valid in a few small contexts, but that is not the case for PRs going into maintained projects like Node.js.
by digikata
3/17/2026 at 6:18:36 PM
TBF, most of the AI code I've reviewed isn't significantly different from code I've seen from people... in fact, I've seen significantly worse from real people.
The fact is, it's useful as a tool, but you still should review what's going on/in. That isn't always easy though, and I get that. I've been working on a TS/JS driver for MS-SQL so I can use some features not in other libraries, mostly bridging a Rust driver (first Tiberius, then mssql-client); the clean abstraction made the switch pretty quick... a fairly thorough test suite for Deno/Node/Bun kept the sanity in check. A Rust C-style library with FFI access in a TS/JS server environment.
My hardest part is actually having to set up a Windows Server to test the passwordless auth path (basically a connection string with integrated Windows auth). I've got about 80 hours of real time into this project so far. And I'll probably be doing 2 followups: one will be a generic ODBC adapter with a similar set of interfaces, and a final third adapter that will provide the same methods but using native SQLite underneath, smoothing over the differences.
I'm leveraging using/dispose (async) instead of explicit close/rollback patterns, similar to .NET, as well as Dapper-like methods for "Typed" results, though with no actual type validation... I'd considered trying to adapt Zod to check at least the first record or all records, and may still add the option.
All said though, I wouldn't have been able to do so much with so relatively little time without the use of AI. You don't have to sacrifice quality to gain efficiency with AI, but you do need to take the time to do it.
by tracker1
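The using/dispose pattern mentioned above can be sketched roughly like this. The `Connection` class here is a hypothetical stand-in, not the actual driver's API; `Symbol.asyncDispose` ships in recent Node releases, and where the `await using` syntax is available it calls this method automatically at scope exit.

```javascript
// Rough sketch of async disposal replacing explicit close()/rollback().
// Connection is a hypothetical stand-in for a real driver class.
class Connection {
  constructor() { this.open = true; }
  async query(sql) { return [{ sql }]; } // pretend result set
  // With `await using conn = new Connection()`, the runtime calls this
  // automatically when the block exits, even on exceptions.
  async [Symbol.asyncDispose]() { this.open = false; }
}

(async () => {
  const conn = new Connection();
  try {
    await conn.query('SELECT 1');
  } finally {
    // what `await using` does for you, written out for older runtimes
    await conn[Symbol.asyncDispose]();
  }
  console.log(conn.open); // false
})();
```

The appeal over manual close() is that cleanup is tied to lexical scope, so a thrown error can't leak the connection.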
3/17/2026 at 5:49:05 PM
> Everyone talks about going faster and no one believes it is about higher quality.
Go Fast And Break Things was considered a virtue in the JavaScript community long before LLMs became widely available.
by dotancohen
3/17/2026 at 6:59:47 PM
Fully disagree with this take. Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
Side note: the OpenJS executive director mentioned it's ok to use AI assistance on Node.js contributions:
I checked with legal and the foundation is fine with the DCO on AI-assisted contributions. We’ll work on getting this documented.
[1]: https://github.com/nodejs/node/pull/61478#issuecomment-40772...
by syrusakbary
3/17/2026 at 7:06:15 PM
I appreciate hearing your point of view on this. In my opinion the future of Open Source and AI-assisted coding is a much bigger issue, and different people have different levels of confidence in both positive and negative outcomes of LLM impact on our industry.
It is great to have a legal perspective on the compliance of LLM-generated code with DCO terms, and I feel safer knowing that at least it doesn't expose Node.js to legal risk. However, it doesn't address the well-known unresolved ethical concerns over the sourcing of the code produced by LLM tooling.
by indutny
3/17/2026 at 8:54:29 PM
AI coding is great, but iteration speed is absolutely not a desirable trait for a runtime. Stability is everything.
Speed-code all your SaaS apps, but slow iteration speeds are better for a runtime, because once you add something, you can basically never remove it. You can't iterate. You get literally one shot, and if you add an awkward or trappy API, everyone is now stuck with it forever. And what if this "must have" feature turns out to be kind of a dud, because everyone converged on a much more elegant solution a few years later? Congratulations, we now have to maintain this legacy feature forever and everyone has to migrate their codebase to some new solution.
Much better to let dependencies and competing platforms like bun or deno do all the innovating. Once everyone has tried and refined all the different ways of solving this particular problem, and all the kinks have been worked out, and all the different ways to structure the API have been tried, you can take just the best of the best ideas and add it into the runtime. It was late, but because of that it will be stable and not a train wreck.
But I know what you're thinking. "You can't do that. Just look at what happens to platforms that iterate slowly, like C or C++ or Java. They're toast." Oh wait, never mind, they're among the most popular platforms out there.
by jaredklewis
3/17/2026 at 9:01:44 PM
Since when did we accept that we can't go fast and offer stability at the same time?
Time is highly correlated with expertise. When you don't have expertise, you may go fast at the expense of stability, because you lack the experience to make the good decisions that actually save time. This doesn't hold true for projects where you rely on experts, good processes and tight timelines (aka: the Apollo mission).
by syrusakbary
3/17/2026 at 9:59:59 PM
IME there's a reason it's "move fast and break things" and not "move fast and don't break anything": if the second was generally possible, we wouldn't even need this little aphorism.
And again, I'm not making a claim that the slow-and-steady tradeoff is best for all situations. Just that it is a great tradeoff for foundational platforms like a runtime. On a platform like PostgreSQL or the JVM, the time from initial proposal to release as a stable feature is generally years, and this pace I think has served those platforms well.
But I'm open to updating my priors. Do you think there are foundational platforms out there that iterate quickly and do a good job of it?
by jaredklewis
3/17/2026 at 9:55:22 PM
it’s a well known truism: you can have it cheap, correct, or fast, but you can only have two of them at the same time.
and we’re talking about FOSS here, so cheap kinda has to be one of them.
by dijksterhuis
3/17/2026 at 9:13:27 PM
Allowing AI contributions results in lower quality contributions and allows wild things to come in and disrupt it, making it an unreliable dependency. We have seen big tech experience constant outages due to AI contributions as is...
by oystersareyum
3/17/2026 at 10:51:21 PM
Your comment is why advertisers say that you should repeat your core call to action at least a few times to make it stick.
You’ve read people saying the same thing hundreds of times and have somehow taken that as meaning that it’s credible.
Neither you nor I nor anyone else here knows what the “effects” are, because this is brand new tech, and it’s constantly changing. Yet you’re speaking with absolute confidence.
“Big tech” has downtime all the time, and LLMs did not change that fact. The only difference is that the peanut gallery that is already worked up about AI for philosophical / cultural reasons is suddenly ready to blame AI for every issue under the sun.
You think that you’re making a technical argument, but you’re just repeating the same talking points I see teenagers regurgitating on TikTok. There’s nothing intelligent or credible about it.
by UqWBcuFx6NV4r
3/18/2026 at 4:00:31 AM
My dude, you're making the classic mistake of assuming that because you don't have any first-hand knowledge of problems, other people are equally ignorant.
Don't slap someone else down because you don't know something.
by habinero
3/19/2026 at 12:48:36 PM
> Not allowing AI assistance on PRs will likely decimate the project in the future,
I can't help but wonder if this matter could result in an io.js-like fork, splitting Node into two worlds: safe-but-slow-moving and AI-all-the-things. It would be historically interesting, as the GP poster was, I seem to recall, the initial creator of the io.js fork.
by petercooper
3/17/2026 at 7:28:55 PM
> Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
It's not an AI issue. Node.js itself is lots of legacy code, and many projects depend on that code. When Deno and Bun were in early development, AI wasn't involved.
Yes, you can speed up the development a bit but it will never reach the quality of newer runtimes.
It's like comparing C to C++. Those languages are from different eras (relatively to each other).
by szmarczak
3/18/2026 at 5:17:03 AM
> Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
If and when there is evidence that AI is actually increasing the speed of improvement (and not just churn), it would make sense to permit it. Unless and until such evidence emerges, the risks greatly outweigh the benefits, at least for a foundational codebase like this.
by lmm
3/17/2026 at 11:43:40 PM
> Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
That sort of statement might also be sarcasm in another context: I personally use AI a lot, but also recognize that there are a lot of projects out there that are suffering from low-quality slop pull requests, devs that kinda sign out and don't care much about the actual code as long as it appears to be running, alongside most LLMs struggling a lot with longer-term maintenance if not carefully managed. So I guess it depends a lot on how AI is used and how much ideological opposition to that there is. In a really testable codebase it could actually work out pretty well, though.
by KronisLV
3/17/2026 at 4:17:27 PM
How exactly does it violate the Developer's Certificate of Origin clause?
by athorax
3/17/2026 at 4:23:36 PM
The submitted code must adhere to one of the (a), (b), (c) clauses, and separately the (d) clause, of: https://github.com/nodejs/node/blob/main/CONTRIBUTING.md#dev...
If the submitter picks (a), they assert that they wrote the code themselves and have the right to submit it under the project's license. If (b), the code was taken from another place with clear license terms compatible with the project's license. If (c), the contribution was written by someone else who asserted (a) or (b), and is submitted without changes.
Since LLM-generated output is based on public code but lacks attribution and the license of the original, it is not possible to pick (b). (a) and (c) cannot be picked based on the submitter's disclaimer in the PR body.
by indutny
3/17/2026 at 6:37:51 PM
Not sure if you are intentionally misrepresenting (a), but here is the full text:
(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
by athorax
3/17/2026 at 8:42:22 PM
That seems exclusive of LLMs, as the user didn't create the contribution, the LLM did.
by duskdozer
3/18/2026 at 4:19:59 AM
It's exclusive of code where you wrote 0% of it.
"in part" is a trivial bar to clear.
by Dylan16807
3/18/2026 at 7:52:34 AM
I guess, as a very strict reading where you take the output and insert a newline somewhere... but that sounds against the intent.
by duskdozer
3/17/2026 at 11:27:52 PM
Orthogonal to? Irrespective of the use of?
by paulryanrogers
3/17/2026 at 7:30:27 PM
If there's a "the original" the LLM is copying, then there's a problem.
If there isn't, then (b) works fine; the code is taken from the LLM with no preexisting license. And it would be very strange if a mix of (a) and (b) is a problem; almost any (b) code will need some (a) code to adapt it.
by Dylan16807
3/18/2026 at 5:19:36 AM
> the code is taken from the LLM with no preexisting license
That's not good enough to comply with (b). The code must be specifically covered by an open-source license; it's not enough for it to just not have a license.
by lmm
3/18/2026 at 5:48:30 AM
There's a difference between "no license, all rights reserved" and "no license, public domain". Up until recently, you could assume that not having a license meant the former. But treating the latter as the same would just be silly.
As far as I'm concerned, public domain counts as "an appropriate open source license".
by Dylan16807
3/18/2026 at 6:13:28 AM
> As far as I'm concerned, public domain counts as "an appropriate open source license".
For material whose author is known and has explicitly placed it in the public domain, sure. For code that fell off the back of a truck, not so much.
by lmm
3/18/2026 at 8:13:03 PM
I'm of course assuming the legal status quo holds, where code properly generated by LLM is also explicitly public domain. No shadiness involved.
(There's always a risk of an LLM copying something verbatim by accident, but if the designers are doing their job that chance gets low enough to be acceptable. Human code has that risk too after all. (And for situations that aren't an accident, with the human intentionally using snippets to draw out training text, then if they submit that code in a patch it's just a human violating copyright with extra steps.))
by Dylan16807
3/19/2026 at 12:32:40 AM
> code properly generated by LLM is also explicitly public domain
Where? I hadn't heard of any such ruling.
by lmm
3/19/2026 at 3:28:42 AM
https://en.wikipedia.org/wiki/Artificial_intelligence_and_co...
This page has a pretty good overview.
> Both the federal and circuit courts in the District of Columbia have upheld the Copyright Office's refusal to register copyrights for works generated solely by machines, establishing that machine ownership would conflict with heritable property rights as establish by the Copyright Act of 1975.[16] As of March 2026, the Supreme Court of the United States has denied hearing challenges to the Copyright Office's decision.[17]
by Dylan16807
3/17/2026 at 7:21:08 PM
To many, it qualifies under either A or B, and therefore C as well. Under A, you can think of the LLM as augmenting your own intelligence. Under B, the license terms of LLM output are essentially that you can do whatever you want with it. The alternative is avoiding use of AI because of copyright or plagiarism concerns.
by benatkin
3/17/2026 at 4:39:47 PM
It would be considered (a) since the author would own the copyright on the code.
by charcircuit
3/17/2026 at 5:22:39 PM
Owning copyright of something and writing it are very different things.
by lacoolj
3/18/2026 at 4:02:26 AM
Not in the US. Copyright exists from the moment the work is created.
by habinero
3/17/2026 at 4:51:47 PM
Citation needed.

Whether AI output can fall under copyright at all is still up for debate - with some early rulings indicating that the fact that you prompted the AI does not automatically grant you authorship.
Even if it does, it hasn't been settled yet what the impact of your AI having been trained on copyrighted material is on its output. You can make a not-completely-unreasonable argument that AI inference output is a derivative work of AI training input.
Fact is, the matter isn't settled yet, which means any open-source project should assume the worst possible outcome - which in practice means a massive AI-generated PR like this should be treated like a nuke which could go off at any moment.
by crote
3/17/2026 at 5:02:12 PM
The two main points are that:

1. Copyright cannot be assigned to an AI agent.
2. Copyrighted works require human creativity to be applied in order to be copyrighted.
For point 2, this would apply to cases where the AI one-shots a generic prompt. But for these large PRs, where multiple prompts are used and a human has decided what the design should be and how the API should look, you get the human creativity required for copyright.
In regards to being a derivative work I think it would be hard to argue that an LLM is copying or modifying an existing original work. Even if it came up with an exact duplicate of a piece of code it would be hard to prove that it was a copy and not an independent recreation from scratch.
>the worst possible outcome
The worst possible outcome is they get sued and Anthropic defends them from the copyright infringement claim due to Anthropic's indemnity clause when using Claude Code.
by charcircuit
3/17/2026 at 6:28:32 PM
That indemnity clause is only for Team, Enterprise and API users. Do you know what was used here?

Also the commercial version is limited to “…Customer and its personnel, successors, and assigns…”. I am very much not a lawyer and couldn’t find definitions of these in the agreement, but I am not sure how transferable this indemnity would be to an open source project.
by monocularvision
3/17/2026 at 7:20:29 PM
I reviewed it and it looks like personal Claude Code subscriptions are not covered, so it's riskier than I claimed.
by charcircuit
3/17/2026 at 6:08:43 PM
Why write open-source software at all, when the government could outlaw open-source entirely? What if an asteroid destroys Earth and there are no humans left to enjoy your work? At some point, you have to agree that a risk isn't worth worrying about.

Your "worst possible outcome" is just the arbitrary outcome that crosses some subjective risk threshold of yours, and it's certainly not one I agree with. Furthermore, calling it a "nuke" is a bad analogy, because that implies it can't be put back in the bottle once opened. In reality, we're dealing with legal definitions, which can be redefined as easily as defined.
by phendrenad2
3/18/2026 at 4:34:17 AM
> And it's certainly not one I agree with

Well, it's a good thing you're not on the hook for defending against it, then.
Like I said in another comment, you don't have a license just because they're cool and look neat. You have them specifically to guard against people like patent trolls, who are trying to wreck your shit and take your lunch money. It's not an abstract risk.
by habinero
3/19/2026 at 2:46:39 PM
> Well, it's a good thing you're not on the hook for defending against it, then

If you are on the hook for defending against it, and your risk assessment is based on emotional, irrational fear and not an objective understanding of the risks, then you're doing people a disservice and should step down.
by phendrenad2
3/17/2026 at 10:54:16 PM
This is not how law works. Stop pretending that you’re a lawyer. You do not “always assume the worst”. Stop giving legal advice. You’re very clearly a developer in over his head. Law is not an engineering problem. Legislation is not a technical specification. Christ.
by UqWBcuFx6NV4r
3/18/2026 at 4:23:29 AM
No, they're absolutely correct, and they're not saying either of those things. They're pointing out an enormous hidden risk. Yanno, like an engineer is supposed to do.

You don't have a license because it's what all the cool kids are doing, you have one in case shit goes sideways and someone decides to try and ruin your day. You do, in fact, have to assume the worst.
The "nuke" here is some litigious company -- let's call them Patent Troll Rebranded (PTR) -- discovers that the LLM reproduced large amounts of their copyrighted code. Or it claims to have discovered it. They have large amounts of money and lawyers to fight it out in court and you are a relatively shoestring language foundation.
Either you have to unwind years of development to remove the offending code or you're spending six figures or more to defend yourself in court, all because you didn't bother to anticipate things that are anticipatable.
by habinero
3/18/2026 at 8:18:21 PM
Matteo wrote a pretty neat article that effectively counters your claim: https://adventures.nodeland.dev/archive/who-is-responsible-f...
by ovflowd
3/20/2026 at 12:16:57 AM
Am I reading this right that Matteo is saying provenance is not important because there are lots of historical cases of code lacking provenance?

> Many contributions contain routine, non-copyrightable material, and developers still sign off on them.
> Compilers change code in ways developers do not always track. Template generators create output from their own logic. Stack Overflow answers are often copied into codebases without much thought about licensing.
by dannyfritz07
3/17/2026 at 4:04:07 PM
Do as I say, not as I do.

On a more serious note, I think that this will be thoroughly reviewed before it gets merged, and Node has an entire security team that oversees these things.
by epolanski
3/17/2026 at 4:16:10 PM
As someone who was a part of the aforementioned security team, I'm not sure I'd be interested in reviewing such a volume of machine-generated code, expecting a trap at every corner. The implicit assumption that I have observed at many OSS projects I've been involved with is that first-time contributions are rarely accepted if they are too large in volume, and the "core contributor" designation exists to signal "I put effort into this code, stand by it, and respect everyone's time in reviewing it". The PR in the post violates this social contract.
by indutny
3/17/2026 at 5:11:51 PM
When working for free, you can decide to do what you want; if it's your job, it's a bit different and you may have to do so, especially considering Collina is one of the largest contributors to the project and a member of the technical committee.
by epolanski
3/17/2026 at 5:59:06 PM
> if it's your job, it's a bit different and you may have to do so

Oh, I'd use an LLM to generate large amounts of feedback and request changes!
by exe34
3/17/2026 at 6:17:34 PM
Imagine if every profession reasoned like that when doing something they don't enjoy.
by epolanski
3/17/2026 at 8:39:02 PM
Imagine fighting fire with fire. You don't have to take shit lying down.
by exe34
3/17/2026 at 7:42:59 PM
What a wonderful world we would have, or possibly at least better than the current shit show :)
by kruffalon
3/18/2026 at 12:04:05 AM
We'd have a lot less enshittification all around, I suspect.
by int_19h
3/18/2026 at 11:15:01 AM
Sure thing: your nurse ain't gonna clean your mom, in the restaurant the chef ain't gonna prepare a dish he doesn't like, your accountant ain't gonna file your taxes if you've given him data he doesn't like, etc.

You're paid to do a job; you're either professional or you aren't.
by epolanski
3/22/2026 at 6:04:59 AM
You'd just have to pay more to convince people to do the jobs they hate.

It does mean that nurses might end up getting paid more than software engineers, but why is that a bad thing?
by int_19h
3/18/2026 at 1:02:45 PM
I’d probably leave the hospital if I heard the doctor filed an LLM-generated diagnosis for the nurse to validate.
by lelandfe
3/18/2026 at 5:11:33 PM
So you don't do your job and submit a PR you didn't even read, and I'm supposed to waste my time on it, which I then have to explain at my next performance review? I didn't sign up to read slop, thanks! If my boss wants me to spend 10x the time on this kind of shit, he has to pick something else that I no longer have to do. My time is not elastic. It can't expand to fit your slop.
by exe34
3/17/2026 at 4:23:39 PM
[dead]
by lemagedurage
3/17/2026 at 10:34:15 PM
> it must be noted that this 19k LoC PR was mostly generated by Claude Code and manually reviewed by the submitter

Who reviewed and approved the PR?
by madeofpalk
3/18/2026 at 12:52:38 AM
Personally I’d like to thank you for raising the point. It seems that TSC members are willing to ram the PR through regardless, as per jasnell’s LLM analysis, which honestly seems more like a hostile gish-galloping attempt than an actual honest analysis.
by petetnt
3/18/2026 at 7:14:23 PM
[flagged]
by AgentNode