4/30/2026 at 7:39:30 AM
From https://kristoff.it/blog/contributor-poker-and-ai/: "Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."
by branko_d
4/30/2026 at 7:42:41 AM
Pretty much sums up the LLM fanbase.
by feverzsj
4/30/2026 at 8:56:43 AM
I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.
by discreteevent
4/30/2026 at 12:18:05 PM
"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."

I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10 repo project I haven't worked on recently" to "for this next step I need a vpn multiplexer written in a language I don't use" to, yeah, "this 10k line patch lets me see parts of design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful, sometimes more like a lot of help proving a fact about one line of code.
Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.
by JackC
4/30/2026 at 1:28:41 PM
> to "for this next step I need a vpn multiplexer written in a language I don't use"

but that acceleration is exactly because you're not good at that language
by PunchyHamster
4/30/2026 at 2:20:53 PM
Can't we reach a compromise where a proven track record of good LLM use by a contributor or a company (eg. Bun) can be pre-approved or entertained? A blanket ban on a new technology shouldn't be the default option.
by renticulous
4/30/2026 at 3:23:55 PM
Certainly not in the case of asking it to do something you'd be slow at because you are unfamiliar. If you are not familiar enough with the system, how are you confident that what the LLM has produced is valid and complete? IMO the people saying LLMs make them 10x faster were either very bad to start with (like me!) or are not properly looking at the results before throwing them over the wall.

And how do you know if that is the case or the person/team using the LLMs is one of the good ones?
So the safest answer is just "no".
by dspillett
4/30/2026 at 5:41:37 PM
This is the crux of the problem. LLMs make me significantly faster at writing code I was mediocre or bad at. But when I use it to write code in domains I have more knowledge in I see design and correctness problems all over the place and actively fix them and it slows down my output.

Speed is seductive.
The bar isn't "this is a known good contributor". It's "this is a known good contributor working in a space they have knowledge in and has a track record of actually checking and thinking about LLM output before submitting it." It's much higher and I don't see how you can approve people on an organization-wide basis.
by bcrosby95
4/30/2026 at 8:59:37 PM
> LLMs make me significantly faster at writing code I was mediocre or bad at. But when I use it to write code in domains I have more knowledge in I see design and correctness problems all over the place and actively fix them and it slows down my output.
I think a very similar phenomena is called Gell-Mann amnesia effect: https://en.wikipedia.org/wiki/Michael_Crichton#%22Gell-Mann_...
by thesz
4/30/2026 at 3:21:34 PM
if they had a good track record, the current submission that led to this article damaged it.

i am reminded of this quote: it takes more cleverness to debug code than it takes to write it. if you write code as clever as you can, by definition you are not clever enough to debug it. using an LLM makes your code many times more clever than what you could write yourself, which means by the same definition the code is too clever for you to understand or debug.
by em-bee
4/30/2026 at 5:04:58 PM
I like the new corollary to that rule, which is that if the AI is the best coder in the room and writes code too clever for itself, then no one including the AI can debug the code. Then where does that leave you?
by ModernMech
4/30/2026 at 5:49:26 PM
i love it. just a moment of thought makes clear that LLMs are not capable of debugging their own code, because if they were they would be able to write better code. the LLM code doesn't even need to be clever.
by em-bee
4/30/2026 at 6:00:38 PM
That’s why you don’t use SOTA xhigh models to write your code, so you can use the xhigh model to debug the code.
by wyre
5/1/2026 at 4:31:27 AM
I kneel to Poe's law.
by roncesvalles
4/30/2026 at 9:49:51 PM
Why would it be pre-approved? Code is code; whether it's bad quality LLM code or meatbag code shouldn't matter.

The entire problem is that before, the meatbag code was either not submitted at all (the developer knowing they are not competent enough to do the fix) or the volume of it was low.
With LLMs, people not competent to even review, let alone write, are emboldened to just throw shit at the wall at a rapid pace. So the wall is entirely covered in shit.
by PunchyHamster
4/30/2026 at 2:45:21 PM
No.by ragall
4/30/2026 at 2:46:08 PM
> I'm a good programmer, and it speeds up my work a lot

The problem with this line of thinking is the same as with "I'm so good as a C developer, my code is so safe!".
And we see what reality tells us instead: yes, there exist people for whom these claims are true, but no, they are not even a decently sized minority.
by mamcx
5/1/2026 at 9:30:41 AM
> I'm a good programmer, and it speeds up my work a lot,

Whenever I see this argument, I'm reminded that most programmers don't know what they do for work
by drekipus
by drekipus
4/30/2026 at 2:53:09 PM
I use LLMs as a tutor. They tailor their answers exactly to the situation I am in, even if they hallucinate. I can correct them on the fly and that also serves as training. I try not to copy and paste, and type every line of code by hand. That doesn't always happen, but I usually understand the code I am writing.
by kiba
4/30/2026 at 4:49:41 PM
Why are you writing a vpn multiplexer in a language you don't use? You can't review it.
Are you relying on your colleagues to do that, or is this riddled with bugs? Or is it code you're producing for personal use only so it's not worth mentioning as it's not sped your work up, it's just let you write a little play program.
by mattmanser
5/1/2026 at 9:32:30 AM
No no no it's speed at all costs. Sure, I'm writing junk, but the speed of what I'm doing is *impressive*. You don't understand.
by drekipus
4/30/2026 at 1:51:04 PM
It's great when I know how the code should look. Sometimes I just can't bring myself to write yet another http handler.
by stirfish
5/1/2026 at 11:35:36 AM
There are already libraries for that which are battle tested; why vibe code a unique solution each time?
by i_love_retros
5/3/2026 at 7:20:26 PM
Pretend I said "crud handlers using i_love_retros's favorite library" or whatever makes the most sense for you.
by stirfish
4/30/2026 at 2:36:12 PM
yep. as an expert programmer there are things i did not have access to. for example, i have an embedded-lite hardware project that required a one line patch to a linux kernel module.

i know what a kernel module is and im reasonably certain that the patch is safe, but there is no way in hell i would have found that solution (i would have given up). in a world without llms, the project would have died.
by dnautics
4/30/2026 at 5:47:37 PM
I really hope that you have gone over what the LLM decides to do.

Time and time again I've had a project (such as a DSL to SQL compiler, automatic Rust codegen, CSS development) stall because the LLM took a short sighted decision.
I later found better solutions by querying Reddit and upon consulting the LLM, it basically said "oh shit I'm sorry"
by vatsachak
4/30/2026 at 7:14:30 PM
We have all had that experience, that's just the way this new world is.

It's honestly pretty arrogant to tell a senior engineer that you "really hope" they've gone over some code. AI generated or otherwise.
by switchbak
4/30/2026 at 7:43:12 PM
Sorry. I forgot to add the respect form:

I really hope usted checked your code
At this point I'm pretty sure I did the homework for people in college who are now senior engineers
by vatsachak
5/1/2026 at 3:07:45 PM
I think your parent didn’t word this correctly.

This is commonplace. So commonplace that most have worked “checking the LLM” into their workflow so deeply that essentially all that’s done is a prompt followed by a mini code review.
To suggest a senior engineer blindly accepts modifications without code review kinda hints at you not using LLMs enough to realize how quickly they will make a mess of things if you don’t hold their hand.
by tekknik
5/1/2026 at 11:28:47 AM
Lol why is it arrogant? My workplace is evidence that having a senior engineer title or even a computer science degree doesn't mean you are a good engineer. I honestly think some people have fake credentials and got their jobs via nepotism.
by i_love_retros
5/1/2026 at 5:48:38 PM
i am writing the sw stack for my own pharma startup.

we have 2 very high value DAU, one of whom is me, and probably will max out at 1000 in our wildest dreams.
long term, our biggest concern is a security regression that lets outsiders see our internal information
by dnautics
4/30/2026 at 9:18:41 AM
> However, there are lots of people in the world who live their whole life by vibing

Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?
I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.
by kay_o
4/30/2026 at 9:33:38 AM
> Why do they think they are "helping" with hallucinated rubbish that can't even build?

Because they can't tell the difference between what the machine is outputting, and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have got, and want that reward too.
by automatic6131
4/30/2026 at 10:57:44 AM
the target audience of the cyber typer terminal [0]
by toofy
4/30/2026 at 9:48:51 AM
"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.

AI is absolutely terrible for people like that, as it's the perfect enabler.
by pjc50
4/30/2026 at 1:51:13 PM
> Why do they think they are "helping"

It's not about helping. It's about the feeling of clout. There are still plenty of people who look at Github profile activity to judge job candidates, etc. What gets measured gets repeated.
I believe that most of the ills of social media would disappear, if we eliminated the "like" and "upvotes" buttons and the view counts. Most open source garbage pull requests may likewise go away if contributions were somehow anonymous.
by StevePerkins
4/30/2026 at 8:15:07 PM
Anything you say back to them calling out their nonsense, they'll feed back into their LLM and it will tell them why you're wrong and they're right.
by tencentshill
4/30/2026 at 11:53:40 PM
Holy... that was quite the read.
by foltik
4/30/2026 at 10:30:59 AM
LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.

This is a good thing, it's an opportunity to make open source development processes robust to this kind of sabotage.
by jcgrillo
4/30/2026 at 11:40:59 AM
> LLMs are in this case enabling bad behavior

Yeah that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code, and more broadly, do one's own work. Most people however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or most writing, most emails, whatever), and get the widest exposure and attention, for the least effort.
Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.
by ToucanLoucan
4/30/2026 at 2:57:57 PM
People who make use of LLMs responsibly to create high quality output don't look like they're using AI.

For example, using AI as an editor. It doesn't write anything for you and you try and avoid suggestions unless you're stuck.
by kiba
4/30/2026 at 9:33:30 AM
You're asking why oil doesn't act like water. It's not really an impossible task, it's just not one they agree with.
by drchickensalad
4/30/2026 at 1:05:54 PM
I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people would want to not deal with the output.

This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything which might have touched an LLM or any generative model at some point. Their slur and expletive filled outbursts make every critical response look bad by vague association.
Having sensible explanations like in TFA for the rules and criticism clearly visible should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any eternal september.
by a96
4/30/2026 at 9:31:12 AM
It's the same as cheating in a game. You are given an """advantage""", so lying about it seems like the best option
by ramon156
4/30/2026 at 11:12:01 AM
I wonder how many are account farming.
by MattDaEskimo
4/30/2026 at 3:31:08 PM
Tangential side story, but an interesting one nonetheless.

I was a food delivery driver back in the mid 00's to the mid teens. Early on, GPS was rare and expensive, so to do deliveries and do them effectively, you had to be able to read a map and mentally plan out efficient routes from the stochastic flow of orders coming out.
This acted as a natural filter, and "delivery driver" tended to be an interesting class of people, landing somewhere in the neighborhood of "lazy genius". Higher than average intelligence, lower than average motivation.
Then when smartphones exploded in the early 10's, the bar for delivering fell through the floor, and the job became swamped with people who would be best identified as "lazy unintelligent". Anyone who had a smartphone and not much life motivation was now looking to drive around delivering food for easy money.
Not saying the job was ever particularly glamorous, but it did have a natural mental barrier that tech tore down, and the result was exactly as one would predict. That being said, I'm not sure end users noticed much difference.
by WarmWash
4/30/2026 at 5:47:45 PM
> That being said, I'm not sure end users noticed much difference.

I have friends who order a lot of DoorDash and UberEats and they complain constantly about how awful the delivery service is.
The problem isn't that they haven't noticed, it's that they keep paying for the terrible service, even as the price goes up.
by miyoji
4/30/2026 at 6:06:20 PM
Sums up pretty much how offshoring works in our industry.

There are cool people on the other side as well; unfortunately those aren't usually who get assigned unless escalations take place.
Most shops are built based on juniors that need to build enough curriculum to go elsewhere as soon as they get some scars.
Yet not only do those projects keep coming, now plenty of managers dream about replacing those juniors with agents.
by pjmlp
4/30/2026 at 4:40:44 PM
I love this anecdote. It highlights what our industry continues to forget: the end user doesn't care.

Don't get me wrong, tech is why I am here. But if it works, Alice and Bob don't care one bit about how the product exists.
by bojo
4/30/2026 at 4:54:14 PM
> The end user doesn't care.

well, they think they don't. until their pii gets leaked all over the internet because whoops our s3 bucket was publicly accessible, or until the service goes down because whoops our llm deleted the prod db...
by jbxntuehineoh
4/30/2026 at 7:19:28 PM
PII leaks are normalized now. Most people aren't even aware, or just shrug "oh well" and head to the app store to download the latest gacha game or whatever.
by chickensong
4/30/2026 at 6:07:29 PM
That is why Alice and Bob get Electron apps, Webviews on mobile, mostly coded by offshoring teams.
by pjmlp
4/30/2026 at 10:09:47 AM
> It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

If you will forgive an appeal to authority:
The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.
- Fred Brooks, 1986
by LAC-Tech
4/30/2026 at 12:25:03 PM
Before LLMs we could already see a growing abundance of half-baked engineers in it only for the good pay, willing to work double time to pull things off.

Management, unsurprisingly, deemed those precious. They could email them anytime, working weekends to fix problems their own kind had caused. Sure sir.
They excel at communication. Perfecting the art.
Now LLMs are there to accelerate the trend.
by hirako2000
4/30/2026 at 7:13:06 PM
You're at least describing someone who sounds hard-working... what's the problem?

I'd be more concerned if I was someone who signed up to play ping pong two hours a day and do a bi-weekly commit.
There was a time not so long ago where I was watching "a day in the life of a software engineer" videos on Youtube and I was wondering if some of these were parodies. I still remember one in particular which I'm pretty sure was a parody, but it was only marginally distinguishable from the others.
by aerhardt
4/30/2026 at 8:00:48 PM
I do believe in hardship, as sacrifice. It yields long term benefits for oneself, and for society.

But submission into slavery for immediate gain accomplishes little, and costs society a lot more (physical and mental health issues are a huge burden).
Those parodies you saw were caricatures of elite engineers who sacrificed decades of their lives to become so competent. They can work from home, eat pasta while glancing over a PR and just hit approve.
That you resent the luxury doesn't make it undeserved privilege.
by hirako2000
5/2/2026 at 10:55:38 PM
I've met programmers who severely outclassed me. It was extremely uncomfortable and it took me months to accept that reality and reshape my hurt ego into curiosity and desire to learn from someone clearly superior in the craft.

That being said, most people in the privileged positions you described are there by sheer luck and connections. In the very very best-case scenario that offends them the least: they stumbled upon an opportune position and were smart enough to make full use of it... in the first 6 months (when people pay the most attention and lasting impressions are formed). And then rode the reputation they made for years. Their value as engineers on the team after the initial honest burst of productivity becomes... very unclear from that point and on, shall we say.
Again, I've met engineers who fully deserved their privileges. 2-3 times over 24 years of career though (a good chunk of it as a contractor so I've been around). My anecdotal evidence obviously means nothing but we all develop pattern-matching skills with time, making me think what I saw is generally the statistical curve that would apply almost everywhere. Maybe.
by pdimitar
4/30/2026 at 7:26:37 PM
Working long hours due to incompetence is not a good thing.by ragall
4/30/2026 at 10:30:14 AM
> It's hard to know if LLMs will end up being a net win for the industry.

True. Regardless of that, for sure with LLMs we are taking on technical debt like never before.
by pelasaco
by pelasaco
4/30/2026 at 3:07:37 PM
Why are we not paying it off? I sure am. I refactor code left and right. It is up to you.
by esafak
4/30/2026 at 7:41:31 PM
> Why are we not paying it off? I sure am. I refactor code left and right. It is up to you.

You work alone, I presume? Everyone now is an engineer. In my department, even managers are "writing code". Producing thousands of lines of ansible code that nobody can review, with multiple lines of doc that nobody will read. It is just a mess.
by pelasaco
4/30/2026 at 8:00:06 PM
That's a management problem. If you can't stop non-coders from coding perhaps you can introduce an AI reviewer to take a load off, demand that they be able to defend every line of code, and put them all on pager duty, since they're coders now ;)
by esafak
4/30/2026 at 11:26:59 AM
"Claude, don't create any technical debt please"by secondcoming
4/30/2026 at 4:56:21 PM
i've been told that it's totally fine because once the codebase turns into spaghetti you can simply tell the agent to refactor it and then everything will be ok
by jbxntuehineoh
4/30/2026 at 5:17:23 PM
I know this is a tongue-in-cheek response, but this brings me great pain. The spaghetti begins quickly, and your unit/functional tests won't help you unless you hammered out your module API seams before you even began. Oh, your abstractions are leaking? Your modules know too much about each other? Multiply the spaghetti!
by all2
4/30/2026 at 7:42:56 PM
the multiple layers of vibe make the dozens of code bases even harder to maintain.
by pelasaco
4/30/2026 at 11:08:13 AM
For at least the last 3 decades programming was a field that rewarded utter mediocrity with (relative to other fields) massive remuneration. It has been filled with opportunists for as long as I can remember.
by LaGrange
4/30/2026 at 11:24:38 AM
This is an excellent point. LLMs might merely be exposing and amplifying behaviors that were always there. This can be an opportunity, in that shining light on it may allow us to cleanse ourselves of it. It's fundamentally about integrity, and sadly it's becoming clearer how few possess it (if it ever was otherwise!). But maybe we'll get better at measuring integrity, and make hiring/collaboration decisions based on it.
by jcgrillo
4/30/2026 at 11:36:22 AM
You are talking about bad programmers who are at least able to fool their managers for several years. The people OP is talking about could not even do that and most likely would have dropped out in the first week trying to program full time, since they just don't have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.
by brabel
4/30/2026 at 6:16:51 PM
Thing is, it's not how incompetent they are, but the opportunism itself. The property I mentioned pulls in opportunists regardless of their competence. So eventually, if you work in a field like this, you end up surrounded by them. There are always _some_ around you, of course, everywhere - but across time different fields tended to pull in so many of them they would become suffocating to anyone who isn't one. And if you think you can interview your way out of this - an opportunist will often have an easier time passing a harsh interview process than someone who cares.

IT isn't the only one - finance and law have had the issue since forever, AFAIK - but now I'd rather be in a field that's _actively repellent_ to them.
by LaGrange
4/30/2026 at 11:47:42 AM
I think it's worth noting that a more impactful and maybe even bigger proportion of those opportunists is in management.

Regarding quality overall, I agree, it's truly a cursed field. It was bad before; and with LLMs, going against that tide seems more difficult than ever.
by 3form
4/30/2026 at 2:12:17 PM
wouldn't LLMs do all the tasks that deterministic programs are doing? like chatgpt filing taxes for you instead of using turbotax.
by dominotw
4/30/2026 at 11:02:05 AM
> Programming was a domain that filtered out those people because they found it hard to succeed at it.

I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.
I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.
by Planktonne
4/30/2026 at 9:57:48 AM
> there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason

This response 1000% was crafted with input from an LLM, or the user spends too much time reading output from LLMs.
by dakolli
4/30/2026 at 10:33:17 AM
I have never used an LLM to write. Writing forces me to think (and I edited the comment a couple of times when writing it, which helped me clear up my thinking). "It's a viable way to live and sometimes it's the only way to live" is a personal realization that has taken me some time to understand. You can go back through my comment history to the time before LLMs to check if my style was different then.
by discreteevent
4/30/2026 at 11:22:43 AM
It says a lot that most readers can't distinguish good writing from something an LLM spat out.

Ray Kroc's genius was to make people forget that you get what you pay for.
by hirako2000
4/30/2026 at 11:46:59 AM
False equivalency. If you had the humility to run your own writing through an LLM first, it would have caught it. Just saying.

Not picking on you in particular, but most of the anti-AI crowd can’t present their case compellingly and have an utter lack of humility.
by vehemenz
4/30/2026 at 10:57:05 AM
If you run your writing through an LLM, it can poke holes in your argument, organize your ideas better, or point out that your tone is hostile/dismissive. It doesn’t need to be a replacement for writing or thinking, especially if you’re learning along the way.
by vehemenz
4/30/2026 at 12:08:46 PM
So - in that way - the LLM will be your mentor; it will shape your way of thinking according to algorithms and datasets stuffed into it by corporate creators.

Do you really want that?
There is also a second face to that: people are lazy. They won't develop their own skills but rather will off-load tasks to LLMs, so their communicative abilities will fade away.
That looks like a strong dystopia to me.
by aniou
4/30/2026 at 12:24:23 PM
> LLM will be Your mentor, it will shape Your way of thinking according to algorithms and datasets stuffed into by corporate creators.

How is this mutually exclusive with teaching better than most humans? Part of these "corporate" datasets include deep knowledge of the world's best literature and philosophy, for instance. Why can't it be both?
> Do You really want it?
If I'm in a hurry, don't know where to start, or don't have money for someone to teach me—sure.
> There is also a second face of that: people are lazy. They wouldn't develop their own skills but rather they would off-load tasks to LLM-s, so their communicative abilities will be fade away.
This is a recapitulation of the Luddite argument during the Industrial Revolution. And it's valid, but it has consequences for all technological change, not just this one. There was a world before Google, the Web, the Internet, personal computing, and computers. The same argument applies across the board, and the pre-AI / post-AI cutoff looks arbitrary.
by vehemenz
4/30/2026 at 1:00:29 PM
> teaching better than most humans

Ah, so now we get to the "ed tech" question. What is teaching? Is there a human element to it, and if so, what is it? Or is it something completely inhuman? Or do we need to clarify what meaning of "teaching" we're talking about before we have a discussion?
by svieira
4/30/2026 at 7:04:44 PM
> Part of these "corporate" datasets include deep knowledge of the world's best literature and philosophy

Part of those datasets also include 4chan.
by patrickmay
5/1/2026 at 10:34:11 AM
[flagged]
by aniou
4/30/2026 at 11:49:29 AM
All of which are parts of the writing and thinking skillset, no?by 3form
4/30/2026 at 11:55:43 AM
Right. It can enhance that skillset. Are you suggesting it can’t?

This wouldn’t be a plausible position.
by vehemenz
4/30/2026 at 1:07:47 PM
Rather that avoiding delegating these tasks to the LLM helps you practice that skill.
by 3form
4/30/2026 at 3:55:41 PM
Right, but the LLM can help you practice the skill too. Without the LLM, you're in a self-guided, autodidactic mode. Obviously, that can have its own advantages, but most people—especially novices—aren't in a position to assess their skill level or their progress. The average person isn't going to magically get better at thinking or writing without formal training, or at least some direction.
by vehemenz
4/30/2026 at 10:51:32 AM
I don't get that impression at all. LLMs would have avoided the stylistic repetition of "live". Asking an LLM to reformulate the sentences you quoted yields this slop:

> There are a lot of people who go through life by vibing. And honestly: that’s not automatically “bad.” Sometimes it’s even the only workable way to get through things. The issue is that “vibe-first” people tend to have a pretty loose relationship with truth, rigor, and being pinned down by specifics. They’ll confidently move forward on what sounds right instead of what they can verify.
I'll finish this post with a sentence containing an em-dash -- just to confuse people -- and by remarking on how sad I find it that people latch onto dashes and complete sentences as the signifiers of LLM use, instead of the inconsistent logic and general sloppiness that's the actual problem.
by codeflo
4/30/2026 at 1:33:41 PM
[dead]
by redsocksfan45
4/30/2026 at 10:28:16 AM
I'm firmly in the LLM fanbase. Not because I can't type code (I was doing it for over 17 years, everywhere from low level hardware drivers in C to web frontend to robot development at home as a hobby - coding is fun!), but because in my profession it allows me to focus more on the abstraction layer where "it matters".

I'm not saying that I'm no longer dealing with code at all though. The way I work is interactively with the LLM, and I pretty much tell it exactly what to do and how to do it. Sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other type of programming, the only way to achieve valuable and correct results is by knowing exactly what you want and expressing that exactly and without ambiguity.
But I no longer need to remember most of the syntax for the language I happen to work with at the moment, and can instead spend time thinking about the high level architecture. To make sure each involved component does one thing and one thing well, with its complexities hidden behind clear interfaces.
Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
by ZaoLahma
4/30/2026 at 10:36:42 AM
This mindset is fine (it's mine, essentially, too).
But it absolutely has to be combined with verification/testing at the same speed as code production.
by ap99
4/30/2026 at 10:59:12 AM
I generally do have that mindset, but over the past year of Claude Code use I do notice that I'm clearly losing my understanding of the internals of projects. I do review LLM-generated code, understand it, no problem reading/following through. But then someone asks me a question, and I'm like… wait, I actually don't know. I remember the instructions I gave and reviewing the code, but I don't actually have a fine-details model of the actual implementation crystallized in my mind. I need to check: was that thing implemented the way I thought it was, or not? Wait, it's actually wrong/not matching at all what I thought! It's definitely becoming uncomfortable and makes me reconsider my use of Claude Code pretty significantly.
by dgellow
4/30/2026 at 5:01:05 PM
> I’m like… wait, I actually don’t know.
reminds me of the experience of reading a math text without doing the exercises, thinking that you've understood the material, and then falling flat on your face when you attempt to apply your "understanding" to a novel problem. there's a significant difference between passively reading something and really putting active effort into it. only the latter leads to actual understanding ime
by jbxntuehineoh
4/30/2026 at 1:47:27 PM
Same experience. I've been writing code for many decades, but that experience doesn't mean I can remember what I read when reviewing generated code. I write small, focused commits, but I have to take a day off each week to make changes by hand just to mentally keep up with my own codebase, and I still find structures that surprise me. It's not necessarily that the code quality is poor, but it's not how I (thought I) had designed it. It's led to a weakening of my confidence when adding to or changing existing architecture.
by toddmerrill
4/30/2026 at 1:15:43 PM
I've had this issue too, and I feel it was an important lesson—kind of like the first time getting a hangover.
On the other hand, LLMs comment code better than I do, so over a long enough time horizon, their output could be more understandable later than code I've written myself (we've all had the experience of forgetting how things work).
by vehemenz
4/30/2026 at 2:10:00 PM
It's not. Invariably, the code is locally fine and globally nonsense.
by ori_b
5/1/2026 at 7:16:58 PM
> On the other hand, LLM-generated code comments better than I do, so given a long enough time horizon, it could be more understandable at a later time than code I've written myself (we've all had the experience of forgetting how things work).
Writing and rewriting a piece of software performs what is called "spaced repetition" [1].
[1] https://en.wikipedia.org/wiki/Spaced_repetition
You ask questions about code when you implement something, and if you cannot answer these questions, you go to the code to find the answers and refresh your understanding of it.
For this to work, you have to be interested in understanding the code, and the code should be created at a pace you can keep up with.
Software engineers usually write code economically because they need to remember and understand it. Vibe coders do not have this particular constraint; they just do not aim for the most understandable code possible, even if there are more comments in it.
by thesz
4/30/2026 at 3:44:55 PM
I do think that this is natural. When you use LLM coding tools, you're becoming a lot more like an architect/staff/manager, rather than the direct coder. You're setting out the spec, coming up with the design, and coming up with the high-level structure of the project.
However, this comes at the cost of losing track of the minute details of the implementation, because you didn't write it yourself. I find it a bit analogous to code I've reviewed vs code I've written.
However, I've found using AI for code structure summary and questioning tends to be a good way to get around it. I might forget faster, but I also pick it up faster.
by esyir
4/30/2026 at 3:13:15 PM
[dead]
by esafak
4/30/2026 at 3:58:59 PM
I've found that for non-trivial features, I typically benefit from 3-4 rounds of: are you sure this isn't tech debt; are you sure this is thoroughly tested for (manually insert the applicable cases, because they aren't great at this, even if explicitly asked); are you sure this isn't re-inventing wheels, or adding unnecessary complexity by not using existing infrastructure it should, or that other existing code would not benefit from moving to; are you sure you can't find any bugs; in hindsight, are you sure this is the best design?
Then, after it says yes, I'm sure this is production ready and we're good to move on, you have Codex and Gemini both review it one last time, and ask it to address their feedback where it's valuable.
After all this, it's the only time I'll look at the code and review it and make sure it's coherent.
Until then, I assume it's garbage.
I'd estimate this still improves velocity by 10x, and more importantly, allows me to operate at a pace I couldn't without burning out.
by onlyrealcuzzo
4/30/2026 at 6:13:03 PM
working this way would drive me nuts
by em-bee
4/30/2026 at 11:15:58 PM
Why? It's not that different from managing engineers.
You're just getting less work done on a slower cadence and asking the questions in design review and in code reviews...
by onlyrealcuzzo
4/30/2026 at 11:48:07 PM
it's very different. LLMs don't behave like people. they don't learn.
i don't mind managing people, but i don't want to manage machines unless i can control them with the precise languages that the command line and programming languages use. prompting a LLM is too vague an interface for me, the outcome is too unreliable, too unpredictable.
by em-bee
4/30/2026 at 1:13:01 PM
One-off tasks and parts of the stack that already have lots of disposable code do not need the same scrutiny as everything else. Just as there is a broad continuum of code importance, there is a broad continuum of testing requirements, and this was the case before AI. Keeping this in mind, AIs can do some verification and testing, too.
by vehemenz
4/30/2026 at 10:39:44 AM
> Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.
Any examples of how you see some engineers being left behind?
by 0xpgm
4/30/2026 at 3:35:06 PM
> Any examples how you see some engineers being left behind?
I don't know where you live, but around where I live in Denmark, you'd fail a senior interview at a lot of places for not using AI. Even places which aren't exactly AI fans use AI to some extent.
The biggest challenge we face right now is figuring out how to create developers who have enough experience to use the AI tools in a critical manner. Especially because you're typically given agents for various tasks, which are already configured to know how we want things written.
by Quothling
4/30/2026 at 3:44:48 PM
Over here in your southern neighbour, everyone is supposed to be doing AI and is being evaluated on it, yet in many projects, if clients don't sign off on the use of AI tools, there is no AI to use anyway.
Additionally, there are the AI targets set by C-suites based on what everyone is saying on TV, versus what we can actually deliver based on the available data sets, integration points, and naturally those sign-offs for data governance and hallucination guardrails.
by pjmlp
4/30/2026 at 7:12:42 PM
I work for a Fortune 50 that is heavily tech-based.
If you can't interview without immediately reaching for an LLM, you are considered unfit to work here.
by ofjcihen
4/30/2026 at 7:22:40 PM
Around here, C levels have AI adoption goals and are actively pushing it throughout organisations. Even when it doesn't exactly make sense.
by Quothling
4/30/2026 at 5:40:06 PM
> Everyone is jumping off the cliff
> If you don't jump off the cliff you're falling behind
by vatsachak
4/30/2026 at 7:23:11 PM
I was just giving them an anecdotal example of what they were asking for. I think the answer is somewhere in the middle, but I'm not in a position to push any form of change on the C levels.
by Quothling
4/30/2026 at 7:54:01 PM
I've noticed that back in Europe everyone's in a panic mode, but that's because of the inferiority complex most people have vs both the US and China. It's unwarranted.
by ragall
4/30/2026 at 11:02:47 AM
Probably in cognitive surrender. I have one such colleague and he is driving me crazy. "Claude said that ..."
by xyzal
4/30/2026 at 11:12:10 AM
I'm starting to notice how those who don't use AI end up having to hand tasks over to people who can get them done quicker.
It's anecdotal for sure, but a pattern seems to be emerging around me: expectations of velocity increase, and those who don't use AI can't keep up.
by ZaoLahma
4/30/2026 at 3:49:33 PM
Why is velocity the overriding goal?
by ericjmorey
4/30/2026 at 4:15:47 PM
Shit processes. I don't know what places most of those people work at that crap is being merged into production at an insane pace. You would expect any serious piece of software to be important enough to have its code reviewed by at least one human.
Kind of... I don't know. To have such requirements placed on you from the top down and not fight back, to just take it head on, not even maliciously, not even opposing it on a technical basis, just going "yeah, you've now gotta ship faster or you're left behind, so therefore LLMs must be the future!", with no critical thought attached. Is this shit coming from experienced engineers?
It's preposterous that we're relying on "it's better because I feel like it", "dudes who don't use it are falling behind at work", "they ask for it in job interviews".
by Bridged7756
4/30/2026 at 7:12:37 PM
Again, I have to point out that AI is not an abstraction layer. It blows my mind that engineers with years of experience somehow don't understand this.
It would be an honor to be "left behind" by people who practice their craft with such carelessness.
(Frankly, I should probably stop replying to self-professed LLM boosters entirely since there’s a good chance I’m just chatting with an LLM.)
by archagon
4/30/2026 at 10:38:15 AM
Fanbase, maybe. Software engineers using these projects? Probably forking and updating them themselves.
FWIW, I've opened a half dozen PRs from LLMs and had them approved. I have some prompts I use to make it very difficult to tell they are AI.
However, if it is a big anti-LLM project, I just fork and have agents rebase my changes.
by wallst07
by wallst07
4/30/2026 at 11:37:07 AM
Your employer allows/encourages this? Do you run that stuff in production? Would you mind telling us where you work so we can avoid using their products? It is just not possible to trust the software that emerges from the process you've described.
by jcgrillo
4/30/2026 at 5:18:02 PM
so, they are approved, which means they were most likely reviewed. yet you still think the software cannot be trusted because of that, and even want to name and shame a company. utterly stupid.
by ejpir
4/30/2026 at 5:26:22 PM
Yes. If a company is running vibeslopped compilers to build their production artifacts, I absolutely want to know which one it is, so I can protect myself from their software.
> utterly stupid
That's completely uncalled for.
EDIT: What exactly do you mean by:
> most likely reviewed.
Let's say every line was actually reviewed. That's still nowhere near good enough. The changes are being reviewed by the wrong people. Not the maintainers of the project, just some random folks who have inherited a vibecoded fork.
by jcgrillo
4/30/2026 at 1:35:48 PM
[dead]
by redsocksfan45
4/30/2026 at 5:03:20 PM
"I aM someWhAt oF a DeVelOpER MySelF"
by varispeed
4/30/2026 at 10:21:03 AM
Not really - I imagine as with almost everything in life there's a normal distribution, in this case of the quality with which people use AI tools.
by andy_ppp
4/30/2026 at 2:46:12 PM
The normal distribution doesn't account for things like "huge megacorporations pour billions of dollars into accelerating product adoption" or "other companies force their employees to use AI whether they want to or not", though.
by DonaldPShimoda
4/30/2026 at 6:38:14 PM
This is a spam problem more than anything else. It's not really an AI problem, except that it's AI that is enabling this new type of spam.
Imagine there's no AI, but for some reason you have people hiring armies of cheap overseas devs and using them to produce mediocre-quality drive-by PRs. The effect would be the same.
AI can be used to make quality code, but that requires careful use of the tool... like any other tool. This isn't careful contributions made by someone who knows the project and its goals and is good at using the tool. This is spam.
by api
4/30/2026 at 6:50:27 PM
Exactly, people could have "consulted Google" or "consulted Stack Overflow" and had the same issues. It's about the end result, not how the code got to that end result, and the submitter is responsible for the quality of the submission regardless of whether AI was used.
To reject submissions where the dev "consulted AI" is like rejecting iron ore that was mined by a machine rather than a human. The quality of the ore is what should be measured, not how it was obtained.
by colordrops
4/30/2026 at 8:24:20 PM
I agree, but the problem comes back to how to evaluate quality at scale. That is very hard. It's easier to just say no AI, because that at least turns off the fire hose.
by api
5/1/2026 at 6:26:40 AM
It sounds like they are rejecting submissions where they get even a whiff of AI being "consulted", though. That's not quite the same as turning off the firehose.
by colordrops
5/1/2026 at 3:44:58 PM
No, that's just reactionary.
The discourse around AI in the arts, and other creative and craft fields, is utterly identical to the discourse around photography when it came out, to the point that you could search-and-replace terms and have the same dialogue.
by api
4/30/2026 at 4:08:44 PM
I'm personally amazed that _Large_ OSS projects don't have the appropriate automation in place to prevent non-compiling or non-linter-passing submissions.- Hooks (although there's no clean way to enforce they be "installed" on a clone), GHA Workflows (or their equivalents on other forges).
This might be my bias showing, but these are items I would consider table-stakes for a project of a certain size / level of popularity.
It feels like a lot of the "AI is shit at contributing" problems could be addressed in part by better automated checks and balances.
by zeeveener
4/30/2026 at 4:23:30 PM
Those things cost resources, and now you're introducing a new attack vector: open up a bunch of shit PRs, burn a lot of cash for the target organization.
by jmcqk6
4/30/2026 at 6:32:17 PM
You're right. It doesn't solve for all scenarios and doesn't block malicious actors.
I do believe, however, that it would have a meaningful impact on the "drive-by" PRs that keep being used as examples; the thoughtless, throw-spaghetti-at-the-wall PRs that do not have malignant intent behind them.
Many large OSS projects would have the resources to eat that cost with donors, sponsors, and OSS hand-outs. That's why I clarified in my original post: I know this is not a general solution.
by zeeveener
4/30/2026 at 7:30:16 PM
The problem is you can get the LLM to iterate until it compiles and lints and even passes LLM review, but will that actually improve the quality of the contribution, or just produce more line noise to mechanistically meet criteria?
For large complex projects, often the kernel of an idea is the core value of a contribution, and it can take a lot of iteration to figure out how to structure it. Token-bashing until CI is green does nothing to ensure the best approach is selected.
by schmichael
5/1/2026 at 1:10:55 AM
> The problem is you can get the LLM to iterate until it compiles and lints and even passes LLM review
Worst of both worlds with this, if you're doing it in a GitHub workflow. You wind up effectively paying for the testing/validation layer of someone else's irresponsible LLM use.
by solid_fuel
5/1/2026 at 5:57:57 PM
For sure, but that's not what I was referring to in my posts. I'm specifically referring to the callout that the contributions are so low quality they don't even pass linting or compile.
I could have been more explicit on that nuance, I suppose.
by zeeveener
4/30/2026 at 7:41:48 PM
That's why you sandbox. You can mitigate most low-hanging DoS fruit by running your server-side hooks in a per-tenant cgroup that limits CPU and memory usage. One tenant per public key for trusted contributors, and one general-purpose tenant shared by all new/unknown contributors.
by 10000truths
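A minimal sketch of the resource-capping idea. A `ulimit` subshell is used here as the portable stand-in; a real per-tenant deployment would use cgroups (e.g. via systemd's `MemoryMax`/`CPUQuota` resource controls, as the comment suggests). The limit values and the `run_limited` name are illustrative assumptions:

```shell
# Run an untrusted hook or build under resource caps. ulimit is the
# minimal portable form; per-tenant cgroups (e.g.
# systemd-run --scope -p MemoryMax=2G -p CPUQuota=50%) are the robust
# production version.
run_limited() {
    (
        ulimit -t 600        # cap CPU time at 600 seconds
        ulimit -v 2097152    # cap virtual memory at ~2 GiB (KiB units)
        "$@"
    )
}

run_limited echo "untrusted build would run here"
```

The subshell matters: the limits die with it, so the calling server process is unaffected.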
4/30/2026 at 5:19:13 PM
Can't you prevent pushing from the client side with pre-commit hooks? I would expect a hook to fire on the developer's computer that prevents them from even committing/pushing (unless they nuke the hook in their local repo copy).
by all2
4/30/2026 at 5:44:32 PM
You have to manually install hooks in your local repository. They aren't propagated as part of the repo. Git has intentionally made hooks require a very explicit opt-in.
by 0xffff2
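The explicit opt-in can be reduced to one config command. A sketch, assuming the project tracks its hooks in a `.githooks/` directory (the directory name and the `make build` check are conventions here, not anything git mandates):

```shell
# Track hooks in the repo itself, rather than in the unversioned .git/hooks/.
mkdir -p .githooks
cat > .githooks/pre-commit <<'EOF'
#!/bin/sh
# Placeholder check; substitute the project's real build/lint command.
make build
EOF
chmod +x .githooks/pre-commit

# The one-time, per-clone opt-in git requires:
git config core.hooksPath .githooks
```

Git will then look in `.githooks/` instead of `.git/hooks/`, but only for contributors who ran the `git config` step themselves; nothing propagates automatically on clone.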
4/30/2026 at 6:13:44 PM
Oh, good to know. I haven't used them much, so I'm a bit ignorant as to how they work in larger projects.by all2
4/30/2026 at 4:25:22 PM
> Hooks (although there's no clean way to enforce they be "installed" on a clone), GHA Workflows (or their equivalents on other forges).
Git supports pre-receive hooks. But big multitenant forges like GitHub.com don't allow you to configure them because they're difficult to secure well. (Some of their commercial features are likely based on them, though.)
If you self-host a forge, though, you can configure arbitrary pre-receive hooks for it in order to do things like prevent pushes from succeeding if they contain verifiably working secrets, for example. You could extend that to do whatever you want (at your own risk).
by pxc
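A sketch of what such a self-hosted pre-receive hook could look like. The stdin protocol (one `old new ref` line per updated ref, and an all-zero new rev for deletions) is git's actual hook interface; `BUILD_CMD` and the `make build` default are placeholder assumptions — a Zig project might use `zig build test` instead:

```shell
#!/bin/sh
# pre-receive: reject any push whose new tip doesn't build.
# BUILD_CMD is a placeholder; point it at the project's real check.
BUILD_CMD=${BUILD_CMD:-"make build"}
zero=0000000000000000000000000000000000000000

while read oldrev newrev refname; do
    # Ref deletions arrive with an all-zero new rev; nothing to build.
    [ "$newrev" = "$zero" ] && continue

    tmp=$(mktemp -d)
    # Materialize the pushed tree without touching the bare repo.
    git archive "$newrev" | tar -x -C "$tmp"

    if ! (cd "$tmp" && sh -c "$BUILD_CMD" >/dev/null 2>&1); then
        echo "rejected: $refname does not build" >&2
        rm -rf "$tmp"
        exit 1
    fi
    rm -rf "$tmp"
done
exit 0
```

Exiting non-zero is all it takes to refuse the whole push; this is exactly the kind of script you would want to run inside the per-tenant resource caps discussed elsewhere in the thread.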
4/30/2026 at 6:00:08 PM
You're still talking about compute resources that need to be paid for and maintained. Spamming AI PRs is going to cost a lot of money.
by jmcqk6
4/30/2026 at 6:47:49 PM
At the end of the day, LLM slop PR spammers are essentially adversarial actors. Git hooks are ultimately a tool for good-faith developers within a given community (your team, your company, your regular contributors) for maintaining good hygiene and avoiding lapses into preventable mistakes. That's true for all CI, too.
And the truth is, too, that it's super easy for an LLM agent to run a build and tests. Good-faith contributors using LLMs will never open PRs that don't build, not because they're willing to "go the extra mile" and do manual work, but because they give the slightest fuck and have any respect or consideration for the humans they're working with.
LLM spam presents a different problem than any of that stuff was meant to solve. It's a malicious act, and you're right that tooling that burns the defender's compute can't be a solution. :-\
by pxc
4/30/2026 at 4:24:07 PM
All of my personal projects, many of which will never be publicized, use hooks and GHA to ensure compilation of changes.
It is quite strange that a large project like Zig would not have such a thing. I'm sure it's not trivial, but it seems important to invest time into.
by abustamam
4/30/2026 at 4:24:13 PM
One of my pet peeves with git (and systems both similar to and based on it) is that automated tests run after you've made the commit and push.
In my mind, the commit (let alone the push to a publicly accessible server) should be done after, and only if, the automated tests are successfully executed. And there's no easy way to implement this, other than having a dirty branch that you discard after rebasing onto a more long-lived one.
by papyrus9244
5/2/2026 at 10:37:27 AM
There are lots of reasons to commit when things aren't yet working. How else would you share code that you need help with?
The solution is gated merges. No merging to main unless CI passes.
Every org I have worked at bemoaned a flaky release process and refused e2e black box acceptance tests because "they are too slow." And every org I have worked at has realized they were wrong. We got appropriate gates that run in 5 minutes and an ops person is the only person who can force past any gate in case of emergency.
Guardrails like this only become more important with the accelerant that is AI.
by sethammons
4/30/2026 at 4:52:15 PM
You can use a pre-receive hook on a git server to reject pushes that fail compilation. The downside is that it requires admin access on git forges, so you're only able to do this if you self-host.
by 10000truths
4/30/2026 at 4:40:01 PM
Pre-commit hooks exist. People just don't like being prevented from committing for reasons such as this.
by jwolfe
4/30/2026 at 8:43:50 PM
But... this particular project does have such automation in place? It isn't hard to find:
https://codeberg.org/ziglang/zig/src/branch/master/.forgejo/...
by lexh
4/30/2026 at 4:23:53 PM
I mean, even having linters and everything still creates a whole bunch of noise in their PR section, not to mention that a lot of the changes I make to stuff written by Codex are not things that get caught by linters.
It's just bad/wrong/context-lacking decisions and mental models it introduces that, if you're not careful, will just create a massive mess of a codebase. (I know, because I've tried, and had to deal with it.)
And if someone vibecodes a PR and it works, why don't they just share the prompt so a repo owner could vibecode it themselves?
by sauercrowd
4/30/2026 at 4:32:43 PM
Vibe coding is often not a single prompt, it's an entire workflow (if you're doing it right).
by abustamam
4/30/2026 at 4:36:12 PM
Don't disagree, but "if you're doing it right" is a big asterisk for an open source project where you have no idea what quality bar people are at.
And in my experience it's quite hard to figure that out by quickly looking at it.
Not to mention that contributions on github (almost?) never include the prompt chain anyway, so the status quo is even worse
by sauercrowd
4/30/2026 at 4:41:57 PM
That's a fair point. I was just speaking generally.
by abustamam
4/30/2026 at 10:05:39 AM
Fake it till you make it. Seems like LLMs have caught on to that too.
by bvan
4/30/2026 at 10:16:22 AM
You can curb an LLM into doing what you want. Unfortunately, people don't have the patience or the skill.
by nurettin
4/30/2026 at 10:21:07 AM
People who have skill can do the same without LLMs, maybe slightly slower on average but on a more predictable schedule.
by sesm
4/30/2026 at 10:58:49 AM
I wouldn't say slightly slower; LLMs are massively useful for software engineering in the right hands.
For some personal projects I still stick to the basics and write everything by hand, though. It's kinda nice and grounding, and almost feels like a detox.
For any new software engineer, I’m a strong advocate of zero LLM use (except maybe as a stack overflow alternative) for your first few months.
by dannyw
4/30/2026 at 4:07:43 PM
It's significantly slower to use LLMs for some things. The only thing it excels at is generic, broad tasks. Getting the 90% done. I find that it's less cumbersome to get it mostly right and touch it up yourself than to prompt over details like syntax.
by Bridged7756
4/30/2026 at 11:00:26 AM
The chat UX, with a fake human lying to you and framing things emotionally, really doesn't help. And it's pretty much not possible to get away from it, or at least I haven't found out how.
I would love to see a model trained to behave way more like a tool instead of auto-completing from Reddit language patterns…
by dgellow