1/17/2025 at 12:45:13 AM
> A major project will discover that it has merged a lot of AI-generated code
My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, ChatGPT wrote that".
by kirubakaran
1/17/2025 at 5:47:04 AM
I have heard the same response from junior devs and external contractors for years, either because they copied something from StackOverflow, or because they copied something from a former client/employer (a popular one in China), or even because they just uncritically copied something from another piece of code in the same project.
From the point of view of these sorts of developers, they are being paid to make the tests go green or to make some button appear on a page that kinda-sorta does something in the vague direction of what was in the spec, and that's the end of their responsibility. Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.
I have spent a lot of time trying to mentor these sorts of devs and help them understand why just doing the bare minimum isn't really a good investment in their own career, not to mention that it's disrespectful of their colleagues, who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... It seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them. Then you open LinkedIn years later and it turns out they've somehow failed up to manager, architect or executive, while you're still struggling along as a code peasant who happens to take pride in their work.
Sorry, got a little carried away. Anywho, the point is LLMs are just another tool for these folks. It's not new, it's just worse now because of the mixed messaging where executives are hyping the tech as a magical solution that will allow them to ship more features for less cost.
by alisonatwork
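For concreteness, a tiny hypothetical Python snippet showing the three smells listed above: an unused variable, an unreachable block, and a comment that has nothing to do with the business case. None of this is from any real PR.

    def apply_discount(price: float) -> float:
        tax_rate = 0.21              # unused: nothing below ever reads it
        if price > 100:
            return price * 0.9
        else:
            return price
        # unreachable: every branch above has already returned
        print("calculating shipping")  # misleading: no shipping happens here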
1/17/2025 at 7:56:21 AM
> I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.
For them, this clearly sounds like personal success.
There's also a lot of folks who view programming just as a stepping stone in the path to becoming well paid managers and couldn't care any less about all of the stuff the nerds speak about.
Kind of unfortunate, but oh well. I also remember helping someone with their code back in my university days: none of it was indented, things that probably shouldn't be on the same line were, and their answer was that they didn't care in the slightest about how it works; they just wanted it to work. Same reasoning.
by KronisLV
1/17/2025 at 8:51:09 AM
I used to be fascinated by computers, but then I understood that being a professional meeting attender pays more for less effort.
by anal_reactor
1/17/2025 at 9:02:10 AM
I still like it, I just acknowledge that being passionate isn't compatible with the corpo culture.
Reminds me of this: https://www.stilldrinking.org/programming-sucks
by KronisLV
1/17/2025 at 2:50:22 PM
That is an all-time favorite that I've come back to many times over the years. It's hard to choose just one quote, but this one always hits for me:
> You are an expert in all these technologies, and that’s a good thing, because that expertise let you spend only six hours figuring out what went wrong, as opposed to losing your job.
by epiccoleman
1/17/2025 at 1:06:45 PM
Pays more for less effort and frequently less risk. Just make sure to get enough headcount to go over the span of control number.
by oblio
1/17/2025 at 8:44:39 AM
> Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.
Wow. I am probably very lucky, but most of the managers, and especially the architects, I know are actually also exceptional engineers. A kind of exception was a really nice, helpful and proactive guy who happened to just not be a great engineer. He was still very useful for being nice, helpful and proactive, and was being promoted for that. "Failing up" to management would actually make a lot of sense for him; unfortunately, he really wanted to code.
by oytis
1/17/2025 at 7:34:56 AM
What you describe is the state of most DevOps.
Copy/download some random piece of code, monkey around to change some values for your architecture, and up we go. It works! We don't know how, we won't be able to debug it when the app goes down, but that's not our problem.
And that's how you end up with bad examples or a lack of exhaustive options in documentation, most tutorials being a rehash of some quickstart, and people telling you "just use this helm chart or ansible recipe from some github repo to do what you want". What do those things really install? Not documented. What can you configure? Check the code.
Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.
by arkh
1/17/2025 at 8:44:24 AM
I'm ashamed to say this is me with trying to get Linux to behave, tbh.
I like fully understanding my code and immediate toolchain, but my dev machine feels kinda held together with duct tape.
by whatevertrevor
1/17/2025 at 9:42:36 AM
Oof, same to be honest. It doesn't help that at some point Apache changed its configuration format, and that all of these tools seem to have reinvented their configuration file format. And that, once it's up, you won't have to touch it again for years (at least in my personal server use case; I've never done enterprise-level ops work beyond editing a shell script or CI pipeline).
by Cthulhu_
1/17/2025 at 9:09:52 AM
I disagree. A lot of DevOps is using abstractions, yes. But using a Terraform module to deploy your managed database without reading the code and checking all options is the same as using a random library without reading the code and checking all parameters in your application. People skimping on important things exist in all roles.
> people tell you "just use this helm chart or ansible recipe from some github repo to do what you want". What those things really install? Not documented. What can you configure? Check the code.
I mean, this is just wrong. Both Ansible roles and Helm charts have normalised documentation. Official Ansible modules include docs with all possible parameters and concrete examples of how they work together. Helm charts also come with a file which literally lists all possible options (values.yaml). And yes, checking the code is always a good idea when using third-party code you don't trust. Which is it you're complaining about: that DevOps people don't understand the code they're running, or that you have to read the code? It can't be both, surely.
> Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.
Rose tinted glasses, and bias. You seem to have worked only with good developer practices (or forgotten about the bad), and bad DevOps ones. Every developer fully understands React or the JS framework du jour they're using because it's cool? You've never seen some weird legacy code with no documentation?
by sofixa
1/17/2025 at 10:37:52 AM
> Rose tinted glasses, and bias. You seem to have worked only with good developer practices (or forgotten about the bad), and bad DevOps ones. Every developer fully understands React or the JS framework du jour they're using because it's cool? You've never seen some weird legacy code with no documentation?
Not really. I'm mainly in code maintenance, so good practices are usually those the team I join can add to old legacy projects. Right now I'm trying to modernize a web of 10-20 old ad-hoc apps. But good practices are known to exist and are widely shared, even between dev ecosystems.
For everything ops and devops, it looks like there are islands of knowledge which are not shared at all. At least when coming at it from a newbie point of view. Take telemetry, for example: people who worked at Google or Meta all rave about the mythical tools they got to use in-house and how they cannot find anything equivalent outside... and yes, when you check what is available "outside", it looks less powerful, and all those solutions feel the same. So you get the FAANG islands of tools and ways of doing things, the big-box commercial offerings with their armies of consultants, and then the open-source and freemium ways of doing telemetry.
by arkh
1/17/2025 at 11:11:45 AM
> For everything ops and devops it looks like there are like islands of knowledge which are not shared at all
Very strongly disagree; if anything, it's the opposite. Many people read the knowledge shared by others and jump to thinking it's suitable for them as well. Microservices and Kubernetes got adopted by everyone and their grandpa because big tech uses them, without any consideration of whether they're suitable for each org.
> At least when coming with a newbie point of view. Like for example with telemetry: people who worked at Google or Meta all rave about the mythical tools they got to use in-house and how they cannot find anything equivalent outside... and yes when you check what is available "outside" it looks less powerful and all those solutions feel like the same. So you got the FAANG islands of tools and way to do things, the big box commercial offering and their armies of consultants and then the OpenSource and Freemium way of doing telemetry.
The latter two are converging with OpenTelemetry, Prometheus, and related projects. Both ways are well documented, and there are a number of projects and vendors providing alternatives and various options. People can pick what works best for them (and it could very well be open source but hosted for you, cf. Grafana Cloud). I'm not sure how that's related to "islands of knowledge"... observability in general is one of the most widely discussed topics in the space.
by sofixa
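As a small illustration of that convergence, a minimal tracing sketch using the opentelemetry-api/opentelemetry-sdk Python packages; the service and span names are invented, and a real deployment would swap the console exporter for an OTLP exporter pointed at whichever backend (self-hosted or vendor) you picked.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Wire up a provider that prints finished spans to stdout.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")   # name is illustrative
    with tracer.start_as_current_span("charge-card"):
        pass  # business logic here; the span is exported when the block exits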
1/17/2025 at 11:23:48 AM
It's definitely worse for LLMs than for StackOverflow. You don't need to fully understand a StackOverflow answer, but you at least need to recognise whether the question could be applicable. With LLMs, it makes the decisions completely for you, and if it doesn't work you can even get it to figure out why for you.
I think young people today are at severe risk of building up what I call learning debt. This is like technical debt (or indeed real financial debt): they're getting further and further, through university assignments and junior dev roles, without doing the learning that we previously needed to do. That's certainly what I've seen. But at some point even LLMs won't cut it for the problem they're faced with, and suddenly they'll need to do those years of learning all at once (i.e. the debt comes due). Of course, that's not possible, and they'll be screwed.
by quietbritishjim
1/17/2025 at 12:44:29 PM
> With LLMs, it makes the decisions completely for you, and if it doesn't work you can even get it to figure out why for you.
To an extent. The failure modes are still weird. I've tried this kind of automation loop manually to see how good it is, and while it can, as you say, produce functional mediocre code*… it can also get stuck in stupid loops.
* I ran this until I got bored; it is mediocre code, but ChatGPT did keep improving the code as I wanted it to, right up to the point of boredom: https://github.com/BenWheatley/JSPaint
by ben_w
1/17/2025 at 5:59:10 AM
>Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.
Maybe I am just supremely lucky, but while I have encountered people like this (in the coding part), it is somewhat rare in my experience. These comments on HN always make it seem like it's at least 30% of the people out there.
by bryanrasmussen
1/17/2025 at 6:23:14 AM
I think even though these types of developers are fairly rare, they have a disproportionate negative impact on the quality of the code and the morale of their colleagues, which is perhaps why people remember them and talk about them more often. The p95 developers who are more-or-less okay aren't really notable enough to be worth complaining about on HN, since they are us.
by alisonatwork
1/17/2025 at 6:56:25 AM
And, as OP alluded to, I bet these kinds of programmers tend to “fail upward” and disproportionately become eng managers and directors, spreading their carelessness over a wider blast radius, while the people who care stagnate as perpetual “senior software engineers”.
by ryandrake
1/17/2025 at 9:29:04 AM
Maybe they care more about the quality as they become managers, etc. Quality takes effort; maybe they don't like taking the effort, but like making other people take the effort.
by bryanrasmussen
1/17/2025 at 8:08:42 AM
Do other companies not have static analysis integrated into the CI/CD pipeline?
We block by default any and all PRs that contain funky code: high cyclomatic complexity, unused variables, bad practice, overt bugs, known vulnerabilities, inconsistent style, insufficient test coverage, etc.
If that code is not pristine, it's not going in. A human dev will not even begin the review process until at least the static analysis light is green. Time is then spent mentoring the greens as to why we do this, why it's important, and how you can get your code to pass.
I do think some devs still use AI tools to write code, but I believe that the static analysis step will at least ensure some level of forced ownership over the code.
by beAbU
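A minimal sketch of such a gate, assuming Python tooling; ruff and xenon here stand in for whatever analyzers a team actually runs, and the thresholds are made up.

    # Hypothetical pre-merge gate: run each analyzer, fail the pipeline on
    # the first non-zero exit so human review only starts on a green light.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],                 # unused variables, style, bug patterns
        ["xenon", "--max-absolute", "C", "."],  # fail on high cyclomatic complexity
    ]

    def main() -> int:
        for cmd in CHECKS:
            if subprocess.run(cmd).returncode != 0:
                print(f"blocked by: {' '.join(cmd)}", file=sys.stderr)
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())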
1/17/2025 at 10:53:49 AM
I think it’s a good thing to use such tools. But no amount of tooling can create quality.
It gives you an illusion of control. Rules are a cheap substitute for thinking.
by liontwist
1/17/2025 at 8:16:29 AM
Just wait till AI learns how to pass your automated checks without getting any better at the semantics. Unused variables bad? Let's just increment/append whatever every iteration, etc.
by lrem
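A toy Python illustration of that failure mode, with entirely hypothetical code: the unused-variable warning goes away while the semantics stay exactly as meaningless as before.

    # Before the "fix", a linter flags `count` as unused.
    # After: it is mutated every iteration, so the tool goes quiet,
    # but the variable is as pointless as ever.
    def total_price(prices: list[float]) -> float:
        count = 0
        total = 0.0
        for p in prices:
            count += 1   # now "used", still meaningless
            total += p
        return total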
1/17/2025 at 8:40:42 AM
And then we'll need AI tools to diagnose and profile AI-generated code to automagically improve performance.
I can't wait to retire.
by whatevertrevor
1/17/2025 at 2:49:37 PM
That is a softball question for an AI: this block of code is throwing these errors, can you tell me why?
by ericmcer
1/17/2025 at 6:18:25 AM
I have been told (at a FAANG) not to fix those kinds of code smells in existing code. “Don’t waste time on refactoring.”
by ojbyrne
1/17/2025 at 8:36:02 AM
To be fair, sometimes it just isn’t worth the company’s time.
by dawnerd
1/17/2025 at 6:22:21 AM
> then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant
That's because they come across as result-oriented, go-getter kinds of people, while the others will be seen as uptight individuals. Unfortunately, management, for better or worse, self-selects the first kind.
LLMs are only going to make it worse. If you can write clean code in half a day and an LLM can generate a "working" spaghetti mess in a few minutes, management will prefer the mess. This will be the case for many organizations where software is just an additional supporting expense and not a critical part of the main business.
by devsda
1/17/2025 at 10:11:39 AM
The LLMs are not just another tool for these folks, but for folks who should not be touching code at all. That's the scary part. In my field (industrial automation), I have had to correct issues three times now in the ladder logic on a PLC that drives an automation cell that can definitely kill or hurt someone in the right circumstances (think maintenance/repair). When asked where the logic came from, they showed me the tutorials they feed to their LLM of choice to "teach" it ladder logic, then had it spit out answers to their questions. Needless to say, safety checks were missed, which thankfully only broke the machines.
These are young controls engineers at big companies. I won't say who, but many of you probably use one of their products to go to your own job.
I am not against using LLMs as a sort of rubber duck to bounce ideas off of, or maybe to get you thinking in a different direction for the sake of problem solving, but letting them do the work for you without understanding how to check the validity of that work is maddeningly dangerous in some situations.
by 0xEF
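Ladder logic is graphical, so this is only a rough transliteration into Python of the kind of interlock that was missing; the signal names are invented, and real safety functions belong in rated hardware designed to standards like ISO 13849, not in an application-level snippet.

    # Hypothetical interlock: motion is enabled only when every safety
    # condition holds; any single failure forces the safe (disabled) state.
    def motion_enabled(estop_ok: bool,
                       guard_door_closed: bool,
                       light_curtain_clear: bool,
                       in_maintenance_mode: bool) -> bool:
        if in_maintenance_mode:
            return False  # never drive the cell while someone may be inside it
        return estop_ok and guard_door_closed and light_curtain_clear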
1/17/2025 at 12:36:18 PM
> Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.
I've heard this sentiment several times over the years, and what I think a lot of people don't realize is that they're just playing a different game than you. Their crappy code is a feature, not a bug, because they're expending their energy on politics rather than coding. In corporations, politics is a form of work, but it's not work that many devs want to do. So people will say the uncaring dev is doing poor work, but really they're just not seeing the real work being done.
I’m not saying this is right or wrong, it’s just an observation. Obviously this isn’t true for everyone who does a poor job, but if you see that person start climbing the ladder, that’s the reason.
by redeux
1/17/2025 at 1:59:01 PM
The kind of work you're describing doesn't benefit the company, it benefits the individual. It's not what they were hired to do. The poor-quality code they produce can be a net negative when it causes bugs, maintenance issues, etc. I think it's always the right choice to boot such a person from any company once they've been identified.
by stcroixx
1/17/2025 at 12:35:11 PM
> I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time.
I've seen that, though fortunately only in one place. Duplicated entire files, including the parts to which I had added "TODO: deduplicate this function" comments, rather than change access specifiers from private to public and subclass.
By curious coincidence, 20% was also roughly the percentage of lines in the project which were, thanks to him, blank comments.
by ben_w
1/17/2025 at 2:34:13 PM
I have incorporated a lot of SO code, but I never incorporate it until I understand exactly what it does.
I usually learn it by adapting it to my coding style and documenting it. I seldom leave it untouched. I usually modify it in one way or another, and I always add a HeaderDoc comment linking to the SO answer.
So far, I have not been especially thrilled with the AI-generated code that I've encountered. I expect things to improve, rapidly, though.
by ChrisMarshallNY
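That habit of documenting provenance might look like this in Python docstring form rather than HeaderDoc; the function and the link are placeholders, not a real Stack Overflow answer.

    def chunked(seq, size):
        """Split `seq` into consecutive lists of at most `size` items.

        Adapted from a Stack Overflow answer, reworked to house style and
        re-tested before merging: https://stackoverflow.com/a/XXXXXXXX
        """
        return [seq[i:i + size] for i in range(0, len(seq), size)]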
1/17/2025 at 9:41:05 AM
You can lead a horse to water, etc. What worked for me wasn't so much a mentor telling me xyz was good/bad, but metrics and quality gates: Sonar (idk when it was renamed to SonarQube or what the difference is) will flag up these issues and simply make the merge request unmergeable until the trivial issues are fixed.
Because that's the frustrating part: they're trivial issues. Unreachable code and unused variables are harmless (on paper), just a maintenance burden and frustrating for whoever has to maintain the code later on. But because they're trivial, the author doesn't care about them either. Trivial issues should be automatically fixed and/or checked by tooling; it shouldn't cost you (the reviewer) any headspace in the first place. And it shouldn't need explanation or convincing to solve either. Shouldn't, but here we are.
But yeah, the next decade will be interesting. I'm not really using it in my code yet because, idk, the integration broke again or I keep forgetting it exists. But we integrated a tool in our GitLab that generates a code review, both summarizing the changes and highlighting the risks/issues, if any. I don't like that, but the authors of merge requests aren't writing proper merge request descriptions either, so I suppose an AI-generated executive summary is better than nothing.
by Cthulhu_
1/17/2025 at 3:24:15 PM
> failed up to manager
...see, everything around can be a tool. Sticks, screwdrivers, languages, books, phones, cars, houses, roads, software, knowledge, ..
In my rosy glasses, this line stops at people (or maybe life-forms?). People are not tools and should not be treated as such.
But that is not the case in reality. So anyone for whom other people are tools will fail (or fall) upwards (or will be pulled there), sooner or later.
sorry if somewhat dark..
by svilen_dobrev
1/17/2025 at 2:15:56 AM
This is more of an early-career engineer thing than a ChatGPT thing. "I don't know, I found it on stackoverflow" could have easily been the answer for the last ten years.
by Taylor_OD
1/17/2025 at 3:42:35 AM
The main problem is not the source of the solution, but the lack of effort to understand the code they have put in.
The "I don't know" might as well be "I don't care".
by devsda
1/17/2025 at 7:39:16 AM
That's where you'd like your solution engine to be able to tell you how it got to the solution it is giving you. Something good answers on Stack Overflow will do: links to the relevant documentation, steps you can go through to get a better diagnostic of your problem, etc.
Get the fire lit with an explanation of where to get wood and how to light it in your conditions, so next time you don't need to consult your solution engine.
by arkh
1/17/2025 at 8:58:13 AM
No, a real engineer goes on SO to understand. A junior goes on SO to copy and paste. If your answer is "I don't know, I just copied it", you're not doing any engineering, and it's awful to pretend you are. Our job is literally about asking "why" and "how" until we don't need to anymore, because our pattern-matching skills allow us to generalize.
At this point in my career I rarely ever go to SO, and when I do it's because of some obscure thing that 7 other people came across and decided to post a question about. Or to look up how to do the most basic shit in a language I am not familiar with, but that role has been taken over by LLMs.
by Vampiero
1/17/2025 at 8:16:31 AM
There's nothing inherently wrong with getting help from either an LLM or StackOverflow; it's the "I don't know" part that bothers me.
One of the funnier reactions to "I got it from StackOverflow" is the follow-up question: "From the question or the answers?"
If you just add code without understanding how it works, regardless of where it came from and potential licensing issues, then I question your view on programming. If I have a painter come in and paint my house, and they get paint all over the place, floors, windows, electrical sockets, but still get the walls the color I want, then I wouldn't consider that person a professional painter.
by mrweasel
1/17/2025 at 3:48:10 PM
The LLM also tends to do a good bit of the integration of the code into your codebase. With SO you need to do that yourself, so you at least need to understand the outer boundary of the code. And a StackOverflow answer has often undergone some form of peer review; the LLM just outputs, without any bias or footnote.
by sebazzz
1/17/2025 at 3:19:59 AM
I am fairly certain that if someone did that where I work, security would be escorting them off the property within the hour. This is NOT okay.
by DowsingSpoon
1/17/2025 at 3:54:36 AM
Where I work, we are actively encouraged to use more AI tools while coding, to the point where my direct supervisor asked why my team’s usage statistics were lower than the company average.
by bitmasher9
1/17/2025 at 4:00:08 AM
It's not necessarily the use of AI tools (though the license parts are an issue); it's that someone submitted code for review without knowing how it works.
by dehrmann
1/17/2025 at 8:01:55 AM
I use AI these days, and I know how things work; there really is a huge difference. It helps me make the AI write code faster and the way I want it: something I could do myself, just more slowly.
by johnisgood
1/17/2025 at 4:37:25 AM
Didn't people already do that before, copying and pasting code off Stack Overflow? I don't like it either, but this issue has always existed; perhaps it is more common now.
by xiasongh
1/17/2025 at 5:04:40 AM
Maybe it's because I'm self-taught, but I have always accounted for every line I push.
It's insulting that companies are paying people to cosplay as programmers.
by hackable_sand
1/17/2025 at 7:38:50 AM
It's probably more common among self-taught programmers (and I say that as one myself). Most go through an early stage of copying chunks of code and seeing if they work. Maybe not blindly copying, but still copying code from examples or whatever. I know I did (except it was 25 years ago, from Webmonkey or the php.net comments section rather than StackOverflow). I'd imagine formally educated programmers can skip some (though not all) of that by having to learn more of the theory first.
by ascorbic
1/17/2025 at 8:50:50 AM
If people are being paid to copy and run random code, more power to them. I wouldn't have dreamt of getting a programming job until I was literate.
by hackable_sand
1/17/2025 at 7:07:49 AM
I've seen self-taught and graduates alike do that.
by guappa
1/17/2025 at 6:27:33 AM
Now there is even less excuse for not knowing what it does, because the same ChatGPT that gave you the code can explain it too. That wasn't a luxury available in the copy/paste-from-StackOverflow days (though explanations with varying degrees of depth were available there too).
by noisy_boy
1/17/2025 at 7:43:22 AM
Yes, and I think the mistakes that LLMs commonly make are less problematic than Stack Overflow's. LLMs seem to most often either hallucinate APIs or use outdated ones; those are easier to detect, because they just don't work. They're not perfect, but they seem less inclined to generate the bad practices and security holes that are the bread and butter of Stack Overflow. In fact, they're pretty good at identifying those sorts of problems in existing code.
by ascorbic
1/17/2025 at 5:28:33 AM
Or importing a new library that's not been audited. Or compiling it with a compiler that's not been audited. Or running it on silicon that's not been audited?
We can draw the line in many places.
I would take generated code that a rookie obtained from an LLM and copied without understanding all of it, but that he has thoughtfully tested, over something he authored himself and submitted for review without enough checks.
by rixed
1/17/2025 at 7:10:31 AM
> We can draw the line in many places.
That doesn't make those places equivalent.
by yjftsjthsd-h
1/17/2025 at 8:23:33 AM
That's a false dichotomy. People can write code themselves and thoroughly test it too.
by whatevertrevor
1/17/2025 at 4:04:38 AM
I think we should reach, or have already reached, a place where AI-written code is acceptable.
by masteruvpuppetz
1/17/2025 at 4:16:49 AM
Whether or not it's acceptable to submit AI code, it is clearly unacceptable to submit code that you don't even understand. If that's all an employee is capable of, why on earth would the employer pay them a software engineer's salary versus hiring someone to do the exact same thing for minimum wage?
by bigstrat2003
1/17/2025 at 5:17:26 AM
Or even replace them with the AI directly.
by userbinator
1/17/2025 at 4:22:52 AM
What a god-awful thing to hear.
by dpig_
1/17/2025 at 5:56:21 AM
The problem is that "AI" is likely whitewashing the copyright from proprietary code.I asked one of the "AI" assistants to do a very specific algorithmic problem for me and it did. And included unit tests which just so happened to hit all the exact edge cases that you would need to test for with the algorithm.
The "AI assistant" very clearly regurgitated the code of somebody. I, however, couldn't find a particular example of that code no matter how hard I searched. It is extremely likely that the regurgitated code was not open source.
Who is liable if I incorporate that code into my product?
by bsder
1/17/2025 at 7:09:14 AM
According to Microsoft: "the user".
There are companies that scan code to see if it matches known open-source code. However, they probably just scan GitHub, so they won't even have a lot of the big projects.
by guappa
1/17/2025 at 7:08:52 AM
This seems like you don't believe that AI can produce correct new work, but it absolutely can.
I've no idea whether in this case it directly copied someone else's work, but I don't think its writing good unit tests is evidence that it did; that's it doing what it was built to do. And your searching and failing to find a source is weak evidence that it did not.
by kybernetikos
1/17/2025 at 4:22:50 AM
To be fair, I don't think someone should get fired for that (unless it's a repeat offense). Kids are going to do stupid things, and it's up to the more experienced to coach them and help them understand it's not acceptable. You're right that it's not okay at all, but the first resort should be a reprimand and being told they are expected to understand the code they submit.
by bigstrat2003
1/17/2025 at 5:02:28 AM
Kids, sure. A university-trained professional, paid like one? No.
by LastTrain
1/17/2025 at 5:34:33 AM
You're having high expectations of the current batch of college graduates (and honestly, it's not like past graduates were much better, but they didn't have ChatGPT).
by raverbashing
1/17/2025 at 5:59:05 AM
A cynical take would be that the current market conditions allow you to filter out such college graduates and only take the better ones.
by The_Colonel
1/17/2025 at 8:13:59 AM
And how do you propose filtering them out? There's a reason why college students are using LLMs: they're getting better grades for less effort. I assume you're not proposing selecting students with worse grades on purpose?
by solatic
1/17/2025 at 8:31:12 AM
I wouldn't hire based on grades.
I think what the junior did is a reason to fire them (then you can try again with better selection practices). Not because they used code from an LLM, but because they didn't even try to understand what it is doing. That says a lot about their attitude toward programming.
by The_Colonel
1/17/2025 at 2:01:17 PM
One way to filter them out, relevant to this thread, would be to let them go if they brazenly turned in work they did not create and do not understand.
by LastTrain
1/17/2025 at 4:58:59 AM
I understand the point you're trying to get across. For many kinds of mistakes, I agree it makes good sense to warn and correct the junior. Maybe that's the case here; I'm willing to concede there's room for debate.
Can you imagine the fallout from this, though? Each and every line of code this junior has ever touched needs to be scrutinized to determine its provenance. The company must now assume the employee has been uploading confidential material to OpenAI too. This is an uncomfortable legal risk.
How could you trust the dev again after the dust is settled?
Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code would at least be frowned upon. That's a frankly dangerous level of professional incompetence. (At least they didn't try to hide it.)
Well now I’m wondering what the correct way would be to handle a junior doing this with ChatGPT, and what the correct way would be to handle similar kinds of mistakes such as copy-pasting GPL code into the proprietary code base, copy-pasting code from Stack Overflow, sharing snippets of company code online, and so on.
by DowsingSpoon
1/17/2025 at 5:54:03 AM
> The company now must assume the employee has been uploading confidential material to OpenAI too.
If you think that’s not already the case for most of your codebase, you might be in for a rough awakening.
by manmal
1/17/2025 at 5:08:12 AM
> Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code would at least be frowned upon.
Austen Allred is selling this as the future of programming. According to him, the days of writing code into an IDE are over.
by thaumasiotes
1/17/2025 at 6:05:23 AM
Responding to the link you posted: apparently the future of programming is 100-hour weeks? Naive me was thinking we could work less and think more with these new tools at our disposal.
by manmal
1/17/2025 at 6:22:40 AM
Also, you'd think with their fancy AI coding they could update their dates to the future, or at least disable the page for a past-dated session.
by ojbyrne
1/17/2025 at 7:11:38 AM
Seems people didn't read the link and are downvoting you, possibly because they don't understand what you're talking about.
by guappa
1/17/2025 at 7:38:37 AM
Thanks, added context.
by manmal
1/17/2025 at 8:31:01 AM
Without prior knowledge, that reads like a scam.
A free training program with a promise of a guaranteed high-paying job at the end: where have I heard that before? Seems like their business model is probably to churn people through these sessions and then monetize whatever shitty chatbot app they build through the training.
by whatevertrevor
1/17/2025 at 9:48:07 PM
No, their business model is getting placement fees for whoever they graduate from the program.
Considering this was a sponsored link on HN, endorsed by Y Combinator, I'd say you have a ridiculous threshold for labeling something a "scam", except to the degree that the companies committing to hire these people are pretty unlikely to get whatever they were hoping to get.
by thaumasiotes
1/17/2025 at 6:39:23 AM
I've seen seniors and above do that.
They never cared about respecting software licenses until Biden said they must. Then they started to lament and cry.
by guappa
1/17/2025 at 6:32:48 AM
Unless you work for hospitals or critical infrastructure, this reaction is overblown and comical.
by ujkiolp
1/17/2025 at 3:51:45 AM
Are you hiring?
by phinnaeus
1/17/2025 at 5:16:09 AM
In such an environment, it would be more common for access to ChatGPT (or even most of the Internet) to be blocked.
by userbinator
1/17/2025 at 6:39:35 AM
Why? I encourage all my devs to use AI, but they need to be able to explain what it does.
by dyauspitr
1/17/2025 at 12:25:36 PM
> He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"
I remember being a junior nearly 20 years back. A co-worker asked me how I'd implemented an invulnerability status, and I said something equally stupid, despite knowing perfectly well how I'd implemented it and there not being any consumer-grade AI more impressive than spam filters and Office's spelling and grammar checking.
Which may or may not be relevant to the example of your friend's coworker, but I do still wonder how much of my answers as a human are on auto-complete. It's certainly more than none, and not just from that anecdote… https://duckduckgo.com/?t=h_&q=enjoy+your+meal+thanks+you+to...
by ben_w
1/17/2025 at 7:44:11 AM
Feels like a controls failure as much as anything else. At any decently sized company that allows unrestricted access to LLMs, well, that's going to be the tip of the iceberg.
Also, the culture of not caring comes from somewhere, not ChatGPT.
by ErrantX
1/17/2025 at 3:28:18 AM
The saddest part is, if I wrote the code myself it would be worse, lol. GPT is coding at an intern level, and as a dumb human being I feel sad I have been replaced, but it's not as catastrophic as they made it seem.
It's interesting to see the underlying anxiety among devs, though. I think there is a place in the back of their minds that knows the models will get better and better and someday could reach staff-engineer level.
by gunian
1/17/2025 at 3:45:16 AM
I don't think that's the concern at all. The concern (imo) is that you should at least understand what the code is doing before you accept it verbatim and add it to your company's codebase. The potential it has to introduce bugs or security flaws is too great to just accept it without understanding it.
by nozzlegear
1/17/2025 at 4:35:56 AM
I've been busy with a personal coding project. Working through problems with an LLM, which I haven't used professionally yet, has been great. Countless times in the past I've spent hours poring over Stack Overflow and GitHub repository code looking for solutions. Quite often I would have to solve the problem myself, and would always post the answer a day or two later below my question on Stack Overflow. A big milestone for a software engineer is getting to the point where a difficult problem can't be solved with internet search, asking colleagues, or asking on Stack Overflow, no matter how well-written and detailed the question, because the problems are esoteric; the edge of innovation is solitude. Today I give the input to the LLM, tell it what the output should be, and magically a minute later it is solved. I was thinking today about how long it has been since I was stuck and stressed on a problem. With this personal project I'm prototyping and doing a lot of experimentation, so having an LLM saves a ton of time, keeping the momentum at a fast pace. The iteration process is a little different, with frequent stops to refactor, clean up, make the code consistent, and log the input and output to the console to verify.
Perhaps take the intern's LLM code and have the LLM do the code review. Keep reviewing the code with the LLM until the intern gets it correct.
by dataviz1000
1/17/2025 at 6:06:18 AM
My experience with LLMs and code generation is usually the opposite, even using ChatGPT and the fancy o1 model. Maybe it's because I write a lot of F#, and the training data for that is probably low. When I'm not writing F#, I like to write functional-style code. Either way, nine times out of ten I'm only using LLMs for "rubber ducking," as the code they give me usually falls flat on its face with obvious compiler errors.
I do agree that I feel much more productive with LLMs, though. Just being able to rubber-duck my ideas with an AI and talk about code is extremely helpful, especially because I'm a solo dev/freelancer and don't usually have anyone else to do that with. And even though I don't typically use the code they give me, it's still helpful to see what the AI is thinking and explore that.
by nozzlegear
1/17/2025 at 6:49:38 AM
I have had similar experiences using less popular libraries. My favorite state-machine library released a new version a year ago, and the LLMs, regardless of prompts telling them not to, will always use the old API. I find the LLMs are worthless when organizing ideas across multiple files. And, worst, they are not by their nature capable of consistency.
On the other hand, I use d3.js for data visualization, which has had a stable API for years, has likely hundreds of thousands of small examples contained in a single file, and has many blog posts, O'Reilly books, and tutorials. The LLMs create perfect data visualizations that are high quality. Any request to change one, such as adding dynamic sliders or styling tooltips, is done without errors or bugs. People who do data visualization will likely be the first to go. :(
I am concerned that new libraries will not gain traction because the LLMs haven't been trained to implement them. We will be able to implement all the popular libraries, languages, and techniques quickly, however, innovation might stall if we rely on these machines stuck in the past.
by dataviz1000
1/17/2025 at 4:23:06 AM
Exactly why devs are getting the big bucks, that is, right now. At some point, what if someone figures out a way to make it deterministic and able to write code without bugs?
by gunian
1/17/2025 at 4:40:36 AM
Then the programming language becomes natural language, and you'll have to be very good at describing what you want. Unless you are talking about AGI, aka the singularity, which is a whole other topic.
by eggnet
1/17/2025 at 5:13:40 AM
Not AGI; at that point all human jobs can be replaced, and that's my personal bar at least.
I'm thinking: models get small enough, you fine-tune them on your code, you add fuzzing, rewriting. It may not be bug-free, but could it become self-healing with minimal/known natural language locations? Or instead of x engineers, one person feeds the skeleton to ChatGPT 20 or something, and instead of giving you the result immediately it does it iteratively. That would still be cheaper than x devs.
by gunian
1/17/2025 at 5:06:29 AM
You cannot write code without bugs.
by hackable_sand
1/17/2025 at 6:29:27 AM
I'd say you cannot write _interesting_ code without bugs.
by manmal
1/17/2025 at 6:48:49 AM
You know what
One man's bug is another man's feature
by hackable_sand
1/17/2025 at 7:30:01 AM
Sure. And some of those people are black hats ;)
by jononor
1/17/2025 at 7:45:30 AM
Modern freedom fighters; Abe Lincoln couldn't compare :)
by gunian
1/17/2025 at 4:27:44 AM
"AI is the payday loan* of tech debt".by chrisweekly
1/17/2025 at 3:58:25 AM
ChatGPT needs two years of "exceeds expectations" before that can happen.
by jahewson
1/17/2025 at 4:24:03 AM
I've been writing at troll level since I first got my computer at 19, so it looks like "exceeds expectations" to me lol
by gunian
1/17/2025 at 6:40:25 AM
It’s coding way, way above intern level. Honestly, it’s probably mid-level.
by dyauspitr
1/17/2025 at 8:59:45 AM
Essentially you're paying a human to be a proxy between the requirements, the LLM, and the codebase. Some people I'm talking to lament having to pay top dollar to their junior (and other kinds, I'm sure) devs for this, but I think this is and will be the new reality and the new normal. Instead, we should start thinking about how to make the best of it, and how to help maximize success for these devs.
A few decades down the road, though, we are likely to view this current situation similarly to how we look at the 'human computers'[0] of yesteryear.
by rixrax
1/17/2025 at 1:49:59 PM
This is the norm on my majority-H1B team. Nobody sees anything wrong with it but me, so I stopped caring too.
by stcroixx
1/18/2025 at 1:16:35 AM
If the management does not care, why should the employees?
by sumedh
1/17/2025 at 2:37:51 AM
At least he's honest.
by userbinator
1/17/2025 at 9:42:21 AM
I'm a strong proponent of using LLMs and use them extensively.
But this is a fireable offense in my book.
by BiteCode_dev
1/17/2025 at 6:05:29 AM
Was this a case of something along the lines of an isolated function that had a bunch of bit-shifting magic for some hyper-optimization that was required, or was it just regular code?
Not saying it's acceptable, but the first example is maybe worth a thoughtful discussion, while the latter would make me lose hope.
by ghxst
1/17/2025 at 8:00:54 AM
There is no shame, damn.
by johnisgood
1/17/2025 at 1:38:57 AM
I hope that junior engineer was reprimanded, or even put on a PIP, instead of just having the reviewer say "lgtm" and approve the request.
by deadbabe
1/17/2025 at 1:44:24 AM
Probably depends a lot on the team culture. Depending on what part of the product lifecycle you're in (proving a concept, rushing to market, scaling for the next million TPS, moving into new verticals, ...) and where the team currently is, it can make a lot of sense to generate more of the codebase with AI. Write some decent tests, commit, move on.
I wish my reports would use more AI tools for the parts of our codebase that don't need a high bar of scrutiny; boilerplate at enterprise scale is a major source of friction and, tbh, burnout.
by WaxProlix
1/17/2025 at 2:42:49 AM
Unless the plan is to quickly produce a prototype that will mostly be thrown away, any code that gets into the product is going to generate far more work maintaining it over the lifetime of the product than it cost to write in the first place.
As a reviewer I'd push back and say that I'll only be able to approve the review when the junior programmer can explain what the code does and why it's correct. I wouldn't reject it solely because ChatGPT made it, but if the check-in causes breakage it normally gets assigned back to the person who checked it in, and if that person has no clue, we have a problem.
by not2b
1/17/2025 at 2:42:40 PM
Not being willing to throw out bad/unused features is a different trap that organizations fall into. The amount of work that goes into, shall we say, fortifying the foundations of a particular feature ideally should be proportional to how much revenue that feature is responsible for. Test code also has to be maintained, and increasing the maintenance burden on something that has its own maintenance burden, when customers don't even like it, is shortsighted at the very least.
by solatic
1/17/2025 at 8:00:40 AM
> I wouldn't reject it solely because chatgpt made it, but if the checkin causes breakage it normally gets assigned back to the person who checked it in, and if that person has no clue we have a problem.
That's a fair point, but regardless of who wrote the code (or what tools were used), it should also probably be as clear as possible to everyone who reads it, because chances are that at some point that person will be elsewhere and some other person will have to take over.
by KronisLV
1/17/2025 at 6:46:09 PM
True, but you're talking about the difference between "only one person understands this, that's a risk!" and "zero people understand this".
by not2b
1/17/2025 at 2:27:08 AM
Yes, and the team could be missing structures to support junior engineers. What made them not ask for help or pairing is really important to dig into, and I would expect a senior manager to understand this and be introspective about what environment they have created where this human made this choice.
by bradly
1/17/2025 at 11:36:03 AM
> Write some decent tests, commit, move on.
Move on to what?! Where does a junior programmer who doesn't understand what the code does move on to?
by GeoAtreides
1/17/2025 at 2:38:22 AM
I mean, if that was the answer I was given by a junior during a code review, the next email I'd be sending would be to my team lead about it.
by XorNot
1/17/2025 at 9:11:25 AM
I have a better one: a senior architect who wrote a proposal for a new piece of documentation and, when asked about the 3 main topics in the doc and why he chose them, said "the LLM said those are the main ones". The rest of the doc was obviously incoherent LLM soup as well.
by sofixa
1/17/2025 at 12:09:58 PM
>When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"
That'd be an immediate -2 from me.
by ginko
1/17/2025 at 1:28:20 AM
[dead]by hardbants