2/2/2026 at 8:27:01 AM
I'd argue there are 2 types of users:
* People using it as a tool, aware of its limitations and treating it basically as an intern/boring task executor (whether it's some code boilerplate, or pooping out/shortening some corporate email), or as a tool to give themselves a summary of a topic they can then bite into deeper.
* People outsourcing thinking and their entire skillset to it - they usually have very little clue about the topic, are interested only in results, and are not interested in knowing more about the topic or honing their skills in the topic.
The second group is one that thinks talking to a chatbot will replace a senior developer.
by PunchyHamster
2/2/2026 at 8:48:05 AM
I started to outsource thinking at my job as my company made it very clear that they do not want/can't afford thinking engineers. Thinking requires time and they want to deliver quickly. So they cater towards the very realistic deadlines our PMs set for features (/s). Funnily enough the features have to be implemented ASAP according to the customers, but the customer feedback takes like 6 months due to them using the new feature for the first time 6 months after delivery. I just don't care anymore. Gonna leave the learning part up to my time off, but getting generally tired of the industry as a whole, so just putting in minimal effort to pay my bills until things explode or get better. So for me it's definitely outsourcing thinking at work.
by sevenzero
2/2/2026 at 9:20:55 AM
This is a fatalistic attitude, but one I can totally get behind. It has become harder to associate my job with contributing to society.
by gloomyday
2/2/2026 at 9:37:51 AM
I agree and I am far from being a senior engineer. I've only been in the market for a few years and started out just before the whole LLM stuff started to pick up. So I have been grinding a lot (it's the 2nd job I've learned; I've been in tech since ~2020) only to be confronted with permanent existential fear of possibly having to learn a 3rd job (which takes 3 years of full-time work for negligible pay in my country). I don't want to start from zero again and I am tired of corporations that are shitting out money while being cheap on their employees. Starting from zero is never fun, going back into debt is never fun and having to leave a job/career you like also is never fun. I'm 30 now and have only been making (noteworthy, though still below median) money for 1.5 years now. I can't afford starting anew and there is little I can do about it, which is extremely frustrating.
by sevenzero
2/3/2026 at 4:52:10 PM
I work with/am friends with many junior-ish developers who are in the same place as you (got into programming in their late 20s around the 2020 hiring cycle). I'm very sorry for the stress you're dealing with.
I don't know if this describes your situation, but I know many people who are dealing with positions where they have no technical mentorship, no real engineering culture to grow in, and a lot of deadlines and work pressure. Coupled with this, they often don't have a large social group within programming/tech, because they've only been in it for a few years and have been heads down grinding to get a good job the whole time. They're experiencing a weird mixture of isolation, directionless-ness, and intense pressure. The work is joyless for them, and they don't see a future.
If I can offer any advice, be selfish for a bit. Outsource as much as you want to LLMs, but use whatever time savings you get out of this to spend time on programming-related things you enjoy. Maybe work the tickets you find mildly interesting without LLMs, even if they aren't mission critical. Find something interesting to tinker with. Learn a niche language. Or slack off in a discord group/make friends in programming circles that aren't strictly about career advancement and networking.
I think it's basically impossible to get better past a certain level if you can't enjoy programming, LLM-assisted or otherwise. There's such a focus on "up-skilling" and grinding through study materials in the culture right now, and that's all well and good if you're trying to pass an interview in 6 weeks, but all of that stuff is pretty useless when you're burned out and overwhelmed.
by calebkaiser
2/2/2026 at 2:32:43 PM
As a product manager, this makes me think the features you're building are not the things your customers need or want most. I'm curious what would happen if you were to ask your product manager about that six-month timeframe, and just ask the open-ended question: is there anything on the backlog that we could build that the product manager thinks users would pick up within days instead of months?
by Schiendelman
2/2/2026 at 3:13:55 PM
As a product manager, this feels like they're in the pretty typical B2B SaaS trap of building the stuff that the people who pay for the product insist they need but the people using the product don't necessarily want, so they've gotta invest a bunch of time and effort getting usage/feedback.
Could be for good reasons (e.g. they're security features that are important to the business but add friction for the user) or just because management is disconnected from the reality of their employees. Either way, not necessarily the wrong decision by the PM - sometimes you've gotta build features fast because the buyer demands them in a certain timeframe in order to get the contract signed. Even if they never get used, the revenue still pays the bills.
by idopmstuff
2/3/2026 at 9:27:52 PM
If they were building the stuff the people paying for the product insist they need, it seems unlikely that they'd take 6 months to use it once built. Unless it's some API that takes a ton of work to build to. It didn't sound like they were getting feedback for those six months.
Security features that add friction for the user are usually forced, aren't they?
Contract requirements do make sense, but I get the idea that this user would know that.
What are you imagining that would be actual value but not used for six months?
by Schiendelman
2/3/2026 at 8:38:53 PM
Put your LLM to use writing a resume and looking for a new job. You've already checked out of this job. There is a better and more fulfilling way to spend your waking hours. Good luck friend.
by suprjami
2/2/2026 at 1:07:39 PM
What do you really care? It's a job.
by ddsfsdf
2/2/2026 at 2:27:42 PM
Historically (I'm 48), professionals have cared about their jobs, generally speaking, and often do make serious attempts to logically derive sociological benefits from their personal efforts. There's been a seismic shift over the past 5-6 years, though, and this sense of care has massively eroded.
by eitally
2/2/2026 at 2:34:57 PM
I'm 44. It's been more than five or six years. I would say 15 or 20, if not more.
by Schiendelman
2/2/2026 at 2:59:16 PM
It feels like covid turbocharged it though. The amount of grift and outright corruption is unrecognizable compared to even 2019. Maybe it was always there but it feels like companies have gone full mask off now.
by strange_quark
2/3/2026 at 9:25:35 PM
Give me a concrete example?
by Schiendelman
2/2/2026 at 6:31:47 PM
> There's been a seismic shift over the past 5-6 years
Nah. It's been at least since 2009 (GFC), if not longer.
It started happening with the advent of applicant tracking systems (making hiring a nightmare, which it still is) and the fact that most companies stopped investing in the training of juniors and started focusing more on the short-term bottom line.
If the company is going to make it annoying to get hired and won't invest anything in you as a professional, there's 0 reason for loyalty besides giving your time for the paycheck. And 0 reason to go 120% and burn yourself out.
by antisthenes
2/2/2026 at 2:33:57 PM
I feel you. I’m 46 and now on the hunt for the right company to work for, and hopefully finish out my career there. While the company values haven’t technically changed, the actions taken in the past 5 years have eroded my trust so much I barely recognize the place. When you no longer have a sense of pride working somewhere, it’s time to move on. At least that is what I believe to be true.
by NDizzle
2/2/2026 at 3:16:34 PM
> While the company values haven’t technically changed, the actions taken in the past 5 years have eroded my trust so much I barely recognize the place. When you no longer have a sense of pride working somewhere, it’s time to move on. At least that is what I believe to be true.
The problem, as I see it, is that the changes that bug me [1] seem systemic throughout the economy, "best practices" promulgated by consultants and other influencers. I'm actually under the impression my workplace was a bit behind the curve, and that a lot of other places are worse.
[1] Not sure if they're the "actions" you're talking about. I'm talking about offshoring & AI (IMHO part of the same thrust), and a general increase in pressure/decrease in autonomy.
by palmotea
2/2/2026 at 3:20:05 PM
Software developers have never been professionals. Doctors, lawyers, accountants, chartered engineers are professionals. They have autonomy and obligations to a professional code of ethics that supersedes their responsibility to their employers.
Devs are hired goons at worst and skilled craftspeople at best, but never professionals.
by ForHackernews
2/2/2026 at 11:34:27 PM
I would claim there are, proportionally, more lawyers than software engineers in prison. A code of ethics doesn't really mean much.
by bsoles
2/2/2026 at 3:10:16 PM
> What do you really care? It's a job.
Because having a job that's somewhat satisfying and not just a grind is great for one's own well-being. It's also not a bad deal for the employer, because an engaged employee delivers better results than one who doesn't give a shit.
by palmotea
2/2/2026 at 8:51:15 AM
> People outsourcing thinking and their entire skillset to it - they usually have very little clue about the topic, are interested only in results, and are not interested in knowing more about the topic or honing their skills in the topic
And this may be fine in certain cases.
I'm learning German and my listening comprehension is marginal. I took a practice test and one of the exercises was listening to 15-30 seconds of audio followed by questions. I did terribly, but it seemed like a good way to practice. I used Claude Code to create a small app that generates short audio dialogs (via ElevenLabs) and a set of questions. I ran the results by my German teacher and he was impressed.
I'm aware of the limitations: sometimes the audio isn't great (it tends to mess up phone numbers), it can only be a small part of my work learning German, etc.
The key part: I could have coded it, but I have other more important projects. I don't care that I didn't learn about the code. What I care about is I'm improving my German.
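For the curious, the heart of the app is roughly this (simplified sketch; the ElevenLabs endpoint shape is from memory and the voice ID is a placeholder):

    import requests

    ELEVENLABS_KEY = "..."              # real key lives in an env var
    VOICE_ID = "german-voice-id"        # placeholder, not a real voice ID

    def synthesize(text: str, out_path: str) -> None:
        """Turn one short German dialog into an mp3 via the ElevenLabs TTS endpoint."""
        resp = requests.post(
            f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
            headers={"xi-api-key": ELEVENLABS_KEY},
            json={"text": text, "model_id": "eleven_multilingual_v2"},
            timeout=60,
        )
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            f.write(resp.content)       # response body is the audio bytes

    # One practice item = a short dialog plus comprehension questions
    dialog = "A: Wann fährt der Zug nach Berlin? B: Um halb neun, Gleis drei."
    questions = ["Wohin fährt der Zug?", "Wann fährt er ab?"]

    synthesize(dialog, "dialog_001.mp3")
    for q in questions:
        print(q)

The rest is just a loop over generated dialogs and a tiny quiz UI.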
by 3D30497420
2/2/2026 at 9:03:07 AM
Seems like you are part of the first group then, not the second. The fact that you are interested in learning and are using it as a tool disqualifies you from being someone who has little clue and just wants to get something out (i.e. just spit out code).
by isqueiros
2/2/2026 at 9:18:18 AM
As I reread the original post, I'm actually not sure which group I fall into. I think there's a bunch of overlap depending on perspective/how you read it:
> Group 1: intern/boring task executor
Yup, that makes sense I'm in group 1.
> Group 2: "outsourcing thinking and entire skillset to it - they usually have very little clue in the topic, are interested only in results"
Also me (in this case), as I'm outsourcing the software development part and just want the final app.
Soo... I probably have thought too much about the originally proposed groups. I'm not sure they are as clear as the original suggests.
by 3D30497420
2/2/2026 at 10:21:58 AM
I'd say you're still in group 1. Your main goal is not the app but learning German. Therefore creating the app using AI is only a means to an end, a tool, and spending time coding it yourself is not important in this context.
by aljaz823
2/2/2026 at 10:35:16 AM
The AI usage was not about learning German, but about creating an app. This would be group 2. He may use the tool he made to learn German, but using that tool isn't using AI.
by charcircuit
2/2/2026 at 11:51:23 AM
> using that tool isn't using AI
It is though. The app is using AI underneath to generate audio snippets. That's literally its purpose.
by netdevphoenix
2/2/2026 at 6:11:57 PM
Creating those snippets doesn't require knowing how to make a proper recording, how to edit it down, or how to direct the voice actor for the line.
by charcircuit
2/2/2026 at 9:45:53 AM
They could admittedly be more defined, but I think the original commenter missed a key word. It really boils down to whether or not you are offloading your critical thinking.
The word "thinking" can be a bit nebulous in these conversations, and critical thinking perhaps even more ambiguously defined, so before we discuss that, we need to define it. I go with the Merriam-Webster definition: the act or practice of thinking critically (as by applying reason and questioning assumptions) in order to solve problems, evaluate information, discern biases, etc.
LLMs seem to be able to mimic this, particularly to those who have no clue what it means when we call an LLM a "stochastic parrot" or some equally esoteric term. At first I was baffled that anyone really thought that LLMs could somehow apply reason or discern their own biases, but I had to take a step back and look at how that public perception was shaped to see what these people were seeing. LLMs, generative AI, ML, etc are all extremely complex things. Couple that with the pervasive notion that thinking is hard and you have a massive pool of consumers who are only too happy to offload some of that thinking on to something they may not fully understand but were promised that it would do what they wanted, which is make their daily lives a bit easier.
We always get snagged by things that promise us convenience or offer to help us do less work. It's pretty human to desire both of those things, but it's proving to be an Achilles heel for many. How we characterize AI determines our expectations of it; so do you think of it as a bag of tools you can use to complete tasks? Or is it the whole factory assembly line where you can push a few buttons and a pseudo-finished product comes out the other side?
by 0xEF
2/2/2026 at 9:35:56 AM
False dichotomy is one of the original sins. The two groups as advertised aren't all that's out there. Most people are interested in results. How we get those results is part of the journey of getting results, and sometimes it's about the journey, not the destination. I care very much about the results of my biopsy or my flight; I don't know much about how we get there. I want to know if I have cancer, and that my plane didn't crash. I hope that doesn't put me on the B ark that gets sent into the sun.
by fragmede
2/2/2026 at 3:19:04 PM
This is me, but for writing code. I own a business, and I use Claude Code to build internal tools for myself.
Don't care about code quality; never seen the code. I care if the tools do the things I want them to do, and they verifiably do.
by idopmstuff
2/3/2026 at 3:51:21 AM
How do you verify them? How do you verify they do not create security risks?
by jason_s
2/3/2026 at 1:45:40 PM
They only run locally on my machine, and they use properly scoped API credentials. Is there some theoretical risk that someone could get their hands on my Gemini API key? Probably, but it'd be very tough and not a particularly compelling prize, so I'm not altogether too concerned here.
On the verification front, a few examples:
1. I built an app that generates listing images and whitebox photos for my products. Results there are verifiable for obvious reasons.
2. I use Claude Code to do inventory management - it has a bunch of scripts to pull the relevant data from Amazon, then a set of instructions on how to project future sales and determine when I should reorder (roughly the heuristic sketched below). It prints the data that it pulls from Amazon to the terminal, so that's verifiable. In terms of following the instructions on coming up with reorder dates, if it's way off, I'm going to know because I'm very familiar with the brands that I own. This is pretty standard manager/subordinate stuff - I put some trust in Claude to get it right, but I have enough context to know if the results are clearly bad. And if they're only off by a little, then the result is I incur some small financial penalty (either I reorder too late and temporarily stock out or I reorder too early and pay extra storage fees). But that's fine - I'm choosing to make that tradeoff as one always does when one hands off work.
3. I gave Claude Code a QuickBooks API key and use it to do my books. This one gets people horrified, but again, I have enough context to know if anything's clearly wrong, and if things are only slightly off then I will potentially pay a little too much in taxes. (Though to be fair it's also possible it screws up the other way: I underpay in taxes, and in that case the likeliest outcome is I just saved money because audits are so rare.)
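For a sense of what the reorder instructions in point 2 boil down to, here's the rough idea (illustrative sketch only, not my actual prompt; the numbers are made up):

    from datetime import date, timedelta

    def reorder_date(on_hand: int, daily_sales: float,
                     lead_time_days: int, safety_days: int = 14) -> date:
        """Order when remaining days of cover drop to lead time plus a safety buffer."""
        days_of_cover = on_hand / max(daily_sales, 0.01)
        days_until_reorder = max(days_of_cover - lead_time_days - safety_days, 0)
        return date.today() + timedelta(days=round(days_until_reorder))

    # e.g. 900 units on hand, selling ~12/day, 45-day lead time from the supplier
    print(reorder_date(on_hand=900, daily_sales=12, lead_time_days=45))

If the projected daily sales are a bit off, the cost is exactly the small penalty described above: stock out a little early or pay a little extra storage.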
by idopmstuff
2/3/2026 at 7:24:39 AM
Not every tool can have a "security risk". I feel that this stems from people who see every application as a product, and products must be an online web app available to the world.
Let's say I have a 5 person company and I vibe-engineer an application to manage shifts and equipment. I "verify" it by seeing with my own eyes that everyone has the tools they need and every shift is covered.
Before, I either used an expensive SaaS piece of crap for it or did it with Excel. I didn't "verify" the Excel either, and couldn't control when the SaaS provider updated their end, sometimes breaking features, sometimes adding or changing them.
by theshrike79
2/2/2026 at 4:31:38 PM
I'd love to hear about what your tools do.
by bwestergard
2/2/2026 at 6:36:05 PM
You're in luck: https://theautomatedoperator.substack.com/
by 3D30497420
2/3/2026 at 2:42:48 AM
That's the place!
The most fun one is this, which creates listing images for my products: https://theautomatedoperator.substack.com/p/opus-45-codes-ge...
More recently, I'm using Claude Code to handle my inventory management by having it act as an analyst while coding itself tools to access my Amazon Seller accounts to retrieve the necessary info: https://theautomatedoperator.substack.com/p/trading-my-vibe-...
by idopmstuff
2/2/2026 at 12:09:11 PM
To me this misses a third group: those using these tools as a series of virtual teammates, a mock team member with which to ping-pong possibilities.
This is actually the greatest use case I see, and interact with.
by cik
2/2/2026 at 2:27:03 PM
Yea I am an ENFP. While I don’t think MBTI is scientific, it captures perfectly that I have the tendency to think out loud.
LLMs make me think out loud way better.
Best rubber duck ever.
by mettamage
2/3/2026 at 5:29:26 AM
Felt very weird reading this on HN and not r/ENFPmemes. I agree completely.
by DirkH
2/3/2026 at 7:46:12 AM
Yea I know. I once went into MBTI in the vein of "it's not scientific, but can I learn something useful out of it?" I tend to test close to ENFP/ENTP. I can notice tendencies of both in me. Then I went on the ENFP subreddit, as I suspected many had ADHD, and simply asked in a poll. A lot of them said that they did, as I suspected, as I'm subclinical myself (and it becomes clinical real fast if I even just sleep for 6 hours one night).
So I learned that you can definitely glean some insights from it. One insight I have is: I'm a "talk out loud thinker". I don't really value that as an identity thing but it is definitely something I notice that I do. I also think a lot of things in my mind, but I tend to think out loud more than the average person.
So yea, that's how pseudoscience can sometimes still lead to useful insights about one particular individual. Same thing with philosophy really, usually also not empirically tested (I do think it has a stronger academic grounding, but to call philosophy a science is... a bit... tricky... in many cases. I think the common theme is that it's also usually not empirically grounded but still really useful).
by mettamage
2/2/2026 at 1:21:24 PM
To me, this is the first use case, depending on whether you're aware of its shortcomings or not.
by madeofpalk
2/3/2026 at 5:49:45 AM
I'd think you'd just get a lot of bots agreeing with you.
by ottah
2/2/2026 at 8:29:23 AM
The same person might be both kinds of users, depending on the topic or just the time of the day.
by Aardwolf
2/2/2026 at 12:53:47 PM
It's almost as if categorization is often an oversimplification.
by mathgeek
2/3/2026 at 2:46:01 PM
You can get it to type for you. I don't really type the code myself anymore. I review it and then have it make corrections. Very rarely I will jump in and type, and probably should more because of the token cost, but it's novel that I can make it change the code exactly how I want, so I do that instead.
It's nice to brainstorm with too, but you have to know what you're doing.
It gets stuck on certain things for sure. But all in all it's a great productivity tool. I treat it like an advanced auto-complete. That's basically how people need to treat it. You have to spend a lot of time setting up context and detailing what you want.
So does it save time? Yea, it can. It may not in every task, but it can. It's simply another way of coding. It's a great assistant, but it's not replacing a person.
by tom_m
2/2/2026 at 9:25:47 AM
Well the second group in your taxonomy is very unserious, I mean that's fine, it's OK to use an AI tool for vibing and self-amusement, there will be an entire multi-billion dollar entertainment industry which will grow up around that. In my personal experience, decisionmakers who fell into this camp and were frothing at the mouth about making serious business decisions this way are already starting to get a reality check.
From my perspective the distinction is more on the supply side and we have two generations of AI tools. The first generation was simply talking to a chatbot in a web UI and it's still got its uses, you chat and build up a context with it, it's relying heavily on its training data, maybe it's reading one file.
The second generation leans into RAG and agentic capabilities (if you can glob and grep or otherwise run a search, congrats you have v1 of your RAG strategy). This is where Gemini actually scans all the docs in our Google Workspace and produces a proposal similar to ones we've written before. (Do we even need document templates anymore?) Or where you start a new programming project and Claude can write all the boilerplate, deploy and set up a barebones test suite within a couple of minutes. There's no doubt that these types of tools give us new capabilities and in some cases save a lot more time than just babbling into chatgpt.com.
I think this accounts for a lot of differences in terms of reported productivity by the sane users. I was way less enthusiastic about AI productivity gains before I discovered the "gen 2" applications.
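To show how low the bar for that "v1 of your RAG strategy" really is, the whole retrieval step can be a toy like this (illustrative sketch, not any particular product):

    import re
    from pathlib import Path

    def grep_retrieve(query: str, root: str, top_k: int = 5) -> list[tuple[str, int]]:
        """Dumbest possible retrieval: score each file by how often query terms appear."""
        terms = [t.lower() for t in re.findall(r"\w+", query)]
        scores = []
        for path in Path(root).rglob("*.md"):               # the "glob" part
            text = path.read_text(errors="ignore").lower()
            score = sum(text.count(t) for t in terms)       # the "grep" part
            if score:
                scores.append((str(path), score))
        return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

    # Stuff the top hits into the model's context and you have retrieval-augmented generation.
    for path, score in grep_retrieve("deployment runbook rollback", "./docs"):
        print(score, path)

Everything past that (embeddings, rerankers, agent loops) is refinement of the same idea.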
by safety1st
2/2/2026 at 3:16:39 PM
I can buy that if we stipulate that one person can belong to both groups, depending on the task and goals of the user.
Sometimes I just want the thing and really don't care about any details. Sometimes I want a very specific thing built in a very specific way. Sometimes I care about some details and not others.
How I use the tools at my disposal depends on what I want to get out of the effort.
by GrinningFool
2/2/2026 at 9:49:15 AM
Other alternatives that aren't exactly "just as a tool":
* people who use it instead of search engines.
* people who use it as a doctor/therapist/confidant. Not to research. But as a practitioner.
There are others:
* people who use it instead of man pages or documentation.
* people who use it for short scripts in a language they don't quite understand but "sorta kinda".
by absynth
2/2/2026 at 9:20:55 AM
> The second group is one that thinks talking to a chatbot will replace a senior developer
And the first group thinks that these tools will enable them to replace a whole team of developers.
by notarobot123
2/2/2026 at 2:22:13 PM
A company with 5 developers could potentially downsize to 3 developers using AI, while improving overall velocity. Would you agree?
by cheevly
2/2/2026 at 7:43:24 PM
No.
by the__alchemist
2/2/2026 at 11:31:14 AM
I think the specific examples of the first group there undersell it. They make it sound like the group isn't getting a lot of power out of the AI. The things I use it as a tool for include:
- Peer reviews. Not the only peer review of code, but a "first pass" to point out anything that I might have missed
- Implementing relatively simple changes; ones where the "how" doesn't require a lot of insight into long term planning
- Smart auto-complete (and this one is huge)
- Searching custom knowledge bases (I use Obsidian and have an AI tied into it to search through my decade+ of notes)
- Smart search of the internet; describing the problem I'm trying to solve and then asking it to find places that discuss that type of thing
- I rarely use it to clean up emails, but it does happen sometimes. My emails tend to be very technical, and "cleaning them up" usually requires I spend time figuring out what information not to include
by RHSeeger
2/2/2026 at 1:22:29 PM
What about the type of user that uses thinking/reasoning to produce more advanced tooling in order to outsource more and more of their thinking and skillsets to it? Because I myself and many others that I know fall into that category.
by cheevly
2/2/2026 at 10:20:40 PM
This. I have a friend using AI bots to do the whole "brain processing"; he is just pressing buttons now. Even some smart cooking pan they bought (as a couple) uses an app with AI in it.
I can see the day that all of these folks completely replace their thinking skills with AI, unable to find a job because they can no longer troubleshoot anything without AI.
I use AI as a replacement for a search engine. I spent 3 nights using ChatGPT to assist me in deploying a Proxmox LXC container running 4 network services, with all traffic routed to Proton VPN via WireGuard. If the VPN goes down, the whole container network stops rather than falling back to my real IP. Everything was done via Ansible, which I use to manage my homelab, and I was able to identify mistakes and fix them myself. Dude, I have learned a ton about LXC and am sort of moving away from VMs.
by h4kunamata
2/2/2026 at 3:35:44 PM
> The second group is one that thinks talking to a chatbot will replace a senior developer
Once they realize that it doesn't replace seniors but can easily replace juniors, junior devs will have a bigger problem, and the industry at large will have a huge problem in 8 years because the concept of "senior" will have vanished.
by whynotmaybe
2/2/2026 at 6:18:57 PM
Why would the concept of senior disappear in 8 years? I am a senior, and work with seniors that have been seniors for 20 years. In 8 years we will still be seniors.
by hxugufjfjf
2/2/2026 at 7:15:57 PM
My perspective is that each year a good chunk of senior devs leave pure dev work for management roles or something else, and as they do, they are replaced by juniors morphing into seniors.
I consider 8 years of real experience the bar to be considered a senior dev.
If, from now on, the number of juniors is drastically reduced, this will lead to a lack of seniors in 8 years, because seniors will keep leaving at the same rate.
In a situation where they replace juniors with agents, yes, we'll still be seniors, but just like people capable of setting a VHS recorder, our numbers will dwindle.
by whynotmaybe
2/2/2026 at 7:39:38 PM
I understand. At my work nobody senior becomes a manager so I never had that perspective. Thanks for clarifying your thoughts for me.
by hxugufjfjf
2/2/2026 at 11:35:39 AM
And what people don't understand is that these two modes, much like the ability to successfully restrict your calories and stay in shape, are dispositional more than anything. Most people will fail to "upgrade" to the better path, and people on the better path will fail to understand why most people are complaining about LLMs.
by everdrive
2/2/2026 at 11:46:26 AM
> The second group is one that thinks talking to a chatbot will replace a senior developer
No one is going to replace senior developers. But senior developer pay WILL decrease relative to its historical values.
by netdevphoenix
2/2/2026 at 11:49:17 AM
Surely making use of a new tool that makes you more productive would increase your value rather than decreasing it? Especially when knowing the kinds of mistakes AI could make that would negatively affect your codebase in terms of maintainability, security etc. requires significant experience.
by sharperguy
2/2/2026 at 5:05:13 PM
> Surely making use of a new tool that makes you more productive would increase your value rather than decreasing it?
Think wider. You, sharperguy, are not and will not be the only person with access to these tools. Therefore, your productivity increase will likely be the same as everyone else's. If you are as good as everyone else, why would YOU get paid more? Have you ever seen a significant number of companies outside FAANG permanently boost everyone's salary just because they did well in a given year?
A company's obligation is to its shareholders, not to you. Your value exists relative to that of others.
by netdevphoenix
2/2/2026 at 12:44:08 PM
Not really. If pay decreases, it's because you're not required anymore, or required less, which is contrary to what has been shown. If educating and enabling juniors etc. is not handled correctly, then senior pay will explode, because whilst they are much more efficient, their inherent knowledge is required to produce sustainable results.
by RicDan
2/2/2026 at 5:13:11 PM
> If pay decreases, it's because you're not required anymore, or required less
Not necessarily, there are many factors at play here which are downplayed. The first one is education: LLMs are going to significantly improve skill training. Arguably, it is already happening. So the gap between you and a mid dev will get narrower. At the same time, the number of candidates who can be as good as you will increase.
While you can argue that you possess specialised skills that not many do, you are unlikely to prove that under pressure within a couple of hours, and certainly not to the level where you can have late-2010s levels of negotiating power imo.
At the end of the day, the market can stay irrational longer than you can keep refusing to accept a lower offer imo. I believe there will be winners. But pure technical skill isn't the moat you think it is. Not anymore.
by netdevphoenix
2/2/2026 at 11:35:19 AM
I agree, but there is a creeping issue of where the first group can delve deeper into a topic if all they/we have is an increasingly polluted internet.
by CrzyLngPwd
2/2/2026 at 3:02:31 PM
I think there's some middle ground possible between those two black-and-white groupings.
by bwat49
2/2/2026 at 10:19:03 AM
I think you miss a third user: a developer generating entire systems while still having an understanding of the output. That dev is in control of the architecture, code quality, functional quality and more. These people are still rare. But I have seen them already. They are the new 10x developers.
by holoduke
2/2/2026 at 1:46:59 PM
Ironically, I find LLMs far better at helping me dive into unfamiliar code than at writing it.
A few weeks ago a critical bug came in on a part of the app I’d never touched. I had Claude research the relevant code while I reproduced the bug locally, then had it check the logs. That confirmed where the error was, but not why. This was code that ran constantly without incident.
So I had Claude look at the Excel doc the support person provided. Turns out there was a hidden worksheet throwing off the indices. You couldn’t even see the sheet inside Excel. I had Claude move it to the end where our indices wouldn’t be affected, ran it locally, and it worked. I handed the fixed document back to the support person and she confirmed it worked on her end too.
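For illustration, the whole fix amounts to a few lines of this shape (a sketch using openpyxl, not the exact script it produced; the filename is made up):

    from openpyxl import load_workbook

    wb = load_workbook("support_upload.xlsx")   # illustrative filename

    # Sheets the user can't see in Excel have sheet_state 'hidden' or 'veryHidden'
    hidden = [ws for ws in wb.worksheets if ws.sheet_state != "visible"]

    # Move them to the end so position-based indexing over the visible sheets holds up
    for ws in hidden:
        wb.move_sheet(ws, offset=len(wb.worksheets) - 1 - wb.worksheets.index(ws))

    wb.save("support_upload_fixed.xlsx")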
Total time to resolution: 15 minutes, on a tricky bug in code I’d never seen before. That hidden sheet would have been maddening to find normally. I think we might be strongly overestimating the benefits of knowing a codebase these days.
I’ve been programming professionally for about 20 years. I know this is a period of rapid change and we’re all adjusting. But I think getting overly precious about code in the age of coding agents is a coping mechanism, not a forward-looking stance. Code is cheap now. Write it and delete it.
Make high leverage decisions and let the agent handle the rest. Make sure you’ve got decent tests. Review for security. Make peace with the fact that it’s cheaper to cut three times and measure once than it used to be to measure twice and cut once.
by actsasbuffoon
2/2/2026 at 12:49:19 PM
I have been diving deeply into the Rust community and ecosystem and really enjoyed reading the decade of real engineering poured into it, from RFCs to std, critical crates such as serde, and testing practices. What a refreshing world.
Compared to the mess created by Node.js npm amateur engineers, it really shows who is 10x or 100x.
Outsourcing critical thinking to pattern matching and statistical prediction will make the haystacks even more unmanageable.
by ontouchstart
2/3/2026 at 4:15:40 AM
It is so tiring to read this unoriginal copy-paste group-think literally everywhere. There is no "reasonable people versus opportunist bastards" war going on. This is a fantasy at best.
> People using it as a tool, aware of its limitations
You can't know the limitations of these tools. It is literally unknowable. Depending on the context, the model and the task, it can be brilliant or useless. It might do the task adequately first time, then fail ten times in a row.
> People outsourcing thinking and their entire skillset to it
You can't get out of it something that you can't conceive of. There are real life consequences to not knowing what AI produced. What you wrote basically assumes that there is a group who consistently hit themselves on the head with a hammer not knowing what hurt them.
by nurettin
2/2/2026 at 12:11:44 PM
I find myself in both groups depending on the project/task. I wonder what to make of that.
by nisegami
2/2/2026 at 11:26:00 AM
What about me? I'm in group 3 and I can't be alone.
I'm a subject matter expert with 45 years in programming and data, aware of the tools' limitations, but I still use them all day every day to implement non-trivial code, all the while using other tools to do voice transcription, internal blog posting about new tools, agents gathering information while I sleep, various classifiers, automated OCR, email scanning, recipe creation, electronics designing, and many many other daily tasks.
by delaminator
2/2/2026 at 10:04:55 AM
The second group are often the management decision makers, holding budgets, setting up 5-year plans, etc. Don't underestimate or mock them; in the end it's a disservice to all of us.
by kakacik
2/2/2026 at 3:37:00 PM
That is a great point. People in this group are programming at a different abstraction level, i.e., allocating computing resources, both human and machine resources.
Now AI agents are cheap but they generate a lot of slop, and potential minefields that might be costly to clean. The ROI will show up eventually and people in the second group will find out their jobs might be in danger. Hopefully a third group will come to save them.
by ontouchstart
2/2/2026 at 3:07:02 PM
You can split the second group into two sub-buckets.
Junior devs, who have limited experience or depth of knowledge. They are unable to analyze the output of AI coding agents sufficiently to determine the long-term viability of the code. I think this is the entirety of who you're speaking of.
Senior devs, who are using it as more than a basic task executor. They have a decade+ of experience and can quickly understand if what the AI coding agent suggests is viable long term or not. When it's not, they understand how to steer it in a more appropriate direction.
by jmathai