5/11/2026 at 3:46:01 PM
Multiple times per week I have the same conversation. It goes something like this:
- AI will make developers irrelevant
- Why?
- Because LLMs can write code
- Do you know what I do for a living?
- Yes, write code?
- Yes, about 2-5% of the time. Less now.
- But you said you are a developer?
- I did
- So what do you do 95-98% of the time?
- I understand things and then apply my ability to formulate solutions
- But I can do that!
- So why aren't you?
The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.

Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
by bborud
5/11/2026 at 11:30:32 PM
On one of my very first jobs in around 2000 I got paired with a much more experienced software engineer. He’d been a pro since the early 70s. I was stoked to learn from him.

On like my fourth day he said “now I’m going to teach you the thing that helped me the most in my career…” I waited, ready for the received wisdom. And he said “always number your punch cards so if you drop them they will be easy to put back into order”. I was upset. We were long past the point where punch cards were in use. And then he said “I said what would help _me_ the most, not what would help _you_. Software is always changing”.
I’ve thought about that a lot lately.
by kasey_junk
5/13/2026 at 11:07:00 AM
I think it's going to be a shift in skill set, like the constant change that has gone before. I've always primarily considered myself an application developer, but of late I've become more software architect, more devops, more tester. Software engineering skills always leaked into these domains before AI; I think it's just a shift in the time spent there now that we're not manually writing so much code. And shipping is STILL hard. I've moved my focus to retooling - consolidating much of my workflow in a tool I'm working on, https://www.agentkanban.io - a remote kanban board with agent harness integration (GitHub Copilot currently) and context management.
by gbro3n
5/12/2026 at 6:52:27 AM
That conversation seems like he was covertly teaching you about linked lists.
by snvzz
5/12/2026 at 2:18:13 PM
Or perhaps a peek into how fast software engineering is changing: what works for you now may be irrelevant in the future, so be prepared to be adaptive!
by mecHacker
5/11/2026 at 5:26:06 PM
This is a bit of a glib answer. Most of the time is spent coding, which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.

OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together. They went in with a plan, but the reality didn't agree and they are on a tight schedule.
by doug_durham
5/11/2026 at 5:37:48 PM
Most of the time is spent figuring out what the right thing to do is, not writing the implementation. Sometimes the process of writing the implementation surfaces new considerations about what the right thing is, but still, producing text to feed to a compiler is not the bulk of the work of a software engineer. The work is to unearth requirements and turn them into repeatable software.
by estebank
5/11/2026 at 7:00:28 PM
Feels like lately most of the time is spent arguing about or at least worrying about whether or not AI is going to replace all software developers.
by powvans
5/12/2026 at 4:31:17 PM
Or dealing with the idiotic fallout of somebody who sucks at coding or even has never coded in their life trying to make that happen.
by pydry
5/11/2026 at 7:55:38 PM
If you’re spending time thinking and not experimenting, it’s because experimentation is expensive. With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly. All this pontificating really isn’t that useful anymore.
by 7e
5/12/2026 at 1:45:52 AM
> With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly.

We saw a similar philosophy in TDD advocacy many years ago. Search for something like "Sudoku Jeffries" to see how that went. Then search for "Sudoku Norvig" to see what it looks like when you actually understand the problem.
The idea that you can somehow iterate your way to a solution when you have no idea where you're trying to go, or even which direction your next step should be in, has always seemed absurd to some of us, but in the era of LLMs there's no longer any doubt. In the agentic era (can we call a few months an "era"?) I estimate that 90% or more of the writing I've read about how to use agents most effectively came down to making sure there is a clear specification for what they need to implement first and then imposing extensive guard rails to make sure their output does in fact follow that specification. It's all about doing enough design work up front to remove any ambiguity before coding the next part of the implementation, and almost everyone claiming any sort of real-world success with coding agents seems to have reached a similar conclusion.
by Silhouette
5/11/2026 at 8:23:25 PM
This is very naive and reductive thinking. Experiments have a cost; you really have to think carefully about what you are trying to learn. Even when code is cheap, traffic and time are still huge constraints, and you had better make sure your hypothesis actually makes sense for your goals, because AI is more than happy to fill in the blanks with a plausible but completely wrong proposal.

More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure, the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.
by dasil003
5/12/2026 at 5:56:46 AM
Even writing code the good old way, of course we experiment. I remember the old rule "Plan to throw away the first one. You will anyway." But then there's the "second system effect", where the second system is supposedly always overengineered, trying to take every possibility into account.

And then there are the times when the quick sloppy PoC you planned to throw away gets forced into production and is still impossible to change ten years down the road.
AI makes all these problems so much less painful.
I worked at a company which had a huge monolithic ERP system (their product, to be clear) with no good separation between the GUI layer and the underlying logic. The GUI was also dependent on an ancient version of the Borland C++ compiler. They put in a humongous effort to move to a slightly more modern UI library and a client-server architecture.
However, someone had decided that messages in XML or JSON were too inefficient; they already had performance issues. So they went with a binary message protocol of their own design - with no provision for protocol updates. Everything communicating with the server had to be on exactly the same version, or it would throw an error. So of course they very, very rarely updated the protocol.
I think the best use of AI will be to clean up such real-life messes of soul-crushing architectural regret. Will it do it perfectly? Certainly not. But I wouldn't do it perfectly myself either if I were forced to do it - and I'd take a hell of a lot more time doing it.
by vintermann
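The protocol-update problem described in the comment above is usually avoided by leading every message with an explicit version field. A minimal sketch of such a framed message; the field layout, widths, and names here are hypothetical, not the company's actual protocol:

```python
import struct

# Hypothetical frame: version and message type first, then payload length,
# then the payload. A receiver that sees a version newer than it supports
# can fail with a clear error (or keep handlers for older versions around)
# instead of requiring every peer to upgrade in lockstep.
HEADER = struct.Struct(">HHI")  # version (u16), msg_type (u16), length (u32)

def encode(version: int, msg_type: int, payload: bytes) -> bytes:
    return HEADER.pack(version, msg_type, len(payload)) + payload

def decode(data: bytes, max_supported: int):
    version, msg_type, length = HEADER.unpack_from(data)
    if version > max_supported:
        raise ValueError(f"peer speaks v{version}, we support up to v{max_supported}")
    return version, msg_type, data[HEADER.size:HEADER.size + length]
```

With the version negotiated per message, the server can accept several protocol versions at once, so the "everything must match exactly" failure mode disappears.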
5/11/2026 at 11:47:30 PM
I think you and 7e are both right. Being able to iterate some N orders of magnitude quicker is a big deal. This doesn’t eliminate design and UX. Rather, it merges them with high iteration speed to produce a form of “play”.

“Play” is what produced at least two (likely more) generations of attentive (and therefore competent) programmers. The hype around LLMs is painful, yes, but attentive human minds will ultimately bust through it.
by avador
5/11/2026 at 8:01:47 PM
And before long you have a solution that is made up of a thousand pieces of spaghetti that neither you nor anyone else understands. And when your solution becomes too brittle to use, cannot be maintained, or fails catastrophically, then what? Just hope that's someone else's problem?
by GolfPopper
5/11/2026 at 8:20:53 PM
Refactoring is cheap too, but you have to read your code and know when to stop and ask the agent to refactor, rewrite, adopt or change libs, fix issues flagged by linters and code quality scanners, change abstractions, and rethink the architecture.

It's never been easier to replace chunks of code with sane software patterns, but you have to have a feel for those patterns. And also understand what's under the hood.

You folks speak as if the only function of the agent is to spit out code and features. Get a grip and treat your deliverables with care; otherwise you only have yourself to blame, not the AI.
by gchamonlive
5/12/2026 at 8:32:51 AM
Refactoring is not cheap when you take into account the cost of not breaking things.
by microflash
5/12/2026 at 9:16:53 AM
When we say "X is cheap", it's in comparison to doing things manually, not irresponsibly.
by gchamonlive
5/11/2026 at 8:23:01 PM
That's the point. Your prototype doesn't need to be pretty. It just needs to prove that the value is there for it to be made pretty.
by a10c
5/11/2026 at 11:59:10 PM
Order of operations: Make it work. Make it right. Make it fast. [0]

[0]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
by avador
5/12/2026 at 2:22:20 AM
Unfortunately, too many developers (especially in the AI era) stop after the first item.
by didgetmaster
5/12/2026 at 6:53:30 AM
Stop... short of.
by snvzz
5/11/2026 at 10:47:26 PM
You actually get what you ask for. And you can ask for anything, vaguely or not. You'll end up with spaghetti if you play a bad manager and only ever allocate time for new features and never for cleanups.
You can go through code, add REFACTOR comments based on your tastes and thoughts, and get your result and iterate to your heart's wishes. You just don't need to do the direct code typing.
by megous
5/12/2026 at 9:29:56 AM
Well - you converge to a system, but do that by pruning what you don't want.

If you care about maintainability and quality (and I include maintaining using LLM-based tools) then you need to understand what it does. In doing so you will find lots of things for it to fix - you'll probably find that the architecture it's chosen is not right for what you want, too.
by stuaxo
5/11/2026 at 9:41:33 PM
So the infinite monkeys with infinite typewriters approach.
by jimbokun
5/11/2026 at 11:26:01 PM
aka "swarms". cool sounding name for.. throw yet more mud at the wall. at unprecedented scales
by darepublic
5/11/2026 at 8:58:08 PM
> If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive.

No, because no amount of experimentation can solve many of the problems that have been solved by thinking. Even your claim about "experiments are cheap" requires thinking to decide what experiments to do. No one is generating all possible solutions that fit in X megabytes; you have to think to constrain the solution space.
by antonvs
5/11/2026 at 10:28:51 PM
AI is pretty good at figuring out what the right thing to do is.
by eweise
5/11/2026 at 10:32:44 PM
AI is pretty good at pulling from the body of existing solutions of what the right thing to do is.
by incanus77
5/12/2026 at 5:44:35 PM
I find that it often pulls a solution that is good enough for this problem today. Sometimes that is great, and other times it's just creating a pile of shit.
by mkoryak
5/11/2026 at 11:35:32 PM
AI is decent at solving problems that lots of people already solved a long time ago.
by CyberDildonics
5/12/2026 at 2:29:29 AM
Too many people believe that AI is going to come up with elegant solutions to problems that no one has ever solved before. Maybe someday, but for now it seems to be good at finding a solution that may be hidden away somewhere on Stack Overflow. If it just isn't there, then you are out of luck.
by didgetmaster
5/12/2026 at 4:17:54 PM
There is almost nothing new in computer programming. 99.999% of the code most of us on this forum write will be repeating patterns that have been written thousands of times before.

Tell a coding agent what your new thing needs to do, give it the absolute constraints (max response times, max failover times, and so on), tell it which technologies it has access to or could use, and then tell it to spend a lot of time going over and over the design, coming up with an initial X number of designs (I use 5). It must then self-criticise each one and weigh them up, narrowing down to three, before finally presenting those three options to the user.
Now you read the options, understand them, and realise that the AI has either converged on something very sensible or missed something; if the latter, you tell it what it missed and iterate. Once it has nailed something good, you pick the option you prefer and tell it to produce a more fleshed-out high-level design, describing the flow and behaviour deeply (NO CODE REFERENCES!). Then, once you're happy, tell it to use that to write a comprehensive coding plan. Tell it specifically what coding patterns you prefer (you should have these in your AGENTS.md file already) and what patterns to avoid (single-threaded? multi-threaded? avoid GC? how you typically deal with error conditions, etc.).
Then have it start iteratively working on the coding plan, and it *MUST* have a strong feedback loop. If there is no feedback loop initially, I tell it to build one. It must be able to write very fluent integration tests (not just unit tests). It must be able to run the app and read the logs.
Do all this and I bet you get a better result than 80% of developers out there. Coding agents are extremely good when used well.
by munksbeer
5/11/2026 at 6:43:11 PM
Glib is called for. The amount of information asymmetry that's still on the table as vibe coders and vibe engineers and vibe doctors emerge is staggering. Professional experience is still incredibly valuable. Most software developers might spend more than 6% of their time coding but no senior developers are banging their heads for hours over typos.by ecocentrik
5/11/2026 at 7:43:56 PM
These days nobody bangs their heads over typos.

LLMs evaporated 90% of the "moments of despair" when you have an error and googling it isn't helping, or googling it made you realize you have to read 30 minutes of documentation.
Coding is a joy now. LLMs shaved off all the rough edges.
by roncesvalles
5/11/2026 at 9:52:50 PM
They created other kinds of despair.

A year ago I would've told my boss “can't be done” about my work today. I'd tell him to get me the right person to talk to (our partner, not an alien) who could give me some insight into what the hell I'm supposed to be doing to consume their API. Or to at least explain why it is that this can't be done.
Nowadays, I spent a couple of weeks reverse engineering their terrible ideas. Yeah, it worked. But it's a complete waste of my time, and tokens, energy, chips and RAM. And worst of all, it will lead to a terrible design.
That will work, but will eventually collapse under its own weight, as we use our increased power to increase our sloppiness and take it a little further. Because we can manage it. For now.
by ncruces
5/11/2026 at 8:41:04 PM
LLMs moved the moments of despair to PR reviews for me. It used to be that you could check on a junior dev occasionally throughout the day to make sure they're on the right track. Now you step away for 2 hours and they're raising a PR of bad-code-smell spaghetti and moving on to repeat their AI slopfest on the next task.

It's getting hard to keep up with trying to teach new devs what bad code looks like. And I swear sometimes they just copy my PR comments into their AI tool to fix the mistakes without any of the learning.
by arcboii92
5/11/2026 at 9:44:58 PM
At some point there needs to be an uncomfortable conversation about how if all they’re doing is copy-pasting everything they get from you into ChatGPT, you can do it yourself for much, much cheaper.
by jimbokun
5/11/2026 at 11:16:06 PM
How? Management in most tech companies are incentivizing them to do just that, so if you bring it up, they'll happily trot over to your manager to complain, and then the uncomfortable conversation is you with management about why you're getting in the way of AI uptake by the team.
by trustfundbaby
5/11/2026 at 9:35:06 PM
Don’t allow juniors to use AI. It’s like university exams: no programmable calculators allowed. Review assistants, or seniors who know what’s going on, should though; it does help when used correctly.
by eecc
5/11/2026 at 9:40:02 PM
Write a damn good automated review agent that runs against their PRs before even looking at them… works well for me!
by FrankRay78
5/11/2026 at 10:37:02 PM
I've tried this without much luck. In my experience they get too bogged down in surface things and don't have the necessary business requirements/context to understand and find actual bugs.

How have you set yours up so that it works well for you?
by hackeman300
5/12/2026 at 2:27:07 AM
So create a context document that explains the business context, and add that to the agent.

Take the bad result that you're getting, and pretend it's coming from an enthusiastic junior. What would you tell them to make them do this task better? Add that explanation to the agent (or explain that to the LLM and get it to add that to the agent; I have found this to work as well).
When you create a task for the LLM, get it to create a requirements document that lists all the requirements. Feed that into the review agent so it understands what the code agent was trying to do.
The LLM will do what you tell it to do. It doesn't magically understand what you want it to do. You have to tell it what to do.
by marcus_holmes
5/11/2026 at 7:57:13 PM
You can't possibly believe this, or you and I (and many others) are doing something different. LLMs have created an entire new - huge - set of bang-your-head moments, as they go off half-cocked in a million simultaneous directions, chasing their tails, or just making shit up. And since the vast majority of work is on existing - often ancient - codebases, let's find out if you feel the same way in 18 months.
by skeeter2020
5/11/2026 at 8:04:51 PM
LLMs are great for anyone who isn't responsible for the consequences of what they code.
by GolfPopper
5/11/2026 at 11:16:10 PM
That's only if you do agentic coding. I use LLMs in the following ways:
1. Copy-pasting code into the web chat UI and asking for something (bugfix, add a feature, refactor, explain, review it etc), including entire source code files. A $20/mo Gemini subscription goes a long way (never been rate-limited). I only use the highest model. I often just copy-paste the entire source file between 3 backticks.
2. Cursor Tab. I do have hotkeys to enable and disable it; it's disabled most of the time otherwise it gets annoying.
3. Single-file changes directly from Cursor's AI sidebar. I only do this for simple, predictable stuff because even their auto-routing "Premium" setting is not as good as pasting stuff into Gemini 3.1 Pro.
That means I have only two $20/mo subscriptions: Gemini and Cursor.
I don't use Claude Code; it's really for people who don't know how to code. I don't use Plan Mode; I make and track the plan myself (if at all). I only give the LLM granular tasks to execute. I don't use `claude.md` or `agents.md` or anything like that. If I don't like a particular output, I reset everything, modify my prompt, and try again.
I believe this is the only way to fully leverage LLMs without losing any product quality. If you're trading off quality for "speed" (in quotes because over the long term, a low quality codebase is a massive drag on productivity) then there's no point.
by roncesvalles
5/12/2026 at 12:30:06 AM
I _think_ what you’ve said is “go shallow, not deep”. That is, don’t let the walk you make inside the latent space be a long one. Twenty-five short and peppered steps, from de novo, is better than one long, protracted stew.

Is that accurate?
by avador
5/12/2026 at 4:40:02 PM
Well yeah. If you know what you're doing, why would you let the AI take control?
by roncesvalles
5/12/2026 at 9:45:19 PM
Well, if it works on step one, then why not step two? Where would different folks draw the line? My grandparents might continue on a while, whereas I would not. But if it also “works” on step two for me, should I take a third?

What counts as “works” is the important bit, I think.
by avador
5/12/2026 at 1:47:36 AM
Yes, if you're using them to write large chunks of code or entire features. If you just use them to clear up some trivial problem in an unfamiliar technology that you used to spend 30 minutes googling with 50 tabs open, or for stuff like writing a method to filter, map, and reduce an array based on specific criteria, they're a godsend.
by suzzer99
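The kind of throwaway helper the comment above describes - filter, map, and reduce an array against some criteria - looks roughly like this (the order data, field name, and threshold are invented for illustration):

```python
from functools import reduce

def total_large_orders(orders, min_total=50):
    """Sum the totals of all orders at or above a threshold."""
    kept = filter(lambda o: o["total"] >= min_total, orders)  # filter
    totals = map(lambda o: o["total"], kept)                  # map
    return reduce(lambda acc, t: acc + t, totals, 0)          # reduce
```

Exactly the sort of five-second question that used to mean half an hour of tab-juggling.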
5/11/2026 at 9:42:52 PM
Give them work in smaller chunks.
by jimbokun
5/11/2026 at 10:27:05 PM
Maybe I'm weird, but my usage has been very conservative. As in, I treat the LLM like a junior dev that I have to micromanage and handhold.

I am terrified of allowing these things to complete tasks end-to-end with nothing intervening. Maybe that's why I don't run into many of these issues. I mostly delegate grunt work and manual tedium to the LLM, not reasoning or design choices. I may consult the LLM and ask for criticism, but there is no way I'm going to allow it to quietly make design decisions that I don't know about.
by lo_zamoyski
5/12/2026 at 2:19:47 AM
You are in charge of what the LLM does. If it's running off half-cocked in a million simultaneous directions, that's on you. Write better skills. Tell it not to do that. Break into its loop and ask it wtf it thinks it's doing. If it's making shit up, force it to test more.

The LLM will do what you tell it to do. Manage it.
by marcus_holmes
5/11/2026 at 7:52:03 PM
Languages have been reporting compile and runtime errors for decades. Additionally, most senior developers already have their minds wired to spot typos the way copy editors spot bad punctuation. Typos were only really a problem for students.
by ecocentrik
5/12/2026 at 2:30:12 AM
I've been writing C++ for almost 20 years at this point, and I still benefit from how good Claude is at gnarly template error messages.
by neutronicus
5/11/2026 at 7:57:24 PM
> LLMs evaporated 90% of the "moments of despair"

And then condensed an equal quantity of despair out of the ether via confident confabulations.
by kibwen
5/11/2026 at 8:18:33 PM
Equal? No, no no no. Upper management is making PoCs that promise to solve longstanding multi-year learnings of tradeoffs and solution balancing, and setting goals based on that. We are heading for a cliff, and everyone is going to learn what happens when you replace already vulnerable foundation pillars with pig iron.
by taurath
5/12/2026 at 1:45:16 AM
100%. Googling when you don't even know enough to ask the right questions, with 50 tabs open and trying to read down to the 3rd or 4th Stack Overflow answer (which is usually the best for some inexplicable reason), was my least favorite part of development.

I don't miss wasting an hour on a problem in a technology I'm not familiar with, where it's not like a big conceptual thing but something I could clear up in 5 seconds if I just had an expert in the room.
by suzzer99
5/11/2026 at 11:48:11 PM
LLMs create typos in the code they write all the time, Claude 4.7 included. Maybe you are using some next-gen super-AI nobody else has? Or you're just lucky.
by leptons
5/12/2026 at 2:31:02 AM
So get the LLM to test and fix those typos. Why are you letting it mis-spell things?
by marcus_holmes
5/12/2026 at 6:38:18 PM
Maybe you aren't familiar with how AI works. It writes the code for you. Nobody is "letting it mis-spell things". You run the code it wrote, and it fails. You look through the code the AI wrote and find the typo it put in there, or give the AI the error for it to fix - but it still created the typo, and that is the main point here. AI often ignores the rest of the document and does what-ever-the-fuck it wants to make you stop prompting it, without any real concern for correctness.
by leptons
5/13/2026 at 12:14:51 AM
No, that's not how it works.

It writes the code for you. Then it runs the tests. Then it runs the linter. Then it runs the static analysis tool. If any of those fail, it rewrites the code and runs them all again.
You only look at the code once it has done all of that.
If AI is ignoring the rest of the document and doing whatever it wants then you need to improve your document-writing skills. You can ask it why it did something, that helps discover how to improve. It's a process of refinement and discovery, just like learning how to use any new tool.
by marcus_holmes
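The write/test/lint/static-analysis loop described above can be wired up as a simple harness the agent must pass before a human reviews anything. A sketch; the three commands are placeholders for whatever tools your project actually uses:

```python
import subprocess

# Checks the agent's output must pass before a human looks at the code.
# Swap in your own project's test runner, linter, and type checker.
CHECKS = [
    ["pytest", "-q"],        # tests
    ["ruff", "check", "."],  # linter
    ["mypy", "."],           # static analysis
]

def run_checks(runner=subprocess.run):
    """Run every check; return (tool, output) pairs for the failures."""
    failures = []
    for cmd in CHECKS:
        result = runner(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append((cmd[0], result.stdout + result.stderr))
    return failures  # an empty list means the code is ready for review
```

Feed the failure output back to the agent and loop until the list is empty; only then read the diff yourself.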
5/11/2026 at 7:40:00 PM
This is temporary. What is the SKILL.md equivalent going to be in five years? In ten? Don't you already see a pattern emerging around solutions to encode that "professional experience" into the tools themselves?

These LLMs can already incorporate our entire cultural corpus, yet your "professional experience" is the threshold they won't cross?
by pear01
5/11/2026 at 8:11:35 PM
The word “incorporate” is doing some very heavy lifting in your assertion. These LLMs already have access to the whole corpus of architectural knowledge and software best practices, and yet they’re unable to reliably implement those best practices. Why not? Why do they often make completely unintuitive decisions, even when repeatedly prompted to ask clarifying questions?
by datsci_est_2015
5/11/2026 at 8:18:45 PM
To be clear, by that and "cultural corpus" I meant their skill with natural languages. It is well known, for instance, that early LLMs were curiously better at composing sentences in English than at doing basic math.

Regarding such formal reasoning, we have already seen marked improvement in the last year or two alone. The question is how this weighs on your prediction re their capabilities in the next two, five, ten, etc. years.
by pear01
5/11/2026 at 8:49:38 PM
What are the properties of LLMs that have convinced you that there remains emergent complexity (e.g. the “ability” to formally reason) that we have not yet seen?
by datsci_est_2015
5/11/2026 at 9:10:14 PM
There may be gains to be had in such emergence, but that is not where I see the gains in the next five years. Those gains will be made by connecting LLMs more robustly with formal reasoning, which computers are already very good at. Continued iteration on connecting these right/left brain faculties could then lead to further emergence down the line.

The present notions of harnesses, structured output, or looping in the LLM to some external state or sandbox - be it debugger output or embedding into a runtime - already show early promising results along these lines. I see no reason to believe these gains will not continue over the next five years.
If you have some theories in the converse in that regard I am all ears.
by pear01
5/11/2026 at 9:39:53 PM
Extraordinary claims require extraordinary evidence, not the opposite. There’s no current evidence to suggest limitless progress, or even superlinear progress with regards to compute and energy. My guess would be sublinear or even logarithmic progress vs. linear growth in compute and energy, as that’s how most physical systems behave.
by datsci_est_2015
5/11/2026 at 9:52:43 PM
No one said unlimited progress. Let's not revert to straw man claims.

If you think the potential of LLMs is overblown, feel free to short the market. I don't pretend to know the future. But if I may, I don't think you are framing the debate in the correct terms. Evidence is an important facet of human affairs. So is risk. Best of luck with your predictions.
by pear01
5/12/2026 at 1:21:26 AM
Markets can remain irrational longer than anyone can stay solvent (especially when wealth is as concentrated as it is currently: one doofus can keep an entire industry afloat).

“Unlimited progress” is not a statement on the rate of progress; it’s a statement on the limits of progress. It’s a much weaker claim than you’re framing it as. Your claim very much is that we have not yet reached the limits of LLMs’ potential. My claim, conversely, is that we’re already reaching diminishing returns, which are being masked by a massive influx of compute and energy. My short: LLMs are not the path to AGI.
by datsci_est_2015
5/11/2026 at 11:03:29 PM
I really don't like this framing - it's hard to short a market at the best of times, let alone when governments have a vested interest in tech being too big to fail in order to compete in the global economic arms race - see Intel's stock in the past few months.

I agree with you both - undoubtedly there are still massive gains to be made with the frontier models we have today through tooling and iteration, yet I do not believe there's sufficient evidence to claim we are rolling towards AG/SI on an exponential curve without some additional breakthroughs, given the jagged edges and the fundamentally linear growth of the data used to train models.
by thinkthatover
5/12/2026 at 4:39:34 AM
Just remember you don't need AGI to see massive societal change. Certainly not for mass layoffs. AGI is not the bar. By the time we all agree AGI has come, the world will have already changed.

You just need AI to be good enough to win the tradeoff over a human employee. Just take your average office, then ask yourself if the bar is really that high. AGI strikes me as an extremely nebulous concept. Better to just list everyone at your office and bucket them with a guess of how soon you think AI will replace them, or weaken their market power. This is what every corporate boss in America is already doing. I'm merely suggesting that rather than hope a graph curves in our individual favor, we try to act more collectively as a species. Of course, I don't hold my breath.
I also don't find myself compelled by the notion that the danger to humanity is "AGI". The true danger is as it always has been - each other.
by pear01
5/12/2026 at 11:49:59 AM
> Just take your average office. Then ask yourself if the bar is really that high.

How many years away do you think we are from a “concierge” AI that can do the menial tasks handled by most personal assistants / program managers? Booking flights and hotels and coordinating employee availability?
by datsci_est_2015
5/11/2026 at 9:03:29 PM
> Why do they often make completely unintuitive decisions

Most likely because you haven't constrained their behavior in your prompt. You're making the assumption that they "understand" that using best practices is what you want. You have to tell them that, and tell them which practices they should use.
by antonvs
5/11/2026 at 9:36:03 PM
They already consistently fail to follow very simple and concrete instructions like “Please do not ever mock this object, always properly construct it in your tests”, so I’m not sure how they’re going to adhere to more vague and conceptual architectural paradigms. This is a problem with generative AI in general - image generation has similar limitations.
by datsci_est_2015
5/11/2026 at 10:01:40 PM
Senior developers know what behavior to constrain.

If incorrect LLM output is a prompt issue, then demand for experienced developers will remain, and demand may actually increase as time passes.
by antihipocrat
5/11/2026 at 7:46:46 PM
The capacity of the person prompting it to understand is the threshold they won't cross. They can squeeze the gap as much as possible by dumbing down answers or slowly ramping up information complexity, but there is a limit to comprehension.
by ecocentrik
5/11/2026 at 7:57:22 PM
This is an interesting answer to questions about human agency and accountability/personhood, but I don't see how it leads to increased confidence in the role of the human as SWE.
If LLMs get good enough, one might be tempted to ask: so what if most humans can't understand the output? Human civilization has by and large been a constant exercise in us collectively accomplishing more and more while individually comprehending less and less.
Our ancestors likely understood more about hunting live game or murdering each other than we do. Most of us do not consider that a great loss. Most of us living in the modern world depend on things we don't fully comprehend. I'm just not sure how this would lead to being reassured re the human as SWE.
by pear01
5/11/2026 at 8:45:42 PM
We don't need as many hunters because we've domesticated sources of meat. We still need ranchers, butchers... an entire supply chain to get meat to consumers. We didn't remove humans from the loop, we just created specializations.
Software specialization might look very different in 10 years but I doubt that technically specialized humans will be completely removed from their professions. We might not be carrying bows and arrows anymore but we will be carrying the equivalent of a rope and a Stetson.
by ecocentrik
5/11/2026 at 9:23:04 PM
Ranchers, butchers... and factory farms. Most of the meat Americans consume has had very little interaction with a person until it is being devoured on the plate.
I appreciate your points. I agree with you that not all "technically specialized humans will be completely removed" but let's not pretend the comparison is going from a caveman with a spear to a cowboy with a lasso. If you concede it is likely to be very different at some point, calling it SWE is no longer useful.
I think SWEs would be better off realizing they have enjoyed a relatively extreme level of privilege, and rather than trying to hold onto it, use what time they still have to advocate for a more egalitarian society, even if that means giving up some of their gains. Otherwise, speaking of farming, the mass layoffs to come, after software has been disrupting blue collar jobs for decades, will really be a chickens-coming-home-to-roost moment.
by pear01
5/11/2026 at 10:47:16 PM
Now you're arguing against your own analogy? Hunter was a ubiquitous position in human society prior to the domestication of animals: 50% of the workforce in hunter-gatherer societies. Today, 12 millennia after the domestication of wildlife, that number is down to 9-14% of the global workforce dedicated to the production, distribution, processing, and sale of meat (not including cooked food), according to Opus.
Considering that only 1% of the US workforce was a software engineer, I expect similar workforce optimization to occur in software engineering specializations over the next 12,000 years. /s But seriously, it's never going to zero.
by ecocentrik
5/12/2026 at 4:25:13 AM
No one said it's going to zero. It doesn't have to go to zero for lives to change. Would you rather be a cowboy or a factory farmer? The latter are some of the least desirable jobs in the entire world. The fact that millions of people still do them isn't the point in your column you think it is.
by pear01
5/11/2026 at 9:47:58 PM
The software specialists may be replaced entirely by subject matter experts.
No need for specialized commercial software if everyone can just explain to the computer what they want in English.
by jimbokun
5/11/2026 at 8:32:12 PM
Do you really want to live in a world where nobody understands the software that manages a nuclear power plant? Or medical devices? Or financial software? Or radio transceiver firmware? Even something as boring as a database, if not understood, could have disastrous effects if it were the government database managing people's IDs. Even if it worked fine for years, what would happen if a bad actor influenced models to generate code with security issues? If nobody can comprehend the output, how would anybody be able to think about the danger? This is even more grim than this https://www.citriniresearch.com/p/2028gic
by npodbielski
5/11/2026 at 8:42:40 PM
We live in a world with nuclear weapons. Somehow we all cope and get up every morning. I think you are missing the point: the world is already grim. It always has been. What about human affairs in the last century alone makes you think human oversight is some panacea? The impetus for civilization was not some innate desire for financial systems or medicine. It was not having other humans murder you. The Leviathan is already here.
The article you shared has little to do with this. Questions of how to divide up the gains technology creates are separate from questions about the technology itself. Tbh I found what you shared so boring I could barely finish it. I already made an exhortation in this thread to support politicians who commit to erasing inequality. The idea that LLMs can only exist with inequality is nonsensical. The only thing grim about what you shared is the lack of political imagination. It's boring.
by pear01
5/11/2026 at 9:49:07 PM
At least we have people who understand the technology underlying nuclear weapons!
by jimbokun
5/11/2026 at 9:26:43 PM
Maybe the tables will turn and people will ask, do you really want to live in a world where things aren't designed by machines (smarter than us)?by esafak
5/11/2026 at 8:50:13 PM
Your answer reminds me that my biggest gripe with this site and programmer forums in general is the lack of awareness of the breadth and scale of software development. I'm curious what you work on, because it doesn't sound anything like what I work on.
> Most of the time is spent coding which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.
I don't think I've experienced this to a large degree. Maybe early in my career. Most of my time now is spent formulating a solution, and time spent coding is mostly spent trying to compose my changes with the existing code in a way that is performant, reliable and meets the specifications.
by 01100011
5/11/2026 at 7:38:46 PM
This is far more true for junior and perhaps mid-career engineers, unless you're working in an extremely well-defined problem space (* see below).
When working as a SWE, the longer I did it (~30 years) the more of my time was spent understanding the problem, the edge cases, how to handle the edge cases, and how to do all of it affordably, on time, and within budget.
That's engineering.
What you're describing is "writing code". That's lower value than "solving the problem".
I imagine a response, "But agile development, etc."
Yep. Part of solving the problem often involves creating prototypes to determine the essential viability of the solution. But that's only part of it. Which prototypes do you write? How much time do you allocate to them before accepting it's a dead end (at least for now) and punting on it?
That's engineering.
Me probably coming across as a dick today? Well, I was diagnosed autistic a year ago, and I'm on extended sabbatical/unemployment (3 years now) due to autistic burnout. And masking is part of how I got the burnout.**
* Why would someone be paying for that when there is likely someone else already doing it? (Unless you're the rare person who hopes to "disrupt" the competition.)
** has me begging the question of why I write here at all. SMH. Why do I do what I do? No idea sometimes.
by sleight42
5/11/2026 at 8:20:26 PM
I'm going to mix my metaphors a bit here... There's the saying "Any idiot can build a bridge; it takes an engineer to build a bridge that barely stands."
To put this another way, any idiotic LLM can write code. It takes a person with domain experience to understand what code to write, rewrite, or not write.
I've seen lots of organizations hollow out their internal competence in favor of outsourcing the skills. LLMs are the ultimate expression of that. There are people who say "you need to have people in your organization who understand how things work because they're the ones who solve problems!" and there are other people who say "focus on your core competencies! These problems you're worried about aren't your core competencies, so get rid of those experts, they're expensive and annoying; we can just sign a contract with an organization that'll know things for us."
At some point we all will identify exactly how much "seed corn" you need for the next season. We'll figure that out because we're starving, but at least we'll all know.
by cduzz
5/11/2026 at 9:24:42 PM
you've definitely been doing this longer than i have, but our outlook and recent experiences sound very similar. also been diagnosed recently, also on similar extended sabbatical/unemployment, also come across as a dick, also trying to mask less because burn out.
got an email address in my profile if you'd be interested in talking at some point about something, or even talking about nothing in particular. (i don't normally do this sort of HN networking stuff, i find it super cringe. but there we go).
by dijksterhuis
5/11/2026 at 11:35:41 PM
This was my experience as a junior back in 2019. The actual problem solving was trivial but I would spend days trying to work my way around some Qt work or guess the magic CMake incantation.
by AussieWog93
5/11/2026 at 7:25:29 PM
Let's also not forget that a lot of the market edge of SWEs comes from knowing how to navigate these parts. The fact that you needed to be reasonably fluent in a language was already a barrier to entry, which meant in better times new grads could earn six figures at their first job just for putting in that effort.
Maybe you will still be needed. That is one question. How well you will be paid and treated when the barrier to entry is now "I can think" is another. As the parent indicates, most people doing software are not doing things akin to pure math. I don't think most SWEs want that lifestyle anyway.
It's ok. You shouldn't fight the coming change. Instead use the time we still have to fight for more equal outcomes (vote for politicians that support UBI, Medicare for all). The longer you delude yourself that you are uniquely needed in an increasingly mechanized world the worse all our outcomes will be.
by pear01
5/11/2026 at 8:03:24 PM
The barrier to entry to generating code may be "I can think", but the barrier to entry for solving hard, distributed, multi-faceted engineering problems still remains quite high: agents still can't do this reliably or to a decent level of efficacy.
by arandomhuman
5/11/2026 at 11:15:57 PM
> under-documented API
One wonders why AI hasn't replaced all those non-existent documentation writers yet.
Therein lies a clue to what the future holds.
by RajT88
5/11/2026 at 6:04:32 PM
Are you, perchance, assuming that since you spend most of your time struggling with actual code, this is so for everyone else?
Or are you saying that I'm lying? That I am secretly hammering away at my keyboard while pretending not to?
No, writing code hasn't been how I spend most of my time for many decades now.
by bborud
5/11/2026 at 6:06:42 PM
Are you a staff level engineer that has dozens of other engineers banging away at code projects you help define?
by therealdrag0
5/11/2026 at 6:14:51 PM
Try to write a design doc before you implement something (which people find they need to do for LLMs to work at all anyway). You’ll find that you spend much less time actually writing code.
Write proper API documentation laying out the assumptions and intent, generate some good API docs, and write a design and architecture document. You’ll find that you spend a lot less time reading code.
by eska
5/11/2026 at 6:25:49 PM
> which people find they need to do for LLMs to work at all anyway
Everything we have to do for AI to function well would help humans function better too.
If you take the things for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written.
by dkersten
5/11/2026 at 8:01:02 PM
> If you take the things for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written
This only works on high-trust teams and organizations. A lot of AI productivity gains come from SWEs putting in the extra effort because the results will be attributed to them. Being a force-multiplier for others isn't always recognized; instead, your performance will likely be judged solely on the metrics directly attributed to you. I learned this lesson the hard way by being idealistic and overestimating the level of trust that had been built after joining a new team. Companies pay lip service to software quality; no one gives a shit if your code has the lowest SEV rates.
by overfeed
5/12/2026 at 7:07:18 AM
Ah… that’s a reasonable point. Yes, the difference between a high-trust team and what you described is night and day. I suppose for those situations there’s a much bigger incentive to just throw AI at it, which explains why the big corporates love AI.
by dkersten
5/12/2026 at 10:03:33 AM
No, no actually capable engineer should be just banging away at code. This is one way to know the level of an engineer.
Less capable engineers think in terms of implementing runtime execution and solving runtime errors.
More capable engineers think in terms of designing the most effective architecture and abstractions for maintainability, performance, and robustness.
I have a project I am working on. It has not compiled in months, but that's OK, since the real work, for me, is in the architectural design.
Yes, getting it to actually run takes some time and effort, but for me that is almost mechanical now.
by prmph
5/11/2026 at 6:16:01 PM
It has varied over the years but it isn't actually relevant since I am talking about when I write software.
Writing code just isn't what takes time.
by bborud
5/11/2026 at 6:22:03 PM
Getting the code into a state where it actually does what you want takes time, but a lot of that is research, testing, experimentation, documentation, etc. Those can be faster with AI assistance but you still need to bang on it enough to make sure it works right.
by QuercusMax
5/11/2026 at 7:07:08 PM
I am not, yet actual coding is a minuscule part of the workflow. The rest is roughly un-automatable by any LLM: politics, meetings, discussions, brainstorming, organizing testing teams, stakeholders and so on.
This is what big corporations look like, not some SV startups.
by kakacik
5/12/2026 at 7:16:55 PM
I agree. And that stuff is soul destroying. I have done it, and right now I work in a place a little smaller, but we get so much done without all the cruft. And we get it done better. I spend much more time writing code now (*) than at the big corps, and we do a much better job because we can iterate.
(*) Well, now Claude spends a lot of time writing code; I spend a lot of time designing and steering it. Claude can write remarkably sophisticated code with the correct steering.
by munksbeer
5/11/2026 at 7:04:51 PM
> OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together.
Those two formulations represent different developers' approaches to the same task. The former being developers who are much better at planning than the latter.
by logicchains
5/11/2026 at 11:38:25 PM
[flagged]
by Sh0000reZ
5/11/2026 at 4:07:38 PM
> Yes, about 2-5% of the time.
There are also those for whom that percentage is higher, let’s say 6-50%.
> I understand things and then apply my ability to formulate solutions
The AI is coming for that too.
You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.
People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
by KronisLV
5/11/2026 at 5:12:15 PM
Agree. It is just like two totally separate groups are arguing.
One is a very tiny slice of specialty/rare industries where code is critical but a small part of overall project costs. I can see that if code/software is 5% of the overall cost, even heavy use of AI for the code part is not moving the needle. So people in this group can feel confident in their indispensability.
The second group is much larger and peddling CRUD / JS frontends and other copy/paste junk. But per industry classification they are just part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; it is playing out right now, with tons of them getting laid off. And a whole lot of people with IT degrees and certifications are not finding any jobs in this field.
by geodel
5/11/2026 at 7:03:24 PM
After hearing various similar-sounding opinions about CRUD being easy for LLMs, I started tracking how well LLMs handle a standard CRUD Django app I'm familiar with at https://github.com/marcindulak/learning-api-styles-gen-ai-ex...
So far it appears that LLMs still require constant hand-holding, even for a small educational CRUD app.
by marcindulak
5/11/2026 at 9:03:13 PM
We've had reasonable effectiveness for CRUD. It mainly struggles with the UI toolkits we use, but the plumbing it can do quite well. It's not 100% vibecoding but certainly a significant accelerator for parts of the job.
by magicalhippo
5/12/2026 at 12:39:43 PM
I agree with the 2 separate groups theory, but I don't buy that the group that produces "copy/paste junk" is the much larger group. I think in most mega-corps, there is a huge existing code base, there are huge organizational challenges, and there is huge hierarchy with most people not being the junior juniors. 90+% of the work is "not coding." Probably way, way more if we include the middle managers. At startups, there is a lot of "copy/paste junk" but also often a decent amount of push-the-boundary new stuff. I don't know. I've been in the industry for 8 years now and it's been really rare to see the actual coding being the bottleneck or even the thing that takes the majority of the time.
by kj4211cash
5/11/2026 at 6:03:28 PM
What makes you feel that a complex frontend would be easier for AI than a non-CRUD backend system?by hjort-e
5/11/2026 at 6:33:35 PM
Hubris.
I don't mean this as a snarky jab. It's coming for anything software. I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming, they all seem equally threatened; it's just a matter of time. Front end is easy to see in the AI web front ends but everything else is still easy pickings.
by evilduck
5/11/2026 at 8:17:47 PM
You are describing the toy projects that had us all amazed end of last year. Large, maintainable software that can serve paying customers is in a completely different galaxy.by manmal
5/11/2026 at 10:39:51 PM
There's rather a big difference between reverse engineering already working code and forward(?) engineering working code from nothing so that confidence seems misplaced.by ThrowawayR2
5/11/2026 at 7:05:29 PM
I 100% agree it's coming for everything. I'm just curious what the arguments would be for why frontend would be easier.by hjort-e
5/11/2026 at 8:45:20 PM
As a manager of a full stack team, we've found AI falls short a lot more on front end. It has its weak points on both front and back, but the problems with backend are quite easy to feed back into it: needs more performance, needs to pass this security audit, needs to deal with xyz system. The problems with frontend are more like this is ugly, it's clunky to use, people don't like it. People without years of frontend experience tend to lack the vocabulary required to get AI to fix it, period, and it ends up going around in loops.
by svachalek
5/11/2026 at 6:55:03 PM
> I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend
That is not hard. It’s just tedious and very slow to do manually. The hard part would be designing a USB dongle and ensuring that the associated software has good UX. The reason you don’t see kernel devs REing devices is not that it’s impossible or that it requires expert knowledge. It’s that it’s like counting grains of sand on the beach.
by skydhash
5/11/2026 at 11:28:38 PM
Whether something is tedious depends on the person and situation. If you're already an expert, you may find a lot of the work that goes into your 4th USB device (especially if it's based on yet another chip and bespoke SDK) quite tedious, since a lot of it is based on standard requirements/designs that you can't change.
You may also find RE-ing stuff not tedious, due to what may be motivating you.
In any case, any work will have some things you just know how to do, or what to do, but previously (before LLM agents) there was no easy way to plow through them without pressing a lot of keyboard keys over a long period of time.
by megous
5/11/2026 at 6:37:56 PM
Whether a complex frontend would be easy for AI or not is irrelevant. To me the questions are: 1) how many unique complex frontends are needed out of the total frontends that millions of sites out there need, and 2) will there be an increase in the need for such frontend engineers so other displaced folks can land a job there?
I think the number will be far too few to have any positive impact on IT engineers' overall job prospects.
by geodel
5/11/2026 at 7:01:02 PM
But that's equally true for any type of system. Frontend isn't inherently easier than other systems, so I was just wondering why you singled it out. To me AI just seems better at backends and database design.
5/11/2026 at 7:19:23 PM
OK, my examples seemed biased against frontend, which was not the intention.
The thrust was overall job prospects for people in the software field. It is not that frontend is easy, but it is definitely easy to get into. Considering there are far more frontend developers than, say, C++ systems engineers or database designers, in sheer numbers they will be affected more.
by geodel
5/11/2026 at 7:35:15 PM
Ah okay, that's fair. In my country boot camps aren't a thing, so frontend devs are rare and good frontend devs even more so; I think it depends on where in the world you are. We got an abundance of Java devs here that I fear for more.
by hjort-e
5/11/2026 at 4:32:41 PM
There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.
However, even out of that 80% of my time, what fraction is actually spent "writing code"?
AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback
There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
by dmazzoni
5/11/2026 at 5:40:59 PM
> AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work: understanding the problem; waiting for the build system and tests to run; manually testing the app to make sure it behaves as I'd like; reviewing the diff to make sure it's clear; uploading the PR and writing a description; responding to reviewer feedback
Which of those do you think it doesn't help with?
by coldtea
5/11/2026 at 6:12:53 PM
There is no shortcut to understanding. No one can understand things for you.
by malfist
5/11/2026 at 8:02:21 PM
They can make it unnecessary for you to understand.
Consider hash tables. Nobody implements a hash table by hand any more. I've written some, but not in this century. Optimal hash table design is a specialist subject. Do you know about Robin Hood hashing? Changing the hash function's seed to discourage collision attacks? A basic hash table starts to slow down around 70% full. Modern hash tables can get above 90% full before they have to expand.
Who keeps Knuth's Fundamental Algorithms handy any more? I own both the original edition and the revised edition. They're boxed up in the garage. I once read that book cover to cover. That was a long time ago.
That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.
by Animats
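For readers unfamiliar with the Robin Hood technique the comment above mentions, here is a minimal, illustrative sketch in Python (fixed capacity, no resizing or deletion) — not the optimized designs real hash table libraries use:

```python
class RobinHoodMap:
    """Open-addressing hash map with Robin Hood probing: on a collision,
    the entry that has probed farther from its home slot keeps the slot,
    and the "richer" (closer-to-home) entry is displaced and moves on.
    This equalizes probe lengths, which is what lets tables run at high
    load factors without long worst-case lookups."""

    def __init__(self, capacity=16):
        self.cap = capacity
        self.slots = [None] * capacity  # each slot: (key, value, probe_distance)

    def _home(self, key):
        return hash(key) % self.cap

    def put(self, key, value):
        idx, dist = self._home(key), 0
        entry = (key, value)
        while True:
            slot = self.slots[idx]
            if slot is None:                       # empty slot: take it
                self.slots[idx] = (entry[0], entry[1], dist)
                return
            if slot[0] == entry[0]:                # same key: update value
                self.slots[idx] = (entry[0], entry[1], slot[2])
                return
            if slot[2] < dist:                     # resident is "richer": steal
                self.slots[idx] = (entry[0], entry[1], dist)
                entry, dist = (slot[0], slot[1]), slot[2]
            idx = (idx + 1) % self.cap
            dist += 1

    def get(self, key, default=None):
        idx, dist = self._home(key), 0
        while True:
            slot = self.slots[idx]
            # Robin Hood invariant: if we pass an empty slot or a resident
            # closer to home than our current distance, the key is absent.
            if slot is None or slot[2] < dist:
                return default
            if slot[0] == key:
                return slot[1]
            idx = (idx + 1) % self.cap
            dist += 1
```

The early-termination check in `get` is the payoff: a miss stops as soon as it sees an entry with a smaller probe distance, instead of scanning to the next empty slot as plain linear probing must.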
5/11/2026 at 8:43:21 PM
That's obviously not what I'm talking about. If you're asking an AI to write an optimal hash table algorithm, something is clearly wrong. I'm talking specifically about understanding the business domain and problem you are trying to solve.
by malfist
5/12/2026 at 12:35:38 PM
> I'm talking specifically about understanding the business domain and problem you are trying to solve.
And that's what people use AI every day to help with, so?
by coldtea
5/11/2026 at 10:47:27 PM
> That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.
The key word is solving. Meaning someone, after coming up with the solution, has taken the time to prove that it works well in all usual and most extreme cases. With their reputation on the line.
That’s why you trust curl, ffmpeg, Knuth’s books,… but you don’t trust a random cat on the internet. We don’t trust AI, and the cost to review its output is not a great tradeoff compared to just thinking through and solving the problem.
by skydhash
5/11/2026 at 4:57:00 PM
> The AI is coming for that too.
That may be true, I’m not gonna say one way or the other, but if AI comes for that then almost all knowledge work is effectively dead, so all that’s left would be sales or physical labor.
by LPisGood
5/11/2026 at 5:35:39 PM
I wonder, though: can AI make the next JS framework? I mean that in sincerity. There was the leap from jQuery to React, for example. If an AI only knows jQuery and no one makes React, will React come out of AI?
by ge96
5/11/2026 at 5:45:03 PM
News: "AGI refuses to make another JS framework, rages on the follies of misguided developers and their wasteful JS crutches"
Developer community: Wow, we truly have become obsolete now!
by ASalazarMX
5/11/2026 at 5:52:35 PM
Who will be the disrupters when there is nothing to disrupt?
by ge96
5/11/2026 at 7:08:19 PM
In a shocking twist, it turns out that MooTools is the agents' preferred framework.
by notpachet
5/11/2026 at 6:26:47 PM
A thought experiment: when all practical software is only written by AIs, will the AIs use goto? What will the programming language of AIs look like?
My bet is something _like_ assembly, but not assembly.
That being said, I think humans will still program for fun. Just like we paint portraiture in a world with cameras.
by scj
5/11/2026 at 8:19:43 PM
I think it won't be like assembly, because assembly takes more information to express things than building blocks that carry denser information, kind of like how we use libraries and frameworks.
by r_lee
5/11/2026 at 7:00:41 PM
Yeah, that's my thing for my hardware projects: I'm not going to reach for an LLM to do it, I want to write the code myself/be present. For something new I would consider using an LLM to generate something, like a computer vision implementation or something I don't already know. With the end result I would know how it works; it's just for a POC.
by ge96
5/12/2026 at 12:24:44 AM
There will be a new language created, optimized for AI development.
by timacles
5/11/2026 at 7:34:24 PM
It can't. Framework hierarchy is largely based on social structure rather than pure technical merit. Otherwise React would've been displaced a long time ago.
by wiseowise
5/11/2026 at 5:43:47 PM
People didn't leap from jQuery to React. It's a lot easier to imagine an AI looking at jQuery and [insert any server side MVC framework] and inventing Backbone.
by smrq
5/11/2026 at 5:36:03 PM
The history of the last 250 years is inventing new professions as old ones are automated away.
I expect that to continue.
by BurningFrog
5/11/2026 at 5:45:45 PM
The history of the last 250 years was moving from agriculture to industrial work to service work. Now the last frontier is starting to be overtaken by automation too.
(And in all of those transitions millions were left behind without work or with much worse prospects. The people that took the new jobs were often a different group, not people who knew the old jobs and were already in their 30s and 40s.)
And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?
by coldtea
5/11/2026 at 7:09:48 PM
Given some sort of machine with human capabilities, there would be no reason to assign that profession to a human, excepting perhaps cost.
by nitwit005
5/11/2026 at 6:26:31 PM
> The history of the last 250 years is inventing new professions as old ones are automated away.
Even if this still holds true ("past performance is no guarantee of future results"), the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.
The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).
If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.
by georgemcbay
5/12/2026 at 1:06:10 AM
This is the kind of sleepwalking that’s about to walk humanity into the next dark ages.
My parents say a lot of stuff like this. They tend to gloss over the untold suffering, great depressions and world wars that it took to get here.
The planet's resources were also not at risk of running out. As the world is min-maxed by billionaires and the lower classes are drained of all capital, they will soon move to fighting each other for resources. The future is looking pretty grim for even the most optimistic of scenarios.
by timacles
5/12/2026 at 4:19:08 PM
> They tend to gloss over the untold suffering, great depressions and world wars that took us to get here.
Spot on. You gotta wear shades to survive, future so bright.
by johnthescott
5/11/2026 at 7:33:03 PM
Like doordashing and Pokemon card reselling.
by charlie90
5/11/2026 at 7:34:58 PM
Don't forget OnlyFans and streaming.
by wiseowise
5/12/2026 at 1:22:09 AM
DoorDash and similar are experimenting with autonomous/remotely operated vehicles, and porn will get decimated once good-enough uncensored video-gen AI becomes available. Those don't sound like viable career choices either.
by fireant
5/11/2026 at 10:59:48 PM
It's happening, but there's no law of the universe that says it has to be 1:1. Why are you so confident in this regard? 250 years is a very small slice of human history and could easily be the outlier.
by dvsfish
5/11/2026 at 7:08:03 PM
> The AI is coming for that too.
Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.
> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away.
I'm not sure anyone is actually working on those. People talk about spending all day writing CRUD apps here, but if you suggest there are already low code tools to build those, they will promptly tell you it's too complex for that to work.
by nitwit005
5/11/2026 at 7:24:37 PM
> Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.
Yes. Yes, that's exactly what we're going to see, and more swiftly than people are generally comfortable with. What are we going to do with all those cubicle dwellers?
by laughing_man
5/11/2026 at 11:10:20 PM
A new paradigm of 3 day work weeks. Share the salary of the days off with those less automatable, and work to automate everyone. I wish some sort of discussion like this could happen where the workers of the world get to see some of the gains of a new technology more immediately. If "the state" wants to maintain legitimacy and protect its citizenry (one of the primary promises of a state) to avoid a period of social unrest, the likes of which has been unheard of for several generations, I think something like this should at least be a part of the discussion.
by dvsfish
5/12/2026 at 12:34:33 PM
I don't think it will happen. For one thing, it hasn't in the US since the introduction of the forty hour work week in 1940.
But beyond that, I don't think most people want a three day work week. They would rather work five days and get the extra money. I worked at a company that did government contracting. We had a couple of quarters without much in the way of orders, so instead of laying people off like you'd normally see in that situation, the company decided to go to a four day week, with a commensurate cut in pay.
I was thrilled, as a young single guy, to get Fridays off. I rented a room in someone's house and hit my monthly nut in about two weeks. But most of the people I worked with hated it. Some of them quit. A lot of them both needed the money and also had no idea what to do with themselves on that extra day.
by laughing_man
5/11/2026 at 4:48:27 PM
> The AI is coming for that too.
Current AI tech giants prove over and over and over again that this is not the case.
by PunchyHamster
5/11/2026 at 5:19:08 PM
We've literally just started. What "over and over" do you refer to?
by cromka
5/11/2026 at 6:17:14 PM
I've been told for the past four years that AI is coming for my job. And that's just not true. It's no closer to that than it was 4 years ago.
by malfist
5/11/2026 at 7:14:03 PM
It is the lament of every generation of humans to think that they are the pinnacle of everything that has come before. We are just at the start of the so-called AI era; many very smart people coming up still haven't really gotten their hands on all of the material available from a hardware and software standpoint. We are still at the early stages.
I am very optimistic. I just wish I was younger, junior high or high school age, to take advantage with my current resources. Damn… the oldest lament in the books.
by Danox
5/11/2026 at 11:26:56 PM
What makes you optimistic? Genuinely curious, as I'm looking for how to take advantage of the AI disruption.
by davenci
5/12/2026 at 1:18:29 PM
I have surely seen jobs around me being replaced by AI tooling; it is getting closer in corporate consulting.
by pjmlp
5/11/2026 at 11:35:17 PM
I was of the same opinion till Claude Code was released. It's a lot closer now.
by trustfundbaby
5/11/2026 at 7:39:10 PM
I'm not sure how anyone would know if it's closer or not. There's been a lot of progress in LLMs over the last four years.
by laughing_man
5/11/2026 at 7:19:08 PM
> It's no closer to that than it was 4 years ago.
There are people and companies out there releasing entire vibe coded projects, and for some, upwards of 80% of the code they develop is AI-assisted/generated. Since around the end of 2025 and models like Opus 4.6, the SOTA has gotten good enough to work agentically on all sorts of dev tasks with pretty good degrees of success (harnesses and how you use them still matter, ofc).
by KronisLV
5/11/2026 at 11:39:15 PM
What's their balance, revenue minus AI expenses? Using the real token costs, not the subsidized costs.
by lbrito
5/11/2026 at 7:36:06 PM
> There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated.
And how much revenue do they generate?
by wiseowise
5/11/2026 at 7:13:27 PM
It feels like it's just around the corner. But when you turn the 20th corner and it's still behind the next one, maybe things are a bit different than they seem / than clueless emotions make us believe.
Long term it's bleak, but short/medium term, not so much. If I get fired it won't be an LLM replacing me but rather company politics, budget changes etc. Which was the only real (very real) risk for the past 15 years too, consistently. But it helps to not work for a US company.
by kakacik
5/11/2026 at 11:48:37 PM
I mean this is just fingers-in-your-ears "LA LA LA I CAN'T HEAR YOU!!" stuff.
I still have a job, so AI hasn't taken that yet. But the suggestion it's "no closer" is ridiculous. At least in my life/career/office these last 12 months seem to have been a real inflection point in how AI is being used for software development.
by Tesl
5/12/2026 at 3:11:31 AM
Sure, sure. And my CEO believes the singularity happened in Q1. Doesn't make it true.
by malfist
5/11/2026 at 10:26:15 PM
Ask some juniors how their job search is going. In five years, ask the seniors.
by esafak
5/11/2026 at 6:24:02 PM
> We've literally just started
5+ years in the software world is like 30 years in others... So, given lacking use-cases and humongous amounts of capital already wasted on chatbots, it's more like "we" are closer to closing curtains than to "just started"...
by hansmayer
5/11/2026 at 5:46:48 PM
Hype cycles. AI has made developers obsolete like a dozen times in the last couple of years, at least according to their developers.
by ASalazarMX
5/11/2026 at 5:32:46 PM
Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can see if a solution is better than another, but it can't generate the best one from the start. If you trust it, you'll get sub-par solutions.
This is definitely an agent problem instead of an LLM problem. Anybody got something explorative like this working?
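The shape of the loop is simple, at least. A minimal best-of-N sketch, where `generate` and `score` are hypothetical stand-ins for an LLM sampler and a verifier (tests, benchmarks, a critic model, ...):

```python
# Sketch of the "generate many, verify, keep best" loop described above.
# `generate` and `score` are toy stand-ins, not real LLM/verifier APIs.
import random

def generate(problem: str, seed: int) -> str:
    # Stand-in for sampling one candidate solution from an LLM.
    random.seed(seed)
    return f"candidate-{random.randint(0, 9)}"

def score(candidate: str) -> float:
    # Stand-in for the verifier: higher is better.
    return float(candidate.rsplit("-", 1)[-1])

def best_of_n(problem: str, n: int = 8) -> str:
    # The model can't emit the best solution directly, but a cheap
    # verifier can rank candidates, so sample N and keep the max.
    candidates = [generate(problem, seed) for seed in range(n)]
    return max(candidates, key=score)
```

The explorative variants people experiment with (tree search, iterate-and-refine) are elaborations of this same sample-then-verify core.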
by luckystarr
5/11/2026 at 5:47:45 PM
So? Hundreds of millions of office and dev jobs aren't about developing "optimal solutions" to begin with.
by coldtea
5/11/2026 at 6:00:22 PM
>> I understand things and then apply my ability to formulate solutions
> The AI is coming for that too.
If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.
by tjwebbnorfolk
5/11/2026 at 6:06:22 PM
> If this is true, then you'd have to conclude that AI is coming for everything.
Now you're getting it
by itsafarqueue
5/11/2026 at 6:14:05 PM
It really doesn't have to come for everything to feel like it's taking everything. If it eliminates 10% of white collar jobs over the next decade, the impact will be felt everywhere.
by flatline
5/11/2026 at 7:54:19 PM
I struggle to understand the logic (in general, the way people are talking); normally efficiencies come with increases in production and scale and use-cases.
So if 10% of lawyers get AI-d away, let's say, the remaining 90% are 1.1x+ efficient and also up against other lawyers enjoying the same… work might go up. And on the customer side there is sooooo much BS with lawyers, but if both lawyer and customer can communicate faster or better with the LLMs, we should see more and better cases with better dialog and case handling. Again, the total amount of lawyering could go up a lot. And then we have the cases prohibitive without the LLMs, now possible for big money. Better LLM-empowered lawyers should be able to create new and more lawyer work.
As it stands I see people selling services that are subsidized by VC, template jobs we’d be doing faster with copy paste but it’s not copyright infringement when OpenAI does it, and a rush for valuations to soak up VC because the business model isn’t there. I’m seeing a huge uptick in visual bugs on large commercial platforms and customer facing apps, and don’t feel OpenAI is gonna kill Office anytime soon… or Chromium… or Steam… or emacs…
Call me an optimist, but I think those LLM pump and dumpers are creating a wave of fear that would be quite different if they weren’t lying and trying to boost an IPO. Chat GPT 2 was too dangerous to release, lul, and the class action suits are just getting started.
An actual lawyer replacing tech company should sell lawyering for infini-money, not pens that’ll totally 10x your lawyering (bro).
by bonesss
5/11/2026 at 9:09:22 PM
And what do those 10% of lawyers do? Every other industry also got reduced by 10+%; it's not like they have a job elsewhere.
So... they just starve in the streets?
Even if some other, arguably better job comes along, would they retrain for it? (You can say yes, but take a look at the long history of people choosing to join a cult and vote for an orange moron instead of learning a new skill).
Either you're convinced you won't be too badly affected and will gladly watch huge swaths of people suffer, or you're deluded enough to think that it will really, truly be different this time. In any case, I hope you get the worst results of what you preach.
by sophacles
5/11/2026 at 7:52:32 PM
Sure, but who doesn't think that 10% of white collar jobs are mostly bullshit anyway?
by tjwebbnorfolk
5/11/2026 at 11:23:08 PM
The roughly 15-20 million people who would suddenly be without a job?
by jplusequalt
5/11/2026 at 10:28:58 PM
The only thing worse than a bullshit job is no job.
by esafak
5/11/2026 at 6:11:19 PM
Even if AI advances continue, for quite a while there's likely still going to be the 'Steve Jobs' role. That is, even if AI coding agents can, in the future, replace entire teams of SWEs, competently making all implementation decisions with no guidance from a tech-savvy human, the best software will likely still involve a human deciding what should be built and being very picky about how, exactly, it should externally behave.
I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.
by no_op
5/11/2026 at 7:32:23 PM
This matches what I'm seeing. I've been building software for a long time, but building more now with AI than I ever could with a traditional team. But the throughput that's helpful is from knowing what to build and what tradeoffs matter. The AI doesn't have that. It's a force multiplier on experience, not a replacement for it.
by bmiedlar
5/11/2026 at 7:40:26 PM
How many Steve Jobses do we need as a percentage of people developing software?
by laughing_man
5/11/2026 at 5:44:58 PM
> The AI is coming for that too.
That's where we fundamentally disagree.
Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.
Until the day LLMs are not statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently not by adding another 10T to the LLM parameter count.
by Aperocky
5/11/2026 at 5:57:39 PM
> because it is actually a statistical machine with context limit.
So are humans.
Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)
And I argue that current LLMs have surpassed many of my capabilities already.
For example GPT/Opus can understand and document some ancient legacy project I never saw before in minutes. I would take a week+ to do the same and my report would probably have more mistakes and oversights than the one generated by the LLM.
by bel8
5/11/2026 at 7:17:15 PM
> So are humans.
AI advocates are _way_ too confident about the nature of human cognition. Questions that have been debated by philosophers and cognitive scientists for decades are now "obvious" according to you people, though you never provide any argument to support your statements.
by KalMann
5/12/2026 at 2:58:07 AM
Are you suggesting a non-physical reason for human cognition?
by NewsaHackO
5/11/2026 at 6:14:00 PM
We are not pre-trained using the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.
We are much more limited, but we fundamentally work differently. Hence adding more parameters like certain companies are doing isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something that's completely different.
I think it's doable, I just don't believe it's LLM, and I don't think anyone now knows what it is.
by Aperocky
5/11/2026 at 6:22:17 PM
> We are not pre-trained using the summary of all human knowledge over all of history.
But we are? That's our education system.
The only reason school doesn't try to shove more information in our brains is because we hit bandwidth limits.
by bel8
5/11/2026 at 7:19:44 PM
> But we are? That's our education system.
That is not what the education system does. That's an obvious distortion of reality. LLMs train over billions of documents to statistically predict the next word and gain an understanding of language; they do this statistical processing in order to mimic humans' natural language learning ability. And there has been continued evidence of the limitations of this approach to accurately mimic the totality of human cognition.
by KalMann
5/12/2026 at 12:05:08 AM
> Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)
Do you have any idea how many calculations it takes for a human to put a ball through a hoop while running across a court?
It could be millions or billions in a second. Manifesting consciousness, coordinating body movements, and everything else all at the same time takes calculations.
You may not be aware that your brain is doing multiplication, or any other kinds of math, constantly, but it is.
by leptons
5/12/2026 at 6:46:08 AM
I agree we do some marvelous things in sports, but if we extrapolate from this table tennis robot, it's clear machines can/will do just as well there too.
by bel8
5/12/2026 at 8:17:38 AM
That table tennis robot is not conscious. That table tennis robot does one thing well. A human is capable of far more. There is far more going on for a human playing table tennis than a robot. It doesn't matter if the table tennis robot plays table tennis better; it can't also play hockey, soccer, football, basketball, chess, polo, baseball, or many other things one human can do.
The human condition is nothing but a massive amount of calculations under the hood. You don't feel it, or understand it, but it's there. Everything in nature is math, every physical phenomenon has a cause and effect rooted in mathematics, and it's no surprise that humans are great at subconsciously calculating myriad things on-the-fly, as life is happening around us.
by leptons
5/11/2026 at 6:10:20 PM
Yours is a "God of the gaps" argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.
by itsafarqueue
5/11/2026 at 6:15:33 PM
I fall into the "pessimistic heavy user" camp: I burn thousands of $ worth of SOTA tokens monthly, but it just makes me more acutely aware of the limitations, the amount of work I need to do to work around them, and what kinds of decisions I should reserve for myself instead of trusting the LLMs.
by Aperocky
5/11/2026 at 5:48:31 PM
> but not all of it, because it is actually a statistical machine with context limit.
And the human mind is not?
by coldtea
5/11/2026 at 7:25:15 PM
I can give you the exact mathematical formula used to statistically optimize the output of a neural network from input examples. Can you do the same for the brain?
by KalMann
5/12/2026 at 12:34:16 PM
Not atm, but does it matter?
Is it similar, even if much simpler, to the sort of process that goes on there too? That's the important question.
by coldtea
5/11/2026 at 5:55:29 PM
It’s not.by nothinkjustai
5/11/2026 at 4:36:14 PM
> The AI is coming for that too.
To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.
He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.
Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.
I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even known to those who are part of them.
And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.
> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
You make it sound like it is a bad thing that certain tasks become easier.
I spent a lot of time writing CRUD stuff. Because the things I really want to work on depend on them. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?
It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because they put people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.
If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.
by bborud
5/11/2026 at 5:49:20 PM
> To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
1 person needs to do that. What about the other 100, who aren't doing that currently to begin with, but doing the AI-automatable work?
by coldtea
5/11/2026 at 4:45:30 PM
> To some degree yes, in practice, not so much.
We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.
We have robots walking just fine now, by the way.
by SoftTalker
5/11/2026 at 5:07:59 PM
If they can do those things they can effectively replace any white collar job. That's about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.
Imagine 45% of higher than average paying jobs gone.
If that happens we’ll either figure out a new economic system, or society will collapse.
Also saying robots are walking just fine is misleading for any definition of just fine that is anywhere near as good as a human.
by sarchertech
5/11/2026 at 5:24:51 PM
Look at how the billionaires are talking about AI: their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them; they just need to wait for the state of the art to improve. Once AI is "good enough", if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent; the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.
by ryandrake
5/11/2026 at 5:49:37 PM
> We're just going to eventually become economically irrelevant and die off.
As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors, other than sanctimonious "learn to code" advice. So software folks can't expect any solidarity or help from others.
by geodel
5/11/2026 at 5:51:25 PM
The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.
It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.
by kiba
5/11/2026 at 6:27:06 PM
To put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.
by shinryuu
5/11/2026 at 5:47:49 PM
45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?
by monknomo
5/11/2026 at 6:44:33 PM
And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.
by rootusrootus
5/11/2026 at 7:01:25 PM
They're probably betting that the technology they will need to defend their bunkers, think autonomous kill-bots or whatever, will emerge before people start to riot.
Or they're planning to build an Elysium-like colony in the ocean or space, to keep the billionaire class far from danger.
by ryandrake
5/11/2026 at 6:41:26 PM
I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, and the world they live in, right along with the rest of us, will get noticeably shittier, etc.
More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.
by rootusrootus
5/13/2026 at 12:45:11 AM
"Poorer?" That implies they still rely on money. But money is just a form of fiat power. The billionaires/trillionaires will not much need fiat currency or fiat power, because they're building real power.. by extension, they won't really need to worry very much about the rest of us, either..by layla5alive
5/11/2026 at 5:47:49 PM
It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.
"Automating half the jobs" is the same as "double productivity per worker".
When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!
by BurningFrog
5/11/2026 at 6:10:20 PM
What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?
by wartywhoa23
5/11/2026 at 7:54:37 PM
It's what I mentioned: if we double productivity per worker, we have twice as much wealth on average.
I know there are angry people convinced that this will all be consumed by billionaires and jews, but historically that is not at all the track record of the last 250 years, and I expect that to continue.
by BurningFrog
5/12/2026 at 2:03:10 PM
If you are going to bring up history, you should really look into what it took to redistribute wealth from oligarchs in the past.
Now that the oligarchy has more resources than ever in the history of humankind, plus the means for mass surveillance and generating mass propaganda, those wealth redistributions are looking much, MUCH harder to accomplish.
Yeah, historically it will inevitably happen. Realistically it will be after the new version of feudalism and dark ages. So strap in; the next 400 years aren't looking too good.
by timacles
5/11/2026 at 11:28:25 PM
> If we double productivity per worker, we have twice as much wealth on average.
That's not true. There are other factors at play such as demand.
If we make the average IT worker twice as productive, that doesn't mean now every IT worker is being paid twice as much, because most users aren't going to care if there are twice as many options on the app store, or twice as many bug fixes per release.
by jplusequalt
5/12/2026 at 3:59:39 PM
Consumption in a society will always be roughly equal to production.
There are differences due to import/export balance, investments, government borrowing etc, but as a first approximation, if GDP increases by 10%, consumption will rise by a similar amount.
About your IT worker example: Let's say s/he produces $150k/year in value and is paid $140k. If AI makes them produce $300k of value, they may not automatically get a raise. But it becomes very attractive for another employer to hire them for $200k or $250k, or even $280k.
In the medium/long term, I don't see why wages wouldn't stay proportional to produced value.
by BurningFrog
5/11/2026 at 5:02:18 PM
We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.
Heck, even long before LLMs, about 10% to 30% of my code was already automatically generated. By tooling, by IDLs, and by my editor just being able to infer what my most likely input would be.
> We have robots walking just fine now, by the way.
I don't think you got the point I was trying to make.
by bborud
5/11/2026 at 5:13:10 PM
True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.
by SoftTalker
5/11/2026 at 5:15:22 PM
>> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?
Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.
by phkahler
5/11/2026 at 6:46:08 PM
> Companies are currently too busy exploiting the local maxima of LLMs
I get the feeling we can already spot the next AI Winter. Which is okay, we need a breather, and the current technology is useful enough on its own.
by rootusrootus
5/11/2026 at 5:00:21 PM
> Why do we believe that LLMs are going to stop there?
Why do you believe they won't? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.
> We have robots walking just fine now, by the way.
Walking and reasoning are unrelated abilities.
by terseus
5/11/2026 at 5:07:20 PM
Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.
What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.
by SoftTalker
5/11/2026 at 5:10:41 PM
> bosses
The AI is coming for those too.
by vga1
5/11/2026 at 7:01:12 PM
Something like five to ten years ago, when AI hype was starting to hit media, one of the claims was that AI would come for middle-management first. Since middle-management can generally be described as collecting information from underlings and reporting information to upper management, their work was supposed to be easy to automate with AI. As far as I can tell, this hasn't proven to be true at all, and we software engineers proudly wrote ourselves out of work by constantly publishing our source code and discussing it openly.
by snozolli
5/11/2026 at 5:15:52 PM
A lot of people don't seem to get that: it is easier to go from terrible to average but much harder to go from average to good.
I am sure the AI bros are the same people who were convinced consumer grade fully automated driving was going to happen "by end of the year" for the last 7 years.
by thisisit
5/11/2026 at 7:20:01 PM
I agree with the statement and think a lot of people miss this, but I also wonder how many people probably don't care for good; they only care for 'good enough'.
by Peanuts99
5/11/2026 at 8:47:12 PM
Many large systems can't be built good enough because they just fall apart. Try letting a junior dev make an ERP or a database system.
by manmal
5/11/2026 at 5:48:23 PM
No, I never believed the fully automated driving tale from Tesla, but as the LLMs improve, my personal estimate for the date of human-level AGI is rapidly moving to "present". Before GPT-2 I had it somewhere in 2100; at GPT-2 I thought maybe by 2060 if we were lucky. Now I think it is 2035 or maybe even sooner.
by lostmsu
5/11/2026 at 6:50:07 PM
I like to see the optimism, even if I don't share it. I think it's incredible hubris that humans think we are about to reinvent our own level of intelligence, just because we made a machine that talks pretty.
by rootusrootus
5/11/2026 at 9:25:28 PM
Your own comment in my timeline is 7 years out of date. GPT-2 talked pretty, that was its whole thing. If you are trying to claim there's no difference between 5.5 and 2, you are delusional (hallucinating?).
by lostmsu
5/11/2026 at 10:09:30 PM
I think I was fairly clear: I said that I think it is hubris to believe what we have created is anything even slightly like human intelligence. It talks very pretty (a lot of work has gone into this aspect in particular), and it does demonstrate the extent to which, as individuals, most of us do not have especially unique thoughts or problems to solve. It exposes how quickly humans jump to anthropomorphizing pretty much anything.

Is it a handy tool? Yep! I use it every day. But it is laughable to think this is the path to AGI. The most common counterargument on HN is some variation of "but you can't prove that this isn't just like how a human thinks". A conspiracy theory at best, just reinforcing the fact that we know very little about how even simple non-human brains function.
by rootusrootus
5/11/2026 at 11:39:35 PM
You do you. I stick to the simplest reasonable definitions. From my perspective we are already in AGI; the intelligence just isn't quite on a human level yet across the board. I have yet to see anyone saying it's just like a human, so it looks like you are mostly hallucinating that too.
You didn't address my point on GPT-2 vs 5.5. Your only relevant claim, I assume, is that 5.5 talks very pretty versus 2 talking just pretty. Well, you have to be blind to claim that is the main difference.
by lostmsu
5/11/2026 at 6:19:45 PM
>> Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.

Anecdotal evidence to support this:
I work with both dev and design teams. Upper management has already gone through several layoffs and offshored parts of the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets and never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management is: "Start figuring out how to leverage AI; we're going to be in a hiring freeze for the next 10 months."
The double whammy of losing onshore team members and getting no help from management to fix the problem they just created, beyond essentially telling people to figure out how to use AI to keep up, is pretty staggering.
I would echo what one of the devs told me: "If this is the new 'AI era', then you can count me right the fuck out of it."
by at-fates-hands
5/11/2026 at 4:22:48 PM
>> I understand things and then apply my ability to formulate solutions

> The AI is coming for that too.

In that case all [1] non-manual work is doomed, until robotics has an LLM moment.
[1] With the exception of all fields protected by politics or nepotism.
by oblio
5/11/2026 at 6:48:42 PM
> all non manual work is doomed

All work in general. Knowledge workers can still do manual work, and will compete to do so when there is no option to continue what they do today.
by rootusrootus
5/11/2026 at 7:31:44 PM
[flagged]
by wiseowise
5/11/2026 at 9:00:39 PM
I agree in principle, but I think the 2-5% estimate is extremely low. I could be sold on most developers spending ~25%, up to 40% of their time on code. But very few people are spending 2% of their time on it. Unless you're some sort of super senior staff / advisor to the CTO at a gigantic company, which has already placed you on rare terrain.by amw-zero
5/11/2026 at 9:32:31 PM
Most people overestimate how much time they spend "writing code". I have interviewed a ton of people in my career, and I ask, "How much of your time did you spend writing code at your last job?" The more junior the person, the more they overestimate it (some would say 90%!). Once they joined, I could see how much time they really spent writing code, and it is almost never more than 30%.
Mostly because the code is only the final output. You spend most of your time doing research, talking to people, working on quarterly OKRs, going to meetings, etc.
If you just write code, you are either an extremely junior person working on things trivial enough not to require research, or you are deluded and don't realize you spend most of your time doing other things.
by siren2026
5/11/2026 at 9:05:14 PM
Might be closer to accurate if the 2-5% is his estimate of the physical time spent making keystrokes.
by d3rockk
5/11/2026 at 9:43:53 PM
Surely we should only count the time actuating the key. Apple keyboard users are in shambles.
by bee_rider
5/11/2026 at 10:16:39 PM
Only the keydown, not the keyup.
by layer8
5/11/2026 at 11:10:48 PM
That's <24 minutes per 8-hour day.

If you're reading this and that matches your experience as an IC SWE whose job is ostensibly developing software.. you're either trapped in a very atypical org, or you're heading for a PIP.
by mh-
5/11/2026 at 9:33:01 PM
Nope. I would bet most people really only do 2-10%. But we would like to convince ourselves we don't.
by siren2026
5/11/2026 at 4:53:53 PM
Not sure where I first heard this, but I say it to my team all the time: "Programming is thinking, not typing"
by hateful
5/11/2026 at 6:04:30 PM
I know an accomplished CS professor, an ACM fellow, cited in Knuth's TAOCP (as well as being an easter egg!), who still hunt-and-pecks. In fact, he hunt-and-pecks incredibly slowly. Seeing him type really reinforced this idea.
by strbean
5/11/2026 at 7:27:36 PM
While this is a witty reply, most people are working on corporate CRUD apps. For us, I still follow Jeff Atwood's advice from a 2008 blog post: "We Are Typists First, Programmers Second" Ref: https://blog.codinghorror.com/we-are-typists-first-programme...
by throwaway2037
5/11/2026 at 5:32:07 PM
I've always told my junior engineers to "think twice, code once". If I gave them a task and they immediately started typing, I would tell them to stop and ask them to explain what they were doing. They'd often just spit out what they thought the code should do, and I'd point out edge cases they had missed and would have shipped had they just spit out code and a PR, wasting everyone's time. I would also insulate them from upper management to give them time to actually think (e.g. they could think first, then code).
To your point and the GP's, and one point I keep raising with LLMs: typing is not where my time sinks are.
by the_hoffa
5/11/2026 at 6:02:16 PM
That's very true, which is why I find it insulting that so many AI proponents use the word "typing" to refer to writing code. It carries an implication that if you enjoy writing code by hand, you enjoy a mindless activity.
by CodeMage
5/12/2026 at 12:48:28 PM
A former colleague of mine used to work for a boss who would periodically stick their head into the office where the programmers were and yell "I can't hear typing! Why are you not working!?"

The reason I just remembered that is that the other day they proudly announced that everyone in their company would now be vibe-coding exclusively.
by bborud
5/11/2026 at 6:38:36 PM
I remember being the kid in high school who worked hard at math and logic problems, which made me very technical and taught me to push through painful mental challenges regularly. Not many of my graduating class went on to become engineers, and for a reason: it isn't easy work by any means, and I'm guessing it's quite draining for people who don't use their brains the way we do.

So while AI will change the industry, I don't see any reputable company firing the smartest ones in the room for junior-level intelligence.
Even with it advancing someone has to be responsible for when it screws up which we know it will.
by brandensilva
5/11/2026 at 8:03:23 PM
This answer makes two big assumptions that haven't been proven out yet.

- Understanding code without writing it is as viable as understanding code that you've worked with directly or indirectly
- Businesses care that you understand code
I really doubt the first one. Traditionally, understanding a code base in large part came from working with it intimately and building that muscle memory. The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.
Whether businesses care that their engineers (whom they increasingly view as monkeys at LLM typewriters) understand the code remains to be seen. I don't think they particularly care whether their code runs slow and is buggy, so long as it works just well enough to churn out features and continue to pull in income.
by ravenstine
5/11/2026 at 8:10:57 PM
> The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.

As one of those developers who has written almost no significant code by hand since November 2025, but has produced a great deal of working software, I still understand the majority of the code I've produced just as well as if I'd typed it myself.
I may not be typing it myself, but I'm manipulating it constantly. It's not as simple as "reading" it - I'm reading it, executing it, figuring out refactorings for it, having tests built for it, having documentation built for it, sometimes writing that documentation myself, spinning up example scripts that use it, then building new code that depends on that previous code.
It's that act of exercising the code that gives me confidence that I understand it.
by simonw
5/11/2026 at 8:11:03 PM
> understanding it from writing it

On the surface it sounds weird - why would this be?
Possibly because building a system is not a one-shot step, but a process of many iterations, each of which involves experiments in production and new lessons. So at the end of the process, you don't just have N lines of working code, but also N lessons learned along the way. So presumably with the AI process we miss out on half the value.
Now the going thesis is that this extra value is unnecessary if we take the plunge and don't look back. My gut says the answer is somewhere halfway, I guess we'll see.
by foobarian
5/11/2026 at 6:27:27 PM
Isn't the long-term trend just that we don't need as many engineers, not that there will be no more software engineers?

There's another, different loop I keep seeing, which is:
- Company A lays off engineers citing AI efficiencies
- People say it's because of over-hiring during 2020
- Company B lays off engineers citing AI efficiencies
- People say it's because it was never a good business
- Company C lays off engineers citing AI efficiencies
- People say it's because there's a recession
I guess to cite a counterexample, unemployment is still super low and software jobs are still holding up, but the bear case is that eventually 5% of people will be able to do what people do today, and the demand for software won't grow at the same pace.
by czhu12
5/11/2026 at 7:40:15 PM
If company A is Amazon, company B is Ubisoft, and company C is Oracle, then I think it's very likely there isn't any pattern or "loop" here and it's legitimately just 3 different companies in 3 different situations doing layoffs for 3 different reasons, but all 3 reaching for the same PR playbook. "We're leveraging AI to increase productivity" is the new "we're streamlining our business and focusing on our core products".
by Xirdus
5/12/2026 at 3:20:03 PM
I agree in some ways, but I think this overlooks that while your job might be like that, many decidedly "developer" jobs are not; what you describe is more engineering. Many people have been able to make a career out of basic HTML website changes. Are they not developers? Will their job not potentially be replaced by an AI that can make that change in seconds?

It's weird that people always seem to argue the extremes when reality is a jumbled mess in the middle. Will developers lose jobs to AI? Without question. Will many "developer" jobs be eliminated because of that? Without question. Is it probably a really bad time to think you can go from your retail job to fixing people's websites as a lifetime career move? Yeah, probably not the best idea. Would it be smarter to focus on becoming a "Software Engineer" instead of a "Developer"? Yes, usually. Does that mean it is a bad idea for EVERYONE to become a developer? No, and that would be a dumb thing to argue.
We're still going to need developers, and definitely engineers; we are just going to need fewer of them in their current form, just like we needed fewer saddle makers, farriers and blacksmiths. We didn't stop needing horse mechanics, we just needed fewer of them because we needed car mechanics. Some of those skills transferred, some didn't.
by therealpygon
5/11/2026 at 5:06:01 PM
Only 5% of your time is spent writing code? That sounds like a low estimate for most software engineers I work with.

May I ask if you could estimate how you spend the other 95% of the time?
by sefrost
5/11/2026 at 6:44:21 PM
In no particular order - Meetings
- Reading papers
- Understanding legacy code
- Reading internal news
- Ad hoc chats with coworkers
- Writing docs
- Editing configs
- Thinking about solutions
- Slacking off
- Analyzing results
- Testing code
- Reviewing PRs
- Understanding others' ongoing projects
by hatthew
5/12/2026 at 6:29:25 AM
> Slacking off
I laughed when I read this, but there is something to it. I like to say "intellectual relaxation" or take a break. Sometimes getting up from your desk to do some mindless admin task like photocopy a document for HR can free up your mind. If we were line workers at a factory, this would be mandated breaks. Business/Financial newspapers and factory executives love the old quote: "With robots, they never need a break, never need holiday, and can work 24x7." With the advent of agentic LLMs, a tiny fraction of that reality is leaking into the white collar world.
by throwaway2037
5/11/2026 at 7:03:08 PM
AI can do everything you listed except chats with coworkers and slacking off.

I just don't think you've utilized the most recent versions of codex or claude.
by PizzaBorsch
5/12/2026 at 12:46:37 AM
It's definitely theoretically possible, but not there yet. I use Cursor, Claude (Opus 4.7), and several proprietary LLMs/LLM frameworks at my job. The institutional knowledge I have wouldn't fit in the context window, and AIs lack my mental index/intuition of where to look for answers. When my AI makes a PR, I generally have to make some important changes, without which its solution would be fundamentally broken. AI also cannot be trusted to make the right business tradeoff decisions.
by hatthew
5/12/2026 at 8:19:24 PM
Many things at my software engineering job are like this: they require human institutional knowledge that is almost always undocumented, or changing so quickly that documentation isn't relevant anymore. By the time you decide to automate a process, it changes. Tribal knowledge used to be something I hated seeing senior engineers keep to themselves, but now it seems like an asset.
by Shocka1
5/11/2026 at 7:07:05 PM
It sounds plausible to me, since this is pretty much on par with most other engineering disciplines. I'm a civil engineer. My responsibility is ultimately to produce a constructable plan set, and I spend far less than 5% of my time drafting or modeling.
by Enginerrrd
5/11/2026 at 5:08:22 PM
Commenting on Hacker News?
by davidw
5/11/2026 at 6:23:04 PM
For those who claim to be developers yet code no more than 5% of their time, and who resort to arguments like "we haven't written machine code by hand for 50 years, how is AI different from a higher-level language?" - it's not commenting, it's shilling for the AI corpocracy on HN.
by wartywhoa23
5/11/2026 at 7:49:49 PM
>> "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?"

I never got that argument. Compilers are deterministic, often formally specified algorithms. If you understand what a compiler does, you can have a pretty good idea of what it will produce; if it doesn't do that, it's a bug. The definition of correctness is well defined by semantic equivalence.
LLMs are none of that. An LLM is a fuzzy system that approximates your intent and does its best. I can make my intent more and more specific to get closer to what I want, but given that it is all just regular spoken language, it is still open to interpretation. All of that is still quite useful, but I don't get the assembly language comparison.
by truncate
5/12/2026 at 1:26:45 PM
Because compilers are only deterministic with ahead-of-time compilation, no profiling data, and always the same set of compiler flags.

Introduce dynamic compilation, profiling data, optimization passes, multiple implementations, or ML-driven heuristics, and getting deterministic assembly output from a compiler starts to get harder to achieve.
by pjmlp
5/12/2026 at 11:41:27 PM
You are right about that, but that's about what you generate, not what the output does. My point is that compilers are still designed to preserve semantic equivalence. Semantic equivalence makes sense here because there are well-defined semantics for both input and output. That bit is supposed to be deterministic; if something breaks it, that is a bug.

I just don't think comparing with compilers is a good argument.
by truncate
5/13/2026 at 7:35:08 AM
Semantic equivalence breaks down with UB optimizations.
by pjmlp
5/11/2026 at 9:23:33 PM
By extension, does this imply that all the HLL advocates from decades past were shilling for compiler companies?
by cobbzilla
5/11/2026 at 5:29:56 PM
In all seriousness, communication consumes a lot of time. Meetings, emails, Slack messages, pestering stakeholders and other developers...
by icedchai
5/11/2026 at 5:40:58 PM
If you spend 95% of your time on that stuff, you'd better be working on critical infrastructure where nothing can go wrong; otherwise you are in an incredibly dysfunctional company.
by hjort-e
5/11/2026 at 6:11:56 PM
I agree it would be absurd for it to take 95% of your time. I have, however, seen that it takes a lot more time than one would think.

I did some contracting work for a severely dysfunctional, meeting-heavy organization, and it was about 2 hours of meetings for every hour of real technical work!
by icedchai
5/11/2026 at 6:45:52 PM
Ah yes, agreed: if it's more than 90%, it just signals to me that a developer's skills are probably being wasted on business/coordination stuff.
But I guess if we mean the actual time tapping your keyboard making code, then it's true some days for senior+ devs, but definitely not for technical work overall.
by hjort-e
5/11/2026 at 6:31:59 PM
So about 26 hours of meetings to 13 hours of "real technical work" per week - but that's 33%, not 5%.
by fragmede
5/11/2026 at 6:28:59 PM
Even when it’s not dysfunctional, you spend a lot of time on communication and reading stuff other people wrote (including code). It’s very rare to work in isolation.
by skydhash
5/11/2026 at 6:50:10 PM
I guess it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code. If we say it's just typing, then 95% spent elsewhere is not absurd, no.
by hjort-e
5/11/2026 at 6:59:11 PM
> it depends on what you feel coding is. To me it's the architecture planning and reading other people code, not just writing code

And that would be where we disagree. I don't read code to look at code. When I'm reading code, I'm looking for the contracts to follow when interacting with a system. It would be nice if it were documented, but more often than not you have to rely on the code.
It’s very rare that I plan with a technical mindset. Yes I use the jargon, but it’s all about the business needs. Which again create contracts.
Same with writing code. Code is like English for me. If I don’t have a clear idea on what to write, I stop and do research (or ask someone). But when I do, it’s as straightforward as writing a sentence.
by skydhash
5/11/2026 at 7:18:51 PM
Huh? So you don't research whether something is technically feasible before you promise your stakeholders a delivery time/price estimate?

We all do the same stuff; the disagreement is just about what you feel coding is, and whether you think technical work is the same thing or a superset. If you, as a software dev, aren't hands-on with planning or working more than 5% of your time, you are basically a PO with a programming hobby.
by hjort-e
5/11/2026 at 10:05:41 PM
> So you don't research whether something is technically feasible before you promise your stakeholders a delivery time/price estimate

I believe 99% of requests are not about what's technically feasible. And the rare time I encountered one of those, my answer has mostly been "you don't have enough resources to try solving that problem".
If you know your fundamentals well, very often you will find the same common blocks everywhere. People much smarter than me have solved a lot of fundamental issues, and it's rare that I see a business request that doesn't reuse the same familiar stuff.
That's why coding is mostly boring. You follow the same patterns again and again, but what dictates the flows are the business parameters. And that's why most seniors spend so much time gathering good requirements: because the code is straightforward after that.
by skydhash
5/11/2026 at 6:43:27 PM
The least experienced developer writes the most code. Juniors will spend the whole day in the IDE: typing, testing, typing, etc. Senior developers will go to a park for a few hours, think, then come back and spend an hour or less typing code that just works - or write nothing at all, maybe even delete code. Instead they might update documents, or ask for clarification about edge cases they found or errors in planning that were not considered.
by varispeed
5/11/2026 at 6:48:41 PM
Since software is in every industry of man, I think you'll need to mention which industry this perspective is coming from. This is definitely NOT the case in certain industries.
by nomel
5/11/2026 at 7:55:42 PM
Finance, web services, service integration
by varispeed
5/11/2026 at 10:22:11 PM
I don’t know if that’s true for most of us who simply work on CRUD apps. Maybe I’m in a bubble, though.
by sefrost
5/11/2026 at 7:24:06 PM
Sneering at "kids these days"
by FatherOfCurses
5/11/2026 at 5:09:17 PM
[dead]
by mxksisksm
5/11/2026 at 8:00:35 PM
If you’re a developer and you were writing code 2% of your time pre-Claude - that’s 9 minutes a day - you will and should be fired.
by rhubarbtree
5/11/2026 at 10:38:03 PM
Things besides writing code that you might be doing:
- Meetings
- Code reviews
- Manual testing
- Deployments and more testing
- Triaging issues
- WTF how did this bug happen?
- JIRA in general
- Whiteboarding sessions / Design docs
- Interviews
- 1:1s (mandatory ones)
- 1:1s (networking / problem solving / political alignment)
- Whatever your company's version of corporate extracurriculars is
by wan23
5/11/2026 at 7:59:51 PM
The perspective here is "lifetime career", so you need to project out 30 years for a meaningful argument.

I think, much sooner than that, you'll have AI pumping out practically complete implementations that meet the requirements of function, set by the people who desire that function. THOSE people will be the developers, and will be more akin to technical "creatives" - more on the product side than the developer side.
by nomel
5/11/2026 at 8:14:44 PM
Someday people are going to get tired of "programming in English" with prompts, getting inaccurate output, etc., and someone is going to invent a higher-level kind of CODE that allows the user to directly specify the actions the computer should take to solve the problem. Later, someone will invent a kind of tooling that COMPILES these CODES into a runnable thing, skipping the prompt part altogether. It might be called something like Unified Prompt Language.
by AllanSavageDev
5/12/2026 at 7:12:54 PM
Programming is moving to programming by stated and understood intent, rather than syntax. Maybe contracts/legalese, but definitely not compilable code. Sure, some people will compile code, and more than some will be reading generated code, but that will be increasingly exceptional.
by nomel
5/13/2026 at 9:34:06 AM
For some time now I've had a loose abstraction swimming around my head, somewhere between code and prompt - maybe it's the OUTPUT of the prompt. I'm assuming that, going forward, developers will be no better at reading generated C code than I am at reading assembly today.
I'm imagining some "AI-native intermediate representation" where the prompt says "Give me a function that takes an array of strings and returns that array sorted", the real code that runs in the end is actual C code, and the representation might be something low-complexity but still human-readable:
func sortArray(<String> input) -> input.order.asc;
Not quite a prompt, certainly not runnable code, but a higher-level hybrid that is human-readable AND can be compiled into actual C code. At some point someone is going to have to debug something, and nobody will be able to read assembly/real code anymore.
It just seems to me that in the whole "AI WRITES MY CODE" world, nobody is really thinking about debugging and maintenance. Do we just commit the prompt we built the program with to Git and call it a day? What about when we need to modify the system? Do we make a prompt to modify it? Do we modify the original prompt?
AI so far is probabilistic - that can certainly be dealt with via various workarounds - and there's currently no reason to think today's stuff would even generate the same code twice. I can't shake the idea that there's a step missing in everyone saying "AI will write all our code now".
It's all a hack until a PhD writes a paper; then it becomes a technique.
Edit: Funny story - this morning I'm up early to write a program of low complexity but still somewhat rigorous. Let me see if I can think up a prompt that I might use:
I need a program invoked with a main. We'll use this as scaffolding to extend as the requirements materialize and constraints emerge. It will generate valid trading days for ES index futures, where contract months are HMUZ. For each contract it will start with the last day of trading as defined by the contract specification located at (www.cmegroup.com/contracts/ES) and build an array consisting of 4 calendar months of those dates, starting back from the last day of the contract. This collection will be sorted in ascending order by date, so that the 0 element in the array is the earliest trading date.
^^ So this here is just my intermediate step. I'm not even sure this is how I want to go forward with it, but this approach is just the first way I can conceive of to get the job started. From here I'll read the output and figure out how to add in the useful things I need to make it actually perform the intended function beyond what this prompt tells it to do. I can't even imagine how I might specify the final product I need in English. I'd be writing a giant prompt for everything I need built, and code would be so much faster to write and think in. Describing my stated intent in all this in English sounds like... murder. And we all know that the prompt I just made up sitting here is both a decent start at a prompt AND wholly inadequate, and will produce absolutely useless garbage.
by AllanSavageDev
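[Editor's note: the prompt above is concrete enough that its deterministic core can be sketched directly. Below is a minimal Python sketch, under two stated assumptions not in the comment itself: that the ES last trading day is the third Friday of the contract month (the detail the CME spec link would supply), and that "4 calendar months" can be approximated as 120 days. Only weekdays are counted; exchange holidays are omitted.]

```python
from datetime import date, timedelta

CONTRACT_MONTHS = {"H": 3, "M": 6, "U": 9, "Z": 12}  # HMUZ quarterly cycle

def third_friday(year, month):
    """Third Friday of a month (assumed ES last trading day)."""
    first = date(year, month, 1)
    # weekday(): Monday=0 ... Friday=4
    first_friday = first + timedelta(days=(4 - first.weekday()) % 7)
    return first_friday + timedelta(weeks=2)

def trading_days(year, month_code, months_back=4):
    """Weekdays in roughly months_back calendar months ending at the
    contract's last trading day, ascending (element 0 is the earliest)."""
    last = third_friday(year, CONTRACT_MONTHS[month_code])
    start = last - timedelta(days=months_back * 30)  # rough calendar months
    span = (last - start).days + 1
    days = [start + timedelta(days=n) for n in range(span)]
    return [d for d in days if d.weekday() < 5]  # drop weekends

if __name__ == "__main__":
    days = trading_days(2026, "H")  # March 2026 contract
    print(days[0], "...", days[-1])
```

Whether this counts as the "intermediate representation" the comment imagines, or is just ordinary code, is exactly the question being raised; either way, the English prompt and the ~20 lines above carry roughly the same information.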
5/12/2026 at 5:34:17 AM
I think we can call it Claude++, or C++ for short.
by mawadev
5/11/2026 at 5:37:34 PM
You don't think AI is going to be able to understand things and apply their ability to formulate solutions better than you, in the near future?
by AlexCoventry
5/13/2026 at 12:20:21 PM
No, I don't. Do you? If so, why? Extrapolation from guesswork?
by bborud
5/11/2026 at 7:15:54 PM
In 2000 I learned about this old technology called "neural networks".

AI really depends on long winters and rare breakthroughs. Deep neural networks were the most recent breakthrough.
The iterations you currently see are just adding more storage; the fundamental neural network structure doesn't change.
I'm confident AGI will not be achieved by the LLM architecture, and when the next AI breakthrough will come is anyone's guess. But if you take history into account, it will take a while.
by koonsolo
5/11/2026 at 11:18:21 PM
Yes, same. In the late 90s through the early aughts I was taught over and over and over again that neural networks were a dead-end concept and would never amount to anything.

Just like all the preceding AI booms, this one will hit its maximal point, the hype train will fizzle, the best parts will just become "normal", and then a couple of decades later something new will come along to push the boundary again.
by jghn
5/11/2026 at 11:16:41 PM
We switched to 'software engineer' to encapsulate that, I think. You can receive requirements and churn out code, or you can go up a level and think about the solution. Go another level up and think about the problem. Another level and it's the context of the problem. Further than that and it's the priority of it. And even higher up is how it fits into the product roadmap and the architectural decisions.

At some point you stop developing and start weighing the requirement against your understanding of the system and the environment it works in.
by ljm
5/11/2026 at 11:17:52 PM
There's an old chemistry joke that I've reapplied to software engineering. It goes something like this:

A New Engineer (NE) shows up on their first day on the job, notebook in hand, ready to learn. They get assigned to shadow an Experienced Engineer (EE) for their first day.
EE: Now, the thing is, for any project on our team, you only need to change about 3 lines of code.
NE, preparing to write down notes: Which 3?
EE: Well, it depends.
(Originally about Material Safety Data Sheets, and there only being 3 relevant lines on them).
I think this is what people miss about Software Development.
by xracy
5/11/2026 at 11:34:48 PM
LLMs can also "understand things and apply their ability to formulate solutions". There is nothing that will inherently limit AI from doing all knowledge work (and all physical work once robotics is good enough).

Of course developers could just move up to the "next level of abstraction" and become managers of agents who write the code, but eventually AI becomes a better manager of agents than even the best humans, at which point there is no contribution a human can make that an AI model or system of models couldn't do better.
by atleastoptimal
5/11/2026 at 11:48:45 PM
> There is nothing that will inherently limit AI from doing all knowledge work

Resources, for one: energy, water, cost. There seem to be diminishing returns on intelligence at the moment, while power and memory usage continue to go up.
by lambdas
5/11/2026 at 9:30:46 PM
Usually that means you're already a senior developer; understanding things and formulating solutions is part of delegating the work.

Now, those juniors whose job is to implement those solutions - they will have a hard time.
In my 50s, I also don't write as much code as I used to, even less nowadays with serverless, managed services, low-code/no-code tools, and agent orchestration workflows, and with it I keep seeing development teams getting smaller.
by pjmlp
5/12/2026 at 10:48:04 AM
> Natural selection will take care of them in due course.

While you are seemingly not at the moment, some day you might be on the receiving end of that "natural selection", in ways that seriously impact your remaining time on the planet.
In that case you might reconsider your stance, and especially question how "natural" a selection is in which a few powerful, rich people deprive others of their way to earn a living and to draw meaning from their lives.
The AI revolution keeps getting compared to the industrial revolution, but people keep forgetting the consequences of that one.
by DeusExMachina
5/12/2026 at 1:15:59 PM
I'm not terribly worried. The reason is that software isn't my only marketable skill set; that is deliberate. Even though I see myself primarily as a software engineer, in the past decade I've worked in areas that tend to be viewed as wildly different strata and domains.

And if the apocalypse comes, I'm actually not that bad at a handful of skilled blue-collar jobs.
The people who should be worried are the ones with narrow skill-sets and no capacity for dealing with rapid change. Especially if those skills are shallow too.
But I wasn't talking about people. I was talking about companies. And the reason I'm not worried about companies going under is that they have been going under all the time since the start of the industrial revolution. Yes, it happens faster and more violently today than before, but neither the churn nor the reasons are all that new. They just need to be understood so you can deal with change rationally and without panicking.
It is a good idea to read up on historic innovation/disruption cycles and realize that they are nothing new. The only reason people think this is a new problem is that 50-100 years ago these cycles used to take about as much time as your productive career, so people didn't need to understand how to deal with them. And every generation would be convinced that this is some unexpected and unique upheaval that only their generation has to deal with.
My stance is the only one that works well during disruption: you make sure you have more legs to stand on and you don't waste time fretting over things you can't change. If you find yourself out of options, you can only blame yourself.
by bborud
5/11/2026 at 5:17:38 PM
Because that distinction classifies people into "developers" and "software engineers". And software engineering isn't going to disappear anytime soon.
by madduci
5/11/2026 at 5:21:36 PM
Weird. I call myself a developer because I don't have an engineering degree from an ABET-certified engineering program.
I recognize, in some capacity, that this isn't the norm and that in the US "professional engineer" is protected and not simply "engineer", but it feels akin to stolen valor to me.
by hellojesus
5/11/2026 at 5:31:00 PM
If there were a license in the US for it, I’d agree with you. But as is, if you are “doing” engineering, you’re an engineer.
If you were a licensed engineer of some kind, you’d state that outright.
The equivalent of stolen valor would be claiming to be a licensed software engineer; except there is no such license so it would also be fraud, misrepresentation, etc.
(I know this is different elsewhere)
by borski
5/11/2026 at 6:09:07 PM
> If there were a license in the US for it, I’d agree with you.
Yeah, that is basically the thing in my country. You can't call yourself an engineer without passing a test, but I can't take it because there isn't one for software engineering.
Same thing for freelancing. Freelance jobs are defined in a list, and other jobs cannot benefit from the simplified tax rules that freelancers enjoy, but that list was written before software development was a thing.
by VonGallifrey
5/11/2026 at 5:56:22 PM
I call myself a computer programmer unless someone is asking for my official job title (software engineer).
by traderj0e
5/12/2026 at 4:57:28 AM
I call myself an engineer because I also have an engineering degree.
But yeah, the term is mostly misused.
by madduci
5/11/2026 at 5:36:10 PM
I'm a software dev in the US and I never call myself "engineer" in that capacity. Always "programmer" or "developer".
I agree. Engineers have to clear a much higher bar. Even though my career was spent in medical diagnostic software, where we had to get 510(k) clearance, I was still keenly aware that this was a fundamentally different activity from actual engineering.
by bilbo0s
5/11/2026 at 5:59:00 PM
I'm an electrical engineer who moved to software engineering, and there are a lot of commonalities between what I do now and what I did previously as an electrical engineer. The bar might seem high, but that's the only way I know how to work, honestly.
On the other hand, with the modern division of labour in a lot of companies and with the rhetoric I see here on HN and in other places: a lot of developers are indeed not even close to being engineers.
by whstl
5/11/2026 at 10:51:35 PM
> I'm getting old and I value my remaining time on the planet.
It's an interesting sentiment. I, too, am getting old and value my remaining time on the planet, and so I code by hand every chance I get. :) Luckily I'm in a position to be able to do that.
by beej71
5/11/2026 at 6:04:56 PM
You’re a ”developer“, I guess, but not a coder (anymore), which is what your interlocutors are probably asking about. You’ve migrated to a middle-manager job, not something they can probably just start doing competently. Essentially you’re agreeing with their initial sentiment: that coders will be made irrelevant.
by hyperjeff
5/11/2026 at 6:08:36 PM
I think it’s more nuanced. Even a “coder” spends the majority of their time not coding.
by onethought
5/12/2026 at 1:17:54 AM
> Natural selection will take care of them in due course
Wonderful articulation. There's a plethora of prognostication about how AI will change everything in software and beyond, and the thing I keep thinking is: well, when will the talk stop and the demonstration of results commence? It doesn't seem to have as yet.
If it works, it'll work. The methods will spread and quickly be accessible to everyone, and progress will go on. That's great.
If it doesn't work, we'll also see that in the absence of real results. And simply stating you are seeing it doesn't qualify. It must be something we can all see and use that is unavoidably, undeniably real.
by davnicwil
5/11/2026 at 6:32:19 PM
And most of the time the statistical nature of LLMs results in a less creative solution that is more expensive to run and harder to maintain. LLMs at this stage are good at scaffolding: generating the boilerplate you do not want to write and gluing things together quickly. They just make engineers faster.
by dev_l1x_be
5/12/2026 at 10:06:11 AM
Well said. The only flaw is the unfortunate realization that "I understand things and then apply my ability to formulate solutions" is rarely required. How many zombie corps are still roaming these days?
Judging by how many day-to-day tech products in my life are buggy, slow, or user-hostile, there can't be more than 50-100 tech companies actually innovating, right?
by intelVISA
5/11/2026 at 6:19:31 PM
- Compilers will make developers irrelevant
...
- Compilers can write assembly language code
- Compilers have -O3 now
etc...
Maybe we should rejoice. I remember dreading writing documentation, and now I would happily hand that off to AI.
by m463
5/11/2026 at 7:05:50 PM
It is indeed exciting (for you at least). The problem for most people is not that AI is spewing out code and reading documentation while developers do more interesting things. It is that companies are handing the jobs of those developers over to AI itself.
So those ex-developers are free to do the most interesting things in the world, with the minor change of no longer being able to rely on nice, steady paychecks every month.
by geodel
5/11/2026 at 4:10:58 PM
I dunno, man. I've been doing this for 20+ years and I think we're at a really important fork in the road where there are two possibilities.
The first is that AI is achieving human-level expertise and capability, but since they're now being increasingly trained on their own output they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything" and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts and the tooling has changed but total job market collapse is unlikely.
The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.
If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."
by ryandvm
5/11/2026 at 4:43:02 PM
100% this.
I'll also add another factor: it's become increasingly clear at our company that AI-enabled humans are getting to the bottom of the backlog of feature ideas much more quickly. This makes the 'good ideas' part of the business the rate-limiting step. And those are definitely not increasing with AI, beyond those generated by the AI churn itself ("let's bolt on a chat experience or an MCP!")
So maybe the coding assistants don't get a 10x improvement any time soon, but we see engineering job market contraction because there aren't really enough good ideas to turn into code.
by golddust-gecko
5/11/2026 at 5:59:50 PM
Yes, but as the price of getting work done goes down, a lot of companies that were priced out of custom software before can now hire devs, as the value a few hires can provide goes up. Fewer people per product, absolutely. No more teams of 10 or 20 working on the same thing. But there's so much out there that doesn't get done at all because you'd never be able to afford it.
Simple marginal thinking: when you lower the price of something, it gets more use cases. A rich person might not take even more flights because they are cheaper, but more people will consider flying when they wouldn't have at the old prices.
by hibikir
5/11/2026 at 4:51:31 PM
You are supposing that AI achieving human-level expertise and capability is a given. I am not so sure. Right now that's much further from the truth than one might think at first glance.
by bborud
5/11/2026 at 5:23:26 PM
> max out at "knowing everything"
LLMs know nothing but are great at giving the illusion that they know stuff. (It's "mansplaining as a service"; it is easier to give confident answers every time, even if they are wrong, than to program actual knowledge.) Even your first case seems wildly optimistic. The second case is a lot of "maybes" and "we don't know how but we might figure it out" that seems like a lot to bet an entire farm on, much less an entire industry of farms.
We sure are looking at a shift in the job market, but I don't think it is a fork in the road so much as a Slow/Yield sign. Companies are signalling that they are willing to act on promises and hope to cut labor costs, whether or not the results are real. I don't think anything about current AI can kill the software development industry, but I sure do think it can do a lot to make it more miserable, lower wages, and artificially reduce job demand. I don't think this has anything to do with the real capabilities of today's AI and everything to do with the perception being enough of an excuse; companies were always looking for that excuse. (Just as ageism has always existed, AI is also just a fresh excuse for companies to carry on aging experience out of their staff, especially people with memories long enough, or well schooled enough, to remember previous AI booms and busts.)
But also, yeah, if some magic breakthrough makes this a real "buggy whip manufacturer moment" and not just an illusion of one, I don't mind being the engineer on that side of it. There's nothing wrong with lamenting the coming death of an industry that employs a lot of good people and tries to make good products. This is HN: you celebrate the failures, learn from them, and then you pivot or you try something new. If the evidence tells me to pivot then I will pivot, and I'm already debating trying something entirely new, but learning from the failures can also mean respecting "what went right?" and acknowledging how many people did a lot of good, hard work despite the outcome.
by WorldMaker
5/11/2026 at 7:02:19 PM
I'm skeptical of LLM "reasoning" but they sure as hell know a lot. That's what the embeddings are: a giant semantic relationship between concepts.
by anon84873628
5/11/2026 at 7:59:43 PM
Embeddings are still mostly just vectors into n-dimensional K-means clusters. It isn't "knowing" two things are related and having the evidence; it is guessing two things are statistically likely to be related, based on trained patterns, and running with it without evidence.
It has no "semantic understanding" as we would define it. It's just increasingly good at winning cluster lotteries because we've increased the amount of training data to incredible heights.
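The "just vectors" point can be sketched in a few lines. This is a toy illustration with made-up 4-dimensional vectors, not real model embeddings: similarity between embeddings reduces to nothing more than an angle between vectors.

```python
import math

def cosine_similarity(a, b):
    # Similarity is pure geometry: dot product divided by the magnitudes,
    # i.e. the cosine of the angle between the two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings" (real models use hundreds or
# thousands of dimensions learned from data).
cat = [0.9, 0.8, 0.1, 0.0]
dog = [0.8, 0.9, 0.2, 0.0]
car = [0.0, 0.1, 0.9, 0.8]

# "cat" sits closer to "dog" than to "car" in this space, with no
# knowledge involved beyond where the training process put the points.
assert cosine_similarity(cat, dog) > cosine_similarity(cat, car)
```

Nothing in the computation knows what a cat is; the "relatedness" is entirely a property of where the vectors landed.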
by WorldMaker
5/12/2026 at 4:33:52 AM
Can you explain how you "know" two things are related? If I ask you the similarities between a cat and a dog, is your answer based solely on an understanding of their genetic phylogeny and how those genes express traits?
Grouping vectors in concept space is exactly how you create semantic understanding. The proof is in how good they are at creating semantically valid text. The fact that it took massive amounts of data is irrelevant. That just shows how much knowledge is encoded in all our language. It takes humans a ton of training to know things too.
by anon84873628
5/12/2026 at 4:56:10 PM
> is exactly how
We don't know that. It seems like great hubris to declare we know how the human brain works. You are asking me to explain how we know things and then telling me we've already figured it out in the same breath, and that's hilarious.
It doesn't take massive amounts of language data to train a baby human. It is almost entirely just: "Look. Here's a cat. Can you say cat? Cats go meow." "Over here, your aunt has a dog. Dogs go woof."
There's generally a flood of non-lingual contextual data in such moments such as sights, smells, sounds, movements, touch but that also only further underscores how different LLM training is from anything we'd consider human learning. Our memories aren't just "conceptual spaces of linguistic topics", they are complex sensory maps where a smell can remind you of the first dog you ever met. There is so much of our human knowledge that is not and never been encoded in most of our languages.
The fact that LLMs take massive amounts of linguistic data is relevant, because it shows how far we still have to go in barely scratching the surface of how the human brain seems to work. (Which again, we know only the barest details. Anyone who tells you they know 100% of how the human brain operates so far tends to be a snake oil salesman.)
by WorldMaker
5/12/2026 at 9:51:08 PM
We do mostly know how the brain works at this level of detail, and it is akin to Principal Component Analysis. There are only so many ways it could work, unless you believe in dualism. My question was rhetorical. All you've described with the other stuff is a "multi-modal" model (and ignoring all of the "biological pre-training" that took place through millennia of evolution). The interesting (and perhaps surprising to some people) thing is how well pure text training can compensate for the lack of other senses.
by anon84873628
5/13/2026 at 4:27:35 AM
Cool attempt at an ANY% speedrun of biology and philosophy, bro. Maybe next time shoot for a higher percentage score?
"Neural Networks" are the Omegaverse of Computing and we are all poorer for it. I could elaborate, but I'm exhausted and depressed right now. The map is not the territory. The broken analogy is almost never the real thing. A stopped clock is right thousands of times per year if you just keep collecting as much data as you can.
by WorldMaker
5/11/2026 at 7:44:55 PM
Encyclopedias and Wikipedia know a lot too. Knowledge isn't much use on its own; it's about how you use it.
by wiseowise
5/12/2026 at 4:37:44 AM
Well, Wikipedia can't write an essay for me, and LLMs can.
I'd say they are quite adroit at using their knowledge.
I mean, is Mythos finding all these vulnerabilities not evidence enough? Does AI Studio not clearly understand React and use it artfully?
by anon84873628
5/11/2026 at 7:22:19 PM
I agree with you, but a big drawback is that the accuracy or confidence of their output can't be estimated.
So they surely know a lot, but you are never sure if the info is correct or not.
by koonsolo
5/12/2026 at 4:43:01 AM
They can estimate confidence based on distances in that state space.
But yes, it gets tricky.
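One hedged sketch of what "estimating confidence" could mean in practice: use the spread of the model's output distribution as a proxy. The scores below are made up for illustration; this is not any particular model's API, just the standard softmax-plus-entropy recipe.

```python
import math

def softmax(scores):
    # Convert raw scores (logits) into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Low entropy: probability mass concentrated on one answer ("confident").
    # High entropy: mass spread out over many answers ("uncertain").
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = softmax([9.0, 1.0, 0.5])   # one answer dominates
uncertain = softmax([1.1, 1.0, 0.9])   # nearly uniform

assert entropy(confident) < entropy(uncertain)
```

The tricky part the parent alludes to: a model can be confidently wrong, so low entropy is a proxy for certainty, not for correctness.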
by anon84873628
5/11/2026 at 7:19:44 PM
Do you think the latter can be achieved with the LLM neural-network architecture? I highly doubt it. Neural networks are very old tech, and it took us this long to get here.
I'm sure we'll reach AGI at some point, but looking at AI history, I don't see that coming any time soon.
by koonsolo
5/11/2026 at 7:16:33 PM
The problem is people think AI can replace the 95-98% that isn't code too. That's where we end up with massive, unusable codebases that no one understands.
by dawnerd
5/11/2026 at 6:34:58 PM
That doesn't hold, because the goal for executives is to increase revenue, and the main sales pitch of Anthropic et al. is to pay for agents instead of paying for engineers. That means 80% of the workforce is out no matter what. Whether or not one belongs to the remaining 20% is a different story, but obviously not all of us will be there.
> I understand things and then apply my ability to formulate solutions
AI is coming for that too. Don't be naive
by dakiol
5/11/2026 at 6:39:46 PM
It will be interesting for governments using workers as a proxy for taxing corporations.
by varispeed
5/11/2026 at 6:44:11 PM
> Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
Thing is, natural selection will take care of you at the same time. Because you'll also come to rely on products they make or services they offer, either directly or indirectly. So eventually you, too, will suffer the consequences of the enshloppification.
by timedude
5/11/2026 at 10:49:38 PM
Anyone read posts like this and picture someone who doesn’t actually do anything all day besides posture in meetings? Probably with a super inflated title and salary.
I doubt this is what the OP does, but there are tons of developers like what I described, and they seem actively proud of not actually building anything and playing politics all day.
by codemog
5/11/2026 at 6:22:19 PM
You miss the major factor in your compensation: pricing pressure due to supply and demand.
By removing all the junior engineers, you've fundamentally changed the market forces longer term, and most people expect that to negatively impact you on the supply-demand curve regardless of whether the statements you've made above are true, which they most likely are for senior engineers.
by jchonphoenix
5/11/2026 at 6:28:38 PM
In removing junior developers, leaving only senior developers, wouldn't that reduce supply, making the price go up, not down? It's been a while since Econ 101 for me, though.
by fragmede
5/12/2026 at 10:30:45 AM
I had a professor at my CS university (one of the greatest I had) who used to say (in 2008): "a developer should write no more than 5 lines of code a day".
by sirnicolaz
5/11/2026 at 6:46:54 PM
This is exactly it. The speed of light has not changed: we're limited by our ability to understand the system and make decisions about what to do next. AI will speed that up, but the core work is the understanding and decision-making.
Saying otherwise is sort of like reducing the task of writing a novel to typing.
by rpdillon
5/11/2026 at 6:42:59 PM
Something missed here is that computer science was a highly theory-driven discipline where people were taught how to think critically about solving complex problems. Industry complained they weren't teaching enough programming skills, so they dumbed down the thinking part and emphasized the vocational part. Now the vocational part is virtually useless, and the grounding of theory applied to complex problems is suddenly really relevant again. Schools will take time to retool their programs and teaching staff, and two generations of graduates, if not three, will have entered a work environment that doesn't need what they learned.
As someone 35 years into my career, I agree this is the most exciting part of my career. I love programming and I do it all the time, but I do it by reading code, course-correcting, explaining how to think about the problems, and herding cats, just like working with a team of 100 engineers. But the engineers I'm working with now by and large listen, don't snipe me on perf reviews, aren't hallucinating intent based on hallway conversations with someone else, etc. This team of AI engineers can explain to me their work, mistakes, drift, etc. without ego, and if it's not always 100% correct, it's at least not maliciously so. It understands me no matter how complex the domain I reach into; in fact, it understands the domain better than I do. So instead of spending a few months convincing people with little knowledge or experience that X is a good idea, I can actually discuss X and explore whether it's a good idea or not and make a better-informed decision. I've learned more in these discussions than in decades of convincing overly egoistic juniors and managers to listen to me about something I'm an industry authority on.
However, I see very clearly that we will need very few of the team of 100 human engineers I can leave behind in my work. Some of them will be there in a decade, but maybe fewer than 1 in 10. This is going to be a more brutal time than the dot-com bust for CS grads, and I don't think it will ever improve, mostly because we simply won't need the "my parents told me this makes money" people; just the passionate folks will remain. But even then, we face a situation where the value of any software developed is very low because so much software is being developed. It's going to turn into YouTube, where the software that is paid for is very small relative to the quantity of software developed. We already see this in the last few months in the rate of GitHub projects created. If the value of any software created is low, the compensation of the creator will be low unless they're a very rare talent.
by fnordpiglet
5/12/2026 at 1:31:15 PM
This is kind of country-specific. In many European countries, universities vary in the years and content required for a degree, depending on whether they focus on vocational or more general higher education.
For example, university versus polytechnic.
by pjmlp
5/11/2026 at 7:42:03 PM
This is a valid perspective, but I don't think a useful one.
Being able to produce code is a huge unlock for many non-programmers. So in a way, it doesn't matter how much time existing developers spend on coding. It's about helping anyone become a developer.
by vagab0nd
5/11/2026 at 10:39:40 PM
Yeah, coding speed was almost never the bottleneck, I found. AI now does the typing and (some of) the thinking. It doesn’t figure out what needs to be built and how it all plays together (yet).
by Insanity
5/11/2026 at 7:07:00 PM
The "apply my ability" is doing a lot of work, so to speak, in the above exchange. Work that might eventually well be automated away.by xhevahir
5/12/2026 at 1:56:01 PM
10% writing code. 90% reading and understanding code.
by ChrisRR
5/11/2026 at 5:12:53 PM
Saying being a programmer is about writing code is a bit like saying being an artist is about drawing lines on a canvas.
Yeah, technically drawing lines on canvases may be a very important part of being a painter, but it is hardly the core of what makes or breaks great art.
by atoav
5/11/2026 at 7:14:44 PM
> Yes, about 2-5% of the time. Less now.
I spent the second half of my 30-year career fixing organizations and processes where this was the case. So many things are wrong in places where this is the case (or, alternatively, you need a different job title :) )
by bdangubic
5/12/2026 at 3:24:06 AM
> Natural selection will take care of them in due course
Or you.
by pasquinelli
5/11/2026 at 5:02:48 PM
What you described are senior developers and system architects.
Junior developers spend most of their time writing code (when they're not forced to attend pointless standups, because Agile/blah/blah)
> The developers who still think their job is about writing code will perhaps not have a job in the future.
So you're saying the same thing everyone else is saying. SWEs won't go away, but they will be greatly reduced, because those whose job is about writing code -- junior devs -- will be replaced.
(How will Sr Devs in the future be created? That's the question, isn't it.)
by insane_dreamer
5/11/2026 at 5:19:22 PM
> How will Sr Devs in the future be created?
As an extreme example, maybe we’ll see long-running internships and training like doctors experience. Doctors don’t start their careers until after ~12+ years of prep and training.
Pragmatically, software development has a lot of examples of teenagers making apps and college students building software companies. In the 12 years it takes for training, low-knowledge workers could be continuously vibe coding replacements for most of the commercial software products they’d be hired to build. So I doubt we’ll treat software development as a rarefied, high-skill job.
by vineyardmike
5/12/2026 at 8:47:59 PM
[flagged]
by throwaway74628
5/11/2026 at 6:08:25 PM
The true argument is about quantity - of people, not code. All qualitative arguments are missing the point.
by boring-human
5/12/2026 at 2:13:57 PM
This is a bit of a strawman. When people say "writing code", they don't necessarily mean [pressing the keys on the keyboard that produces the necessary bytes in a text file].
by agnishom
5/11/2026 at 5:49:02 PM
Note that even if you know the job is about understanding things, the manager who'll boot you and leave you without income probably doesn't. They'll just get their political cookie points for saving money by replacing you with AI.
by izacus
5/11/2026 at 7:14:20 PM
This is maybe a bit myopic.
Dude - look what happened in the last 2 years in software.
Now project out another 10.
I totally agree with you 'as of now, in the current paradigm'.
But that could very well change.
by bluegatty
5/11/2026 at 5:37:25 PM
> - I understand things and then apply my ability to formulate solutions
- Well, and AI can do part of that too, maybe more of it soon.
- ...
- Besides, you don't need 10 guys in a team to do that. A couple of them will do, then AI will do the coding. What will happen to the rest?
- ...
by coldtea
5/11/2026 at 7:15:47 PM
> Multiple times per week I have the same conversation.
Really? I mean, good on you if it's true and you like the attention, but that sounds like an implausible amount of interest in someone and their relatively mundane profession.
by jstummbillig
5/11/2026 at 11:51:08 PM
Engineering in a nutshell... What did we do before computers??? Halls full of draftsmen...
by engineer_22
5/11/2026 at 11:44:04 PM
I think similarly. To me, the value of the "programmer" is not "I love Rust", "I am a React expert", etc. That "love" is for sure replaceable.
by FpUser
5/11/2026 at 7:29:56 PM
In my community, almost all problems are political. "Problem-solving ability" matters if you are in HFT, but everything else? Math can't tell you the best way to use land, educate a kid, what to pay for healthcare and how, how to prioritize biotech research, set a minimum wage, decide congressional maps, all sorts of stuff that I actually pay for or care a lot about. In fact, I think you are totally misinterpreting what people are saying to you; you are 200% wrong: the 2-3% of your time spent coding was the valuable part, and your so-called problem-solving ability rarely solved any real problems.
by doctorpangloss
5/11/2026 at 5:54:17 PM
I think the future is pretty up in the air in this respect, but my guess is that AI will just lead to another shift in the set of knowledge that a 'real programmer' is expected to have. I'm old enough to remember when people would make fun of web developers for 'programming' using HTML and JavaScript. And of course, back in the day, you couldn't be a real programmer unless you wrote assembly language. In a few years' time, being able to write (as opposed to read) source code in any specific programming language will probably become a niche skill. The next generation will be able to read Python to about the same extent that I can read x86 assembly.
Perceptions of what knowledge counts as 'low level' are constantly shifting. These days, if you write C, you're a low-level, close to the metal programmer. In the 70s, a lot of people made fun of Unix for being implemented in a high-level programming language (i.e. C) rather than assembly.
by foldr
5/12/2026 at 1:33:54 PM
Kind of ironic, given that OSes implemented in high-level programming languages trace back to 1958, with JOVIAL being one of the first systems programming languages.
by pjmlp
5/11/2026 at 4:46:06 PM
Pure wage workers should consider dropping the attitude that tech progress will just put their inferiors in the same line of work out of a job (hrmph, good riddance, etc.), because this pseudo-progress could creep up on them as well.
Then you won’t have this just world of deserving workers at all. Just formerly deserving workers and idiot billionaires like Musk (while the robots do all of the work).
by keybored
5/11/2026 at 8:45:37 PM
This is an example of survivorship bias dressed up as general advice that doesn't consider the entire ecosystem. And we need look no further than what's happened in Hollywood, with writing in particular.
The general progression of a Hollywood writing career is from PA (production assistant), which often starts off as a volunteer "intern" position, to writer's assistant. Assistant here usually means doing any menial task anyone wants, from fetching dry cleaning to taking a dog to a grooming appointment. When you're a writer's assistant, you will often spend time in a writers' room. You will see how the process works. You probably won't contribute anything, but you may get feedback on things you've written from whomever you're working for.
The next step is staff writer. You will be paid to produce scripts and stories for a TV show, for example. That writers' room will have a head writer. On a TV show, the head writer is almost always the showrunner. The showrunner is effectively the leader of the entire project and is responsible for breaking up a season into storylines and making sure those scripts make sense as a collective. They might write one or more of those scripts themselves, or maybe not. The showrunner will hire directors for each episode.
The path from staff writer to showrunner often goes through being a producer. Producers are responsible for a lot of the logistics of filming a show: hiring extras, finding locations, coordinating stunts and costumes, and making sure the director has everything they need.
As part of all this, in the 22-episode TV era, writers would often end up spending time on set while the show was being filmed. They'd learn from the process.
Every part of this was necessary. Those writers on set are your future producers and showrunners.
So what's happened in the streaming era is that writers' rooms got smaller (so-called "mini writers' rooms"), maybe only the showrunner is ever on set, the writers have stopped working by the time filming even begins, and you might only be doing 8-12 episodes. On a 22-episode season, that one job could support you. 8-12 episodes can't.
But you see how this all breaks down: writers can no longer support themselves, they're no longer being trained to be future producers and showrunners, there's no feedback from the set back to the writers' room, and you end up with 3-year gaps between seasons. The only reason for all of this is that it's cheaper.
So, you may be a staff engineer who tech leads dozens of other engineers. You're not formally a manager or director but you have a lot of influence about the entire project. But how did you get there? You started as a junior engineer being told what to do. You got to see how other leaders operated. You became responsible for more and more things. You might start fixing bugs under supervision to managing a feature then an entire project and so on.
So what's going to happen here is (IMHO) we will have years of the software engineer space shrinking. There'll be very little entry-level hiring. Layoffs will reduce the entire workforce and there'll be a few tech leaders who hang on because they still produce value. Some of them will probably discover they don't produce enough value and they'll go too.
But where do the future tech leaders come from in this scenario? AI is being used as an excuse to kill the entry-level pipeline and if you go around and say "git gud" [sic] then I'm sorry but you just don't understand the impact of what's happening or you don't care because, at least for now, you're simply not affected.
You see the same thing with people who espouse the myth of meritocracy. Well, if a given workforce shrinks by 50%, half those people are, by definition, not going to survive. An individual may be able to reskill or skill up to survive but not everyone can. And that's how people end up in Amazon warehouses. At least until they're no longer needed there either.
by jmyeet
5/12/2026 at 5:27:12 AM
If the industry is to shrink, this is the best way it can. Stop people entering while they are young and can pivot into something with better returns. Keep the experienced people who are older and may find it harder to pivot, and who had some "good days" to help ride them through these bad times. I've seen similar dynamics in other industries as they slowly die/move on (e.g. manufacturing, niche trades, etc). A slow decline is better than a boom/bust. If it ends up that we need software engineers later, training is an easier problem than mid-career death for the juniors in a few years time. Eventually the market finds a new equilibrium of staff to demand ratio. You prefer that happen sooner so people don't make bad investments of their time (e.g. studying the wrong course based on inaccurate market signals).
by throw234234234
5/11/2026 at 4:14:03 PM
I normally say that I have zero concerns regarding AI in terms of employment. At most I am concerned with learning the best practices of AI usage to stay on top of things. Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.
Moderately useful tool, but hellishly expensive when not being subsidized by imbeciles who dream of it undermining labor. A fool and his money should be separated anyway.
What I am really concerned about is the incoming economic disaster being brewed. I suspect things will get very ugly pretty soon.
by surgical_fire
5/12/2026 at 1:34:07 AM
[dead]
by th1sisoldnews