2/16/2026 at 6:11:43 PM
> Microsoft’s AI CEO is saying AI is going to take everybody’s job. And Sam Altman is saying that AI will wipe out entire categories of jobs. And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, “kind of under the radar, but about to kill millions of people”.
> I legitimately feel like I am going insane when I hear AI technologists talk about the technology. They’re supposed to market it. But they’re instead saying that it is going to leave me a poor, jobless wretch, a member of the “permanent underclass,” as the meme on Twitter goes.
They are marketing it. The target customer isn't the user paying $20 for ChatGPT Pro, though; the customers are investors and CEOs, and their marketing is "AI is so powerful and destructive that if you don't invest in AI, you will be left behind." FOMO at its finest.
by mjr00
2/16/2026 at 6:47:26 PM
Tech has slowly been moving that way anyway. In terms of ROI, you’re often much better off targeting whales and large clients than trying to become the ubiquitous market service for consumers. Competition is fierce and people are comparatively poor, so you need volume to succeed.
Meanwhile, if you go fishing for niche whales, there’s less competition and much higher ROI per sale. That’s why a lot of tech isn’t really consumer friendly: it’s not really targeting consumers, it’s targeting other groups that extract wealth from consumers in other ways. You’re selling it to grocery stores because people need to eat, the stores have the revenue to pay you, and they see the value proposition of dynamic pricing on consumers, among other things. You’re marketing it for analyzing civilians’ communications to prying governments that want more control. You’re selling it to employers who want to minimize labor costs and maximize revenue, because they often have millions or billions, and small industry monopolies exist all around. Just find your niche whales to go hunting for.
And right now I’d say a lot of people in tech are happy to implement these things, but at some point it’s going to bite you too. You may be helping build dynamic pricing for Kroger because you shop at Aldi, but at some point all of this will affect you as well, because you’re also a laboring consumer.
by Frost1x
2/16/2026 at 8:25:50 PM
The reason whaling and government contracts are increasingly the best options available is that the wealth of the working class has mostly been extracted... And with less and less disposable wealth available to the populace, targeting them for products gets increasingly competitive as anything non-essential gets ignored.
It's a negative feedback loop, and the politicians would rather reduce taxes on the rich than reverse that trend.
At least that's how it looks to me
by ffsm8
2/17/2026 at 12:52:14 AM
The sad reality is that 80-90% of us are simply no longer economically relevant to trillion-dollar companies. Even if we were paying customers, each of our total lifetime values is a rounding error on these guys' balance sheets. An individual could swear off Company X, and it wouldn't even be noticed, not even by someone deliberately looking for the economic impact of losing a customer.
by ryandrake
2/16/2026 at 7:14:23 PM
It’s capitalism moving everything that way. Always has been and will continue to until we’re all hooked up to tubes paying taxes with ectoplasm.
by braebo
2/16/2026 at 6:19:25 PM
The marketing is clearly affecting individual developers, too. There's a mass psychosis happening.
by AstroBen
2/16/2026 at 6:27:23 PM
Maybe. I'm actually a big fan of Claude/Codex and use them extensively. The author of the article says the same.
> To be clear: I like and use AI when it comes to coding, and even for other tasks. I think it’s been very effective at increasing my productivity—not as effective as the influencers claim it should be, but effective nonetheless.
It's hard to get measured opinions. The most vocal opinions online are either "I used 15 AI agents to vibe code my startup, developers are obsolete" or "AI is completely useless."
My guess is that most developers (who have tried AI) have an opinion somewhere between these two extremes, you just don't hear them because that's not how the social media world works.
by mjr00
2/16/2026 at 6:41:06 PM
Well, I've just watched two major projects fail which were running mostly on faith, because someone read too many "I used 15 AI agents to vibe code..." blog posts and sold it to management. The promoters have a deep technical understanding of our problem domain but little understanding of what an LLM can achieve or what it can understand about the problem at hand.
Yes, you can indeed vibe code a startup. But try building on that, or doing anything relatively complicated, and you're up shit creek. There's literally no one out there doing that in the influencer-sphere. It's all about the initial cut and MVP of a project, not the ongoing story.
The next failure is replacing a 20-year-old legacy subsystem with 3 MLOC with a new React/microservices thing. This has been sold to the directors as something we can do in 3 months with Claude. Project failure number three.
The only reality is no one learns or is accountable for their mistakes.
by dgxyz
2/16/2026 at 6:54:21 PM
Rather than making a good product that’s useful to the world, the goal of current startups seems to be milking VCs who are desperately searching for the new version of the mobile phone revolution that will make this all OK... so it seems like they’re accomplishing their goal?
I reckon the reason the VC rhetoric has reached running-hair-dye-Giuliani-speech levels of absurdity isn’t that they’re trying to convince other people; it’s that they’re trying to convince themselves. I’d think it was funny as hell if my IRA weren’t on the line.
by DrewADesign
2/16/2026 at 6:58:21 PM
I think no one cares about the truth or building something good any more. It's a meme economy. Tell a story and the numbers go up. Until they don't.
Yes, my pension is probably going down the same sinkhole with your IRA. Good luck. We need it.
by dgxyz
2/16/2026 at 7:07:59 PM
It's always been like this, but for a while you had people like Steve Jobs to hold people like Bill Gates accountable. He long referred to MSFT as being the McDonald's of the industry in relation to the stuff they produced - very pedestrian.
by ass22
2/16/2026 at 7:57:39 PM
Yep, all that accountability Gates has faced in his life.
Is he facing charges yet for sneaking drugs into his wife's food, or did he only ever discuss that with his buddy Jeff E and never actually follow through with it?
by queenkjuul
2/16/2026 at 8:22:38 PM
It's worse than a meme economy, it's a gambling economy. The entire VC business model is gambling (fund 10, hope 1 pays for the losers). Crypto is all gambling. The stock market is gambling. TV ads are all gambling ads. Even dating is gambling.
Now programming and art are both gambling.
by ModernMech
2/16/2026 at 6:52:27 PM
My experience has been a mixed bag.
AI has led us into a deep spaghetti hole in one product where it was allowed free rein. But when applied to localised contexts, roughly a class at a time, it’s really excellent and productivity explodes.
I mostly use it to type out implementations of individual methods after it has suggested interfaces that I modify by hand. Then it writes the tests for me too very quickly.
As soon as you let it do more, though, it will invariably tie itself into a knot, all the while confidently asserting that it knows what it’s doing.
by eckesicle
2/16/2026 at 6:55:12 PM
On the localised context stuff: yeah, no. I spent a couple of hours rewriting something Claude did terribly a couple of weeks back. Sure, it solved the problem, a relatively simple regression analysis, but it was so slow that it crapped out under load. Cue an emergency rewrite by hand. 20s latency down to 18ms. Yeah, it was that bad.
by dgxyz
2/16/2026 at 7:55:43 PM
For me it's just wildly unpredictable. Sometimes it gets a small task perfectly right in one shot, sometimes it invents an absurd new way to be completely wrong.
Anyone trusting it to just "do its own thing" is out of their mind.
by queenkjuul
2/16/2026 at 8:18:32 PM
For me, I would ask it to do a simple thing and it would give me the tutorial code you could find anywhere on the Internet. Then you ask it to modify it in a way you can't find in any example online; it will tell you it's fixed everything, but actually nothing has changed at all, or it's completely broken.
I think if someone's goal was just the tutorial code, it would have been very impressive to them that the AI can summon it.
by ModernMech
2/16/2026 at 8:46:48 PM
This is what I've been using the freebie Gemini chat for mostly: example code, like reminding me of C stdlib stuff, JavaScript, a bit of web server stuff here and there. I think it would be fun to give Google's agent or CLI stuff a spin, but when I read up here and there about Antigravity, I'm reading that people are getting their accounts shut down for stuff I would have thought was OK, even if they paid for it (well, actually, as usual, the real reasons for accounts getting zapped remain unknown, as is today's trend for cloud accounts).
I'm too poor for local LLMs; I think there might be a 2 or 4GB graphics card in one of my junk PCs, but that's about it, lol.
by cowboylowrez
2/16/2026 at 6:47:16 PM
> I've just watched two major projects fail
This is an opportunity. You can have a good long career consulting/contracting for these types of companies.
by LouisSayers
2/16/2026 at 6:49:18 PM
Why do you think I work there!
Emergency clean-up work is ridiculous money!
by dgxyz
2/16/2026 at 6:51:08 PM
> "AI is completely useless."
This is a straw man. I don't know anybody who sincerely claims this, even online. However, if you dare question people claiming to be solving impossible problems with 15 AI agents (they just can't show you what they're building quite yet, but soon, soon you'll see!), then you will be treated as if you said this.
AI is a superior solution to the problem Stack Overflow attempted to solve, and really great at quickly building bespoke, but fragile, tools for some niche problem. However, I have yet to see a single instance of it being used to sustainably maintain a production code base in any truly automated fashion. I have, however, personally seen my team slowed down because code review is clogged with terribly long, often incorrect, PRs that are largely AI generated.
by crystal_revenge
2/16/2026 at 6:39:56 PM
I use both Claude and Codex (Claude at work, Codex at home).
They are fine, moderately useful here and there in terms of speeding up some of my tasks.
I wouldn't pay much more than 20 bucks for it though.
by surgical_fire
2/16/2026 at 6:26:17 PM
I think this is reality.
None of our much-promoted AI initiatives have resulted in any ROI. In fact, they have cost a pile of cash so far and delivered nothing.
by dgxyz
2/16/2026 at 6:45:47 PM
Many AI initiatives have had massive ROI, though. The implementation problems are similar to any pre-AI tech rollout, and hugely expensive non-AI tech implementations fail all the time.
by molsongolden
2/16/2026 at 6:48:44 PM
Name one that has at least $200mn ROI over capital investment. Show me the balance sheet for it as well. And make sure that ROI isn't from suddenly not paying salaries.
by dgxyz
2/16/2026 at 6:37:45 PM
[flagged]
by co_king_5
2/16/2026 at 6:42:13 PM
thanks for your insightful contribution
by AstroBen
2/16/2026 at 6:41:40 PM
Are these botted comments or just sarcasm?
by Ancalagon
2/16/2026 at 6:55:19 PM
It's just like in the Crypto days.
Back then, whenever there was a thread discussing the merits of Crypto, there would be people speaking of the certainty that it was the future and fiat currency was on its way out.
It's the same shit with AI. In part it's why I am tranquil about it. The disconnect between what AI shills say and the reality of using it on a daily basis tells me what I need to know.
by surgical_fire
2/16/2026 at 7:02:45 PM
That's why my account is so new. Zealot hordes. I grow tired and walk away from HN. Not sure why I keep coming back.
I remember when everyone was reading SICP, and Clojure and Blockchain were going to burn the universe to the ground. Then crypto. Now this.
Been around much longer than HN and watched this cycle so many times it's boring. I'm still writing C.
by dgxyz
2/16/2026 at 7:46:24 PM
> Not sure why I keep coming back.
It's good toilet time for when Reddit gets too annoying.
by surgical_fire
2/16/2026 at 10:45:59 PM
That’s probably it, actually.
by dgxyz
2/16/2026 at 6:45:55 PM
Does it matter what I say, or are you going to call me a bot regardless because you're sensitive about my joke?
by co_king_5
2/16/2026 at 7:16:58 PM
It's unclear if you are merely joking or what you are actually doing here. Your comments don't say much of anything. People here reward and want useful comments that take the discussion seriously. If you don't, why are you commenting nothingburgers? YOU are the one writing short, substance-free, complaint-only comments, complaining about other people.
That said, a well-reasoned text should probably go on a blog site, not here, or here only as a link. Otherwise you are wasting a lot of effort, with only a few people even noticing your comment and the discussion soon entirely disappearing into history.
by nosianu
2/16/2026 at 7:32:34 PM
Honestly, I couldn’t tell; didn’t mean to offend. There were just two comments in a row saying essentially the same sarcastic thing about using a 3-year-old tech for the last 5+ years.
by Ancalagon
2/16/2026 at 6:36:33 PM
After spending nearly 5 years building software which uses AI agents on the back end, I've come to the conclusion it's the PC revolution, part 2.
Productivity gains won't show up in economic data, and companies trying to automate everything will fail.
But the average office worker will end up with a much more pleasant job and will need to know how to use the models, just as they needed to learn to use a PC.
by noosphr
2/16/2026 at 6:41:35 PM
Are these botted comments or just sarcasm?
by Ancalagon
2/16/2026 at 6:44:42 PM
> There's a mass psychosis happening
There absolutely is, but I'm increasingly realizing that it's futile to fight it.
The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.
Even if you restrict yourself to small, open models, there is so much unexplored in messing with their internals. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but has so much potential for creating interesting stuff. Even restricting yourself to an API endpoint, isn't there something more clever we can be doing than badly re-implementing code that already exists on GitHub through vibe coding?
But nobody in the hype-fueled mind-rot part of this space remotely cares about anything real being done with gen AI. Vague-posting about your billion-agent setup and how you've almost entered a new reality is all that matters.
by crystal_revenge
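For what it's worth, the internals-level tinkering described above can start very small: once you have a model's raw logits (any open model exposes them), you can change how sampling behaves. A minimal sketch in plain Python, with a hand-written logit list standing in for real model output (the function name and numbers are illustrative, not from any library):

```python
import math
import random

def sample_with_knobs(logits, temperature=1.0, top_k=None, seed=None):
    """Sample a token index from raw logits, with temperature and top-k knobs.

    This is the kind of hands-on experiment the comment alludes to; the same
    trick applies unchanged to the real logit vector an open model returns.
    """
    rng = random.Random(seed)
    # Sort (index, logit) pairs by logit, highest first.
    indexed = sorted(enumerate(logits), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        indexed = indexed[:top_k]  # keep only the k most likely tokens
    scaled = [(i, l / temperature) for i, l in indexed]
    m = max(l for _, l in scaled)
    exps = [(i, math.exp(l - m)) for i, l in scaled]  # numerically stable softmax
    total = sum(e for _, e in exps)
    r = rng.random() * total
    acc = 0.0
    for i, e in exps:
        acc += e
        if acc >= r:
            return i
    return exps[-1][0]
```

With `top_k=1` this degenerates to greedy decoding; raising the temperature flattens the distribution, which is exactly the kind of behavior worth poking at directly rather than through a chat UI.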
2/16/2026 at 7:00:16 PM
Yes, it's been odd to observe the parallels with the web3 craze.
You asked people what their project was for and you'd get a response that made sense to no one outside of that bubble, and if you pressed on, people would get mad.
The bizarre thing is that this time around, these tools do have a bunch of real utility, but it's become almost impossible online to discuss how to use the tech properly, because that would require acknowledging some limitations.
by svara
2/16/2026 at 8:45:25 PM
Very similar to web3! On paper the web3 craze sounded very exciting: yes, I absolutely would love an alternate web of truly decentralized services.
I've been pretty consistently skeptical of the crypto world, but with web3 I was really hoping to be wrong. What's wild is that not a single truly distributed, interesting/useful service came out of all that hype. I spent a fair bit of time diving into the details of Ethereum and very quickly realized the "world computer" there (again, a wonderful idea) wasn't really feasible for anything practical (other than creating clever ways to scam people).
Right now in the LLM space I see a lot of people focused on building old things in new ways. I've realized that not only do very few people work with local models (where they can hack around and customize more), a surprisingly small number of people write code that even calls an LLM through an API for some specific task that previously wasn't possible (regular ol' software built using calls to an LLM has loads of potential). It's still largely "can some variation on a chat bot do this thing I used to do for me".
As a contrast, in the early web, plenty of people were hosting their own websites and messing around with all the basic tools available to see what novel thing they could create. I mean, "Hamster Dance" was its own sort of slop, but the first time you saw it you engaged with it. Snarg.net still stands out as novel in its experiments with "what is an interface".
by crystal_revenge
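As a concrete illustration of "regular ol' software built using calls to an LLM": the task-specific part is usually just building a constrained prompt and defensively parsing the reply. The sketch below assumes an OpenAI-style chat-completions payload shape; the model name, labels, and helper names are invented for the example, not any product's actual API contract:

```python
import json

def build_routing_request(ticket_text, labels, model="some-llm"):
    """Build a chat-completions-style payload that asks the model to
    classify a support ticket into exactly one of the given labels."""
    prompt = (
        f"Classify the ticket into exactly one label from {labels}. "
        'Reply with JSON: {"label": ...}.\n\n'
        f"Ticket: {ticket_text}"
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # keep the classification as deterministic as possible
    }

def parse_routing_reply(reply_text, labels):
    """Validate the model's JSON reply; return None rather than trust it blindly."""
    try:
        label = json.loads(reply_text).get("label")
    except (ValueError, AttributeError):
        return None  # not JSON, or not a JSON object
    return label if label in labels else None
```

The point is that the LLM call becomes one function inside ordinary software, wrapped in validation, rather than a chat window someone has to babysit.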
2/16/2026 at 10:49:59 PM
> As a contrast, in the early web, plenty of people were hosting their own website, and messing around with all the basic tools available to see what novel thing they could create
I'm hoping that the centralized platforms, already full of slop and now imploding under LLM output, will overflow and lead to a renaissance of sorts for the small and open web, niche communities, and decoupling from big tech.
It's already gaining traction among the young, as far as I can see.
by neoromantique
2/16/2026 at 7:28:03 PM
> The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is exploring playing around with what these models can really do.
I think we all do???
Even if I'm not coding a lot, I use it every day for small tasks. There is not much to code in my job: IT in a small traditional-goods export business. The tasks range from deciphering coded EDI messages (D.96A as text or XML, for example), summarizing a bunch of said messages (DESADV, ORDERSP, INVOIC), finding missing items, and Excel formula creation for non-trivial questions, to the occasional Python script, e.g. to concatenate data some supplier sent in a certain way.
AI is so strange because it is BOTH incredibly useful and incredibly random and stupid. For the latter, see a comment in my history from earlier today: the AI does not tell me when it uses a heuristic and does not provide an accurate result. EVERY result it shows me is presented as final and authoritative and perfect, even when, after questioning, it suddenly "admits" that it actually skipped a few steps and that's not the correct final result.
Once AI gets some actual "I" I'm sure the revolution some people are commenting about will actually happen, but I fear that's still some way off. Until then, lots of sudden hallucinations and unexpected wrong results - unexpected because normal people believe the computer when it claims it successfully finished the task and presents a result as correct.
Until then it's daily highs and lows with little in between: either it brilliantly solves some task, or it fails, and that includes failing to tell you about it.
A junior engineer will at least learn, but the AI stays pretty constant in how it fails and does not actually learn anything. The maker providing a new model version is not the AI learning.
by nosianu
2/16/2026 at 6:48:39 PM
There's a good reason for that. The end result of exploring what they can actually do isn't very exciting or marketable.
"I shipped code 15% faster with AI this month" doesn't have the pull of a 47-agent setup on a Mac mini.
by AstroBen
2/16/2026 at 6:36:04 PM
What is the difference between mass psychosis and a very effective marketing scheme?by co_king_5
2/16/2026 at 8:18:14 PM
That the targets also get almost religious about the product
by coldtea
2/16/2026 at 6:55:17 PM
> There's a mass psychosis happening
Any guesses on how long this lasts?
by thefilmore
2/16/2026 at 6:58:25 PM
Best guess: a few months, and it'll spread through dev communities that the effect is a lot more modest than the extreme claims make it out to be.
6-12 months before non-technical leaders take notice and realize they can't actually fire half their team.
by AstroBen
2/16/2026 at 6:58:48 PM
Until VC money runs out, probably.
by layer8
2/16/2026 at 6:27:35 PM
AI psychosis or AI++ psychosis?
2/16/2026 at 6:17:50 PM
This is also what OpenAI’s “safety” angle was all about.
“Ohhhh this is so scary! It’s so powerful we have to be very careful with it!” (Buy our stuff or be left behind, Mr. CEO, and invest in us now or lose out.)
by bubblewand
2/16/2026 at 6:19:42 PM
Anthropic has been the most histrionic about this, with their big blog post about how they need to make sure their models don't feel like they are being emotionally abused by the users being the most fatuous example.
by viccis
2/16/2026 at 6:53:21 PM
I was taken aback when I recently noticed a co-worker thanking ChatGPT for its answer.
by SoftTalker
2/16/2026 at 7:04:43 PM
LLMs talk like people; there is nothing wrong with this. It's perfectly fine to be nice to something even if it isn't human. It's why we don't go around kicking dogs for fun.
I understand why people don't act polite to LLMs, but honestly I think not thanking them will make people act more dickish to other humans.
by beowulfey
2/16/2026 at 7:39:29 PM
I like to think we can perceive a difference between a machine and a living being. I don't thank my bicycle for transporting me or thank my spell-checker for finding typos. I get that we are prone to anthropomorphize, but it was just something I found a bit surprising.
by SoftTalker
2/16/2026 at 8:26:35 PM
> I don't thank my bicycle for transporting me or thank my spell-checker for finding typos.
Neither your bicycle nor your spell-checker holds conversations and answers questions, neither of them is being used as a therapist or virtual girl/boyfriend, and neither's whole shtick is being trained on a ginormous human corpus to convincingly respond like a person.
I like to think we can perceive a difference between a bicycle and something specifically developed and trained to pass for intelligence...
by coldtea
2/16/2026 at 10:41:50 PM
See, I have absolutely thanked my car. When I was broke, driving a beater, and it was below zero outside, practically every time it chose to start, lol.
I don't think it's that weird.
by queenkjuul
2/16/2026 at 8:16:35 PM
I occasionally say please and thank you to ChatGPT for my own sake, not for the LLM's. They're sufficiently similar to humans that allowing myself to be a jerk subtly degrades myself and makes it more likely that I'm a jerk to real people.
by bglazer
2/16/2026 at 7:02:36 PM
From a year ago: https://www.vice.com/en/article/telling-chatgpt-please-and-t...
by steveklabnik
2/16/2026 at 7:01:54 PM
People were thanking Siri back 15 years ago; it’s just a reflex.
by layer8
2/16/2026 at 6:26:19 PM
"This is obviously why only we can be trusted with operating these models, and require government legislation saying so."
They're trying to get the government to hand them a moat. Spoilers... there's no moat.
by slowmovintarget
2/16/2026 at 7:21:22 PM
I missed this - which blog post?by biophysboy
2/16/2026 at 6:25:39 PM
To me, Anthropic has done enough sketchy things to be on par with the players from Big Tech. They are not some new benevolent corporation backed by SV.
Many users don't want to acknowledge this about the company making their fav AI.
by verdverm
2/16/2026 at 6:28:49 PM
GPT-2: "too dangerous to release"
2/16/2026 at 6:50:04 PM
Oh funny, I forgot about that. But at the time it didn't seem unreasonable to withhold a model that could so easily write fake news articles. I'm not so sure it wasn't.
by qnleigh
2/16/2026 at 6:21:58 PM
Can confirm. We don’t know if AI really is about to make programmers who write code by hand obsolete, but we sure as hell fear our competitors will ship features 10x faster than us. What is the logical next step? Invest lots of money in AI, or keep hoping it’s a fad and risk being left in the dust, even if you think that risk is fairly small?
by brabel
2/16/2026 at 6:25:13 PM
Perhaps stop entering saturated markets and using AI to try to shortcut your way to the moon?
There's no way any LLM code generator can replace a moderately complex system at this point, and looking at the rate of progress, this hasn't improved recently at all. Getting one to reason about a simple part of a business domain is still quite difficult.
by dgxyz
2/16/2026 at 6:38:27 PM
> and looking at the rate of progress this hasn't improved recently at all.
The rate of progress in the last 3 years has been beyond my expectations. The past year it has increased a lot. The last 2 months have been insane. No idea how people can say "no improvement".
by NitpickLawyer
2/16/2026 at 8:32:10 PM
> The past year has been increasing a lot. The last 2 months has been insane
I wonder if there are parallel realities. What I remember from the last year is a resounding yawn at the latest models landing, and even people being actively annoyed at e.g. ChatGPT 4.1 vs 4 for being nerfed. Same for 5: big fanfare, and a not-that-excited reception. And same for Claude. And nothing special in the last 2 months either. Nobody considers Claude 4.6 some big improvement over 4.5.
Sorry for closing this comment early, I need to leave my car at the house and walk to the car wash.
by coldtea
2/16/2026 at 6:45:12 PM
Yeah, not that long ago there was concern that we had run out of training data and progress would stall. That did not happen at all.
by qnleigh
2/16/2026 at 8:32:23 PM
Sure it did.
by coldtea
2/16/2026 at 6:44:45 PM
"My car is in the driveway, but it's dirty and I need to get it washed. The car wash is 50 meters away, should I drive there or walk?"
by zozbot234
2/16/2026 at 6:50:07 PM
Gemini Flash tells me to drive: “Unless you have a very long hose or you've invented a way to teleport the dirt off the chassis, you should probably drive. Taking the car ensures it actually gets cleaned, and you won't have to carry heavy buckets of soapy water back and forth across the street.”
by votepaunchy
2/16/2026 at 6:51:26 PM
Beep boop, human thinking... Actually, I never wash my car. They do it when they service it once a year!
by dgxyz
2/16/2026 at 6:48:32 PM
If your expectations were low, anything would have been over your expectations.
There was some improvement in terms of the ability of some models to understand and generate code. It's a bit more useful than it was 3 years ago.
I still think that any claims that it can operate at a human level are complete bullshit.
It can speed things up well in some contexts though.
by surgical_fire
2/16/2026 at 7:15:56 PM
> It's a bit more useful than it was 3 years ago.
It's comments like these that make me not really want to interact with this topic anymore. There's no way your comment can be taken seriously. It's 99.9% a troll comment, or simply delusional. 3 years ago the model (GPT-3.5, basically the only one out there) was not able to output correct code at all. It looked like Python if you squinted, but it made no sense. To compare that to what we have today and say "a bit more useful" is not a serious comment. Cannot be a serious comment.
by NitpickLawyer
2/16/2026 at 8:45:05 PM
> It's comments like these that make me not really want to interact with this topic anymore.
It's a religious war at this point. People who hate AI are not going to admit anything until they have no choice.
And given the progress in the last few months, I think we're a few years away from nearly every developer using coding agents, kicking and screaming in some cases, or just leaving the industry in others.
by munksbeer
2/16/2026 at 9:48:46 PM
This is such a weird framing.
My comment was that I think AI is useful. I use it on a daily basis, and have been for quite a while. I actually pay for a ChatGPT account, and I also have access to Claude and Gemini at work.
That you frame my comment as "people who hate AI" and call it "a religious war" honestly says more about you than me.
It seems that if you don't think that AI is the second coming of Christ, you hate it.
by surgical_fire
2/16/2026 at 10:00:15 PM
To be honest, I didn't even really read your comment. I was mostly responding to NitpickLawyer in general terms. Sorry about that, it wasn't really aimed at you.
But you're sort of doing the same thing I did - "second coming of Christ"?!
by munksbeer
2/16/2026 at 7:59:25 PM
/shrug
I have no intention of changing your mind. I don't think of the people I reply to highly enough to believe they can change their minds.
I reply to these comments for other people to read. Think of it as me adding my point of view for neutral readers.
Either way, I could use AI for some coding tasks back in the GPT-3.5 days. It was unreliable, but not completely useless (far from it, in fact).
Nowadays it is a little more reliable, and it can do more complex coding tasks with less detailed prompts. AI now can handle a larger context, and the "thinking" steps it adds to itself while generating output were a nice trick to improve its capabilities.
While it makes me more productive on certain tasks, it is the sort of improvement I expected from 3 years of it being a massive money black hole. Anything less would actually be embarrassing, all things considered.
Perhaps if your job were just writing code day in and day out, you would find it more useful than I do? As a software engineer I do quite a bit more than that, even if coding is the bit of the work I used to enjoy the most.
by surgical_fire
2/16/2026 at 6:44:44 PM
The recent developments of only the last 3 months have been staggering. I think you should challenge your beliefs on this a little bit. I don't say that as an AI fanboy (if those exist); it's just really, really noticeable how much progress has been made in doing more complex SWE work, especially if you just ask the LLM to implement some basic custom harness engineering.
by sweetheart
2/16/2026 at 8:36:14 PM
> The recent developments of only the last 3 months have been staggering.
What developments have been "staggering"? Claude 4.6 vs 4.5? ChatGPT 5.2 vs 5? The Gemini update?
Only the hype has been staggering, and bs non-stories like the "AI agents conspire and invent their own religion".
by coldtea
2/16/2026 at 6:52:36 PM
I'll let you know in 12 months, when we have been using it for long enough to have another abortion for me to clean up.
by dgxyz
2/16/2026 at 6:52:27 PM
Why is it an all or nothing decision?
Do a small test: if you're 10x faster, then keep going. If not, shelve it for a while and maybe try again later.
by AstroBen
2/16/2026 at 9:34:43 PM
It is an all or nothing decision. We were letting devs make their own decision about AI use, and it turned out only a few were really using it; the rest were using it just a little. But we noticed people were still spending time on stuff that AI can easily do for them!! I know because I had to do stuff for devs who were just not able to do things quickly enough, and I did it with AI in minutes. Quite disappointing. But my conclusion is that we need to push AI quite strongly and invest in the best services; otherwise people don't even try it, and when they do try it with some cheap service, they quickly dismiss it. So here we are.
by brabel
2/16/2026 at 7:22:23 PM
It's not possible to tell if you're 10x faster, or even faster at all, over any non-trivial amount of time. When not using a coding agent, you make different decisions and get the task done differently, at a different level of architecture, with a different understanding of the code.by zibzob
2/16/2026 at 7:36:47 PM
You don't think it's possible you'd notice if you, or the people around you, produced 6 months of work in 3 weeks?
The case where it's not obvious is when the effect is <1.5x. I think that's clearly where we're at.
by AstroBen
2/16/2026 at 8:01:34 PM
I think it's a lot harder than it sounds. First, nobody can estimate time well enough to know how long something would have taken without AI. And then, it's comparing apples to oranges - there's the set of things people did using an AI agent, and the set of things they would have done if they hadn't used AI, and the two are just not directly comparable. The AI agent set would definitely have more lines of code, that's all I could really say. Maybe it would also contain a larger maintenance burden, lower useful knowledge in humans, projects that didn't need to be done or could have been done in a smarter way, etc.
by zibzob
2/16/2026 at 8:37:07 PM
This sounds like lowering expectations with extra steps!
by coldtea
2/16/2026 at 6:46:53 PM
So, what I don't get is, taking it to its logical conclusion, if AI takes all the jobs then who are your customers? Who will buy your stock? Who will buy the software that all the developers you used to employ used to write? How do these CEOs and investors see this playing out?
by SoftTalker
2/16/2026 at 6:47:41 PM
You’re not supposed to ask such logical questions. It kills the AI vibe.
by cmiles8
2/16/2026 at 8:13:41 PM
"We are asking you to pay the subscription, not to think! Think of the investors!"
by skydhash
2/16/2026 at 8:38:24 PM
Like late-stage capitalism generally thinks of these things: by then you'll have sold your company and be living the life...
That such a collapse of the consumption economy, even just counting white collar jobs cut by a "mere" 30%, would also mean a collapse of the stock market, society, infrastructure, and even basic safety, doesn't enter the mind.
by coldtea
2/16/2026 at 6:37:13 PM
something i wonder about with AI taking jobs --similar to the ATM example in the article (and my experience with ai coding tools), the automation will start out by handling the easiest parts of our jobs.
eventually, all the easy parts will be automated and the overall headcount will be reduced, but the actual content of the remaining job will be a super-distilled version of 'all the hard parts'.
the jobs that remain will be harder to do and it will be harder to find people capable or willing to do them. it may turn out that if you tell somebody "solve hard problems 40hrs a week"... they can't do it. we NEED the easy parts of the job to slow down and let the mind wander.
by parpfish
2/16/2026 at 6:40:59 PM
There's plenty of jobs like this already. They'll want to keep you around even if you're not doing much most of the time, because you can still solve the hard problems as they arise and grow organizational capital in other ways.
by zozbot234
2/16/2026 at 6:42:32 PM
I’m also concerned about the continuing enshittification of software. Even without LLMs, we’ve had to endure slapdash software. Even Apple, which used to be perfectionistic, has slipped. I feel enshittification is a result of a lack of meaningful competition for many software products due to moats such as proprietary file formats and protocols, plus network effects. “Move fast and break things” software development methodologies don’t help.
LLMs will help such teams move and break things even faster than before. I’m not against the use of LLMs in software development, but I’m against their blind use. However, when there is pressure to ship as fast as possible, many will be tempted to take shortcuts and not thoroughly analyze the output of their LLMs.
by linguae
2/16/2026 at 6:23:05 PM
Saying it will take jobs is the marketing line to CEOs, even more than "you will be left behind."
by dv_dt
2/16/2026 at 6:30:29 PM
Except entry level jobs are already getting wiped out.
by hmmmmmmmmmmmmmm
2/16/2026 at 6:37:36 PM
The one entry level job that's been wiped out for good by LLMs is human marketing copywriters, i.e. the people whose job was to come up with the kind of slop LLMs learned from. They're just rebranding as copyeditors now because AI can write the slop itself, or at least its first draft.
by zozbot234
2/16/2026 at 8:48:55 PM
> the customers are investors and CEOs
100%. i have a basically unlimited Claude balance at work. I do not think of cost except for fun. CEO thinks every engineer has to use AI because nobody is gonna just be using text editors alone in the future.
by whateveracct
2/16/2026 at 7:05:38 PM
Also: They've figured out they can "force" AI adoption top-down at many workplaces. They don't need to convince you or even your boss - they just need the C-suite to mandate it.
by spamizbad
2/16/2026 at 6:42:25 PM
Yeah I guess the subtext is 'AI is going to take over so much of the market that it's risky to hold anything else.'
by qnleigh
2/16/2026 at 6:42:24 PM
When trying to infer people's motives, don't just look at what they are doing. Look also at what they aren't doing: alternatives they had and rejected.
If marketing it was the sole objective, there are many other stories they could have told, but didn't.
by im3w1l
2/16/2026 at 7:10:32 PM
what are a couple of those alternatives?
by vpribish
2/16/2026 at 6:23:03 PM
You guys can hate him, but Alex Karp of Palantir had the most honest take on this recently, which was basically: "Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage to unilaterally disarming" (not exact, but basically this)
You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.
by bpodgursky
2/16/2026 at 6:34:40 PM
In China, I wonder if the same narrative is happening: no new junior devs, threats of obsolescence, etc. Or do they collectively see the future differently?
by testbjjl
2/16/2026 at 6:43:13 PM
Most reporting I've seen rhymes with this, from last year: https://www.theguardian.com/technology/2025/jun/05/english-s...
by steveklabnik
2/16/2026 at 6:50:08 PM
They absolutely see the future differently because their society is already set up for success in an AI world. If what these predictions say become true, free market capitalism will collapse. What would be left?
by SlightlyLeftPad
2/16/2026 at 7:13:53 PM
The reason you think it's honest is because you already believed it.
by biophysboy
2/16/2026 at 7:46:47 PM
What does he gain by saying "Yes I'd love to shut all of this down"?
by bpodgursky
2/16/2026 at 8:52:11 PM
He gets to pretend to be impartial ("sure, I'd love to shut this down and lose the billions coming my way"), and by pretending that he has no option but to "go against his own will" and continue it, he gets to make it sound nuclear-bomb-level important.
This is hyping 101.
by coldtea
2/16/2026 at 8:20:05 PM
It's rhetoric - he gains your support. He doesn't want to shut it down.
by biophysboy
2/16/2026 at 8:38:11 PM
You should really step back and think, maybe you've spun a very complicated web of beliefs on top of your own eyes.
Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think.
by bpodgursky
2/16/2026 at 8:49:23 PM
This is what I think:
- Alex Karp genuinely believes China is a threat
- I think China is an economic threat, especially for tech
- An AI arms race is itself threatening; it is not like the nuclear deterrent
- Geopolitical tensions are very convenient for Alex Karp
- America has a history of exaggerating geopolitical threats
- Tech is very credulous with politics
by biophysboy
2/16/2026 at 8:53:20 PM
"This late night infomercial guy is not genuinely believing the product to be amazing, he's just saying shit to sell it"
"Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think".
I mean, dude, a corporate head having a conflict of interest in saying insincere shit to promote the stuff his company makes is not some conspiracy thinking about everybody playing "8 dimensional chess".
It's the very basic baseline case.
by coldtea
2/16/2026 at 11:29:12 PM
If he was making nuclear weapons, would you question the statement, "I would be happy if everyone stopped making nuclear weapons, but if China is making them, we can't unilaterally disarm"? I'm very willing to believe a CEO who says that! They are human beings!
Whether or not you believe it, a lot of people view advanced AI the same way.
by bpodgursky
2/17/2026 at 1:31:54 AM
>They are human beings!
Barely.
Even in the nuclear weapons CEO case, if they actually believed that, they'd be in another line of business, not making millions off nuclear weapons.
by coldtea
2/16/2026 at 6:32:06 PM
Yeah but it’s in his interest to encourage an arms race with China.
by tonyedgecombe
2/16/2026 at 7:11:50 PM
OK, but the other view equally compatible with the evidence is that he is scared of getting rolled by an AI-dominant China, and that's why he's building tools for the Dept of Defense.
Like I said, you can believe whatever you want about good-faith motives, but he didn't have to say he wanted to pause AI. He could have been bright-and-cheery bullish; there was no real advantage to laying his cards out on his qualms.
by bpodgursky
2/16/2026 at 7:09:58 PM
Answer the question.
If the USA pauses AI development, do you think China will?
by heraldgeezer
2/16/2026 at 8:48:34 PM
"I would love to stop getting your money, but consider the children/China/some disaster/various scenarios. That's why you should continue to shower me with billions".
by coldtea
2/16/2026 at 11:20:43 PM
oh please. people said that about the moon, and nuclear weapons too. and yet it's the one side who has a track record of using new technology to intimidate.
by didntknowyou
2/16/2026 at 7:09:33 PM
HN has become so marxist they hate the country they live in
by heraldgeezer
2/16/2026 at 9:12:12 PM
Oh no! What a pity. In hindsight, mind you, countries that treat their population like disposable labor units to be wrung dry and then discarded, tend to see the population develop opinions about that eventually.
by Balinares
2/16/2026 at 9:37:59 PM
>disposable labor units
This is the source of migration allowance but I bet you stand there with a "Refugees welcome" sign.
by heraldgeezer
2/16/2026 at 6:22:22 PM
It's worse than that. It's ultimately a military technology. The end-game here is to use it offensively and / or defensively against other countries. Whoever establishes dominance first wins. And so you have to push adoption, so that it gets tested and can be iterated. But this isn't about making money (they are losing it like crazy!) This is end-of-the-world shit and about whoever will be left standing once all the dominoes fall -- if they ever fall (let's hope they don't!)
But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.
by empressplay
2/16/2026 at 6:35:53 PM
Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.
Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.
by monkpit
2/16/2026 at 8:45:57 PM
>Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.
Still wouldn't mean much. Wars are won on capacity, logistics (the practical side, not the ability to calculate them), land/etc advantages, and, when it comes to boots on the ground, courage, knowledge of the place, local population support, etc. Not "analyzing info sources" at scale, which is mostly a racket that pretends to be important.
by coldtea
2/16/2026 at 11:23:18 PM
Ok, I didn’t say anything about what you said though. I said it’s definitely either in progress or already implemented.
by monkpit
2/17/2026 at 1:33:23 AM
And I didn't refute anything about what you said. I said "Still wouldn't mean much."
by coldtea
2/16/2026 at 6:42:20 PM
This 100%. We're in the middle of an AI Manhattan Project and if "we" give up or slow down, another company or country will get AGI before "us" and there's no coming back after that. If there's a chance AGI is possible, it doesn't make sense to let someone else take the lead no matter how dangerous it could be.
by daze42
2/16/2026 at 6:52:00 PM
The better analogy would be https://en.wikipedia.org/wiki/Project_Stargate
"If there's a chance psychic powers are real..."
by rep_lodsb
2/16/2026 at 6:31:17 PM
One often forgets this.
by big_paps
2/16/2026 at 8:44:19 PM
>The end-game here is to use it offensively and / or defensively against other countries.
Against other countries? The biggest endgame is controlling their own population. That has always been the biggest problem/desire of elites, not war with other countries.
by coldtea
2/16/2026 at 6:35:05 PM
With all the wackiness around AI, is this some Mutually Assured Delusion doctrine?
by saltcured
2/16/2026 at 6:35:17 PM
Sam Altman is a known sociopath who has no problem achieving his goals by any means necessary. His prior business dealings (and repeated patterns with OpenAI) are evidence of this.
Shumer is of a similar stock but less capable, so he gets caught in his lies.
I’m still shocked people work with Altman knowing his history, but given the Epstein files etc. it’s no surprise. Our elite class is entirely rotten.
Best advice is trust what you see in front of your face (as much as you can) and be very skeptical of anything else. Everyone involved has agendas and no morals.
by apaosjns
2/16/2026 at 6:48:18 PM
I'm shocked how congratulatory things were for OpenClaw joining Altman Inc
by verdverm
2/16/2026 at 6:56:03 PM
If you know the author you know it's a match made in heaven
by gmerc
2/16/2026 at 6:39:40 PM
[dead]
by himata4113
2/16/2026 at 7:25:26 PM
FOMO was literally built into Bitcoin. In the beginning it was a lot easier, and then it slowly gets harder.
But what I really hate about AI and how most people talk about it is that if one day it does what the advertisements say, all white collar jobs collapse.
by OptionOfT
2/16/2026 at 7:50:21 PM
> all white collar jobs collapse
Then everything collapses. The carpenter will also be out of work if more than half of his client base cannot afford his work anymore.
by steve1977
2/16/2026 at 8:43:05 PM
Not to mention millions of ex-white-collar workers, or people of student age who would have tried to be white-collar, rushing to become carpenters and roofers, bringing the pay and the quality of those jobs way down...
by coldtea
2/16/2026 at 9:08:47 PM
This idea doesn't sit well with me. Like, I recognize it very likely could happen, but it feels kind of like a dumb magic trick. Like learning the economy goes up because people generally believe the economy should go up.
If everyone is out of a job, then what is the point of the economy? Who is doing what work for whom? If nobody can afford anything, then why are we even doing this?
And, at some point, needs have to be met. If you were to drop 1000 random people on a remote island, you wouldn't expect nothing to happen just because there aren't any employers with jobs to hire people to do. People would spontaneously organize for survival and a local economy would form.
I find depictions of post-apocalyptic societies in sci-fi to be difficult to accept, too. Like Elysium: how could the entire Earth be an underclass, yet the space station needs them to work to be able to survive? That would be the easiest siege warfare of all time; do literally nothing and the space station eventually starves to death. Like Fallout: how could places stay completely run down for centuries? You mean to tell me that nobody would at least start a scrap yard and start cleaning up the old baby buggies and scrap metal from burnt out hulks of cars?
And then grown-ass adults tell stories about what could happen as if they are experts in Economy. Nobody knows how Economy work. Hell, half the Nobel prizes in economics in the last decade have basically been about proving the stupid, fairytale version of economics that existed for the last 100 years was a complete farce.
Yeah, I can definitely see a market collapse leading to a lot of mortgages getting foreclosed. But a complete shutdown? It seems preposterous. How? Why would everyone go along with it?
by moron4hire