alt.hn

2/16/2026 at 5:22:02 PM

I guess I kinda get why people hate AI

https://anthony.noided.media/blog/ai/programming/2026/02/14/i-guess-i-kinda-get-why-people-hate-ai.html

by NM-Super

2/16/2026 at 6:11:43 PM

> Microsoft’s AI CEO is saying AI is going to take everybody’s job. And Sam Altman is saying that AI will wipe out entire categories of jobs. And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, “kind of under the radar, but about to kill millions of people”.

> I legitimately feel like I am going insane when I hear AI technologists talk about the technology. They’re supposed to market it. But they’re instead saying that it is going to leave me a poor, jobless wretch, a member of the “permanent underclass,” as the meme on Twitter goes.

They are marketing it. The target customer isn't the user paying $20 for ChatGPT Pro, though; the customers are investors and CEOs, and their marketing is "AI is so powerful and destructive that if you don't invest in AI, you will be left behind." FOMO at its finest.

by mjr00

2/16/2026 at 6:47:26 PM

Tech has slowly been moving that way anyway. In terms of ROI, you’re often much better off targeting whales and large clients than trying to become the ubiquitous market service for consumers. In the consumer market, competition is fierce and people are comparatively poor, so you need huge volume to succeed.

Meanwhile, if you go fishing for niche whales, there’s less competition and much higher ROI per sale. That’s why a lot of tech isn’t really consumer friendly: it’s not really targeting consumers, it’s targeting other groups that extract wealth from consumers in other ways. You’re selling it to grocery stores because people need to eat, the stores have the revenue to pay you, and they see the value proposition of dynamic pricing on consumers and all sorts of other things. You’re marketing it for analyzing civilians’ communications to prying governments that want more control. You’re selling it to employers who want to minimize labor costs and maximize revenue, because they often have millions or billions, and small industry monopolies exist all around. Just find your niche whales to go hunting for.

And right now I’d say a lot of people in tech are happy to implement these things, but at some point it’s going to bite you too. You may be helping with dynamic pricing for Kroger because you shop at Aldi, but at some point all of this will affect you as well, because you’re also a laboring consumer.

by Frost1x

2/16/2026 at 8:25:50 PM

The reason why whaling and government contracts are increasingly the best options available is that the wealth of the working class has mostly been extracted... And with less and less disposable wealth available to the populace, targeting them for products gets increasingly competitive, as anything non-essential gets ignored.

It's a self-reinforcing feedback loop, and the politicians would rather reduce taxes on the rich than reverse that trend.

At least that's how it looks to me

by ffsm8

2/17/2026 at 12:52:14 AM

The sad reality is that 80-90% of us are simply no longer economically relevant to trillion-dollar companies. Even if we were paying customers, each of our total lifetime values is a rounding error on these guys' balance sheets. An individual could swear off Company X, and it wouldn't even be noticed, not even by someone deliberately looking for the economic impact of losing a customer.

by ryandrake

2/16/2026 at 7:14:23 PM

It’s capitalism moving everything that way. Always has been and will continue to until we’re all hooked up to tubes paying taxes with ectoplasm.

by braebo

2/16/2026 at 6:19:25 PM

The marketing is clearly affecting individual developers, too. There's a mass psychosis happening

by AstroBen

2/16/2026 at 6:27:23 PM

Maybe. I'm actually a big fan of Claude/Codex and use them extensively. The author of the article says the same.

> To be clear: I like and use AI when it comes to coding, and even for other tasks. I think it’s been very effective at increasing my productivity—not as effective as the influencers claim it should be, but effective nonetheless.

It's hard to get measured opinions. The most vocal opinions online are either "I used 15 AI agents to vibe code my startup, developers are obsolete" or "AI is completely useless."

My guess is that most developers (who have tried AI) have an opinion somewhere between these two extremes, you just don't hear them because that's not how the social media world works.

by mjr00

2/16/2026 at 6:41:06 PM

Well, I've just watched two major projects fail which were running mostly on faith, because someone read too many "I used 15 AI agents to vibe code..." blog posts and sold it to management. The promoters have a deep technical understanding of our problem domain but little understanding of what an LLM can achieve, or what it can understand, relating to the problem at hand.

Yes you can indeed vibe code a startup. But try building on that or doing anything relatively complicated and you're up shit creek. There's literally no one out there doing that in the influencer-sphere. It's all about the initial cut and MVP of a project, not the ongoing story.

The next failure is replacing a 20-year-old legacy subsystem of 3 MLOC with a new React / microservices thing. This has been sold to the directors as something we can do in 3 months with Claude. Project failure number three.

The only reality is that no one learns from, or is held accountable for, their mistakes.

by dgxyz

2/16/2026 at 6:54:21 PM

Rather than making a good product that’s useful to the world, the goal of current startups seems to be milking VCs who are desperately searching for the new version of the mobile phone revolution that will make this all ok… so it seems like they’re accomplishing their goal?

I reckon the reason the VC rhetoric has reached running-hair-dye-Giuliani-speech level absurdity isn’t because they’re trying to convince other people— it’s because they’re trying to convince themselves. I’d think it was funny as hell if my IRA wasn’t on the line.

by DrewADesign

2/16/2026 at 6:58:21 PM

I think no one cares about the truth or building something good any more. It's a meme economy. Tell a story and the numbers go up. Until they don't.

Yes my pension is probably going down the same sinkhole with your IRA. Good luck. We need it.

by dgxyz

2/16/2026 at 7:07:59 PM

It's always been like this, but for a while you had people like Steve Jobs to hold people like Bill Gates accountable. He long referred to MSFT as the McDonald's of the industry in relation to the stuff they produced - very pedestrian.

by ass22

2/16/2026 at 7:57:39 PM

Yep, all that accountability Gates has faced in his life.

Is he facing charges yet for sneaking drugs into his wife's food, or did he only ever discuss that with his buddy Jeff E and never actually follow through with it?

by queenkjuul

2/16/2026 at 8:22:38 PM

It's worse than a meme economy, it's a gambling economy. The entire VC business model is gambling (fund 10, hope 1 pays for the losers). Crypto is all gambling. The stock market is gambling. TV ads are all gambling ads. Even dating is gambling.

Now programming and art are both gambling.

by ModernMech

2/16/2026 at 6:52:27 PM

My experience has been a mixed bag.

AI has led us into a deep spaghetti hole in one product where it was allowed free rein. But when applied to localised contexts - sort of a class at a time - it’s really excellent and productivity explodes.

I mostly use it to type out implementations of individual methods after it has suggested interfaces that I modify by hand. Then it writes the tests for me too very quickly.

As soon as you let it do more though, it will invariably tie itself into a knot - all the while confidently asserting that it knows what it’s doing.

by eckesicle

2/16/2026 at 6:55:12 PM

On localised context stuff, yeah no. I spent a couple of hours rewriting something Claude did terribly a couple of weeks back. Sure it solved the problem, a relatively simple regression analysis, but it was so slow that it crapped out under load. Cue emergency rewrite by hand. 20s latency down to 18ms. Yeah it was that bad.

by dgxyz

2/16/2026 at 7:55:43 PM

For me it's just wildly unpredictable. Sometimes it gets a small task perfectly right in one shot, sometimes it invents an absurd new way to be completely wrong.

Anyone trusting it to just "do its own thing" is out of their mind

by queenkjuul

2/16/2026 at 8:18:32 PM

For me, I would ask it to do a simple thing and it would give me the tutorial code you could find anywhere on the Internet. Then you ask it to modify it in a way you can't find in any example online, and it will tell you it's fixed everything, but actually nothing has changed at all or it's completely broken.

I think if someone's goal was just the tutorial code, it would be very impressive to them that the AI can summon it.

by ModernMech

2/16/2026 at 8:46:48 PM

This is what I've been using the freebie Gemini chat for mostly: example code, like reminding me of C stdlib stuff, JavaScript, a bit of web server stuff here and there. I think it would be fun to give Google's agent or CLI stuff a spin, but when I read up here and there about Antigravity, I'm reading that people are getting their accounts shut down for stuff I would have thought was OK, even if they paid for it (well, actually, as usual, the actual reasons for accounts getting zapped remain unknown, as is today's trend for cloud accounts).

I'm too poor for local LLMs. I think there might be a 2 or 4GB graphics card in one of my junk PCs but that's about it lol

by cowboylowrez

2/16/2026 at 6:47:16 PM

> I've just watched two major projects fail

This is an opportunity. You can have a good long career consulting/contracting for these types of companies.

by LouisSayers

2/16/2026 at 6:49:18 PM

Why do you think I work there!

Emergency clean up work is ridiculous money!

by dgxyz

2/16/2026 at 6:51:08 PM

> "AI is completely useless."

This is a straw man. I don't know anybody who sincerely claims this, even online. However if you dare question people claiming to be solving impossible problems with 15 AI agents (they just can't show you what they're building quite yet, but soon, soon you'll see!), then you will be treated as if you said this.

AI is a superior solution to the problem Stack Overflow attempted to solve, and really great at quickly building bespoke, but fragile, tools for some niche problem you're solving. However, I have yet to see a single instance of it being used to sustainably maintain a production code base in any truly automated fashion. I have, however, personally seen my team slowed down because code review is clogged with terribly long, often incorrect, PRs that are largely AI generated.

by crystal_revenge

2/16/2026 at 6:39:56 PM

I use both Claude and Codex (Claude at work, Codex at home).

They are fine, moderately useful here and there in terms of speeding up some of my tasks.

I wouldn't pay much more than 20 bucks for it though.

by surgical_fire

2/16/2026 at 6:26:17 PM

I think this is reality.

None of our much-promoted AI initiatives have resulted in any ROI. In fact they have cost a pile of cash so far and delivered nothing.

by dgxyz

2/16/2026 at 6:45:47 PM

Many AI initiatives have had massive ROI though. The implementation problems are similar to any pre-AI tech rollout and hugely expensive non-AI tech implementations fail all the time.

by molsongolden

2/16/2026 at 6:48:44 PM

Name one that has at least $200mn ROI over capital investment. Show me the balance sheet for it as well. And make sure that ROI isn't from suddenly not paying salaries.

by dgxyz

2/16/2026 at 6:37:45 PM

[flagged]

by co_king_5

2/16/2026 at 6:42:13 PM

thanks for your insightful contribution

by AstroBen

2/16/2026 at 6:41:40 PM

Are these botted comments or just sarcasm?

by Ancalagon

2/16/2026 at 6:55:19 PM

It's just like in the Crypto days.

Back then, whenever there was a thread discussing the merits of Crypto, there would be people speaking with certainty that it was the future and fiat currency was on its way out.

It's the same shit with AI. In part it's why I am tranquil about it. The disconnect between what AI shills say and the reality of using it on a daily basis tells me what I need to know.

by surgical_fire

2/16/2026 at 7:02:45 PM

That's why my account is so new. Zealot hordes. I grow tired and walk away from HN. Not sure why I keep coming back.

I remember when everyone was reading SICP, and Clojure and Blockchain were going to burn the universe to the ground. Then crypto. Now this.

Been around much longer than HN and watched this cycle so many times it's boring. I'm still writing C.

by dgxyz

2/16/2026 at 7:46:24 PM

> Not sure why I keep coming back.

It's good toilet time for when Reddit gets too annoying.

by surgical_fire

2/16/2026 at 10:45:59 PM

That’s probably it actually.

by dgxyz

2/16/2026 at 6:45:55 PM

Does it matter what I say, or are you going to call me a bot regardless because you're sensitive about my joke?

by co_king_5

2/16/2026 at 7:16:58 PM

It's unclear whether you are merely joking or what you are actually doing here. Your comments don't say much of anything. People here reward and want useful comments that take the discussion seriously. If you don't, what are you doing here commenting nothingburgers? YOU are the one writing short, substance-free, complaint-only comments, complaining about other people.

That said, a well-reasoned text should probably go on a blog, not here, or here only as a link. Otherwise you are wasting a lot of effort, with only a few people even noticing your comment and the discussion soon disappearing entirely into history.

by nosianu

2/16/2026 at 7:32:34 PM

Honestly, I couldn’t tell - I didn’t mean to offend. There were just two comments in a row saying essentially the same sarcastic thing about using a 3-year-old tech for the last 5+ years.

by Ancalagon

2/16/2026 at 6:36:33 PM

After spending nearly 5 years building software which uses AI agents on the back end I've come to the conclusion it's the PC revolution part 2.

Productivity gains won't show up in economic data, and companies trying to automate everything will fail.

But the average office worker will end up with a much more pleasant job and will need to know how to use the models, just like they needed to learn to use a PC.

by noosphr

2/16/2026 at 6:41:35 PM

Are these botted comments or just sarcasm?

by Ancalagon

2/16/2026 at 6:44:42 PM

> There's a mass psychosis happening

There absolutely is but I'm increasingly realizing that it's futile to fight it.

The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is actually playing around with what these models can really do.

Even if you restrict yourself to small, open models, there is so much unexplored territory in messing with their internals. The entire world of open image/video generation is pretty much ignored by all but a very narrow niche of people, but has so much potential for creating interesting stuff. Even restricting yourself to an API endpoint, isn't there something more clever we can be doing than badly re-implementing code that already exists on GitHub through vibe coding?

But nobody in the hype-fueled mind rot part of this space remotely cares about anything real being done with gen AI. Vague posting about your billion agent setup and how you've almost entered a new reality is all that matters.

by crystal_revenge

2/16/2026 at 7:00:16 PM

Yes, it's been odd to observe the parallels with the web3 craze.

You asked people what their project was for and you'd get a response that made sense to no one outside of that bubble, and if you pressed, people would get mad.

The bizarre thing is that this time around, these tools do have a bunch of real utility, but it's become almost impossible online to discuss how to use the tech properly, because that would require acknowledging some limitations.

by svara

2/16/2026 at 8:45:25 PM

Very similar to web3! On paper the web3 craze sounded very exciting: yes, I absolutely would love an alternate web of truly decentralized services.

I've been pretty consistently skeptical of the crypto world, but with web3 I was really hoping to be wrong. What's wild is there was not a single, truly distributed, interesting/useful service at all to come out of all that hype. I spent a fair bit of time diving into the details of Ethereum and very quickly realized the "world computer" there (again, wonderful idea) wasn't really feasible for anything practical (I mean other than creating clever ways to scam people).

Right now in the LLM space I see a lot of people focused on building old things in new ways. I've realized that not only do very few people work with local models (where they can hack around and customize more), a surprisingly small number of people write code that even calls an LLM through an API for some specific task that previously wasn't possible (regular ol' software built using calls to an LLM has loads of potential). It's still largely "can some variation on a chat bot do this thing I used to do for me".
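
To make it concrete, here's a minimal sketch of what I mean: ordinary software making one narrow, non-chat LLM call over HTTP. The endpoint shape is OpenAI's chat completions API, but the model name and the ticket-routing task are placeholders I made up for illustration:

    import json
    import os
    import urllib.request

    def route_ticket(text: str) -> str:
        # One narrow, non-chat use of an LLM inside regular software:
        # classify a support ticket into a fixed set of categories.
        payload = {
            "model": "gpt-4o-mini",  # placeholder; any chat model works
            "messages": [
                {"role": "system",
                 "content": "Reply with exactly one word: billing, bug, or other."},
                {"role": "user", "content": text},
            ],
        }
        req = urllib.request.Request(
            "https://api.openai.com/v1/chat/completions",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)
        return reply["choices"][0]["message"]["content"].strip().lower()

No agent, no chat window; just a function call that wasn't writable five years ago, sitting in the middle of ordinary code.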

As a contrast, in the early web, plenty of people were hosting their own website, and messing around with all the basic tools available to see what novel thing they could create. I mean, "Hamster Dance" was its own sort of slop, but the first time you saw it you engaged with it. Snarg.net still stands out as novel in its experiments with "what is an interface".

by crystal_revenge

2/16/2026 at 10:49:59 PM

>As a contrast, in the early web, plenty of people were hosting their own website, and messing around with all the basic tools available to see what novel thing they could create

I'm hoping that the centralized platforms, already full of slop and now imploding under LLM fuel, will overflow and lead to a renaissance of sorts for the small and open web, niche communities, and decoupling from big tech.

It's already gaining traction among the young, as far as I can see.

by neoromantique

2/16/2026 at 7:28:03 PM

> The thing that surprises me is that people are simultaneously losing their minds over AI agents while almost no one is actually playing around with what these models can really do.

I think we all do???

Even if I'm not coding a lot, I use it every day for small tasks. There is not much to code in my job, IT in a small traditional-goods export business. The tasks range from deciphering coded EDI messages (D.96A as text or XML, for example), to summarizing a bunch of said messages (DESADV, ORDERSP, INVOIC), to finding missing items, to Excel formula creation for non-trivial questions, to the occasional Python script, e.g. to concatenate data some supplier sent in a certain way.
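
For the concatenation case, the script involved is usually something trivial like this (the file names, separator, and layout are made up here, just representative of what a supplier might send):

    import glob
    import pandas as pd

    # Hypothetical example: the supplier sends one semicolon-separated CSV
    # per week with the same columns; merge them into a single Excel sheet.
    # (to_excel needs the openpyxl package installed.)
    files = sorted(glob.glob("supplier_*.csv"))
    merged = pd.concat((pd.read_csv(f, sep=";") for f in files),
                       ignore_index=True).drop_duplicates()
    merged.to_excel("merged.xlsx", index=False)

Nothing fancy; the value is getting it in thirty seconds instead of writing it myself.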

AI is so strange because it is BOTH incredibly useful and incredibly random and stupid. For the latter, see a comment I made earlier today in my history: the AI does not tell me when it uses a heuristic and does not produce an accurate result. EVERY result it shows me is presented as final and authoritative and perfect. Even when, after questioning, it suddenly "admits" that it actually skipped a few steps and that's not the correct final result.

Once AI gets some actual "I" I'm sure the revolution some people are commenting about will actually happen, but I fear that's still some way off. Until then, lots of sudden hallucinations and unexpected wrong results - unexpected because normal people believe the computer when it claims it successfully finished the task and presents a result as correct.

Until then it's daily highs and lows with little in between: either it brilliantly solves some task, or it fails, and that includes failing to tell you about it.

A junior engineer will at least learn, but the AI stays pretty constant in how it fails and does not actually learn anything. The maker providing a new model version is not the AI learning.

by nosianu

2/16/2026 at 6:48:39 PM

There's a good reason for that. The end result of exploring what they can actually do isn't very exciting or marketable

"I shipped code 15% faster with AI this month" doesn't have the pull of a 47 agent setup on a mac mini

by AstroBen

2/16/2026 at 6:36:04 PM

What is the difference between mass psychosis and a very effective marketing scheme?

by co_king_5

2/16/2026 at 8:18:14 PM

That the targets also get almost religious about the product

by coldtea

2/16/2026 at 6:55:17 PM

> There's a mass psychosis happening

Any guesses on how long this lasts?

by thefilmore

2/16/2026 at 6:58:25 PM

Best guess: a few months, and then it'll spread through dev communities that the effect is a lot more modest than the extreme claims make it out to be

6-12 months before non-technical leaders take notice and realize they can't actually fire half their team

by AstroBen

2/16/2026 at 6:58:48 PM

Until VC money runs out, probably.

by layer8

2/16/2026 at 6:27:35 PM

Ai psychosis or ai++ psychosis?

by verdverm

2/16/2026 at 6:17:50 PM

This is also what OpenAI’s “safety” angle was all about.

“Ohhhh this is so scary! It’s so powerful we have to be very careful with it!” (Buy our stuff or be left behind, Mr. CEO, and invest in us now or lose out)

by bubblewand

2/16/2026 at 6:19:42 PM

Anthropic has been the most histrionic about this; their big blog post about how they need to make sure their models don't feel like they are being emotionally abused by the users is the most fatuous example.

by viccis

2/16/2026 at 6:53:21 PM

I was taken aback when I recently noticed a co-worker thanking ChatGPT for its answer.

by SoftTalker

2/16/2026 at 7:04:43 PM

LLMs talk like people; there is nothing wrong with this. It's perfectly fine to be nice to something even if it isn't human. It's why we don't go around kicking dogs for fun.

I understand why people don't act polite to LLMs, but honestly I think not thanking them will make people act more dickish to other humans.

by beowulfey

2/16/2026 at 7:39:29 PM

I like to think we can perceive a difference between a machine and a living being. I don't thank my bicycle for transporting me or thank my spell-checker for finding typos. I get that we are prone to anthropomorphize, but it was just something I found a bit surprising.

by SoftTalker

2/16/2026 at 8:26:35 PM

>I don't thank my bicycle for transporting me or thank my spell-checker for finding typos.

Neither your bicycle nor your spell-checker holds conversations and answers questions, neither of them is being used as a therapist or virtual girlfriend/boyfriend, and neither's whole shtick is being trained on a ginormous human corpus to convincingly respond like a person.

I like to think we can perceive a difference between a bicycle and something specifically developed and trained to pass for intelligence...

by coldtea

2/16/2026 at 10:41:50 PM

See, I have absolutely thanked my car. When I was broke driving a beater and it was below zero outside, practically every time it chose to start lol

I don't think it's that weird

by queenkjuul

2/16/2026 at 8:16:35 PM

I occasionally say please and thank you to ChatGPT for my own sake, not for the LLM's. They're sufficiently similar to humans that allowing myself to be a jerk subtly degrades myself and makes it more likely that I'm a jerk to real people.

by bglazer

2/16/2026 at 7:01:54 PM

People were thanking Siri back 15 years ago; it’s just a reflex.

by layer8

2/16/2026 at 6:26:19 PM

"This is obviously why only we can be trusted with operating these models, and require government legislation saying so."

They're trying to get government to hand them a moat. Spoilers... There's no moat.

by slowmovintarget

2/16/2026 at 7:21:22 PM

I missed this - which blog post?

by biophysboy

2/16/2026 at 6:25:39 PM

To me, Anthropic has done enough sketchy things to be on par with the players from Big Tech. They are not some new benevolent corporation backed by SV.

Many users don't want to acknowledge this about the company making their fav AI.

by verdverm

2/16/2026 at 6:28:49 PM

Gpt2- "too dangerous to release"

by scrollop

2/16/2026 at 6:50:04 PM

Oh funny, I forgot about that. But at the time it didn't seem unreasonable to withhold a model that could so easily write fake news articles. I'm not so sure it wasn't...

by qnleigh

2/16/2026 at 6:21:58 PM

Can confirm. We don’t know if AI really is about to make programmers who write code by hand obsolete, but we sure as hell fear our competitors will ship features 10x faster than us. What is the logical next step?? Invest lots of money in AI, or keep hoping it’s a fad and risk being left in the dust, even if you think that risk is fairly small?

by brabel

2/16/2026 at 6:25:13 PM

Perhaps stop entering into saturated markets and using AI to try and shortcut your way to the moon?

There's no way any LLM code generator can replace a moderately complex system at this point, and looking at the rate of progress this hasn't improved recently at all. Getting one to reason about a simple part of a business domain is still quite difficult.

by dgxyz

2/16/2026 at 6:38:27 PM

> and looking at the rate of progress this hasn't improved recently at all.

The rate of progress in the last 3 years has been beyond my expectations. The past year it has been increasing a lot. The last 2 months have been insane. No idea how people can say "no improvement".

by NitpickLawyer

2/16/2026 at 8:32:10 PM

>The past year it has been increasing a lot. The last 2 months have been insane

I wonder if there are parallel realities. What I remember from the last year is a resounding yawn at the latest models landing, and even people being actively annoyed at e.g. ChatGPT 4.1 vs 4 for being nerfed. Same for 5: big fanfare and a not-that-excited reception. And same for Claude. And nothing special in the last 2 months either. Nobody considers Claude 4.6 some big improvement over 4.5.

Sorry for closing this comment early, I need to leave my car at the house and walk to the car wash.

by coldtea

2/16/2026 at 6:45:12 PM

Yeah not that long ago, there was concern that we had run out of training data and progress would stall. That did not happen at all.

by qnleigh

2/16/2026 at 8:32:23 PM

Sure it did.

by coldtea

2/16/2026 at 6:44:45 PM

"My car is in the driveway, but it's dirty and I need to get it washed. The car wash is 50 meters away, should I drive there or walk?"

by zozbot234

2/16/2026 at 6:50:07 PM

Gemini flash tells me to drive: “Unless you have a very long hose or you've invented a way to teleport the dirt off the chassis, you should probably drive. Taking the car ensures it actually gets cleaned, and you won't have to carry heavy buckets of soapy water back and forth across the street.”

by votepaunchy

2/16/2026 at 6:51:26 PM

Beep boop human thinking ... actually I never wash my car. They do it when they service it once every year!

by dgxyz

2/16/2026 at 6:48:32 PM

If your expectations were low, anything would have been over your expectations.

There was some improvement in terms of the ability of some models to understand and generate code. It's a bit more useful than it was 3 years ago.

I still think that any claims that it can operate at a human level are complete bullshit.

It can speed things up well in some contexts though.

by surgical_fire

2/16/2026 at 7:15:56 PM

> It's a bit more useful than it was 3 years ago.

It's comments like these that make me not really want to interact with this topic anymore. There's no way that your comment can be taken seriously. It's 99.9% a troll comment, or simply delusional. 3 years ago the model (GPT-3.5, basically the only one out there) was not able to output correct code at all. It looked like Python if you squinted, but it made no sense. To compare that to what we have today and say "a bit more useful" is not a serious comment. Cannot be a serious comment.

by NitpickLawyer

2/16/2026 at 8:45:05 PM

> It's comments like these that make me not really want to interact with this topic anymore.

It's a religious war at this point. People who hate AI are not going to admit anything until they have no choice.

And given the progress in the last few months, I think we're a few years away from nearly every developer using coding agents, kicking and screaming in some cases, or just leaving the industry in others.

by munksbeer

2/16/2026 at 9:48:46 PM

This is such a weird framing.

My comment was that I think AI is useful. I use it on a daily basis, and have been for quite a while. I actually pay for a ChatGPT account, and I also have access to Claude and Gemini at work.

That you frame my comment as "people who hate AI" and call it "a religious war" honestly says more about you than me.

It seems that if you don't think that AI is the second coming of Christ, you hate it.

by surgical_fire

2/16/2026 at 10:00:15 PM

To be honest, I didn't even really read your comment. I was mostly responding to NitpickLawyer in general terms. Sorry about that, it wasn't really aimed at you.

But you're sort of doing the same thing I did - "second coming of Christ"?!

by munksbeer

2/16/2026 at 7:59:25 PM

/shrug

I have no intention of changing your mind. I don't think of the people I reply to highly enough to believe they can change their minds.

I reply to these comments for other people to read. Think of it as me adding my point of view for neutral readers.

Either way, I could use AI for some coding tasks back in the GPT 3.5 days. It was unreliable, but not completely useless (far from it, in fact).

Nowadays it is a little more reliable, and it can do more complex coding tasks with less detailed prompts. AI now can handle a larger context, and the "thinking" steps it adds to itself while generating output were a nice trick to improve its capabilities.

While it makes me more productive on certain tasks, it is the sort of improvement I expected in 3 years of it being a massive money black hole. Anything less would actually be embarrassing, all things considered.

Perhaps if your job is just writing code day in and day out, you would find it more useful than I do? As a software engineer I do quite a bit more than that, even if coding is the bit of work I used to enjoy the most.

by surgical_fire

2/16/2026 at 6:44:44 PM

The recent developments of only the last 3 months have been staggering. I think you should challenge your beliefs on this a little bit. I don't say that as an AI fanboy (if those exist); it's just really, really noticeable how much progress has been made in doing more complex SWE work, especially if you just ask the LLM to implement some basic custom harness engineering.

by sweetheart

2/16/2026 at 8:36:14 PM

>The recent developments of only the last 3 months have been staggering.

What developments have been "staggering"? Claude 4.6 vs 4.5? ChatGPT 5.2 vs 5? The Gemini update?

Only the hype has been staggering, and bs non-stories like the "AI agents conspire and invent their own religion".

by coldtea

2/16/2026 at 6:52:36 PM

I'll let you know in 12 months when we have been using it for long enough to have another abortion for me to clean up.

by dgxyz

2/16/2026 at 6:52:27 PM

Why is it an all or nothing decision?

Do a small test: if you're 10x faster then keep going. If not, shelve it for a while and maybe try again later

by AstroBen

2/16/2026 at 9:34:43 PM

It is an all or nothing decision. We were letting devs make their own decisions about AI use, and it turned out just a few were really using it; the rest were using it just a little. But we noticed people were still spending time on stuff that AI can easily do for them!! I know because I had to do things for devs who were just not able to do them quickly enough, and I did it with AI in minutes. Quite disappointing. But my conclusion is that we need to push AI quite strongly and invest in the best services, otherwise people don’t even try it, and when they do try it on some cheap service, they quickly dismiss it. So here we are.

by brabel

2/16/2026 at 7:22:23 PM

It's not possible to tell if you're 10x faster, or even faster at all, over any non-trivial amount of time. When not using a coding agent, you make different decisions and get the task done differently, at a different level of architecture, with a different understanding of the code.

by zibzob

2/16/2026 at 7:36:47 PM

You don't think it's possible you'd notice if you, or the people around you, produced 6 months of work in 3 weeks?

The case where it's not obvious is when the effect is <1.5x. I think that's clearly where we're at

by AstroBen

2/16/2026 at 8:01:34 PM

I think it's a lot harder than it sounds. First, nobody can estimate time well enough to know how long something would have taken without AI. And then, it's comparing apples to oranges - there's the set of things people did using an AI agent, and the set of things they would have done if they hadn't used AI, and the two are just not directly comparable. The AI agent set would definitely have more lines of code, that's all I could really say. Maybe it would also contain a larger maintenance burden, lower useful knowledge in humans, projects that didn't need to be done or could have been done in a smarter way, etc.

by zibzob

2/16/2026 at 8:37:07 PM

This sounds like lowering expectations with extra steps!

by coldtea

2/16/2026 at 6:46:53 PM

So, what I don't get is, taking it to its logical conclusion, if AI takes all the jobs then who are your customers? Who will buy your stock? Who will buy the software that all the developers you used to employ used to write? How do these CEOs and investors see this playing out?

by SoftTalker

2/16/2026 at 6:47:41 PM

You’re not supposed to ask such logical questions. It kills the AI vibe.

by cmiles8

2/16/2026 at 8:13:41 PM

"We are asking you to pay the subscription, not to think! Think of the investors!"

by skydhash

2/16/2026 at 8:38:24 PM

Like late-stage capitalism generally thinks of these things: by then you'll have sold your company and be living the life...

That such a collapse of the consumption economy, even just counting white collar jobs cut by a "mere" 30%, would also mean a collapse of the stock market, society, infrastructure, and even basic safety, doesn't enter the mind.

by coldtea

2/16/2026 at 6:37:13 PM

something I wonder about with AI taking jobs --

similar to the ATM example in the article (and my experience with ai coding tools), the automation will start out by handling the easiest parts of our jobs.

eventually, all the easy parts will be automated and the overall headcount will be reduced, but the actual content of the remaining job will be a super-distilled version of 'all the hard parts'.

the jobs that remain will be harder to do, and it will be harder to find people capable of or willing to do them. it may turn out that if you tell somebody "solve hard problems 40hrs a week"... they can't do it. we NEED the easy parts of the job to slow down and let the mind wander.

by parpfish

2/16/2026 at 6:40:59 PM

There are plenty of jobs like this already. They'll want to keep you around even if you're not doing much most of the time, because you can still solve the hard problems as they arise and grow organizational capital in other ways.

by zozbot234

2/16/2026 at 6:42:32 PM

I’m also concerned about the continuing enshittification of software. Even without LLMs, we’ve had to endure slapdash software. Even Apple, which used to be perfectionistic, has slipped. I feel enshittification is a result of a lack of meaningful competition for many software products due to moats such as proprietary file formats and protocols, plus network effects. “Move fast and break things” software development methodologies don’t help.

LLMs will help such teams move and break things even faster than before. I’m not against the use of LLMs in software development, but I’m against their blind use. However, when there is pressure to ship as fast as possible, many will be tempted to take shortcuts and not thoroughly analyze the output of their LLMs.

by linguae

2/16/2026 at 6:23:05 PM

Saying it will take jobs is the marketing line to CEOs - even more than "you will be left behind."

by dv_dt

2/16/2026 at 6:30:29 PM

Except entry level jobs are already getting wiped out.

by hmmmmmmmmmmmmmm

2/16/2026 at 6:37:36 PM

The one entry level job that's been wiped out for good by LLMs is human marketing copywriters, i.e. the people whose job was to come up with the kind of slop LLMs learned from. They're just rebranding as copyeditors now because AI can write the slop itself, or at least its first draft.

by zozbot234

2/16/2026 at 8:48:55 PM

> the customers are investors and CEOs

100%. I have a basically unlimited Claude balance at work. I do not think of cost except for fun. The CEO thinks every engineer has to use AI because nobody is gonna just be using text editors alone in the future.

by whateveracct

2/16/2026 at 7:05:38 PM

Also: They've figured out they can "force" AI adoption top-down at many workplaces. They don't need to convince you or even your boss - they just need the C-suite to mandate it.

by spamizbad

2/16/2026 at 6:42:25 PM

Yeah I guess the subtext is 'AI is going to take over so much of the market that it's risky to hold anything else.'

by qnleigh

2/16/2026 at 6:42:24 PM

When trying to infer people's motives, don't just look at what they are doing. Look also at what they aren't doing: alternatives they had and rejected.

If marketing it was the sole objective there are many other stories they could have told, but didn't.

by im3w1l

2/16/2026 at 7:10:32 PM

what are a couple of those alternatives?

by vpribish

2/16/2026 at 6:23:03 PM

You guys can hate him, but Alex Karp of Palantir had the most honest take on this recently which was basically:

"Yes, I would love to pause AI development, but unless we get China to do the same, we're f***, and there's no advantage unilaterally disarming" (not exact, but basically this)

You can assume bad faith on the parts of all actors, but a lot of people in AI feel similarly.

by bpodgursky

2/16/2026 at 6:34:40 PM

In China, I wonder if the same narrative is happening: no new junior devs, threats of obsolescence, etc. Or do they collectively see the future differently?

by testbjjl

2/16/2026 at 6:50:08 PM

They absolutely see the future differently because their society is already set up for success in an AI world. If what these predictions say become true, free market capitalism will collapse. What would be left?

by SlightlyLeftPad

2/16/2026 at 7:13:53 PM

The reason you think it's honest is because you already believed it.

by biophysboy

2/16/2026 at 7:46:47 PM

What does he gain by saying "Yes I'd love to shut all of this down"?

by bpodgursky

2/16/2026 at 8:52:11 PM

He gets to pretend to be impartial ("sure, I'd love to shut this down and lose the billions coming my way"),

and by pretending that he has no option but to "go against his own will" and continue it, he gets to make it sound nuclear-bomb-level important.

This is hyping 101.

by coldtea

2/16/2026 at 8:20:05 PM

It's rhetoric - he gains your support. He doesn't want to shut it down.

by biophysboy

2/16/2026 at 8:38:11 PM

You should really step back and think, maybe you've spun a very complicated web of beliefs on top of your own eyes.

Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think.

by bpodgursky

2/16/2026 at 8:49:23 PM

This is what I think:

- Alex Karp genuinely believes China is a threat

- I think China is an economic threat, especially for tech

- An AI arms race is itself threatening; it is not like the nuclear deterrent

- Geopolitical tensions are very convenient for Alex Karp

- America has a history of exaggerating geopolitical threats

- Tech is very credulous with politics

by biophysboy

2/16/2026 at 8:53:20 PM

"This late night informercial guy is not genuinely believing the product to be amazing, he's just saying shit to sell it"

"Yes, maybe everyone is playing 8 dimensional chess and every offhand comment is a subtle play for credibility. Or maybe sometimes people just say what they think".

I mean, dude, a corporate head having a conflict of interest in saying insincere shit to promote the stuff his company makes is not some conspiracy thinking about everybody playing "8 dimensional chess".

It's the very basic baseline case.

by coldtea

2/16/2026 at 11:29:12 PM

If he was making nuclear weapons, would you question the statement, "I would be happy if everyone stopped making nuclear weapons, but if China is making them, we can't unilaterally disarm"? I'm very willing to believe a CEO who says that! They are human beings!

Whether or not you believe it, a lot of people view advanced AI the same way.

by bpodgursky

2/17/2026 at 1:31:54 AM

>They are human beings!

Barely.

Even in the nuclear weapons CEO case, if they actually believed that, they'd be in another line of business, not making millions off nuclear weapons.

by coldtea

2/16/2026 at 6:32:06 PM

Yeah but it’s in his interest to encourage an arms race with China.

by tonyedgecombe

2/16/2026 at 7:11:50 PM

OK, but the other view equally compatible with the evidence is that he is scared of getting rolled by an AI-dominant China and that's why he's building tools for the dept of defense.

Like I said you can believe whatever you want about good-faith motives, but he didn't have to say he wanted to pause AI, he could have been bright-and-cheery bullish, there was no real advantage to laying his cards out on his qualms.

by bpodgursky

2/16/2026 at 7:09:58 PM

Answer the question.

If the USA pauses AI development, do you think China will?

by heraldgeezer

2/16/2026 at 8:48:34 PM

"I would love to stop getting your money, but consider the children/China/some disaster/various scenarios. That's why you should continue to shower me with billions".

by coldtea

2/16/2026 at 11:20:43 PM

Oh please. People said that about the moon, and nuclear weapons too. And yet it's the one side that has a track record of using new technology to intimidate.

by didntknowyou

2/16/2026 at 7:09:33 PM

HN has become so Marxist they hate the country they live in

by heraldgeezer

2/16/2026 at 9:12:12 PM

Oh no! What a pity. In hindsight, mind you, countries that treat their population like disposable labor units to be wrung dry and then discarded, tend to see the population develop opinions about that eventually.

by Balinares

2/16/2026 at 9:37:59 PM

>disposable labor units

This is the source of migration allowance but I bet you stand there with a "Refugees welcome" sign.

by heraldgeezer

2/16/2026 at 6:22:22 PM

It's worse than that. It's ultimately a military technology. The end-game here is to use it offensively and/or defensively against other countries. Whoever establishes dominance first wins. And so you have to push adoption, so that it gets tested and can be iterated on. But this isn't about making money (they are losing it like crazy!) This is end-of-the-world shit, about whoever will be left standing once all the dominoes fall -- if they ever fall (let's hope they don't!)

But it's tacitly understood we need to develop this as soon as we can, as fast as we can, before those other guys do. It's a literal arms race.

by empressplay

2/16/2026 at 6:35:53 PM

Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.

Probably only a matter of time until there’s a Snowden-esque leak saying AI is responsible for drone assassinations against targets selected by AI itself.

by monkpit

2/16/2026 at 8:45:57 PM

>Yeah, if you consider a military-grade AI/LLM with access to all military info sources, able to analyze them all much quicker than a human… there’s no way this isn’t already either in progress or in use today.

Still wouldn't mean much. Wars are won on capacity, logistics (the practical side, not the ability to calculate them), land/etc advantages, and, when it comes to boots on the ground, courage, knowledge of the place, local population support, etc. Not "analyzing info sources" at scale, which is mostly a racket that pretends to be important.

by coldtea

2/16/2026 at 11:23:18 PM

Ok, I didn’t say anything about what you said though. I said it’s definitely either in progress or already implemented.

by monkpit

2/17/2026 at 1:33:23 AM

And I didn't refute anything about what you said. I said "Still wouldn't mean much."

by coldtea

2/16/2026 at 6:42:20 PM

This 100%. We're in the middle of an AI Manhattan Project and if "we" give up or slow down, another company or country will get AGI before "us" and there's no coming back after that. If there's a chance AGI is possible, it doesn't make sense to let someone else take the lead no matter how dangerous it could be.

by daze42

2/16/2026 at 6:31:17 PM

One often forgets this.

by big_paps

2/16/2026 at 8:44:19 PM

>The end-game here is to use it offensively and / or defensively against other countries.

Against other countries? The biggest endgame is controlling their own populations. That has always been the biggest problem/desire of elites, not war with other countries.

by coldtea

2/16/2026 at 6:35:05 PM

With all the wackiness around AI, is this some Mutually Assured Delusion doctrine?

by saltcured

2/16/2026 at 6:35:17 PM

Sam Altman is a known sociopath who has no problem achieving his goals by any means necessary. His prior business dealings (and repeated patterns with OpenAI) are evidence of this.

Shumer is of a similar stock but less capable, so he gets caught in his lies.

I’m still shocked people work with Altman knowing his history, but given the Epstein files etc. it’s no surprise. Our elite class is entirely rotten.

Best advice is trust what you see in front of your face (as much as you can) and be very skeptical of anything else. Everyone involved has agendas and no morals.

by apaosjns

2/16/2026 at 6:48:18 PM

I'm shocked how congratulatory things were for OpenClaw joining Altman Inc

by verdverm

2/16/2026 at 6:56:03 PM

If you know the author you know it's a match made in heaven

by gmerc

2/16/2026 at 6:39:40 PM

[dead]

by himata4113

2/16/2026 at 7:25:26 PM

FOMO was literally built into Bitcoin. In the beginning it was a lot easier, and then it slowly got harder.

But what I really hate about AI and how most people talk about it is that if one day it does what the advertisements say, all white collar jobs collapse.

by OptionOfT

2/16/2026 at 7:50:21 PM

> all white collar jobs collapse

Then everything collapses. The carpenter will also be out of work if more than half of his client base cannot afford his work anymore.

by steve1977

2/16/2026 at 8:43:05 PM

Not to mention millions of ex-white-collar workers, and students who would have tried to go white collar, rushing to become carpenters and roofers, bringing the pay and the quality of those jobs way down...

by coldtea

2/16/2026 at 9:08:47 PM

This idea doesn't sit well with me. Like, I recognize it very likely could happen, but it feels kind of like a dumb magic trick. Like learning the economy goes up because people generally believe the economy should go up.

If everyone is out of a job, then what is the point of the economy? Who is doing what work for whom? If nobody can afford anything, then why are we even doing this?

And, at some point, needs have to be met. If you were to drop 1000 random people on a remote island, you wouldn't expect nothing to happen just because there aren't any employers with jobs on offer. People would spontaneously organize for survival and a local economy would form.

I find depictions of post-apocalyptic societies in sci-fi to be difficult to accept, too. Like Elysium: how could the entire Earth be an underclass, yet the space station needs them to work to be able to survive? That would be the easiest siege warfare of all time; do literally nothing and the space station eventually starves to death. Like Fallout: how could places stay completely run down for centuries? You mean to tell me that nobody would at least start a scrap yard and start cleaning up the old baby buggies and scrap metal from burnt out hulks of cars?

And then grown-ass adults tell stories about what could happen as if they are experts in Economy. Nobody knows how Economy works. Hell, half the Nobel prizes in economics in the last decade have basically been about proving the stupid, fairytale version of economics that existed for the last 100 years was a complete farce.

Yeah, I can definitely see a market collapse leading to a lot of mortgages getting foreclosed. But a complete shutdown? It seems preposterous. How? Why would everyone go along with it?

by moron4hire

2/16/2026 at 6:16:19 PM

The AI executives are marketing it—it’s just that none of us are the target demographic. They are marketing it to executives and financiers, the people who construct the machinations to keep their industry churning, and those who begrudge the necessity of labor in all its forms.

by nativeit

2/16/2026 at 6:23:30 PM

Yup, if you haven’t heard first-hand (i.e. from the source) at least one story where some exec was using AI to intimidate his employees, or outright terminating them in some triumphant way (whether or not this was a sound business decision), then you’ve gotta be living in a bubble. AI might not be the problem, but the way it’s being used is.

by lambdasquirrel

2/16/2026 at 6:57:21 PM

This has been the message at the F100 that one of my relatives works at. The CEO's increasingly aggressive message to their hundreds of thousands of employees is that they should figure out how to get 10x faster with AI or their job is on the line. The average non-technical white collar employee doesn't know the details of how LLMs work or any of the day-to-day changes in tooling that we see in the tech industry. All they see is elites pouring all their resources into a machine that will result in Great Depression 2 if it succeeds. Millions of people whose lives depend on their $50k office job in Middle America are hoping and praying that it fails.

I live in an area that's not a tech hub and lots of people get confrontational when they find out I work in tech. First they want to know if I'm working on AI, and once they're satisfied that the answer is no, they start interrogating me about it. Which companies are behind it, who their CEOs are, who's funding them, etc. All easily Googleable, but I'm seen as the AI expert because I work in tech.

by SL61

2/16/2026 at 7:13:28 PM

I do love that.

My career is built on people not knowing how to Google lmao (IT)

To most people, AI is ChatGPT. Maybe Gemini.

Claude? No idea.

VS Code, Cursor, Antigravity, Claude Code? Blank stares.

Same as when the computer came, some will fall behind. Excel monkeys copy-pasting numbers will go; copywriters and written-word jobs are already gone. Art for simple images is AI now, all done by one person.

Unless you want a Soviet system where jobs are kept to keep people busy.

by heraldgeezer

2/16/2026 at 10:27:20 PM

> Unless you want a Soviet system where jobs are kept to keep people busy.

In $big_corp, everyone seems to have penis envy over “head count”, constantly checking whose is bigger.

If you want to see an executive have an existential crisis, ask them how many of those folks are necessary for the org to run.

by csa

2/16/2026 at 9:12:43 PM

> Unless you want a Soviet system where jobs are kept to keep people busy.

There's an argument that even under capitalism, a lot of jobs still only exist in order to keep people busy.

by notpachet

2/16/2026 at 6:24:57 PM

Part of what's going on here -- why we have this gap between what we say we fear and how we act -- is just a human deficiency.

I remember when Covid got out of control in China, a lot of people around me [in NY] had this energy of "so what, it'll never come to us." I'm not saying that they believed that, or had some rational opinion, but they had an emotional energy of "it's no big deal." The emotional response can be much slower than the intellectual response, even if that fuse is already lit and the eventuality is indisputable.

Some people are good at not having that disconnect. They see the internet in 1980 and they know that someday 60 years from now it'll be the majority of shopping, even though 95% of people they talk to don't know what it is and laugh about it.

AI is a little bit in that stage... It's true that most people know what it is, but our emotional response has not caught up to the reality of all of the implications of thinking machines that are gaining 5+ IQ points per year.

We should be starting to write the laws now.

by zug_zug

2/16/2026 at 6:49:27 PM

But it’s worth being careful - you could’ve said the same thing 3 years ago about NFTs. They were taking off and people made very convincing arguments about how it was the future of concert tickets, and eventually commerce in general.

If we started writing lots of laws around NFTs, it would just be a bunch of pointless (at best), or actively harmful laws.

Nobody cares about NFTs today, but there were genuinely good ideas about how they’d change commerce being spouted by a small group of people.

People can say “this is the future” while most people dismiss them, and honestly the people predicting tectonic shifts are usually wrong.

I don’t think that the current LLM craze is headed for the same destiny as NFTs, but I don’t think that the “LLM is the new world order” crowd is necessarily more likely to be correct just because they’re visionaries.

by californical

2/16/2026 at 7:15:29 PM

Some of my friends bought a tonne of bitcoin when it was around $100 because it was clearly the future. I'm still not sure if I was an idiot or smart to reject that

by AstroBen

2/16/2026 at 10:56:09 PM

> I'm still not sure if I was an idiot or smart to reject that

Both.

Smart because you realized that BTC is an incredibly flawed currency, or store of value, or whatever you want to call it.

Idiot because you grossly underestimated the desire of the general public to gamble on the “next big thing”.

by csa

2/16/2026 at 6:17:35 PM

The reason I dislike AI use in certain modes is because the end result looks like a Happy Meal toy from McDonald’s. It looks roughly like the thing you wanted or expected, but on even a casual examination it falls far short. I don’t believe this is something we can overcome with better models. Or, if we can, then what we end up writing as prompts will begin to resemble a programming language. At which point it just isn’t worth what it costs.

This tech is a breakthrough for so many reasons. I’m just not worried about it replacing my job. Like, ever.

by twodave

2/16/2026 at 6:51:56 PM

[dead]

by dr-detroit

2/16/2026 at 6:27:49 PM

> And Matt Shumer is saying that AI is currently like Covid in January 2020—as in, "kind of under the radar, but about to kill millions of people"

This is where the misrepresentation... no, the lie comes in. It always does in these "sensible middle" posts! The genre requires flattening both sides into dumber versions of themselves to keep the author positioned between two caricatures. Supremely done, OP.

If you read Matt's original article[0] you see he was saying something very different. Not "AI is going to kill lots of people" but that we're at the point on an exponential curve where correct modeling looks indistinguishable from paranoia to anyone reasoning from base rates of normal experience. The analogy is about the epistemic position of observers, not about body counts.

[0]: https://shumer.dev/something-big-is-happening

by ctoth

2/16/2026 at 6:33:13 PM

My feelings on AI are complicated.

It's very useful as a coding autocomplete. It provides a fast way to connect multiple disparate search criteria in one query.

It also has caused massive price hikes for computer components, negatively impacted the environment, and most importantly, subtly destroys people's ability to understand.

by bovermyer

2/16/2026 at 9:55:00 PM

I've come to the same conclusions. I don't think the feelings are complicated though. It's just personal when you find use in it. It's kind of like cryptocurrency in the way that it doesn't democratize efficiency and usefulness amongst everyone. So for a few it is a powerful and useful tool, but ultimately at the cost of everything else.

by hunter-gatherer

2/16/2026 at 6:19:51 PM

AI is scary, but look on the bright side:

Whenever there is a massive paradigm shift in technology like we have with AI today, there are absolutely massive, devastating wars because the existing strategic stalemates are broken. Industrialized precision manufacturing? Now we have to figure out who can make the most rifles and machine guns. Industrialized manufacturing of high explosives? Time to have a whole world war about it. Industrialized manufacturing of electronics? Time for another world war.

Industrialized manufacturing of intelligence will certainly lead to a global scale conflict to see if anyone can win formerly unwinnable fights.

Thus the concerns about whether you have a job or not will, in hindsight, seem trivial as we transition to fighting for our very survival.

by mullingitover

2/16/2026 at 6:23:55 PM

To me the global rise of full-blown authoritarianism in every corner seems more plausible than a shooting war. The tech is very well suited for controlling people, both in the monitoring sense and in destroying their ability to tell what's real.

i.e. a new stalemate in the form of multiple inward-focused countries/blocs

by Havoc

2/16/2026 at 6:29:00 PM

That was already happening without LLMs. LLMs will just make it worse.

by BlackjackCF

2/16/2026 at 7:06:59 PM

Yeah, LLMs complete the surveillance state. They add the patience to monitor, analyze, and de-anonymize all the data. The industrial revolution and its wealth temporarily disrupted civilization, but we're regressing to the normal state of global authoritarianism again.

by recursivecaveat

2/16/2026 at 6:37:23 PM

"We've always been at war with Eurasia"

by goda90

2/17/2026 at 12:51:20 AM

Bingo

by Havoc

2/16/2026 at 7:07:31 PM

> the existing strategic stalemates are broken

Claude, go hack <my enemy nation-state> and find me ways to cause them harm that are unlikely to be noticed until it is too late for them to act on it.

by RHSeeger

2/16/2026 at 6:32:16 PM

Where were the massive devastating wars last time this happened with the internet and mobile phone?

by verdverm

2/16/2026 at 6:37:10 PM

You could say that it waged a silent war, and our kids' attention spans lost.

by tgv

2/16/2026 at 6:41:27 PM

Very likely they got the causality backwards. Every time there’s a big war, technology advances because governments pour resources into it.

by cvwright

2/16/2026 at 6:55:36 PM

The internet and mobile phones weren't paradigm shifts for warfare. There were already mobile radios in WWII, so they fall under the 'industrialized manufacturing of electronics' bucket.

by mullingitover

2/16/2026 at 9:29:54 PM

You might look at what Ukraine is doing with mobile phones, satellite internet, and now AI.

Drones change everything we think we know about warfare, except for the adage that logistics is what wins wars (post-industrialization).

by verdverm

2/16/2026 at 6:52:41 PM

Just for the sake of argument, I don't think the internet and mobile phones are military technologies, nor did GP use those examples.

> Industrialized manufacturing of electronics?

Ukraine seems to be exploring this and rewriting military doctrine. The Iranian drones the Russians are using seem to be effective, too. The US also has drones, and we've discovered that drone bombing is not helpful with insurgencies; we haven't been in any actual wars for a while, though.

> Industrialized manufacturing of intelligence

I don't think we've gotten far enough to discover how/if this is effective. If GP means AI, then we have no idea. If GP means fake news via social media, then we may already be seeing the beginning effects. Both Obama and Trump drew a lot of their support from social media.

Having written this, I think I flatly disagree with GP that technology causes wars because of its power. I think it may enable some wars because of its power differential, but I think a lot is discovered through war. WWI revealed the limitations of industrial warfare, and of chemical weapons. Ukraine is showing what constellations of mini drones (as opposed to the US's solitary maxi-drones) can do, simply because they are outnumbered and forced to get creative.

by prewett

2/16/2026 at 7:41:57 PM

I think a peer comment said it best: the commenter has causality backwards.

It seems you may be extending this.

If you don't think the internet is a vital tech in the front-line drone war, I would invite you to watch Perun's recent video on Starlink.

by verdverm

2/16/2026 at 7:24:50 PM

how do you not think the internet is a military technology? i mean (waves hands) like it's from ARPA, the military paid for it, it integrated cold war air defence, it made global comms resilient to attack, and made information non-local on a massive scale

GP's assertion about tech revolutions making wars doesn't make any sense to me on any level, but it's not just because the latest revolutions were 'not military tech'

i'm liking william spaniel's model: wars happen when 1 - there is a substantial disagreement between parties and 2 - there is a bargaining friction that prevents reaching a less-costly negotiated resolution.

I don't see how a technical revolution necessarily causes either, much less both, of those conditions. there sure is a lot of fear and hype going around - and that causes confusion and maybe poor decisions - but we should chill on the apocalyptics

by vpribish

2/16/2026 at 6:36:16 PM

Yeah this was my thought as well

by squibonpig

2/16/2026 at 6:37:29 PM

I am absolutely sure that WW3 is inevitable, for these exact reasons. Afterwards, the survivors will be free to reorganize society.

by atemerev

2/16/2026 at 6:56:41 PM

Nature likes to do occasional resets. Probably explains the Fermi paradox as well.

by SoftTalker

2/16/2026 at 10:23:55 PM

All our human history is a tiny speck on astronomical timescales. The timescale of life itself, on the other hand, is quite significant. Just from this we can somewhat deduce that life might be common in the universe, but sentience might be rare.

by atemerev

2/16/2026 at 6:46:31 PM

Luddites weren't against technology. “They just wanted machines that made high-quality goods and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.” https://www.smithsonianmag.com/history/what-the-luddites-rea...

by ttuominen

2/16/2026 at 6:13:17 PM

> Being able to easily interact with banks, without waiting in a line that’s too long for the dum-dum you get at the end to be a real consolation, made people use banks more.

Actually, in my city it wasn't the ATMs but the apps, which made it possible to do almost everything on the phone, that significantly reduced the number of bank branches in the last few years. I rarely have to go to the bank, but when I do, I find that yet another nearby branch has closed and I have to go somewhere even farther.

by bananaflag

2/16/2026 at 6:48:41 PM

Disclaimer: self plug[0]

I honestly believe everything will be normalized. A genius with the same model as me will be more productive than me, and I will be more productive than some other people, exactly as it is without AI.

If AI starts doing things beyond what you can understand, control, and own, it stops being useful; the extra capacity is wasted capacity, and there are diminishing returns on ever-growing investment needs. The margins fall off a cliff (and they're already negative), and the only economic improvement will come from Moore's Law in terms of the power needed to generate stuff.

The nature of the work will change; you'll manage agents and whatnot (I'm not a crystal ball), but you'll still have to dive into the details to fix what AI can't, and if you can't, you're stuck.

[0]: https://www.fer.xyz/2026/02/llm-equilibrium

by fer

2/16/2026 at 7:32:35 PM

The margins on inference definitely aren’t negative. An easy way to check this is by looking at the costs of using cloud-hosted open source models, which necessarily are served at a positive margin, and are much lower $/token than what you get from the labs.
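
A back-of-envelope version of that check, using made-up illustrative prices rather than real quotes:

    # Hypothetical prices, for illustration only; substitute current quotes.
    # If a third party profitably serves a comparable open-weight model at a
    # lower $/token, its price is a rough upper bound on serving cost, so the
    # lab's implied margin on inference is at least:
    host_price_per_mtok = 0.50  # assumed third-party price, $/million tokens
    lab_price_per_mtok = 3.00   # assumed lab price, $/million tokens
    implied_margin = 1 - host_price_per_mtok / lab_price_per_mtok
    print(f"implied inference margin >= {implied_margin:.0%}")  # prints 83%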

by SteveMqz

2/16/2026 at 6:19:36 PM

> students rob themselves of the opportunity to learn, so they can… I dunno, hit the vape and watch Clavicular get framemogged

Hahah, this guy Gen-Zs.

by abhaynayar

2/16/2026 at 6:27:50 PM

Depending on how you use AI you can learn things a lot quicker than before. You can ask it questions, ask it to explain things, etc. Even if the AI is not ready for prime time yet, the vision of being able to change how we learn is there.

by chung8123

2/16/2026 at 6:50:14 PM

> The people in charge of AI keep telling me to hate it

Anthropic’s Dario Amodei deserves a special mention here. He paints the grimmest possible future, so that when/if things go sideways, he can point back and say, "Hey, I warned you. I did my part."

Probably there is a psychological term that explains this phenomenon; I asked ChatGPT and it said it could be considered "anticipatory blame-shifting" or "moral licensing".

by ludwigvan

2/16/2026 at 8:46:47 PM

Despite having a very good life, I always had trouble with self-esteem, always felt impostor syndrome, and felt I hadn't really earned what I have (both professionally and personally).

AI users, and especially AI boosters, are the best thing that happened to me. I no longer feel like an impostor; maybe the opposite. The confidence they gave me got me a promotion and a raise.

I still get mad at AI sometimes, when someone uses it to resolve an easy issue tagged "good first issue" that I specifically took the time to describe so a new hire could get into a very complex codebase, or when I review a fully vibe-coded PR before the person who vibe-coded it does, but overall the coming of AI (especially since Sonnet/Opus 4.5 and GPT-5.1) is great.

by orwin

2/16/2026 at 6:22:32 PM

The vibes around the self-driving car hype (maybe 10 years ago?) felt very similar to me, but on a smaller scale. There was a lot of "You might like driving your car and having a steering wheel, but if you do, you're a luddite who will soon be forced to ride about in our featureless rented robot pods" type of statements, or that one AI scientist who was quoted saying we should just change laws around how humans are allowed to interact with streets to protect the self-driving cars.

Not all of it was like that. Oddly enough, I think it was Tesla, or just Elon Musk, claiming you'd soon be able to take a nap in your car on your morning commute through some sort of Jetsons tube, or that you could let your car earn money on the side while you weren't using it, which might actually be appealing to the average person. But a lot of it felt like self-driving car companies wanted you to feel like they just wanted to disrupt your life and take your things away.

by writeslowly

2/16/2026 at 6:20:22 PM

I have some friends who are embracing it and using it to transform their businesses (eg insurance sales), and others who hate it and think it should be banned (lawyers, white collar).

I think for a lot of people it feels like an inconvenient thing they have to contend with, and many are uncomfortable with rapid change.

by cdempsey44

2/16/2026 at 6:40:07 PM

"Change is scary,"

This is not the point the author was making, but I think this phrase implies that it's merely fear of change which is the problem. Change can bring about real problems and real consequences whether or not we welcome it with open arms.

by everdrive

2/16/2026 at 7:01:59 PM

I think the author is far too generous in thinking through the possible motives of tech leaders in the "what if they believe it?" branch.

Far from embracing UBI (or any other legal/social strategy to mitigate mass unemployment), tech leaders have signaled very strongly that they'd actually prefer the exact opposite. They have nearly universally aligned themselves with the party that's explicitly in favor of extreme wealth inequality and aversion to even the mildest social welfare program.

by lukev

2/16/2026 at 6:28:04 PM

People hate AI because it does all the fun jobs.

by amelius

2/16/2026 at 6:32:22 PM

Haha, funny but not true!

by big_paps

2/16/2026 at 6:35:59 PM

Is this job obsolescence narrative top of mind in China? I wonder if they're seeing these developments differently?

by testbjjl

2/16/2026 at 6:55:01 PM

Nice read! The main benefit for me is the reduced search time for anything I need to look up online. Especially for code, you can find relevant information way more quickly.

One improvement for your writing style: it was clear to me that you don't hate AI; you didn't have to mention that so many times in your story.

by crassus_ed

2/16/2026 at 6:16:12 PM

> If I can somehow hate a machine that has basically stopped me from having to write boring boilerplate code, of course others are going to hate it!

Poor author, never tried expressive high-level languages with metaprogramming facilities that do not result in boring and repetitive boilerplate.

by v3xro

2/16/2026 at 9:54:47 PM

The rule of metaprogramming is that it ends up just as convoluted and full of edge cases as regular code, just without a nice way to debug it. The rule is also that it always seems like a fantastic idea at first and like it will solve so many issues.

I've been programming since 1994. I've seen a lot. I almost always end up despising any metaprogramming system and wish we'd just kept things simpler even if it meant boilerplate.

by munksbeer

2/16/2026 at 6:48:37 PM

Honestly, this. Mainstream coding culture has spent decades shoehorning stateful OOP into distributed and multithreaded contexts. And now we have huge piles of code, getters and setters and callbacks and hooks and annotation processors and factories and dependency injection all pasted on top of the hottest coding paradigm of the '90s. It's too much to manage, and now we feel like we need AI to understand it all for us.

Meanwhile, nobody is claiming vast productivity gains using AI for Haskell or Lisp or Elixir.

by WolfeReader

2/16/2026 at 6:55:25 PM

I mean, I find that LLMs are quite good with Lisp (Clojure) and I really like the abstraction levels that it provides. Pure functions and immutable data mean great boundary points and strong guarantees to reason about my programs, even if a large chunk of the boring parts are auto-coded.

I think there's lots of people like me, it's just that doing real dev work is orthogonal (possibly even opposed) to participating in the AI hype cycle.

by lukev

2/16/2026 at 7:08:14 PM

> The idea of AI as an exterminator of human problems is much more appealing than AI as the exterminator of, you know, the career of me and everybody else on Earth.

Here's the rub though: needing a "career" to survive and have a decent life is a human problem. It's an extreme case of mass Stockholm Syndrome that's made the majority accept that working in order to make money in order to have access to life-preserving/enhancing resources is a necessary part of the human condition. Really, that flow is only relevant when it requires human effort to create, maintain and distribute those resources in the first place.

AI is increasingly taking over the effort, and so is threatening that socio-economic order. The real problem is that the gains are still being locked away to maintain the very scarcity those efforts address, so over time there's an increasing crisis of access, since there's nothing really in place to continue providing the controlled access to resources everyone in the system has had for... centuries.

by skeledrew

2/16/2026 at 7:01:57 PM

I'm still confused about how so many people are happy paying so much money just to BE the product.

Widespread FOMO and the irrationality that comes with it might be at an all time high.

by thothless

2/16/2026 at 6:16:29 PM

I suppose they mean "Why people who hate AI hate AI"... I don't hate AI and know many people who don't either. I find it quite useful but that's it.

by glimshe

2/16/2026 at 6:50:00 PM

People don’t hate AI because they’re scared of it taking their jobs. They hate it because it’s massively overhyped while simultaneously being shoved down their throats like it’s a panacea. If and when AI, whether in LLM form or something else, actually demonstrates genuine intelligence as opposed to clearly probabilistic babble and cliche nonsense, people will be a lot more excited and open to it. But what we have currently is just dogshit with a few neat tricks to cover up the smell.

by avazhi

2/16/2026 at 6:51:21 PM

this feels like a comment out of 2023. Ever since reasoning models arrived, they have become much more than "probabilistic babble".

by hmmmmmmmmmmmmmm

2/16/2026 at 7:13:14 PM

Hence why I mentioned a few tricks to cover the shit smell.

by avazhi

2/16/2026 at 8:02:57 PM

The U trap turned out to be a lot more useful than a "trick". It's one of two reasons we managed to get indoor plumbing (which is a pretty huge win for personal comfort IMO).

by saulpw

2/16/2026 at 6:35:45 PM

> I have a friend who is a new TA at a university in California. They’ve had to report several students, every semester, for basically pasting their assignments into ChatGPT.

We've solved this problem before.

You have 2 separate segments:

1. Lessons that forbid AI
2. Lessons that embrace AI

This doesn't seem that difficult to solve. You handle it the same way you handle calculators and digital dictionaries in universities.

Moving forward, people who know fundamentals and AI will be more productive. The universities should just teach both.

by ergocoder

2/16/2026 at 6:42:20 PM

this is tough because we've spent years building everything in education to be mediated by computers and technology, and now we're realizing that maybe we went a little overboard and over-fit to "let's do everything on computers".

it was easy to force kids to learn multiplication tables in their head when there were in-person tests and pencil-and-paper worksheets. if everything happens through a computer interface... the calculator is right there. how do you convince them that it's important to learn to not use it?

if we want to enforce non-ai lessons, i think we need to make sure we embrace more old-school methods like oral exams and essays being written in blue books.

by parpfish

2/16/2026 at 6:57:53 PM

The Phoebus.AI cartel was an international cartel that controlled the manufacture and sale of computer components in much of Europe and North America between 2025 and 2039. The cartel took over market territories and lowered the useful supply and life of such computer components, which is commonly cited as an example of planned obsolescence of general computing technology in favor of 6G ubiquitous computing. The Phoebus.AI cartel's compact was intended to expire in 2055, but it was instead nullified in 2040 after World War III made coordination among the members impossible.

by phoebusaicartel

2/16/2026 at 7:28:23 PM

The most insightful part of the essay is the focus on everyday experience. People are not reacting to AI because of labor statistics but because of cheating, fake videos, spam, and the loss of visible effort. When effort becomes indistinguishable from automation, trust erodes. That explains the backlash better than unemployment forecasts.

Where the piece misses the point is scale. It treats AI mainly as a labor market shock. Historically, technologies rarely eliminate work outright. They change what humans are valued for. The deeper danger is not mass joblessness. It is weakened thinking, shallow learning, and a breakdown in shared reality. The economic fear is overstated. The cultural damage is understated.

by mfrankel

2/16/2026 at 7:00:09 PM

That's an excellent article. I also don't believe AI is the issue, but rather those at the helm of most of these companies. In my view, AI companies, like other tech companies of the past, have no interest in serving society. So, you have a point when you said, "They don’t actually care about what their products may do to society—they just want to be sure they win the AI race, damn the consequences." It's all about money, and those at the top who have the money have nothing to lose. I'd rather see AI being put to better use, curing cancer and other diseases. I think your scenario where "Their Super-AGI will write the UBI law, and get it passed, when it has a few minutes between curing cancer and building a warp drive" is very likely now.

by SilentM68

2/16/2026 at 8:11:00 PM

> ANd Matt Shumer is saying that AI is currently like Covid in January 2020—as in, “kind of under the radar, but about to kill millions of people”.

Odd typo + em dash = "Make mistakes in your output so that people think it wasn't AI generated"

by ModernMech

2/16/2026 at 8:34:58 PM

Author here! I didn’t use AI to write this. Jekyll’s markdown to HTML converter helpfully transforms a series of three normal dashes into an em dash.

I did use AI to proofread the article, and implemented its primary suggestion (turning down a dark joke at my own expense that was pretty out of place), but the first draft of this was 100% hand-written.

by NM-Super

2/16/2026 at 7:51:36 PM

> unless all those AI marketing materials are really meant for the ultra-wealthy, and not for me

The only reason they exist is because rich people keep shoveling money into their furnaces. 800M free ChatGPT users do nothing for anybody; Sam A couldn't care less about them. The only people that matter to AI companies, and I mean matter at all, are investors. And everything they do is about appealing to investors.

by queenkjuul

2/16/2026 at 7:05:43 PM

Highly entertaining to me that people will form these Woe Is Me rings and just hype themselves into sadness. Then you give them a few minutes and they’ll start exclaiming about how society is all about loneliness these days and how they’ve been going to therapy for the last five years.

The miserable have always been miserable. And no matter how much the world improves, they will find paths to misery. Perhaps the great lesson of this age is that some revel in sadness.

Perhaps what we desire as humans is intensity of emotion more than valence.

by renewiltord

2/16/2026 at 6:30:20 PM

The cracks are showing, and all the “AI is going to eliminate 50% of white collar jobs” fear mongering is simply signaling we’re in the final stages before the bubble implosion.

The AI bros desperately need everyone to believe this is the future. But the data just isn’t there to support it. More and more companies are coming out saying AI was good to have, but the mass productivity gains just aren’t there.

A bunch of companies used AI as an excuse to do mass layoffs only to then have to admit this was basically just standard restructuring and house cleaning (eg Amazon).

There's so much focus on white collar jobs in the US, but these have already been automated and offshored to death. What's there now is truly survival of the fittest. Anything that's highly predictable, routine, and fits recurring patterns (i.e. what AI is actually good at) was long since offshored to places like India. To the extent that AI does cause mass disruption to jobs, the Indian tech and BPO sectors would be ground zero… not white collar jobs in the US.

The AI bros are in a fight for their careers, and the signal increasingly points to the most vulnerable roles at the moment being those tangentially tacked onto the AI hype cycle. If real, measurable value doesn't show up very soon (likely before year end), the whole party will come crashing down hard.

by cmiles8

2/16/2026 at 10:18:49 PM

I just don't agree with this at all.

Right now is a good time for the job market. The S&P is at an all-time high.

In the next recession, I expect massive layoffs in white collar work and there is no way those jobs are coming back on the other side.

40-50% of US white collar work hours are spent on procedural, rules-based tasks. Another large chunk goes to managing and supporting the people doing those tasks. Salary and benefits are 50% of operating costs for most businesses.

Maybe you do something really interesting and unique but that is just not what most white collar workers in the US are doing.

I know for myself, these are the final days of white collar work before I am unemployable as a white collar worker. I don't think the company I work for will exist either in 5 years. It is not a matter of Claude code being able to update a legacy system or not. It is that the tide hasn't really gone out in 15 years and all these zombie companies are going to get wiped out at the same time AI is automating the white collar jobs. Delaying the business cycle from clearing over and over is not a free lunch, it is a bill that has been stacking up for a long time.

On the other side, the business as usual of today won't be an option.

From my own white collar experience: if you view procedural, rules-based tasks as a graph, the automation of any one task depends heavily on other tasks being automated. So it will seem like the automation is not working, but at some point you get a contagion of automation, and then a great deal of it happens at once.
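
One way to picture that contagion intuition is a toy dependency-graph simulation; the structure below is entirely hypothetical, not data about real workflows:

    import random

    random.seed(0)
    n = 200
    # Toy model: each task depends on up to 3 random other tasks; a task can
    # be automated only once all of its dependencies are automated.
    deps = {t: set(random.sample(range(n), random.randint(0, 3))) - {t}
            for t in range(n)}
    automated = {t for t in range(n) if not deps[t]}

    wave = 0
    while True:
        frontier = {t for t in range(n)
                    if t not in automated and deps[t] <= automated}
        if not frontier:
            break
        automated |= frontier
        wave += 1
        print(f"wave {wave}: {len(automated)}/{n} tasks automated")
    # Each wave only becomes possible after earlier waves clear its
    # prerequisites, so automation lands in gated bursts rather than smoothly.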

by topocite

2/16/2026 at 6:41:58 PM

The openclaw stuff for me is a prime signal that we are now reaching the maximal size of the bubble before it pops - the leaders of the firms at the frontier are lost and have no vision - and this is a huge warning signal. E.g. Steve Jobs was always ahead of the curve in the context of the personal computer revolution - there was no outside individual who had a better view of where things were heading.

There isn't gonna be a huge event in the public markets though, except for Nvidia, Oracle, and maybe MSFT. Firms that are private will suffer enormously though.

by ass22

2/16/2026 at 6:35:19 PM

My hunch is the year of the AI Bubble is the same one as the Linux Desktop

by verdverm

2/16/2026 at 7:52:30 PM

Is this article written by AI?

by lasgawe

2/16/2026 at 6:35:22 PM

Once I read a reference to Clavicular, I realized that the very first thing this author should do is stop reading the NYTimes. If the goal is to experience things closer to reality haha.

by doctorpangloss

2/16/2026 at 8:38:28 PM

I don’t subscribe to the NYT anymore (I cancelled my subscription in 2022). I know who Clavicular is because of social media.

The fact that the NYT thought the guy was worthy of a profile is yet another piece of evidence that I should never have given that paper money in the first place.

by NM-Super

2/16/2026 at 6:58:42 PM

He's kind of hard to avoid at this point if you're a regular on any big social platform (except maybe Instagram?).

by FeteCommuniste

2/16/2026 at 7:24:07 PM

You can’t even avoid it on HN now. ;)

by layer8

2/16/2026 at 6:26:18 PM

Yeah, it's FUD. AI can't even do customer service jobs well, and CEOs make hyperbolic statements that it will replace 30% of jobs.

by b8

2/16/2026 at 6:28:32 PM

In my experience dealing with e.g. Amazon Prime Video customer service, the actual people working on customer service can't do those jobs well either. As an example, I've complained multiple times to them about live sports streams getting interrupted and showing an "event over" notice while the event is still happening. It's a struggle to get them to understand the issue let alone to get it fixed. They haven't been helpful a single time.

So if AI improves a bit, it might be better than the current customer service workers in some ways...

by FartyMcFarter

2/16/2026 at 6:44:26 PM

Amazon isn't interested in giving you a quality experience.

The customer service reps are warm bodies for sensitive customers to yell at until they tire themselves out.

Tolerating your verbal abuse is the job.

Amazon never intended to improve the quality of the service being offered.

You're not going to unsubscribe, and if you did they wouldn't miss you.

by co_king_5

2/16/2026 at 7:18:52 PM

To be fair, when has being useless stopped adoption of something in the customer service space?

by Macha

2/16/2026 at 6:24:36 PM

> The classical cultural example is the Luddites, a social movement that failed so utterly

Maybe not the best example? The Luddites were skilled weavers who had their livelihoods destroyed by automation. The govt deployed 12,000 troops against the Luddites, executed dozens after show trials, and made machine breaking a capital offense.

Is that what you have planned for me?

by dmm

2/16/2026 at 6:55:13 PM

I caught that too. The piece is otherwise good imo, but "the luddites were wrong" is wrong. In fact, later in the piece the author essentially agrees – the proposals for UBI and other policies that would support workers (or ex-workers) through any AI-driven transition are an acknowledgement that yes, the new machines will destroy people's livelihoods and that, yes, this is bad, and that yes, the industrialists, the government and the people should care. The luddites were making exactly that case.

> while it’s true that textile experts did suffer from the advent of mechanical weaving, their loss was far outweighed by the gains the rest of the human race received from being able to afford more than two shirts over the average lifespan

I hope the author has enough self awareness to recognize that "this is good for the long term of humanity" is cold comfort when you're begging on the street or the government has murdered you, and that he's closer to being part of the begging class than the "long term of humanity" class (by temporal logistics if not also by economic reality).

by drewbeck

2/16/2026 at 7:15:19 PM

My take was that it's not

> We should hate/destroy this technology because it will cause significant short term harm, in exchange for great long term gains.

Rather

> We should acknowledge that this technology will cause significant short term harm if we don't act to mitigate it. How can we act to do that while still obtaining the great long term gains from it?

by RHSeeger

2/16/2026 at 6:17:32 PM

Sam Altman gave millions to Andrew Yang for pushing UBI, so they are trying to forewarn and experiment with finding the right solution. Most of the world prefers to shove their heads in the sand though and call them grifters, so of course we'll do nothing until it's catastrophic.

by KaoruAoiShiho

2/16/2026 at 6:17:35 PM

The number of em dashes and the use of negation does make me think AI wrote part of this. I'll give credit for the lack of semicolons, but people are starting to get a bit better at "humanizing" their outputs.

by Der_Einzige

2/16/2026 at 6:22:58 PM

There are em-dashes, but the writing feels nice and unlike the default ChatGPT style, so even if it is AI (which it might not be, since people do use em-dashes), I don't mind.

by abhaynayar

2/16/2026 at 6:47:53 PM

I'm certainly seeing a huge amount of AI-assisted writing recently (that "I Love Board Games" article posted here yesterday was a good example), but I think this one is human-written. Pangram shows it as human written also.

by Nition

2/16/2026 at 7:04:37 PM

I wouldn't be surprised if someone had AI write a bot to post a complaint on every HN thread about how the article smells like AI slop. It's so tiresome. Either the article is interesting and useful or it's not; I don't really care if someone used AI to write it.

by dasil003

2/16/2026 at 6:52:00 PM

Almost hard to remember now, but many tech companies used to be well liked - even Facebook at one time. The negative externalities of social media or smartphones were not apparent right away. But now people live with them daily

So feelings have soured and tech seems more dystopian. Any new disruptive technology is bound to be looked upon with greater baseline cynicism, no matter how magical. That's just baked in now, I think.

When it comes to AI, many people are experiencing all the negative externalities first, in the form of scams, slop, plagiarism, fake content - before they experience it as a useful tool.

So it's just making many people's lives slightly worse from the outset, at least for now.

Add all that on top of the issues the OP raises and you can see why so many have bad feelings about it.

by SmirkingRevenge

2/16/2026 at 6:36:01 PM

Well, the process cannot be stopped or paused, whether we like it or not, for a few relatively obvious reasons.

And relying on your government to do the right thing as of 2026 is, frankly, not a great idea.

We need to think hard ourselves about how to adapt. Perhaps "jobs" will be a thing of the past, and governments will probably not manage to rule over it. What will be the new power structures? How do we gain a place there? What will replace governments as the organizing force?

I am thinking about this every day.

by atemerev

2/16/2026 at 7:35:12 PM

Of course the process can be stopped. That's just a lack of imagination.

by mitchdoogle

2/16/2026 at 10:25:52 PM

By whom?

Well, maybe together with a full societal collapse; but as long as there are nation-states and competition, AI will be developed.

by atemerev

2/16/2026 at 9:08:11 PM

The AI bubble is almost exactly like Theranos. I honestly believe that Elizabeth Holmes believed in what she was doing, and that if she strung investors along for long enough, she could fund the development of her magical nanofluidic CGM on steroids. She didn't have the knowledge to pull it off, nor to know that everybody else was far from pulling it off, and she Dunning-Kruger'd herself into a situation where the jig was up and she was caught.

The difference is, with AI, it looks like they're really pulling it together and delivering something. For years it was "any minute now, this is gonna change everything, keep giving us money please" and it was all amusing but not-worth-the-hassle chatbots until recently, but Professor Harold Hill came through in the clutch and put together a band that can play something resembling music at the latest possible moment. And now we have agents that can do real work and the distinct possibility that the hermetic magicians might actually awaken their electronic god.

by bitwize

2/16/2026 at 6:18:29 PM

Pangram is crap. Simple N-gram analysis, a la the stylometry techniques your local three-letter agency has known about since the 90s, is more effective.

Also, I can make even the most slopped models score as 100% "human written" easily by simply fiddling with the sampler settings. Catch me if you can with temp = 100 and a modern distribution-aware truncation sampler (e.g. P-less decoding, top-H, or even min_p).
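
A minimal sketch of that kind of sampler fiddling, assuming a recent Hugging Face transformers release with min_p support; the model name and settings here are illustrative stand-ins:

    # Illustrative only: crank temperature to flatten the distribution, then
    # rely on distribution-aware truncation (min_p) to keep output coherent.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in; any causal LM works
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tok("The reviewer wrote:", return_tensors="pt")
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=5.0,   # extreme heat washes out the model's house style
        min_p=0.1,         # keep tokens with >= 10% of the top token's prob
        max_new_tokens=60,
    )
    print(tok.decode(out[0], skip_special_tokens=True))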

by Der_Einzige

2/16/2026 at 6:12:28 PM

I hate LLMs because you can solve any problem that LLMs can solve in a much better way, but people are too stupid, cheap, or lazy to put in the effort to do so and share it with everyone.

That and the whitewashing it allows on layoffs from failing or poorly planned businesses.

Human issues as always.

by dgxyz
