alt.hn

2/11/2026 at 9:52:27 AM

Something Big Is Happening

https://shumer.dev/something-big-is-happening

by eternalreturn

2/12/2026 at 3:44:48 PM

So apparently, according to Axios, this blog post has gone "mega viral," and their article concludes by stating affirmatively that the "AI" revolution is here now. It's been shared by a number of normally trusted sources; my sister linked it to me because she saw Mehdi Hasan share it with a note that it's the most important thing you'll read in like forever.

To me it reads exactly like every other blog post of its genre. It substitutes subjective personal experience for any kind of externally verifiable fact, makes appeals to anonymous authorities that always seem to support the author's conclusion, uses language designed to induce a sense of fear if not outright panic in the reader, and at no point addresses the fundamental reality of "AI's" catastrophic unprofitability. Not to mention how gross it feels to read the author's slobbering all over Amodei as some kind of model for good corporate behavior.

Fundamentally, my real problem with it is that the author believes that if we make LLMs good enough at coding, they will then become capable of doing all other knowledge work to a high enough standard that they will replace human knowledge workers. That is such a breathtaking example of a Leap to Conclusion that if we could harness its energy we could start sending spaceships directly to other star systems.

by rurihoshino

2/13/2026 at 7:12:48 PM

It doesn't take much effort to find news about AI (LLMs) successfully being deployed with ROI in healthcare, legal, customer operations, retail, banking, accounting/tax, and more. I don't think the article needs to worry about Leaping to a Conclusion, as there is plenty of evidence out there.

by mycall

2/13/2026 at 10:12:04 PM

Mind sharing some of this supposedly easy-to-find evidence? I'm having a real hard time digging any up.

by s1mplicissimus

2/12/2026 at 9:18:06 AM

PhD physicist (Stanford/SLAC), Research Software Engineer doing low-level systems work in C/C++ and LLM research. Not a founder or investor — just a practitioner.

One data point for this thread: the jump from Opus 4.5 to 4.6 is not linear. The minor version number is misleading. In my daily work the capability difference is the largest single-model jump I've experienced, and I don't say that casually — I spent my career making precision measurements.

I keep telling myself I should systematically evaluate GPT-5.3 Codex and the other frontier models. But Opus is so productive now that I can't justify the time. That velocity of entrenchment is itself a signal, and I think it quietly supports the author's thesis.

I'm not a doomer — I'm an optimist about what prepared individuals and communities can do with this. But I shared this article with family and walked them through it in detail before I ever saw it on HN. That should tell you something about where I think we are.

by mattlangston

2/12/2026 at 7:19:19 PM

If you use Claude Code, it will take you half a day to learn to use Codex, and like 30 minutes to start being productive in it. The switching cost is almost zero. Just go test out GPT-5.3; there is no reason not to.

by jakobnissen

2/13/2026 at 8:06:35 PM

It's a bit more than zero, because I have substantial tooling around Claude Code – subagents, skills, containerization, &c – that I'd have to (have Opus...) reimplement.

by nicwolff

2/13/2026 at 9:24:56 PM

You can point CC at other models, e.g. via OpenRouter. It only requires three env variables to be set.
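
Something like this, as an untested sketch - the three variables are Claude Code's standard overrides, but the endpoint URL and model slug below are illustrative, so check your provider's docs:

    export ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1"   # any Anthropic-compatible endpoint
    export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"          # key for that endpoint
    export ANTHROPIC_MODEL="openai/gpt-5.3-codex"              # model slug to route to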

by car

2/13/2026 at 10:25:11 AM

one feels the llm wow moment whenever what they do in an area has been surpassed by an llm. newer versions of llms are probably trained on feedback from developer code-agent sessions; that's probably why pro developers started to feel "wow" recently.

the real challenge will be in the frontier of the human knowledge and whether llms will be able to advance things forward or not.

ps1: i'm using 5.3/o4.6/k2.5/m2.5/glm5 and others daily for development - so my work has intensified 1.5x - i tackle increasingly harder problems, but llms still really fail big on brand-new challenges, just like i do. so i'm more alert than ever.

ps2: syntactical autocomplete used to write 80% of my code; now llms have replaced autocomplete, but at a semantic level. i think, and an llm implements most of my actions, like a cerebellum for muscle coordination - while sometimes teaching me new info from the net.

by diminish

2/13/2026 at 3:13:56 PM

The frontier-of-knowledge point is the right question. My own research is a case in point - I apply experimental physics methods to LLMs, measuring their equations of motion in search of a unified framework for how and why they work. Some of the answers I'm looking for may not exist in any training data.

That's where the 4.5->4.6 jump hit me hardest - not routine tasks but problems where I need the model to reason about stuff it hasn't seen. It still fails, but it went from confidently wrong to productively wrong, if that makes sense. I can actually steer it now.

The cerebellum analogy resonates. I'd go further - it's becoming something I think out loud with, which is changing how I approach problems, not just how fast I solve them.

by mattlangston

2/13/2026 at 7:17:13 PM

That change in wrongness is the frontier labs trying to remove their benchmaxxing bias: the models now have a concept of 'I don't know' and will rethink directions and goals better. There was a lot of research on this topic last year, and it takes 6 to 12 months before it's implemented for general consumption.

2026 will see further improvements for you.

by mycall

2/12/2026 at 7:58:03 PM

For reference, the author's (Matt Shumer's) AI business (HyperWrite AI) is a hundred small LLM wrappers that do things like:

- "transform complex topics into easy-to-understand explanations."

- "edit and transform images using simple text descriptions."

- "summarizes a research article, and answers specific questions about it."

You can see all of them here: https://www.hyperwriteai.com/aitools.

HyperWrite does also have a markdown editor with an AI copilot sidebar that seems a little more substantial: https://www.hyperwriteai.com/ai-document-editor.

I don't know enough to disprove Matt, but I also don't know why anyone should listen to him. There are far smarter people who have arrived at similar conclusions.

by moojacob

2/12/2026 at 11:31:34 PM

Well, I did read it, and I did listen to him. Assuming he isn't lying about his anecdotal evidence, he did a very good job opening my eyes to just how fast the AI models are moving and how the disruption can, and likely will, hit the public before they realize it.

by unethical_ban

2/11/2026 at 12:50:00 PM

I am not having the exact same experience as the author--Opus 4.6 and Codex 5.3 seem more incremental to me than what he is describing--but if we're on an exponential curve, the difference is a rounding error.

4 months ago, I tried to build an application mostly by vibe coding. I got impressively far given what I thought was possible, but it bogged down. This past weekend, my friend had OpenClaw build an application of similar complexity in a weekend. The difference is vast.

At work, I wouldn't say I'm one-shotting tasks, but the first shot is doing what used to be a week's work in about an hour, and then the next few hours are polish. Most of the delay in the polish phase is due to the speed of the tooling (e.g. feature branch environment spin up and CI) and the human review at the end of the process.

The side effects people report of lower-quality code hitting review are real, but I think that is a matter of training, process, and work harness. I see no reason that won't significantly improve.

As I said in another thread a couple days ago, AI is the first technology where everyone is literally having a different experience. Even within my company, there are divergent experiences. But I think we're in a world where very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like what the author describes. And if they can find people who can actuate that reality, the folks who can't are going to see their options contract precipitously.

by acjohnson55

2/11/2026 at 1:14:04 PM

> But I think we're in a world where very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like what the author describes.

I think this part is very real.

If you're in this thread saying "I don't get it," you are in danger much sooner than your coworker who is using it every day and succeeding at getting around AI's quirks to be productive.

by whynotminot

2/13/2026 at 12:31:21 PM

My wife manages 70 software developers. Her boss, the CIO, who has no practical programming experience, is demanding that she and her peers cut 50% of their staff in the next year.

by afpx

2/11/2026 at 2:47:18 PM

We’ve got repos full of 90% complete vibe code.

They’re all 90% there.

The thing is, the last 10% is 90% of the effort. The last 1% is 99% of the effort.

For those of us who can consistently finish projects, the future is bright.

The sheer amount of vibe code is simply going to overwhelm us (see the current state of open source).

by pragmatic

2/11/2026 at 1:23:27 PM

Be careful here. I have more coworkers contributing slop and causing production issues than 10x’ing themselves.

The real danger is if management sees this as acceptable. If so best of luck to everyone.

by psiszj

2/11/2026 at 1:28:16 PM

> The real danger is if management sees this as acceptable. If so best of luck to everyone.

Already happening. It's just an extension of the "move fast and break stuff" mantra, only faster. I think the jury is still out on whether more or fewer things will break, but it's starting to look like not enough to pump the brakes.

by TheCraiggers

2/11/2026 at 2:06:11 PM

> Be careful here. I have more coworkers contributing slop and causing production issues than 10x’ing themselves.

Sure, many such cases. We'll all have work for a while, if only so that management has someone to yell at when things break in prod. And break they will -- the technology is not perfected and many are now moving faster than they can actually vet the results. There is obvious risk here.

But the curve we're on is also obvious now. I'm seeing massive improvements in reliability with every model drop. And the model drops are happening faster now. There is less of an excuse than ever for not using the tools to improve your productivity.

I think the near future is going to be something like a high-speed drag race. Going slow isn't an option. Everyone will have to go fast. Many will crash. Some won't and they will win.

by whynotminot

2/11/2026 at 2:39:51 PM

> I think the near future is going to be something like a high-speed drag race. Going slow isn't an option. Everyone will have to go fast. Many will crash. Some won't and they will win.

I think this is right. This is what we as engineers have to wrap our minds around. This is the game we're in now, like it or not.

> Many will crash.

Aside from alignment and some of these bigger-picture concerns, prompt injection looms large. It's an astoundingly large, possibly unsolvable vector for all sorts of mayhem. But many people are making the judgment that there's too much to be gained before the shocks hit them. So far, they're right.

by acjohnson55

2/11/2026 at 1:33:39 PM

If a company lets faulty code get to production, that's an issue no matter how it is produced. Agentic coding can produce code at much higher volumes, but I think we're still in the early days of figuring out how to scale quality and the other nonfunctional requirements. (I do believe that we're literally talking about days, though, when it comes to some facets of some of these problems.)

But there's nothing inherent about agentic coding that leads to slop outcomes. If you're steering it as a human, you can tweak the output, by hand or agentically, until it matches your expectations. It's not currently a silver bullet.

That said, my experience is that the compression of the research, initial drafting, and revision--all of which used to be the bulk of my job--is radical.

by acjohnson55

2/11/2026 at 1:02:07 PM

Yes, for me it has moved past AI coding accelerating you to 80-90% and then leaving you in the valley of infinite tweaks. This past month, with the right thinking, working with say Opus 4.6 has gotten past that blocker.

by mediaeater

2/11/2026 at 12:55:21 PM

> But I think we're in a world where very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like what the author describes.

We already live in that world. It's called "Hey Siri", "Hey Google", and "Alexa". It seems that no amount of executive tantrum has caused any of these tools to give a convergent experience.

by waku888

2/11/2026 at 1:25:35 PM

Voice assistants, which I've used less than 10 times in my life, are hardly related to what I'm talking about.

by acjohnson55

2/13/2026 at 3:53:22 PM

Let's take that vibe-coded product and iterate on what it gave you 100 times, as you tweak it to fit your vision. When you do that 101st iteration, can you prevent it from breaking something else, or changing it in a way you don't like?

What if it doesn't understand what you're asking it to do and keeps failing and you have to keep rolling back? Can you understand the 20,000 lines it's generated so you can make the change yourself without tearing your hair out? Can you fix bugs in it that it can't, without starting from zero and having to understand the whole codebase?

by marstall

2/13/2026 at 4:06:38 PM

To guard against this, the best course of action is probably modularization and composition, right? The Unix philosophy, i.e. building small, focused tools out of small, focused tools.
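
In shell terms, a loose sketch of the shape (the tool names here are hypothetical, just to show the idea):

    # each stage is a small, separately testable tool, so you can have an
    # agent regenerate one stage without risking the rest of the pipeline
    extract_events logs/*.json \
        | filter_errors --since 2026-02-01 \
        | summarize --by service \
        > report.txt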

by gavmor

2/13/2026 at 4:56:35 PM

yes - i've thought that could work. returning to a more protected object-oriented programming model (with hard-defined interfaces) could be a way - "make these changes but restrict yourself to this object," etc.

by marstall

2/13/2026 at 4:30:26 PM

Just run it a 102nd time to fix the error from the 101st time, obviously /j

by nickorlow

2/11/2026 at 12:44:03 PM

What's the point in using these tools if they're gonna replace us in a few years? It's weird that the author says that, but then his conclusion is basically "go spend money on stuff I'm invested in".

The Covid comparison is apt. I remember being insanely scared in Jan 2020 when those videos of Chinese people dropping dead were coming out (and being shamed by most of my peers, etc.). A few months later it was starting to become obvious it was really only a major risk if you were old or infirm, but the rest of the world took a while to catch up.

AI’s big and gonna change stuff - and like COVID probably for the worse - but we’re in a poorly understood hype cycle right now.

by psiszj

2/11/2026 at 12:50:26 PM

> What’s the point in using these tools if they’re gonna replace us in a few years?

Increase shareholder value in the short term.

by karmakurtisaani

2/12/2026 at 7:57:50 PM

> > What’s the point in using these tools if they’re gonna replace us in a few years?

> Increase shareholder value in the short term.

Every time, I see this as the ultimate conclusion for all of these kinds of hype posts.

by opem

2/11/2026 at 1:03:10 PM

> Few months later it was starting to become obvious it was really only a major risk if you were old or infirm, but the rest of the world had took awhile to catch up.

Only a risk in terms of dying, sure, but plenty of people lost taste and smell for long periods before the vaccine (I wouldn't be surprised if some of them have yet to get it back). I'd rather be dead, to be honest. The food around my part of the world is too delicious.

by aero9188

2/13/2026 at 8:37:31 AM

also, people developed debilitating brain fog, chronic fatigue, and an increased risk of heart trouble.

by zem

2/13/2026 at 2:25:19 PM

I have all of the above still

Started in Feb 2020

by hackable_sand

2/13/2026 at 5:24:38 PM

sorry to hear it :( I have friends in the same boat.

by zem

2/11/2026 at 12:43:53 PM

I asked ai to summarise this blog post.

Jokes aside, it should be noted that the author is a founder and CEO of an AI company, not to mention an active investor in the sector. (All disclosed on his "about" page.)

by anthonj

2/11/2026 at 12:52:16 PM

How convenient that the AI apocalypse is happening RIGHT NOW, as the investors are more and more worried about an AI bubble. Good timing, I suppose.

by karmakurtisaani

2/12/2026 at 7:21:13 AM

It should be noted that the author doesn't shy away from that, and that his argument is convincing. He notes that while he is in the AI field, the actual cutting edge of AI development is done by a far smaller group of companies and researchers than the broader AI industry, which includes his company.

Was there something specific in the article you found unconvincing, or that directly counters an experience you've had with AI?

by unethical_ban

2/12/2026 at 9:21:37 AM

I'm not making a statement on his arguments (well, I don't agree with them, but that was not the point of my post).

I posted that because I consider blogging a fringe form of journalism, and basic journalism ethics require clear disclosure of conflict of interests.

His entire blog is very self-serving. Which doesn't mean his opinion is wrong, but it's potentially not "cold" or impartial. More like a sales pitch, probably. He is also very transparent about his business, but the article is posted here without that context, and I think it's important to point that out given the era of sensationalized and fake news we live in.

by anthonj

2/11/2026 at 4:20:50 PM

I thought the article was going to delve into this.

"The future is being shaped by a remarkably small number of people".

That is a lot of power in the hands of a few people. Probably nothing to worry about. Power is hardly ever abused...

by O1111OOO

2/13/2026 at 3:49:21 PM

With these posts I always wonder: what happens when this code runs into a customer? Or 1,000 customers, or a million? All with their own divergent needs, year over year.

I have just gotten off 3 years as a developer on that kind of project, and I used the best AI tools diligently every day. It often saved me time - like from some small drudgery of a half day of flailing about in config land. Or it could generate some nice Rails controllers and a JavaScript front end from a well-written spec. Writing tests was also a strong suit.

but just as often it failed to understand the depth of the product and its myriad concerns and led me down the garden path, reducing my efficiency.

Aside from that, a large part of my job was the parts that weren't coding - wrestling with specs that were far from ready for prime time, chaotic internal processes, deployment, internal coordination/communication, talking to customers, etc.

In the end it seemed like it saved me maybe 20% of my time overall. Nothing to sneeze at.

I get that greenfield apps that have no customer contact can be created with a phrase now. That's pretty amazing. But I would love to see Opus 4.6 up against a real beast of a codebase that you're far from a master of.

by marstall

2/11/2026 at 1:03:39 PM

Every once in a while, I try LLMs just to see how improvement is going.

Yesterday I had to explain to Opus what the color white is and what "bottom right" means, after it repeatedly declared problems fixed that a literal preschooler would have been able to tell were absolutely unchanged from the original problem description.

I am still waiting for this world of redundant programmers I've been hearing about for years.

by pockybum522

2/12/2026 at 1:44:52 PM

I use FreeBSD. When I talk to LLMs, they insist on giving me code in bash. Bash is not native to FreeBSD (though you can install and use it). I correct them, and of course they apologize, but the next day they go back to using bash in other questions.
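
For a concrete example of what breaks - FreeBSD's /bin/sh is a POSIX-style Almquist shell, so bashisms fail under it:

    # a bashism that FreeBSD's /bin/sh rejects:
    #   if [[ "$state" == ok* ]]; then echo ready; fi
    # the POSIX-portable equivalent works in both shells:
    case "$state" in
        ok*) echo ready ;;
        *)   echo "not ready" ;;
    esac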

by assimpleaspossi

2/12/2026 at 8:10:40 PM

Considering the rate of improvement of these LLMs, wait a month or two and you may not even need an OS, let alone some obscure piece of software (a shell).

by opem

2/13/2026 at 8:24:47 AM

[dead]

by juanani

2/11/2026 at 12:38:42 PM

> Making AI great at coding was the strategy that unlocks everything else. That's why they did it first.

They did it first because doing it first was easier. There are tons of examples around and code can be verified to work.

by dsmurrell

2/11/2026 at 12:36:32 PM

I find these water-against-a-rock literary tones so tedious. The writer always seems to have to go back and put some of it in BOLD TEXT, supposedly highlighting the main ideas, but really just adding optical affordances.

The truth of this seems much more banal. Computing has become a major drag. There have been tens of thousands of libraries that reinvent the wheel. Every operating system has become a toy. All major language systems have an absurd learning curve. Each important application is fortified by a giant corporation. Social media is self-important pop babble.

LLMs are surprisingly good at dealing with complex systems. I can fire one up and ask, for example, why this Swift code is not compiling. But why doesn’t my Swift editor explain that problem? Why is it a confusing question at all? The entire system was built from the ground up at enormous expense. Why do I seek outside help?

Our computing is full of whizzy animations and pointless Victorian ironmongery. All meaningless. AI is medicine, not the cure.

by hyperhello

2/11/2026 at 1:35:59 PM

> They focused on making AI great at writing code first... because building AI requires a lot of code.

I'm not convinced this person knows what they're talking about.

by politelemon

2/13/2026 at 1:26:03 PM

The problem with the current technology is that it can't feed on what it produces. And the more it produces, the less it has to feed on.

I wonder what will happen.

by motbus3

2/13/2026 at 8:28:23 AM

You know, as a (prickly) analogy, whatever your take on covid was, half the population vehemently disagreed with your take. No matter which side was more "correct", either way, a huge percentage of the population can be, and often is, completely deluded on even fairly understandable topics.

When it comes to something as complex as AI, what are the odds that a random person is going to have any sort of good/informed take on it? Especially someone like this, who's a non-technical angel investor? Their entire job is hyping things up to raise money / get paydays. They actually list on their resume various "viral articles/tweets" that they made that got attention / raised money. Could this guy remotely explain, technically, how an LLM works under the hood? I highly doubt it. His credentials are not building AI, not technical knowledge, but hyping up companies that use AI.

Well, at least he gives 1-5 year time frames for all his grand claims, so when they don't actually happen he'll be quickly proven wrong. But of course, it's the internet, and nothing will ever come of somebody making grand claims and then being completely proven wrong: there will be no follow-up, no self-reflection, no retraction, no long-term credibility hit, just on to hyping up the next thing after getting his payday.

by thegrim000

2/11/2026 at 12:40:17 PM

Management is going to quickly start bisecting human engineers into maximalists and minimalists. The minimalists will all be let go. A few bad things will happen. A few systems will strain under the pressure, but it'll be "worth it" in the same way that it's cheaper to pay lawsuits than to do a recall of a plane.

We aren't innovating in other areas that might soften the blow. We don't have good support systems, social security, healthcare, or even demand in other areas. How many engineers are going to become plumbers and construction workers?

by thinkingkong

2/11/2026 at 12:45:16 PM

If what the author says is true, there’s no point in management either.

by psiszj

2/13/2026 at 8:35:16 PM

I think an IBM training manual had a quote about not being able to hold a computer accountable, so you can't let one make management decisions. Basically, management will stick around so somebody can be held to blame when AI slop breaks things.

by coastingotter

2/12/2026 at 7:23:33 AM

Why is this flagged? It's a relevant essay from someone in the field with very convincing arguments.

Does using "@dang" work to get attention to this?

by unethical_ban

2/12/2026 at 1:42:33 PM

I noticed that in the article he includes a link to follow him on X. It made me wonder if the article exists to get him followers. Or get more investment into his company.

I'm not saying the article isn't worth reading. I just now wondered--and wouldn't be surprised--if it was written by AI.

by assimpleaspossi

2/11/2026 at 3:47:19 PM

This link is now on the top level of DrudgeReport.

I hope he has a good hosting plan.

by NickHodges0702

2/10/2026 at 11:59:29 PM

So long as perceived LLM skill is still "spiky" - e.g. still showing relatively high variation in ability within a domain (often depending on the task or user, to be fair) - people will continue to dismiss it.

by sudhirb

2/11/2026 at 1:05:55 PM

If we cover our eyes, it definitely won't happen

by okokwhatever

2/11/2026 at 3:21:02 PM

Mostly speculative

by 4b11b4

2/11/2026 at 12:00:51 AM

> AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates.

And best of all, when it messes up, it doesn't get sued!

You do.

by chrisjj

2/11/2026 at 4:12:47 AM

It doesn’t mess up. Not any more.

by feastingonslop

2/11/2026 at 12:58:35 PM

This is a solid assessment of what's here and what is in front of us. Broad-brush dismissals aside, we are here. Evolve or Perish. AI is like unchecked fire, but make no mistake, fire became very powerful once it was harnessed. AI leans more supplemental vs. incremental than prior major tech shifts, and that's worth noting. It will be the same for other sectors and verticals over time. The market's view that software eats the world is itself being eaten by new software.

by mediaeater

2/11/2026 at 3:12:01 PM

yep, and for the finances to make sense, these AIs need to make a significant impact on employment rolls, i.e., they need to replace humans. Personally I think it's the wrong tech at the wrong time, but I don't think this timeline and I are a very good match, so ymmv.

by cowboylowrez

2/11/2026 at 1:06:48 PM

On their heads be it

by nis0s

2/12/2026 at 8:13:50 PM

Protect human culture while you can. His advice to embrace AI in order to survive what's coming is naive. When Europeans came to the Native Americans, they brought their technology and their way of thinking, and erased the native culture in a matter of years. The weak were poisoned with alcohol, the strong were killed, and the few survivors were sent to reservations. Would it have been possible to survive by embracing the alien culture? Don't fool yourself.

by akomtu

2/11/2026 at 12:59:57 PM

> You can describe an app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done.

Interesting. So you regularly make new apps in 1 hour each.

How is that the same as...writing a book? Did you mean write several short stories? Or are we talking non-fiction?

by aero9188

2/11/2026 at 1:14:10 PM

[dead]

by goosers

2/11/2026 at 12:45:18 PM

[flagged]

by hbjkhgkytfkytv

2/11/2026 at 2:53:14 PM

As you've proven, some people really are holding it wrong:

https://chatgpt.com/share/698c97bb-0d04-8006-9418-8f299c6bd0...

by dist-epoch

2/11/2026 at 7:55:31 PM

I mean, you both used the exact same prompt; how is OP 'holding it wrong'?

by zparky

2/11/2026 at 10:53:03 PM

Granted, we can't see which model was picked, but one clearly shows that thinking was enabled, and the other shows it was not.

This just proves the article's point about using cheap models / free versions and complaining about the state of AI.

Well done!

by denysvitali

2/12/2026 at 1:46:06 AM

It was GPT 5.2. Can't remember if I did thinking or not. Note that even after the fallacy was highlighted to the model, it seemed to have a tough time grasping what went wrong.

by hbjkhgkytfkytv

2/13/2026 at 2:27:14 PM

You are proud of baking classism into your little toy?

Disgusting

by hackable_sand

2/12/2026 at 11:05:26 PM

> AI models are as shitty as they were in 2023.

Yep! They remain stupid (in the intelligence sense, not the pejorative sense) tools which have no practical value to anyone with decent skills. But the people who have a financial interest in the hype are still trying to convince everyone "it's totally different now bro, use the latest model bro". It's so tiresome.

by bigstrat2003

2/11/2026 at 12:34:03 PM

All about AI taking over the world.

by jmclnx