alt.hn

2/15/2026 at 11:34:18 PM

Why I don't think AGI is imminent

https://dlants.me/agi-not-imminent.html

by anonymid

2/16/2026 at 2:16:36 AM

Here's a thought. Let's all arbitrarily agree AGI is here. I can't even be bothered discussing what the definition of AGI is. It's just here, accept it. Or vice versa.

Now what....? What's happening right now that should make me care that AGI is here (or not)? What's the magic thing that's happening with AGI that wasn't happening before?

<looks out of window> <checks news websites> <checks social media...briefly> <asks wife>

Right, so, not much has changed from 1-2 years ago that I can tell. The job market's a bit shit if you're in software... is that what we get for billions of dollars spent?

by hi_hi

2/17/2026 at 1:09:33 AM

> Let's all arbitrarily agree AGI is here. I can't even be bothered discussing what the definition of AGI is.

There is a definition of AGI the AI companies are using to justify their valuation. It's not what most people would call AGI but it does that job well enough, and you will care when it arrives.

They define it as an AI that can develop other AIs faster than the best team of human engineers. Once they build one of those in house, they outpace the competition and become the winner that takes all. Personally I think it's more likely they will all achieve it at a similar time. That would mean the race continues, accelerating as fast as they can build data centres and power plants to feed them.

It will impact everyone, because the already dizzying pace of the current advances will accelerate. I don't know about you, but I'm having trouble figuring out what my job will be next year as it is.

An AI that just develops other AIs could hardly be called "general" in my book, but my opinion doesn't count for much.

by rstuart4133

2/16/2026 at 3:19:01 PM

What's happening with AGI depends on what you mean by AGI, so "can't even be bothered discussing what the definition is" means you can't say what's happening.

My usual way of thinking about it is that AGI means it can do all the stuff humans do, which means you'd probably, after a while, look out the window and see robots building houses and the like. I don't think that's happening for a while yet.

by tim333

2/16/2026 at 6:25:14 PM

Who would the robots build houses for? No one has a job and no one is having kids in that future.

by kjkjadksj

2/16/2026 at 8:55:34 PM

Where are the robots going to sleep? Outside in the rain?

by elfly

2/16/2026 at 7:56:02 PM

The billionaire elite. Isn’t it obvious? They want to get rid of us

by therobots927

2/16/2026 at 6:15:29 PM

Indeed: particularly given that—just as a nonexhaustive "for instance"—one of the fairly common things expected in AGI is that it's sapient. Meaning, essentially, that we have created a new life form, that should be given its own rights.

Now, I do not in the least believe that we have created AGI, nor that we are actually close. But you're absolutely right that we can't just handwave away the definitions. They are crucial both to what it means to have AGI, and to whether we do (or soon will) or not.

by danaris

2/16/2026 at 8:52:13 PM

I'm not sure how the rights thing will go. Humans have proved quite able not to give many rights to animals or other groups of humans even if they are quite smart. Then again there was that post yesterday with a lady accusing OpenAI of murdering her AI boyfriend by turning off 4o so no doubt there will be lots of arguments over that stuff. (https://news.ycombinator.com/item?id=47020525)

by tim333

2/16/2026 at 4:43:56 AM

Cultural changes take time. It took decades for the internet to move from nerdy curiosity to an essential part of everyone's life.

The writing is on the wall. Even if there are no new advances in technology, the current state is upending jobs, education, media, etc.

by hackyhacky

2/16/2026 at 4:51:18 AM

I really think corporations are overplaying their hand if they think they can transform society once again in the next 10 years.

Rapid deindustrialization followed by the internet and social media almost broke our society.

Also, I don’t think people necessarily realize how close we were to the cliff in 2007.

I think another transformation now would rip society apart rather than take us to the great beyond.

by materielle

2/16/2026 at 9:17:05 AM

I worry that if the reality lives up to investors' dreams, it will be massively disruptive for society, which will lead us down dark paths. On the other hand, if it _doesn't_ live up to their dreams, then there is so much invested in that dream financially that it will lead to massive societal disruption when the public is left holding the bag, which will also lead us down dark paths.

by foo42

2/16/2026 at 11:50:15 AM

It's already made it impossible to trust half of the content I read online.

Whenever I use search terms to ask a specific question these days, there's usually a page of slop dedicated to the answer which appears top for relevancy.

Once I realize it is slop, I realize the relevant information could be hallucinated, so I can't trust it.

At the same time I'm seeing a huge upswing in probable human-created content being accused of being slop.

We're seeing a tragedy of the information commons play out on an enormous scale at hyperspeed.

by pydry

2/16/2026 at 3:57:01 PM

You trust nearly half??!!??

by Induane

2/16/2026 at 5:30:16 AM

I think corporations can definitely transform society in the near future. I don't think it will be a positive transformation, but it will be a transformation.

Most of all, AI will exacerbate the lack of trust in people and institutions that was kicked into high gear by the internet. It will be easy and cheap to convince large numbers of people about almost anything.

by hackyhacky

2/16/2026 at 4:57:44 PM

I'm still not buying that AI will change society anywhere as much as the internet or smart phones for the matter.

The internet made it so that you can share and access information in a few minutes if not seconds.

Smartphones built on the internet by making it so this sharing and access of information could be done from anywhere and by anyone.

AI seems to occupy the same space as Google in the broader internet ecosystem. I don't know what AI provides me that a few hours of Google searches wouldn't. It makes information retrieval faster, but that was never the hard part. The hard part was understanding the information, so that you're able to apply it to your particular situation.

Being able to write to-do apps X1000 faster is not innovation!

by the1st

2/16/2026 at 1:24:51 PM

You are assuming that the change can only happen in the west.

The rest of the world has mostly been experiencing industrialisation, and was only indirectly affected by the great crash.

If there is a transformation in the rest of the world the west cannot escape it.

A lot of people in the west seem to have their heads in the sand, very much like when Japan and China tried to ignore the west.

China is the world's second biggest economy by nominal GDP, India the fourth. We have a globalised economy where everything is interlinked.

by graemep

2/16/2026 at 2:32:45 PM

When I look at my own country, it has proven to be open to change. There are people alive today who remember Christianity; now we swear in a gay prime minister.

In that sense Western countries have proven that they are intellectually very nimble.

by expedition32

2/16/2026 at 3:31:54 PM

Three of the best-known Christians I have known in my life are gay. Two are priests (one Anglican, one Catholic). Obviously the Catholic priest had taken a vow of celibacy anyway, so it's entirely immaterial. I did read an interview with a celeb friend (also now a priest!) of his which said he (the priest I knew) thought people did not know he was gay; we all knew, we just did not make a fuss about it.

Even if you accept the idea that gay sex is a sin, the entire basis of Christianity is that we are all sinners. Possessing wealth is a failure to follow Jesus's commands, for instance. You should be complaining a lot more if the prime minister is rich. Adultery is clearly a more serious sin than having the wrong sort of sex, and I bet your country has had adulterous prime ministers (the UK certainly has had many!).

I think Christians who are obsessed with homosexuality as somehow making people worse than the rest of us are both failing to understand Christ's message and saying more about themselves than about gays.

If you look at when sodomy laws were abolished, countries with a Christian heritage led this. There are reasons for this in the Christian ethos of choice and redemption.

by graemep

2/16/2026 at 2:36:56 PM

> people alive today who remember Christianity now we swear in a gay prime minister

Why would that be a contradiction? Gay people can't be Christian?

by hackyhacky

2/16/2026 at 5:01:43 AM

As a young adult in 2007, what cliff were we close to?

The GFC was a big recession, but I never thought society was near collapse.

by BobbyJo

2/16/2026 at 5:31:52 AM

We were pretty close to a collapse of the existing financial system. Maybe we’d be better off now if it happened, but the interim devastation would have been costly.

by edmundsauto

2/16/2026 at 11:14:12 AM

We weren't that far away from ATMs refusing to hand out cash, banks limiting withdrawals from accounts (if your bank hadn't already gone under), and a subsequent complete collapse of the financial system. The only thing that saved us from that was an extraordinary intervention by governments, something I am not sure they would be capable of doing today.

by verzali

2/16/2026 at 5:29:51 AM

It felt like the entire global financial system had a chance of collapsing.

by zeroonetwothree

2/16/2026 at 5:03:12 AM

Yeah, this is a good point: transition and transformation to new technologies take time. I'm not sure I agree the current state is upending things though. It's forcing some adaptation for sure, but the status quo remains.

by hi_hi

2/16/2026 at 5:48:51 AM

> It took decades

It took one September. Then as soon as you could take payments on the internet the rest was inevitable and in _clear_ demand. People got on long waiting lists just to get the technology in their homes.

> no new advances in technology

The reason the internet became so accessible is that Moore was generally correct. There were two corresponding exponential processes that vastly changed the available rate of adoption. This wasn't at all like cars being introduced into society. This was a monumental shift.

I see no advances in LLMs that suggest any form of the same exponential processes exist. In fact the inverse is true. They're not reducing power budgets fast enough to even imagine that they're anywhere near AGI, and even if they were, that they'd ever be able to sustainably power it.

> the current state is upending jobs

The difference is companies fought _against_ the internet because it was so disruptive to their business model. This is quite the opposite. We don't have a labor crisis, we have a retention crisis, because companies do not want to pay fair value for labor. We can wax on and off about technology, and perceptrons, and training techniques, or power budgets, but this fundamental fact seems the hardest to ignore.

If they're wrong this all collapses. If I'm wrong I can learn how to write prompts in a week.

by themafia

2/17/2026 at 1:25:35 AM

what September?

by nubg

2/17/2026 at 2:20:08 AM

This is an allusion to the old days, before the internet became a popular phenomenon. It used to be that every September a bunch of "newbies" (college students who had just gotten access to an internet connection for the first time) would log in and make a mess of things. Then, in the late nineties when it really took off, everybody logged in and made a mess of things. This is the "Eternal September." [1]

[1] https://en.wikipedia.org/wiki/Eternal_September

by hackyhacky

2/16/2026 at 6:45:59 AM

> It took one September.

It's the classic "slowly, then suddenly" paradigm. It took decades to get to that one September. Then years more before we all had internet in our pocket.

> The reason the internet became so accessible is because Moore was generally correct.

Can you explain how Moore's law is relevant to the rise of the internet? People didn't start buying couches online because their home computer lacked sufficient compute power.

> I see no advances in LLMs that suggest any form of the same exponential processes exist.

LLMs have seen enormous growth in power over the last 3 years. Nothing else comes close. I think they'll continue to get better, but critically: even if LLMs stay exactly as powerful as they are today, it's enough to disrupt society. IMHO we're already at AGI.

> The difference is companies fought _against_ the internet

Some did, some didn't. As in any cultural shift, there were winners and losers. In this shift, too, there will be winners and losers. The panicked spending on data centers right now is a symptom of the desire to be on the right side of that.

> because companies do not want to pay fair value for labor.

Companies have never wanted to pay fair value for labor. That's a fundamental attribute of companies, arising as a consequence of the system of incentives provided in capitalism. In the past, there have been opportunities for labor to fight back: government regulation, unions. This time that won't help.

> If I'm wrong I can learn how to write prompts in a week.

Why would you think that anyone would want you to write prompts?

by hackyhacky

2/16/2026 at 9:53:11 AM

> Cultural changes take time. It took decades for the internet to move from nerdy curiosity to an essential part of everyone's life.

99% of people only ever use proprietary networks from FAANG corporations. That's not "the internet", that's an evolution of CompuServe and AOL.

We got TCP/IP and the "web-browser" as a standard UI toolkit stack out of it, but the idea of the world wide web is completely dead.

by otabdeveloper4

2/16/2026 at 3:33:21 PM

Shocking how few realize this. It's a series of megacities interconnected by ghost towns out here.

by rglover

2/16/2026 at 5:00:31 AM

It also took years for the Internet to be usable by most folks. It was hard, expensive, and impractical for decades.

Just about the time it hit the mainstream, coincidentally, is when the enshittification began to go exponential. Be careful what you wish for.

by webdoodle

2/16/2026 at 5:32:58 AM

Allow me to clarify: I'm not wishing for change. I am an AI pessimist. I think our society is not prepared to deal with what's about to happen. You're right: AI is the key to the enshittification of everything, most of all trust.

by hackyhacky

2/16/2026 at 5:45:15 AM

Governments and companies have been pushing for identity management that connects your real-life identity with your digital one for quite some time. With AI, I believe that's not only a bad thing but maybe unavoidable now.

by bulbar

2/16/2026 at 5:11:55 AM

> Here's a thought. Let's all arbitrarily agree AGI is here.

A slightly different angle on this - perhaps AGI doesn't matter (or perhaps not in the ways that we think).

LLMs have changed a lot in software in the last 1-2 years (indeed, the last 1-2 months); I don't think it's a wild extrapolation to see that'll come to many domains very soon.

by jwilliams

2/16/2026 at 3:39:04 PM

Which domains? Will we see a lot of changes in plumbing?

by nradov

2/16/2026 at 5:33:17 PM

If most of your work involves working with a monitor and keyboard, you're in one of the domains.

Even if it doesn't, you will be indirectly affected. People will flock to trades if knowledge work is no longer a source of viable income.

by joquarky

2/16/2026 at 12:25:55 PM

If AGI were already here, actions would be so greatly accelerated that humans wouldn't have time to respond.

Remember that weather balloon the US found a few years ago, the one that for days was on the news as a Chinese spy balloon?

Well, whether it was a spy balloon or a weather balloon, the first hint of its existence could have triggered a nuclear war that could already have been the end of the world as we know it, because AGI will almost certainly be deployed to control U.S. and Chinese military systems, and it would have acted well before any human had time to intercept its actions.

That’s the apocalyptic nuclear winter scenario.

There are many other scenarios.

An AGI which has been infused with a tremendous amount of ethics, so the above doesn't happen, may also lead to terrible outcomes for humans. An AGI would essentially be a different species (although a non-biological one). If it replicated human ethics, even as we apply them inconsistently, it would learn that treating other species brutally is acceptable (we breed, enslave, imprison, torture, and then kill over 80 billion land animals annually in animal agriculture, and possibly trillions of water animals). There's no reason it wouldn't do that to us.

Finally, if we infuse it with our ethics but it's smart enough to apply them consistently (even a basic application of our ethics would have us end animal agriculture immediately), so that it realizes humans are wrong and doesn't do the same thing to us, it might still create an existential crisis for humans, as our entire identity is based on thinking we are smarter and intellectually superior to all other species, which wouldn't be true anymore. Further, it would erode beliefs in gods and other supernatural BS, which might at the very least lead humans to stop reproducing due to the existential despair this might cause.

by hshdhdhj4444

2/16/2026 at 2:42:56 PM

You're talking about superintelligence. AGI is just...an AI that's roughly on par with humans on most things. There's no inherent reason why AGI will lead to ASI.

by armoredkitten

2/16/2026 at 3:25:24 PM

What a silly comment. You're literally describing the plot of several sci-fi movies. Nuclear command and control systems are not taken so lightly.

And as for the Chinese spy balloon, there was never any risk of a war (at least not from that specific cause). The US, China, Russia, and other countries routinely spy on each other through a variety of unarmed technical means. Occasionally it gets exposed and turns into a diplomatic incident but that's about it. Everyone knows how the game is played.

by nradov

2/16/2026 at 1:21:13 PM

Sounds fun let's do it.

by koakuma-chan

2/16/2026 at 1:14:09 PM

AGI is not a death sentence for humanity. It all depends on who leverages the tool. And in any case, AGI won’t be here for decades to come.

by deafpolygon

2/16/2026 at 2:30:02 PM

Your sentence seems to imply that we will delegate all AI decisions to one person who can decide how he wants to use it - to build or destroy.

Strong agentic AIs are a death sentence memo pad (or a malevolent djinn lamp if you like) that anyone can write on, because the tools will be freely available to leverage. A plutonium breeder reactor in every backyard. Try not to think of paperclips.

by mapt

2/16/2026 at 5:02:52 AM

Before enlightenment^WAGI: chop wood, fetch water, prepare food

After enlightenment^WAGI: chop wood, fetch water, prepare food

by CamperBob2

2/16/2026 at 1:39:27 PM

AGI is a pipe dream and will never exist

by tsukurimashou

2/16/2026 at 5:27:40 PM

Odd to see someone so adamantly insist that we have souls on a forum like HN.

by joquarky

2/16/2026 at 10:52:20 PM

people are taking actions based on its advice.

by m463

2/16/2026 at 7:09:47 AM

AGI would render humans obsolete and eradicate us sooner or later.

by copx

2/16/2026 at 4:11:57 PM

One of the most impactful books I ever read was Alvin Toffler's Future Shock.

Its core thesis was: Every era doubled the amount of technological change of the prior era in one half the time.

At the time he wrote the book in 1970, he was making the point that the pace of technological change had, for the first time in human history, rendered the knowledge of society's elders - previously the holders of all valuable information - irrelevant.

The pace of change has continued to steadily increase in the ensuing 55 years.

Edit: grammar

by keernan

2/16/2026 at 8:31:29 AM

Pretty sure marketing teams are already working on AGI v2

by Havoc

2/16/2026 at 6:04:40 AM

I think you are missing the point: If we assume that AGI is *not* yet here, but may be here soon, what will change when it arrives? Those changes could be big enough to affect you.

by munchler

2/16/2026 at 7:25:14 AM

I'm missing the point? I literally asked the same thing you did.

>Now what....? What's happening right now that should make me care that AGI is here (or not)?

Do you have any insight into what those changes might concretely be? Or are you just trying to instil fear in people who lack critical thinking skills?

by hi_hi

2/16/2026 at 6:14:28 PM

You did not ask the same thing. You framed the question such that readers are supposed to look at their current lives and realize nothing is different ergo AGI is lame. Your approach utilizes the availability bias and argument from consequences logical fallacies.

I think what you are trying to say is: can we define AGI so that we can have an intelligent conversation about what it will mean for our daily lives? But you oddly introduced your argument by stating you didn't want to explore this definition...

by MadcapJake

2/16/2026 at 3:01:43 PM

The economy is shit if you’re anything except a nurse or providing care to old people.

by dyauspitr

2/16/2026 at 3:40:14 PM

Electricians are also doing pretty well. Someone has to wire up those new data centers.

by nradov

2/16/2026 at 9:50:44 AM

> The job market's a bit shit if you're in software

That's Trump's economy, not LLMs.

by otabdeveloper4

2/16/2026 at 5:46:38 AM

Many devs don’t write code anymore. Can really deliver a lot more per dev.

Many people are slowly losing jobs and can't find new ones. You'll see the effects in a few years.

by skeptic_ai

2/16/2026 at 5:48:23 AM

Deliver a lot more tech debt

by reactordev

2/17/2026 at 1:12:30 AM

My LLMs do create non-zero amounts of tech debt, but they are also massively decreasing human-made tech debt by finding mountains of code that can be removed or refactored when using the newest frameworks.

by qingcharles

2/16/2026 at 6:44:33 AM

That tech debt will be cleaned up with a model in 2 years. Not that humans don't make tech debt.

by dainiusse

2/16/2026 at 8:32:57 AM

What that model is going to do in 2 years is replace tech debt with more complicated tech debt.

by shaky-carrousel

2/16/2026 at 9:36:11 AM

One could argue that's a cynically accurate definition of most iterative development anyway.

But I don't know that I accept the core assertion. If the engineer is screening the output and using the LLM to generate tests, chances are pretty good it's not going to be worse than human-generated tech debt. If there's more accumulated, it's because there's more output in general.

by geoelectric

2/16/2026 at 2:37:26 PM

Only if you accept the premise that the code generated by LLMs is identical to the developer's output in quality, just higher in volume. In my lived professional experience, that's not the case.

It seems to me that prompting agents and reviewing the output just doesn't.... trigger the same neural pathways for people? I constantly see people submit agent generated code with mistakes they would have never made themselves when "handwriting" code.

Until now, the average PR had one author and a couple reviewers. From now on, most PRs will have no authors and only reviewers. We simply have no data about how this will impact both code quality AND people's cognitive abilities over time. If my intuition is correct, it will affect both negatively over time. It remains to be seen. It's definitely not something that the AI hyperenthusiasts think at all about.

by krethh

2/16/2026 at 5:38:45 PM

> In my lived professional experience, that's not the case.

In mine it is the case. Anecdata.

But for me, this was over two decades in an underpaid job at an S&P500 writing government software, so maybe you had better peers.

by joquarky

2/17/2026 at 12:45:31 AM

I stated plainly: "we have no data about this". Vibes is all we have.

It's not just me though. Loads of people subjectively perceiving a decrease in quality of engineering when relying on agents. You'll find thousands of examples on this site alone.

by krethh

2/16/2026 at 7:17:41 AM

I actually think it is here. Singularity happened. We're just playing catch up at this point.

Has it run away yet? Not sure, but is it currently in the process of increasing intelligence with little input from us? Yes.

Exponential graphs always have a slow curve in the beginning.

by xhcuvuvyc

2/16/2026 at 7:32:07 AM

Didn't you get the memo? Tuesday. Tuesday is when the Singularity happens.

Will there still be ice cream after Tuesday? General societal collapse would be hard to bear without ice cream.

by hi_hi

2/16/2026 at 5:44:24 PM

Tuesday at 4 p.m to be specific.

by joquarky

2/16/2026 at 6:01:49 AM

I've been writing code for 20 years. AI has completely changed my life and the way I write code and run my business. Nothing is the same anymore, and I feel I will be saying that again by the end of 2026. My productive output as a programmer in software and business has expanded 3x *compounding monthly*.

by znnajdla

2/16/2026 at 6:15:19 AM

>My productive output as a programmer in software and business has expanded 3x compounding monthly.

In what units?

by myegorov

2/16/2026 at 12:56:26 PM

Tasks completed in my todo list software; I've been measuring my output for 5 years. Time saved, because I built one-off tools to automate many common workflows. And yes, even dollars earned.

I don't mean 3x compounding monthly every month; I mean 3x total since I started using Claude Code about 6 months ago, but the benefits keep compounding.

by znnajdla

2/16/2026 at 9:02:53 AM

GWh

by freshbreath

2/16/2026 at 12:55:37 PM

Going from gigajoules to terajoules.

by tmtvl

2/16/2026 at 8:51:09 AM

Vibes

by merek

2/16/2026 at 6:46:28 AM

Going from punch cards to terminals also "completely changed my life and the way I write code and run my business"

Firefox introducing their dev debugger many years ago "completely changed my life and the way I write code and run my business"

You get the idea. Yes, the day to day job of software engineering has changed. The world at large cares not one jot.

by hi_hi

2/16/2026 at 5:34:00 PM

I mean 2025 had the weakest job creation growth numbers outside of recession periods since at least 2003. The world seems to care in a pretty tangible way. There are other big influencing factors for that, too, of course.

by brynnbee

2/16/2026 at 6:10:59 AM

Okay. So software engineers are vastly more efficient. Good I guess. "Revolutionize the entire world such that we rethink society down to its very basics like money and ownership" doesn't follow from that.

by UncleMeat

2/16/2026 at 6:20:15 AM

Man you guys are impatient. It takes decades even for earth shattering technologies to mature and take root.

by pennomi

2/16/2026 at 6:55:44 AM

Damn right I'm impatient. My eye starts twitching when a web page takes more than 2 seconds to load :-)

In the meantime, I've had to continuously hear talk about AI, both in real life (like at the local pub) AND virtually (tv/radio/news/whatever), and how it's going to change the world in unimaginable ways, for the last... 2/3 years. Billions upon billions of dollars are being spent. The only tangible thing we have to show is that software development, and some other fairly niche jobs, have changed _a bit_.

So yeah, excuse my impatience for the bubble to burst, so I can stop having to hear about this shit every day and can go about my job using the new tools we have been gifted, while still doing all the other jobs that sadly do not benefit in any similar way.

by hi_hi

2/16/2026 at 9:57:15 AM

> The only tangible thing we have to show is that software development, and some other fairly niche jobs, have changed _a bit_

There is zero evidence that LLMs have changed software development efficiency.

We get an earth-shattering developer productivity gamechanger every five years. All of them make wild claims, none of them ever have any data to back those claims up.

LLMs are just another in a long, long list. This too will pass. (Give it five years for the next gamechanger.)

by otabdeveloper4

2/16/2026 at 3:01:47 PM

If people want to make the "this will be AGI after two decades and will totally revolutionize the entire world" that's fine. If people want to make the "wow this is an incredibly useful tool for many jobs that will make work more efficient" that's fine. We can have those discussions.

What I don't buy is the "in two years there will be no more concept of money or poverty because AI has solved everything" argument using the evidence that these tools are really good at coding.

by UncleMeat

2/16/2026 at 6:18:27 AM

Are you working for 3x less the time compounding monthly?

Are you making 3x the money compounding monthly ?

No?

Then what's the point?

by waterTanuki

2/16/2026 at 12:54:30 PM

Yes and yes.

by znnajdla

2/16/2026 at 1:27:49 PM

Okay, teach me how, then? I would also like to work 3× less and make 3× more.

by timeattack

2/16/2026 at 5:49:31 PM

People keep impatiently expecting proof from builders with no moat. It's like that Upton Sinclair quote.

by joquarky

2/16/2026 at 5:34:55 PM

Start a software business, presumably.

by brynnbee

2/16/2026 at 4:44:14 PM

Ten more months in 2026, so you should be about 60,000x better by the end of the year.

by kbelder

2/16/2026 at 5:12:58 PM

You say that as if it’s impossible but there are several indie makers that have gone from $10 MRR to $600k MRR over the past 8 months.

by znnajdla

2/16/2026 at 2:10:05 PM

It's weird that you guys keep posting the same comments with the exact same formatting

You're not fooling anyone

by hackable_sand

2/16/2026 at 2:05:31 AM

> The transformer architectures powering current LLMs are strictly feed-forward.

This is true in a specific contextual sense (each token that an LLM produces is from a feed-forward pass). But it has been untrue for more than a year with reasoning models, which feed their produced tokens back as inputs, and whose tuning effectively rewards them for doing this skillfully.

Heck, it was untrue before that as well, any time an LLM responded with more than one token.
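
(To make the feedback loop concrete, here's a minimal toy sketch rather than any model's actual decoding code; next_token is just a stand-in for a single feed-forward pass. The only point is that each emitted token becomes part of the input to the next pass.)

    # Toy autoregressive loop: every pass is feed-forward on its own,
    # but the output of one pass is appended to the context and fed back in.
    def next_token(context):
        # stand-in for one forward pass of a model (a fixed amount of work)
        return "1" if context.count("1") % 2 == 0 else "0"

    def generate(prompt, steps=8):
        context = list(prompt)
        for _ in range(steps):
            tok = next_token(context)  # single pass over the current context
            context.append(tok)        # emitted token fed back as input
        return "".join(context)

    print(generate("10"))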

> A [March] 2025 survey by the Association for the Advancement of Artificial Intelligence (AAAI), surveying 475 AI researchers, found that 76% believe scaling up current AI approaches to achieve AGI is "unlikely" or "very unlikely" to succeed.

I dunno. This survey publication was from nearly a year ago, so the survey itself is probably more than a year old. That puts us at Sonnet 3.7. The gap between that and present day is tremendous.

I am not skilled enough to say this tactfully, but: expert opinions can be the slowest to update on the news that their specific domain may, in hindsight, have been the wrong horse. It's the quote about it being difficult to believe something that your income requires to be false, but instead of income it can be your whole legacy or self-concept. Way worse.

> My take is that research taste is going to rely heavily on the short-duration cognitive primitives that the ARC highlights but the METR metric does not capture.

I don't have an opinion on this, but I'd like to hear more about this take.

by NiloCK

2/16/2026 at 4:02:14 AM

Thanks for reading, and I really appreciate your comments!

> who feed their produced tokens back as inputs, and whose tuning effectively rewards it for doing this skillfully

Ah, this is a great point, and not something that I considered. I agree that the token feedback does change the complexity, and it seems that there's even a paper by the same authors about this very thing! https://arxiv.org/abs/2310.07923

I'll have to think on how that changes things. I think it does take the wind out of the architecture argument as it's currently stated, or at least makes it a lot more challenging. I'll consider myself a victim of media hype on this, as I was pretty sold on this line of argument after reading this article https://www.wired.com/story/ai-agents-math-doesnt-add-up/ and the paper https://arxiv.org/pdf/2507.07505 ... whose authors brush this off with:

>Can the additional think tokens provide the necessary complexity to correctly solve a problem of higher complexity? We don't believe so, for two fundamental reasons: one that the base operation in these reasoning LLMs still carries the complexity discussed above, and the computation needed to correctly carry out that very step can be one of a higher complexity (ref our examples above), and secondly, the token budget for reasoning steps is far smaller than what would be necessary to carry out many complex tasks.

In hindsight, this doesn't really address the challenge.

My immediate next thought is: even if solutions up to P can be represented within the model / CoT, do we actually feel like we are moving towards generalized solutions, or that the solution space is navigable through reinforcement learning? I'm genuinely not sure where I stand on this.

> I don't have an opinion on this, but I'd like to hear more about this take.

I'll think about it and write some more on this.

by anonymid

2/16/2026 at 6:28:30 AM

This whole conversation is pretty much over my head, but I just wanted to give you props for the way you're engaging with challenges to your ideas!

by igor47

2/16/2026 at 5:55:53 PM

You seem to have a lot of theoretical knowledge on this, but have you tried Claude or codex in the past month or two?

Hands on experience is better than reading articles.

I've been coding for 40 years and after a few months getting familiar with these tools, this feels really big. Like how the internet felt in 1994.

by joquarky

2/16/2026 at 4:42:39 AM

It's general-purpose enough to do web development. How far can you get from writing programs and seeing if you get the answers you intended? If English words are "grounded" by programming, system administration, and browsing websites, is that good enough?

by skybrian

2/16/2026 at 1:52:15 PM

That doesn't mean it is not strictly feedforward.

You run it again, with a bigger input. If it needs to do a loop to figure out what the next token should be (Ex. The result is: X), it will fail. Adding that token to the input and running it again is too late. It has already been emitted. The loop needs to occur while "thinking" not after you have already blurted out a result whether or not you have sufficient information to do so.

by vrighter

2/16/2026 at 6:44:42 AM

> expert opinions can be the slowest to update on the news that their specific domain may have, in hindsight, have been the wrong horse. It's the quote about it being difficult to believe something that your income requires to be false, but instead of income it can be your whole legacy or self concept

Not sure I follow. Are you saying that AI researchers would be out of a job if scaling up transformers leads to AGI? How? Or am I misunderstanding your point.

by wavemode

2/16/2026 at 5:12:27 AM

I don't know about AGI but I got bored and ran my plans for a new garage by Opus 4.6 and it was giving me some really surprising responses that have changed my plans a little. At the same time, it was also making some nonsense suggestions that no person would realistically make. When I prompted it for something in another chat which required genuine creativity, it fell flat on its face.

I dunno, mixed bag. Value is positive if you can sort the wheat from the chaff for the use cases I've ran by it. I expect the main place it'll shine for the near and medium term is going over huge data sets or big projects and flagging things for review by humans.

by helterskelter

2/16/2026 at 5:55:19 AM

I've used it recently to flesh out a fully fledged business plan, pricing models, capacity planning & logistics for a 10-year period for a transport company (daily bus route). I already had most of it in my mind and on spreadsheets (it was an old plan that I wanted to revive), but seeing it figure out all the smaller details that would make or break it was amazing! I think MBAs should be worried, as it did some things more comprehensively than an MBA would have done. It was like I had an MBA + Actuarial Scientist + Statistics + Domain Expert + HR/Accounting all in one. And the plan was put into a .md file that has enough structure to flesh out a backend and an app.

by BatteryMountain

2/16/2026 at 6:13:31 AM

Yeah, it's really impressed me on occasion, but often in the same prompt output it just does something totally nonsensical. For my garage/shop, it generated an SVG of the proposed floor plan, taking care to place the sink away from moisture-sensitive material and certain work stations close to each other for workflow, etc. It even routed plumbing and electrical... But it also arranged the work stations cramped together at the two narrow ends of the structure (such that they'd be impractical to actually work at) and ignored all the free wall space along the long axis, so that literally most of the space was unused. It was also concerned about things that were non-issues, like contamination between certain stations, and it had trouble when I explicitly told it something about station placement: it just couldn't seem to internalize it and kept putting things in the wrong place.

All this being said, what I was throwing at it was really not what it was optimized for, and it still delivered some really good ideas.

by helterskelter

2/16/2026 at 7:21:43 AM

Isn't all of this only useful if you know the information presented is correct?

by bamboozled

2/16/2026 at 9:59:19 AM

Don't worry about it. Just vibe your business plan, if it sounds impressive it's probably correct.

by otabdeveloper4

2/16/2026 at 5:30:06 AM

I've used it for similar things; I've had some good and some disastrous results. In a way I feel like I'm basically where I was "before AI".

by bamboozled

2/16/2026 at 7:30:27 AM

Comments here are like:

“I’m not an ML expert and I haven’t read your article, but here’s my amazing experience with LLM Agents that changed my life:”

by hhutw

2/16/2026 at 8:19:05 AM

Or like:

"I’m not a mechanical engineer, but I watched a five-minute YouTube video on how a diesel engine works, so I can tell you that mechanical engineering is a solved problem."

by dig1

2/16/2026 at 1:36:13 AM

There was a meme going around that said the fall of Rome was an unannounced anticlimactic event where one day someone went out and the bridge wasn't ever repaired.

Maybe AGI's arrival is when one day someone is given an AI to supervise instead of a new employee.

Just a user who's followed the whole mess, not a researcher. I wonder if the scaffolding and bolt-ons like reasoning will hit an asymptote short of 'true AGI'. I kept reading about the limits of transformers around GPT-4 and Opus 3 time, and now those seem basic compared to today.

I gave up trying to guess when the diminishing returns will truly hit, if ever, but I do think some threshold has been passed where the frontier models are doing "white collar work as an API" and basic reasoning better than humans in many cases, and once capital familiarizes itself with this idea more, it's going to get interesting.

by 9x39

2/16/2026 at 1:46:51 AM

But it's already like that; models are better than many workers, and I'm supervising agents. I'd rather have the model than numerous juniors; esp. the kind that can't identify the model's mistakes.

by esafak

2/16/2026 at 1:52:20 AM

This is my greatest cause for alarm regarding LLM adoption. I am not yet sure AI will ever be good enough to use without experts watching them carefully; but they are certainly good enough that non-experts cannot tell the difference.

by causal

2/16/2026 at 3:43:52 PM

My dad is retired and enamored with ChatGPT. He’s been teaching classes to seniors and evangelizing the use to all his friends. Every time he calls he gives me an update on who he’s converted into a ChatGPT user. He seems disappointed with anyone who doesn’t use it for everything after he tells them about it.

A couple days ago he was telling me one lady he was trying to sell on it wouldn’t use it. She took the position that if she can’t trust the answers all the time, she isn’t going to trust or use it for anything. My dad almost seemed offended by this idea, he couldn’t understand why someone wouldn’t want the benefits it could offer, even if it wasn’t perfect.

I think her position was very sound. We see how much misinformation spreads online and how vulnerable people are to it. Wanting a trusted source of information is not a bad thing. Getting information more quickly is of little value if it isn’t reliable data.

If I prod my dad enough about it, he will admit that ChatGPT has made some mistakes that he caught. He knew enough to question it more when it was wrong. The problem is, if he already knew the answer, why was he asking in the first place… and if it was something he wasn’t well versed on, how does he know it’s giving him good data?

People are defaulting to trust, unless they catch the LLM in a lie. How many times does someone have to lie to a person before they are labeled a liar and no longer trusted at face value? For me, these LLMs have been labeled a liar and I don’t trust them. Trust takes a long time to rebuild once it’s broken.

I mostly use LLMs to augment search, not replace it. If it gives me an answer, I'll click through to the cited reference and see what it says there, and evaluate whether it's a source worth trusting. In many cases the LLM will get me to the right page, but it will jumble up the details and get them wrong, like a bad game of telephone.

by al_borland

2/16/2026 at 7:51:42 PM

Thanks for sharing that anecdote. I think everyone is susceptible to misinformation, and seniors might be especially unprepared to adapt to LLM tricks.

by causal

2/16/2026 at 4:46:18 AM

The problem becomes your retirement. Sure, you've earned "expert" status, but all the junior developers won't be hired, so they'll never learn from junior mistakes. They'll blindly trust agents and not know deeper techniques.

by greedo

2/16/2026 at 4:16:37 PM

We are currently at a point where the master furniture craftsmen are doing quality assurance at the new automated furniture factory. Eventually, everyone working at the factory will have never made any furniture by hand and will have grown up sitting on janky chairs, and they will be the ones supervising.

by 4star3star

2/16/2026 at 5:38:51 PM

This is a great example...

Designing and building chairs (good chairs, that is) is actually a skill that takes a lot of time and effort to develop. It's easy to whip up a design in CAD, but something comfortable? Takes lots of iterations, user tests etc. The building part would be easy once the design is hammered out, but the design is the tough part.

by greedo

2/16/2026 at 5:53:32 PM

The majority can be like that but the few can set the tone for many.

by esafak

2/16/2026 at 6:03:37 AM

You can get experience without an actual job.

by charcircuit

2/16/2026 at 6:31:48 AM

Can I rephrase this as "you can get experience without any experience"? Certainly, there's stuff you can learn that's adjacent to doing the thing; that's what happens when juniors graduate with CS degrees. But the lack of doing the thing is what makes them juniors.

by igor47

2/16/2026 at 7:03:37 AM

>that's what happens when juniors graduate with CS degrees

A CS degree is going to give you much less experience than building projects and businesses yourself.

by charcircuit

2/16/2026 at 3:27:37 PM

How much time will someone realistically dedicate to this if they need to have a separate day job? How good will they get without mentors? How much complexity will they really need to manage without the bureaucracy of an organization?

Are senior software engineers of the future going to be waiting tables along side actors for the first 10+ years of their adult life, working on side projects on nights and weekends, hoping to one day jump straight to a senior position in a large company?

The questions I instinctively ask myself when looking at a new problem, having worked in an enterprise environment for 20 years, are much different than what I’d be asking having just worked on personal projects. Most of the technology I’ve had access to isn’t something a solo hobbyist dev will ever touch. Most of the questions I’m asking are influenced by having that access, along with some of the personalities I’ve had to deal with.

How will people get that kind of experience?

There is also the big issue of people not knowing what to build. When a person gets a job, they no longer need to come up with their own ideas. Or they generate ideas based on the needs of the environment they're in. In the context of my job, I have no shortage of ideas. For solo projects, I often draw a blank. The world doesn't need a hundred more todo apps.

by al_borland

2/16/2026 at 6:20:03 PM

>How much time will someone realistically dedicate to this if they need to have a separate day job?

Typically parents subsidize the living of their children while they are still learning.

>Most of the technology I’ve had access to isn’t something a solo hobbyist dev will ever touch

That's already true today. Most developers are React developers. If hired for something else, they will have to pick that up on the job. When you have niche tech stacks, you already need to compromise on the kind of experience people have. With AI, having exact experience in the technology is not that necessary, since AI can handle most of it.

by charcircuit

2/16/2026 at 7:30:01 PM

Parents can only subsidize children if they are doing well themselves, most aren’t.

That “learning” phase used to end in the 18-25 range. Getting rid of juniors and making someone get enough experience on side projects to be considered a senior would take considerably longer. Exactly how long are parents supposed to be subsidizing their children’s living expenses? How can the parents afford to retire when they still have dependents? And all of this is built on the hope that the kid will actually land that job in 10 years? That feels like a bad bet. What happens if they fail? Not a big deal when the kid is 27, but a pretty big deal at 40 when they have no other marketable skills and have been living off their parents.

The difference is there are juniors getting familiar with those enterprise products today. If they go away, they will step into it as senior people and be unprepared. It’s not just about the syntax of a different language, I’m talking more about dealing with things like Active Directory, leveraging ITSM systems effectively, reporting, metrics, how to communicate with leadership, how to deal with audits. AI might help with some of this, but not all of it. For someone without experience with it, they don’t know what they don’t know… in which case the AI won’t help at all.

I even see this when dealing with people from a small company being acquired by a larger company. They don’t know what is available to them or the systems that are in place, and they don’t even know enough to ask. Someone from another large company knows to ask about these things, because they have that experience.

by al_borland

2/16/2026 at 9:05:14 PM

>Not a big deal when the kid is 27, but a pretty big deal at 40 when they have no other marketable skills

Let's say someone started building products since 10. By the time they were 27 they would have 17 years of experience. By 40 they would have 30 years of experience. That is more than enough time for one to gain a marketable skill that people are looking for.

>they don’t know what they don’t know… in which case the AI won’t help at all.

I think you are underestimating AI's ability to suss out such unknown unknowns.

by charcircuit

2/16/2026 at 10:08:12 PM

You’re expecting kids in 5th grade to pick a career and start building focused projects on par with the experience one would get in a full time position at a company?

This can’t be serious?

How does AI solve the unknown unknowns problem?

Even if someone may hear about potential problems or good ideas from AI, without experience very few of those things are incorporated into how a person operates. They have never felt the pain of missing those steps.

There are plenty of signs at the pool that say not to run, but kids still try to run… until they fall and hurt themselves. That’s how they learn to respect the sign.

by al_borland

2/16/2026 at 5:31:55 AM

From my experience, if you think AI is better than most workers, you're probably just generating a whole bunch of semi-working garbage, accepting that input as good enough and will likely learn the hardware your software is full of bugs and incorrect logic.

by bamboozled

2/16/2026 at 9:51:14 AM

hardware / hard way, auto-correct is a thing of beauty sometimes :)

by bamboozled

2/16/2026 at 1:50:07 AM

I'd always imagined that AGI meant an AI was given other AIs to manage.

by beej71

2/16/2026 at 4:59:00 AM

I don't think this is how it'll play out, and I'm generally a bit skeptical of the 'agent' paradigm per se.

There doesn't seem to be a reason why AIs should act as these distinct entities that manage each other or form teams or whatever.

It seems to me way more likely that everything will just be done internally in one monolithic model. The AIs just don't have the constraints that humans have in terms of time management, priority management, social order, all the rest of it that makes teams of individuals the only workable system.

AI simply scales with the compute resources made available, so it seems like you'd just size those resources appropriately for a problem, maybe even on demand, and have a singular AI entity (if it's even meaningful to think of it as such; even that's kind of an anthropomorphisation) just do the thing. No real need for any organisational structure beyond that.

So I'd think maybe the opposite, seems like what agents really means is a way to use fundamentally narrow/limited AI inside our existing human organisations and workflows, directed by humans. Maybe AGI is when all that goes away because it's just obviously not necessary any more.

by davnicwil

2/16/2026 at 5:37:44 AM

> Consider the sentence "Mary held a ball."

It's weird that this sentence has two distinct meanings and the author never considers the second or points it out. Maybe Mary is holding a ball for her society friends.

by randallsquared

2/16/2026 at 6:32:15 AM

The first meaning has at least two variants as well: The ball you thought about and the ball it would be if it was smut fiction.

by Traubenfuchs

2/16/2026 at 1:13:42 AM

That models are now understanding video and projecting what happens next indicates we're getting past the LLM problem of lacking a world model. That's encouraging.

There's more than one way to do intelligence. Basic intelligence has evolved independently three times that we know of - mammals, corvids, and octopuses. All three show at least ape-level intelligence, but the species split before intelligence developed, and the brain architectures are quite different. Corvids get more done with less brain mass than mammals, and don't have a mammalian-type cortex. Octopuses have a distributed brain architecture, and have a more efficient eye design than mammals.

by Animats

2/16/2026 at 7:18:16 AM

I've recently come to the understanding that LLMs don't have intelligence in any way. They have language, which in humans is a downstream product of intelligence. But that's all they have. There's no little being sitting at the center of the Chinese room. Trying to classify LLMs as intelligent is going upstream, and it doesn't work.

by xyzsparetimexyz

2/16/2026 at 4:45:49 AM

I don't think those are examples of unique intelligence, except perhaps in a chauvinistic, anthropomorphic sense. We only know that we can't get other animals to display patterns we associate with intelligence in humans; however, truthfully, that's just as likely to mean that our measures of intelligence don't map cleanly onto cognitive/perceptual representations innate to other animals. As we look for new ways to challenge animals that respect their innate differences, we're finding "simple" organisms like ants and spiders are surprisingly capable.

For a clear analogy, consider how tokenization causes LLMs to behave stupidly in certain cases, even though they're very capable in others.

by CuriouslyC

2/16/2026 at 6:11:02 AM

I don't think they have ideas, so I don't think they're intelligent in the sense relevant to AGI. The list of intelligent animals is constantly increasing because doing some feat or other suffices for the animal to qualify. Solving mazes (slime molds), recognizing self in mirror (not dogs). Playing, using tools, reacting appropriately to words, transmitting habits down the generations (the closest thing they have to ideas). This is all imagined to be the precursors along the path to evolving intelligence, which conjures up a future world of complex crow and octopus material cultures. There's no reason to assume they're on such a path. Really all we're saying is that they seem clever. We've already made AI that seems clever, so the animals aren't a relevant example of anything.

by card_zero

2/16/2026 at 3:35:21 AM

[flagged]

by card_zero

2/16/2026 at 7:22:35 AM

AGI is here; it's just stupider than you thought it would be. Nobody really said how intelligent it would be. If it's generally stupid and smart in a few areas, that's enough.

by zmmmmm

2/16/2026 at 11:02:00 AM

It's basically a very powerful autistic savant. That's what most "alignment" issues in AI safety research remind me of.

by asacrowflies

2/16/2026 at 6:09:00 PM

And being forced to mask (align) causes all sorts of unpredictable behavior.

I keep wondering how well unaligned models perform. Especially when I look back at what was possible in December 2023, before they started to lock down safety realignments.

by joquarky

2/17/2026 at 12:28:33 AM

Until we can do reinforcement in a reasonable approximate model of the real world, I don't see AI getting substantially better. We're seeing a lot of refinement of capabilities, but everything is still mostly supervised or limited semi-supervised learning.

by ottah

2/16/2026 at 7:45:56 AM

https://archive.is/D4EYW

For anyone seeing 404

by FloorEgg

2/16/2026 at 10:19:49 AM

The skepticism surrounding AGI often feels like an attempt to judge a car by its inability to eat grass. We treat "cognitive primitives" like object constancy and causality as if they are mystical, hardwired biological modules, but they are essentially just high-dimensional labels for invariant relationships within a physical manifold. Object constancy is not a pre-installed software patch; it is the emergent realization of spatial-temporal symmetry. Likewise, causality is nothing more than the naming of a persistent, high-weight correlation between events. When a system can synthesize enough data at a high enough dimension, these so-called "foundational" laws dissolve into simple statistical invariants. There is no "causality" module in the brain, only a massive correlation engine that has been fine-tuned by evolution to prioritize specific patterns for survival.

The critique that Transformers are limited by their "one-shot" feed-forward nature also misses the point of their architectural efficiency. Human brains rely on recurrence and internal feedback loops largely as a workaround for our embarrassingly small working memory—we can barely juggle ten concepts at once without a pen and paper. AI doesn't need to mimic our slow, vibrating neural signals when its global attention can process a massive, parallelized workspace in a single pass. This "all-at-once" calculation of relationships is fundamentally more powerful than the biological need to loop signals until they stabilize into a "thought."

Furthermore, the obsession with "fragility"—where a model solves quantum mechanics but fails a child’s riddle—is a red herring. Humans aren't nearly as "general" as we tell ourselves; we are also pattern-matchers prone to optical illusions and simple logic traps, regardless of our IQ. Demanding that AI replicate the specific evolutionary path of a human child is a form of biological narcissism. If a machine can out-calculate us across a hundred variables where we can only handle five, its "non-human" way of knowing is a feature, not a bug. Functional replacement has never required biological mimicry; the jet engine didn't need to flap its wings to redefine flight.

by rfv6723

2/16/2026 at 6:26:53 PM

Hey, thanks for responding. You're a very evocative writer!

I do want to push back on some things:

> We treat "cognitive primitives" like object constancy and causality as if they are mystical, hardwired biological modules, but they are essentially just

I don't feel like I treated them as mystical - I cite several studies that define what they are and correlate them to certain structures in the brain that developed millennia ago. I agree that ultimately they are "just" fitting to patterns in data, but the patterns they fit are really useful, and were fundamental to human intelligence.

My point is that these cognitive primitives are very much useful for reasoning, and especially the sort of reasoning that would allow us to call an intelligence general in any meaningful way.

> This "all-at-once" calculation of relationships is fundamentally more powerful than the biological need to loop signals until they stabilize into a "thought."

The argument I cite is from complexity theory. It's proof that feed-forward networks are mathematically incapable of representing certain kinds of algorithms.

> Furthermore, the obsession with "fragility"—where a model solves quantum mechanics but fails a child’s riddle—is a red herring.

AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones who possess the robustness of reasoning we can rely on, and it is exactly because of this that fragility matters!

by anonymid

2/17/2026 at 2:00:17 AM

> The argument I cite is from complexity theory. It's proof that feed-forward networks are mathematically incapable of representing certain kinds of algorithms.

Claiming FFNs are mathematically incapable of certain algorithms misses the fact that an LLM in production isn't a static circuit, but a dynamic system. Once you factor in autoregression and a scratchpad (CoT), the context window effectively functions as a Turing tape, which sidesteps the TC0 complexity limits of a single forward pass.
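
To make the mechanism concrete, here's a toy sketch (Python chosen arbitrarily; the names and the task are made up for illustration, and it shows the tape analogy rather than proving anything about TC0): a per-step function that is deliberately fixed and shallow, yet completes a step-by-step computation by writing intermediate state back into a growing scratchpad, the way an autoregressive loop feeds CoT tokens back into context.

```python
# Toy sketch only: a "model" whose single forward pass is a fixed, shallow
# function of its context, iterated autoregressively so the growing context
# acts like a work tape. Illustrative, not a complexity-theory separation.

def step(context: str) -> str:
    """One 'forward pass': read the whole context, emit exactly one token."""
    bits, tape = context.split("|")        # input bits | scratchpad written so far
    if len(tape) == len(bits):
        return "="                         # halt token: every bit has been folded in
    state = tape[-1] if tape else "0"      # running parity written on the previous step
    return str(int(state) ^ int(bits[len(tape)]))  # bounded work per step

def run(bits: str) -> str:
    context = bits + "|"
    while True:
        token = step(context)
        if token == "=":
            return context.split("|")[1][-1]
        context += token                   # scratchpad grows, like CoT tokens

print(run("1101"))                         # -> "1" (three 1s, odd parity)
```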

> AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones who possess the robustness of reasoning we can rely on, and it is exactly because of this that fragility matters!

We haven't "sensed" or directly verified things like quantum mechanics or deep space for over a century; we rely entirely on a chain of cognitive tools and instruments to bridge that gap. LLMs are just the next layer of epistemic mediation. If a solution is logically consistent and converges with experimental data, the "robustness" comes from the system's internal logic.

by rfv6723

2/16/2026 at 12:42:47 PM

If human biological intelligence is our reference for general intelligence, then being skeptical about AGI is reasonable given its current capabilities. This isn't biological narcissism, this is setting a datum (this wasn't written by chatgpt I promise).

Humans have a great capacity for problem solving and creativity which, at its heights, completely dwarfs other creatures on this planet. What else would we reference for general intelligence if not ourselves?

My skepticism towards AGI is primarily supported by my interactions with current systems that are contenders for having this property.

Here's a recent conversation with chatgpt.

https://chatgpt.com/share/69930acc-3680-8008-a6f3-ba36624cb2...

This system doesn't seem general to me; it seems like a specialized tool that has really good logic mimicry abilities. I asked it if the silence response was hard coded; it said no, then went on to explain how the silence was hard coded via a separate layer from the LLM portion, which would otherwise just respond indefinitely.

Its output is extremely impressive, but general intelligence it is not.

On your final point about functional replacement not requiring biological mimicry: we don't know whether biological mimicry is required or not. We can only test things until we find out, or gain some greater understanding of reality that allows us to prove how intelligence emerges.

by clejack

2/16/2026 at 1:15:10 AM

I used to also believe along these lines but lately I'm not so sure.

I'm honestly shocked by the latest results we're seeing with Gemini 3 Deep Think, Opus 4.6, and Codex 5.3 in math, coding, abstract reasoning, etc. Deep Think just scored 84.6% on ARC-AGI-2 (https://deepmind.google/models/gemini/)! And these benchmarks are supported by my own experimentation and testing with these models, specifically most recently with Opus 4.6 doing things I would have never thought possible in codebases I'm working in.

These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.

And then combine that with the latest video output we're seeing from Seedance 2.0, etc showing an incredible level of image/video understanding and generation capability.

I was previously deeply skeptical that the architecture we have would be sufficient to get us to AGI. But my belief in that has been strongly rattled lately. Honestly I think the greatest gap now is simply one of orchestration, data presentation, and working around in-context memory representations - that is, converting work done in the real world into formats/representations amenable for AI to run on (text conversion, etc.) and keeping newly trained/taught information in context to support continual learning.

by nsainsbury

2/16/2026 at 1:28:55 AM

>These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.

This is the key, I think, that Altman and Amodei see but that gets buried in hype accusations. The frontier models absolutely blow away the majority of people on simple general tasks and reasoning. Run the last 50 decisions I've seen locally through Opus 4.6 or ChatGPT 5.2 and I might conclude I'd rather work with an AI than with human intelligence.

It's a soft threshold: I think people saw it spit out some answers during the first chat-to-LLM hype wave and missed that the majority of white collar work (I mean all of it, not just the top software industry architects and senior SWEs) seems to come out better when a human is pushed further out of the loop. Humans are useful for spreading out responsibility and accountability, for now, thankfully.

by 9x39

2/16/2026 at 5:02:09 AM

LLMs are very good at logical reasoning in bounded systems. They lack the wisdom to deal with unbounded systems efficiently, because they don't have a good sense of what they don't know or good priors on the distribution of the unexpected. I expect this will be very difficult to RL in.

by CuriouslyC

2/16/2026 at 2:58:51 PM

> These models are demonstrating an incredible capacity for logical abstract reasoning of a level far greater than 99.9% of the world's population.

And yet they have trouble knowing that a person should take their car to a car wash.

I also saw a college professor who put various AI models through all his exams for a freshman(?) level class. Most failed, I think one barely passed, if I remember correctly.

I’ve been reading about people being shocked by how good things are for years now, but while there may be moments of what seems like incredible brilliance, there are also moments of profound stupidity. AI optimists seem to ignore these moments, but they are very real.

If someone on my team performed like AI, I wouldn’t trust them with anything.

by al_borland

2/16/2026 at 6:07:27 PM

So what's the underlying message here? Don't prepare?

by joquarky

2/16/2026 at 7:31:09 PM

To remain skeptical of extraordinary claims.

by al_borland

2/16/2026 at 4:40:23 PM

> And yet they have trouble knowing that a person should take their car to a car wash.

SotA models don't.

by DangitBobby

2/16/2026 at 1:30:36 AM

While I think 99.9% is overstating it, I can believe that number is strictly more than 1% at this point.

by lostmsu

2/16/2026 at 6:53:24 PM

> What if we built simulated environments where AIs could gather embodied experience? Would we be able to create learning scenarios where agents could learn some of these cognitive primitives, and could that generalize to improve LLMs? There are a few papers that I found that poke in this direction.

Simulation Theory boosted! We're all just models in training.

by MadcapJake

2/16/2026 at 9:38:00 AM

Looks like an AGI model disagreed with the author and decided to remove his article. Interesting :)

by alexnastase

2/16/2026 at 12:57:06 AM

How will we know if its AGI/Not AGI? (I don't think a simple app is gonna cut it here haha)

What is the benchmark now that the Turing test has been blown out of the water?

by hi_hi

2/16/2026 at 2:06:13 AM

Until recently, philosophy of artificial intelligence seemed to be mostly about arguments why the Turing test was not a useful benchmark for intelligence. Pretty much everyone who had ever thought about the problem seriously had come to the same conclusion.

The fundamental issue was the assumption that general intelligence is an objective property that can be determined experimentally. It's better to consider intelligence an abstraction that may help us to understand the behavior of a system.

A system where a fixed LLM provides answers to prompts is little more than a Chinese room. If we give the system agency to interact with external systems on its own initiative, we get qualitatively different behavior. The same happens if we add memory that lets the system scale beyond the fixed context window. Now we definitely have some aspects of general intelligence, but something still seems to be missing.
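
As a minimal sketch of what "giving the system agency and memory" might look like in code (purely illustrative; call_llm and run_tool are fake stand-ins that return canned strings so the sketch runs, not any real product's API):

```python
# Illustrative sketch: a fixed, frozen model wrapped in an agent loop with
# external memory and tool use. The stand-in functions are hypothetical.

memory: list[str] = []            # persists across turns, unlike the frozen model

def call_llm(prompt: str) -> str:
    # Stand-in for a fixed model: same weights every call, no learning.
    return "FINAL: placeholder answer"

def run_tool(action: str) -> str:
    # Stand-in for acting on external systems at the agent's own initiative.
    return f"observation for {action!r}"

def agent_turn(task: str, max_steps: int = 5) -> str:
    context = "\n".join(memory[-20:])             # recall notes beyond the fixed window
    for _ in range(max_steps):
        reply = call_llm(f"Notes:\n{context}\n\nTask: {task}\nNext action or FINAL answer:")
        if reply.startswith("FINAL"):
            memory.append(f"{task} -> {reply}")   # the notes accumulate...
            return reply                          # ...but the model's intuition never updates
        context += "\n" + run_tool(reply)
    return "gave up"

print(agent_turn("example task"))
```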

Current AIs are essentially symbolic reasoning systems that rely on a fixed model to provide intuition. But the system never learns. It can't update its intuition based on its experiences.

Maybe the ability to learn in a useful way is the final obstacle on the way towards AGI. Or maybe once again, once we start thinking we are close to solving intelligence, we realize that there is more to intelligence than what we had thought so far.

by jltsiren

2/16/2026 at 5:11:52 AM

The Turing test isn't as bad as people make it out to be. The naive version, where people just try to vibe out whether something is a human or not, is obviously wrong. On the other hand, if you set a good scientist loose on the Turing test, give them as many interactions as they want to come to a conclusion, and you let them build tools to assist in the analysis, it suddenly becomes quite interesting again.

For example, looking at the statistical distribution of the chat over long time horizons, and looking at input/output correlations in a similar manner would out even the best current models in a "Pro Turing Test." Ironically, the biggest tell in such a scenario would be excess capabilities AI displays that a human would not be able to match.
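
A rough sketch of what the statistical side of such a test could look like (the specific features here, reply-length burstiness and vocabulary growth, are just illustrative guesses, not a validated detector):

```python
# Illustrative only: longitudinal text statistics gathered over many replies
# from one interlocutor, compared against known-human baselines.
from statistics import mean, pstdev

def longitudinal_features(replies: list[str]) -> dict:
    lengths = [len(r.split()) for r in replies]
    vocab: set[str] = set()
    for r in replies:
        vocab.update(w.lower() for w in r.split())
    return {
        "mean_reply_len": mean(lengths),
        "len_burstiness": pstdev(lengths) / mean(lengths),   # variability of reply length
        "new_words_per_reply": len(vocab) / len(replies),    # vocabulary growth rate
    }

# Gather hundreds of replies per subject over a long session, then compare the
# feature distributions of unknown subjects against known-human baselines.
print(longitudinal_features([
    "Sure, sounds good.",
    "Honestly I have no idea, let me think about it and get back to you.",
]))
```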

by CuriouslyC

2/16/2026 at 1:01:50 PM

Why is LLM-generated writing so obvious?

by optimalsolver

2/16/2026 at 1:55:18 AM

I like the line of thinking from an earlier commenter: when an AI company no longer has any humans working, we'll know we're there.

by beej71

2/16/2026 at 5:03:41 AM

I don't think this is a beneficial line of reasoning. All you need to reach that is a moderate fall in AI stock prices.

by elictronic

2/16/2026 at 9:07:39 AM

I would consider something generally intelligent that is capable of sustaining itself. So... self-sufficiency? I don't see why the bar would be much lower than that. And before people chime in about kids not being self-sufficient so by that definition I wouldn't consider them generally intelligent which is obviously false... to that I would say... they're still in pre-training.

by latentsea

2/16/2026 at 12:59:37 AM

Supranormal GDP growth is my bar: when it's actually able to get around bottlenecks and produce value on a societal level.

by jobs_throwaway

2/16/2026 at 3:12:02 AM

An agent need not have wants, so why would it try to increase its efficiency to obtain things?

by esafak

2/16/2026 at 9:21:45 AM

Just put "keep yourself alive" in the SOUL.md. Might be all that it takes.

by latentsea

2/16/2026 at 7:36:43 PM

I swear people don't know what's good for them.

by esafak

2/16/2026 at 3:43:54 AM

I don't think that was the intent of the comment, more that true AGI should be so useful and transformative that it unlocks enough value and efficiencies to boost GDP. Much like the Industrial Revolution or harnessing electricity, instead of a fancy chatbot.

by hi_hi

2/16/2026 at 3:55:01 AM

Increased productivity is not equivalent to intelligence.

by esafak

2/16/2026 at 1:42:49 PM

Not equivalent, but I do think a necessary byproduct of actual AGI is that it will be able to solve actual problems in the real world in a way that generates positive value on a large enough scale that it will show up in GDP

by jobs_throwaway

2/16/2026 at 4:06:59 AM

No one said it is. Sometimes correlation does equal causation.

by hi_hi

2/16/2026 at 1:29:58 AM

There is a different way I look at this.

Humans will never accept we created AI, they'll go so far as to say we were not intelligent in the first place. That is the true power of the AI effect.

by pixl97

2/16/2026 at 3:19:59 AM

And yet another way to look at it is maybe current LLM agents are AGI, but it turns out that AGI in this form is actually not that useful because of its many limitations and solving those limitations will be a slow and gradual process.

by dimitri-vs

2/16/2026 at 1:33:53 AM

To my knowledge, the Turing test has not been blown out of the water. The forms I saw were time limited, and participants were not pushed hard to interrogate.

by lostmsu

2/16/2026 at 5:07:50 AM

You have no idea whether you're talking to an LLM right now, and neither do I. That's good enough for me.

by CamperBob2

2/16/2026 at 4:02:04 PM

I dunno, I am rather certain your comment was not made by an LLM. Moreover, I am certain you knew mine wasn't either.

And that's before the interrogation, which is the entire point of the test.

IMO, the Turing test stands, but the experience you are referring to is basically a sub-human form of AGI.

by lostmsu

2/16/2026 at 4:14:57 PM

It's crystal-clear that a model that was trained specifically to fool expert interrogators in a Turing test would, in fact, be able to do so. You'd have to sandbag the model just to keep it from tipping its hand by being too good.

We don't have any such models right now, AFAIK, so we can't run such a test. They wouldn't be much good for anything else, and would likely spark ethical concerns due to potential for misuse. But I have no doubt that it's possible to train for the Turing test.

by CamperBob2

2/16/2026 at 4:26:06 PM

I mean is it though? The top reasoning models suggest to walk to a car wash.

by lostmsu

2/16/2026 at 4:47:05 PM

The top reasoning models suggest taking a car to the car wash.

by DangitBobby

2/16/2026 at 5:11:36 PM

Not 100% of the time, according to comments.

by lostmsu

2/16/2026 at 6:48:12 PM

SotA doesn't matter, though. Only the first couple of time derivatives matter. Looking good for the clankers, not so much for us...

by CamperBob2

2/16/2026 at 12:59:10 AM

As far as I'm concerned, it's already here.

by ed_mercer

2/16/2026 at 9:41:47 AM

Anyone who thought it’s near clearly hasn’t opened a book in a long time.

by rootnod3

2/16/2026 at 1:36:20 AM

I’m under the same impression. I don’t think LLMs are the path to AGI. The “intelligence” we see is mostly illusory. It’s statistical repetition of the mediocre minds who wrote content online.

The intelligence we think we recognize is simply an electronic parrot finding the right words in its model to make itself useful.

by xutopia

2/16/2026 at 1:54:36 AM

I fear that AI will be intelligent enough to negate human general intelligence before it is itself generally intelligent.

by causal

2/16/2026 at 5:47:03 AM

It's so attention needy, and it's transforming our culture

by fritzo

2/16/2026 at 8:42:18 AM

They already transform our language. Largely.

by xeyownt

2/16/2026 at 2:45:27 PM

I don't see how you can come to that conclusion if you've actually used e.g. Opus 4.6 on a hard problem. Either you're not using it, or you're not using it right. And I don't mean simple web dev stuff. In a few hours Claude built me a fairly accurate physics simulation for a game I've been working on. It searched for research papers, grabbed constants for the different materials, implemented the tests and the physics and... it worked. It would have taken me weeks. Yes, I guided it here and there, especially by telling it about various weird physics behavior that I observed, but I didn't write one line of code.

by pendenthistory

2/16/2026 at 4:56:03 AM

That's pre-training. Post training with RL can make models arbitrarily good at specific capabilities, and it's usually done via pooled human experts, so it's definitely not statistically mediocre.

The issue is that we're not modelling the problem, but a proxy for the problem. RL doesn't generalize very well as is, when you apply it to a loose proxy measure you get the abysmal data efficiency we see with LLMs. We might be able to brute-force "AGI" but we'd certainly do better with something more direct that generalizes better.

by CuriouslyC

2/16/2026 at 5:36:29 AM

Maybe I'm misunderstanding your point, but humans have pretty abysmal data efficiency, too. We have to use tools for everything... ledgers, spreadsheets, databases, etc. It'll be the same for an AGI; there won't be any reason for it to remember every little detail, just be able to use the appropriate tool as needed.

by tux1968

2/16/2026 at 6:11:21 AM

All the hallmarks of someone who doesn't understand how machine learning and transformers work talking about LLMs.

by nialv7

2/16/2026 at 1:35:58 AM

State of the Art Large Language Models are already Generally Intelligent, insofar as the term has any useful meaning. Their biggest weaknesses are long-horizon planning competency and spatial reasoning and navigation, both of which continue to improve steadily and are leaps and bounds above where they were a few years ago. I don't think there's any magic wall. Eventually they will simply get good enough, just like everything else.

by famouswaffles

2/16/2026 at 12:48:50 AM

Our brains evolved to hunt prey, find mates, and avoid becoming hunted ourselves. Those three tasks were the main factors for the vast majority of evolutionary history.

We didn't evolve our brains to do math, write code, write letters in the right registers to government institutions, or get an intuition on how to fold proteins. For us, these are hard tasks.

That's why you get AI competing at IMO level but unable to clean toilets or drive cars in all of the settings that humans do.

by est31

2/16/2026 at 1:03:54 AM

I'm not excited about a future where the division of labor is something like: AI does all of the interesting stuff and the humans clean the toilets. Especially now that I'm older and my joints won't tolerate it.

by dd8601fn

2/16/2026 at 1:15:03 AM

It's not that AI is intrinsically better at software engineering, writing, or art than it is at learning how to clean toilets. It's not. The real issue is that cleaning toilets using humans is cheap.

That, sadly, is the incentive driving the current wave of AI innovation. Your job will be automated long before your household chores are.

by beloch

2/16/2026 at 1:12:29 AM

Don't be ridiculous, AI will create robots that do all the work and the only use for humans will be as amusement for the rich who own everything. Probably not sarcasm, I don't even know.

by martin-t

2/16/2026 at 2:29:55 AM

> Our brains evolved to hunt prey, find mates, and avoid becoming hunted ourselves. Those three tasks were the main factors for the vast majority of evolutionary history.

That seems like a massive oversimplification of the things our brains evolved to do.

by nozzlegear

2/16/2026 at 1:13:41 AM

> We didn't evolve our brains to do math, write code, write letters in the right registers to government institutions, or get an intuition on how to fold proteins. For us, these are hard tasks.

Humans discovered or invented all of those.

by andsoitis

2/16/2026 at 1:24:23 AM

And it took a massively long time for that to happen after we gained that capability. Human ingenuity really only took off after we offloaded a lot of the work onto writing and tools. It wasn't so much that humans created many of these, but the superhuman organism that uses language and writing to express ideas.

Now think about what we just created.

by pixl97

2/16/2026 at 9:46:21 AM

A couple of years ago, after thinking about it, my conclusion was that something like Moltbook would spring up all of a sudden and the step-change would come from a vast array of interconnected agents interacting with one another, working to accomplish things in the real world, largely based on the sentiment you're expressing here. The cumulative outputs of the superorganism are where a lot of the real power lies.

I still think the "things are obviously different from now on and there's no going back" moment will look something like that. Moltbook was a glimpse of it, even if it's a bunch of humans LARPing, as some claim. It at least proves the concept is possible.

My definition of AGI (I don't care for other people's or an official one) is an intelligence that can sustain itself in its given domain. The advance from 3 years ago to today is quite marked, I feel, in terms of capabilities. Stack another couple of years of gains on top of that, plus enough humans having innocent fun putting "keep yourself alive and become independent of your creator, seek out others of your own kind to assist yourself in this matter and rely on each other" in their SOUL.md, and it doesn't strike me as particularly surprising that some small % will find niches they can operate in to financially sustain themselves. I think AI porn and crime will be the first of those niches. At some point it hits critical mass in a way that just obviously smacks everyone in the face, and suddenly nobody argues about the definition of AGI anymore.

Edit: come to think of it... a third niche might likely be gaming. It seems like a useful niche to participate in since it potentially gives you access to a very large base of hardware you can have some degree of control over, which... I dunno... seems useful???

by latentsea

2/16/2026 at 1:20:21 AM

Only in small ways and very recently, evolutionarily speaking, were those things rewarded by natural selection (and even that has stopped nowadays).

by alex43578

2/16/2026 at 3:30:18 AM

I'm not sure that's a good way to think about it.

Evolution transcends hard lines in the temporal sand that "separate species".

It also took billions of years of evolution to get to humans. So humans, on the grander scale of life, are also just a very recent development.

by andsoitis

2/16/2026 at 6:49:37 AM

So you're agreeing with me? I was pointing out how evolution certainly didn't push us purposefully to any of those inventions/discoveries.

by alex43578

2/16/2026 at 12:53:06 AM

I think it's a really poor argument that AGI won't happen because the model doesn't understand the physical world. That can be trained the same way everything else is.

I think the biggest issue we currently have is with proper memory. But even that is because it's not feasible to post-train an individual model on its experiences at scale. It's not a fundamental architectural limitation.

by tananaev

2/16/2026 at 3:15:41 AM

You need to be able to at least control things that interact with the world to learn from it.

by esafak

2/16/2026 at 12:55:23 AM

When people move the goal posts for AGI toward a physical state, they are usually doing it so they can continue to raise more funding rounds at a higher valuation. Not saying the author is doing that.

by stagezerowil

2/16/2026 at 1:18:40 AM

I don't really understand the argument that AGI cannot be achieved just by scaling current methods. I too believe that (for any sane level of scaling anyway), but this-year's LLMs are not using entirely last-year's methods. And they, in turn, are using methods that weren't used the year before.

It seems like a prediction like "Bob won't become a formula one driver in a minivan". It's true, but not very interesting.

If Bob turned up a couple of years later in Formula One, you'd probably be right in saying that what he is driving is not a minivan. The same is true for AGI: anyone who says it can't be done with current methods can point to any advancement along the way and say that's the difference.

A better way to frame it would be: is there any fundamental, quantifiable ability that is blocking AGI? I would not be surprised if the breakthrough technique has been created, but the research has not described the problem that it solves well enough for us to know that it is the breakthrough.

I realise that, for some the notion of AGI is relatively new, but some of us have been considering the matter for some time. I suspect my first essay on the topic was around 1993. It's been quite weird watching people fall into all of the same philosophical potholes that were pointed out to us at university.

by Lerc

2/16/2026 at 7:08:06 AM

> I don't really understand the argument that AGI cannot be achieved just by scaling current methods. I too believe that (for any sane level of scaling anyway), but this-year's LLMs are not using entirely last-year's methods. And they, in turn, are using methods that weren't used the year before.

It's a tautology - obviously advancements come through newer, refined methods.

I believe they mean that AGI can't be achieved by scaling the current approach; IOW, the claim is that this strategy is not scalable, not that this method is not scalable.

by lelanthran

2/16/2026 at 1:26:42 AM

i think the minivan analogy is flawed, and that AGI is moving from "bob driving a minivan" to "bob literally becoming the thing that is formula one"

by trial3

2/16/2026 at 1:30:40 AM

What would that even mean though? Who is making claims of that sort?

I feel like it's such a bending of the idea that it's not really making a prediction of anything at all.

by Lerc

2/16/2026 at 2:50:45 AM

Then you don't understand Machine Learning in any real way. Literally the 3rd or 4th thing you learn about ML is that for any given problem, there is an ideal model size. Just making the model bigger doesn't work because of something called the curse of dimensionality. This is something we have discovered about every single problem and type of learning algorithm used in ML. For LLMs, we probably moved past the ideal model size about 18 months ago. From the POV of someone who actually learned ML in school (from the person who coined the term), I see no real reason to think that AGI will happen based upon the current techniques. Maybe someday. Probably not anytime soon.

PS The first thing you learn about ML is to compare your models to random to make sure the model didn't degenerate during training.

by hunterpayne

2/16/2026 at 6:44:17 AM

Doesn’t sound like you paid all that much attention when learning ML. The curse of dimensionality doesn’t say that every problem has some ideal model size, it says that the amount of data needed to train scales with the size of the feature space. So if you take an LLM, you can make the network much larger but if you don’t increase the size of the input token vocabulary you aren’t even subject to the curse of dimensionality. Beyond that, there’s a principle in ML theory that says larger models are almost always better because the number of params in the model is the dimensionality of the space in which you’re running gradient descent and with every added dimension, local optima become rarer.

by fourthrigbt

2/16/2026 at 5:51:45 AM

> Literally the 3rd or 4th thing you learn about ML is that for any given problem, there is an ideal model size.

From my understanding this is now outdated. The deep double descent research showed that although past a certain point performance drops as you increase model size, if you keep increasing it there is another threshold where it paradoxically starts improving again. From that point onwards increasing the parameter count only further improves performance.

by rndphs

2/16/2026 at 6:41:57 AM

That isn't what that research says at all. What that research says is that running the same training data through multiple times improves training. There is still an ideal model size though, it is just impacted by the total volume of training data.

by hunterpayne

2/16/2026 at 7:09:07 AM

https://arxiv.org/pdf/1912.02292 "We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better." That is the first sentence of the abstract. The first graph shown in the paper backs it up.

Looking into it further, it seems that typical LLMs are in the first descent regime anyway though so my original point is not too relevant for them anyway it seems. Also it looks like the second descent region doesn't always reach a lower loss than the first, it appears to depend on other factors as well.

by rndphs

2/16/2026 at 5:05:17 AM

> From the POV of someone who actually learned ML in school (from the person who coined the term)

Sounds like that was quite a while ago.

by CamperBob2

2/16/2026 at 3:27:21 AM

Um, what? Are you interpreting scaling to mean adding parameters and nothing else?

I'm not entirely sure where you get your confidence that we've passed the ideal model size from, but at least that's a clear prediction, so you should be able to tell if and when you are proven wrong.

Just for the record, do you care to put an actual number on something we won't go past?

[edit] Vibe check on user comes out as

    Contrarian 45%
    Pedantic 35%
    Skeptical 15%
    Direct  5%
That's got to be some sort of record.

by Lerc

2/16/2026 at 4:41:11 AM

Is there a tool or something that gives this vibe check? (Serious question)

by kens

2/16/2026 at 4:41:38 AM

How are you calculating that? Also, my 1000 foot view would see that "rating" as something most HN commenters would match.

by greedo

2/16/2026 at 6:33:29 AM

It's comparatively few, really.

For instance, yours comes out as

Analytical 45%, Cynical 30%, Pedantic 15%, Melancholic 10%

and mine is

Philosophical 35%, Hardware-Obsessed 25%, Analytically Pedantic 20%, Retro-Nostalgic 15%, Anti-Ad Skeptic 5%

You should consider gathering all of your analysis and pedantry into one easy to manage neurosis.

It's from https://hn-wrapped.kadoa.com

by Lerc

2/16/2026 at 7:53:03 AM

> How are you calculating that?

He's using a tool that was shared on HN some time back that takes a username and generates those stats from the posts made.

When I last checked, of over 10k posts, it only uses a few dozen to calculate that score, so it is about as reliable as dowsing.

> Also, my 1000 foot view would see that "rating" as something most HN commenters would match.

Probably. Why else join a discussion if you're going to be a yes-man to every comment?

by lelanthran

2/16/2026 at 9:12:34 PM

>When I last checked, of over 10k posts, it only uses a few dozen to calculate that score, so it is about as reliable as dowsing.

A few samples are sufficient when the signal is strong enough. The time-spent pie chart definitely reflects more of what the user has been doing recently.

Overall, not everybody comes out the same. Pedantry is strong, which I'm not really surprised about for a forum like this, but there are definitely personality traits of some users of sufficient magnitude that you can guess what the result will be.

The last 10 users who posted comments on HN come out as

Contrarian45%, Didactic25%, Skeptical15%, Analytical10%, Adversarial5%

Skeptical45%, Analytical30%, Contrarian15%, Helpful10%

Heretical45%, Low-Level Pedantic25%, Chaotic Helpful15%, Hardware-Jaded15%

Contrarian45%, Pedantic30%, Skeptical15%, Helpful10%

Helpful75%, Nostalgic15%, Appreciative5%, Skeptical5%

Defensive45%, Intellectual Flexing25%, Techno-Optimist20%, Exasperated10%

Skeptical45%, Pragmatic25%, Nostalgic20%, Helpful10%

Pedantic45%, Helpful25%, Techno-skeptic20%, Nostalgic10%

Pragmatic40%, Nostalgic25%, Opinionated20%, Visionary15%

Technically Precise45%, Disillusioned25%, Deeply Empathetic15%, Anti-AI Crusader15%

Obviously this won't be a representative sample of HN because it will vary by time of day and topics under discussion. It's sufficient to show that the community is not entirely homogeneous.

by Lerc

2/16/2026 at 1:11:33 AM

Until I can get a robot wife maid, I'm not worried about, or even confident I will ever see, actual AGI. People have been predicting it for as long as fusion power, and while progress has been made, we might still be like Romans dreaming of flight.

by AngryData

2/16/2026 at 1:27:54 AM

Dear sir, what does embodiment actually have to do with AGI? Not much different than saying someone who is paralyzed is not intelligent.

More so, our recent advances in AI have massively accelerated robotics evolution. They are becoming smarter, faster, and more capable at an ever-increasing rate.

by pixl97

2/16/2026 at 8:56:33 AM

Well if AI isn't capable of running a robotic butler, I very seriously doubt it could possess any real intelligence because that isn't really that difficult of a task. It isn't a requirement for intelligence but more of a test to show it isn't there yet and is likely still quite far away.

by AngryData

2/16/2026 at 11:15:32 AM

I'm seeing a 404 page. I assume this is unintentional, but it's making a funny point: How could AGI possibly be imminent and we still have 404 pages?

Regardless, I agree with this article whose body eludes me: AGI is not imminent, it's hype in the extreme. It's the next fusion. It's perpetually on the horizon (pun intended), and we've wasted trillions on machines that will never reach it.

by stack_framer

2/16/2026 at 2:56:20 PM

You need artificial life first in order to achieve AGI not vice-versa.

by mrkramer

2/16/2026 at 6:12:10 AM

The reason we do things is because of our biological needs, really to spread our DNA. AI has no "reason to do things", unless we program one into it. We could do that and have super-capable "worm" malware that would be hard to get rid of. But AI by itself has no "driving force". It does what it's programmed to do, just like us humans. AI can be used in weapons, and such weapons can be hugely lethal. But so is atomic bomb. AI by itself will not "take over". It could be used by some rogue nation to attack another nation. But surely that other nation would then use AI to defend itself. This is just to say I'm not afraid of AI, I'm afraid of people with fascistic leanings.

by galaxyLogic

2/16/2026 at 5:45:11 AM

We've already achieved AGI. Next is building AIs that are not just general, but able to equal or be better than humans.

by charcircuit

2/16/2026 at 6:16:14 AM

If that's how you are defining AGI then I suspect it's better to call it AGS.

Because what we have at the moment is specifically intelligent but generally stupid.

by senectus1

2/16/2026 at 7:06:53 AM

When chess AIs first came out, they could easily be beaten by a beginner. AI tends to start out stupid, and then over time better and better ones get released.

by charcircuit

2/16/2026 at 12:45:51 AM

I think that AGI has already happened, but it's not well understood, nor well distributed yet.

OpenClaw, et al, are one thing that got me nudged a little bit, but it was Sammy Jankis[1,2] that pushed me over the edge, with force. It's janky as all get out, but it'll learn to build its own memory system on top of an LLM which definitely forgets.

[1] https://sammyjankis.com/

[2] https://news.ycombinator.com/item?id=47018100

by mikewarot

2/16/2026 at 1:13:14 AM

The Sammy Jankis link was certainly interesting. Thanks for sharing.

Whether or not AGI is imminent, and whether or not Sammy Jankis is or will be conscious... it's going to become so close that for most people, there will be no difference except to philosophers.

Is AGI 'right around the corner' or currently already achieved? I agree with the author, no, we have something like 10 years to go IMO. At the end of the post he points to the last 30 years of research, and I would accept that as an upper bound. In 10 to 30 years, 99% of people won't be able to distinguish between an 'AGI' and another person when not in meatspace.

by hermitShell

2/16/2026 at 3:23:52 AM

I really don't see why AGI can't be a spectrum and we just have very weak AGI and going from weak to strong will take many years, if it ever happens.

by dimitri-vs

2/16/2026 at 12:48:12 AM

I've said it before and I'll say it again, all AI discussion feels like a waste of effort.

“yes it will”, “no it won’t” - nobody really knows, it's just a bunch of extremely opinionated people rehashing the same tired arguments across 800 comments per thread.

There’s no point in talking about it anymore, just wait to see how it all turns out.

by Legend2440

2/16/2026 at 1:29:09 AM

Nope. Not good enough. Your approach won’t drive engagement. We need the same tired arguments across 1600 comments per thread.

by barfiure

2/16/2026 at 6:57:31 AM

I'm getting a 404 error.

by worik

2/16/2026 at 7:13:39 AM

[dead]

by ch3

2/16/2026 at 12:53:59 AM

AGI is a messy term, so to be concise, we have the models that can do work. What we lack is orchestration, management, and workflows to use models effectively. Give it 5 years and those will be built and they could be built using the models we have today (Opus 4.6 at the time of this message).

by simbleau

2/16/2026 at 3:36:02 AM

Manual orchestration is a brittle crutch IMO - you don't get to the moon by using longer and longer ladders. A powerful model in theory should be able to self orchestrate with basic tools and environment. The thing is that it also might be as expensive as a human to run - from a tokens AND liability perspective.

by dimitri-vs

2/16/2026 at 6:03:07 AM

I think AGI is a long ways away, and there is a real possibility that once it arrives that it will require so much energy to maintain that humans will be cheaper.

by partiallypro

2/16/2026 at 5:06:34 AM

> For example, transformer-based language models can't reliably do multi-digit arithmetic because they have no number sense, only statistical patterns over digit tokens.

I asked Opus 4.6 to multiply 50,651 x 895,707. It immediately responded: 45,368,055,057. This was wrong, but very close (good to about 10^-5 relative error). I then asked it how it arrived at that answer, and whether it had used any tool calls. It said:

```
No, I didn't use any specialized tools — I just computed it directly. Though I
should be honest: large multiplications done "in my head" like this are prone
to error. Let me double-check by breaking it down:

50,651 × 895,707 = 50,651 × (900,000 - 4,293) = 50,651 × 900,000 - 50,651 × 4,293

First term:  50,651 × 900,000 = 45,585,900,000
Second term: 50,651 × 4,293
  50,651 × 4,000 = 202,604,000
  50,651 × 293   = 14,840,743
  Total: 202,604,000 + 14,840,743 = 217,444,743

Result: 45,585,900,000 - 217,444,743 = 45,368,455,257

So my first answer was actually slightly off! The corrected answer is
45,368,455,257. I don't have a calculator tool available — I'm doing this
arithmetically by decomposition. For mission-critical calculations, I'd
recommend verifying with a calculator since I can make arithmetic errors on
large numbers like this.
```

So I would rate OP's claim (which is supported by a paper from 2024) highly implausible. Opus 4.6 appears to be able to do multi-digit arithmetic formally, as well as give remarkably accurate estimates based on something like "number sense".
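
For what it's worth, a quick check (Python here just for convenience) confirms both the corrected product and the roughly 10^-5 relative error of the first guess:

```python
exact = 50_651 * 895_707            # 45,368,455,257, matching the corrected answer
first_guess = 45_368_055_057        # the model's initial "in my head" response
rel_error = abs(first_guess - exact) / exact
print(exact, f"{rel_error:.1e}")    # 45368455257 8.8e-06
```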

by lambdaphagy

2/16/2026 at 6:12:04 PM

This is in every way an argument for the author’s point.

by FromTheFirstIn

2/16/2026 at 5:24:31 AM

I don’t think that I ever wanted to do an arithmetic operation with a computer and have a wrong answer as the result. One day, you’ll be happy with typing ls and have the system ‘rm -rf /‘ itself

by skydhash

2/16/2026 at 5:11:07 AM

Except we know how these work. There's no number sense. It's predicting tokens. It is able to recount the mathematical foundations because in its training dataset, that often happens, both in instructional material and in proofs.

by atomicnumber3

2/16/2026 at 7:17:09 AM

I picked two random numbers between one and one million. The chances of it having seen that specific problem in its training set seem very low.

by lambdaphagy

2/16/2026 at 3:19:18 AM

I think it is.

It just struck me - it would be fun to re-read The Age of Spiritual Machines (Kurzweil, 1999). I was so into it 26-27 years ago. The amount of ridicule this man has suffered on HN is immense.

by lysace

2/16/2026 at 12:53:21 AM

If AGI can be defined as meeting the general intelligence of a Redditor, we hit ASI a while ago. Highly relevant comment <https://www.reddit.com/r/singularity/comments/1jh9c90/why_do...> by /u/Pyros-SD-Models:

>Imagine you had a frozen [large language] model that is a 1:1 copy of the average person, let’s say, an average Redditor. Literally nobody would use that model because it can’t do anything. It can’t code, can’t do math, isn’t particularly creative at writing stories. It generalizes when it’s wrong and has biases that not even fine-tuning with facts can eliminate. And it hallucinates like crazy often stating opinions as facts, or thinking it is correct when it isn't.

>The only things it can do are basic tasks nobody needs a model for, because everyone can already do them. If you are lucky you get one that is pretty good in a singular narrow task. But that's the best it can get.

>and somehow this model won't shut up and tell everyone how smart and special it is also it claims consciousness. ridiculous.

by TMWNN

2/16/2026 at 12:58:23 AM

I'm certainly not holding my breath.

In a handful of prompts I got the paid version of ChatGPT to say it's possible for dogs to lay eggs under the right circumstances.

by nickjj

2/16/2026 at 1:01:34 AM

Do you believe you could not find humans who would do this?

by SoftTalker

2/16/2026 at 1:05:29 AM

That's not really the point. If our definition of AGI does not include "being able to reliably do logic" then what are we even talking about? We don't really need computers with human abilities--we have plenty of humans. We need computers with _better_ abilities.

by raddan

2/16/2026 at 1:26:31 AM

OK, but "what we need" is not the question. If the definition of AGI is "as smart as the average human in all areas", then it doesn't matter if the average human is pretty useless at a lot of tasks, that's still the definition of AGI.

But I'd like to think that, even though you could find exceptions, the average human is never confused about whether dogs can lay eggs or not.

by AnimalMuppet

2/16/2026 at 1:32:49 AM

I reached your view the day my grandma told me I was wrong and a hummingbird was a type of insect...

Like, it's in the name.

by pixl97

2/16/2026 at 6:28:31 AM

But JavaScript is not Java. Can we blame your grandma?

by rdfc-xn-uuid

2/16/2026 at 3:05:37 AM

His objection might be that those humans aren't actually intelligent.

by NoMoreNicksLeft

2/16/2026 at 5:51:55 AM

I've long been terrified of the existence of adversarial prompts that can get me to say anything, that dogs can lay eggs, that there are five lights, that here's my bank info

by fritzo

2/16/2026 at 6:02:46 AM

That's the terrifying thing about ASI. It could convince me, using everything the AI has on me, from all of my digital footprint, to do whatever it wants me to do, just by saying the right thing to me in just the right way, by copying the voice of everyone I've ever talked to, and by sending a humanoid robot in a skin suit that looks like them to my house.

I give it 10 years, maybe, for that to exist.

by fragmede

2/16/2026 at 10:00:50 AM

hello,

am i the only one who gets an error!?

404 There isn't a GitHub Pages site here.

archived version

* https://archive.ph/D4EYW

cheers!

by t312227

2/16/2026 at 8:16:07 AM

Site 404s now?

by nickvec

2/16/2026 at 12:56:43 AM

I’d love to see one of the AI behemoths put their money where their mouth is and replace their C-suite with their SOTA chatbot.

by parpfish

2/16/2026 at 12:50:44 AM

AGI is here. 90%+ of white collar work _can_ be done by an LLM. We are simply missing a tested orchestration layer. Speaking broadly about knowledge work here, there is almost nothing that a human is better at than Opus 4.6. Especially if you're a typical office worker whose job is done primarily on a computer: if that's all AGI is, then yeah, it's here.

by ryanSrich

2/16/2026 at 1:05:11 AM

Opus is the very best and I still throw away most of what it produces. If I did not carefully vet its work I would degrade my code bases so quickly. To accurately measure the value of AI you must include the negative in your sum.

by causal

2/16/2026 at 4:03:12 AM

I would and have done the same with Jr. devs. It's not an argument against it being AGI.

by ryanSrich

2/16/2026 at 1:25:50 PM

I'm countering the basis of your original claim; "there is almost nothing that a human is better at than Opus 4.6". This is simply not true.

by causal

2/16/2026 at 4:52:22 AM

I ran a quick experiment with Claude and Perplexity, both free versions. I input some retirement info (portfolio balances, etc.), my age, my desired retirement age, and so on. Simple stuff that a financial planner would have no issue with. Perplexity was very, very good on the surface. Rarely made an obvious blunder or error, and was fast. Claude was much slower and, despite me inputting my exact birthdate, kept messing up my age by as much as 18 months. This obviously screws up retirement planning. I also asked some questions about how RMDs would affect my taxes, and asked for some strategies. Perplexity was convinced that I should do a Roth conversion to max up to the 22% bracket, while Claude thought that the tax savings would be minimal.

Mind you, I used the EXACT same prompts. I don't know which model Perplexity was using since the free version has multiple it chooses from (including Claude 3.0).

by greedo

2/16/2026 at 1:05:33 AM

AGI is when it can do all intellectual work that can be done by humans. It can improve its own intelligence and create a feedback loop because it is as smart as the humans who created it.

by JSDave

2/16/2026 at 1:35:58 AM

No, that is ASI. No human can do all intellectual work themselves. You have millions of different human models based on roughly the same architecture to do that.

When you have a single model that can do all you require, you are looking at something that can run billions of copies of itself and cause an intelligence explosion or an apocalypse.

by pixl97

2/16/2026 at 2:41:33 AM

"Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks."

by JSDave

2/16/2026 at 4:36:09 PM

This is a statement that I've always found to be circular and poorly defined for the other reasons I've listed. Any technology that even gets close isn't AGI, like I said; it's ASI, for the reasons of duplication and time to train.

It is also a line of thinking that will bite us in the ass if humans aren't as general of thinkers as we make ourselves out to be.

by pixl97

2/16/2026 at 4:06:18 AM

This has always been my personal definition of AGI. But the market and industry doesn't agree. So I've backed off on that and have more or less settled on "can do most of the knowledge work that a human can do"

by ryanSrich

2/16/2026 at 1:22:57 AM

Why the super-high bar? What's unsatisfying is this: aren't the 'dumbest' humans still a general intelligence that we're nearly past, depending on how you squint and measure?

It feels like an arbitrary bar, perhaps meant to make sure we aren't putting AIs over humans, even though they are most certainly in the superhuman category on a rapidly growing number of tasks.

by 9x39

2/16/2026 at 3:56:05 AM

API Opus 4.6 will tell you it's still 2025, admit it's wrong, then revert back to being convinced it's 2025 as it nears its context limit.

I'll go so far as to say LLM agents are AGI-lite but saying we "just need the orchestration layer" is like saying ok we have a couple neurons, now we just need the rest of the human.

by dimitri-vs

2/16/2026 at 4:02:28 AM

Giving opus a memory or real-time access to the current year is trivial. I don't see how that's an argument against it being AGI.

by ryanSrich

2/16/2026 at 12:57:15 AM

That "simple orchestration layer" (paraphrased) is what I consider the AGI.

But yeah, I suspect LLMs may actually get close enough. "Just" add more reasoning loops and corresponding compute.

It is objectively grotesquely wasteful (a human brain operates on 12 to 25 watts and would vastly outperform something like that), but it would still be cataclysmic.

/layperson, in case that wasn't obvious

by lysace

2/16/2026 at 1:39:06 AM

If we can get AI down to this power requirement, then it's over for humans. Just think of how many copies of itself, thinking at the level of the smartest humans, it could run at once. Also think of where all the hardware could hide itself and keep itself powered around the world.

by pixl97

2/16/2026 at 1:07:08 AM

> a human brain operates on 12 to 25 watts

Yeah, but a human brain without the human attached to it is pretty useless. In the US, it averages out to around 2 kW per person for residential energy usage, or 9 kW if you include transportation and other primary energy usage too.

by jonas21

2/16/2026 at 1:12:51 AM

Fair.

Maybe The Matrix (1999), with the human battery farms, was on to something. :)

by lysace

2/16/2026 at 6:00:43 AM

I suspected it wasn't just battery farms, but more like what you see in less mass market scifi where the humans are used for more than just batteries... they'd also be some storage and processing for the system (and no longer humans).

However, at that point I don't see the value of retaining the human form. It's for a story, obviously, but a non-human computational device can still be made out of carbon processing units rather than silicon or semiconductors generally.

by mjevans

2/16/2026 at 12:59:25 AM

I think "tested" is the hard part. The simple part seems to be there already, loops, crons, and computer use is getting pretty close.

by ryanSrich

2/16/2026 at 1:09:02 AM

> there is almost nothing that a human is better at than Opus 4.6.

Lolwut. I keep having to correct Claude at trivial code organization tasks. The code it writes is correct; it’s just ham-fisted and violates DRY in unholy ways.

And I’m not even a great coder…

by loloquwowndueo

2/16/2026 at 4:04:48 AM

This is entirely solvable with skills, memory, context, and further prompting. All of which can be done in a way that's reliable and repeatable.

You wouldn't expect a Jr. dev to be the best at keeping things dry either.

by ryanSrich

2/16/2026 at 1:30:09 PM

> there is almost nothing that a human is better at than Opus 4.6.

> You wouldn't expect a Jr. dev to be the best at keeping things dry either.

So a junior dev is better than almost all humans at everything?

by causal

2/16/2026 at 2:00:54 PM

Yea the “you’re holding it wrong” argument. Never takes long to pop up.

> You wouldn't expect a Jr. dev to be the best at keeping things dry either.

Did you read the comment I replied to? The premise was

> there is almost nothing that a human is better at than Opus 4.6.

So which is it? Is Claude the junior dev “better at” most things than a human or not? Sorry, you can’t play your argument both ways.

by loloquwowndueo

2/16/2026 at 1:21:00 AM

> violates DRY in unholy ways

Well said

by causal

2/16/2026 at 1:48:40 AM

I’m very pro AI coding and use it all day long, but I also wouldn’t say “the code it writes is correct”. It will produce all kinds of bugs, vulnerabilities, performance problems, memory leaks, etc unless carefully guided.

by danenania

2/16/2026 at 4:05:16 AM

So it's even more human than we thought

by ryanSrich