alt.hn

4/5/2026 at 3:15:20 AM

AGI Is Here?

https://breaking-changes.blog/agi-is-here/

by oakhan3

4/5/2026 at 3:28:19 AM

No, it isn’t, and saying it is here by using vague goalposts does not make AGI show up. I will agree we have been unable to define what it is, but we can’t even figure out why humans have consciousness right now.

by Eufrat

4/5/2026 at 5:47:30 AM

AGI is here? A few weeks ago, Yann LeCun once more presented his PoV on how current LLMs fail: https://youtu.be/nqDHPpKha_A?is=sQsO57UWwR8LGZkW

It's in French... so in my own words:

1) Still unreliable at logic and general inference: try and try again seems to be SoTA...

2) Comically bad at pro-activity and taking the right initiative: eg. "You're right to be upset."

3) Most likely already reaching the end of the line in terms of available good training data: looking at the posted article here, I would tend to agree...

by polotics

4/5/2026 at 6:46:55 AM

The problem is that LeCun was obviously wrong on LLMs before. You have to take what he says with the caveat that he probably talks about these in a purist (academic) way. Most of the "downsides" and "failures" are not really happening in the real world, or if they happen, they're eventually fixed / improved.

~2 years ago he made 3 statements that he considered failures at the time, and he was quite adamant that they were real problems:

1. LLMs can't do math

2. LLMs can't plan

3. (autoregressive) LLMs can't maintain a long session because errors compound as you generate more tokens.

ALL of these were obviously overcome by the industry. Today we have experts in their fields using them for heavy, hard math (Tao, Knuth, etc.); anyone who's used a coding agent can tell you that they can indeed plan, follow that plan, edit the plan, and generally complete the plan; and the long-session stuff is again obvious (agentic systems often remain useful at >100k ctx length).

So yeah, I really hope one of Yann, Ilya, or Fei-Fei can come up with something better than transformers, but take anything they say with a grain of salt until they do. They often speak about more abstract, academic downsides, not necessarily what we see in practice. And don't dismiss the amount of money and brainpower going into making LLMs useful, even if from an academic PoV it seems like we're bashing a square peg into a round hole. If it fits, it fits...

by NitpickLawyer

4/5/2026 at 4:05:10 AM

> we can’t even figure out why humans have consciousness right now.

My uneducated guess is that it just means we save/remember (in a lossy way) inputs from our senses and then constantly decide what to do right now based on current and historical inputs, as well as contemplated future events.

I think the rest of our body greatly influences all of that as well, for example: we know running is healthy and we should do it, but we also decide not to run if we are busy, feel tired, or are in pain etc.

by ranger_danger

4/5/2026 at 4:53:30 AM

"the hard problem of consciousness" is not about "what are we conscious of", but rather: how is it possible to be conscious (i.e. experience qualia) at all?

by PaulDavisThe1st

4/5/2026 at 5:52:01 AM

I think you will like Robert Sapolsky's lectures on YouTube...

by polotics

4/5/2026 at 3:38:30 AM

I think we agree - we have arbitrary goalposts regarding AGI, and they have been met. We don't know what we consider to be "the big changing moment", and that moment is hard to define because we don't have a good definition of it even when we talk about ourselves.

So the convo becomes - what is that "thing" and do we need to draw similarities between "it" and our own intelligence.

by oakhan3

4/5/2026 at 3:47:47 AM

[flagged]

by jfeew

4/5/2026 at 4:52:35 AM

Why would it specifically be job-displacement-via-LLMs that signals AGI? Why not job-displacement-via-automated-robot? Or job-displacement-via-office-technology?

by PaulDavisThe1st

4/5/2026 at 3:56:20 AM

That seems like a pretty dangerous place to set your goalposts. Don't you think we need to see what's coming and figure out how to deal with it before widespread job destruction starts?

by SpicyLemonZest

4/5/2026 at 11:15:10 AM

Why do people conflate consciousness with AGI?

Intelligence is about being able to use information, make deductions, inferences or hypotheses. And presumably use that to inform action.

Consciousness is about having an internal experience. I would regard many living things as having consciousness but not a general intelligence.

by MattPalmer1086

4/5/2026 at 1:35:43 PM

besides, even humans don't have general intelligence

by WithinReason

4/5/2026 at 5:15:05 PM

Speak for yourself.

by sph

4/5/2026 at 11:41:35 AM

There is no point in talking about AGI unless you mean sentience. That’s what everyone cares about. The rest is technobabble.

by satisfice

4/5/2026 at 12:06:04 PM

Why would anyone specifically care about making a machine that can experience feelings? Could be dumb as a brick, but sentient.

by MattPalmer1086

4/5/2026 at 11:57:26 PM

It's what everyone other than pedantic jerks thinks AGI means. Read a book. Watch a movie. Every depiction of AGI is essentially a depiction of a human. Sometimes a human without substantial emotions. That's sentience.

by satisfice

4/6/2026 at 8:09:29 AM

The focus of AGI is on achieving human equivalence in cognitive tasks, or surpassing it ("intelligence"). That's where the money and the research are. Making a stupid machine that happens to be aware ("sentient") isn't the goal.

by MattPalmer1086

4/5/2026 at 1:36:36 PM

because the I in AGI stands for intelligence, not sentience.

by WithinReason

4/5/2026 at 11:53:38 PM

That's not what anyone means by it. Are you paying attention to why people care about AGI? AGI means "human-like."

by satisfice

4/5/2026 at 3:25:55 AM

Let's see any of this stuff do something as simple as run a vending machine without collapsing into incoherency before calling it 'AGI'.

by crooked-v

4/5/2026 at 3:28:05 AM

Which part of running a vending machine do you think would be hard for a SOTA model?

by fastball

4/5/2026 at 4:00:10 AM

That vending machine comment is probably referring to https://www.anthropic.com/research/project-vend-1 - which is a fair point.

But I think it actually supports my thesis: we either haven't defined AGI well enough, or we've met it and are now waiting for something beyond it that doesn't have a name yet - something with the common sense and situational intuition a human would have in a scenario like that. The goalposts keep moving because the definition was never solid to begin with.

by oakhan3

4/5/2026 at 3:30:30 AM

The singularity doesn't mean that we get an AGI Day with a big announcement from The People In Charge that intelligence has been solved once and for all. It simply means that the frequency of "this changes everything" and "rumored model X at lab Y passes benchmark Z at 110% in less-than-zero-shot" style posts increases monotonically forever.

by ipnon

4/5/2026 at 3:35:43 AM

The clickbait and ensuing arguments will never end. That is our fate.

Once ASI exists we'll still have people arguing whether it's actually AGI or not.

by rl3

4/5/2026 at 3:51:44 AM

Or it will just become seen as a category error. We don't often go around talking about "artificial general strength" or "artificial super-strength", even though it's easy to build a claw arm which can exert more force in any direction than any human can.

by SpicyLemonZest

4/5/2026 at 3:56:57 AM

Usefulness is here, and has been for a while. I have been consistent in my stance that AGI will be achieved with the demonstration of iterative, stable self-improvement. This can be demonstrated in knowledge creation or skill acquisition.

by brg

4/5/2026 at 3:35:15 PM

I like this - calling it "usefulness". I think that's actually something the community could get behind: AGU for where we are right now, and two terms for two milestones we have yet to reach, which could be reached in either order:

1. increases in consistency and quality to some specific measure

2. development of human-like qualities - "consciousness"? "Intuition"?

And then we toss the term AGI which is overloaded and ambiguous.

by oakhan3

4/5/2026 at 2:22:05 PM

Repeating that we don't have a definition isn't helping anything, except giving vapid blog posts another thing to debate. I'll give one that I believe: it's the practical ability to use AI for most things that humans do, at human levels of competence, without it being specifically trained for each. There is no requirement for AGI to actually think/reason beyond practical measures.

by karmakaze

4/5/2026 at 3:30:25 PM

It's evidently fueled a healthy debate here, and you've tossed your opinion in the ring because of it.

I think you're on to something - we need a measure of what is considered "human levels of competence", or some bar by which we say "ok, this is consistent enough".

by oakhan3

4/5/2026 at 3:36:24 AM

I’m not on either side of the argument, but one popular definition is missing which is “can automate most knowledge work”.

Not that this is my definition or anything, just pointing out that this is the one people actually care about, even if the acronym doesn’t say anything about economics or social change.

by ripped_britches

4/5/2026 at 3:43:01 AM

Interesting - could you explain it further? I'd like to know what that means. I thought I covered this in definition 8, but with the very blatant asterisk in the post being: we have it and it works, but it doesn't work consistently enough, or with acceptable quality, in long enough runs or open-ended tasks. I believe we will get closer and closer to this target with improvements in models and scaffolding.

by oakhan3

4/5/2026 at 9:01:57 PM

I agree that it seems most likely that things are going to rapidly improve as they have been.

But the reality is that global unemployment levels are as yet unaffected.

This is clearly the hardest bar to meet, but it’s also the most important.

If AI fully automates 5% of global jobs (or pick your number), I think it would be fair to say that this specific definition of AGI is achieved.

As a SWE, I feel immensely augmented by AI. But you can’t yet fully deploy an AI to do a job end to end without any human involvement.

I use something like ~1b input tokens per week on codex, and while it does an insane amount of work, you have to have a skilled hand guiding it.

This might not be the case for long, but it’s not here yet (in that narrow definition at least).

by ripped_britches

4/5/2026 at 3:38:46 AM

[flagged]

by jfeew

4/5/2026 at 3:32:57 AM

My definition has always been making a new world changing discovery in physics or math.

by atulvi

4/5/2026 at 3:35:06 AM

I have never met a human that managed that, and we still let them vote!

by mapontosevenths

4/6/2026 at 4:52:16 PM

Mine was beating the top pro at Go.

by karmakaze

4/5/2026 at 5:53:06 AM

Seems kinda arbitrary.

More humans publish non-replicable "science" in sociology than make world-changing discoveries -- your bar is way too high.

by readthenotes1

4/5/2026 at 3:35:07 AM

That’s a limiting definition imo.

by jfeew

4/5/2026 at 4:54:56 AM

At one point in time, "AGI" included being able to learn skills that involved manipulating the physical world. While LLMs and their ilk may contribute to this, we are still (AFAICT) far, far from this at this time.

by PaulDavisThe1st

4/5/2026 at 5:56:48 AM

Since when? I've never understood AGI to require that.

by Petersipoi

4/5/2026 at 2:56:53 PM

Oh, like since the 80s. One canonical example has been spreading butter on toast, something most children can learn to do quite early in life, but computer-mechanical systems are still not very good at (last time I looked, anyway). Rodney Brooks used to talk about this in the 90s, although that was also a period when "embodied intelligence" was where things were at (which sometimes went as far as "you can't really be intelligent without the physical experience of a body in the world").

by PaulDavisThe1st

4/5/2026 at 4:54:45 PM

If humans are not GI, we cannot judge what is AGI.

by ritcgab

4/5/2026 at 8:11:58 PM

AGI is not reasonable without real neural memory. LLMs don't have intrinsic memory. Do not confuse a shrub for a tree.

by OutOfHere

4/5/2026 at 3:34:28 AM

Why do people keep referencing the Turing test? Turing did not anticipate there’d be a gigantic dump of text contributed by humans online to feed off.

by jfeew

4/5/2026 at 3:55:01 AM

It is quoted relentlessly, and so is worth addressing; for example, see https://www.ibm.com/think/topics/artificial-general-intellig...

I mentioned it to have a more complete set of definitions for AGI from across the community - but I do agree that it is by far the weakest, and more a measurement of human variability and gullibility than of AI intelligence.

by oakhan3

4/5/2026 at 3:38:31 AM

People mention it to remind the world that the goalposts have been repeatedly moved by critics, and always will be.

A certain percentage of humans will never acknowledge that machines can be intelligent. Those people should be disqualified from the conversation for the same reason we disqualify biblical literalists from conversations about radio carbon dating.

by mapontosevenths

4/5/2026 at 3:43:21 AM

> A certain percentage of humans will never acknowledge that machines can be intelligent.

Doesn't this assume there IS an objective, quantifiable definition of an "intelligent machine" that is agreed upon by most people? That instead sounds rather subjective to my ears.

by ranger_danger

4/5/2026 at 4:01:15 AM

Even failing a single unified definition, every reasonable person should be able to define some subjective line of their own.

Some people don't even have a subjective definition though. They'll continue to deny the machines are intelligent no matter where the line is drawn.

It's not worth debating those folks because to them it is a matter of faith and no amount of reason can convince the unreasonable.

by mapontosevenths

4/5/2026 at 3:57:41 AM

Can confirm, an easy way to win an argument is to remove the dissenting voice. Bet that would have gone great in civil rights and women's suffrage.

by lolz404

4/5/2026 at 4:15:35 AM

Ignoring the irrational isn't the silencing of dissent, it's ignoring time-wasters who refuse (or are unable) to argue in good faith.

I only get so many hours on earth, I'd rather not spend them debating what the definition of "is" is with someone who would rather litigate tautological nonsense than accept *any* level of evidence as sufficient.

by mapontosevenths

4/5/2026 at 3:43:53 AM

[flagged]

by jfeew