4/5/2026 at 3:28:19 AM
No, it isn’t, and saying it is here by using vague goalposts does not make AGI show up. I will agree we have been unable to define what it is, but we can’t even figure out why humans have consciousness right now.
by Eufrat
4/5/2026 at 5:47:30 AM
AGI is here? A few weeks ago Yann LeCun once more presented his PoV about how current LLMs fail: https://youtu.be/nqDHPpKha_A?is=sQsO57UWwR8LGZkW (in French)... so in my own words:
1) Still unreliable at logic and general inference: try and try again seems to be SoTA...
2) Comically bad at pro-activity and taking the right initiative: e.g. "You're right to be upset."
3) Most likely already reaching the end of the line in terms of available good training data: looking at the posted article here, I would tend to agree...
by polotics
4/5/2026 at 6:46:55 AM
The problem is that LeCun has been obviously wrong about LLMs before. You have to take what he says with the caveat that he is probably talking about these in a purist (academic) way. Most of the "downsides" and "failures" are not really happening in the real world, or if they do happen, they're eventually fixed or improved.

~2 years ago he made 3 statements that he considered failures at the time, and he was quite adamant that they were real problems:
1. LLMs can't do math
2. LLMs can't plan
3. (autoregressive) LLMs can't maintain a long session because errors compound as you generate more tokens (the standard form of that argument is sketched below).
ALL of these were obviously overcome by the industry. Today we have experts in their fields using them for heavy, hard math (Tao, Knuth, etc.); anyone who's used a coding agent can tell you that they can indeed plan, follow that plan, edit it, and generally complete it; and the long-session point is just as clearly refuted (agentic systems often remain useful at >100k ctx length).
So yeah, I really hope one of Yann, Ilya or Fei-Fei can come up with something better than transformers, but take anything they say with a grain of salt until they do. They often speak about more abstract, academic downsides, not necessarily what we see in practice. And don't dismiss the amount of money and brainpower going into making LLMs useful, even if from an academic PoV it seems like we're bashing a square peg into a round hole. If it fits, it fits...
by NitpickLawyer
4/5/2026 at 4:05:10 AM
> we can’t even figure out why humans have consciousness right now.

My uneducated guess is that it just means we save/remember (in a lossy way) inputs from our senses and then constantly decide what to do right now based on current and historical inputs, as well as contemplated future events.
I think the rest of our body greatly influences all of that as well. For example: we know running is healthy and we should do it, but we also decide not to run if we are busy, feel tired, or are in pain, etc.
by ranger_danger
4/5/2026 at 4:53:30 AM
"the hard problem of consciousness" is not about "what are we conscious of", but rather: how is it possible to be conscious (i.e. experience qualia) at all?by PaulDavisThe1st
4/5/2026 at 5:52:01 AM
I think you will like Robert Sapolsky's lectures on YouTube...
by polotics
4/5/2026 at 3:38:30 AM
I think we agree - we have arbitrary goalposts regarding AGI and they have been met. We don't know what we consider to be "the big changing moment", and that moment is hard to define because we don't have a good definition of it even when we talk about ourselves.

So the convo becomes - what is that "thing", and do we need to draw similarities between "it" and our own intelligence?
by oakhan3
4/5/2026 at 3:47:47 AM
[flagged]
by jfeew
4/5/2026 at 4:52:35 AM
Why would it specifically be job-displacement-via-LLMs that signals AGI? Why not job-displacement-via-automated-robot? Or job-displacement-via-office-technology?
by PaulDavisThe1st
4/5/2026 at 3:56:20 AM
That seems like a pretty dangerous place to set your goalposts. Don't you think we need to see what's coming and figure out how to deal with it before widespread job destruction starts?
by SpicyLemonZest
4/5/2026 at 11:15:10 AM
Why do people conflate consciousness with AGI?

Intelligence is about being able to use information and make deductions, inferences, or hypotheses - and presumably to use that to inform action.
Consciousness is about having an internal experience. I would regard many living things as having consciousness but not a general intelligence.
by MattPalmer1086
4/5/2026 at 1:35:43 PM
Besides, even humans don't have general intelligence.
by WithinReason
4/5/2026 at 5:15:05 PM
Speak for yourself.
by sph
4/5/2026 at 11:41:35 AM
There is no point in talking about AGI unless you mean sentience. That’s what everyone cares about. The rest is technobabble.
by satisfice
4/5/2026 at 12:06:04 PM
Why would anyone specifically care about making a machine that can experience feelings? It could be dumb as a brick, but sentient.
by MattPalmer1086
4/5/2026 at 11:57:26 PM
It's what everyone other than pedantic jerks thinks AGI means. Read a book. Watch a movie. Every depiction of AGI is essentially a depiction of a human - sometimes a human without substantial emotions. That's sentience.
by satisfice
4/6/2026 at 8:09:29 AM
The focus of AGI is on achieving human equivalence in cognitive tasks, or surpassing it ("intelligence"). That's where the money and the research are. Making a stupid machine that happens to be aware ("sentient") isn't the goal.
by MattPalmer1086
4/5/2026 at 1:36:36 PM
Because the I in AGI stands for intelligence, not sentience.
by WithinReason
4/5/2026 at 11:53:38 PM
That's not what anyone means by it. Are you paying attention to why people care about AGI? AGI means "human-like."
by satisfice