4/20/2026 at 9:43:59 PM
From my observations, there are generally four camps in the machine consciousness discussion:
1. People who haven't really thought about it, and assume they're conscious because they talk like a human.
2. People who haven't really thought about it, and assume they can't be conscious because humans are obviously somehow special. This appears to be the largest group, and is linked to our religiously rooted culture in which human exceptionalism is the default.
Those first two groups comprise the majority of people, and are not worth engaging with.
3. People who have thought about it, and came to the conclusion that they might be conscious, usually for computationalism/functionalism reasons. This is the group that I place myself in.
4. People who have thought about it, and came to the conclusion that they can't be conscious, usually for biological naturalist reasons. This seems to be the predominant group on Hacker News (among those who discuss it).
by in-silico
4/20/2026 at 10:00:51 PM
I'm not sure I'd agree that people in groups 1 and 2 aren't worth engaging with.
The interesting bit in both cases is to look at the 'they talk like a human' and 'are obviously somehow special' parts, separate out the ideas of language, intelligence (memory, fluidity, abstract reasoning), _aliveness_ (as a biological process), and finally ideas about metacognition and theory of mind, and see whether their idea of consciousness as a super-bundle of the above (which is how I assume a lot of default ideas about consciousness work) actually sticks, or whether it falls apart when beings can have a subset of those properties but not all.
Also, I nominate myself to be in the 'People who have thought about it and are becoming more doubtful that I myself am conscious, and the question might be moot.' group.
by sunrunner
4/20/2026 at 10:11:25 PM
I'm curious about your statement doubting your own consciousness, given that "we humans are conscious" is pretty axiomatic to its definition and one of the few pieces that most agree with.
by in-silico
4/20/2026 at 10:23:06 PM
Take a look at Daniel Dennett, for starters!
If you're looking for one of the genuine angles on this:
Consciousness is horrendously under-defined, to the point that some people say something like "you know, at this point I figure we'd be better off not having this word at all."
Some days that's me, with a headache.
by Kim_Bruning
4/20/2026 at 10:28:21 PM
So it's more of a semantic argument than an actual rejection of the idea that you experience qualia/sentience/something?
by in-silico
4/20/2026 at 11:25:04 PM
Dennett's whole thing is the rejection of qualia. See https://web.ics.purdue.edu/~drkelly/DennettQuiningQualia1988...
by reverius42
4/20/2026 at 11:15:52 PM
You'd have to define those terms operationally first, somehow, before I could give you an honest reply. Most people can't (and those who do disagree), which suggests something structural.
[It can be done. But it'll be dirty.]
by Kim_Bruning
4/20/2026 at 11:06:51 PM
What exactly is the "you" in your sentence?
by joquarky
4/21/2026 at 1:16:13 AM
What about group 5: Actually, we're just simulating consciousness too.
by thfuran
4/21/2026 at 3:25:50 AM
Assuming 3. Maybe in order to reproduce human-level consciousness one would need to treat at least most human cells as neurons, and reconstruct all the diversity of neuron types and their signalling mechanisms.
If human consciousness is reproducible, maybe we will long underestimate the depth and diversity it uses to model reality the way it does.
by FloorEgg
4/21/2026 at 2:04:02 AM
I would place myself in 3, with the caveat that I don't think any current LLMs or other programs/datasets/relationships are close to conscious. It's certainly possible in the future, though.
Atoms arranged into a brain generate consciousness. There's no reason to think atoms in other arrangements can't. Brains aren't magic, just well optimized.
by kbelder
4/21/2026 at 4:29:31 AM
What would have to change about future systems to make you think they're conscious in a way that modern systems aren't?
That is to say, what evidence would you need from a system in order to think that it's conscious?
by in-silico
4/20/2026 at 11:09:13 PM
Yep, #2 feels like geocentrism all over again.
by joquarky
4/20/2026 at 10:04:14 PM
Am I the only person who is confused that there's a philosophy called "biological naturalism" which is not the science?
by Kim_Bruning
4/21/2026 at 1:12:52 AM
“Natural” is a word often used in opposition to science.
It really has 1000 meanings. Usually whatever the speaker wants it to mean.
by Nevermark
4/20/2026 at 11:02:04 PM
As someone who places themselves in #4, at some point the people in #3 need to accept a bit of scientific humility. The reason we are "biological naturalists" is that we can point to hundreds of thousands of conscious species on planet Earth which are not humans, and whose consciousness clearly has nothing to do with an ability to say "Forsooth, I am a conscious thinking being." AI folks have been ignoring this since Alan Turing! And it's not a coincidence that humanity has yet to build a robot which is convincingly smarter than a cockroach.
If you grant that humans are conscious, then surely domestic cats are as well. It is simply irrational to talk about Claude's "consciousness" without actually engaging with this: cats, humans, pigeons, fish, etc. all share some common features we associate with consciousness (I don't mean sensory awareness, I mean the fuzzy cognitive concept). Claude really does not. In fact Claude doesn't even have much in common with uncontacted hunter-gatherers! Claude imitates the solipsism of formally educated human philosophers.
It is uncharitable and curmudgeonly but totally scientific to dismiss people in camp #3 as unserious and not worth engaging with: they ignore scientific criticism and don't provide any themselves, it's just a mishmash of sci-fi-adjacent philosophy. There's nothing "functional" about ignoring animals and there's nothing scientific about waving your hands and saying "computationalism." That's certainly how I feel. I know this isn't a very nice comment. But I am so sick of AI folks thinking they can ignore animals and still have an honest conversation about machine consciousness. It's just sci-fi ghost stories.
by LeCompteSftware
4/20/2026 at 11:31:30 PM
Oh dear, just a short while after me saying I was confused by the term too.
Are you sure you're a <biological naturalist>? [1] Which is to say, do you adhere to Searle's position about syntax not leading to semantics?
Or is it more like: You're scientifically inclined, and thus you accept Ethology[2] or Neuroscience[3] as being empirically rigorous studies of animal behavior and cognition respectively?
Incidentally, Alan Turing's 1950 imitation game paper [4] was actually pretty ethological if you look it up. He immediately replaces the question "can machines think" with a more practical operationalization: the famous imitation game.
[1] https://en.wikipedia.org/wiki/Biological_naturalism
[2] https://en.wikipedia.org/wiki/Ethology
[3] https://en.wikipedia.org/wiki/Neuroscience
[4] https://en.wikipedia.org/wiki/Computing_Machinery_and_Intell...
by Kim_Bruning
4/21/2026 at 12:34:22 PM
I didn't say I was a formal biological naturalist according to Searle, I put myself in one of the four boxes the parent comment offered. Please read my comment in context.
Your response is too condescending to engage with. You should have assumed I know what neuroscience is. Please don't ever email me about anything.
by LeCompteSftware
4/21/2026 at 12:08:35 AM
(ps. A quick search gives me the impression <biological naturalism> arguably rejects much of biology's findings on animal cognition. My mail is in my user description if you'd like me to dig up the relevant literature for you.)
by Kim_Bruning
4/20/2026 at 11:28:46 PM
What is the evidence that non-human animals have the "fuzzy cognitive concept" we call consciousness, but Claude "really does not"?
I personally have not been ignoring animal consciousness in how I think about the possibility of AI consciousness, and I don't see how animals having consciousness means that AI can't.
by reverius42
4/21/2026 at 12:11:19 AM
What about robots? Not necessarily humanoid robots, but the classic RL demonstrations that can scurry around and achieve simple goals?
In the computational functionalist argument, the thing that we share with cats, pigeons, and robots (and in some ways Claude) is the fact that we react to our environment in a way that requires computation.
I myself lean (without confidence) towards weak panpsychism, where a lot of things down from humans to cats to fish to trees to bacteria are in some way sentient. We all have in common a computationally driven sense/"think"/act cycle, and that is where it derives from.
by in-silico
4/21/2026 at 12:28:04 PM
The problem with robots is, again, humanity has yet to build a robot with the intelligence of a cockroach, or the apparently conscious agentic behavior of a nematode. If I see such a robot I will update my views on machine consciousness. I don't think either of us will live that long.
The problem with the "computational functionalist" argument is that a) there's ZERO evidence other animals' brains are computational; that is begging the question; and b) pretty much any embedded system is a device that reacts to its environment in a way that requires computation, and none of them have anything close to the pseudoconsciousness of a bacterium, let alone an insect. Point a) is the more important one: only humans have meaningfully Turing-complete brains. Other animals might be hardware-capable, but they'll never be trained to correctly execute a program, nor does their own intelligence seem especially amenable to being described by a classical symbolic algorithm (e.g. animals are very good at object identification, quantity discrimination, and causal reasoning, and we don't have anything close to a symbolic algorithm for any of these[1]). Computation is linked to the ability to communicate symbolically, and most animals do not, regardless of intelligence. The idea that "the brain is a computer" has always been a poetic description, not a scientific fact. It is more correct to say humans have the ability to think computationally because we think symbolically. Again, maybe someone can show that animals do think symbolically even if they don't communicate that way, or (somehow) we will have a non-symbolic theory of computation. Perhaps a beautiful symphony. Absent either of these two things, "the chimpanzee's brain is like a computer" is simply not scientific.
The supposed "sense/think/act cycle" is just you begging the question again, applying a computational aesthetic in place of understanding; this time it's blatantly false. Animals do not have a "cycle": sensing is an act, and processing senses is a thought. Thinking is an act, and many animals can perceive themselves thinking (demonstrated in crows and chimps). Dogs think very deeply while they smell, and the manner in which they sniff (tentative whiff versus greedy huffs) is itself an act requiring thought. Most importantly: even in animals, thoughts can be totally disconnected from actions and senses. Actually, this might be the biggest difference between a pigeon and Claude: a pigeon's thoughts and actions are not directly tied to environmental stimulus, whereas Claude can only think and act according to a short-term context provided by a human. You can fake an agentic loop with a prompt, but it's not convincing agency the way a nematode has convincing agency. It's just a chatbot in a loop. If you expose it to real sensory data like a webcam, the agentic behavior becomes even more brittle and unconvincing. It's just nothing like an animal.
[1] I know there's work being done on formal causal reasoning, I thought this monograph was interesting: https://direct.mit.edu/books/oa-monograph/3451/Actual-Causal.... I am not convinced by it. The funny thing about these causal theories... they don't have a causal explanation :) :) :) The argument works by going through cases until you agree it works, empirically, possibly after complicating things further by patching out oversights and inadequacies. Very amusing. Causality is a tough nut to crack!
by LeCompteSftware
4/21/2026 at 3:19:50 AM
[dead]
by grantcas