5/4/2026 at 4:48:45 AM
Dawkins declared himself unable to determine consciousness through the chat terminal, which is exactly why the Turing test is relevant. Try imagining the same 'gotchas' in the original Turing test (i.e. you're told beforehand you're talking to an AI, and you have insider knowledge of how AI works). Then the role of the test-taker is simply to disregard the chat and already know the answer.
Dawkins's posts might be gross and out of touch, but let's at least get a proper rebuttal: what definition of consciousness, when applied to interactive chat, could differentiate a person from an LLM?
by mrkeen
5/4/2026 at 6:29:58 AM
I used to think consciousness was just being able to say "nah, not doing that", but that's typically based on rules, and rules can always be programmed in. I don't know; I made a comment a week or so ago asking the same thing: why is our neural network conscious? We're also very easy to poison.
by ramon156