12/12/2025 at 6:39:43 PM
It's unfortunate that there's so little mention of the Turing Test (none in the article, just one comment here as of this writing). The whole premise of the paper that introduced it was that "do machines think" is such a hard question to define that you have to frame the question differently. And it's ironic that we seem to talk about the Turing Test less than ever now that systems almost everyone can access can arguably pass it now.
by TallGuyShort
12/12/2025 at 11:48:05 PM
> “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ~ Edsger W. Dijkstra
The point of the Turing Test is that if there is no extrinsic difference between a human and a machine, the intrinsic difference is moot for practical purposes. That is not an argument about whether a machine (with linear algebra, machine learning, large language models, or any other method) can think, or about what constitutes thinking or consciousness.
The Chinese Room thought experiment is a complement on the intrinsic side of the comparison: https://en.wikipedia.org/wiki/Chinese_room
by fabianhjr
12/13/2025 at 3:49:44 PM
I kind of agree, but I think the point is that what people mean by words is vague, so he said:
>Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
which is: can you tell the AI answers from the human ones in a test? It then becomes an experimental result rather than a matter of what you mean by 'think' or maybe by 'extrinsic difference'.
by tim333
12/13/2025 at 12:59:42 PM
The Chinese Room is a pretty useless thought exercise, I think. If you believe machines can't think, it seems like an utterly obvious result; if you believe machines can think, it's just obviously wrong.
by rcxdude
12/13/2025 at 3:59:14 PM
People used to take it surprisingly seriously. Now it's hard to argue that machines can't understand, say, Chinese when you can give a Chinese document to a machine, ask it questions about it, and get pretty good answers.
by tim333
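For concreteness, the kind of check described in the comment above can be sketched like this, assuming the OpenAI Python SDK's chat-completions interface; the model name and the sample document are placeholder assumptions, not a recommendation:

    # Sketch only: assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Placeholder document: a one-line Chinese summary of the tortoise-and-hare fable.
    chinese_document = "龟兔赛跑：兔子因为骄傲睡着了，乌龟坚持爬行，最终先到达终点。"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Here is a document:\n\n{chinese_document}\n\nWho wins the race, and why?",
        }],
    )
    print(response.choices[0].message.content)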
12/13/2025 at 12:15:03 PM
>And it's ironic that we seem to talk about the Turing Test less than ever now that systems almost everyone can access can arguably pass it now.
Has everyone hastily agreed that it has been passed? Do people argue that a human can't figure out it's talking to an LLM when the human knows that LLMs exist, knows their limitations, and can extend the chat log toward infinity ("infinity" being a proxy here for any sufficient time: minutes, days, months, or years)?
In fact, it is blindingly easy for these systems to fail the Turing test at the moment. No human would have the patience to continue a conversation indefinitely without telling the person on the other side to kindly fuck off.
by sillyfluke
12/13/2025 at 4:06:51 PM
No, they haven't agreed, because there was never a practical definition of the test. Turing had a game:
>It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B.
>We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?
(some bits removed)
It was meant more as a thought experiment. As a practical test it would probably be too easy to fake with ELIZA-type programs to be a good one. So computers could probably pass it, but it's not really hard enough to match most people's idea of AI.
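To make the protocol concrete, here is a minimal sketch of the game as quoted, simplified to the usual machine-versus-human reading; the placeholder respondents and the chance-level interrogator are illustrative assumptions, not anything from Turing's paper:

    import random

    # Placeholder respondents: stand-ins only, not real models or real people.
    def human_respondent(question):
        return "I'd rather not say."

    def machine_respondent(question):
        return "I'd rather not say."

    def play_round(interrogate, questions):
        # Hide the two respondents behind the labels X and Y, as in the quote above.
        labels = {"X": human_respondent, "Y": machine_respondent}
        if random.random() < 0.5:
            labels = {"X": machine_respondent, "Y": human_respondent}
        transcript = [(q, {label: r(q) for label, r in labels.items()}) for q in questions]
        guess = interrogate(transcript)  # the interrogator names the label it thinks is the machine
        return labels[guess] is machine_respondent

    def chance_interrogator(transcript):
        # A strawman interrogator that guesses at random; a real one would read the answers.
        return random.choice(["X", "Y"])

    if __name__ == "__main__":
        rounds = 1000
        hits = sum(play_round(chance_interrogator, ["Do you ever get bored?"]) for _ in range(rounds))
        # Turing's criterion is comparative: does the interrogator "decide wrongly
        # as often" as in the original man-versus-woman version of the game?
        print(f"machine correctly identified in {hits / rounds:.1%} of rounds")

All the difficulty lives in the respondents and the questioning; the protocol itself is trivial, which is why an ELIZA-style canned responder could plausibly fake its way through a short, undemanding interrogation.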
by tim333
12/14/2025 at 11:28:07 AM
The definition seems to suffice if you give the interrogator as much time as they want and don't limit their world knowledge, neither of which the definition you cited seems to constrain. By "world knowledge" I mean any knowledge, including but not limited to knowledge of how the machine works and its limitations. Therefore, if the machine can't fool Alan Turing specifically, then it fails, even though it might have fooled some random Joe who's been living under a rock.
Hence, since current LLMs are bound to hallucinate given enough time and seem unable to maintain a conversation's context window as robustly as humans, they would inevitably fail?
by sillyfluke