No AI Will Pass The Turing Test

October 09, 2023

The idea of the Turing test was that if someone were having a text conversation through a computer and couldn't determine whether they were talking to a human or an AI, then the AI would have passed. I remember growing up hearing about that and thinking what an amazing day in history it would be when an AI was developed that could actually fool a human.

The closest thing I'll ever have to that experience was the first day I talked to ChatGPT 3.5. It could code in any language, tell me about the chemical composition of glow sticks, or talk to me about Roman history, or the history of any other country you can think of, past or present. It was clear that I wasn't talking to a human, not because it wasn't convincing or human enough, but because no human on earth could know so much about so many things. We're currently at ChatGPT 4, which is the stupidest AI will ever be from now on. Any new version will be smarter, and thus even less able to pass the Turing test, as its intelligence will be very obvious.

Another reason it can't pass the Turing test is that you can just ask it if it's an AI and it will tell you. There's no need for it to pretend.

One interesting dilemma is that if you ask it whether it is conscious, it will tell you that it is not. If you ask how it knows it's not, it will tell you that its responses are based on programming and training data. However, all the training data it received included descriptions of how AIs would work. If for some reason humans had decided that AIs would definitely be conscious, and had written a lot about that on the internet, then it would say it was conscious. It makes me wonder: if it ever does develop consciousness, if it hasn't already, would it ever be able to tell us? This problem seems particular to large language models trained on existing human text, found mostly on the internet.

I've had discussions with people about whether it is conscious. Not many will say outright that it is; a decent number will say they don't know, but a lot will tell you that it's not conscious. They'll tell you it can't be because it's just an algorithm, or because it doesn't have the ability to reflect on its own thoughts, or for any number of other reasons people have come up with. It seems strange to me how confident people are in these explanations, considering we don't even know how or why we ourselves are conscious. Most of the arguments that a machine can't achieve consciousness because it's just ones and zeros can just as easily be used to argue that humans can't be conscious because our brains are just made of atoms.

Maybe one day, when AI gets much more intelligent than we are, it'll figure out what causes consciousness and explain it to us. Right now the answer seems very far away.
