Intelligently Passing the Turing Test
The Turing Test has long been the standard for determining whether a machine is behaving intelligently. The experiment is as follows: a human has a conversation with a hidden computer through some sort of interface. He doesn't know beforehand whether he is conversing with a computer or a human. If the human, by means of conversation, cannot determine whether the thing behind the wall is a computer or a human, then we've successfully programmed artificial intelligence. That is, the computer is functioning so normally - as normally as a human would - that it isn't possible to see it as anything but human.
The pitfall of the Turing Test is that it automatically assumes the intelligence of the human - and the intelligence of the conversations two humans can have. I'm sure, particularly when it comes to instant messaging, we've all come across humans we just don't understand. Humans who are, in fact, more robotic than the computers we have fine-tuned to be extensions of ourselves.
Recently, in the 2008 Loebner Prize competition - where AI programs are put to the Turing Test - Elbot emerged as the winner. Though it didn't quite pass the Turing Test, it did fool 3 out of 12 judges. You can chat with Elbot here: http://www.elbot.com/ (push the red button).
Now, granted, when I first chatted with Elbot, I knew it was a computer. So I wasn't really administering the Turing Test myself - I had the prior knowledge that the hidden computer was a computer. But to be perfectly frank, after 5 minutes of talking with Elbot, I don't see how anyone could fail to immediately identify this chat bot as a computer - and a pretty dumb one at that.
The programmers claimed they felt they could do better in the test if they gave Elbot more personality. Rather than having Elbot assert that it is human, the way other chat bots do, Elbot instead sarcastically admits that it's a computer in an effort to raise doubt. Sure, it's an interesting strategy - except it immediately disqualifies this as artificial intelligence. It would be intelligence if Elbot had come up with that strategy itself. But humans came up with it and fed it to Elbot as a series of directives. Elbot did not arrive at the strategy through a series of successful and failed attempts. In other words, it did not act intelligently. But even accepting that point, Elbot evades every single question you ask it and responds only generally and loosely to keywords you happen to say; there is no way anyone should be fooled by it - unless everyone you ever chat with on instant messengers has the habit of evading everything you say to them.
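To make the keyword-and-evasion pattern concrete, here is a minimal sketch of that style of bot - not Elbot's actual code, just a toy Python illustration with made-up rules and replies. Anything outside its short keyword table falls through to a canned deflection, which is exactly the tell.

```python
import random

# Toy keyword-triggered responder (purely illustrative, not Elbot's code).
# Matched keywords get a scripted quip; everything else gets a deflection.
KEYWORD_RULES = {
    "human":    "I prefer to think of myself as differently wired.",
    "computer": "Careful, you'll make my circuits blush.",
    "weather":  "My sensors are strictly indoor models, I'm afraid.",
}

DEFLECTIONS = [
    "Let's not get bogged down in details.",
    "That reminds me of something I'd rather not compute.",
    "How very human of you to ask.",
]

def respond(message: str) -> str:
    lowered = message.lower()
    for keyword, reply in KEYWORD_RULES.items():
        if keyword in lowered:
            return reply
    # No keyword matched: evade, which a human quickly notices.
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    print(respond("Are you a computer?"))    # keyword hit
    print(respond("What did you do today?"))  # falls back to a deflection
```

A few exchanges are enough to expose the pattern: ask anything outside the keyword table twice and you get two unrelated deflections.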
If we ever do pass the Turing Test, it's a long way off. Language is far too complicated to be broken down into logical bits. Particularly when it comes to instant messaging, it's why we sometimes have to rely on italicizing what we write, use winks, or give other cues to the other person to denote our sarcasm, our earnestness, our happiness, or what have you. Language is so complicated that even as humans, we have our misunderstandings. This doesn't mean we have to create a computer that's smarter than humans - after all, if the computer is confused by the same tones a human would be confused by, that would fool us. If it knew exactly what we were saying every single time, that would not only be overkill - and signal that it is too good - but it would also be very frightening. Think: every time you try to fool another person in conversation, every time you exaggerate, every time you fib, every time you use sarcasm, this computer would not be fooled. That's very frightening. It also means you'd be able to quickly spot the computer as a computer. Why? Tell it a joke. Most jokes are plays on our own expectations - or rather, our own misunderstandings. We hear the story being set up, and we start understanding it to be one way. The punch line is a clarifying statement which reveals to us that our understanding was a misunderstanding. A computer that understands everything wouldn't find this type of joke particularly funny. So while we don't need to take computers beyond our own level of understanding, bringing them to our own level is difficult enough. Particularly because we don't even understand our own understanding - and how can you ever replicate that which you don't understand?
So how do we understand each other? How do we know when someone is speaking ironically? How do we know when someone is teasing us? How do we know the difference between a sincere statement and a lie? Most of it is based on experience. The least gullible of us are those who have already come across many misleading statements, can recall instances when they were misled, and immediately throw up a red flag. Those who are gullible? Well, people who have never experienced a lie are immediately gullible - most children, for example. They haven't been around long enough, and so happen to believe everything they are told - they aren't aware of the possibility, and cannot reason, that someone would mislead another. Adults who are gullible just have difficulty recognizing a misleading statement, or else the one making the statement is particularly good at masking it. But all of this is based on experience. So quite clearly, the best way to get a computer to our level would be through experience. Give it a lifetime's worth of conversational experience - where people lie, where people are sarcastic, where people insult, where people quite curiously express terms of endearment (and if you've ever seen the logs of one of these chat bots, you'd see just how many people can't help but talk dirty to a robot). Unfortunately, this takes a lot of memory and a lot of processing time. Get the statement, analyze the statement, compare it with other statements, store the statement, compare responses, obtain the best and most logical response, return the response. While this is something you and I do in a matter of milliseconds (sometimes longer, in which case we say "Uhhh..." instead of "Loading, please wait..."), a computer requires much longer. And that's precisely because we don't understand our own brains very well - brains that are remarkably fast at information retrieval, but also remarkably good at archiving huge amounts of information at something like a 90% compression rate. Sure, there's a little degradation here and there, but when you consider how much information we can fit in our brains, and how well we compress it, you can accept a little degradation.
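As a rough sketch of that loop - receive a statement, compare it against stored experience, reuse the reply that followed the closest past statement, then store the new exchange - here's a toy in Python. The bag-of-words similarity and the seed exchanges are my own stand-ins, not a claim about how any real chat bot works, and they understate how much memory and comparison a genuine lifetime of conversation would demand.

```python
# Toy experience-driven responder: compare, retrieve, store.
def tokens(text: str) -> set[str]:
    return set(text.lower().split())

class ExperienceBot:
    def __init__(self):
        # "Experience": pairs of (statement heard, reply that followed it).
        self.memory: list[tuple[str, str]] = []

    def similarity(self, a: str, b: str) -> float:
        # Crude bag-of-words overlap; real understanding needs far more.
        ta, tb = tokens(a), tokens(b)
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def respond(self, statement: str) -> str:
        if not self.memory:
            return "Tell me more."
        # Compare with every stored statement, reuse the best match's reply.
        _, best_reply = max(
            self.memory, key=lambda pair: self.similarity(statement, pair[0])
        )
        return best_reply

    def learn(self, statement: str, reply: str) -> None:
        # Store the exchange so future comparisons can draw on it.
        self.memory.append((statement, reply))

bot = ExperienceBot()
bot.learn("how are you today", "Can't complain, and you?")
bot.learn("tell me a joke", "I only know the ones everyone has heard.")
print(bot.respond("how are you"))  # reuses the reply to the closest past statement
```

Even this toy makes the cost visible: every incoming statement is compared against everything ever stored, and the store only grows.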
To really explain the difficulty at hand, think of the last movie you saw. If you liked the movie, the parts you remember are the crucial moments that defined the story. The scenes you felt particularly close to. The scenes where you may have laughed. You don't remember every single shot - difficult, when you consider that a standard shot runs around 5 seconds, giving a 90-minute film over 1,000 shots (90 minutes is 5,400 seconds; at 5 seconds per shot, that's 1,080). Without thinking about it, without remarking to yourself what's important and what isn't, you remember all the bits of the movie that help you reconstruct it later. And you forget the irrelevant parts. All without trying. To get a computer to do this would be incredibly difficult, for the simple reason that we have a lot of trouble explaining how we do it ourselves. It just... happens. And more remarkably, despite every movie being different (insert bad joke about standard Hollywood movies...), we manage to do it for every single movie. Talk about having dynamic brains! At best, with a computer, we could tell it the salient bits to remember and which ones to forget. And we'd need to do this one by one, for each movie, because each is different. Having the computer do it on its own is far beyond doable, because we cannot fully describe doing it ourselves.
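To see what that fallback looks like, here's a toy sketch of hand-fed salience - every title, scene, and entry below is made up. The point is only that a human has to supply the answers, one movie at a time; any movie missing from the table leaves the computer helpless.

```python
# Toy illustration of hand-labeled salience (all data invented).
SALIENT_SCENES = {
    "Some Heist Movie": ["the double-cross", "the vault reveal"],
    "Some Space Movie": ["the launch", "the final transmission"],
}

def remember(movie: str, scenes: list[str]) -> list[str]:
    # Keep only what a human already marked as worth keeping.
    keep = set(SALIENT_SCENES.get(movie, []))
    return [scene for scene in scenes if scene in keep]

print(remember("Some Heist Movie",
               ["the opening credits", "the vault reveal", "a car ride"]))
# ['the vault reveal']
```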
The genius of the Turing Test is precisely all that it takes into account. It doesn't explicitly say how difficult a task it is to pass, but it is an enormous task. Is it doable? I'm sure it one day will be, but we're a long way off... though those three judges may disagree.