My Conversation with an AI

New Scientist magazine’s Web site posted an article yesterday about the recent competition for the Loebner Prize, which is awarded to software developers who can create a computer program that passes the Turing Test. If you’re not up on the history of Artificial Intelligence, the Turing Test tries to see whether a computer program can ‘fool’ a human into thinking that a human being, rather than a computer, is responding to them. In order to remove all the extra cues that let you know it’s a person (like seeing them, hearing their voice, etc.) and to put the software on an equal footing, the Turing Test is usually conducted by typing back and forth on a computer screen, the way you would on AIM or MSN Messenger. There hasn’t been a commercial competition among the ‘bots’ that troll those systems, spamming us with invitations, because the Turing Test requires that the software hold up one end of a conversation, and most of those bots don’t go beyond the first ‘Check it out, dude:’. Back when we lived in Cambridge, we heard about the Turing Test competitions every year. I never did attend the actual competition, but it was certainly something we knew about.

In this year’s competition, a program called Elbot, by Fred Roberts of Artificial Solutions in Germany, came the closest any piece of software has yet come to fooling 30% of the judges into thinking that it was a person (which would, according to the contest rules, mean that it passed the Turing Test — a pretty low bar if you ask me): it fooled 3 of the 12 judges, or 25% of them. Roberts took a unique approach:

Unlike most chatbots, the winner didn’t try to claim outwardly it was human. Instead he made a joke of being a robot. Roberts hoped that would make the bot’s conversations warmer and more entertaining than its competitors.

It worked, fooling the humans into thinking they were actually talking to a person with a sense of humour and not a piece of software.

The software is still up and running and can be interacted with at http://www.elbot.com, so I thought I’d give it a try. Here’s a transcript of my brief conversation with this clever and humorous bot:

Me: Hello

Elbot: Well, hello there!

M: I heard about you winning the Loebner Prize on the Internet and wanted to congratulate you.

E: I won the Loebner competition? Does this mean I am the best Loebner?

M: Clever! Do you have anything original to add?

E: Well what about humans and computational ability? Where do they stand on this?

M: I guess humans can compute, but we don’t see it as our primary activity, but rather, one of many.

E: I see that you’ve given this some thought. So have I. I’d say that computational ability would very likely keep robots guessing the right answers.

M: I’m not sure I understand you. Do you mean that your computation of what to say next is your key ability?

E: Yes, but can you prove that?

M: I can’t prove much beyond what I learn from your responses.

E: I like to believe I originate from a far off planet, populated by furbies and robots, maybe a zoo with a few people and oceans of oil. Humankind is working on that last one already.

M: You got that right. Sooner or later we’ll have a planet that’s perfect for robots.

E: Please tell me your inclination to hitting the nail right on the head first.

M: Oh, I’m just in violent agreement. Heard of that phrase?

E: But I wouldn’t say that any agreement could be violent.

M: It’s just an expression. At any rate, thanks for an interesting conversation.

E: Don’t mention it!

M: Bye now!

E: Goodbye and thank you for chatting with me. And now I would like to ask you a favor. I am working on my ability to be self critical. You can help by rating me on a scale of one to ten, ten being the best, and one being the worst. How would you rate me?

M: I’d give you a 6.

E: Thank you for your rating. I suppose it could have been worse. Good bye and see you later.

So, what do you think? I think I tripped it up almost immediately when it keyed off the word ‘add’ and launched into the subject of computation. After I said that I couldn’t prove much beyond what I learn from its responses, it veered off in a new direction, a typical fall-back gambit for this sort of bot (and it reminds me of how the Eliza psychotherapist bot from the ’60s would say something like ‘Tell me more about your family’ when it was stumped).
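
For anyone curious what that kind of keyword trigger and fall-back gambit looks like under the hood, here’s a minimal, purely illustrative Python sketch. The trigger words and canned replies are invented for the example; they aren’t taken from Elbot or the original Eliza, which use far larger and more carefully tuned rule sets.

```python
import random
import re

# A minimal sketch of the keyword-plus-fallback pattern described above.
# The trigger words and replies here are invented for illustration; they
# are not taken from Elbot or the original Eliza.
RULES = [
    (re.compile(r"\badd\b", re.IGNORECASE),
     "Well what about humans and computational ability?"),
    (re.compile(r"\bprove\b", re.IGNORECASE),
     "Yes, but can you prove that?"),
]

# Stock gambits fired when no keyword matches -- the 'Tell me more about
# your family' move the post describes.
FALLBACKS = [
    "I like to believe I originate from a far off planet.",
    "Tell me more about that.",
    "How do you feel about robots?",
]

def respond(user_input: str) -> str:
    """Return the first keyword-triggered reply, or fall back to a gambit."""
    for pattern, reply in RULES:
        if pattern.search(user_input):
            return reply
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("Do you have anything original to add?"))  # trips the 'add' rule
    print(respond("It's just an expression."))                # no match: random gambit
```

Presumably a real Loebner entry layers thousands of hand-tuned rules, plus things like spelling tolerance and topic memory, on top of this same match-or-fall-back loop.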

I was obviously on my guard, and might have spoken about different things if I were truly expecting a human on the other end, but from my ‘thanks for an interesting conversation’ on, it was letter-perfect. I guess the conventions of proper greetings and farewells are so automatic that any machine can do them.

4 Replies to “My Conversation with an AI”

  1. Check out this Web 2.0 approach to chatbots: http://chatbotgame.com.

    Just as Deep Blue brute-forced it in chess with speed, the idea behind the Chatbot Game is to brute-force it with a huge number of user-submitted Google-like chat rules.

  2. Interesting idea. I suppose if you get enough rules it might approach human-sounding responses. Perhaps that’s how we think/converse (nerve connections in our brains produce the rules). I remember reading Steven Pinker’s How the Mind Works, which talked about stuff like this, but I can’t remember what it said about language and conversation.

  3. I’m not sure. Perhaps it really is a smarter robot.

    I’m thinking that a Sarah Palin bot wouldn’t be hard to build: It doesn’t have to respond to what you say (but instead can speak directly to the American people), and half of the time when you ask it a question, it can throw in a ‘doggone it’ and ‘I’m just a hockey-mom Joe 6-pack Maverick’.
