With a lecture, though, I have to be a little quicker: an hour of ideas being thrown about and I want to get them down.
This evening saw me tripping from Notting Hell to the Royal Institution to hear Brian Christian talk about his experiences of the Turing Test, which in turn formed the basis for his book The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive.
Much has been written about the Turing Test, originally known as the Imitation Game, so I won't dwell on it here. Besides, it's late and this ditzy brunette wants to go to sleep. In a nutshell, though, it's this: how do you prove you are human?
Which is also a bit like trying to prove you didn't do something.
If you didn't do something there is no evidence, because you didn't do it, so how can you prove something just by a lack of evidence? Now prove you're human. Obviously if you insist on playing music loudly through crap white earpieces you've already gone a long way to proving you're not.
It's difficult. What is it about us that makes us us? And then, how do we make a machine do the selfsame thing?
Turing predicted that eventually machines would be able to convince us they were human. But it's not happened yet. To give a bit of an incentive, a chap called Hugh Loebner underwrote a prize, cunningly called the Loebner Prize, that would be awarded to whoever writes a program that convinces people it's human.
It's not been awarded yet.
But each year there is a prize for the most human-like machine and, amusingly, the most human human: the person who best persuaded the judges that they were human. I've been on the Central Line today. Twice. I'm sure there were an awful lot of people there who wouldn't have had a chance.
The funny thing is, there was a time when computers were people. A computer wasn't a machine, it was a job description. As you might imagine this was pretty much before 1940ish. The funny example given was the father of Information Theory, Claude Shannon, who met Betty, his computer (or numerical analyst), at Bell Labs. They married in 1949. I wonder if he gave her Perls... Sorry.
Computers were so stylish in the 1940s.
Back to Turing. When he was trying to explain what the machine was he essentially said "well, it's kind of like a computer". Now when we think of somebody particularly good at sums or with an astounding memory we say they are like a computer. The literal and the figurative have switched places.
The talk meandered on and touched on the philosophers. Which has a certain glorious aptness as last week I had a delightful talk with a philosopher of science. Aristotle's view of what made us different to other lifeforms was discussed. So, all living things have a nutritive soul, animals also have a sensitive soul, but we have the rational soul. Aren't we the clever ones. I do doubt this at times. Anyway, we have the ability to think abstractly and in a pure form, so that's what must make us human. Right? Hmm, maybe not. But it was rather convenient that a professional philosopher should say thinking was okay.
Moving on we covered René Descartes. Now René, before he ran a small café in Nouvion, was a philosopher who decided that maybe nothing existed. So he pondered it: if nothing existed, did it follow that he didn't exist either? His view was that, yes, he did exist, because cogito ergo sum. I think, therefore I am.
Now at this point I will stop. Two reasons.
The first is that I'm going to poke the eyes out of the little hot housed shit sitting behind me that kept kicking my chair, sniffing, talking to his elderly mother and, yes, clever lad, knew what cogito ergo sum meant. Little boy, you are sitting in the Royal Institution, a lot of the members can understand Latin, we don't need your help!
The second was at the end of the talk during the questions. A chap (who I recall also asked a good question during Claudia Hammond's talk in June) asked how Descartes could be sure. He said "I", but how did he know that he existed, and that there wasn't, actually, nothing there but pure thoughts? I would have pondered this, but I really needed to spend a penny.
Anyway. The concept amused me. Moving on.
Next we covered Moravec's Paradox. This is something that affects all computing, not just artificial intelligence, and it comes down to this: what we think is hard, we find easy to make computers do. What we find easy to do is really, really, really hard for computers to do.
So a computer can find a checkmate in 25 moves from a given chess board, but a human can work out that a picture contains a dog. And some grass.
As a further example he used the analogy of a black cab driver. At 2 years old the cabbie could walk, avoid obstacles, mostly, and talk about how they met Elton John at nursery last week. By 17 the cabbie may have moved on to pushing levers, turning the wheel and gesticulating at someone in a bid to avoid Bank when the roadworks are in place. By 21, the earliest you can become a cabbie I believe, they could find the quickest route from Soho to Contrary Towers having done 4 years of The Knowledge.
Interestingly, they can't find Contrary Towers. We've given up now and have to help with the last bit.
Computers though find route mapping really easy. And can find Contrary Towers. Unfortunately they are barely past the point where they can twiddle with levers and turn wheels. And as for avoiding obstacles, well, getting there, but some distance off.
Computers also can't gesticulate. But will give you details of anything to do with Elton John.
Things that are easy for us are hard for computers. Or at least hard for humans to program computers to do. Trust me, this is what I do for a living. It's difficult.
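To make the point concrete: anything with crisp rules and a finite board yields to a few lines of brute-force search. A minimal sketch in Python, using noughts and crosses rather than chess purely to keep it short (the example and all the names in it are mine, not anything from the talk):

```python
# A toy illustration of Moravec's Paradox: games with crisp rules fall to
# a few lines of exhaustive search. Noughts and crosses stands in for chess
# purely to keep this short.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_score(board, player):
    """Search the whole game tree: +1 win, 0 draw, -1 loss for `player` to move."""
    won = winner(board)
    if won:
        return 1 if won == player else -1
    if ' ' not in board:
        return 0  # draw
    opponent = 'O' if player == 'X' else 'X'
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player
            scores.append(-best_score(board, opponent))  # their gain is our loss
            board[i] = ' '
    return max(scores)

# An empty board is a guaranteed draw with perfect play, and the machine
# works that out in moments. Ask it whether a photo contains a dog and
# there is no equivalent ten-line answer.
print(best_score(list(' ' * 9), 'X'))  # prints 0
```

The dog, the grass and the gesticulating cabbie have no rulebook to search, which is rather the point.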
Part of the problem is understanding. Both the problem and the context. A computer can be filled with all the knowledge known to humanity. But if it can't understand the question it will come up with some tiddly brained answer.
Like 42.
I'll give my own example. Many years ago I was looking at building a virtual friend system for a large mobile phone manufacturer. I was trying to explain the problem of interpreting the text messages coming in to give reasonable responses, hampered by the use of text speak. Whilst working this out I sat on Quayside in Cambridge and received a text message saying...
RUSTILTHR
Errr. It didn't help that I didn't recognise the number. After an hour I gave up; I couldn't see what russ-til-thu meant (sorry for my feeble attempt to explain how I read it). I sent a text reply.
Who is this and do you speak English? I don't understand!
The sender came back...
It's X, new number, are you still there?
Oh.
R U STIL THR
FFS. Computers, and ditzy brunettes as it transpires, can't cope with this.
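For what it's worth, the naive way a programmer starts on text speak is a lookup table of known abbreviations, which is exactly what falls over the moment the spaces go missing. A rough sketch of the sort of thing I mean (my own illustration, not anything we actually built):

```python
# A naive text-speak expander: look each token up in a little dictionary.
# Fine while the sender uses spaces; useless the moment they don't.

TXT_SPEAK = {
    'r': 'are',
    'u': 'you',
    'stil': 'still',
    'thr': 'there',
}

def expand(message):
    """Expand known abbreviations token by token."""
    words = []
    for token in message.lower().split():
        words.append(TXT_SPEAK.get(token, token))
    return ' '.join(words)

print(expand('R U STIL THR'))   # "are you still there" -- fine
print(expand('RUSTILTHR'))      # "rustilthr" -- no spaces, no tokens, no clue
```

An hour by the Quayside and I couldn't do it either, which is at least some comfort.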
Which brought us nicely to another point with regard to electronic communication. And a neat statement:
All communication is suspect.
Connected with these are two concepts.
All communication is a Turing test
The nature of authentication
An example was given of how a young man managed to "hack" (I despise that word) Sarah Palin's Yahoo account. He went to the password recovery and it asked various questions, all of which could be discovered by using Google. A fundamentally flawed approach when dealing with public figures. Regardless of how mad they are.
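The flaw is easy to spell out in code: a "security question" is just a shared-secret check, and a secret you can Google isn't a secret. A sketch of the pattern, with questions and answers invented for illustration rather than Yahoo's actual ones:

```python
# Knowledge-based "authentication" is just a shared-secret comparison.
# The questions and answers here are invented for illustration.

ACCOUNT_RECOVERY_ANSWERS = {
    'Where did you go to high school?': 'anytown high',
    'What is your date of birth?': '1 april 1970',
}

def may_reset_password(supplied):
    """Allow a password reset only if every supplied answer matches."""
    for question, expected in ACCOUNT_RECOVERY_ANSWERS.items():
        if supplied.get(question, '').strip().lower() != expected:
            return False
    return True

# For a public figure, an attacker with a search engine can fill this in
# just as well as the account owner can.
googled_answers = {
    'Where did you go to high school?': 'Anytown High',
    'What is your date of birth?': '1 April 1970',
}
print(may_reset_password(googled_answers))  # True -- which is the whole problem
```

Nothing in that check knows, or cares, who is actually typing.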
The nature of intimacy
So how can you be sure, how can you authenticate somebody you can't see? Well, we can, and do. When my flatmate pops up online I always know it is her at the other end of the connection, not because of some innate connection formed after many morning cups of tea and shared soup, but because I recognise her style, our way of interacting; it's as distinct as a signature. And as with a signature it's something that is very difficult to replicate: the information can be limited, but the nature of the information is dripping with identifying characteristics.
Whether it be the gait, a laugh, their handwriting, diction or syntax, we know who they are. I know this only too well; a few months back I asked somebody how they found me and she said through my writing style.
Computers can't do this.
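In fairness, there is a crude mechanical stab at it: stylometry, counting the little function words a writer leans on and comparing the profiles. A toy sketch of the idea, with invented snippets; note that it hands back a similarity score, not the morning-cup-of-tea certainty that it's actually her:

```python
# A crude approximation of recognising someone's style: compare two texts
# by how often they lean on the same little function words. Snippets invented.
from collections import Counter
from math import sqrt

FUNCTION_WORDS = ['the', 'and', 'of', 'to', 'a', 'i', 'it', 'that', 'so', 'but']

def profile(text):
    """Frequency of each function word, as a fraction of all words."""
    words = [w.strip('.,!?;:') for w in text.lower().split()]
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(text_a, text_b):
    """Cosine similarity between two function-word profiles, 0 to 1."""
    a, b = profile(text_a), profile(text_b)
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

known = "So I pondered it, and it was late, and I wanted to go to sleep."
unknown = "And so it went on, but I knew it was the same old nonsense."
print(round(similarity(known, unknown), 2))  # a number, not a knowing nod
```

A number between nought and one is not the same thing as knowing, the instant she types, that it's her.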
And I guess therein lies the simplicity of the concept of humanity. My view is that maybe it does just come down to context and connections. Things that only two people can know and use as a cunning, impossibly complex, cryptographic exchange to say they are who they are.
Taking this further, it shows why shared experiences are so important, because through them you don't just enrich your life but also prove, beyond reasonable doubt, that you are not a machine, or an imposter, you actually are...
The most human human.