I remember vividly when I first became interested in psychology. It was 1985 and I was a final-year computer science undergraduate. One of the modules I took covered artificial intelligence and expert systems. The module was aimed at two distinct audiences: people like me, who’d spent the previous two and a half years being taught the fundamentals of programming, computational theory, electronics and robotics, and MSc psychology students who’d spent their academic careers studying … well, I had absolutely no idea what they’d been studying, if I’m honest.
At the end of the first lecture, with all the computer scientists sat on one side of the room and the psychologists on the other, the lecturers (one from each department) asked if there were any questions about the module. One of the psychologists asked if we would be using or writing a computer program that simulated the way the human brain worked. Remember, this question was being asked in 1985, when the most powerful computer in the university probably had less computing power than the iPhone you’re trying to read this on. Worse, that amount of raw computing power was considered sufficient to support 20 or 30 people writing and running code at the same time. But I digress…
There were two very distinct reactions to the question. All of the psychologists thought that this would be a really good thing to do, whereas all of the computer scientists (me included) thought that this was a ludicrous idea. We all laughed heartily. Why would anyone want to write a computer program that imitated the way the human brain worked? How inefficient! If humanity was ever going to get anything useful from artificial intelligence, surely it had to be precisely that – artificial – using efficient algorithms to make the best use of scarce computing resources.
By the end of the module there was rather more understanding between the two camps, and I’d filed away a note in my brain to have a proper look at this cognitive psychology stuff when I had some time (which didn’t happen until 2007 – life has a habit of getting in the way of study intentions, of course). It no longer seemed silly to suggest that there was value in creating computer programs to mimic the way the human brain tackled problems.
Fast forward to today – and even with the vast increase in computing power available, artificial intelligence still seems to be mired in the mid-1980s. Even when it does surface in the media it seems to be treated as a joke item. For example, Radio 4’s Today programme last Friday morning had a truly toe-curling interview between the researcher behind Cleverbot and John Humphrys. The image below is a conversation I’ve just had with it. Cleverbot speaks first. Its output is rather unimpressive, frankly, even if the technology and algorithms behind it are very impressive indeed.
For computer programs to be truly first-class citizens in a conversation, simply passing a stimulus-and-response test like the one envisaged by Turing isn’t anything like enough. Using language properly is about so much more than combing through large databases of previous conversations and finding responses that appear to make syntactic and semantic sense. As discursive psychology points out, we use language strategically to achieve particular social outcomes. Humans do this by selecting an appropriate discursive repertoire, taking a position within it, and then applying language to justify our actions or to blame others. It’s hard to see how a computer program could ever be so in tune with the subtleties of discourse, and with how it is used in social interactions, as to be truly convincing.
Perhaps my initial scepticism about modelling the human brain with computer programs (or ‘apps’ as I suppose we have to call them these days … how language changes!) wasn’t so far short of the mark after all. Why should we bother to create apps that try to simulate the way that the brain works when other ways of solving problems with a computer are so much more efficient and reliable?