
Common Sense, The Turing Test and the Quest for Real AI - Hector Levesque *****

It was fascinating to read this book immediately after Ed Finn's What Algorithms Want. Both are books by academics on aspects of artificial intelligence (AI) - but where reading Finn's book is like wading through intellectual treacle, this one is a delight. It is short, to the point and beautifully clear, and provides just as much in the way of insight without any of the mental anguish.

The topic here is the nature of artificial intelligence: why the currently dominant approach of adaptive machine learning can never deliver true AI, and what the potential consequences are of assuming that learning from big data is enough to act in a genuinely smart fashion.

As Hector Levesque points out, machine learning is great at handling everyday non-exceptional circumstances - but falls down horribly when having to deal with the 'long tail', where there won't be much past data to learn from. For example (my examples, not his), a self-driving car might cope wonderfully with typical traffic and roads, but get into a serious mess if a deer tries to cross the motorway in front of it, or should the car encounter Swindon's Magic Roundabout.

There is so much here to love. Although the book is compact (and rather expensive for its size), each chapter delivers excellent considerations. Apart from the different kinds of AI (I love that knowledge-based AI has the acronym GOFAI, for 'good old-fashioned AI'), this takes us into considerations of how the brain works, the difference between real and fake intelligence, learning and experience, symbols and symbol processing and far more. To give just one small instance of something that intrigued me, Levesque describes a very simple computer program that generates quite a complex outcome. He then imagines applying the kinds of approaches we use to try to understand human intelligence - both psychological and physiological - to this far simpler computer equivalent, showing how they would fail to uncover what was really happening behind the outputs.
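To get a feel for what Levesque is driving at (his actual program isn't reproduced in the book review, so this is purely an illustrative stand-in of my own), consider an elementary cellular automaton such as Rule 110: a handful of lines of code and a trivially simple update rule, yet output intricate enough that observing the behaviour alone - the equivalent of the psychological and physiological probing Levesque describes - would tell you very little about the rule generating it.

```python
# Illustrative stand-in, not the program from the book: Rule 110,
# a one-byte update rule that produces famously intricate patterns.
WIDTH, STEPS = 64, 32
RULE = 110                      # the update table packed into a single byte

cells = [0] * WIDTH
cells[WIDTH // 2] = 1           # start from a single live cell in the middle

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in cells))
    # Each cell's next state is the bit of RULE indexed by its
    # three-cell neighbourhood (left, centre, right) read as a number 0-7.
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```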

For too long, those of us who take an interest in AI have been told that the 'old-fashioned' knowledge-based approach was a dead end, while the modern adaptive machine learning approach - the way that, for instance, programs like Siri and Alexa appear to understand English - is the way forward. But as the self-driving car example above shows, anything providing true AI has to be able to cope reliably and predictably with odd and relatively unlikely circumstances - because while any individual unlikely occurrence will probably never happen, the chances are that something unlikely will come along sooner or later. And when it does, it takes knowledge to select the most appropriate action.

Highly recommended.

Review by Brian Clegg
