I’ve just spent a couple of days at the ICAPS PlanRob workshop. About 35 of us met at King’s College in London, and my paper on the Instinct Planner was one of 23 accepted papers.
Manuela Veloso gave the keynote talk. She talked about CoBots (robots that must cooperate with humans to get tasks done), and also RoboCup Soccer. I particularly liked her opening remark: “Where are the robots? I mean, where are the robots?”
I agree. With all this talk of robots, why do we not see them in our everyday lives? Why are they not commonplace?
Perhaps the answer is partially found by considering the challenges that robots must face in order to move around autonomously in our human centric world, interact with us, and most importantly do something useful for us. We forget just how intelligent we humans are when we do these things.
As I moved around King’s College, I did not have any kind of accurate map in my head. I’d never been there before, and although the conference brochure had a map, I made no effort to memorise it. I simply found the building, identified someone who looked like they should know where I was supposed to go, and asked them. They gave me directions verbally, and that was enough. Then I read some signs, generally followed the flow of people who looked like they were also attending the conference, and found my way around. Even now, I have no detailed, accurate map, just a few sketchy facts: the toilets are by the main entrance, the seminar room is some way down the 2nd floor corridor on the right, there is a cafe on the 2nd floor, and so on. The rest I work out when I’m there.
As Rod Brooks pointed out a long time ago, “the world is its own best model”; we don’t need to make another detailed one in our head. However, to exploit the information available in the world, I’ve used written language (signs), spoken language (directions), social intelligence (identifying a knowledgeable person to speak to, and also following people like me), common sense (knowing that workshops are in small seminar rooms and toilets use common signage) and so on. Without all these skills I’d need detailed maps and instructions. Of everywhere. All the time. Such maps do not exist. We forget how smart we are.
This brings me on to the workshop itself, which was all about planning as applied to robots. Much of this was about motion planning, either for robot arms (to pick up and manipulate objects), or for navigation (moving around an environment to get from A to B). These apparently simple planning tasks still exercise the best minds, and producing useful plans in a timely manner is difficult. As the world changes (e.g. human ‘obstacles’ move around) replanning is necessary, and getting these systems to work in real environments is an ongoing research problem requiring powerful computers running complex algorithms. Watch the RoboCup robots and you soon see how hard it is.
Compare this with the R5 robot. It has a tiny, unlearning ‘Darwinian mind’ with only 8,192 bytes of memory, yet it can move around a room using several strategies to avoid obstacles, whilst looking out for humans, stopping to interact with them if it finds one, and resting from time to time to conserve its batteries. Its programming is relatively simple; in fact the plans within it are designed visually, drawn in a freeware drawing program. It uses a different approach – reactive planning. Much as I react to my environment as I encounter it, so does the little R5. It has pre-defined simple behaviours and a pre-defined reactive plan to invoke them, depending on its internal motivations and its sensory inputs. Sure, it would need some powerful subsystems if it had to read signs, speak to people and understand their replies, but nevertheless the basic idea of interacting with the world as the world presents itself to us is a powerful and general purpose one.
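To give a flavour of the idea, here is a minimal sketch of priority-based reactive plan selection. This is an illustration in the spirit of reactive planners like Instinct and POSH, not the R5’s actual code; the behaviour names, priorities, and sensor fields are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Behaviour:
    name: str
    priority: int                          # higher values win
    releaser: Callable[[Dict[str, bool]], bool]  # sense predicate: should this fire?

def select_behaviour(behaviours: List[Behaviour], senses: Dict[str, bool]) -> str:
    """Return the highest-priority behaviour whose releaser is satisfied."""
    for b in sorted(behaviours, key=lambda b: -b.priority):
        if b.releaser(senses):
            return b.name
    return "idle"

# A toy reactive plan, loosely modelled on the R5's described behaviours.
plan = [
    Behaviour("avoid_obstacle", 3, lambda s: s["obstacle_near"]),
    Behaviour("interact_with_human", 2, lambda s: s["human_detected"]),
    Behaviour("rest", 1, lambda s: s["battery_low"]),
    Behaviour("roam", 0, lambda s: True),  # default when nothing else fires
]

# Each control cycle, the planner re-reads the senses and re-selects,
# so the robot responds to the world as it currently is.
print(select_behaviour(plan, {"obstacle_near": False,
                              "human_detected": True,
                              "battery_low": True}))
```

Because selection runs afresh every cycle against live sensor data, there is no stored world model to go stale: when a human ‘obstacle’ appears, the higher-priority behaviour simply takes over on the next cycle. Real reactive planners layer this into hierarchies of drives, competences and action patterns, but the core loop is this simple.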
The Nobel Prize winner Niko Tinbergen observed this reactive, hierarchical, intelligent behaviour by studying sticklebacks and gulls over 60 years ago. His 1951 book, The Study of Instinct, inspired me to create the Instinct reactive planner, based on Joanna Bryson’s POSH planner.
Within our Artificial Models of Natural Intelligence (AmonI) research group, we try to learn what we can from nature about intelligence, because nature has been producing intelligent behaviour for far longer than our AI, or even humans themselves, have been around. Evolution has solved some pretty complex problems over the last few billion years; let’s use some of that to help us build robots.
We certainly do need navigational and other types of planning algorithms for certain tasks, but in an uncertain and constantly changing world, the reactive planning paradigm has considerable mileage yet for robotics.