Robot Encounter and Transparency

Over the last year or so, one of my main research interests has been machine transparency. We’ve been using the R5 Robot in various ‘robot encounter’ experiments, both online using video recordings and in a live environment. We’ve shown that people do not naturally form good mental models of robots on first encounter. They tend to overestimate the robot’s practical and cognitive abilities, and often completely fail to recognise its purpose, as embedded in its goals. Depending on the tasks the robot is supposed to be doing, this misunderstanding can lead to miscalibrated trust, suspicion, fear or merely a dismissive attitude. Whatever the negative response, the result will be that the robot is either misused, or not used at all.

[Image: Scene with the R5 robot in an enclosure with other objects]

It is therefore important that robots are designed to convey both their purpose and their capabilities (and implicitly or explicitly their limitations) to the people around them, as they operate. They must be designed to be transparent. Such machines will enable us to properly calibrate our trust and expectations.

This need for transparency is well put in the Principles of Robotics – the result of an EPSRC-funded initiative to create “rules advising those who design, sell and use robots about how they should act.”

We’ve been using a new tool that shows a simple, abstracted, real-time visualisation of a robot’s artificial intelligence (AI) as it operates. We’ve found that observers with no experience of robots benefit significantly from using the tool, known as ABOD3, created by my colleague Andreas Theodorou.
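To give a flavour of what a real-time transparency feed might look like, here is a minimal sketch of a priority-based action-selection loop that emits a human-readable trace of the currently active goal as it runs. This is purely illustrative: the names (`Behaviour`, `select_and_trace`, the example behaviours) are hypothetical and do not reflect the actual design of ABOD3 or the R5 robot’s control system.

```python
# Illustrative sketch only: a toy priority-based action-selection loop
# that emits a transparency trace as it runs. All names here are
# hypothetical, not drawn from ABOD3 or the R5 robot.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Behaviour:
    name: str                            # label shown to observers
    priority: int                        # higher wins when several trigger
    triggered: Callable[[Dict], bool]    # senses -> should this fire?

def select_and_trace(behaviours: List[Behaviour], senses: Dict) -> str:
    """Pick the highest-priority triggered behaviour and report it."""
    active = [b for b in behaviours if b.triggered(senses)]
    if not active:
        print("TRACE: idle (no behaviour triggered)")
        return "idle"
    chosen = max(active, key=lambda b: b.priority)
    print(f"TRACE: goal '{chosen.name}' active (priority {chosen.priority})")
    return chosen.name

behaviours = [
    Behaviour("avoid obstacle", 10, lambda s: s["range_cm"] < 20),
    Behaviour("seek human", 5, lambda s: s["face_detected"]),
    Behaviour("wander", 1, lambda s: True),
]

# An obstacle at 15 cm outranks everything else:
select_and_trace(behaviours, {"range_cm": 15, "face_detected": False})
# prints: TRACE: goal 'avoid obstacle' active (priority 10)
```

The point of the trace lines is that an observer (or a visualisation layered on top of them) can see *why* the robot is doing what it is doing, moment to moment, rather than having to guess from its movements alone.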

Next week I’ll be conducting a further experiment at the At-Bristol Science Learning Centre, together with Vivienne Rogers from Swansea University. This time we’ll be investigating the benefit of vocalising the robot’s transparency output. Although the robot will be uttering sentences, notice I have carefully avoided saying that the robot will be speaking, because that would imply much more than the robot is capable of.

It’ll be very interesting to capture data from this muttering robot experiment.

For further information also see the Bath AmonI project research pages.

Rob Wortham

December 2016
