Pepper, Jibo and Milo make up the first generation of social robots, leading what promises to be a cohort with diverse capabilities and applications in the future. But what are social robots and what should they be able to do? This article gives an overview of theories that can help us understand social robotics better.
What is a social robot?
The definition I like most describes social robots as robots for which social interaction plays a key role: the robot needs social skills to perform some kind of function. A survey of socially interactive robots (5) defines some key characteristics which summarise this group very well. A social robot should express emotions, converse at an advanced level, understand the mental models of its social partners, form social relationships, use natural communication cues, show personality and learn social capabilities.
Understanding Social Robots (1) offers another interesting perspective on what a social robot is:
Social robot = robot + social interface
In this definition, the robot has its own purpose outside of the social aspect: care robots, cleaning robots in our homes, service desk robots at an airport or mall information desk, or chef robots in a cafeteria. The social interface is simply a familiar protocol which makes it easy for us to communicate effectively with the robot. Social cues give us insight into the robot's intentions: for example, shifting its gaze towards a mop signals that the robot is about to change activity, even if it has no eyes in the classical sense.
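The "robot + social interface" idea can be read as a composition pattern: a core task layer plus a separate cue layer. The sketch below is purely illustrative; all class and method names are hypothetical, not taken from any real robotics framework.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    """The robot's core purpose, independent of any social behaviour."""
    task: str

    def perform_task(self) -> str:
        return f"performing: {self.task}"

class SocialInterface:
    """The familiar protocol layered on top: cues that signal intent."""
    def signal_intent(self, next_task: str) -> str:
        # e.g. shifting gaze towards a mop before starting to clean
        return f"gazing towards the {next_task} tool"

class SocialRobot(Robot, SocialInterface):
    """Social robot = robot + social interface."""
    def switch_task(self, next_task: str) -> list:
        cue = self.signal_intent(next_task)  # the cue precedes the action
        self.task = next_task
        return [cue, self.perform_task()]

cleaner = SocialRobot(task="vacuuming")
print(cleaner.switch_task("mopping"))
# → ['gazing towards the mopping tool', 'performing: mopping']
```

The point of the split is that the social layer adds no new function of its own; it only makes the robot's existing behaviour legible to people around it.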
These indicators of social capability can be as useful as genuine social abilities and drives in the robot. Studies show that children readily project social capabilities onto simple inanimate objects like a calculator, and a puppet becomes an animated social partner during play. In the same way, robots only need the appearance of sociability to be effective communicators (4).
How should social robots look?
Masahiro Mori defined the Uncanny Valley theory in 1970 (3). He describes how a robot's appearance and movement affect our affinity for it. In general, our affinity grows as robots look more human and less machine-like. But there is a point at which a robot looks both human-like and robot-like, and it becomes confusing for us to categorise it. This is the Uncanny Valley: the appearance is very human but also slightly 'wrong', which makes us uncomfortable. Once appearance gets past that point and looks convincingly human, likeability rises sharply again.
In Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (2) we learn that a similar effect holds between robot appearance and trustworthiness. Robots that showed more positive emotions were also more likeable. So it seems that more human-looking robots would lead to more trust and likeability.
Up to this point we have assumed that social robots should look humanoid or robotic. But what other forms can robots take? The robot should at least have a face (1) to give it an identity and make it an individual. With a face, a robot can indicate attention and imitate its social partner to improve communication. Most non-verbal cues are relayed through the face, and it creates expectations of how to engage with the robot.
The appearance of a robot helps set people's expectations of what it should be capable of, and limits those expectations to a few focused functions which can be more easily achieved. For example, a bartender robot can be expected to hold a good conversation, serve drinks and take payment, but it is probably fine if it only speaks one language, as it only has to fit the context it is in (1).
In Why Every Robot at CES Looks Alike, we learn that Jibo's oversized, round head is designed to mimic the proportions of a young animal or human to make it more endearing. It has a single eye so that it avoids triggering the Uncanny Valley effect by looking too robotic and too human at the same time. Also, appearing too human-like creates the impression that the robot will respond like a human, which robots are not yet capable of.
Another interesting example is Robin, a Nao robot used to teach children with diabetes how to manage their illness (6). The explanation given to the children is that Robin is a toddler, and they use this role to explain away any imperfections in Robin's speech capabilities.
Different levels of social interaction for robots
A survey of socially interactive robots (5) contains some useful concepts in defining levels of social behaviour in robots:
- Socially evocative: Do not show any social capabilities, but rely on the human tendency to project social capabilities onto them.
- Social interface: Mimic social norms, without actually being driven by them.
- Socially receptive: Understand social input enough to learn by imitation, but do not seek social contact.
- Sociable: Have social drivers and seek social contact.
- Socially situated: Can function in a social environment and can distinguish between social and non-social entities.
- Socially embedded: Are aware of social norms and patterns.
- Socially intelligent: Show human levels of social understanding and awareness based on models of human cognition.
Clearly, social behaviour is nuanced and complex. But to come back to the previous points, social robots can still make themselves effective without reaching the highest levels of social accomplishment.
Effect of social robots on us
To close, de Graaf poses a thought-provoking question (4):
“How will we share our world with these new social technologies and how will a future robot society change who we are, how we act and interact—not only with robots but also with each other?”
It seems that we will first and foremost shape robots by our own human social patterns and needs. But we cannot help but be changed as individuals and a society when we finally add a more sophisticated layer of robotic social partners in the future.
1. Understanding Social Robots (Hegel, Muhl, Wrede, Hielscher-Fastabend, Sagerer, 2009)
2. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015)
3. The Uncanny Valley (Mori, 1970)
4. An Ethical Evaluation of Human–Robot Relationships (de Graaf, 2016)
5. A survey of socially interactive robots (Fong, Nourbakhsh, Dautenhahn, 2003)
6. Making New “New AI” Friends: Designing a Social Robot for Diabetic Children from an Embodied AI Perspective
7 thoughts on “Understanding Social Robotics”
Very nice article, concise and referenced.
Would be nice to go further 🙂
Thank you! What do you think the next step would be?
Would be nice to have a deeper discussion of the social behaviours listed, the links between them, and how far we are from mastering each of these behaviours.
I would add one more thing that I think people tend to forget when discussing these subjects: collaboration. There will be more than one robot around, so we must think of interactions (social or not) between multiple robots and humans. Food for thought.
Yes I see what you mean – I have something like it for cognitive capabilities:
Challenge accepted, and thanks for the tips