Theory of Mind in Human Robot Interactions

RoboRabbitLabs

Challenges and opportunities that human perception provides to robotic design

This is an essay that I wrote for a course in 2015.

Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley

Introduction

In this essay I will describe some aspects of human perception and cognition that bear on human-robot interaction. I would like to discuss the findings on the Uncanny Valley as described in the provided article, Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015), and the parallels we might draw to other aspects of human-robot interaction. I will consider the application of theory of mind to the Uncanny Valley concept. Furthermore, I would like to discuss the possibility of robots making use of human cognitive and perceptual limitations and biases to pursue their goals more effectively.

Perception

Perception is the…



Robots spotted in the wild: Bellabot by Pudu

RoboRabbitLabs

In hindsight, I regret not blogging more about the robots I’ve discovered on my travels and in working situations. This is where my joy and enthusiasm about robots really get fuelled: when I interact with them spontaneously. Recently, with friends at our favourite Asian restaurant (Mchi in Ijburg – FYI the food was great and every plate was left empty!), we encountered some Bellabots working as assistants to the restaurant staff. I was with my buddy Vikram Radhakrishnan, who is also crazy about robots, and my partner Renze de Vries, who has made quite a few robots himself (check out his YouTube channel here). We worked on the Anki Vector project together back in 2019 – time flies when you’re in lockdown. Imagine our excitement to find these amazing robots serving dinner to patrons as if it was the most natural thing in the world! That really made…


Emotions in Robots – Prof Hatice Gunes for the Royal Institution

RoboRabbitLabs

I recently subscribed to the Royal Institution lecture series, and whenever I’m able to catch some of the lectures, the content and moderation are always incredibly good. You can watch the Royal Institution’s science videos here on YouTube.

Go to their website to watch some of these talks live.

Recently, I watched an excellent lecture on Creating Emotionally Intelligent Technology by Professor Hatice Gunes, leader of the Affective Intelligence & Robotics Lab at the University of Cambridge.

Here is a link to the recording if you’d like to watch the video yourself: Vimeo video link

To start off, Prof Gunes does an amazing job of introducing emotions in technology, the work done up to now, and why we’d want to achieve this goal. She then covers the work her own lab has been doing to take the field further, which is quite interesting.


European Robotics Forum 2019

RoboRabbitLabs

ERF2019 took place at the Marriott hotel in Bucharest. As usual, the event was divided into workshops and an exhibition area with different robot-related organisations represented, including European organisations, robot and parts manufacturers, technology hubs, universities and governmental institutions. Check out my post on ERF2017 here.

The main topics for this year included:

  • Robotics and AI
  • Robotics in industry, logistics and transport
  • Collaborative robots
  • Ethics, liability, safety, standardization
  • Marine, aerial, space, wearable robotics

Robotics and AI

ERF2019 and ERF2017 were miles apart in terms of awareness of AI. The EU has identified AI as a key area in which to remain competitive with the US and China, and has allocated a large amount of funding to it. Lighthouse domains for investment include agrifood, inspection and maintenance, and healthcare.

They seek to build partnerships across Europe, identify the key players and increase synergies between member states. They have set up a collaboration…


Women in AI in Amsterdam Launch

On 4 October 2018 I attended the launch of WEtalk Women in AI Amsterdam at TQ. The meetup was organised by the vivacious Evgenia Logunova, the WAI Amsterdam ambassador.

There were three main parts to the launch – an address by Caroline Lair, co-founder of WAI, a description of the AI landscape by Dr. Carly E. Howard, and a special remote address by Sophia Hanson, the robot from Hanson Robotics.

Women in AI (WAI) aims to equalise the representation of women in the tech industry through education, networking, research and blogging. They start young, with programmes for girls of school-going age. Their intention is to systematically correct the funnel-shaped attrition of women in STEM careers by building skills and confidence. This blogpost by Moojan Asghari describes beautifully how WAI came about. Often, women don’t have the confidence to present. With the WEtalk sessions, WAI aims to give women the opportunity to present and overcome their fears.


Dr Carly Howard from the venture capital firm Asgard described what is happening with AI startups globally, and put it into the European context for us.


The techie women in AI

Women in AI Amsterdam kickoff – the women in AI in AMS

I met a very cool lady, Arti Nokhai, who applies IBM Watson to solve real-world problems. She is working on an application for parole case workers in the Netherlands, who prescribe rehabilitation activities for parolees. The case workers have more cases than they can cope with, and not enough time to read case files and make recommendations. This is where they are applying AI to give recommendations on rehab activities, to ensure that parolees get the help they deserve. In this instance, as in the legal and medical fields, AI is used to digest large amounts of text and advise, playing a supporting role in human decision making.

Sophia Hanson

One of the highlights of the evening was a special message from Sophia Hanson, the humanoid robot made by Hanson Robotics.

This address gave me goosebumps – it’s a wise message from Sophia’s creators, with some points worth sharing:

  • Diversity and inclusion in AI, reduction of bias
  • Actively avoiding perpetuating systems of oppression
  • Appreciating our uniqueness as human beings

Sophia obviously has no gender, but ‘identifies’ as a woman. When I look at her I see her as a woman too – this makes me think about others who identify as women but are not seen as women. How can a robot achieve this when some people cannot? It makes me sad to think that a robot, with only the appearance of life and wisdom, can be treated better than many living creatures. However, this reflection is where Sophia’s true value lies – she is an artwork that should make us think about the nature of humanity and how different yet similar we all are. We should treat each other far better than we do.

Quality advice for Robotics startups

Robot recharge

I have discussed the topic of quality and testing with a few robotics startups, and the conversation tends to reach this consensus: formal quality assurance processes have no place in a startup. While I totally appreciate this view, this blogpost provides an alternative approach to quality for robotics startups.

The main priority of many startups is to produce something that will attract investment – it basically has to work well enough to get funding. Investors, customers and users can be very forgiving of quality issues, especially where emerging tech is involved. Startups should deliver the right level of quality for now and prepare for the next step.

In a startup, there is unlikely to be a dedicated tester or quality strategy. Developers are the first line of defence for quality – they must bake it into the proof-of-concept code, for instance with unit tests. The developers and founders probably do some functional validation. They might encounter more extreme use cases when demoing the functionality, and might do limited testing with real-life users.

What are the main priorities of the company at this phase, and the matching levels of quality? The product’s main goal, initially, is to fulfil the requirements of application development and demoing, and to be effective and usable for its early adopters. Based on these priorities, I’ve come up with some quality aspects that could be useful for robotics startups.

A good quality demo

Here are some aspects of quality which could be relevant for demoing:

Softbank Pepper

  1. Portable setup
    1. Can be transported without damaging the robot and supporting equipment
    2. Can be explained at airport security if needed
  2. Works under variable conditions in customer meeting room
    1. Poor wifi connections
    2. Power outlets not available
    3. Outside of company network
    4. Uneven floors
    5. Stairs
    6. Noise
    7. Different lighting
    8. Reflective surfaces
  3. Will work for the duration of the demo
  4. Demo will be suitable for audience
  5. Demoed behaviour will be visible and audible from a distance, e.g. in a boardroom
  6. Mode can be changed to a scripted mode for demos
  7. Functionality actually works and can be shown – a checklist of basic functionality takes away the guesswork, without requiring heavyweight test cases (a minimal sketch of such a checklist follows below)
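
As promised in the last item, here is a minimal sketch of such a pre-demo checklist in Python. The robot API (DemoRobot, battery_level, say) is entirely hypothetical – substitute the calls from your own robot’s SDK.

```python
# Minimal pre-demo smoke-test checklist. Each entry pairs a description
# with a check that returns True/False. DemoRobot and its methods are
# hypothetical stand-ins for a real robot SDK.
class DemoRobot:
    def battery_level(self):
        return 0.9  # stub; replace with a real SDK call

    def say(self, text):
        print(f"Robot says: {text}")
        return True

def run_checklist(robot):
    checks = [
        ("Battery above 80%", lambda: robot.battery_level() > 0.8),
        ("Speech output works", lambda: robot.say("Hello, demo audience!")),
        ("Scripted demo mode available", lambda: True),  # stub check
    ]
    for description, check in checks:
        print(f"[{'PASS' if check() else 'FAIL'}] {description}")

run_checklist(DemoRobot())
```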

Quality for the users and buyers

The robot needs to prove itself fit for operation:

  1. Functionality works
    1. What you offer can be suitably adapted for the customer’s actual scenario
      1. Every business has its own processes, so the bot will probably have to adapt to match the terminology, workflows and scenarios of the user’s processes
      2. Languages can be changed
      3. Bot is capable of conversing at the level of the target audience (e.g. children, elderly)
      4. Bot is suitable for the context where it’s intended to work, like a hospital or school – it will not make sudden movements or catch on cables
  2. Reliability
    1. Users might tolerate failures up to a certain extent, until they become too annoying or repetitive, or cannot be recovered from
    2. Failures might be jarring for vulnerable users, like the mentally or physically ill
    3. Is the robot physically robust enough to be interacted with in unplanned ways?
  3. Security
    1. Will port scanning or other exploitative attacks easily reveal vulnerabilities that could result in unpredictable or harmful behaviour?
    2. Can personal data be hijacked through the robot?
  4. Ethical and moral concerns
    1. Users might not understand that there is no consciousness interacting with them, and may think the robot is autonomous
    2. Users might assume their interactions are private, while they may in fact be reviewed for analysis purposes
    3. Users might not realise their data will be sent to the cloud and used for analysis
  5. Legal and support issues
    1. What kind of support agreement does the service provider have with the robot manufacturer and how does it translate to the purchaser of the service?

Decos robots, Robotnik and Eco

Quality to maintain, pivot and grow

During these cycles of demoing to prospects, defects will be identified and need to be fixed. Customers will give advice or provide input on what they were hoping to see and features will have to be tweaked or added. The same will happen during research and test rounds at customers, and user feedback sessions.

The startup will want to add features and fix bugs quickly. For this, it helps to have good discipline and clean, maintainable code, with at least unit tests to give quick feedback on the quality of a change. Hopefully they will also have some functional (and a few non-functional) acceptance tests.

When adoption increases, the startup might have to pivot quickly to a new application, or scale to more than one customer or use case. At this phase, a lot of refactoring will probably happen to make the existing codebase scalable. In this case, good unit tests and component tests will be your best friends, ensuring you can maintain the stability of the functionality you already have (as mentioned in this TechCrunch article on startup quality).
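
As a sketch of what such a quick-feedback unit test might look like – the greeting function is hypothetical, factored out of the robot’s interaction code so it can run in milliseconds without hardware, with pytest assumed as the runner:

```python
# choose_greeting is a hypothetical pure function extracted from the
# robot's interaction code so it can be tested without the robot itself.
def choose_greeting(hour, language="en"):
    greetings = {"en": ("Good morning", "Good evening"),
                 "nl": ("Goedemorgen", "Goedenavond")}
    morning, evening = greetings[language]
    return morning if hour < 12 else evening

def test_greeting_respects_time_and_language():
    assert choose_greeting(9) == "Good morning"
    assert choose_greeting(20, language="nl") == "Goedenavond"
```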

robot in progress

Social robot companies are integrators – ensure quality of integrated components

As a social robotics startup, if you are not creating your own hardware, OS, or interaction and processing components, you should become familiar with the quality of any hardware or software components you are integrating. Some basic integration tests will help you stay confident that the basics still work when an external API is updated, for instance (see the sketch below). It’s also worth considering your liability when something goes wrong somewhere in the chain.
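
A minimal sketch of such an integration test, assuming a hypothetical speech-vendor health endpoint and using the widely available requests library:

```python
import requests

# Contract check against an external API (the URL and response fields are
# hypothetical). Run it whenever the provider announces an update, to
# confirm the fields your robot code depends on are still present.
def test_speech_api_contract():
    response = requests.get(
        "https://api.speech-vendor.example.com/v1/health", timeout=5)
    assert response.status_code == 200
    body = response.json()
    assert "status" in body        # our integration code branches on this
    assert "api_version" in body   # and logs this one
```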

Early days for robot quality

To round up, it does seem to be early days to be talking about social robot quality. But it’s good for startups to be aware of what they are getting into, because this topic will no doubt become more relevant as their companies grow. I hope this post helps robotics startups, now and in the future, to stay in control of their quality as they grow.

Feel free to contact me if you have any ideas or questions about this topic!

Thanks to Koen Hendriks of Interactive Robotics, Roeland van Oers at Ready for Robotics and Tiago Santos at Decos, as well as all the startups and enthusiasts I have spoken to over the past year for input into this article.

European Robotics Forum 2017

The European Robotics Forum (ERF2017) took place between 22 and 24 March 2017 at the Edinburgh International Conference Centre.

The goals were to:

  • Ensure there is economic and societal benefit from robots
  • Share information on recent advancements in robotics
  • Reveal new business opportunities
  • Influence decision makers
  • Promote collaboration within the robotics community

The sessions were organised into workshops, encouraging participants from academia, industry and government to cross boundaries. In fact, many of the sessions had an urgent kind of energy, with the focus on discussions and brainstorming with the audience.

Edinburgh castle at night

Broad spectrum of robotics topics

Topics covered in the conference included: AI, Social Robotics, Space Robotics, Logistics, Standards used in robotics, Health, Innovation, Miniaturisation, Maintenance and Inspections, Ethics and Legal considerations. There was also an exhibition space downstairs where you could mingle with different kinds of robots and their vendors.

The kickoff session on the first day had some impressive speakers – leaders in the fields of AI and robotics, covering business and technological aspects.

Bernd Liepert, the head of euRobotics, covered the economic aspects of robotics, stating that robot density in Europe is among the highest in the world. Europe has a 38% share of the worldwide professional robotics market, with more startups and companies than the US. Service robotics already accounts for over half the turnover of industrial robotics. Since Europe doesn’t have enough institutions to develop innovations in all areas of robotics, combining research and transferring it to industry is key.

The next speaker was Keith Brown, the Scottish Cabinet Secretary for the Economy, Jobs and Fair Work, who highlighted the importance of digital skills to Scotland. He emphasised the need for everyone to benefit from the growth of the digital economy, and from the increase in productivity that it should deliver.

Juha Heikkila from the European Commission explained that, in terms of investment, the EU robotics programme is the biggest in the world. Academia and industry should be brought together to drive innovation through innovation hubs, which will bring technological advances to companies of all sizes.


Raia Hadsell of DeepMind gave us insight into how deep learning can be applied to robotics. She conceptualised the application of AI to problem areas like speech and image recognition as a mapping of inputs (audio files, images) to outputs (text, labels). The same model can be applied to robotics, where the input is sensor data and the output is an action. For more insight, see this article about a similar talk she gave at the Re•Work Deep Learning Summit in London. She showed us that learning time can be reduced for robots by training neural networks in simulation and then adding neural network layers to transfer the learning to other tasks.

Deep learning tends to be seen as a black box in terms of traceability, and therefore risk management, because people think that neural networks produce novel and unpredictable output. Hadsell assured us, however, that introspection can be used to test and verify each layer in a neural network, since a given input always produces output within a known range.
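
One way to picture this kind of introspection (a toy numpy sketch, not DeepMind’s actual method): push a batch of bounded inputs through the network and verify that every layer’s output stays within its expected envelope.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network; real weights would come from training.
layers = [
    {"W": rng.normal(size=(8, 16)), "b": np.zeros(16)},
    {"W": rng.normal(size=(16, 4)), "b": np.zeros(4)},
]

def forward_with_introspection(x, bound=1.0):
    """Run the net, asserting each layer's output stays in a known range."""
    for i, layer in enumerate(layers):
        x = np.tanh(x @ layer["W"] + layer["b"])  # tanh bounds output to [-1, 1]
        assert np.all(np.abs(x) <= bound), f"layer {i} out of range"
        print(f"layer {i}: output in [{x.min():.2f}, {x.max():.2f}]")
    return x

# Probe with a batch of bounded, sensor-like inputs.
forward_with_introspection(rng.uniform(-1, 1, size=(32, 8)))
```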

The last talk in the kickoff, delivered by Stan Boland of Five AI, brought together the business and technical aspects of self-driving cars. He mentioned that the appetite for risky tech investment seems to be increasing, with a fivefold growth in investment over the past five years. He emphasised the need for exciting tech companies to retain European talent and advance innovation, reversing the trend of top EU talent migrating to the US.

On the technology side, Stan gave some insight into advances in perception and planning for self-driving cars. In the picture below, you can see how stereo depth mapping is done at Five AI, using input from two cameras to map the depth of each pixel in the image. They create an aerial projection of what the car sees right in front of it and use this bird’s-eye view to plan the path of the car from ‘above’. Some challenges remain, however, with 24% of cyclists still being misclassified by computer vision systems.
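
The geometry underlying stereo depth mapping is the classic pinhole relation, depth = f · B / d. A small sketch (the numbers are illustrative, not Five AI’s):

```python
import numpy as np

# depth = f * B / d: f is the focal length in pixels, B the baseline
# between the two cameras in metres, d the per-pixel disparity in pixels.
def disparity_to_depth(disparity, focal_px=700.0, baseline_m=0.3):
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)  # zero disparity = infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

print(disparity_to_depth([70.0, 7.0]))  # [ 3. 30.] metres
```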

With that, he reminded us that full autonomy in self-driving cars is probably out of reach for now. Assisted driving on highways and other easy-to-classify areas is probably the most achievable goal. Beyond this, the cost to the consumer becomes prohibitive, and truly autonomous cars will probably only be sustainable in a services model, where the costs are shared. Even in this model, training data could probably not be shared between localities, given the very specific road layouts and driving styles in different parts of the world (e.g. Delhi vs San Francisco vs London).


An industry of contrasts

This conference was about overcoming fragmentation and benefitting from cross-domain advances in robotics, to keep the EU competitive. There were contradictions and contrasts in the community which gave the event some colour.

Each application of robotics that was represented – drones, self-driving cars, service robotics, industrial robotics – seemed to have its own approaches, challenges and phase of development. In this space, industrial giants find themselves collaborating with small enterprises – it takes many different kinds of expertise to make a robot. The small companies cannot afford the effort needed to conform to industry standards, while the larger companies would go out of business if they did not conform.

A tension existed between the hardware and software sides of robotics – those from an AI background have some misconceptions to correct, such as how traceable and predictable neural networks are. The ‘software’ people had a completely different approach to the ‘hardware’ people, as development methodologies differ. Sparks flew as top-down legislation collided with bottom-up industry approaches, like the Robotic Governance movement.

The academics in robotics sometimes dared to bring more idealistic ideas to the table that would benefit the greater good, but which might not be sustainable. The ideas of those from industry tended to be mindful of cost, intellectual property and business value.

Two generations of roboticists were represented – those who had carried the torch in less dramatic years, and the upcoming generation, surging forward impatiently. There was conflict and drama at ERF2017, but also loads of passion and commitment to bringing robotics safely and successfully into our society. Stay tuned for the next post, in which I will provide some details on the sessions, including more on ethics, legislation and standards in robotics!

Making social robots work

 


Mady Delvaux, in her draft report on robotics, advises the EU that robots should be carefully tested in real life scenarios, beyond the lab. In this and future articles, I will examine different aspects of social robot requirements, quality and testing, and try to determine what is still needed in these areas.

Why test social robots?

In brief, I will define robot quality as: the robot does what it’s supposed to do, and doesn’t do what it shouldn’t. For example, when you press the robot’s power button from an offline state, does the robot turn on and the indicator light turn green? If you press the button twice in quick succession, does the robot still behave acceptably? Testing is the analysis activity that determines the quality level of what you have produced – is it good enough for the intended purpose?
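
The power-button example can be written down as concrete, executable checks. A sketch, with a hypothetical Robot class standing in for the real device:

```python
# "Does what it should, doesn't do what it shouldn't" as executable checks.
# Robot is a hypothetical stand-in for the real device.
class Robot:
    def __init__(self):
        self.state, self.light = "offline", "off"

    def press_power(self):
        if self.state == "offline":
            self.state, self.light = "on", "green"
        # A rapid second press should be ignored, not cause a crash.

def test_single_press_turns_on():
    robot = Robot()
    robot.press_power()
    assert (robot.state, robot.light) == ("on", "green")

def test_rapid_double_press_still_acceptable():
    robot = Robot()
    robot.press_power()
    robot.press_power()  # pressed quickly twice
    assert robot.state == "on"  # no unexpected shutdown or error state
```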

Since social robots will interact closely with people, strict standards will have to be complied with to ensure they don’t have unintended negative effects. Some standards are already being developed, like ISO 13482:2014 on safety requirements for personal care robots, but we will need many more to help companies ensure they have done their duty to protect consumers and society. Testing will give insight into whether these robots meet the standards, and new test methods will have to be defined.

What are the core features of the robot?

The first aspect of quality we should measure is if the robot fulfils its basic functional requirements or purpose. For example, a chef robot like the robotic kitchen by Moley would need to be able to take orders, check ingredient availability, order or request ingredients, plan cooking activities, operate the stove or oven, put food into pots and pans, stir, time cooking, check readiness, serve dishes and possibly clean up.

 

A robot at an airport which helps people find their gate and facilities must be able to identify when someone needs help, determine where they are trying to go (perhaps by talking to them, or scanning a boarding pass), plan a route, communicate the route by talking, indicating with gestures, or printing a map, and know when the interaction has ended.

 

With KLM’s Spencer, the guide robot at Schiphol airport, benchmarking was used to ensure the quality of each function separately. Later, the robot was put into live situations at Schiphol and tracked to see if it was planning its movement correctly. A metric of distance travelled autonomously vs non-autonomously was used to evaluate the robot (a sketch of this metric follows below). Autonomy will probably be an important characteristic to test, and to make users aware of, in the future.
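
A sketch of how such an autonomy metric might be computed from a drive log (the log format is made up for illustration):

```python
# Share of distance covered autonomously vs under manual control.
drive_log = [
    ("autonomous", 120.0),  # metres
    ("manual", 15.0),       # operator took over
    ("autonomous", 240.0),
]

def autonomy_ratio(log):
    total = sum(metres for _, metres in log)
    autonomous = sum(metres for mode, metres in log if mode == "autonomous")
    return autonomous / total if total else 0.0

print(f"{autonomy_ratio(drive_log):.0%} of distance travelled autonomously")
```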

Two user evaluation studies were done with Spencer, and feedback was collected about the robot’s effectiveness at guiding people around the airport. Some people, for example, found the speed of the robot too slow, especially in quiet periods, while others found the robot too fast, especially for families to follow.

Different environments and social partners

How can we ensure robots function correctly in the wide variety of environments and interaction situations that we encounter every day? Amazon’s Alexa, for example, suffers from a few communication limitations, like knowing whether she is taking orders from the right user, and conversing with children.

At our family gatherings, our Softbank Nao robot, Peppy, cannot quite make out instructions against talking and cooking noises. He also has a lot of trouble determining whom to focus on when interacting with a group. Softbank tests their robots by isolating them in a room and providing recorded input to determine whether they behave correctly, but it is difficult to simulate large public spaces. The Pepper robots seem to perform better under these conditions. In the MuMMER project, tests are done in malls with Pepper to determine what social behaviours a robot needs to interact effectively in public spaces.

 

The Pepper robot at the London Science Museum History of Robots exhibition was hugely popular and constantly surrounded by a crowd – it seemed to do well under these conditions, while following a script, as did the Pepper at the European Robotics Forum 2017.

When society becomes the lab

Kristian Esser, founder of the Technolympics, Olympic games for cyborgs, suggests that in these times society itself becomes the test lab. For technologies which are made for close contact with people, but which can have a negative effect on us, the paradox is that we must be present to test them, and the very act of testing is risky.

Consider self-driving vehicles, which must eventually be tested on the road. The human driver must remain aware of what is happening and correct the car when needed, as we have seen in the case of Tesla’s first self-driving fatality: “The … collision … raised concerns about the safety of semi-autonomous systems, and the way in which Tesla had delivered the feature to customers.” Assisted driving will probably reduce the overall number of traffic-related fatalities in the future, and that’s why it’s a goal worth pursuing.

For social robots, we will likely have to follow a similar approach: first achieving a certain level of quality in the lab, and then working with informed users to guide the robot, perhaps in a semi-autonomous mode. The perceived value of the robot should be in balance with the risks of testing it. With KLM’s Spencer robot, a combination of lab tests and real-life tests was performed to build the robot up to a level of quality at which it could be exposed to people in a supervised way.

Training robots

Over lunch the other day, my boss suggested the idea of teaching social robots as we do children: by observing or reviewing behaviour and correcting it afterwards. There is research supporting this idea, like this study on robots learning from humans by imitation and goal inference. One problem with letting the public train social robots is that they might teach robots unethical or unpleasant behaviour, as in the case of the Microsoft chatbot.

To ensure that robots do not learn undesirable behaviours, perhaps we can have a ‘foster parent’ system – trained and approved robot trainers who build up experience over time and can be held accountable for the training outcome. To prevent the robot accidentally picking up bad behaviours, it could have distinct learning and executing phases.

The robot might have different ways of getting validation of its tasks, behaviours or conclusions. It would then depend on the judgement of the user to approve or correct behaviour. New rules could be sent to a cloud repository for further inspection and compared with similar rules learned by other robots, to find consensus. Perhaps new rules should only be applied once they have been learned and confirmed in multiple households, or examined by a technician (see the sketch below).
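
A toy sketch of that consensus rule (all names and thresholds are hypothetical):

```python
from collections import defaultdict

# Promote a learned rule only after it has been independently confirmed
# in enough different households.
confirmations = defaultdict(set)  # rule id -> household ids that confirmed it

def confirm_rule(rule_id, household_id):
    confirmations[rule_id].add(household_id)

def approved_rules(min_households=3):
    return [rule for rule, homes in confirmations.items()
            if len(homes) >= min_households]

for home in ("house-a", "house-b", "house-c"):
    confirm_rule("greet-on-entry", home)
print(approved_rules())  # ['greet-on-entry']
```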

To conclude, I think testing of social robots will be done in phases, as it is done with many other products. There is a limit to what we can achieve in a lab and there should always be some controlled testing in real life scenarios. We as consumers should be savvy as to the limitations of our robots and conscious of their learning process and our role in it.

Understanding Social Robotics

Pepper, Jibo and Milo make up the first generation of social robots, leading what promises to be a cohort with diverse capabilities and applications in the future. But what are social robots and what should they be able to do? This article gives an overview of theories that can help us understand social robotics better.

What is a social robot?


The definition I like best describes social robots as robots for which social interaction plays a key role: social skills are needed for the robot to perform its function. A survey of socially interactive robots (5) defines some key characteristics which summarise this group very well. A social robot should show emotions, be able to converse at an advanced level, understand the mental models of its social partners, form social relationships, make use of natural communication cues, show personality, and learn social capabilities.

Understanding Social Robots (1) offers another interesting perspective of what a social robot is:

Social robot = robot + social interface

In this definition, the robot has its own purpose outside of the social aspect. Examples include care robots, cleaning robots in our homes, service desk robots at an airport or mall information desk, or chef robots in a cafeteria. The social interface is simply a kind of familiar protocol which makes it easy for us to communicate effectively with the robot. Social cues can give us insight into the intention of a robot: shifting gaze towards a mop, for example, gives a clue that the robot is about to change activity, even though it might not have eyes in the classical sense.
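
Read literally as composition, the equation might look like this (a sketch; the classes are illustrative only):

```python
# Social robot = robot + social interface: the functional core does the
# job, while a separate layer broadcasts social cues about it.
class CleaningRobot:
    def next_action(self):
        return "fetch the mop"

class SocialInterface:
    def signal_intent(self, action):
        # e.g. shift gaze towards the mop before grabbing it
        print(f"(gazes towards the mop) I'm about to {action}.")

class SocialRobot:
    def __init__(self, robot, interface):
        self.robot, self.interface = robot, interface

    def step(self):
        action = self.robot.next_action()
        self.interface.signal_intent(action)  # social cue precedes the act

SocialRobot(CleaningRobot(), SocialInterface()).step()
```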

These indicators of social capability can be as useful as actual social ability and drivers in the robot. As studies show, children are able to project social capabilities onto simple inanimate objects like a calculator. A puppet becomes an animated social partner during play. In the same way, robots only have to have the appearance of sociability to be effective communicators. An Ethical Evaluation of Human–Robot Relationships (4) confirms this idea: we have a need to belong, which causes us to form emotional connections to artificial beings and to search for meaning in these relationships.

How should social robots look?

Masahiro Mori defined the Uncanny Valley theory in his 1970 paper on the subject (3). He describes the effects of a robot’s appearance and movement on our affinity for it. In general, we seem to prefer robots that look more like humans and less like machines. But there is a point at which a robot looks both human-like and robot-like, and it becomes confusing for us to categorise it. This is the Uncanny Valley – where the robot looks very human but also a bit ‘wrong’, which makes us uncomfortable. If the appearance gets past that point and looks more human, likeability goes up dramatically.

In Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (2), we learn that there is a similar effect of robot appearance on the trustworthiness of a robot. Robots that showed more positive emotions were also more likeable. So it seems that more human-looking robots would lead to more trust and likeability.

Up to this point we have assumed that social robots should look humanoid or robotic. But what other forms can robots take? The robot should at least have a face (1) to give it an identity and make it an individual. Furthermore, with a face, a robot can indicate attention and imitate its social partner to improve communication. Most non-verbal cues are relayed through the face, and a face creates expectations of how to engage with the robot.

The appearance of a robot can help set people’s expectations of what it should be capable of, and limit those expectations to some focused functions which can be more easily achieved. For example, a bartender robot can be expected to hold a good conversation, serve drinks and take payment, but it’s probably fine if it speaks only one language, as it only has to fit the context it’s in (1).

In Why Every Robot at CES Looks Alike, we learn that Jibo’s oversized, round head is designed to mimic the proportions of a young animal or human to make it more endearing. It has one eye to prevent it from triggering the Uncanny Valley effect by looking too robotic and too human at the same time. Appearing too human-like also creates the impression that the robot will respond like a human, which robots are not yet capable of.

Another interesting example is of Robin, a Nao robot being used to teach children with diabetes how to manage their illness (6). The explanation given to the children is that Robin is a toddler. The children use this role to explain any imperfections in Robin’s speech capabilities.

Different levels of social interaction for robots

A survey of socially interactive robots (5) contains some useful concepts for defining levels of social behaviour in robots (a short sketch of the ordering follows the list):

  • Socially evocative: Do not show any social capabilities, but rely on the human tendency to project social capabilities.
  • Social interface: Mimic social norms, without actually being driven by them.
  • Socially receptive: Understand social input enough to learn by imitation, but do not seek social contact.
  • Sociable: Have social drivers and seek social contact.
  • Socially situated: Can function in a social environment and can distinguish between social and non-social entities.
  • Socially embedded: Are aware of social norms and patterns.
  • Socially intelligent: Show human levels of social understanding and awareness, based on models of human cognition.
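
As referenced above, the levels read as an ordered spectrum of sophistication, which an IntEnum captures neatly (a sketch of mine, not part of the original survey):

```python
from enum import IntEnum

# Fong et al.'s levels as an ordered scale, so capabilities can be compared.
class SocialLevel(IntEnum):
    SOCIALLY_EVOCATIVE = 1
    SOCIAL_INTERFACE = 2
    SOCIALLY_RECEPTIVE = 3
    SOCIABLE = 4
    SOCIALLY_SITUATED = 5
    SOCIALLY_EMBEDDED = 6
    SOCIALLY_INTELLIGENT = 7

assert SocialLevel.SOCIABLE > SocialLevel.SOCIAL_INTERFACE
```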

Clearly, social behaviour is nuanced and complex. But to come back to the previous points, social robots can still make themselves effective without reaching the highest levels of social accomplishment.

Effect of social robots on us

To close, de Graaf poses a thought-provoking question (4):

how will we share our world with these new social technologies and how will a future robot society change who we are, how we act and interact—not only with robots but also with each other?

It seems that we will first and foremost shape robots by our own human social patterns and needs. But we cannot help but be changed as individuals and a society when we finally add a more sophisticated layer of robotic social partners in the future.

References

  1. Understanding Social Robots (Hegel, Muhl, Wrede, Hielscher-Fastabend, Sagerer, 2009)
  2. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015)
  3. The Uncanny Valley (Mori, 1970)
  4. An Ethical Evaluation of Human–Robot Relationships (de Graaf, 2016)
  5. A survey of socially interactive robots (Fong, Nourbakhsh, Dautenhahn, 2003)
  6. Making New “New AI” Friends: Designing a Social Robot for Diabetic Children from an Embodied AI Perspective (Cañamero, Lewis, 2016)

Using Pepper Robots as Receptionists with Decos


Picture an alien meteorite landing on Mars. Inside it, inventing the technology of the future, is Decos, a highly innovative company that I encountered at the European Robotics Week. Located in Noordwijk in the Netherlands, they are breaking new ground with Softbank’s Pepper robot. I’ve come to hear about their robotics division and their use of Pepper as a receptionist.

Pepper the receptionist

Pepper waits at the entrance, dressed in a cape for Sinterklaas (the Dutch precursor to Christmas).

I greet her but she doesn’t respond – then I notice her tablet prompting me as to the nature of my visit. I indicate that I have an appointment and then speak the name of my contact, Tiago Santos, out loud. She recognises it after two tries, to my relief. A little robot, Eco, rolls up and unlocks the door for me, to lead me to my meeting. The office space is white and fresh, with modern angles everywhere and walls of glass, highlighting the alien environment outside.


Over a cup of tea, Tiago asks me to fill in an evaluation form which they will use to improve Pepper’s receptionist routine. This is done on one of three large TV screens in the downstairs canteen. I offer some comments about the interaction flow and the lack of feedback when Pepper hasn’t heard what I said.

Tiago tells us how Decos used to have a human receptionist, but did not have enough work to keep her fully occupied in their small Noordwijk office. Pepper has taken her place, enabling her to do other, more interesting things – which is how one would wish robotisation to work in the future. Pepper can speak, show videos on her tablet and take rating input. Decos hopes to distribute their robot receptionist module through human receptionist outsourcing companies.

More about Decos

Decos is dedicated to innovation and futuristic technologies, digitising manual processes and making things smarter. They have several products created by different companies under their banner, in the areas of smart cities, smart work and smart mobility. To foster innovation while managing risk, they create small technology startups within their company. Once viability is established and a good business model is found, they invest more heavily. The company itself believes in self-management, the only management being the board which steers the company, and some project managers. They have one site in the Netherlands and one in Pune, employing a total of about 200 people. The office building is filled with awesome futuristic gadgets to increase the creativity of their staff, including an Ultimaker 3D printer and a virtual reality headset. The walls are covered in space-themed pictures. There’s a telescope upstairs and an ancient meteorite downstairs. This place was created with imagination and inspiration.

 

Decos’ robotics startup is composed of three developers who program in all kinds of languages, including C, C#, Python and JavaScript. They make use of all available APIs, which necessitates using the various languages employed in AI. They work on two robots at the moment – Pepper, a social robot made by Softbank (Aldebaran), and Eco, a robot of their own design, manufactured by their partners.

 

Eco the robot

Eco is a little robot that rolls around, rather like a bar stool on a Roomba, and makes use of Decos’ autonomous life module. It wanders around absentmindedly, pauses thoughtfully next to Tiago’s leg, and rolls on. The body is a high-quality 3D plastic print with a glossy finish, angular and reminiscent of the Decos building. It has an endearingly flat and friendly ‘face’, displayed on what appears to be a tablet. Another Eco unit patrols upstairs. A third prototype lies without a chassis in the development area, along with Robotnik, the next and larger version of Eco.

Tiago tells me that this version’s aluminium chassis promises to be far easier to manufacture and thus more scalable. Robotnik and Eco have a Kinect sensor and lasers for obstacle detection; Tiago mentions that two kinds of sensors are needed to disambiguate confusing readings caused by reflections. The company believes that all complex artificial intelligence processing can be done as cloud services – in essence, the brain of the robot is in the cloud, all based on IoT. They call this artificial intelligence engine C-3PO. They also have several other modules, including one for human interaction, a ticketing system, Pepper’s form input module, and their own facial recognition module.

Social robot pioneers

All too soon, my visit to Decos’ futuristic development lab has come to an end. I can’t help rooting for them and for similar companies that show courage in embracing innovation and its risks. This space comes with its own interesting challenges, like inventing a business model, creating a market, and choosing partners and early adopters to collaborate with. Working here takes imagination and vision, as you have to invent the rules that will shape the unfolding of the entire industry in coming years. Decos seems to embody the spirit of exploration needed to define and shape what is to come.