Using Pepper Robots as Receptionists with Decos


Picture an alien meteorite landing on Mars. Inside it, inventing the technology of the future, is Decos, a highly innovative company that I encountered at the European Robotics Week. Located in Noordwijk in the Netherlands, they are breaking new ground with Softbank’s Pepper robot. I’ve come to hear about their robotics division and their use of Pepper as a receptionist.

Pepper the receptionist

Pepper waits at the entrance, dressed in a cape for Sinterklaas (the Dutch precursor to Christmas).

I greet her but she doesn’t respond – then I notice her tablet prompting me about the nature of my visit. I indicate that I have an appointment and speak the name of my contact, Tiago Santos, out loud. She recognises it after two tries, to my relief. A little robot, Eco, rolls up and unlocks the door, then leads me to my meeting. The office space is white and fresh, with modern angles everywhere and walls of glass, highlighting the alien environment outside.


Over a cup of tea, Tiago asks me to fill in an evaluation form which they will use to improve Pepper’s receptionist routine. This is done on one of three large TV screens in the downstairs canteen. I offer some comments about the interaction flow and the lack of feedback when Pepper has not heard what I said.

Tiago proceeds to tell me about how Decos used to have a human receptionist but did not have enough work to keep her fully occupied in their small Noordwijk office. Pepper has taken her place, freeing her to do other, more interesting things – which is how one would hope robotisation will work in the future. Pepper can speak, show videos on her tablet and take rating input. Decos hopes to distribute their robot receptionist module through human receptionist outsourcing companies.

More about Decos

Decos is dedicated to innovation and futuristic technologies, digitising manual processes and making things smarter. They have several products created by different companies under their banner, in the areas of smart cities, smart work and smart mobility. To foster innovation while managing risk, they create small technology startups within their company. Once viability is established and a good business model is found, they invest more heavily. The company itself believes in self-management, the only management being the board which steers the company, and some project managers. They have one site in the Netherlands and one in Pune, employing a total of about 200 people. The office building is filled with awesome futuristic gadgets to boost the creativity of their staff, including an Ultimaker 3D printer and a virtual reality headset. The walls are covered in space-themed pictures. There’s a telescope upstairs and an ancient meteorite downstairs. This place was created with imagination and inspiration.

 

Decos’ robotics startup is composed of three developers who program in all kinds of languages, including C, C#, Python and JavaScript. They make use of all available APIs, which necessitates using the various languages employed in AI. They work on two robots at the moment – Pepper, a social robot made by Softbank (Aldebaran), and Eco, a robot of their own design, manufactured by their partners.

 

Eco the robot

Eco is a little robot that rolls around, rather like a bar stool on a Roomba, and makes use of Decos’ autonomous life module. It wanders around absentmindedly, pauses thoughtfully next to Tiago’s leg, and rolls on. The body is a high-quality 3D plastic print, with a glossy finish, angular and reminiscent of the Decos building. It has an endearingly flat and friendly ‘face’ which is displayed on what appears to be a tablet. Another Eco unit patrols upstairs. A third prototype lies without a chassis in the development area, along with Robotnik, the next and larger version of Eco.

Tiago tells me that this version’s aluminium chassis promises to be far easier to manufacture and thus more scalable. Robotnik and Eco have a Kinect sensor and lasers for obstacle detection. Tiago mentions that two kinds of sensors are needed to disambiguate confusing readings caused by reflections. The company believes that all complex artificial intelligence processing can be done as cloud services – in essence, the brain of the robot is in the cloud, all based on IoT. They call this artificial intelligence engine C-3PO. They also have several other modules, including one for human interaction, a ticketing system, Pepper’s form input module, and their own facial recognition module.

Social robot pioneers

All too soon, my visit to Decos’ futuristic development lab has come to an end. I can’t help rooting for them and for similar companies which show courage in embracing innovation and its risks. It seems to come with its own interesting challenges, like inventing a business model, creating a market and choosing partners and early adopters to collaborate with. Working in this space takes imagination and vision, as you have to invent the rules which will lead to the unfolding of the entire industry in coming years. Decos seems to embody the spirit of exploration which is needed to define and shape what is to come.

 


European Robotics Week 2016

The European Robotics Week 2016 took place from 18 to 22 November in several countries, including the Netherlands, Austria, Lithuania, Norway, Portugal, Serbia and many more. The event has been held annually since 2011 to raise public awareness of robotics applications, bringing together industry, researchers and policymakers. The central event this year was held in Amsterdam, where I attended one of the five days of activities at the Maritime Museum. The theme, ‘Robots at your service – empowering healthy aging’, encompassed a variety of activities over the five-day duration, including debates, open sessions where you could network while interacting with different kinds of robots, workshops for children and a two-day hackathon. I attended the robot expo and two of the debate sessions, which I summarise below.

Robot Exhibition

Although it’s clearly still early days for general consumer robotics in terms of the price-value ratio, there are ever more options available for enthusiasts and for very specific applications. The exhibition had a good selection, including these lovely bots:

Panel Discussion: Roboethics

This discussion was about ethics in robotics, but it touched on a wide variety of related aspects, including some philosophy. The speakers were of a very high standard and came from a wide variety of backgrounds, which gave the discussion its great breadth.

Here are some highlights:

Robots in care – good or bad?

  • How should robots in care evolve?
    • Robots should be applied to care because the need for care increases as populations in developed countries age, while the labour force interested in care shrinks. But are robots really the answer to this sensitive problem?
    • The distinction was made that a wide range of activities qualify as ‘care’, ranging from assisting people to care for themselves, to caring for them, to providing psychological and emotional support in times of depression, distress or loneliness. Are robots suitable for all of these kinds of activities, or only some of them? Before you judge, consider the example given of a Nao robot interacting with children with diabetes in the PAL project (Personal Assistant for healthy Lifestyle). The robot interacts with the children to educate them about their illness and help them track it. The children confide in the robot more easily than in adults, and hospital attendance among children with diabetes goes up. This is an example of using robots to build a relationship and put people at ease – something that the robot does more easily in this case than a human. When is a robot more trustworthy than a human? When is the human touch really needed?
    • Should we want robots to be more human when they are used for care? In some cases we do, when there is a need to soothe and connect, to comfort. But in other cases a more impersonal and less present robot might be desirable. For instance, if you needed help going to the toilet or rising from bed for the rest of your life, would you need a lot of human interaction around that, or prefer it to blend seamlessly into your life to enable you to be as independent as possible?

Robot ethics and liability

  • Responsibility vs liability of robots for their misdeeds
    • This question always comes up in a discussion about robot ethics – people feel uncomfortable with the idea of a robot’s accountability for crimes. For instance, if an autonomous car runs over a pedestrian, who would be responsible for that, the car, owner or manufacturer?
    • A good point was raised by philosophy professor Vincent Muller to take this argument further. If a child throws a stone through a window, she is responsible for the action but the parent is liable for damages. In the same way, a robot can be responsible for doing something wrong, but another entity, like the owner, might be liable for damages caused.
    • When discussing whether a robot can be held liable for a crime, we imply it can understand that its actions were wrong and did them anyway. But robots as yet have no understanding of what they are doing, so the conclusion was that a robot cannot be meaningfully convicted of a crime.

Gorgeous Maritime Museum in Amsterdam

 

Panel Discussion: Our Robotic Future

In this next session, a general discussion on the future of robotics ensued, followed by each speaker giving their wish and hope for this future. There was a strong Eurocentric flavour to this discussion, which gave a fascinating insight into the European search for identity in this time of change – in the robotics and AI revolution, who are we and what do we stand for? How will we respond to the threats and opportunities? How can we lead and usher in a good outcome? The panel itself fell on the optimistic side of the debate, looking forward to positive outcomes.

Robotics Research

  • The debate started off underlining the need to share information and research so that we can progress quickly with these high potential technologies.
  • A distinction was made between American and European methods of research in AI and robotics
    • In the US, research is done by large corporations like Facebook and Google, is often funded by defence budgets through agencies like DARPA, and is not shared openly.
    • In Europe, research is funded by the European Commission and is often performed by startups, which means less brute strength but more agility in bringing research to fruition. However, because these startups are so small, they can be reluctant to share the intellectual property that might be their only business case.

Accepting the Robot Revolution

  • There is a lot of hype about robots and AI in the media which stirs people’s imaginations and fears – how can we usher in all the benefits of the robot revolution?
    • Part of the fear is that people don’t like to accept that our last distinction, being the smartest beings on Earth, could be lost.
    • There is a growing economic divide caused by this technological revolution
      • We should not create a system that creates advantage for only a select group, but aim for an inclusive society that allows all to benefit from the abundance robotics can bring.
      • People can be less afraid of robots if the value they add is made clear, for example a robot surgeon which is more accurate than a human surgeon can be viewed positively instead of as a threat.
      • Technological revolutions are fuelled by an available workforce which can pick up the skills needed to usher them in. During the industrial revolution, for example, agricultural workers could be retrained to work in factories; later, a large labour force was available to fuel the IT revolution, which in turn automated many manual tasks. There is a concern that, with STEM graduates on the decline, we will lack the skilled resources to build momentum for AI and robotics.
    • Investors in robotics and AI are discouraged by the growing stigma around these technologies.
    • Communication policies on this topic are designed around getting people to understand the science, but this thinking must shift. The end users who currently fear and lack understanding must become the centre of communication in the future – AI and robotics must enable them, and they should have the tools to judge and decide on its fate.

 

Conclusion

This is certainly an exciting time to be alive, as there is still so much to determine and discover in the growth of the AI and robotics industries and disciplines. Such an event also highlights how far we are from this reality – ten-year horizons were discussed for AI and robotics to become commodities in our homes. It is very encouraging to see the clear thinking and good intentions that go into making these technologies mainstream. In the coming months I’d like to dig into the topics of ethics, regulation, EU funding, and what the future of AI and robotics could bring.

Nao Robot with Microsoft Computer Vision API

Lately, I’ve been experimenting with integrating an Aldebaran Nao robot with an artificial intelligence API.

While writing my previous blog post on artificial intelligence APIs, I realised there were way too many API options out there to try out casually. I did want to start getting some hands-on experience with the APIs myself, so I had to find a project.

Pep the humanoid robot from Aldebaran

My boyfriend, Renze de Vries, and I were both captivated by the Nao humanoid robots during conferences and meetups, but found the price of buying one ourselves prohibitive. He already had a few robots of his own – a Lego Mindstorms robot and a Robotis Bioloid robot which we named Max – and he has written about his projects here. Eventually we crossed the threshold and bought our very own Nao robot together from http://www.generationrobots.com/ – we call him Peppy. Integrating an AI API into Peppy seemed like a good project for me to get familiar with what the AI APIs can do with real-life input.


Peppy the Nao robot from Aldebaran

Nao API

The first challenge was to get Pep to produce an image that could be processed. Pep has a bunch of sensors: joint position and temperature sensors, touch sensors in his head and hands, bumper sensors in his feet, a gyroscope, sonar, microphones, infrared sensors and two video cameras.

The Nao API, Naoqi, contains modules for motion, audio, vision, people and object recognition, sensors and tracking. In the vision module you have the option to take a picture or grab video. The video route seemed overly complicated for this small POC, so I went with the ALPhotoCapture class – Java docs here. This API saves pictures from the camera to the local storage on the robot, and if you want to process them externally, you have to connect to Pep’s filesystem and download them.

// Grab a single photo and save it to the robot's local storage
ALPhotoCapture photocapture = new ALPhotoCapture(session);
photocapture.setResolution(2); // 2 = VGA, 640x480
photocapture.setPictureFormat("jpg");
// take 1 picture, save it as pepimage.jpg, overwriting any existing file
photocapture.takePictures(1, "/home/nao/recordings/cameras/", "pepimage", true);

Naos run a Gentoo-based Linux distribution called OpenNAO. They can be reached at their IP address once they connect to your network over a cable or Wi-Fi. I used JSCape’s SCP module to connect and copy the file to my laptop.
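Whichever SCP client you use, the only robot-specific detail is where ALPhotoCapture left the file. As a small illustration (the helper name is mine, not part of Naoqi or JSCape), the remote path to fetch is just the folder, file name and picture format from the takePictures call joined together:

```java
public class NaoFiles {
    // Builds the remote path of a photo saved by ALPhotoCapture:
    // <folder>/<name>.<format>, e.g. /home/nao/recordings/cameras/pepimage.jpg
    static String remotePath(String folder, String name, String format) {
        // ensure exactly one slash between the folder and the file name
        String sep = folder.endsWith("/") ? "" : "/";
        return folder + sep + name + "." + format;
    }

    public static void main(String[] args) {
        // same arguments as the takePictures call above
        System.out.println(remotePath("/home/nao/recordings/cameras/", "pepimage", "jpg"));
        // /home/nao/recordings/cameras/pepimage.jpg
    }
}
```

That path is what you hand to your SCP client's download call, with the robot's IP address and the standard nao login.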


Picture taken by Peppy’s camera

Microsoft Vision API

Next up was the vision API – I really wanted to try the Google Cloud Vision API, but it’s intended for commercial use and you need a VAT number to register. I also considered IBM Bluemix (I had heard good things about the Alchemy API), but you need to deploy your app into IBM’s cloud in that case, which sounded like a hassle. I remembered that the Microsoft API was just a standard web service without much investment needed, so that was the obvious choice for a quick POC.

At first, I experimented with uploading the .jpg file saved by Pep to the Microsoft Vision API test page, which returned this analysis:

Features:

Feature Name               Value
Description                { "type": 0, "captions": [ { "text": "a vase sitting on a chair", "confidence": 0.10692098826160357 } ] }
Tags                       [ { "name": "indoor", "confidence": 0.9926377534866333 }, { "name": "floor", "confidence": 0.9772524237632751 }, { "name": "cluttered", "confidence": 0.12796716392040253 } ]
Image Format               jpeg
Image Dimensions           640 x 480
Clip Art Type              0 (non-clipart)
Line Drawing Type          0 (non-line-drawing)
Black & White Image        Unknown
Is Adult Content           False
Adult Score                0.018606722354888916
Is Racy Content            False
Racy Score                 0.014793086796998978
Categories                 [ { "name": "abstract_", "score": 0.00390625 }, { "name": "others_", "score": 0.0078125 }, { "name": "outdoor_", "score": 0.00390625 } ]
Faces                      []
Dominant Color Background
Dominant Color Foreground
Dominant Colors
Accent Color               #AC8A1F

I found the description of the image quite fascinating – it seemed to describe what was in the image closely enough. From this, I got the idea to return the description to Pep and use his text to speech API to describe what he has seen.

Next, I had to register on the Microsoft website to get an API key. This allowed me to programmatically pass Pep’s image to the API using a POST request. The response was a JSON string containing data similar to that above. You have to pass some URL parameters to get the specific information you need. The Microsoft Vision API docs are here. I used the Description text because it was as close as possible to a human-constructed phrase.

https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Description
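The POST request itself can be sketched with nothing but the JDK's HttpURLConnection. The endpoint and the Ocp-Apim-Subscription-Key header come from the Vision API docs; the key value and image path below are placeholders you would fill in yourself:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class VisionRequest {
    static final String ENDPOINT = "https://api.projectoxford.ai/vision/v1.0/analyze";

    // Build the analyze URL for the requested visual features.
    static String analyzeUrl(String features) {
        return ENDPOINT + "?visualFeatures=" + features;
    }

    // POST the raw jpg bytes and return the JSON response body.
    static String analyze(String apiKey, String imagePath) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(analyzeUrl("Description")).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", apiKey); // your key here
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(Files.readAllBytes(Paths.get(imagePath))); // the downloaded pepimage.jpg
        }
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes());
        }
    }

    public static void main(String[] args) {
        // without a key, just show the request URL being built
        System.out.println(analyzeUrl("Description"));
    }
}
```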

The result looks like this – the tags man, fireplace and bed were incorrect, but the rest were correct:

{"description":{"tags":["indoor","living","room","chair","table","television","sitting","laptop","furniture","small","white","black","computer","screen","man","large","fireplace","cat","kitchen","standing","bed"],"captions":[{"text":"a living room with a couch and a chair","confidence":0.67932875215020883}]},"requestId":"37f90455-14f5-4fc7-8a79-ed13e8393f11","metadata":{"width":640,"height":480,"format":"Jpeg"}}
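To turn that response into something Pep can say, the caption text has to be pulled out of the JSON. The JDK has no built-in JSON parser, so this sketch uses a regex; a real application would use a proper JSON library instead:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CaptionExtractor {
    // Pull the first caption "text" out of the Vision API JSON response.
    // A regex is enough for this sketch; returns null if no caption is present.
    static String extractCaption(String json) {
        Matcher m = Pattern.compile("\"captions\":\\[\\{\"text\":\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // trimmed version of the response shown above
        String response = "{\"description\":{\"captions\":[{\"text\":"
                + "\"a living room with a couch and a chair\",\"confidence\":0.679}]}}";
        System.out.println(extractCaption(response));
        // a living room with a couch and a chair
    }
}
```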

Text to speech

The finishing touch was to use Nao’s text to speech API to create the impression that he is talking about what he has seen.

// speak the description returned by the Vision API
ALTextToSpeech tts = new ALTextToSpeech(session);
tts.say(text);

This was Nao looking at me while I was recording with my phone. The Microsoft Vision API incorrectly classified me as a man with a Wii. I could easily rationalise that the specifics of the classification are wrong, but the generalities are close enough.

Human
├── Woman
└── Man

Small Electronic Device
├── Remote
├── Phone
└── Wii

 

This classification was close enough to correct – a vase of flowers sitting on a table.

Interpreting the analysis

Most of the analysis values returned are accompanied by a confidence level. The confidence level in the example I have is pretty low, the range being from 0 to 1.

"a vase sitting on a chair", "confidence": 0.10692098826160357

This description also varied based on how I cropped the image before analysis. Different aspects were chosen as the subject of the picture with slightly different cropped views.
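Since the confidence score comes back with every caption, one idea (my own illustration, with arbitrary thresholds) is to hedge the phrase handed to the text to speech call according to how sure the API is:

```java
public class SpokenCaption {
    // Illustrative thresholds, not part of the API: make the robot's
    // phrasing reflect the caption's confidence score (range 0 to 1).
    static String phrase(String caption, double confidence) {
        if (confidence >= 0.5) return "I see " + caption + ".";
        if (confidence >= 0.1) return "I think I see " + caption + ".";
        return "I am not sure what I am looking at.";
    }

    public static void main(String[] args) {
        // the low-confidence caption from the example above
        System.out.println(phrase("a vase sitting on a chair", 0.107));
        // I think I see a vase sitting on a chair.
    }
}
```

The resulting string would then be passed to tts.say in place of the raw caption.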

The Vision API also returned Tags and Categories.

Categories give you a two-level taxonomy categorisation, with the top level being:

abstract, animal, building, dark, drink, food, indoor, others, outdoor, people, plant, object, sky, text, trans

Tags are more detailed than categories and describe the image content in terms of objects, living beings and actions. They cover everything happening in the image, including the background, not just the subject.
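Because each tag carries its own confidence, the noisy ones (like the 0.128 "cluttered" tag in the example above) are easy to filter out before using them. A minimal sketch, with an arbitrary threshold of my own choosing:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TagFilter {
    // Keep only the tags the API is reasonably sure about.
    // The 0.5 threshold used below is arbitrary, chosen for illustration.
    static List<String> confidentTags(Map<String, Double> tags, double threshold) {
        return tags.entrySet().stream()
                .filter(e -> e.getValue() >= threshold)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // tag/confidence pairs from the analysis shown earlier
        Map<String, Double> tags = new LinkedHashMap<>();
        tags.put("indoor", 0.9926);
        tags.put("floor", 0.9773);
        tags.put("cluttered", 0.1280);
        System.out.println(confidentTags(tags, 0.5)); // [indoor, floor]
    }
}
```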

Conclusions

Overall, I was really happy to integrate Nao with any kind of Artificial Intelligence API. It feels like the ultimate combination of robotics with AI.

The Microsoft Vision API was very intuitive and easy to get started with. For a free API with general classification capabilities, I think it’s not bad. These APIs are only as good as their training, so for more specific applications you would obviously want to invest in training the API more intensively for the context. I tried IBM Bluemix’s demo with the same test image from Pep, but could not get a classification out of it – perhaps the image was not good enough.

I did have some reservations about sending live images from Pep into Microsoft’s cloud. In a limited and controlled setting, and in the interests of experimentation and learning, it seemed appropriate, but in a general sense, I think the privacy concerns need some consideration.

During this POC I thought about more possibilities for integrating Pep with other APIs. The Nao robots have some sophisticated Aldebaran software of their own which provides basic processing of their sensor data, like facial and object recognition and speech to text. I think there is a lot of potential in combining these APIs to enrich the robot’s interactive capabilities and delve further into the current capabilities of AI APIs.