Quality advice for Robotics startups

Robot recharge

I have discussed the topic of quality and testing with a few robotics startups, and the conversation tends to reach the same consensus: formal quality assurance processes have no place in a startup. While I appreciate this view, this blog post offers an alternative approach to quality for robotics startups.

The main priority of many startups is to produce something that will attract investment – it has to basically work well enough to get funding. Investors, customers and users can be very forgiving of quality issues, especially where emerging tech is involved. Startups should deliver the right level of quality for now and prepare for the next step.

In a startup, there is not likely to be any dedicated tester or quality strategy. Developers are the first line of defence for quality – they must bake it into the proof-of-concept code, perhaps with unit tests. The developers and founders probably do some functional validation. They might experience more extreme use cases when demo’ing the functionality. They might do limited testing with real-life users.

What are the main priorities of the company at this phase, and the matching levels of quality? The product’s main goal, initially, is to support application development and demos, and to be effective and usable for its early adopters. Based on these priorities, I’ve come up with some quality aspects that could be useful for robotics startups.

A good quality demo

Here are some aspects of quality which could be relevant for demoing:

Softbank Pepper

  1. Portable setup
    1. Can be transported without damaging the robot and supporting equipment
    2. Can be explained at airport security if needed
  2. Works under variable conditions in customer meeting room
    1. Poor wifi connections
    2. Power outlets not available
    3. Outside of company network
    4. Uneven floors
    5. Stairs
    6. Noise
    7. Different lighting
    8. Reflective surfaces
  3. Will work for the duration of the demo
  4. Demo will be suitable for audience
  5. Demo’ed behaviour will be visible and audible from a distance, e.g. in a boardroom
  6. Mode can be changed to a scripted mode for demos
  7. Functionality actually works and can be shown – a checklist of basic functionality can take away the guesswork, without having to come up with heavyweight test cases (a sketch of such a checklist follows this list)
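
To make the checklist idea concrete, here is a minimal sketch of a pre-demo checklist encoded as runnable checks. The DemoRobot wrapper and the individual checks are hypothetical placeholders to adapt to your own robot’s SDK, not a real API.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

public class DemoChecklist {
    public static void main(String[] args) {
        DemoRobot robot = new DemoRobot(); // hypothetical wrapper, stubbed below

        Map<String, BooleanSupplier> checks = new LinkedHashMap<>();
        checks.put("boots to idle within 2 minutes", () -> robot.bootSeconds() <= 120);
        checks.put("battery covers a 45-minute demo plus margin", () -> robot.batteryMinutes() >= 60);
        checks.put("scripted demo mode can be enabled", robot::enableScriptedMode);
        checks.put("speech volume audible across a boardroom", () -> robot.volumePercent() >= 80);

        checks.forEach((name, check) ->
                System.out.printf("[%s] %s%n", check.getAsBoolean() ? "PASS" : "FAIL", name));
    }

    // Stub so the sketch compiles; replace with calls into your robot's real API.
    static class DemoRobot {
        int bootSeconds() { return 90; }
        int batteryMinutes() { return 75; }
        boolean enableScriptedMode() { return true; }
        int volumePercent() { return 85; }
    }
}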

Quality for the users and buyers

The robot needs to prove itself fit for operation:

  1. Functionality works
    1. What you offer can be suitably adapted for the customer’s actual scenario
      1. Every business has its own processes, so the bot will probably have to adapt its terminology, workflows and scenarios to fit the user’s processes
      2. Languages can be changed
      3. Bot is capable of conversing at the level of the target audience (e.g. children, elderly)
      4. Bot is suitable for the context where it’s intended to work, such as a hospital or school – it will not make sudden movements or catch on cables
  2. Reliability
    1. Users might tolerate failures up to a point, until they become too annoying or repetitive, or cannot be recovered from
    2. Failures might be jarring for vulnerable users like the mentally or physically ill
    3. Is the robot physically robust enough to interact with in unplanned ways?
  3. Security
    1. Will port scanning or other exploitative attacks easily reveal vulnerabilities which can result in unpredictable or harmful behaviour?
    2. Can personal data be hijacked through the robot?
  4. Ethical and moral concerns
    1. Users might not understand that there is no consciousness interacting with them, thinking the robot is autonomous
    2. There might be users who think their interactions will be private while they might be reviewed for analysis purposes
    3. Users might not realise their data will be sent to the cloud and used for analysis
  5. Legal and support issues
    1. What kind of support agreement does the service provider have with the robot manufacturer and how does it translate to the purchaser of the service?

Decos robots, Robotnik and Eco

Quality to maintain, pivot and grow

During these cycles of demoing to prospects, defects will be identified and need to be fixed. Customers will give advice or provide input on what they were hoping to see and features will have to be tweaked or added. The same will happen during research and test rounds at customers, and user feedback sessions.

The startup will want to add features and fix bugs quickly. For this to happen, it helps to have good discipline with clean, maintainable code, and at least unit tests to give quick feedback on the quality of a change. Hopefully they will also have some functional (and a few non-functional) acceptance tests.

When adoption increases, the startup might have to pivot quickly to a new application, or scale to more than one customer or use case. At this phase a lot of refactoring will probably happen to make the existing codebase scalable. In this case, good unit tests and component tests will be your best friends, helping you maintain the stability of the functionality you already have (as mentioned in this TechCrunch article on startup quality).

robot in progress

Social robot companies are integrators – ensure quality of integrated components

As a social robotics startup, if you are not creating your own hardware, OS, or interaction and processing components, you might want to become familiar with the quality of any hardware or software components you are integrating with. Some basic integration tests, like the sketch below, will help you stay confident that the basics still work when an external API is updated, for instance. It’s also worth considering your liability when something goes wrong somewhere in the chain.
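
A hedged sketch of such an integration check – the endpoint, the expected field and the status code are placeholders for whichever vendor API your robot actually depends on:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class VendorApiSmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example-vendor.com/v1/health")) // placeholder URL
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Fail loudly if the contract we rely on has changed underneath us.
        if (response.statusCode() != 200 || !response.body().contains("\"status\"")) {
            throw new AssertionError("vendor API contract changed: HTTP " + response.statusCode());
        }
        System.out.println("Vendor API still answers as expected");
    }
}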

Early days for robot quality

To round up, it does indeed seem to be early days for talking about social robot quality. But it’s good for startups to be aware of what they are getting into, because this topic will no doubt become more relevant as their company grows. I hope this post can help robotics startups stay in control of their quality as they grow.

Feel free to contact me if you have any ideas or questions about this topic!

Thanks to Koen Hindriks of Interactive Robotics, Roeland van Oers at Ready for Robotics and Tiago Santos at Decos, as well as all the startups and enthusiasts I have spoken to over the past year for input into this article.

European Robotics Forum 2017

The European Robotics Forum (ERF2017) took place between 22 and 24 March 2017 at the Edinburgh International Conference Centre.

The goals were to:

  • Ensure there is economic and societal benefit from robots
  • Share information on recent advancements in robotics
  • Reveal new business opportunities
  • Influence decision makers
  • Promote collaboration within the robotics community

The sessions were organised into workshops, encouraging participants from academia, industry and government to cross boundaries. In fact, many of the sessions had an urgent kind of energy, with the focus on discussions and brainstorming with the audience.

Edinburgh castle at night

Broad spectrum of robotics topics

Topics covered in the conference included: AI, Social Robotics, Space Robotics, Logistics, Standards used in robotics, Health, Innovation, Miniaturisation, Maintenance and Inspections, and Ethical and Legal considerations. There was also an exhibition space downstairs where you could mingle with different kinds of robots and their vendors.

The kickoff session on the first day had some impressive speakers – leaders in the fields of AI and robotics, covering business and technological aspects.

Bernd Liepert, the head of EU Robotics, covered the economic aspects of robotics, stating that robot density in Europe is among the highest in the world. Europe has 38% of the worldwide share of the professional robotics domain, with more startups and companies than the US. Service robotics already makes up over half the turnover of industrial robotics. Since Europe does not have enough institutions to develop innovations in all areas of robotics, combining research and transferring it to industry is key.

The next speaker was Keith Brown, the Scottish secretary for Jobs, the Economy and Fair Work, who highlighted the importance of digital skills to Scotland. He emphasised the need for everyone to benefit from the growth of the digital economy, and the increase in productivity that it should deliver.

Juha Heikkila from the European Commission explained that, in terms of investment,  the EU Robotics program is the biggest in the world. Academia and industry should be brought together, to drive innovation through innovation hubs which will bring technological advances to companies of all sizes.


Raia Hadsell of Deep Mind gave us insight into how deep learning can be applied to robotics. She conceptualised the application of AI to problem areas like speech and image recognition, where inputs (audio files, images) are mapped to outputs (text, labels). The same model can be applied to robotics, where the input is sensor data and the output is an action. For more insight, see this article about a similar talk she did at the Re•Work Deep Learning Summit in London. She showed us that learning time can be reduced for robots by training neural networks in simulation and then adding neural network layers to transfer learning to other tasks.

Deep learning tends to be seen as a black box in terms of traceability, and therefore risk management, as people think that neural networks produce novel and unpredictable output. Hadsell assured us, however, that introspection can be done to test and verify each layer in a neural network, since a single input always produces a known range of outputs.

The last talk in the kickoff, delivered by Stan Boland from Five AI, brought together the business and technical aspects of self-driving cars. He mentioned that the appetite for risky tech investment seems to be increasing, with a fivefold growth in investment over the past five years. He emphasised the need for exciting tech companies to retain European talent and advance innovation, and to reverse the trend of top EU talent migrating to the US.

On the technology side, Stan gave some insight into advances in perception and planning in self-driving cars. He showed how stereo depth mapping is done at Five AI, using input from two cameras to map the depth of each pixel in the image. They create an aerial projection of what the car sees right in front of it and use this bird’s-eye view to plan the path of the car from ‘above’. Some challenges remain, however, with 24% of cyclists still being misclassified by computer vision systems.

With that, he reminded us that full autonomy in self-driving cars is probably out of reach for now. Assisted driving on highways and other easy-to-classify areas is probably the most achievable goal. Beyond this, the cost to the consumer becomes prohibitive, and truly autonomous cars will probably only be sustainable in a services model, where the costs are shared. In this model, training data could probably not be shared between localities, given the very specific road layouts and driving styles in different parts of the world (e.g. Delhi vs San Francisco vs London).


An industry of contrasts

This conference was about overcoming fragmentation and benefitting from cross-domain advances in robotics, to keep the EU competitive. There were contradictions and contrasts in the community which gave the event some colour.

Each application of robotics that was represented – drones, self-driving cars, service robotics, industrial robotics – seemed to have its own approaches, challenges and phase of development. In this space, industrial giants find themselves collaborating with small enterprises – it takes many different kinds of expertise to make a robot. The small companies cannot afford the effort needed to conform to industry standards, while the larger companies would go out of business if they did not conform.

A tension existed between the hardware and software sides of robotics – those from an AI background have some misunderstandings to correct, like how traceable and predictable neural networks are. The ‘software’ people had a completely different approach to the ‘hardware’ people as development methodologies differ. Sparks flew as top-down legislation conflicted with bottom-up industry approaches, like the Robotic Governance movement.

The academics in robotics sometimes dared to bring more idealistic ideas to the table that would benefit the greater good, but which might not be sustainable. The ideas of those from industry tended to be mindful of cost, intellectual property and business value.

Two generations of roboticists were represented – those who had carried the torch in less dramatic years, and the upcoming generation who surged forward impatiently. There was conflict and drama at ERF2017, but also loads of passion and commitment to bring robotics safely and successfully into our society. Stay tuned for the next post, in which I will provide some details on the sessions, including more on ethics, legislation and standards in robotics!

Making social robots work

 


Mady Delvaux, in her draft report on robotics, advises the EU that robots should be carefully tested in real life scenarios, beyond the lab. In this and future articles, I will examine different aspects of social robot requirements, quality and testing, and try to determine what is still needed in these areas.

Why test social robots?

In brief, I will define robot quality as: does the robot do what it’s supposed to do, and not do what it shouldn’t. For example, when you press the robot’s power button from an offline state, does the robot turn on and the indicator light turn green? If you press the button quickly twice, does the robot still exhibit acceptable behaviour? Testing is the activity of analysis to determine the quality level of what you have produced – is it good enough for the intended purpose?
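
To make this concrete, here is a minimal, hypothetical sketch of the power-button example written as an automated check. The state model is invented purely for illustration and is not any robot’s real API.

enum PowerState { OFF, BOOTING, ON }

class PowerButton {
    private PowerState state = PowerState.OFF;

    // Pressing while off starts booting; a quick second press during boot is
    // ignored rather than toggling the robot straight back off.
    PowerState press() {
        if (state == PowerState.OFF) {
            state = PowerState.BOOTING;
        } else if (state == PowerState.ON) {
            state = PowerState.OFF;
        }
        return state;
    }

    public static void main(String[] args) {
        PowerButton button = new PowerButton();
        if (button.press() != PowerState.BOOTING) throw new AssertionError("single press should start boot");
        if (button.press() != PowerState.BOOTING) throw new AssertionError("quick double press should not cut power mid-boot");
        System.out.println("Power button does what it should, and not what it shouldn't");
    }
}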

Since social robots will interact closely with people, strict standards will have to be complied with to ensure that they don’t have unintended negative effects. There are already some standards being developed, like ISO13482:2014 about safety in service robots, but we will need many more to help companies ensure they have done their duty to protect consumers and society. Testing will give insight into whether these robots meet the standards, and new test methods will have to be defined.

What are the core features of the robot?

The first aspect of quality we should measure is if the robot fulfils its basic functional requirements or purpose. For example, a chef robot like the robotic kitchen by Moley would need to be able to take orders, check ingredient availability, order or request ingredients, plan cooking activities, operate the stove or oven, put food into pots and pans, stir, time cooking, check readiness, serve dishes and possibly clean up.

 

A robot at an airport which helps people find their gate and facilities must be able to identify when someone needs help, determine where they are trying to go (perhaps by talking to them, or scanning a boarding pass), plan a route, communicate the route by talking, indicating with gestures, or printing a map, and know when the interaction has ended.
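
One way to pin down such a purpose before testing is to write it down as a contract. The interface below is a hypothetical sketch of those requirements, not an existing robot API; each method then becomes a natural unit to benchmark or test on its own.

public interface AirportGuide {
    // Placeholder types, declared here only so the sketch is self-contained.
    final class Person {}
    final class Destination {}
    final class Route {}

    boolean needsHelp(Person person);
    Destination determineDestination(Person person); // by conversation or boarding pass scan
    Route planRoute(Destination destination);
    void communicateRoute(Route route);               // speech, gestures or a printed map
    boolean interactionEnded(Person person);
}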

 

With KLM’s Spencer, the guide robot at Schiphol airport, benchmarking was used to ensure the quality of each function separately. Later the robot was put into live situations at Schiphol and tracked to see if it was planning movement correctly. A metric of distance travelled autonomously vs non-autonomously was used to evaluate the robot. Autonomy will probably be an important characteristic to test, and to make users aware of, in the future.
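
As a toy illustration of that metric, the autonomous share of the distance travelled could be computed along these lines (the numbers are made up):

public class AutonomyMetric {

    // Fraction of total distance covered without an operator taking over.
    public static double autonomousShare(double autonomousMetres, double manualMetres) {
        double total = autonomousMetres + manualMetres;
        return total == 0 ? 0 : autonomousMetres / total;
    }

    public static void main(String[] args) {
        // e.g. 1200 m driven autonomously, 300 m under manual control
        System.out.printf("Autonomous share: %.1f%%%n", 100 * autonomousShare(1200, 300));
    }
}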

Two user evaluation studies were done with Spencer, and feedback was collected about the robot’s effectiveness at guiding people around the airport. Some people, for example, found the speed of the robot too slow, especially in quiet periods, while others found the robot too fast, especially for families to follow.

Different environments and social partners

How can we ensure robots function correctly in the wide variety of environments and interaction situations that we encounter every day? Amazon’s Alexa, for example, suffers from a few communication limitations, like knowing whether she is taking orders from the right user, or how to converse with children.

At our family gatherings, our Softbank Nao robot, Peppy, cannot quite make out instructions against talking and cooking noises. He also has a lot of trouble determining who to focus on when interacting in a group. Softbank tests their robots by isolating them in a room and providing recorded input to determine if they have the right behaviour, but it can be difficult to simulate large public spaces. The Pepper robots seem to perform better under these conditions. In the Mummer project, tests are done in malls with Pepper to determine what social behaviours are needed for a robot to interact effectively in public spaces.

 

The Pepper robot at the London Science Museum History of Robots exhibition was hugely popular and constantly surrounded by a crowd – it seemed to do well under these conditions, while following a script, as did the Pepper at the European Robotics Forum 2017.

When society becomes the lab

Kristian Esser, founder of the Technolympics, Olympic-style games for cyborgs, suggests that in these times society itself becomes the test lab. For technologies which are made for close contact with people, but which can have a negative effect on us, the paradox is that we must be present to test them, and the very act of testing is risky.

Consider self-driving vehicles, which must eventually be tested on the road. The human driver must remain aware of what is happening and correct the car when needed, as we have seen in the case of Tesla’s first self-driving car fatality: “The … collision … raised concerns about the safety of semi-autonomous systems, and the way in which Tesla had delivered the feature to customers.” Assisted driving will probably reduce the overall number of traffic-related fatalities in the future, and that’s why it’s a goal worth pursuing.

For social robots, we will likely have to follow a similar approach, first trying to achieve a certain level of quality in the lab and then working with informed users to guide the robot, perhaps in a semi-autonomous mode. The perceived value of the robot should be in balance with the risks of testing it. With KLM’s Spencer robot, a combination of lab tests and real-life tests was used to build the robot up to a level of quality at which it could be exposed to people in a supervised way.

Training robots

Over lunch the other day, my boss suggested the idea of teaching social robots as we do children, by observing or reviewing behaviour and correcting afterwards. There is research supporting this idea, like this study on robots learning from humans by imitation and goal inference. One problem with letting the public train social robots, is that they might teach robots unethical or unpleasant behaviour, like in the case of the Microsoft chatbot.

To ensure that robots do not learn undesirable behaviours, perhaps we can have a ‘foster parent’ system – trained and approved robot trainers who build up experience over time and can be held accountable for the training outcome. To prevent the robot accidentally picking up bad behaviours, it could have distinct learning and executing phases.

The robot might have different ways of getting validation of its tasks, behaviours or conclusions. It would then depend on the judgement of the user to approve or correct behaviour. New rules could be sent to a cloud repository for further inspection and compared with similar learned rules from other robots, to find consensus. Perhaps new rules should only be applied if they have been learned and confirmed in multiple households, or examined by a technician.
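
As a hypothetical sketch of such a consensus gate – the rule names, the confirmation threshold and the technician override are all invented for illustration:

import java.util.Map;

public class RuleApprovalGate {

    private static final int REQUIRED_CONFIRMATIONS = 3;

    // A learned rule may only be activated once enough households have independently
    // confirmed it, or a technician has explicitly approved it.
    public static boolean mayActivate(String ruleId,
                                      Map<String, Integer> confirmationsPerRule,
                                      boolean approvedByTechnician) {
        int confirmations = confirmationsPerRule.getOrDefault(ruleId, 0);
        return approvedByTechnician || confirmations >= REQUIRED_CONFIRMATIONS;
    }

    public static void main(String[] args) {
        Map<String, Integer> confirmations = Map.of("greet-with-wave", 4, "block-doorway", 1);
        System.out.println(mayActivate("greet-with-wave", confirmations, false)); // true
        System.out.println(mayActivate("block-doorway", confirmations, false));   // false until reviewed
    }
}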

To conclude, I think testing of social robots will be done in phases, as it is done with many other products. There is a limit to what we can achieve in a lab and there should always be some controlled testing in real life scenarios. We as consumers should be savvy as to the limitations of our robots and conscious of their learning process and our role in it.

Understanding Social Robotics

Pepper, Jibo and Milo make up the first generation of social robots, leading what promises to be a cohort with diverse capabilities and applications in the future. But what are social robots and what should they be able to do? This article gives an overview of theories that can help us understand social robotics better.

What is a social robot?


I most like the definition which describes social robots as robots for which social interaction plays a key role – in other words, the robot needs these social skills to perform its function. A survey of socially interactive robots (5) defines some key characteristics which summarise this group very well. A social robot should show emotions, have capabilities to converse on an advanced level, understand the mental models of its social partners, form social relationships, make use of natural communication cues, show personality and learn social capabilities.

Understanding Social Robots (1) offers another interesting perspective of what a social robot is:

Social robot = robot + social interface

In this definition, the robot has its own purpose outside of the social aspect. Examples of this can be care robots, cleaning robots in our homes, service desk robots at an airport or mall information desk or chef robots in a cafeteria. The social interface is simply a kind of familiar protocol which makes it easy for us to communicate effectively with the robot. Social cues can give us insight into the intention of a robot, for example shifting gaze towards a mop gives a clue that the robot is about to change activity, although they might not have eyes in the classical sense.

These indicators of social capability can be as useful as actual social ability and drivers in the robot. As studies show, children are able to project social capabilities onto simple inanimate objects like a calculator. A puppet becomes an animated social partner during play. In the same way, robots only have to have the appearance of sociability to be effective communicators. An Ethical Evaluation of Human–Robot Relationships confirms this idea. We have a need to belong, which causes us to form emotional connections to artificial beings and to search for meaning in these relationships. 

How should social robots look?

Masahiro Mori defined the Uncanny Valley theory in 1970, in a paper on the subject. He describes the effects of robot appearance and robot movement on our affinity to the robot. In general, we seem to prefer robots to look more like humans and less like robots. There is a certain point at which robots look both human-like and robot-like, and it becomes confusing for us to categorise them. This is the Uncanny Valley – where robot appearance looks very human but also looks a bit ‘wrong’, which makes us uncomfortable. If robot appearance gets past that point, and looks more human, likeability goes up dramatically.

In Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (2) we learn that there is a similar effect between robot appearance and trustworthiness of a robot. Robots that showed more positive emotions were also more likeable. So it seems like more human looking robots would lead to more trust and likeability.

Up to this point we have assumed that social robots should look humanoid or robotic. But what other forms can robots take? The robot should at least have a face (1) to give it an identity and make it into an individual. Further, with a face, the robot can indicate attention and imitate the social partner to improve communication. Most non-verbal cues are relayed through the face, and the face creates expectations of how to engage with the robot.

The appearance of a robot can help set people’s expectations of what it should be capable of, and limit those expectations to some focused functions which can be more easily achieved. For example, a bartender robot can be expected to hold a good conversation, serve drinks and take payment, but it’s probably OK if it only speaks one language, as it only has to fit the context it’s in (1).

In Why Every Robot at CES Looks Alike, we learn that Jibo’s oversized, round head is designed to mimic the proportions of a young animal or human to make it more endearing. It has one eye to prevent it from triggering the Uncanny Valley effect by looking too robotic and human at the same time. Also, appearing too human-like creates the impression that the robot will respond like a human, while they are not yet capable.

Another interesting example is of Robin, a Nao robot being used to teach children with diabetes how to manage their illness (6). The explanation given to the children is that Robin is a toddler. The children use this role to explain any imperfections in Robin’s speech capabilities.

Different levels of social interaction for robots

A survey of socially interactive robots (5) contains some useful concepts in defining levels of social behaviour in robots:

  • Socially evocative: Do not show any social capabilities but rely on human tendency to project social capabilities.
  • Social interface: Mimic social norms, without actually being driven by them.
  • Socially receptive: Understand social input enough to learn by imitation but do not seek social contact.
  • Sociable: Have social drivers and seek social contact.
  • Socially situated: Can function in a social environment and can distinguish between social and non-social entities.
  • Socially embedded: Are aware of social norms and patterns.
  • Socially intelligent: Show human levels of social understanding and awareness based on models of human cognition.

Clearly, social behaviour is nuanced and complex. But to come back to the previous points, social robots can still make themselves effective without reaching the highest levels of social accomplishment.

Effect of social robots on us

To close, de Graaf poses a thought-provoking question (4):

how will we share our world with these new social technologies and how will a future robot society change who we are, how we act and interact—not only with robots but also with each other?

It seems that we will first and foremost shape robots by our own human social patterns and needs. But we cannot help but be changed as individuals and a society when we finally add a more sophisticated layer of robotic social partners in the future.

References

  1. Understanding Social Robots (Hegel, Muhl, Wrede, Hielscher-Fastabend, Sagerer, 2009)
  2. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley (Mathur and Reichling, 2015)
  3. The Uncanny Valley (Mori, 1970)
  4. An Ethical Evaluation of Human–Robot Relationships (de Graaf, 2016)
  5. A survey of socially interactive robots (Fong, Nourbakhsh, Dautenhahn, 2003)
  6. Making New “New AI” Friends: Designing a Social Robot for Diabetic Children from an Embodied AI Perspective (Cañamero, Lewis, 2016)

Using Pepper Robots as Receptionists with Decos


Picture an alien meteorite landing on Mars. Inside it, inventing the technology of the future, is Decos, a highly innovative company that I encountered at the European Robotics Week. Located in Noordwijk in the Netherlands, they are breaking new ground with Softbank’s Pepper robot. I’ve come to hear about their robotics division and their use of Pepper as a receptionist.

Pepper the receptionist

Pepper waits at the entrance, dressed in a cape for Sinterklaas (the Dutch precursor to Christmas).

I greet her but she doesn’t respond – then I notice her tablet prompting me as to the nature of my visit. I indicate that I have an appointment and then speak the name of my contact, Tiago Santos, out loud. She recognises it after two tries, to my relief. A little robot, Eco, rolls up and unlocks the door for me, to lead me to my meeting. The office space is white and fresh, with modern angles everywhere and walls of glass, highlighting the alien environment outside.


Over a cup of tea, Tiago asks me to fill in an evaluation form which they will use to improve Pepper’s receptionist routine. This is done on one of three large tv screens in the downstairs canteen. I offer some comments about the interaction flow and the lack of feedback when Pepper has not heard what I have said.

Tiago proceeds to tell us about how Decos used to have a human receptionist, but did not have enough work to keep her fully occupied in their small Noordwijk office. Pepper has taken her place, enabling her to do other, more interesting things, which is how one would wish robotisation would work in the future. Pepper can speak, show videos on her tablet and take rating input. Decos hopes to distribute their robot receptionist module through human receptionist outsourcing companies.

More about Decos

Decos is dedicated to innovation and futuristic technologies, digitising manual processes and making things smarter. They have several products created by different companies under their banner, in the areas of smart cities, smart work and smart mobility. To foster innovation while managing risk, they create small technology startups within the company; once viability is established and a good business model is found, they invest more heavily. The company believes in self-management, the only management being the board which steers the company and some project managers. They have one site in the Netherlands and one in Pune, employing a total of about 200 people. The office building is filled with awesome futuristic gadgets to increase the creativity of their staff, including an Ultimaker 3D printer and a virtual reality headset. The walls are covered in space-themed pictures. There’s a telescope upstairs and an ancient meteorite downstairs. This place was created with imagination and inspiration.

 

Decos’ robotics startup is composed of three developers who program in all kinds of languages, including C, C#, Python and JavaScript. They make use of all available APIs, which necessitates using the various languages employed in AI. They work on two robots at the moment – Pepper, a social robot made by Softbank (Aldebaran), and Eco, a robot of their own design, manufactured by their partners.

 

Eco the robot

Eco is a little robot that rolls around, rather like a bar stool on a Roomba, and makes use of Decos’ autonomous life module. It wanders around absentmindedly, pauses thoughtfully next to Tiago’s leg, and rolls on. The body is a high-quality 3D plastic print with a glossy finish, angular and reminiscent of the Decos building. It has an endearingly flat and friendly ‘face’ which is displayed on what appears to be a tablet. Another Eco unit patrols upstairs. A third prototype lies without a chassis in the development area, along with Robotnik, the next and larger version of Eco.

Tiago tells me that this version’s aluminium chassis promises to be far easier to manufacture and thus more scalable. Robotnik and Eco have a Kinect sensor and lasers for obstacle detection – Tiago mentions that two kinds of sensors are needed to clarify confusing readings caused by reflections. The company believes that all complex artificial intelligence processing can be done as cloud services – in essence, the brain of the robot is in the cloud, all based on IoT. They call this artificial intelligence engine C-3PO. They also have several other modules, including one for human interaction, a ticketing system, Pepper’s form input module, and their own facial recognition module.

Social robot pioneers

All too soon, my visit to Decos’ futuristic development lab has come to an end. I can’t help rooting for them and for similar companies which show courage in embracing innovation and its risks. It seems to come with its own interesting challenges, like inventing a business model, creating a market and choosing partners and early adopters to collaborate with. Working in this space takes imagination and vision, as you have to invent the rules which will lead to the unfolding of the entire industry in coming years. Decos seems to embody the spirit of exploration which is needed to define and shape what is to come.

 

European Robotics Week 2016

The European Robotics Week in 2016 took place from 18 to 22 November in several countries including The Netherlands, Austria, Lithuania, Norway, Portugal, Serbia and many more. This event has been occurring since 2011 to spread public awareness of robotics applications and brings together industry, researchers and policymakers. The central event this year was held in Amsterdam. I attended one of the 5 days of activities in the Maritime Museum. The theme was ‘Robots at your service – empowering healthy aging’ which encompassed a variety of activities arranged over the 5 day duration, including debates, open sessions where you could network while interacting with different kinds of robots, workshops for children and a 2 day hackathon. I attended the robot expo and two of the debate sessions which I will summarise below.

Robot Exhibition

Although it’s clearly still early days for general consumer robotics in terms of the price-value ratio, there are ever more options available for enthusiasts and for very specific applications. The exhibition had a good selection of lovely bots.

Panel Discussion: Roboethics

This discussion was about ethics in robotics but it touched a really wide variety of aspects around this, including some philosophy. The speakers were of a very high standard and from a wide variety of backgrounds which gave the discussion its great breadth.

Here are some highlights:

Robots in care – good or bad?

  • How should robots in care evolve?
    • Robots should be applied to care because the need for care increases as populations in developed countries age, while the labour force interested in care shrinks. But are robots really the answer to this sensitive problem?
    • The distinction was made that a wide range of activities qualify as ‘care’, ranging from assisting people to care for themselves, to caring for them, to providing psychological and emotional support in times of depression, distress or loneliness. Are robots suitable for all of these kinds of activities or only a spectrum? Before you judge, consider this example given, of a Nao robot used in interacting with children with diabetes, in the PAL project (Personal Assistant for health Lifestyle). The robot interacts with the children to educate them about their illness and help them track it. The children confide in the robot more easily than adults and hospital attendance in children with diabetes goes up. This is an example of using robots to build a relationship and put people at ease – something that the robot does more easily in this case than a human. When is a robot more trustworthy than a human? When is the human touch really needed?
    • Should we want robots to be more human when they are used for care? In some cases we do, when there is a need to soothe and connect, to comfort. But in other cases a more impersonal and less present robot might be desirable. For instance, if you needed help going to the toilet or rising from bed for the rest of your life, would you need a lot of human interaction around that, or prefer it to blend seamlessly into your life to enable you to be as independent as possible?

Robot ethics and liability

  • Responsibility vs liability of robots for their misdeeds
    • This question always comes up in a discussion about robot ethics – people feel uncomfortable with the idea of a robot’s accountability for crimes. For instance, if an autonomous car runs over a pedestrian, who would be responsible for that, the car, owner or manufacturer?
    • A good point was raised by philosophy professor Vincent Muller to take this argument further. If a child throws a stone through a window, she is responsible for the action but the parent is liable for damages. In the same way, a robot can be responsible for doing something wrong, but another entity, like the owner, might be liable for damages caused.
    • When discussing whether a robot can be held liable for a crime, we imply it can understand that its actions were wrong and did them anyway. But robots as yet have no understanding of what they are doing, so the conclusion was that a robot cannot be meaningfully convicted of a crime.
Gorgeous Maritime Museum in Amsterdam

 

Panel Discussion: Our Robotic Future

In this next session, a general discussion on the future of robotics ensued, followed by each speaker giving their wish and hope for this future. There was a strong Euro-centric flavour to the discussion, which gave a fascinating insight into the European search for identity in this time of change – in the robotics and AI revolution, who are we and what do we stand for? How will we respond to the threats and opportunities? How can we lead and usher in a good outcome? The panel itself fell on the optimistic side of the debate, looking forward to positive outcomes.

Robotics Research

  • The debate started off underlining the need to share information and research so that we can progress quickly with these high potential technologies.
  • A distinction was made between American and European methods of research in AI and robotics
    • American research is funded by defence budgets and not shared openly
    • In the US, research is done by large corporations like Facebook and Google, and is often funded by DARPA.
    • In Europe, research is funded by the European Commission, and often is performed through startups which means less brute strength, but more agility to bring research to fruition. However because these startups are so small, they can be reluctant to share their intellectual property which might be their only business case.

Accepting the Robot Revolution

  • There is a lot of hype about robots and AI in the media which stirs people’s imaginations and fears – how can we usher in all the benefits of the robot revolution?
    • Part of the fear is that people don’t like to accept that the last distinction we have of being the smartest on Earth could be lost.
    • There is a growing economic divide caused by this technological revolution
      • We should not create a system that creates advantage for only a select group, but aim for an inclusive society that allows all to benefit from abundance of robotics.
      • People can be less afraid of robots if the value they add is made clear, for example a robot surgeon which is more accurate than a human surgeon can be viewed positively instead of as a threat.
      • Technological revolutions are fuelled by an available workforce which can pick up the skills needed to usher them in. In the industrial revolution, for example, agricultural workers could be retrained to work in factories; later, a large labour force was available to fuel the IT revolution, which, again, put many out of work by automating manual tasks. There is a concern that, with STEM graduates on the decline, we will lack the skilled resources to build momentum for AI and robotics.
    • Investors in robotics and AI are discouraged by the growing stigma around these technologies.
    • Communication policies on this topic are designed around getting people to understand the science, but this thinking must shift. The end users who currently fear and lack understanding must become the centre of communication in the future – AI and robotics must enable them, and they should have the tools to judge and decide on its fate.

 

Conclusion

This is certainly an exciting time to be alive, as there is so much to still determine and discover in the growth of the AI and Robotics industries and disciplines. Such an event also highlights how far we are from this reality – 10 year horizons were discussed for AI and Robotics to become commodities in our homes. It is very encouraging to see the clear thinking and good intentions that go into making these technologies mainstream. In the coming months I’d like to dig into the topics of ethics, regulation, EU funding, and what the future of AI and robotics could bring.

Nao Robot with Microsoft Computer Vision API

Lately, I’ve been experimenting with integrating an Aldebaran Nao robot with an artificial intelligence API.

While writing my previous blog post on artificial intelligence APIs, I realised there were way too many API options out there to try out casually. I did want to start getting some hands-on experience with the APIs myself, so I had to find a project.

Pep the humanoid robot from Aldebaran

My boyfriend, Renze de Vries, and I were both captivated by the Nao humanoid robots during conferences and meetups, but found the price of buying one ourselves prohibitive. He already had a few robots of his own – the Lego Mindstorms robot and the Robotis Bioloid robot, which we named Max – and he has written about his projects here. Eventually we crossed the threshold and bought our very own Nao robot together from http://www.generationrobots.com/ – we call him Peppy. Integrating an AI API into Peppy seemed like a good project to get familiar with what the AI APIs can do with real-life input.

Peppy the Nao robot from Aldebaran

Nao API

The first challenge was to get Pep to produce an image that could be processed. Pep has a bunch of sensors including those for determining position and temperature of his joints, touch sensors in his head and hands, bumper sensors in his feet, a gyroscope, sonar, microphones, infrared sensors, and two video cameras.

The Nao API, Naoqi, contains modules for motion, audio, vision, people and object recognition, sensors and tracking. In the vision module you have the option to take a picture or grab video. The video route seemed overly complicated for this small POC, so I went with the ALPhotoCapture class – Java docs here. This API saves pictures from the camera to local storage on the robot, and if you want to process them externally, you have to connect to Pep’s filesystem and download them.

ALPhotoCapture photocapture = new ALPhotoCapture(session); // proxy to the robot's photo capture service
photocapture.setResolution(2); // 2 = VGA, 640x480
photocapture.setPictureFormat("jpg");
photocapture.takePictures(1, "/home/nao/recordings/cameras/", "pepimage", true); // save one picture on the robot

The Naos run a Gentoo Linux distribution called OpenNAO. They can be reached on their IP address after they connect to your network over a cable or wifi. I used JSCape’s SCP module to connect and copy the file to my laptop.
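
For reference, a roughly equivalent copy step can be sketched with the open-source JSch library instead of JSCape – the host, username and password below are placeholders:

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class FetchPepImage {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("nao", "192.168.1.23", 22); // robot's user and IP
        session.setPassword("nao");
        session.setConfig("StrictHostKeyChecking", "no"); // fine for a local experiment
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        // Adjust the remote name to the exact file Naoqi writes for your capture.
        sftp.get("/home/nao/recordings/cameras/pepimage.jpg", "pepimage.jpg");
        sftp.disconnect();
        session.disconnect();
    }
}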

Picture taken by Peppy’s camera

Microsoft Vision API

Next up was the vision API – I really wanted to try the Google Cloud Vision API, but it’s intended for commercial use and you need a VAT number to be able to register. I also considered IBM Bluemix (I have heard good things about the Alchemy API), but you need to deploy your app into IBM’s cloud in that case, which sounded like a hassle. I remembered that the Microsoft API was just a standard webservice without much investment needed, so that was the obvious choice for a quick POC.

At first, I experimented with uploading the .jpg file saved by Pep to the Microsoft Vision API test page, which returned this analysis:

Features:

  • Description: { "type": 0, "captions": [ { "text": "a vase sitting on a chair", "confidence": 0.10692098826160357 } ] }
  • Tags: [ { "name": "indoor", "confidence": 0.9926377534866333 }, { "name": "floor", "confidence": 0.9772524237632751 }, { "name": "cluttered", "confidence": 0.12796716392040253 } ]
  • Image Format: jpeg
  • Image Dimensions: 640 x 480
  • Clip Art Type: 0 (non-clipart)
  • Line Drawing Type: 0 (non-line drawing)
  • Black & White Image: Unknown
  • Is Adult Content: False
  • Adult Score: 0.018606722354888916
  • Is Racy Content: False
  • Racy Score: 0.014793086796998978
  • Categories: [ { "name": "abstract_", "score": 0.00390625 }, { "name": "others_", "score": 0.0078125 }, { "name": "outdoor_", "score": 0.00390625 } ]
  • Faces: []
  • Dominant Colors (background, foreground): empty
  • Accent Color: #AC8A1F

I found the description of the image quite fascinating – it seemed to describe what was in the image closely enough. From this, I got the idea to return the description to Pep and use his text to speech API to describe what he has seen.

Next, I had to register on the Microsoft website to get an API key. This allowed me to programmatically pass Pep’s image to the API using a POST request. The response was a JSON string containing data similar to that above. You have to set some URL parameters to get the specific information you need – the Microsoft Vision API docs are here. I used the Description text because it was as close as possible to a human-constructed phrase.

https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Description
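
The call itself can be made from Java along these lines – a sketch rather than the exact code I used; the octet-stream upload and the Ocp-Apim-Subscription-Key header follow the Microsoft documentation at the time, and error handling is left out:

import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;

public class VisionRequest {
    public static String describe(File image, String apiKey) throws Exception {
        URL url = new URL("https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Description");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", apiKey);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(Files.readAllBytes(image.toPath())); // send the raw jpg bytes
        }
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder json = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
            return json.toString(); // JSON similar to the example below
        }
    }
}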

The result looks like this – the tags man, fireplace and bed were incorrect, but the rest are correct:

{"description":{"tags":["indoor","living","room","chair","table","television","sitting","laptop","furniture","small","white","black","computer","screen","man","large","fireplace","cat","kitchen","standing","bed"],"captions":[{"text":"a living room with a couch and a chair","confidence":0.67932875215020883}]},"requestId":"37f90455-14f5-4fc7-8a79-ed13e8393f11","metadata":{"width":640,"height":480,"format":"Jpeg"}}

Text to speech

The finishing touch was to use Nao’s text to speech API to create the impression that he is talking about what he has seen.

ALTextToSpeech tts = new ALTextToSpeech(session); // Naoqi text-to-speech proxy
tts.say(text); // e.g. the caption returned by the Vision API

This was Nao looking at me while I was recording with my phone. The Microsoft Vision API incorrectly classifies me as a man with a Wii. I could easily rationalise that the specifics of the classification are wrong, but the generalities are close enough:

Human → Woman or Man
Small Electronic Device → Remote, Phone or Wii

This classification was close enough to correct – a vase of flowers sitting on a table.

Interpreting the analysis

Most of the analysis values returned are accompanied by a confidence level. The confidence level in the example I have is pretty low, the range being from 0 to 1.

"text": "a vase sitting on a chair", "confidence": 0.10692098826160357

This description also varied based on how I cropped the image before analysis. Different aspects were chosen as the subject of the picture with slightly different cropped views.

The Vision API also returned Tags and Categories.

Categories give you a two-level taxonomy categorisation, with the top level being:

abstract, animal, building, dark, drink, food, indoor, others, outdoor, people, plant, object, sky, text, trans

Tags are more detailed than categories and give insight into the image content in terms of objects, living beings and actions. They give insight into everything happening in the image, including the background and not just the subject of the image.

Conclusions

Overall, I was really happy to integrate Nao with any kind of Artificial Intelligence API. It feels like the ultimate combination of robotics with AI.

The Microsoft Vision API was very intuitive and easy to get started with. For a free API with general classification capabilities, I think it’s not bad. These APIs are only as good as their training, so for more specific applications you would obviously want to invest in training the API more intensively for the context. I tried IBM Bluemix’s demo with the same test image from Pep, but could not get a classification out of it – perhaps the image was not good enough.

I did have some reservations about sending live images from Pep into Microsoft’s cloud. In a limited and controlled setting, and in the interests of experimentation and learning, it seemed appropriate, but in a general sense, I think the privacy concerns need some consideration.

During this POC I thought about more possibilities for integrating Pep with other APIs. The Nao robots have some sophisticated Aldebaran software of their own which provides basic processing of their sensor data, like facial and object recognition and speech to text. I think there is a lot of potential in combining these APIs to enrich the robot’s interactive capabilities and delve further into the current capabilities of AI APIs.