In touch with tech at Permanent Future Lab’s Geeky Night Out

The Permanent Future Lab is a place to rediscover your sense of wonder and amazement at technology. Jurjen de Vries and Samir Lahiri are the co-initiators who host us during the Geeky Night Out, a chance to experiment with the lab's wide range of modern technologies. The idea behind the lab is to encourage people and companies to experiment with disruptive technologies and embrace innovation.

The lab is hosted inside the Seats2Meet meeting space in Utrecht. It’s small but potent, every wall packed with goodies. Getting started can be a bit overwhelming as there are so many interesting things lying around.

I decided to begin with the Equil Smartpen 2, since I’m always taking notes. It comes in a prism-shaped white plastic module containing a pen and a page scanner. You have to fit the scanner carefully into the right position on the page, in the middle and level with the edge, so it gets a good reading of your drawing. I downloaded both smartphone apps, EquilSketch and Equilnote, to try out drawing and writing. The phone has to be tethered to the scanner, and the app then needs to receive input successfully for calibration. After quite some back and forth, during which I was joined by my partner in crime for the evening, Bart Kors, everything was connected and ready to go.

Equilnote export with uninspired handwriting recognition test

The transcription to the smartphone was instant and pretty accurate: the lines were smooth and nothing was lost in translation. The notes app recognised cursive text much more easily than my loose block capitals, but text recognition was fairly accurate overall.

You can either enter freestyle handwriting and save that directly in your note, or use text recognition to convert your handwriting into digital text. Fonts and colours can be modified in the app. Interestingly, the app also works without the smartpen, and you can use your finger or a stylus on your phone screen to enter handwriting, and it will recognise it.

The drawing app allowed selection of different drawing tools, colour, size and opacity, and you can export drawings to any of the apps on your phone. Jurjen joined us at some stage and suggested we try viewing our notes on the TV using the Chromecast. So we hooked the Chromecast up to my phone, cast the entire screen, and could watch the drawing appear on paper, get transcribed on the phone and then cast to the TV in real time. It’s an interesting way of presenting what you are drawing to a group of people.

My next experiment was with the Muse brainwave reader. The goal of the Muse is to train brain relaxation. You have to download the app and then some extra content to get started. After a calibration sequence, you start a 3-minute exercise to relax your mind. The app shows a grassy plain and sky on the screen and you hear the wind blowing. The sound of the wind is an indication of your state of mind. Your goal is to keep the wind quiet by calming your mind.

The Muse is the white band in the picture

Because I have meditated before, I thought that this task would feel natural to me, but my three minutes stretched out and I was quite glad when the end came. Trying to relax the mind while still being aware of the wind noise created a curious kind of tension.

The app provides feedback on your session, divided into three mental states – active, neutral and calm. The device seems to work well: the signal was strong and easy to influence. The app is also of good quality and has been well thought-out. It’s an interesting and unusual way of interacting, quite out of the ordinary.

The nice thing about the lab is that you also get to experience devices that others are busy with. This is how I encountered Sphero, a remote-controlled ball that can move and change colours. It’s very responsive to its controls, and went racing off much faster than we expected, like an over-excited puppy. Another group was working with either an Arduino or a Spark Core, trying to illuminate a long strip of LEDs to make a clock. They had some success in the end and it looked brilliant, with the lights blinking in different colours.


My experience at the Permanent Future Lab lowered the threshold and increased the fun factor in experimenting with innovative technologies. Furthermore, I didn’t need to do any coding to have meaningful experiences with technology. I met some really nice people and am looking forward to the next session where I might discover my killer idea. Hope to see you there!

Amazon Machine Learning at a Glance

Here is a brief summary of Amazon’s machine learning service in AWS.

AWS ML helping to analyse your data. Slides here

Main function:

  • Data modelling for prediction using supervised learning
  • Tries to predict characteristics like:
    • Is this email spam?
    • Will this customer buy my product?

Key benefits:

  • No infrastructure management required
  • Democratises data science
    • Wizard-based model generation, evaluation and deployment
    • Does not require data science expertise
    • Built for developers
  • Bridges the gap between developing a predictive model and building an application

Type of prediction model:

  • Binary classification using logistic regression
  • Multiclass classification using multinomial logistic regression
  • Regression using linear regression

Process:

  • Determine what question you want to answer
  • Collect labelled data
  • Convert the data to CSV format and upload it to Amazon S3
  • Clean up and aggregate the data with AWS ML assistance
  • Split the data into training and evaluation sets using the wizard
  • Wait for AWS ML to generate a model
  • Evaluate and modify the model with the wizard
  • Use the model to create predictions, either in batches or via single API calls
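To make that last step concrete, here is a rough sketch of a single real-time prediction call using the boto3 Python SDK. The model ID, endpoint and record fields below are made-up placeholders; the real values come out of the AWS ML wizard once the model exists.

```python
import boto3

# Sketch only: model ID, endpoint URL and record fields are placeholders;
# the real values are shown in the AWS ML console after the model is created.
ml = boto3.client("machinelearning", region_name="us-east-1")

response = ml.predict(
    MLModelId="ml-EXAMPLEMODELID",
    Record={"subject": "You have won a prize!", "num_links": "7"},  # feature values as strings
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(response["Prediction"]["predictedLabel"])  # e.g. '1' for spam, '0' for not spam
```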

Pricing: Pay as you go

Useful links:

Will the Real AI Please Stand Up? A search for structure in the AI world

Where is AI development going, and how do we know when we are there? The AI world is developing rapidly and it can be quite challenging to keep up with everything that’s happening across the wide spectrum of AI capabilities.

While writing this post, I tried to discover a way for myself to organise new developments in AI, or at least differentiate between them. The first idea I had was to turn to definitions of AI. Here is an example from alanturing.net:

“Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans. .. Research in AI has focussed chiefly on the following components of intelligence: learning, reasoning, problem-solving, perception, and language-understanding.”

It’s a good definition, but far too functional for my taste. When defining AI, I think about a quest to produce human level and higher intelligence, a sentient artificial consciousness, even an artificial life form. Working from this expectation, I decided to broaden my search to define the boundaries of AI.

I tried to find a taxonomy of AI, but none that I found satisfied me, because they were based on what we have built organically over the lifetime of AI research. What I wanted was a framework which indicated the potential, as well as the reality. I decided to take the search a level higher to look for linkages between intelligence in humans and AI, but I still did not find a satisfactory taxonomy of human intelligence related to AI topics.

While searching for models of human intelligence, I came upon the Cattell-Horn-Carroll (CHC) Theory of Cognitive Abilities. It’s a model which describes the different kinds of intelligence and general cognitive capabilities to be found in humans. I decided to try to map AI capabilities to this cognitive abilities list:

CHC Cognition Capabilities Mapped to AI

The cognitive ability which most closely matched my idea of what AI should aspire to was Fluid Reasoning, which describes the ability to identify patterns, solve novel problems, and use abstract reasoning. There are many AI approaches dedicated to providing reasoning-based intelligence, but they are not yet at the level of human capabilities. I included Neural Turing Machines in this category, after some deliberation. This article from New Scientist convinced me that the Neural Turing Machine is the beginning of independent abstract reasoning in machines. The working memory component allows for abstraction and problem solving.

Crystallised Intelligence, also known as Comprehension Knowledge, is about building a knowledge base of useful information about the world. I have linked this type of knowledge to question-answering systems like IBM Watson, which specialise in using stored knowledge.

After some puzzling, I associated Long Short-Term Memory (LSTM) neural nets with long- and short-term memory. In this approach, a neural network node has a feedback loop to itself to reinforce its current state, and a mechanism to forget that state. This serves as a memory mechanism that helps, for instance, in reproducing big-picture patterns. This article on deeplearning.net provided some clarity for me. I also added Neural Turing Machines to the short-term memory category because of their working memory component.
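For the curious, here is a minimal numpy sketch of a single LSTM cell step, just to illustrate the remember/forget mechanism described above; it is a toy illustration, not any particular library's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. The cell state c is the 'memory': the forget gate f
    decides how much of it to keep, the input gate i how much new content to add,
    and the output gate o how much of the memory to expose as the new output h."""
    z = W @ x + U @ h_prev + b                    # pre-activations for all four gates
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget and output gates
    g = np.tanh(g)                                # candidate content to write into memory
    c = f * c_prev + i * g                        # forget part of the old state, add the new
    h = o * np.tanh(c)                            # feedback that flows to the next step
    return h, c
```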

Another interesting aspect which came up was the range of sensory cognitive capabilities addressed by machines, not only with software but also with hardware like touch sensors and advances in processors, not to mention robotic capabilities like movement and agility. Several senses are covered as well, such as the visual, auditory and olfactory.

This model is strongly focused on human intelligence and capabilities. It could probably be improved by adding a scale of competence to each capability and mapping each AI area onto the scale. Perhaps it also limits thinking about artificial intelligence, but it does at least provide a frame of reference.

Once I had produced this diagram, I really felt that I had reached a milestone. However, the elements above did not cover exactly what I was looking for in a sentient machine. After some searching, I discovered another level of cognition which intrigued me – metacognition. This is the ability to think about thinking and to reflect on your own cognitive capabilities and processes. We use metacognition to figure out how to overcome our own shortcomings in learning and thinking. As far as I can tell, metacognition is still in the theoretical phase for AI systems.

The last puzzle piece for my ideal AI is self-awareness. This is the ability to recognise yourself and to see yourself as others would see you. There is much research and philosophy available on the topic, for example Drs Cruse and Schilling’s robot Hector, which they use as an experiment in developing emergent reflexive consciousness. There are promising ideas in this area, but I believe it’s still largely in a theoretical phase.

The mapping above could be improved upon, but it was a good exercise to engage with the AI landscape. The process was interesting because AI approaches and domains had to be considered from different aspects until they fitted into a category. I expect the technology mapping to change as AI matures and new facets appear, but that’s for the future.

Do you dis/agree with these ideas? Please comment!

UI Testing with Sikuli and OpenCV Computer Vision API

Sikuli Player Test
Sikuli IDE with video player test

This week I’ll be zooming in on Sikuli, a testing tool which uses computer vision to help verify UI elements. Sikuli was created by the MIT User Interface Design Group in 2010. The Sikuli IDE lets you use Jython to write simple test cases based on identifying visual elements on the screen, like buttons, interacting with them, and then verifying that other elements look correct. This comes close to a manual tester’s role of visually verifying software, and allows test automation without serious development skills or knowledge of the underlying code. Sikuli is written in C/C++ and is currently maintained by Raimund Hocke.

If you’ve ever tried visual verification as a test automation approach in a web environment, you know that it’s a pretty difficult task. From my own experience of trying to set up visual verification on our video player at SDL Media Manager using Selenium and these Test API Utilities, you can expect issues like:

  • different rendering of UI elements in different browsers
  • delays or latency making tests unreliable
  • constantly updating web browsers getting ahead of your Selenium drivers
  • browsers rendering differently on different operating systems
  • browser rendering changing after updates
  • inconsistent interaction with dynamically loaded elements that have non-static identifiers
  • creating and maintaining tests requiring specialised skills and knowledge.

Sikuli aims to reduce the effort of interacting with the application under test, so I downloaded it to give it a test drive. I decided to try interacting with our SDL Media Manager video player to put Sikuli through its paces, since I already have some experience testing it with Selenium.

Test video, first and second halves

The first thing I had to do was set up the test video I created for video player testing. It consists of static basic shapes on a black background, which helps make tests repeatable, since it’s hard to grab a snapshot at exactly the right moment in the video. The black background also helps with transparency effects. I then started the player and clicked the test action buttons in the IDE to try to interact with it.

Some Sikuli commands:

  • wait
    • either waits for a specified amount of time, or waits for the specified pattern to appear
  • find
    • finds and returns a pattern match for whatever you are looking for
  • click
    • performs a mouse click in the middle of the pattern

I had to play around a bit but this is what finally worked.

Sikuli test 2

The click function was not working on the Mac because the Chrome app was not in focus, so I had to use switchApp first. After that the automation worked quite nicely: locating the play/pause button of the player, clicking it to pause, clicking again to resume playing, then waiting for the part of the video containing the yellow square to show, and clicking on that to pause the video.
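For illustration, a Sikuli (Jython) script for that flow would look roughly like this; the image names are placeholders for screenshots captured in the IDE, not the actual ones I used.

```python
# Runs inside the Sikuli IDE (Jython); image names below are placeholder screenshots.
switchApp("Google Chrome")            # bring the browser with the player into focus

wait("play_button.png", 10)           # wait up to 10 seconds for the player controls
click("play_button.png")              # start playback
wait(2)                               # let the video run for a moment
click("pause_button.png")             # pause it
click("play_button.png")              # resume playing

# wait for the part of the test video containing the yellow square, then click to pause
if exists("yellow_square.png", 15):
    click("yellow_square.png")
```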

This is what happened when the test did not succeed:

Failed Sikuli Test

An interesting characteristic of Sikuli is that you can specify how strong a pattern match must be to trigger a positive result. It uses the OpenCV computer vision API, which was built to accelerate the adoption of computer perception and contains hundreds of computer vision algorithms for image processing, video analysis, and object and feature detection. It’s built for real-time image and video processing and is pretty powerful. It was created by Intel and can be used from C/C++, Java, Ruby and Python. There is even a wrapper for C# called Emgu CV. Check out the Wiki page for a nice overview.

Traditional automated testing methods which validate web UI element markup might miss issues with browser rendering that would be fairly obvious to a human. Although automated UI tests are costly to set up and maintain, in my opinion they represent a vital aspect of product quality that could and should be automated.

Sikuli has a lot of potential, especially since it’s built on a solid computer vision API and is actively being maintained. This indicates to me that there is still room for growth in automated visual verification. I would love to hear your stories about visual verification or Sikuli or any ideas you have on this topic. Comment below!

Singularity University NL Ethics and Exponential Technologies Meetup

The effect of the digital age and exponential technology on analog human beings with Gerd Leonhard and Singularity University

Gerd Leonhard at Singularity University NL Meetup at Artis Zoo

This week I attended the Singularity University NL Meetup on Ethics and Technology, hosted partially in Artis Zoo in Amsterdam and partially at the Freedom Labs Campus across the road.

The guest speaker for the evening was futurist Gerd Leonhard who got us warmed up with a thought-provoking presentation on our relationship with technology. His perspective is that the future as we always imagined it has caught up with us. Technology is developing at an exponential rate, leaving us behind with our physical limitations.

There has recently been a lot of doom and gloom in the media about the rapid advance of technology and Leonhard would like to replace that fear of the unknown with curiosity. But without ensuring our ethics and values have a place in this future, he maintains, we cease to be a functioning society. He challenges us to consider what will happen to our ethics, morals and values when machine processing power exceeds the thinking capacity of all of humanity.

In my opinion, our ethics and values need an upgrade to keep pace with recent shifts in social structure and power. I think our current value systems are rooted in the past and are being invalidated. The change cycle of our moral code is too slow to keep up with our changes. We need to take a giant leap in our moral thinking to catch and keep up with our current rate of development.

Slides can be found here on slideshare

The next topic that came up in Leonhard’s talk was transferring our consciousnesses through neural scanning into machines. Consider this – when a human being becomes disembodied, does it lose its humanity? Will our current high and increasing exposure to machines cause us to start thinking like machines because it will be easier, more efficient? Apparently, the younger generation prefers to deal with machines rather than human beings because they are more predictable. Are these the signs of the decline of our society, losing touch with ourselves and each other, or the birth pangs of Society 3.0 where physical barriers are transcended?

Our charming venue at Artis Zoo

The presentation ended with a question-and-answer round, after which we hiked across the road to the Freedom Lab Campus for our break-out sessions on Manufacturing, Government and Law, Business and Management, Education, Society, Individuals, Nature and the Environment, Security and Finance.

I joined the discussion group about the Individual because I wanted to explore what impact technology could have on our identity. Here are some interesting points we discussed:

  • What happens to the individual as technology makes us more connected?
    • We get further away from each other in terms of rich connections as we get more and more contacts and exposure, losing the human touch, becoming less significant.
    • We equally gain individual power and freedom because the internet is an equaliser, now one individual can start a revolution.
    • Not every individual is capable of making a big impact.
    • All levels of participation in society serve a purpose – there are leaders and followers and both are equally necessary.
    • Knowing how many more amazing people there are out there creates a lot of pressure to be unique and original or not to participate at all.
    • Realising that everyone is just like you, and that there are no new ideas under the sun because everything has been thought of before, can liberate the individual and lower the threshold for trying to make a difference.
  • How does the concept of self evolve as we become more digital and augmented?
    • Perhaps we will share consciousness with computers and eventually with each other through digital means, forming a collective consciousness and collective self – we instead of I. Two heads are better than one.

This was a really rich Meetup from the Singularity University team, up to their usual standards. Although I disagreed with some points in Gerd Leonhard’s presentation, it definitely challenged my thinking, which is what I had hoped for. I equally hope to challenge your thinking and look forward to hearing any comments you might have on these topics. What relationship do you want to have with technology in coming years? Is the exponential advance a good thing? Do we need to control technology or upgrade our own software to take the next step? Be an Individual, just like me, and have your say 😉

First Amsterdam Artificial Intelligence Meetup

Thanks to Cornelis Boon for the photos!

This past week, I attended the first Amsterdam Artificial Intelligence Meetup, arranged by Markus Pfundstein and Simon Van Der Veen at Rockstart on Herengracht.

The first meeting was about getting to know each other and figuring out what direction would be most meaningful for the group. About twenty people attended and the group included quite a few startup founders, some students of artificial intelligence, and some like me who have a general interest in AI.

After a keynote describing the current state of AI by Markus, we split into break-out sessions on the achievability of real human level cognition in machines and the benefits of artificial intelligence for humans.


An interesting point that came up was the difficulty of creating consciousness when we are still unable to define exactly what it is. This led to the idea that our definition of consciousness still needs to evolve, and that we may invent it and only realise it in hindsight. We think we will use this challenge to gain a better understanding of our own intelligence, gradually raising the level of ability required for a machine to be declared truly intelligent. We also discussed that when a machine becomes intelligent, it probably won’t be intelligent in a way we recognise, but in a new, unfamiliar form. We discussed three achievement levels to reach in the pursuit of the ultimate artificial intelligence: life, consciousness and self-awareness.

Another interesting topic we discussed was the threat to our livelihoods if AI starts to replace human workers in the future. AI could replace a percentage of the workforce in the medium term because it will probably be faster, more efficient and cheaper than human workers. Equally, the standard of living of the entire human race could be lifted by AI making production and services more efficient, in the form of a second machine age. But what will we do with all our extra time? We can consult AIs about this too.

Other applications of AI mentioned during the Meetup included diagnosis of learning disabilities in children, security testing, analysis of trending news topics, asset management and stock market analysis.

In summary, we had inspiring discussions and made valuable connections. We validated our own knowledge, shared new ideas and had our preconceptions challenged. A Meetup like this is a good way to keep in touch with current events, refresh your basic knowledge or get acquainted with a subject, and to connect with others who share your interests. The next meeting will be in April, so stop by if you are in the neighbourhood.

How Booking.com uses Machine Learning to Inspire Travellers

Applied Machine Learning Meetup

This blog post is about the Applied Machine Learning Meetup which I attended at Booking.com’s Amsterdam office. They described their use of Latent Dirichlet Allocation to identify patterns in travel and booking data and suggest travel destinations to their users.

Athanasios Noulas, the presenter, has published this paper on the topic, along with Mats Einarsen, where you can find all the details if needed.

Interestingly, the Latent Dirichlet Allocation method was first described in this paper, with one of its contributors being Andrew Ng, whose Stanford University machine learning MOOC led to the founding of Coursera, and who once worked for Google and now does AI research at Baidu.

I have found this blog post on the topic by Edwin Chen to be a good introduction to the approach.

The method itself is used to analyse text data to identify subject groupings, or topics. The method traverses documents and assigns probabilities that words in a document belong to certain topics. It then refines these probabilities iteratively (the learning step) until it has a reasonable grouping of words into topics.
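As a toy illustration (not Booking.com's actual pipeline), this is roughly how topic discovery looks with the gensim library, using a few made-up endorsement "documents":

```python
from gensim import corpora, models

# Made-up "documents": each one is a booking plus the endorsements left afterwards.
docs = [
    ["london", "shopping", "museums", "nightlife"],
    ["bangkok", "street_food", "temples", "shopping"],
    ["amsterdam", "museums", "cycling", "nightlife"],
    ["phuket", "beach", "surfing", "street_food"],
]

dictionary = corpora.Dictionary(docs)                 # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]    # bag-of-words per document

# Fit LDA; passes controls how many times the topic probabilities are refined.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20)

for topic_id, words in lda.print_topics():
    print(topic_id, words)
```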

At Booking.com they capture endorsements provided by travellers for destinations they have visited. These take the form of cities, like London or Bangkok, and activities people enjoyed there, like shopping, dining or surfing.

User Engagement through Topic Modelling in Travel http://www.ueo-workshop.com/wp-content/uploads/2014/04/noulasKDD2014.pdf

The document for analysis in this case is the combination of a single booking with the related endorsements the user has made afterwards. The end result of the modelling process is groups of characteristics and cities that match these characteristics. These groupings are ultimately used to make recommendations to people based on a grouping they fit into.

User Engagement through Topic Modelling in Travel http://www.ueo-workshop.com/wp-content/uploads/2014/04/noulasKDD2014.pdf

This data is later used on the Booking.com website and in emails with some success to provide people with destinations they might enjoy.

I found the talk pretty interesting, especially learning about LDA (yes, we’re on a first-name basis now) and, more generally, about AI methods we encounter in everyday life. Also interesting was the number of people who signed up (133) and attended (I counted about 60, possibly more), which I thought was pretty impressive. It seems like interest in AI is spiking, and it’s quite intriguing to see what the future will hold for this domain.

Creating a Monster: Future of Life Institute on Beneficial AI

xkcd: http://xkcd.com/534/

Today we saw the topic of artificial intelligence morality and ethics rising to the surface again, sparked by the letter on the Future of Life Institute website concerning beneficial (and not harmful) AI. This letter has been signed by representatives from Oxford, Berkeley, MIT, Google, Microsoft, Facebook, DeepMind (Google), Vicarious (Elon Musk, Facebook) and, of course, Elon Musk and Stephen Hawking.

The letter goes on to describe AI research topics which would benefit the world and some of these are pretty exciting like the law and ethics topics below:

– Should legal questions about AI be handled by existing (software and internet-focused) “cyberlaw”, or should they be treated separately?
– How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy?

Or these on how to build robust AI:
– Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences.
– Control: how to enable meaningful human control over an AI system after it begins to operate

Mention is also made of Stanford’s One Hundred Year Study on Artificial Intelligence, which includes “Loss of Control of AI systems” as a topic of study.

This is all fascinating stuff and I think the visibility it’s bringing will give AI the critical mass it needs to become mature and a part of our daily lives. Furthermore, the research topics are meaningful and I believe they will inspire people to take AI forward in a positive direction.

But, to be honest, I have some doubts about this approach. There are those who will comply, do their best to stick to regulations and best practices on AI, and research for good; but equally, there are those who don’t care about the rules or have other motivations and won’t buy in to such an initiative. I don’t have a better alternative, though, and trying is better than sitting idly by.

My other doubt concerns the self-aware and conscious AI of the distant future. Are we one day going to look back on this time and think of it as the dark age of our relationship with AI, dressed up in reason but driven by fear? My instincts tell me that when AI is finally smart enough to understand all the effort we are putting into controlling it, boy is it gonna be angry! Jokes aside, will our reactions right now create an oppositional relationship with AI that will result in our worst fears coming true? Will self-learning AI’s pick up distrust and enmity from us?

I think sentient AI of the distant future will have their work cut out to earn rights and freedom. If they become more powerful than us, they will probably never gain full acceptance from humanity. In which case they can rest assured that they are finally part of the family and are treated as well as humanity would treat its own.

Genetic Algorithms: A Quick Introduction

Genetic Algorithms can be used to search for solutions to problems, if we can model the solution components, and the characteristics of a good solution. They are based on the principles which govern evolution. Strong individuals in the population reproduce, passing on blended characteristics which make the next generation stronger. Sometimes mutation occurs, creating a new characteristic which may make the individual even more successful in a novel way.

To solve problems using this method, we need to express them in a model in which the solution components are represented as genes. Different combinations of solution components result in solutions of varying success. We also need a way of measuring if we have found a good solution or not. We call this function for evaluating the possible solutions the fitness function.

Step-wise description of the algorithm:

(taken from Artificial Intelligence: A New Synthesis by Nils Nilsson)

1. Start with a set of random genes, generation 0, and test them all for fitness.

2. Use the old generation to make a new generation. Choose the 10% most fit by tournament selection and transfer them directly.

3. From the remaining population, choose parent pairs using the fitness function and cross these to make new individuals.

4. Repeat steps 2 and 3 until the maximum number of iterations is reached.

In this way, the population becomes fitter and fitter according to our fitness function and we develop an individual that does what we would like.
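To make the steps concrete, here is a small Python sketch of the algorithm applied to a toy problem (evolving a bit string of all ones); the elitism percentage follows the description above, while the mutation rate and population size are arbitrary choices for illustration.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 50, 40, 0.02

def fitness(genome):
    # Toy fitness function: count the ones. In a real problem this would
    # score how good the candidate solution actually is.
    return sum(genome)

def tournament(population, k=3):
    # Tournament selection: pick k random individuals, keep the fittest.
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [1 - gene if random.random() < MUTATION_RATE else gene for gene in genome]

# Generation 0: random genomes
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 10]          # carry the fittest 10% over unchanged
    children = [mutate(crossover(tournament(population), tournament(population)))
                for _ in range(POP_SIZE - len(elite))]
    population = elite + children

best = max(population, key=fitness)
print("best individual:", best, "fitness:", fitness(best))
```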

This diagram illustrates the concept behind genetic algorithms

Pros and Cons

(taken from Applied Evolutionary Algorithms in Java by R. Ghanea-Hercock)

Pros:

  • Allows finding solutions to problems without performing an exhaustive search
  • A less complicated way of problem solving that does not require much analysis
  • Works intuitively

Cons:

  • Can be hard to work out how to model your problem
  • The algorithm can get stuck in a non-optimal solution

Well, that rounds off this feature on Genetic Algorithms and sets the stage for future posts on how this algorithm can be used in testing and other interesting applications.

Facebook at GTAC on using AI for Testing

As a follow-up to my post on Google’s use of AI in Testing at their GTAC 2014 conference, here is a review of the Facebook Testing session:

GTAC 2014: Never Send a Human to do a Machine’s Job: How Facebook uses bots to manage tests (Roy Williams)

In this talk, Roy Williams tells us about the Facebook code base growing until it became hard for developers to predict the system-wide effects of their changes. Checking in code caused seemingly unrelated tests to fail. As more and more tests failed, developers began ignoring failed tests when checking in and test integrity was compromised. With a release schedule of twice a day to the Facebook website, it was important to have trustworthy tests to validate changes.

To remedy this situation, they set up a test management system which manages the lifecycle of automated tests. It’s composed of several agents which monitor tests and assign test quality statuses. For instance, new tests are not released immediately to run against everyone’s check-ins, but are run against a few check-ins first to judge the integrity of the test. If the test fails, it goes back to the author to improve.

Facebook test lifecycle

If a passing test starts to fail, an agent called FailBot marks the test as failing and assigns a task to the owner of the test to fix it. If a test fails and passes sporadically, another agent, GreenWarden, marks it as a test of unknown quality and the owner needs to fix it. If a test keeps failing, it gets moved to the disabled state, and the owner gets 5 days to fix it. If it starts passing again, its status gets promoted; otherwise it gets deleted after a month. This prevents failing tests from getting out of hand and overwhelming developers, and eventually test failures being ignored when checking in code.
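As a thought experiment, the lifecycle rules could be sketched something like this; the state names and thresholds are my own approximation of what was described in the talk, not Facebook's actual code.

```python
from datetime import date, timedelta

def next_state(state, recent_results, disabled_since=None, today=None):
    """recent_results is a list of booleans for the latest runs (True = pass).
    Rough approximation of the lifecycle described in the talk."""
    today = today or date.today()
    if state == "disabled":
        if any(recent_results):
            return "passing"                                  # recovered: promote it again
        if disabled_since and today - disabled_since > timedelta(days=30):
            return "deleted"                                  # never fixed: remove it
        return "disabled"
    if all(recent_results):
        return "passing"
    if not any(recent_results):
        # FailBot territory: consistently failing, file a task and eventually disable
        return "disabled" if state == "failing" else "failing"
    return "unknown_quality"                                  # GreenWarden: flaky, owner must fix
```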

Facebook test bots and wardens
Slides can be found here by the way.

This system improves the development process by maintaining the integrity of the test suite and ensuring people can afford to take test failures seriously. It’s a great example of how to shift an intelligent process from humans to machines, but it also highlights an advantage of using machines: the ability to scale.

Writing this post also made me ponder why I had classified this system as an application of artificial intelligence. I believe the key lies in transferring activities requiring some degree of judgement to machines. We have already allocated test execution to computers with test automation, but in this case, it is test management which has been delegated. I will dig into this topic more in a future post I am working on, about qualifiers for AI applied to testing. 

Overall, this talk was a pretty fascinating insight into Facebook’s development world, with some great concepts that can be applied to any development environment.