Fernando Fischmann

Four Things To Help Us Understand Our AI Colleagues

30 November, 2018 / Articles

Back in 1959 she used her impressive intellect to solve a previously intractable problem: echoes on telephone lines. At the time, long-distance calls were often ruined by the sound of the caller’s own voice bouncing back at them every time they spoke.

She fixed the issue by recognising when an incoming signal was the same as the one going out, and electronically deleting it. The solution was so elegant, it’s still used today. Of course, she wasn’t human – she was a system of Multiple ADAptive LINear Elements, or Madaline for short. This was the first time artificial intelligence was used in the workplace.
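The principle behind her trick is still easy to demonstrate. The sketch below is a rough, modern illustration in Python rather than Madaline’s original circuitry, with invented signal names: an adaptive filter learns to predict the echo of the outgoing signal and subtracts it from the incoming line.

```python
import numpy as np

def cancel_echo(outgoing, incoming, taps=32, mu=0.01):
    """Adaptive (LMS-style) echo canceller sketch: learn a filter that predicts
    the echo of the outgoing signal and subtract it from the incoming line."""
    w = np.zeros(taps)                      # filter weights, learned on the fly
    cleaned = np.zeros_like(incoming)
    for n in range(taps, len(incoming)):
        x = outgoing[n - taps:n][::-1]      # most recent outgoing samples
        echo_estimate = w @ x               # predicted echo
        error = incoming[n] - echo_estimate
        cleaned[n] = error                  # what remains once the echo is removed
        w += mu * error * x                 # Widrow-Hoff style weight update
    return cleaned

# Toy usage: the incoming line carries a delayed, attenuated copy of the
# outgoing voice plus a faint far-end signal (modelled here as noise).
rng = np.random.default_rng(0)
outgoing = rng.standard_normal(5000)
incoming = 0.6 * np.roll(outgoing, 10) + 0.05 * rng.standard_normal(5000)
cleaned = cancel_echo(outgoing, incoming)
print("power before:", np.mean(incoming[100:] ** 2))
print("power after: ", np.mean(cleaned[100:] ** 2))
```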

Today it’s widely accepted that brainy computers are coming for our jobs. They’ll have finished your entire weekly workload before you’ve had your morning toast – and they don’t need coffee breaks, pension funds, or even sleep. Although many jobs will be automated in the future, in the short term at least, this new breed of super-machines is more likely to be working alongside us.

Despite incredible feats in a variety of professions, including the ability to stop fraud before it happens and spot cancer more reliably than doctors, even the most advanced AI machines around today don’t have anything approaching general intelligence.

According to a 2017 McKinsey report, with current technology just 5% of jobs could eventually be fully automated, but 60% of occupations could see roughly a third of their tasks taken over by robots.

And it is important to remember that not all robots use artificial intelligence – some do, many don’t. The problem is that the very same deficiency preventing these AI-powered robots from taking over the world will also make them extremely frustrating colleagues. From a tendency towards racism to a total inability to set their own goals, solve problems, or apply common sense, this new generation of workers lacks skills that even the most bone-headed humans would find easy.

So, before we gambol off into the sunset together, here’s what you will need to know about working with your new robot colleagues.

Rule one: Robots don’t think like humans

Around the time Madaline was revolutionising long-distance phone calls, the Hungarian-British philosopher Michael Polanyi was thinking hard about human intelligence. Polanyi realised that while some skills, such as using accurate grammar, can be easily broken down into rules and explained to others, many cannot.

Humans can perform these so-called tacit abilities without ever being aware of how. In Polanyi’s words, “we know more than we can tell”. This can include practical abilities such as riding a bike and kneading dough, as well as higher-level tasks. And alas, if we don’t know the rules, we can’t teach them to a computer. This is the Polanyi paradox.

Instead of trying to reverse-engineer human intelligence, computer scientists worked their way around this problem by developing AI that thinks in an entirely different way – driven by data rather than rules.

“You might have thought that the way AI would work is that we would understand humans and then build AI exactly the same way,” says Rich Caruana, a Senior Researcher at Microsoft Research. “But it hasn’t worked that way.” He gives the example of planes, which were invented long before we had a detailed understanding of flight in birds and therefore have different aerodynamics. And yet, today we have planes that can go higher and faster than any animal.

Like Madaline, many AI agents are “neural networks”, which means they use mathematical models to learn by analysing vast quantities of data. For example, Facebook trained its facial recognition software, DeepFace, on a set of some four million photos. By looking for patterns in images labelled as the same person, it eventually learned to match faces correctly around 97% of the time.
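To give a flavour of the idea at toy scale – nothing like DeepFace’s four million photos, and using a standard scikit-learn example dataset rather than anything Facebook used – a small neural network can learn to label images purely from labelled examples:

```python
# Minimal sketch of pattern-learning from labelled data: a small neural
# network learns to recognise 8x8 digit images from examples alone.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                          # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)                       # learn patterns from labelled pixels
print("accuracy:", net.score(X_test, y_test))   # typically well above 0.9 on this toy set
```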

AI agents such as DeepFace are the rising stars of Silicon Valley, and they are already beating their creators at driving cars, voice recognition, translating text from one language to another and, of course, tagging photos. In the future they’re expected to infiltrate numerous fields, from healthcare to finance.

Rule two: Your new robot friends are not infallible. They make mistakes

But this data-driven approach means they can make spectacular blunders, such as that time a neural network concluded a 3D printed turtle was, in fact, a rifle. The programs can’t think conceptually, along the lines of “it has scales and a shell, so it could be a turtle”. Instead, they think in terms of patterns – in this case, visual patterns in pixels. Consequently, altering a single pixel in an image can tip the scales from a sensible answer to one that’s memorably weird.
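As a rough illustration of how fragile this pattern-matching can be – using a simple stand-in classifier, not the actual turtle-versus-rifle system – the sketch below searches for a single pixel whose value flips a toy model’s prediction:

```python
# Brute-force search for a single-pixel change that flips a toy classifier's
# decision: a crude stand-in for real adversarial attacks on image models.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
clf = LogisticRegression(max_iter=2000).fit(digits.data, digits.target)

def single_pixel_flip(image):
    """Return (pixel index, new value) if changing one pixel alters the label."""
    original = clf.predict([image])[0]
    for i in range(image.size):
        for value in (0.0, 16.0):               # pixel intensities are 0-16 here
            candidate = image.copy()
            candidate[i] = value
            if clf.predict([candidate])[0] != original:
                return i, value
    return None

for image in digits.data[:50]:
    flip = single_pixel_flip(image.copy())
    if flip is not None:
        print("changing pixel", flip[0], "to", flip[1], "flips the predicted label")
        break
else:
    print("no single-pixel flip found in this small sample")
```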

It also means they don’t have any common sense, which is crucial in the workplace and requires taking existing knowledge and applying it to new situations.

A classic example comes from DeepMind: back in 2015, its game-playing AI was set to play the arcade game Pong until it got good. As you’d expect, it was only a matter of hours before it was beating human players and even pioneering entirely new ways to win. But to master the near-identical game Breakout, the AI had to start from scratch.

Developing this kind of “transfer learning” has since become a large area of research; for instance, a single system called IMPALA shows positive knowledge transfer between 30 environments.

Rule three: Robots can’t explain why they’ve made a decision

The second problem with AI is a modern Polanyi paradox. Because we don’t fully understand how our own brains learn, we made AI think like statisticians instead. The irony is that now we have very little idea of what goes on inside AI minds either. So there are two sets of unknowns.

It’s usually called the ‘black box problem’, because though you know what data you fed in, and you see the results that come out, you don’t know how the box in front of you came to that conclusion. “So now we have two different kinds of intelligence that we don’t really understand,” says Caruana.

Neural networks don’t have language skills, so they can’t explain to you what they’re doing or why. And like all AI, they don’t have any common sense.

A few decades ago, Caruana applied a neural network to some medical data. It included things like symptoms and their outcomes, and the intention was to calculate each patient’s risk of dying on any given day, so that doctors could take preventative action. It seemed to work well, until one night a grad student at the University of Pittsburgh noticed something odd. He was crunching the same data with a simpler algorithm, so he could read its decision-making logic, line by line. One of these rules read along the lines of “asthma is good for you if you have pneumonia”.

“We asked the doctors and they said ‘oh that’s bad, you want to fix that’,” says Caruana. Asthma is a serious risk factor for developing pneumonia, since they both affect the lungs. They’ll never know for sure why the machine learnt this rule, but one theory is that when patients with a history of asthma begin to get pneumonia, they get to the doctor, fast. This may be artificially bumping up their survival rates.
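The “readable model” idea is easy to sketch. The example below uses entirely synthetic patient records – invented here to illustrate the mechanism, not Caruana’s data – in which asthmatic pneumonia patients survive more often simply because they were treated sooner. A simple decision tree trained on such data can end up encoding the same counterintuitive rule, and unlike a neural network, its logic can be printed and read:

```python
# Synthetic illustration only: if asthmatic patients in the training data
# happen to survive more often (because they were treated sooner), a readable
# model can surface the same "asthma lowers risk" rule Caruana describes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 5000
asthma = rng.integers(0, 2, n)                  # 1 = history of asthma
age = rng.integers(20, 90, n)
# Invented outcome: risk rises with age, but asthmatics were treated
# aggressively, so their recorded death rate is lower.
p_death = 0.05 + 0.004 * (age - 20) - 0.10 * asthma
died = rng.random(n) < np.clip(p_death, 0.01, 0.9)

tree = DecisionTreeClassifier(max_depth=2).fit(np.column_stack([asthma, age]), died)
print(export_text(tree, feature_names=["asthma", "age"]))  # human-readable rules
```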

With increasing interest in using AI for the public good, many industry experts are growing concerned. This year, new European Union regulations come into force that will give individuals the right to an explanation of the logic behind AI decisions. Meanwhile, the US military’s research arm, the Defense Advanced Research Projects Agency (Darpa), is investing $70 million in a new program for explainable AI.

“Recently there’s been an order of magnitude improvement in how accurate these systems can be,” says David Gunning, who is managing the project at Darpa. “But the price we’re paying for that is these systems are so opaque and so complex, we don’t know why, you know, it’s recommending a certain item or why it’s making a move in a game.”

Rule four: Robots may be biased

There’s growing concern that some algorithms may be concealing accidental biases, such as sexism or racism. For example, a software program tasked with advising whether a convicted criminal is likely to reoffend was recently revealed to falsely flag black defendants as future reoffenders nearly twice as often as white ones.

It’s all down to how the algorithms are trained. If the data they’re fed is watertight, their decisions are highly likely to be sound. But often there are human biases already embedded. One striking example is easily accessible on Google Translate. As a research scientist pointed out on the publishing platform Medium last year, if you translate “He is a nurse. She is a doctor,” into Hungarian, and then back into English, the algorithm will spit out the opposite sentences: “She is a nurse. He is a doctor.” Hungarian pronouns are gender-neutral, so the algorithm has to guess the genders on the way back – and it guesses according to the patterns it has seen.

The algorithm has been trained on text from about a trillion webpages. But all it can do is find patterns, such as that doctors are more likely to be male and nurses are more likely to be female.

Another way bias can sneak in is through weighting. Just like people, our AI co-workers will analyse data by “weighting” it – basically just deciding which parameters are more or less important. An algorithm may decide that a person’s postcode is relevant to their credit score – something that is already happening in the US – thereby discriminating against people from ethnic minorities, who tend to live in poorer neighbourhoods.
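A hedged sketch makes the mechanism concrete. In the made-up data below, repayment depends only on income, which the model never sees; because income correlates with an invented “postcode_group” feature, the model gives that feature a heavy weight, and two otherwise identical applicants get different scores purely because of their address:

```python
# Proxy bias through weighting, on synthetic data: "postcode_group" is an
# invented stand-in for any neighbourhood feature. The model never sees
# income, so the postcode ends up carrying income's predictive weight.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
postcode_group = rng.integers(0, 2, n)                 # two neighbourhoods, 0 and 1
income_k = 30 + 20 * postcode_group + rng.normal(0, 5, n)   # income in £1000s
years_employed = rng.integers(0, 20, n)
repaid = rng.random(n) < 1 / (1 + np.exp(-(income_k - 40) / 5))  # driven by income only

# Train only on features a lender can see; income is not among them.
X = np.column_stack([years_employed, postcode_group])
model = LogisticRegression().fit(X, repaid)
print("weights (years_employed, postcode_group):", model.coef_[0])

same_history = np.array([[5, 0], [5, 1]])              # identical except postcode
print("predicted repayment probability:", model.predict_proba(same_history)[:, 1])
```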

And this isn’t just about racism and sexism. There will also be biases that we would never have expected. The Nobel-prize winning economist Daniel Kahneman, who has spent a lifetime studying the irrational biases of the human mind, explains the problem well in an interview with the Freakonomics blog from 2011. “By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones.”

The robots are coming, and they’re going to change the future of work forever. But until they’re a bit more human-like, they’re going to need us by their sides. And incredibly, it seems like our silicon colleagues are going to make us look good.

The scientist and innovator Fernando Fischmann, founder of Crystal Lagoons, recommends this article.

Harvard Business Review
