Artificial Intelligence Primer

You’re probably already experiencing the benefits of artificial intelligence, although you may not be aware of it.

Image recognition, for example, is a common form of AI used on social platforms to help identify and tag friends. If you’re lucky enough to have an iPhone X, it has a neural engine built into its A11 processor; and if you’ve ever called out to Google, Siri or Alexa, then you’ve engaged with AI as it works to understand your voice and interpret your commands.

Other AI tools include the recommendation features used in products like Netflix and Spotify, as well as those well hidden in the background, e.g. the algorithms that protect your credit card against fraudulent use. But what is it that separates artificial intelligence - or more specifically, machine learning - from more traditional computational processes, and why is it such a critical technological advance? Let’s look at the second part of the question first.

Throughout human history technology has propelled us forward, from harnessing fire, to developing the wheel, and onward to the combustion engine; and the most powerful of these technologies are the ones described as general purpose. These are the technologies that multiply out into further waves of innovation, e.g. the combustion engine is used in planes, trains and automobiles - but it has also given rise to suburbia, shopping malls, and war in the Middle East. That last one may be pushing it a bit far, but the point is, these technologies have multiplying effects - they permeate our lives deeply, at any number of touch-points. Artificial intelligence and blockchain are disruptive, general purpose technologies that are likely to drive continued innovation across a wide swathe of our economy. They are game-changers.

But what makes machine learning so different to the computer systems we’ve been using for the past few decades? To answer that, it’s important to understand that machine learning requires a completely different approach to software development. Traditional software development attempts to embed prescriptive knowledge into a system in order to deliver a particular result. This has obviously worked pretty well for some time, but it has some fairly major weaknesses. Imagine trying to teach someone how to swing a cricket ball, or ride a bike, just by writing down the instructions. This difficulty is known as Polanyi’s Paradox: we know more than we can tell, i.e. many of the tasks we perform rely on tacit, intuitive knowledge that is difficult to codify and automate. Currently at least, there are two ways of solving this problem: (i) environmental control, i.e. making the environment easier for computers to navigate, and (ii) machine learning.
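To make the traditional, prescriptive approach concrete, here is a minimal sketch in Python. The spam check and its rules are entirely hypothetical - the point is only that the developer has to write every decision out by hand:

```python
# The prescriptive approach: the decision logic is a fixed set of
# hand-written rules, embedded directly in the code.

def looks_like_spam(subject: str) -> bool:
    """Return True if the subject line trips any hand-written rule."""
    rules = [
        lambda s: "winner" in s.lower(),      # suspicious keyword
        lambda s: s.isupper() and len(s) > 10,  # ALL-CAPS shouting
        lambda s: s.count("!") >= 3,          # excessive punctuation
    ]
    return any(rule(subject) for rule in rules)

print(looks_like_spam("You are a WINNER!!!"))          # True
print(looks_like_spam("Minutes from Tuesday's meeting"))  # False
```

Every new kind of spam means another rule, and for tacit skills - the cricket ball, the bicycle - we can’t articulate the rules at all.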

Modern machine learning systems like DeepMind’s AlphaGo use advanced neural networks to learn from examples, not a pre-programmed set of rules. Instead of attempting to map out every single game possibility - a brute-force type of AI - the neural network behind AlphaGo used a supervised learning protocol, studying large numbers of games played by humans against each other. The more data the system accumulates, the more accurate it becomes. In a strange bit of irony, the most powerful deep neural networks are proving to be surprisingly efficient at learning - more so than expected. The puzzling success of deep learning has even its creators scratching their heads.
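The learning-from-examples idea can be sketched in a few lines of Python. This toy uses made-up data and a deliberately simple nearest-neighbour “model” rather than a neural network, but the shape is the same: no rules are written down, and the prediction comes entirely from labelled examples:

```python
# Supervised learning in miniature: instead of coding rules, we supply
# labelled examples and let the model generalise from them.
# (1-nearest-neighbour on hypothetical 2D data, purely for illustration.)

def train(examples):
    """'Training' here simply memorises the labelled examples."""
    return list(examples)

def predict(model, point):
    """Label a new point with the label of its closest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: sq_dist(ex[0], point))
    return nearest[1]

# Labelled examples: (features, label)
examples = [((1, 1), "cat"), ((1, 2), "cat"),
            ((8, 9), "dog"), ((9, 8), "dog")]
model = train(examples)

print(predict(model, (2, 2)))  # near the "cat" cluster -> "cat"
print(predict(model, (9, 9)))  # near the "dog" cluster -> "dog"
```

Feed the system more labelled examples and its boundaries sharpen - which is exactly why accuracy tends to improve as data accumulates.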

Whether or not AI poses some kind of existential threat as Elon Musk seems to believe, there’s no putting the genie back in the bottle at this point. AI is going to have a significant impact on the economy, as virtually every industry moves to take advantage of machine learning. It’s a brave new world, limited largely by imagination - and probably corporate bureaucracy.

About the author

Rowan Schaaf

Rowan is the co-founder and CEO of Pattern.

How can we help?

If you have a problem that needs software to solve, don't hesitate to get in touch.