
What is Edge AI? Machine Learning + IoT

2019-11-11 | By ShawnHymel

License: Attribution

The term “Edge AI” might be the new buzzword of 2019/2020, much like “Internet of Things” was in 2016/2017. To understand this growing trend, we need a solid definition of what constitutes “Artificial Intelligence on the Edge.”

Here is a video discussing this notion of Edge AI:


History of AI

With the invention of digital computers in the mid-1900s, researchers began to theorize that the functions of a human (or other animal) brain could be recreated in digital form. The name “Artificial Intelligence” came from John McCarthy’s proposal for a 1956 conference at which academics would discuss the possibility of programming a computer to mimic the higher functions of the human brain.

When asked to define artificial intelligence (AI), McCarthy would later describe it as:

"The science and engineering of making intelligent machines, especially intelligent computer programs."

While this definition is somewhat circular, McCarthy offered a clarification when pressed about the nature of intelligence:

"Intelligence is the computational part of the ability to achieve goals in the world."

From this, we can conclude that AI is really any computer program that makes decisions to achieve some goal. A simple digital thermostat might even be considered “AI” under this broad definition, even though it likely relies on a handful of simple if-then-else statements to control the air conditioner in your home.

Thermostat
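As a quick (and admittedly contrived) illustration of how even plain rule-based logic fits that broad definition, here is a minimal Python sketch of a thermostat’s decision code; the setpoint and the one-degree hysteresis band are made-up example values:

```python
# Toy rule-based thermostat: "AI" only in the broadest sense.
SETPOINT_F = 72.0     # arbitrary example target temperature
HYSTERESIS_F = 1.0    # arbitrary band to avoid rapid on/off cycling

def update_ac(current_temp_f, ac_is_on):
    """Decide whether the air conditioner should be running."""
    if current_temp_f > SETPOINT_F + HYSTERESIS_F:
        return True               # too warm: turn the AC on
    elif current_temp_f < SETPOINT_F - HYSTERESIS_F:
        return False              # cool enough: turn the AC off
    else:
        return ac_is_on           # inside the band: keep the current state

print(update_ac(75.2, ac_is_on=False))  # True
print(update_ac(70.1, ac_is_on=True))   # False
```

There is no learning here at all; the program simply follows the rules it was given, which is exactly why this broad definition of AI can feel unsatisfying.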

In 1959, AI researcher Arthur Samuel published a seminal paper describing how he trained a computer to play checkers better than he could: by having it play countless games against itself and learn from each iteration. In the paper, Samuel coined the term “machine learning,” which we understand to be any computer program that can learn from experience.

Many years later, Tom Mitchell provided a fantastically precise description of “machine learning” in his 1997 textbook:

"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

Your credit card company likely uses several advanced machine learning techniques to track your spending habits. When something out of the ordinary happens, the algorithm can notify customer support that potentially fraudulent activity has occurred on your account. To understand what counts as “fraudulent,” a machine learning model must first be trained on your actual habits so that it knows what to expect on a daily basis. In Mitchell’s terms, the task T is classifying each transaction, the experience E is your purchase history, and the performance measure P is how accurately fraud is flagged.
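As a toy illustration only (real fraud detection uses far more sophisticated models and many more features than the transaction amount), the sketch below “learns” a baseline from a made-up purchase history and flags anything far outside it; the three-standard-deviation threshold is an arbitrary choice:

```python
import numpy as np

# Hypothetical history of past transaction amounts (the "experience")
past_amounts = np.array([12.50, 8.99, 45.00, 23.10, 9.75, 31.40, 18.20])

mean = past_amounts.mean()
std = past_amounts.std()

def looks_fraudulent(amount, threshold=3.0):
    """Flag a transaction that sits far outside the learned spending habits."""
    z_score = abs(amount - mean) / std
    return z_score > threshold

print(looks_fraudulent(25.00))   # False: close to normal spending
print(looks_fraudulent(950.00))  # True: far outside the usual range
```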

Advanced machine learning models have been around since the 1960s, but for decades they proved difficult to implement because of the sheer amount of computation they require. Computers were just too slow!

In her 1986 paper, “Learning While Searching in Constraint-Satisfaction-Problems,” Rina Dechter coined the term “deep learning” to describe some of these more computationally complex models. From this, we arrive at a useful hierarchy: deep learning is a subset of machine learning, and machine learning is a subset of artificial intelligence.

AI Machine Learning Deep Learning

All deep learning algorithms can be considered part of machine learning and artificial intelligence, but not all AI algorithms are machine learning (or deep learning).

In the early 2010s, Google formed a research team known as “Google Brain” to make deep learning more accessible. In 2012, the team made headlines by demonstrating the ability of a computer (or, rather, a cluster of machines with some 16,000 processor cores) to accurately determine whether an image contained a cat.

Around the same time, graphics cards began seeing a rise in popularity as research tools (in addition to their normal use as gaming accessories).

Graphics Card

Researchers discovered that many of these machine learning algorithms could be “parallelized” to run more efficiently on graphics cards, as many such algorithms relied on matrix operations. A graphics card is very efficient at performing one math operation on thousands or millions of operands at the same time, which greatly decreased the computation time required by some of these algorithms.
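To get a rough feel for why that matters, here is a small NumPy comparison run on a CPU; the same data-parallel principle, applied at a much larger scale, is what graphics cards exploit (the array size is arbitrary, and exact timings will vary from machine to machine):

```python
import time
import numpy as np

x = np.random.rand(1_000_000)  # one million made-up values

# Element-by-element loop: one multiplication at a time
start = time.time()
squared_loop = [value * value for value in x]
loop_time = time.time() - start

# Vectorized: the same work expressed as a single array operation
start = time.time()
squared_vec = x * x
vec_time = time.time() - start

print(f"Loop:       {loop_time:.4f} s")
print(f"Vectorized: {vec_time:.4f} s")
```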

Because of these advances in machine learning, companies began to realize that they could employ machine learning to help business development. That is, certain algorithms could be used to predict what consumers would buy next or where they might browse on a website. This type of data proved valuable in creating engaging content and ads that garnered users’ attention.

As a result, companies started to collect heaps of data from consumers, leading to the rise of the 2012 buzzword: “Big Data.”

Server room

Moving, storing, and analyzing all of this data became a new problem in its own right, which led to yet more companies offering services to help manage “Big Data.”

While that gives a very brief overview of how far machine learning has come, we still have not touched on the idea of “Edge AI.” We need to examine the Internet of Things (IoT) to see how machine learning might work well with data collected from IoT devices.

What is IoT?

In 1982, several grad students in Carnegie-Mellon’s Computer Science Department decided to connect a Coca-Cola vending machine to the Internet. The simple interface allowed users to check the stock and temperature of the machine, but it was mostly a project done for fun.

IOT graphic

The concept of connecting non-browser and non-backbone equipment to the Internet grew from there. In 2016, headlines promised us that IoT would revolutionize our lives. The futuristic smart home never quite materialized for most of us, but IoT is still gaining traction in industry.

In this 2017 study by IDC, we see that the biggest investment in IoT is in the manufacturing and asset management sectors. While things like smart thermostats (Nest), smart watches (Apple Watch), and smart speakers (Amazon Echo) are readily apparent to most consumers, we generally don’t see the innovations occurring on the production side. IoT sensors can help increase efficiency by measuring throughput or notifying employees of problems before they occur, potentially saving thousands of dollars in costly equipment repairs.

Enter Edge AI

IOT diagram

Most IoT configurations look something like the image above. Sensors or devices are connected directly to the Internet through a router, providing raw data to a backend server. Machine learning algorithms can run on those servers to make predictions that interest managers, such as spotting equipment problems before they occur or forecasting throughput.

However, things get tricky when too many devices begin to clog the network. Perhaps there is too much traffic on the local WiFi, or too much data is being piped to the remote server (and you don’t want to pay for that bandwidth). To help alleviate some of these issues, we can run less complex machine learning algorithms on a local server or even on the devices themselves.

Edge AI diagram

This is known as “Edge AI”: running machine learning algorithms on locally owned computers or embedded systems instead of on remote servers. While the models might not be as powerful, we can curate the data locally before sending it off to a remote location for further analysis.
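Here is a minimal sketch of that pattern, assuming a hypothetical pre-trained TensorFlow Lite model file (anomaly_model.tflite) along with placeholder read_sensor() and upload() functions; the point is simply that inference happens on the device, and only flagged readings ever leave it:

```python
import random
import time

import numpy as np
import tflite_runtime.interpreter as tflite  # pip install tflite-runtime

# Hypothetical pre-trained anomaly model stored on the edge device
interpreter = tflite.Interpreter(model_path="anomaly_model.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

def read_sensor():
    """Placeholder for a real sensor driver."""
    return random.uniform(20.0, 30.0)

def upload(reading):
    """Placeholder for sending curated data to a backend server."""
    print(f"Uploading unusual reading: {reading:.2f}")

def is_anomaly(reading):
    """Run inference locally instead of streaming raw data to the cloud."""
    sample = np.array([[reading]], dtype=np.float32)  # shape must match the model
    interpreter.set_tensor(input_index, sample)
    interpreter.invoke()
    score = float(interpreter.get_tensor(output_index).squeeze())
    return score > 0.5  # arbitrary example threshold

while True:
    value = read_sensor()
    if is_anomaly(value):  # only flagged readings leave the device
        upload(value)
    time.sleep(1.0)
```

On something like a Raspberry Pi or another single-board computer sitting next to the equipment, a loop like this can filter a constant stream of sensor data down to the handful of readings actually worth sending upstream.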

One prime example of Edge AI is the common smart speaker. A machine learning model trained to recognize a wake word or phrase (such as “Alexa” or “OK Google”) is stored locally on the speaker. Whenever the smart speaker hears the wake word, it begins “listening.” In other words, it starts streaming audio data to a remote server, where the full request can be processed.

Amazon Echo Dot
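In rough, sketch-level Python (every function here is a stand-in invented for illustration; real smart speakers use their own audio pipelines and wake-word models), the control flow looks something like this:

```python
import random

def record_audio_frame():
    """Stand-in for grabbing a short buffer of samples from the microphone."""
    return [random.random() for _ in range(160)]

def detect_wake_word(frame):
    """Stand-in for a small wake-word model stored and run on the device."""
    return random.random() > 0.99  # pretend detection, for illustration only

def stream_to_server(frame):
    """Stand-in for sending audio to the cloud for full speech recognition."""
    print("Streaming audio frame to the cloud...")

# The network connection is gated behind the local wake-word model
while True:
    frame = record_audio_frame()
    if detect_wake_word(frame):        # runs entirely on the speaker
        for _ in range(50):            # then stream a short window of audio
            stream_to_server(record_audio_frame())
```

Everything before the wake word stays on the device; only the follow-up request is sent to the remote server.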

The implications of this use of machine learning have not yet been fully realized. However, expect to see some wild new computing devices appear in the future. Instead of teaching computers what to do, we will begin teaching computers how to learn.
