People often refer to deep learning and machine learning interchangeably, but they’re not the same. Both fall under the category of artificial intelligence (AI), a broad term that means making computers behave in ways that mimic human intelligence. Machine learning (ML) is a type of AI, and deep learning is a subset of ML.
In this article, we’ll dive into the definitions of ML and deep learning and how they can be applied in cellular IoT.
What Is Machine Learning?

Machine learning (ML) can be explained as a four-step process. For example, imagine you want a computer to determine whether there’s a car pictured in a photograph. The first step, input, involves feeding as many photos as possible (some of cars and some of other things) into the algorithm.
The second step, feature extraction, is performed either by a human (in supervised ML) or by the algorithm itself (in unsupervised ML). This process involves identifying specific traits across the many input photos. It lets the system know that if it identifies wheels, a windshield, and windshield wipers, it’s reasonable to determine that the photo contains a car.
The third step, classification, takes place within the system. In this step, the computer system applies its new understanding of the features that make up a photo of a car to all of the inputted photos.
The fourth and final step, output, comes in the form of two sorted categories of photos — those that have cars, and those that don’t.
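The four steps above can be sketched in code. This is a toy illustration, not a real vision system: the “photos” here are hand-labeled dictionaries standing in for pixel data, and the classifier is just the wheels-windshield-wipers rule from the example.

```python
def extract_features(photo):
    # Step 2: feature extraction, pulling out the traits we care about.
    return {trait: photo.get(trait, False)
            for trait in ("wheels", "windshield", "wipers")}

def classify(features):
    # Step 3: classification. A photo with all three traits is a car.
    return all(features.values())

def sort_photos(photos):
    # Steps 1 and 4: take the full input set, return two sorted groups.
    cars, others = [], []
    for photo in photos:
        (cars if classify(extract_features(photo)) else others).append(photo)
    return cars, others

photos = [
    {"name": "sedan", "wheels": True, "windshield": True, "wipers": True},
    {"name": "palm tree"},
    {"name": "motorcycle", "wheels": True},
]
cars, others = sort_photos(photos)
print([p["name"] for p in cars])    # ['sedan']
print([p["name"] for p in others])  # ['palm tree', 'motorcycle']
```

A real system would, of course, learn which features matter from the data rather than being handed the rule.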
Now, let’s look at a few types of machine learning:
Supervised Machine Learning

In this kind of ML, a smaller subset of the full dataset is carefully chosen and used to teach the algorithm. So, using the car photo example, the programmer might select a 100-photo subset out of the 10,000 photos. The collection of pictures within the subset should have the same percentages of each given feature as the full dataset, giving the algorithm a representative sample to work from.
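The idea of a representative subset can be sketched as a simple stratified sample. This is an illustrative sketch only; the “photos” are placeholder strings, and the proportions are made up for the example.

```python
import random
from collections import defaultdict

def representative_subset(photos, labels, size):
    """Pick `size` photos whose label proportions match the full
    dataset's (a simple form of stratified sampling)."""
    by_label = defaultdict(list)
    for photo, label in zip(photos, labels):
        by_label[label].append(photo)
    subset = []
    for group in by_label.values():
        # Each label contributes in proportion to its share of the data.
        share = round(size * len(group) / len(photos))
        subset.extend(random.sample(group, share))
    return subset

# 10,000 photos, 30% of them cars: a 100-photo subset keeps about 30 cars.
photos = [f"photo_{i}" for i in range(10_000)]
labels = ["car" if i < 3_000 else "other" for i in range(10_000)]
subset = representative_subset(photos, labels, 100)
print(len(subset))  # 100
```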
Unsupervised Machine Learning

In this type of ML, there is no human-selected subset of photos; instead, the entire dataset is given to the algorithm to categorize on its own. The advantage here is that much larger datasets can be used, but that comes at the cost of the task-focused aspects of supervised learning. If 10,000 photos are given to the algorithm, it might organize them based on predominant colors, or on whether or not there’s a face in the photo. But since the input is not labeled, no one will know what categorization process the machine is using: it sorts the data and puts similar photos together, but how exactly they are alike remains a mystery.
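One common way an algorithm groups unlabeled data is clustering. The sketch below runs a tiny one-dimensional k-means, a standard clustering method, over made-up “predominant color” values; it illustrates the idea, not the method any particular system uses.

```python
import random

def kmeans(points, k, steps=20):
    """Tiny one-dimensional k-means: group unlabeled values purely by
    how close they are to each other."""
    centroids = random.sample(points, k)
    for _ in range(steps):
        # Assign each point to its nearest centroid...
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # ...then move each centroid to the middle of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Each "photo" reduced to one unlabeled number (say, average redness).
redness = [0.1, 0.12, 0.15, 0.8, 0.85, 0.9]
clusters = kmeans(redness, k=2)
# Two groups emerge, but nothing tells us they mean "red" vs "not red":
# the categories the machine settled on are unlabeled, as in the text.
```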
Reinforcement Learning

This type of learning is based on human psychology and can best be compared to Pavlov’s dogs. Whenever the algorithm produces a desired outcome, a human interpreter provides it with a reward. If the outcome is not desirable, the algorithm repeats the process until it achieves success.
Using the car photo example, let’s say the user wants a picture of a red convertible from that set of 10,000 photos. The algorithm shows a picture of a palm tree, so the user gives no reward and asks again. The machine then shows a picture of a red SUV. The user gives a small reward, but asks the computer to try again. Next, the machine shows a red convertible and receives a full reward. In this way, the algorithm learns that there are degrees of success (after all, it got a small reward for the red SUV even though that answer wasn’t quite accurate).
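The reward loop above can be sketched as a simple trial-and-feedback process. The reward values and the incremental-average update below are illustrative assumptions, not a full reinforcement learning algorithm:

```python
import random

random.seed(0)  # deterministic run for the example

# Hypothetical reward scheme standing in for the human interpreter:
# full reward for the red convertible, partial for the red SUV.
REWARDS = {"palm tree": 0.0, "red SUV": 0.5, "red convertible": 1.0}

def train(choices, episodes=300, epsilon=0.2):
    """Tiny reinforcement loop: mostly exploit the best-known choice,
    sometimes explore, and track each choice's average reward."""
    values = {c: 0.0 for c in choices}
    counts = {c: 0 for c in choices}
    for _ in range(episodes):
        if random.random() < epsilon:            # explore a random pick
            pick = random.choice(choices)
        else:                                    # exploit the best so far
            pick = max(values, key=values.get)
        reward = REWARDS[pick]                   # the interpreter grades it
        counts[pick] += 1
        values[pick] += (reward - values[pick]) / counts[pick]
    return values

values = train(list(REWARDS))
print(max(values, key=values.get))
```

After enough episodes, the algorithm’s average-reward estimates reflect the degrees of success described above, and it settles on the fully rewarded answer.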
What Is Deep Learning?

Deep learning is a type of machine learning in which the algorithms attempt to mimic the way the human brain builds neural pathways and learns new things. It’s essentially a computer learning to learn the way a human brain does.
Now, let’s look at the three most common types of deep learning algorithms:
Convolutional Neural Networks (CNNs)
These algorithms are used mainly for image processing and object detection. They contain several layers that process and extract features from the data. The convolution layer filters the data, and a rectified linear unit (ReLU) activation keeps only the positive responses, producing a map of the features that passed through the filter. That feature map is fed into the pooling layer, where a downsampling operation reduces its size; in other words, it discards the details that aren’t relevant. Finally, the data enters the fully connected layer, where it’s classified into an output layer as the machine makes a final choice about the outcome.
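The layer sequence described above can be sketched in plain Python on a tiny image. Real CNNs learn their filters from data and use optimized libraries; here the 2x2 edge filter is hand-picked for illustration.

```python
def convolve(image, kernel):
    """Convolution layer: slide a 2x2 filter across the image."""
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(2) for j in range(2))
             for c in range(len(image[0]) - 1)]
            for r in range(len(image) - 1)]

def relu(fmap):
    """Rectified linear unit: keep only the positive responses."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap):
    """Pooling layer: downsample by keeping the strongest value
    in each 2x2 window, dropping irrelevant detail."""
    return [[max(fmap[r][c], fmap[r][c + 1],
                 fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]) - 1, 2)]
            for r in range(0, len(fmap) - 1, 2)]

# A 5x5 "image" with a bright vertical stripe.
image = [[0, 0, 1, 1, 0]] * 5
edge_kernel = [[1, -1], [1, -1]]   # responds to vertical edges
fmap = relu(convolve(image, edge_kernel))
pooled = max_pool(fmap)
print(pooled)  # [[0, 2], [0, 2]]
# A fully connected layer would flatten `pooled` and score each class.
```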
Long Short Term Memory Networks (LSTMs)
LSTMs are a subtype of Recurrent Neural Networks (RNNs) that can learn and retain information over long stretches of input. They’re primarily used for sequential data such as time series, speech, and text, and they work by forgetting irrelevant parts of earlier input and updating their cell state as new information arrives. In effect, LSTMs keep long-range context from being overwritten by whatever came most recently.
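The gating idea can be sketched with a single scalar LSTM cell. The weights below are fixed toy values (a real LSTM learns separate weights and biases for each gate), but the structure (a forget gate, a write gate, a cell-state update, and an output gate) follows the standard cell.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def lstm_step(cell_state, hidden, x):
    """One scalar LSTM step with fixed toy weights: gates decide what
    to forget, what to write into the cell state, and what to output."""
    z = hidden + x                      # combined input to every gate
    forget = sigmoid(0.5 * z)           # how much old state to keep
    write = sigmoid(1.0 * z)            # how much new info to let in
    candidate = math.tanh(1.0 * z)      # the new info itself
    cell_state = forget * cell_state + write * candidate
    output = sigmoid(1.5 * z) * math.tanh(cell_state)
    return cell_state, output

# Feed a short sequence; the cell state carries information forward
# while the forget gate decides how much of it survives each step.
cell, hidden = 0.0, 0.0
for x in [1.0, 0.5, -2.0]:
    cell, hidden = lstm_step(cell, hidden, x)
```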
Generative Adversarial Networks (GANs)
To best understand how GANs work, let’s use another example: imagine you want to create a system that can generate (not just identify) an image of a face. A GAN pairs two networks. The first, the generator, starts from random pixels and processes them into an image. That image is then passed to a second network, known as the discriminator, which has been trained on a set of real facial images and judges whether or not each image it sees contains a face. The generator uses that feedback to improve, and the two networks push each other toward more convincing output. It’s a bit like learning to paint the Mona Lisa by splattering paint against a wall: a slow process, but after millions of tries, you’ve recreated the Mona Lisa.
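The generate-and-judge loop can be sketched in one dimension. Here “faces” are just numbers near 10, the discriminator is a fixed closeness score rather than a trained network, and the generator improves by hill climbing on that score. A real GAN trains both sides with gradients, but the adversarial feedback loop has the same shape.

```python
import random

random.seed(0)  # deterministic run for the example

# "Real faces" are just numbers near 10 in this one-dimensional toy.
REAL_FACES = [9.8, 10.1, 10.0, 9.9]

def realness(sample):
    """Stand-in discriminator: scores 1.0 for samples that look like
    the real data, falling toward 0 as they drift away."""
    mean = sum(REAL_FACES) / len(REAL_FACES)
    return 1 / (1 + abs(sample - mean))

def train_generator(steps=5_000):
    """The generator starts from random noise and keeps any tweak
    that earns a better score from the discriminator."""
    best = random.uniform(-100, 100)
    for _ in range(steps):
        candidate = best + random.uniform(-1, 1)
        if realness(candidate) > realness(best):
            best = candidate
    return best

sample = train_generator()
# After thousands of tries, the generator's output scores as nearly real.
print(round(realness(sample), 2))
```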
How Is Deep Learning Different from Machine Learning?

While the two share many of the same ideas, deep learning differs from ML in two key areas:
1. Use of Neural Networks
ML relies on simpler, explicitly defined identification processes, while deep learning attempts to emulate how the human brain learns. Deep learning algorithms are a more complex and evolved version of ML algorithms, using the multi-layered structure of artificial neural networks designed to function like the human mind.
2. Ability to Generate Content
ML is primarily focused on learning about and understanding material that is fed into the system by users, while deep learning (through GANs) goes beyond identifying patterns and responding correctly to questions — it can also learn to create content or complete tasks on its own. For example, using a deep learning algorithm, a robot can learn how to do a task simply by observing a human completing it.
Machine Learning Applications in Cellular IoT

ML is mainly focused on recognizing and categorizing information based on defined features. Using this technology, ML can assess a given situation, make pre-determined adjustments, or offer suggestions for improvement. ML is often at work in IoT use cases, particularly those that generate large datasets that must be analyzed to gather insights, a process known as big data analytics. Let’s look at a few current and potential examples of how ML contributes to cellular IoT applications:
Smart Building Energy Management

So-called smart buildings often include energy management systems that monitor usage of HVAC, elevators, security, and lighting systems with IoT sensors. With the addition of ML algorithms, these systems can run data analytics to make accurate predictions about energy usage and suggest ways to reduce utility costs and carbon emissions.
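The prediction step can be sketched with a simple trend fit. The daily readings below are made-up numbers, and a real energy-management system would use far richer models than a straight line:

```python
def fit_line(readings):
    """Ordinary least squares for y = a + b*x over x = 0, 1, 2, ..."""
    n = len(readings)
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    # Slope: covariance of (x, y) divided by variance of x.
    b = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
         / sum((x - mean_x) ** 2 for x in range(n)))
    a = mean_y - b * mean_x
    return a, b

# A week of (made-up) daily energy readings in kWh, trending upward.
kwh = [120, 122, 125, 127, 131, 133, 136]
a, b = fit_line(kwh)
tomorrow = a + b * len(kwh)   # extrapolate one day ahead
print(round(tomorrow, 1))  # 138.6
```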
Smart Manufacturing

Manufacturing equipment that uses ML is able to learn the consumption rate of materials and predict the exact amounts of raw materials needed in a given period of time — and identify slight variations in consumption that might signal problems in the process. These insights can bring additional quality assurance and help minimize waste.
Smart Parking

Smart cities often use IoT sensors to detect empty parking spaces and notify drivers when a garage is full. When paired with ML, these applications can detect patterns and make predictions about when certain garages will fill up, helping cities to minimize congestion, make the most of the parking resources they already have, and make better predictions about when (and where) they will need to create more parking options.
Self-Adjusting Car Settings
When a car has multiple drivers, the seat adjustments may be changed multiple times a day. Using ML, IoT sensors, and facial (or thumbprint) recognition, a vehicle could automatically adjust to the driver’s preferences the moment they sit down and the system recognizes them. ML could also be used to adjust seat belts and hold drivers and passengers in place during sharp turns or bumps. Apple is working on a self-driving car that promises to include some of these features.
Connected Fitness

Connected fitness is an exploding realm, with companies like Echelon offering smart mirrors and exercise equipment that connect you to live instructors for personalized workouts. But ML can provide similar benefits (likely at a lower cost) by observing a user’s health and effort and calculating optimal workouts based on those criteria. For example, an exercise bicycle equipped with a heart rate monitor and oximeter might adjust each user’s workout difficulty based on their health status.
Connected Medical Devices
The predictive capabilities of machine learning are incredibly useful in complex systems — particularly in IoT devices that attempt to mimic the human body. Insulin pumps, for example, use machine learning to identify blood glucose patterns and automatically regulate and administer doses of essential medication.
Deep Learning Applications in IoT

While deep learning is a subset of machine learning, its key distinction is the ability to generate original content rather than simply organize and identify patterns. This ability makes possible a number of IoT applications. Let’s consider a few:
Self-Driving Cars

The most significant obstacle to the success of self-driving cars is the potential for unknown environmental variables such as wet roads, a swerving semi-truck, or a puppy running across the road. Deep learning allows the car to come up with solutions to problems on its own instead of depending on a set of pre-programmed responses. Using deep learning, the car’s onboard computer can come up with multiple detours if a roadway is blocked — and make a quick decision about which one to take.
Network Optimization

On a larger scale, the growth of IoT presents a challenge to existing networks because of their limited bandwidth (although that may be changing soon, with the advent of 5G and massive IoT). As IoT devices proliferate, network demand rises. Deep learning has the ability to characterize and categorize data, allowing far more efficient data consumption in any IoT device — and taking some of the strain off service providers.
Musical Creation

Deep learning has the ability to create, and has been used to write music. For example, Google Magenta is an open source project with around 20 deep learning models that can be used for musical creation — generating piano parts, creating chord accompaniments, filling in percussion, and more. If some of these models were applied at the endpoint level, a deep learning-enabled keyboard would be able to create backup chords to any melody the user played, serving as accompaniment when other human players are not available.
Cellular IoT with Hologram
For IoT applications of ML and deep learning to function well, IoT devices need a dependable source of connectivity. Hologram’s IoT SIM card offers seamless, global coverage for IoT devices with access to LTE/4G/3G/2G technologies. With our Hyper eUICC-enabled SIMs, you’ll gain access to new connectivity partnerships without any additional carrier negotiations, integrations, or hardware swaps.