You’ve almost certainly heard the phrase “machine learning” (ML) by now. If you have a sense of what it means, but you’re still not sure, the following machine learning primer is just what you need to get to the bottom of it.
Machine learning is an incredibly powerful tool that’s driving economic growth and industrial process improvements, and making our computers orders of magnitude more intelligent, productive and useful.
You can think of machine learning as the process that engineers use to “train” an artificial intelligence (AI). As the ML process matures and engineers work with larger, more comprehensive data sets, the AI becomes “smarter” and better able to make meaningful predictions about the future.
What Is Machine Learning?
There’s a simple way to think about machine learning: it allows computers to make determinations and take actions without human beings programming each potential action/reaction sequence into the computer beforehand. ML involves “using algorithms to parse data, learn from it and then make a determination or prediction about something in the world.”
In other words, machine learning is a process where a computer receives a set of data to study and basic instructions on how to interpret it (algorithms). These algorithms allow the computer to pick up on patterns in large data sets that humans might miss. Large enough collections of current and historical data allow ever-more-precise predictions for the future.
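To make that concrete, here’s a minimal sketch of the “feed in data, learn the pattern, predict” loop, assuming Python with the scikit-learn library installed. The tiny weather-style data set and the choice of a decision tree are purely illustrative.

```python
# A minimal, illustrative sketch of the machine learning loop: give an
# algorithm historical data, let it learn the pattern, then ask for a prediction.
from sklearn.tree import DecisionTreeClassifier

# Made-up historical observations: [hours of daylight, cloud cover %]
observations = [[14, 80], [13, 90], [15, 20], [12, 95], [16, 10], [11, 85]]
outcomes = [1, 1, 0, 1, 0, 1]  # 1 = it rained, 0 = it didn't

model = DecisionTreeClassifier().fit(observations, outcomes)  # "learn from it"
print(model.predict([[15, 75]]))  # make a prediction about an unseen day
```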
Predictive analytics is especially important in the business community. It empowers organizations to engage in more effective and longer-term planning, anticipate customer demand, perform proactive maintenance and perform a host of other tasks that boost profits and competitiveness.
Experts predict that the global market for predictive analytics technologies will reach $34.52 billion by 2030.
The inception of machine learning arguably arrived in 1763 with the posthumous publication of Thomas Bayes’ now-famous theorem. Bayes’ Theorem gives the probability of a future event based on historical data about that event. His findings formed the basis of “Bayesian inference” and “Bayesian machine learning,” which now help computers all over the world essentially “learn from experience.”
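As a hedged illustration (the numbers below are invented for the example), here’s how Bayes’ Theorem, P(A|B) = P(B|A) × P(A) / P(B), updates a belief using historical data, in the spirit of a spam filter:

```python
# A toy application of Bayes' Theorem: "Given that a message contains the
# word 'sale', how likely is it to be spam?" All probabilities are assumed.
p_spam = 0.20                 # prior: 20% of all mail is spam
p_word_given_spam = 0.50      # 'sale' appears in half of spam messages
p_word_given_ham = 0.05       # and in 5% of legitimate messages

# Total probability of seeing the word at all (law of total probability).
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: the updated belief after seeing the evidence.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | 'sale') = {p_spam_given_word:.2f}")  # about 0.71
```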
What Are the Types of Machine Learning?
Within machine learning are three distinct variations of the technology. Each one works differently and has a unique set of applications and advantages. These three types are:
1. Supervised Learning
This methodology uses collections of pre-labeled data to “train” an AI. The computer uses this known data to learn how to identify similar instances in the future. Supervised learning takes the form of either classification or regression, both sketched in code after this list:
- Classification: Email spam filters are an example of classification. The computer sorts incoming mail into classes based on markers the messages share with the labeled training data.
- Regression: Weather forecasts are a good example of regression. Here the computer uses known, labeled historical data to predict a continuous value, such as tomorrow’s temperature.
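Here is the sketch referenced above: a minimal, illustrative take on classification and regression in Python, assuming scikit-learn and NumPy are installed. The iris flower data set and the synthetic temperature data are stand-ins chosen for the demo, not examples from any production system.

```python
# A minimal sketch of supervised learning with scikit-learn.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

# Classification: learn to assign labels from pre-labeled examples.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Regression: predict a continuous value (here, a noisy synthetic "temperature").
rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, size=(200, 1))                 # time of day
temps = 15 + 0.5 * hours[:, 0] + rng.normal(0, 1, 200)    # made-up trend + noise
reg = LinearRegression().fit(hours, temps)
print("predicted temperature at hour 12:", reg.predict([[12.0]])[0])
```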
2. Unsupervised Learning
On the other hand, unsupervised learning uses unlabeled data (data with no predefined tags or categories) to train an AI. Most data in the real world is not labeled, which makes this type of machine learning especially important. Again, there are two types, sketched in code after this list:
- Clustering: This is where objects are grouped based on behaviors or other features. For instance, a marketer might use clustering to reveal and target a group of customers with similar characteristics (age, marital status, financial standing).
- Dimensionality reduction: With this type of learning, the computer reduces the number of variables in a data set, discarding redundant or uninformative ones while preserving the patterns that matter.
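Here is the sketch referenced above: a minimal, illustrative example of clustering and dimensionality reduction, again assuming Python with scikit-learn and NumPy installed. The “customer” records are generated at random purely for the demo.

```python
# A minimal sketch of unsupervised learning: no labels, just raw records.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Synthetic customer records: age, income, number of purchases.
customers = np.column_stack([
    rng.normal(40, 12, 300),            # age
    rng.normal(55_000, 15_000, 300),    # income
    rng.poisson(8, 300),                # purchases
])

# Clustering: group similar customers without being told what the groups are.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print("cluster sizes:", np.bincount(kmeans.labels_))

# Dimensionality reduction: compress three features into two components
# that retain most of the variation, making the data easier to inspect.
reduced = PCA(n_components=2).fit_transform(customers)
print("reduced shape:", reduced.shape)
```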
3. Reinforcement Learning
Unlike the first two types, reinforcement learning isn’t immediately concerned with producing “correct” results. Instead, the computer improves its behavior over time through trial, error and feedback. The most illustrative example would be training a computer to play chess against a human.
As the computer studies the “consequences” of poor performance, such as losing pieces and eventually the match, it learns which actions to repeat and which to avoid in the future.
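To show the idea in miniature, here is a toy reinforcement learning sketch: tabular Q-learning on a five-cell corridor rather than a chessboard, in Python with NumPy (assumed installed). The rewards and learning parameters are illustrative choices, not values from any real system.

```python
# Toy Q-learning: an agent learns, from rewards and penalties alone,
# that walking right along a short corridor reaches the goal fastest.
import numpy as np

n_states, n_actions = 5, 2            # cells 0..4; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # the agent's learned value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.2 # learning rate, discount, exploration

rng = np.random.default_rng(0)
for episode in range(500):
    state = 0
    while state != n_states - 1:      # reaching the last cell ends the episode
        # Explore occasionally; otherwise exploit what has been learned so far.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(Q[state].argmax())
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else -0.01  # the "consequences"
        # Nudge the estimate toward reward plus discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("learned policy (0 = left, 1 = right):", Q.argmax(axis=1))
```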
Examples of Machine Learning in Everyday Life
Think machine learning only happens behind the scenes? The truth is, most of us now make use of machine learning on a practically daily basis:
- Recommendations on Netflix or Prime Video
- Digital assistants like Alexa and Siri
- Predictive maintenance for industrial equipment
- Email spam filters
- Marketing automation to segment audience “classes”
- Self-adjusting “learning” thermostats
- Voice and handwriting recognition
- Image manipulation and processing tools
It’s now even possible to use machine learning and artificial intelligence to automate the restoration and upscaling of older films and TV shows.
Deep Learning, Neural Networks and Beyond
Probably the most important leap forward for machine learning was the 2006 introduction of deep learning. Whereas earlier ML methods depend on hand-selected features to classify and predict, deep learning learns its own internal representations from raw data, more closely mimicking the layered structure and workings of the human mind.
Any time a social media website recommends that you tag specific friends in your uploaded photos, it’s using deep learning to recognize them. Deep learning underpins facial recognition as well as speech and handwriting recognition. It’s also the technology that allows autonomous cars to differentiate between road features and obstacles — such as debris or pedestrians.
Today, the most important work in ML concerns neural networks. Warren McCulloch and Walter Pitts were the first to suggest, in 1943, that a computer could mimic the structure of the human brain.
Neural networks are large and complex webs of thousands or even millions of simple processing nodes. Deep learning trains these networks on huge quantities of data; in many cases a small amount of labeled data is combined with much larger amounts of unlabeled data.
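As a small, hedged illustration, the sketch below trains a tiny neural network with two hidden layers on scikit-learn’s bundled handwritten-digits data set (scikit-learn assumed installed). Real deep learning systems stack far more layers and nodes, and train on far larger data sets, than this toy example.

```python
# A minimal neural network sketch: layers of simple processing nodes that
# learn to recognize handwritten digits from labeled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two small hidden layers of "neurons"; deep networks stack many more.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy on handwritten digits:", round(net.score(X_test, y_test), 3))
```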
What’s most fascinating about neural networks is that, much as with the brain itself, engineers and scientists aren’t 100% certain how the learned “identifying functions” actually work, only that they do. Scientists are now developing newer machine learning techniques that essentially force computers to reveal more about what is going on “under the hood.”
This mysterious aspect of AI, the fact that it often behaves inscrutably inside a black box, is why many public figures are calling for reasonable restrictions on AI research. At the same time, it’s a technology with untold potential to streamline industries, allocate resources more equitably and even help feed the human race as the population swells.
Sharing the Future With Intelligent Machines
We hope you’ve enjoyed this machine learning primer. There’s a lot more we need to know about the ramifications of the technology, but all of this promise points to an exciting and productive future for humanity and machines working together.