How Old Is AI? The History of Artificial Intelligence

May 5, 2021 • Zachary Amos

In This Article

  1. When Did Artificial Intelligence Start?
  2. Developing the Tech Behind Modern Artificial Intelligence
  3. The First AI Winter
  4. AI Takes Off Again
  5. Modern Uses of Artificial Intelligence
  6. What’s Next: The Future of AI

Artificial intelligence (AI) seems to be everywhere. A massive number of companies, both inside and outside the tech world, are using it to help improve their businesses or offer new products.

While the tech has only seen mainstream use in the past decade, researchers have been trying to create artificial intelligence — machines that can think like a person does — for decades.

The origins of artificial intelligence can be traced back to the 1950s, so AI is generally considered about 70 years old. However, the concept behind the tech is even older than that.

Here’s where AI tech has come from and where it’s probably headed.

When Did Artificial Intelligence Start?

The question “how old is AI?” can be difficult to answer. While the technology we consider AI today didn’t emerge until relatively recently, researchers started laying the groundwork for it decades ago.

In the mid-20th century, computers were slowly becoming cheaper and more accessible. For the first time, universities — even those without massive budgets — could afford computers that could store commands and information. This allowed researchers to experiment with early computer programming.

As a result, a growing number of scientists became interested in answering the same question — could you make a computer think?

In 1950, computer scientist Alan Turing published one of the most important papers in the history of artificial intelligence, “Computing Machinery and Intelligence.” The paper proposed a test now known as the Turing Test. It involves a machine and a person in separate rooms answering text-based questions from a human judge.

If the judge can’t correctly identify which party is human and which is the machine, the computer passes the test. Passing the Turing Test signifies a machine is intelligent or can successfully imitate human intelligence. While researchers today question the Turing Test’s accuracy, it represents the mid-20th century’s growing fascination with the concept of artificial intelligence.

Most people point to this period as the origin of artificial intelligence. The phrase “artificial intelligence” itself was first used at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956, organized by John McCarthy and Marvin Minsky.

The conference was key for the development of AI and kicked off a multi-decade period of enthusiasm and investment in new AI tech. From the 1950s to the 1970s, there was significant research on early AI and a lot of optimism about what researchers would be able to achieve. In general, the feeling was that, within a generation, computers would be just as intelligent and good at problem-solving as the average person.

Developing the Tech Behind Modern Artificial Intelligence

That 1956 conference may have kickstarted artificial intelligence research, but it wasn’t when the first AI was created. Early versions of modern AI tech would emerge later.

Before any computer could run it, Turing wrote a chess-playing program, executing its instructions by hand. In the ’50s and ’60s, researchers started applying this concept to actual computers, trying to create chess-playing machines. A few decades later, AI would finally achieve that goal.

One of the earliest examples of artificial intelligence appeared in the mid-1960s with a rudimentary chatbot called ELIZA, created by Joseph Weizenbaum at MIT. ELIZA communicated using pattern matching, a basic form of natural language processing (NLP), an AI technique you can find everywhere today. The chatbot is also one of the earliest examples of a machine that could fool some people into thinking it was a human.
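
To make this concrete, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules and responses below are invented for illustration; they are not Weizenbaum’s original script.

```python
import re

# A minimal sketch of ELIZA-style pattern matching. Each rule maps a
# regular expression over the user's input to a canned response
# template. These rules are invented for illustration; they are not
# Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    """Return the first matching rule's response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # the fallback keeps the conversation moving

print(respond("I am feeling stuck"))
# -> How long have you been feeling stuck?
```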

Around the same time, the Stanford Research Institute began developing a robot called Shakey. Shakey used NLP to interpret commands typed in plain English, making it the first mobile robot that could understand instructions. While this understanding was limited by today’s standards, it marked an important milestone for AI development.

In 1973, researchers at Japan’s Waseda University completed another semi-intelligent robot called WABOT-1. WABOT-1 could go beyond interpreting commands to communicate with people, much as today’s AI does. Some people call this bot the first intelligent humanoid robot, making it an early predecessor of modern AI-powered robots like Sophia.

The First AI Winter

In the history of artificial intelligence, the mid-’70s to the mid-’90s is known as an “AI Winter.” That’s because, compared to the previous two decades, AI funding fell, and people started to lose interest in the technology. While there may have been fewer landmark moments in this period, artificial intelligence tech was still progressing.

One of the most important AI developments to appear during this time is the expert system. These programs encode expert knowledge in a specific field and can simulate human judgment within their areas of expertise. The concept first emerged in the 1970s and continued to grow throughout the ’80s and beyond. You could call many modern AI solutions expert systems.
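
The core mechanism is straightforward: knowledge is captured as if-then rules, and an inference engine fires whichever rules match the facts at hand. Here is a toy sketch in Python; the medical rules are invented purely for illustration and are not taken from any real system.

```python
# A toy illustration of the expert-system idea: expert knowledge is
# captured as if-then rules, and an inference engine fires whichever
# rules match the observed facts. These medical rules are invented
# for illustration only; they are not from any real system.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"fever", "rash"}, "possible viral illness"),
    ({"headache", "stiff neck"}, "urgent: rule out meningitis"),
]

def infer(symptoms: set) -> list:
    """Return the conclusion of every rule whose conditions all hold."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]

print(infer({"fever", "cough", "fatigue"}))
# -> ['possible respiratory infection']
```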

Apart from expert systems, the ’80s and ’90s laid the foundation for a lot of the practical AI tech encountered in daily life. 

For example, neural networks are a popular modern artificial intelligence technique used to power advancements like computer vision, which allows a computer to “see” by breaking down complex visual input into objects and shapes. Much of the groundwork for modern neural networks was developed in the mid-1980s.

This era also saw AI grow in the medical field, one of its most promising applications today. For example, in 1986, Massachusetts General Hospital released DXplain, a medical decision-support program. DXplain could generate diagnoses based on patient symptoms and provide information on roughly 500 diseases. The system is still in use today and has data on more than 2,400 health conditions.

AI Takes Off Again

By the 1990s, AI research and funding started to pick up steam again. Researchers were demonstrating that artificial intelligence could beat professional players in games like backgammon, checkers and chess. In backgammon, for instance, a neural network trained with a learning algorithm called backpropagation reached the level of top human players.
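
Backpropagation works by passing the network’s prediction error backward through its layers to compute how each weight should change. Here is a minimal sketch of the idea, training a tiny network on the XOR function; the architecture, seed and learning rate are arbitrary choices for illustration.

```python
import numpy as np

# A minimal sketch of backpropagation: a small network learning XOR.
# The architecture, seed and learning rate are arbitrary choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the network's predictions
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)   # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta

    # Gradient-descent weight updates (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

# Outputs should approach [0, 1, 1, 0]; exact values depend on the init
print(out.round(2).ravel())
```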

At this point in the history of AI, research hit a landmark moment in 1997, when IBM’s Deep Blue won a six-game chess match against reigning world champion Garry Kasparov. Kasparov had beaten Deep Blue and an earlier version of the software on a few occasions prior, making this victory something of an underdog story.

The ’90s also saw several other major artificial intelligence advancements — like a 1995 cross-U.S. trip by a semi-autonomous car steered largely by AI.

One of the most famous examples of artificial intelligence appeared in 2007 when IBM created Watson. Watson, which still exists as a broader, more advanced AI platform, uses NLP and machine learning to understand and answer complex questions. In 2011, Watson had its most iconic moment when it won the game show “Jeopardy!” against two of the show’s most successful human champions, Ken Jennings and Brad Rutter.

Modern Uses of Artificial Intelligence

Today, artificial intelligence is almost everywhere. If you regularly use the internet, you’re probably taking advantage of some type of modern AI.

Most major names in tech — like Alphabet, Microsoft, Amazon and Apple — use AI in one way or another. Product recommendation algorithms, modern search engines and many customer service chatbots are all examples of artificial intelligence in practice.

Some of the most familiar examples of AI are smart assistants like Siri, Alexa and Google Assistant. These digital helpers use NLP to understand commands and machine learning to learn your habits and preferences. While they seem a far cry from predecessors like ELIZA and Shakey, these bots operate on many of the same basic principles.

Many examples of artificial intelligence you run into today are less obvious. Whenever you search for something on Google or look at recommended videos on YouTube, you’re using AI, whether you realize it or not. These services analyze content and your history to match things to your preferences, helping you find what you’re after sooner.
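
The underlying idea can be sketched in a few lines: represent each item and the user’s history as feature vectors, then rank items by similarity. This is a heavily simplified illustration with invented items and features, not how Google’s or YouTube’s actual systems work.

```python
import numpy as np

# A heavily simplified sketch of content-based recommendation: rank
# items by cosine similarity between their feature vectors and a
# profile averaged from the user's history. Items and features here
# are invented for illustration.
items = {
    "cooking video":  np.array([1.0, 0.0, 0.2]),
    "chess tutorial": np.array([0.0, 1.0, 0.1]),
    "baking video":   np.array([0.9, 0.1, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Build the user's taste profile from what they already watched
history = [items["cooking video"]]
profile = np.mean(history, axis=0)

ranked = sorted(items, key=lambda name: cosine(profile, items[name]),
                reverse=True)
print(ranked)
# Cooking and baking videos rank above the chess tutorial
```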

The influence of artificial intelligence extends well beyond the tech world. AI-powered warehouse robots help workers organize goods. Self-driving cars may be just a few years away from becoming a reality. In just about every field, artificial intelligence data analysis can help improve predictions and analyze unstructured data. The right tool can also help analysts find subtle patterns that a more traditional analytic approach might miss.

Theoretical AI research is still ongoing. For many scientists, the goal remains more complex AIs. The most ambitious researchers hope to eventually develop AIs that have a theory of mind or even self-awareness. However, this kind of artificial intelligence is closer to science fiction than something you’ll probably see in the near future.

What’s Next: The Future of AI

From the history of AI to its unknown future, the story of artificial intelligence research is ironically the story of the triumph of human intelligence. More than 70 years in the making, AI tech is still advancing. In the near future, researchers hope to use AI to power fully self-driving cars, advanced machine translation tools, and new autonomous robots for manufacturing and construction. People in these industries could work alongside and even hold conversations with robot co-workers.

Beyond that, scientists are still pursuing the original goal of AI research — creating machines that think in the same way people do. This tech is probably still a very long way off. However, with the recent rise of generative AI tools like ChatGPT, less complex AI will likely become increasingly important to business and daily life.
