Who created artificial intelligence? Would they be proud of where it is now or worried about the future?
One large survey of AI researchers puts the odds at 50% that AI will automate all human jobs within roughly 120 years. Whatever the exact timeline, AI is progressing at a rapid pace, and its effects won't be felt evenly across society or all at once as it continues to "go mainstream" in business, government, and everyday life.
But who created artificial intelligence? That’s what we’re here to answer. The technology is capable of tremendously impressive feats and some questionable ones – but it’s all in how you use it. So how did the creator of AI envision we’d come to use AI?
Who Coined the Term “Artificial Intelligence”?
The man behind the now-ubiquitous phrase is John McCarthy, a noted computer and cognitive scientist whose most influential AI work came in the 1950s and 1960s. When he coined the term "artificial intelligence," he defined it like this:
“[AI] is the science and engineering of making intelligent machines, especially intelligent computer programs.”
To this, McCarthy added: “AI does not have to confine itself to methods that are biologically observable,” and, furthermore and perhaps most importantly:
“I don’t see that human intelligence is something that humans can never understand.”
McCarthy never cast himself as the "primary mover" of AI research, though. For that, he credited Alan Turing, for whom the now-famous Turing Test is named. Turing died in 1954, before the field had even taken its name, but McCarthy pointed to Turing's lectures and publications of the late 1940s as the real proximate cause of AI research.
Who Created Artificial Intelligence?
John McCarthy had a lot to say about artificial intelligence during his time as a public figure in technology and science. He passed away in October 2011 at the age of 84, so he never saw the concept he pioneered develop into a truly mainstream technology, and we can't ask him how he would feel about the course AI research is taking in 2022. We can make some informed guesses, though.
McCarthy came to widespread attention in 1956, when he organized the Dartmouth Summer Research Project on Artificial Intelligence, a workshop that brought researchers together at Dartmouth College. For all intents and purposes, this was the beginning of true artificial intelligence research.
Some of the other now-famous names associated with this conference included the following:
- Marvin Minsky
- Allen Newell
- Claude Shannon
- Nathaniel Rochester
- Herbert Simon
Alan Turing, despite his foundational influence, had died two years earlier and was never part of the Dartmouth group. And if John McCarthy is the father of artificial intelligence, then Geoffrey Hinton – a researcher of a later generation who was likewise not at Dartmouth – is the father of deep learning. Each of these individuals left a lasting impact on technology and on the trajectory artificial intelligence would take.
So what did they think that trajectory would look like?
What Does the Definition of AI Say About Its Creator’s Intentions?
Interestingly, when asked to define artificial intelligence in greater detail, John McCarthy maintained that the goal was “not always or even usually” about simulating human intelligence. He went on to argue that conventional measures of human intelligence, like the Intelligence Quotient (IQ), don’t apply to machines worthy of the label “artificial intelligence.”
Some researchers and technologists describe AI systems as "black boxes." In other words, although an AI exhibits the ability to reason and draw conclusions, we don't always know how those conclusions come about, nor precisely how the system arrived at its decisions.
McCarthy said that the goal of AI is not to "simulate" human intelligence, which he regarded as a process AI should not be beholden to. What he meant is that the goal of AI should be to reproduce the output of human intelligence, not its mechanisms. His version of AI amounts to "build a computer fast enough to come to the same conclusions a human being might" – with the important distinction that the goal is not necessarily to mimic the brain's structure or its decision-making process. It's a distinction worth drawing attention to.
The conversation surrounding AI today seems to be trending along the lines of "When will AI replace humans?" or "How long until AI is more intelligent than us?"
Supplanting human levels of intelligence wasn’t the goal, according to McCarthy. He wanted AI and cognitive research to seek two goals simultaneously:
- Fully understand human intelligence and how it functions and makes decisions.
- Build computers fast enough to reach the same conclusions a human would.
When asked, “Does AI aim at human-level intelligence?”, John McCarthy said:
“The ultimate effort [of AI] is to make computer programs that can solve problems and achieve goals in the world as well as humans. However, many people involved in particular research areas are much less ambitious.”
When Machines Solve Problems “as Well as Humans”
John McCarthy didn't live to see AI truly live up to that lofty goal, and he might well have been dismayed by some of the fears now surrounding it. So maybe it doesn't matter who created artificial intelligence. We all have a stake in its future now.
Turing, McCarthy, and their colleagues wouldn't have invested so much of their time and energy into researching AI if they hadn't thought it a worthwhile endeavor. They spoke about "thinking machines" that mimic human decision-making as both an inevitability and a boon to human progress. As to how this fledgling intelligence should be defined, Turing himself argued that the question is best settled by behavior: if a machine convincingly seems to think, we have little ground to insist that it doesn't.
But what he, McCarthy, and the rest of their circle seemed most caught up in was the excitement of helping pioneer a technology that could help us understand – not replace – human consciousness.
Maybe that's what the current AI conversation is missing: AI as a lens for studying the human psyche and condition. We already have plenty of evidence of AI replicating human shortcomings and prejudices, for example. John McCarthy – whom most people will remember as the man who created artificial intelligence – was a student of both the cognitive and the digital sciences. He knew that humanity had to understand itself more fully, and why we do what we do, before AI could truly come into its own. Let us hope latter-day cognitive and computer scientists continue his work with similar priorities.