There was a time when AI only existed in movies, but now it’s everywhere you look. As common as this technology is, though, it’s not quite what you see in sci-fi media. For all of our technological advancements, we still don’t have self-aware artificial intelligence.
A lot of the movies and shows that deal with AI ask ethical questions about robot sentience. Given how prominent this subject is in media, there’s a surprising lack of it in AI ethics discussions in real life. You could chalk that up to today’s limited technology, but it’s not quite that straightforward.
So, how far away are we from self-aware AI, and what will it take to get there? Does it even matter? Let’s take a closer look.
How Close Are We to Self-Aware Artificial Intelligence?
First things first, is machine consciousness even something you could see in your lifetime? Probably not. There have been a few instances where machines reportedly passed the Turing Test, but that doesn’t mean they were sentient.
As smart as today’s AI is, all of it still follows its programming. It mimics human intelligence without having intelligence of its own, and experts agree that doesn’t constitute self-awareness. Moving past that line is challenging, in part because scientists still aren’t sure how human consciousness works.
AI today can act autonomously and sometimes correct itself, but you wouldn’t call it conscious. That’s where the biggest issue with this subject comes in. How can you define and measure consciousness apart from “you’ll know it when you see it”?
Issues With Measuring Self-Aware Artificial Intelligence
The Turing Test measures people’s perception of a machine, not machine sentience itself, so it doesn’t work as a consciousness test. You could argue that anything good enough to fool people is close enough to the real thing, but that only highlights the deeper problem: self-awareness in AI is hard to measure because it’s hard to define in the first place.
Some researchers have defined consciousness along three levels — C0, C1 and C2. C0 refers to unconscious calculations like facial recognition, and C1 involves making decisions by considering multiple possibilities. C2, or metacognition, is an awareness of one’s own thoughts, which is much harder to measure.
Since some machines can recognize errors and correct themselves, they showcase attributes of metacognition. You still wouldn’t call them self-aware, though, so there has to be something more to it. That “something more” seems impossible to narrow down, much less measure scientifically.
Take your own consciousness, for example. How can you prove that you’re aware of your own thoughts?
Do We Need Sentient AI?
Some people might argue that these questions about how to measure sentience are unnecessary. Why should we pursue self-aware artificial intelligence? As significant a breakthrough as sentient AI would be, it may not offer us anything better than what we already have.
Consider self-driving cars. While we’re still years away from having advanced enough technology to make them a reality, we don’t need robot sentience for them. They don’t have to be self-aware to do their job, and the same is true for most automated tasks.
As for more nuanced work, if it’s best left to humans, why try to change that? AI and people work best together, each playing to their unique strengths. We don’t need robots to resemble humans when they’re more valuable doing the things humans can’t.
Consciousness Is Complicated
We may never have self-aware artificial intelligence, and if we do, it almost certainly won’t be soon. Consciousness is too hard to define or measure. When you consider the big picture, though, that’s not a bad thing.
AI doesn’t need to be self-aware to be of better use to humans. In fact, you could argue that it’s better if machines aren’t exactly like us. If nothing else, it’d be nice to avoid a Terminator-style robot uprising, right?