
What You Should Know About Black Box AI

January 28, 2022 • Rehack Team


Artificial intelligence can appear mysterious and complex, especially when we have no way of knowing for certain how AI models come to their conclusions. The part of an AI where its critical reasoning processes happen, the black box, is virtually impossible for developers to access, let alone understand. What exactly is the black box, though, and why is it the most important part of the entire AI?

What Is the Black Box?

AI systems are built from layers of functions nested within one another. The black box is part of an AI’s deep learning neural network, which in turn sits inside its machine learning capability. Machine learning (ML) is what allows AI to learn and improve independently, building understanding through repeated examination of thousands or even millions of pieces of example data. Deep learning is what enables AI to interpret that data and make decisions and predictions based on it. The black box is where those decisions happen.
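
To make the idea concrete, here is a minimal sketch (not any particular production system) of how a tiny neural network passes input data through hidden layers to reach an output. The layer sizes and random weights are arbitrary placeholders; the point is that everything between the input and the prediction is just stacks of numbers.

```python
# A minimal sketch of a neural network forward pass: the "black box" is the
# stack of hidden layers between the input data and the output prediction.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 4 input features, two hidden layers, 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def predict(x):
    """Pass input data through the hidden layers to get a prediction."""
    h1 = np.tanh(x @ W1 + b1)                  # hidden layer 1: just numbers
    h2 = np.tanh(h1 @ W2 + b2)                 # hidden layer 2: just numbers
    return 1 / (1 + np.exp(-(h2 @ W3 + b3)))   # output: a probability

x = rng.normal(size=(1, 4))   # some example input data
print(predict(x))             # we can see the output, but not the "why"
```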

The black box gets its name from its mysterious nature. Scientists have no direct way of knowing what goes on inside an AI’s black box. They can see the data that was input and the decision or prediction that was output, but the exact logical process that led to that result is a mystery. Interestingly, researchers have pointed out the similarity to human brains. While we can study our decisions and preferences extensively, it is virtually impossible for us to know for certain what leads us to make the choices we do. The AI black box works the same way, though researchers are working to unlock its secrets.

Why Is the Black Box Important?

Understanding the AI black box is about more than scientific interest. In fact, it is arguably the greatest concern in the entire field of AI because of its far-reaching impact.

A sentiment is growing in popular culture that AI is suspicious or untrustworthy. While this perception is based largely on the fiction of film and TV, people see it reflected in real life when AI makes incorrect or “creepy” decisions. To make matters worse, businesses that could benefit from AI are becoming suspicious of it as well, because they aren’t sure they can trust it to perform the tasks they need reliably. This suspicion isn’t unfounded, either.

Logic Bias

Cases of AI exhibiting flawed reasoning have appeared more frequently in recent years. One major example is Amazon’s now-discontinued recruiting AI. Amazon had been quietly using an AI to analyze applicants’ resumes and screen candidates for job openings. The company shut the tool down amid a serious PR crisis after discovering that it eliminated candidates based on gender rather than merit. Applicants were being penalized for mentions of “female” or “women” in their applications, such as the phrase “captain of the women’s chess club” or the name of an all-women’s college or university. The likely source of the problem was the hiring history the model learned from: well over half of Amazon’s employees are men.

AI bias is a serious issue at the center of concerns surrounding the black box. In earlier days of AI, this wasn’t much of a problem since most uses concentrated on scientific or novel endeavors. However, that is no longer the case. We now use AI to drive cars, stop crime, and even influence medical decisions. These serious life-or-death uses demand a clear and thorough understanding of exactly how an AI thinks because that thought process has monumental real-world implications.

Recent Controversies

As an illustration of how serious AI bias can be, intense controversy has recently erupted over the use of AI in law enforcement. Some law enforcement organizations use AI facial recognition software to help identify people in security camera footage. The problem is that this technology was found to be statistically biased against people of color, to the point of misidentifying individuals entirely. These results can seem baffling: the software has no explicit concept of race or skin color, so it shouldn’t be able to make racially biased decisions on its own. The bias instead emerges from how, and on what data, the system was trained.

Similar issues arise in virtually any use of AI, from autonomous vehicles misunderstanding objects on the road to photo analysis AI misidentifying pictures of animals. Sometimes the AI can appear to be functioning correctly when in reality it has learned to base its decisions on incorrect hypotheses that simply go unnoticed. For example, suppose we train an AI to recognize pictures of books. Rather than identifying them based on the presence of a title on the spine or the shape of a cover, the AI learns to recognize a book simply based on the shape of a rectangle. This technically works for identifying books, but it also means the AI will incorrectly recognize random rectangles as books too.
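
A toy sketch of that “book = rectangle” shortcut is shown below. The data is entirely made up for illustration: in the training set every book is rectangular and nothing else is, while the genuinely useful cue (a readable title on the spine) is only sometimes visible, so the model leans on the shortcut.

```python
# Toy example of a model latching onto a spurious feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

is_book = rng.integers(0, 2, n)                  # ground-truth label
is_rectangular = is_book.copy()                  # spurious cue: all books, only books
title_visible = is_book * rng.integers(0, 2, n)  # real cue, often missing

X = np.column_stack([title_visible, is_rectangular])
model = LogisticRegression().fit(X, is_book)

print(model.coef_)              # most of the weight lands on "rectangular"
# A cardboard box: rectangular, no title. The model calls it a book anyway.
print(model.predict([[0, 1]]))  # -> [1]
```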

Why Neural Networks Make Errors

When studying AI, it is important to remember that AI has no innate understanding of the real world. It can only understand data. So, while a human would never mistake a snake for a giraffe, an AI could, because it has no conception of what a snake, a giraffe, or even an animal is. This makes it difficult to trace the logic behind a machine learning decision, because the AI could be relying on any number of false connections to reach a conclusion. It might determine that an autonomous car should stop whenever it is facing north, or that the presence of grass in a photo indicates the picture shows a dog.

To use Amazon’s recruiting AI as an example, the factors that led the AI to believe it should only hire men were somewhere in its training data. Perhaps it found that more men were already working at Amazon, or maybe its trainers showed it more favorable example resumes for male candidates. It is impossible to know for certain why the AI reached this conclusion, because the black box is extremely difficult to access and AI doesn’t reason the way humans do. If an AI is fed training data that contains even subtle bias or flaws, it can latch onto them and end up building on a foundation of incorrect reasoning.

How Are Scientists Cracking the Black Box?

The black box has massive implications for the successful use of AI in any field, from something as trivial as AI characters in video games to something as serious as law enforcement. Increasing public trust in AI depends on developing a solution to the black box issue, and that is still a work in progress. Some may wonder why scientists don’t just pull up the data on an AI’s decision-making and analyze it. That would be convenient, but the problem is that an AI’s decision-making isn’t stored as code the way we know it; it exists only as data.

Scientists can technically crack open the black box, but what they find inside would be equivalent to data mashed potatoes, little more than a messy pile of indistinguishable letters and numbers. Although humans can’t understand this data, another AI might be able to decipher it. Researchers are using a variety of approaches to make the black box user-friendly.
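
As a small illustration of what that raw material looks like, the sketch below trains an off-the-shelf neural network on arbitrary synthetic data and prints a slice of its learned parameters. The model and dataset are stand-ins, but the output makes the point: the “reasoning” is stored as unlabeled arrays of floating-point numbers, not readable rules.

```python
# After training, a neural network's "logic" is just arrays of numbers.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500,
                    random_state=0).fit(X, y)

# The learned parameters: thousands of raw values per layer.
for i, layer in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {layer.shape}")
    print(layer[:2, :4])  # a peek at the values, meaningless on their own
```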

Black Box Analysis Tools

Computer scientists have developed algorithms and tools that can be applied to black box AI models to help developers analyze what is happening inside. Some of these tools trace the pathway of a single neuron within the deep learning neural network. Others work by process of elimination to test hypotheses about why an AI isn’t functioning correctly: they identify potential triggers for incorrect results, remove or neutralize them in the input data, and assess whether the output changes noticeably.
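
Here is a rough sketch of that perturbation-style testing, under assumed placeholders: a generic black box model trained on synthetic data, where each feature is neutralized in turn to see how much the model’s output shifts. It is an illustration of the idea, not any specific commercial tool.

```python
# Perturbation test: neutralize one feature and measure the output shift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=1)
black_box = RandomForestClassifier(random_state=1).fit(X, y)

def ablation_effect(model, X, feature_idx):
    """Score how much predictions move when one feature is neutralized."""
    baseline = model.predict_proba(X)[:, 1]
    X_perturbed = X.copy()
    X_perturbed[:, feature_idx] = X[:, feature_idx].mean()  # wipe out the signal
    perturbed = model.predict_proba(X_perturbed)[:, 1]
    return np.abs(baseline - perturbed).mean()

for idx in range(X.shape[1]):
    shift = ablation_effect(black_box, X, idx)
    print(f"feature {idx}: average shift in output = {shift:.3f}")
```

Features that produce a large shift when removed are strong candidates for what the model is actually relying on, which is exactly the kind of hypothesis these tools let developers test.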

While these black box analysis tools can’t guarantee 100% accuracy, they can at least help scientists develop a better understanding of why an AI comes to the conclusion that it does.

White Box AI

A more ambitious, cutting-edge approach to the black box problem is building an AI from the ground up that is designed to be interpretable. These “white box” or “interpretable” AI models are built so that scientists can easily see exactly how conclusions are reached. This field is often referred to as explainable machine learning.
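
A minimal sketch of the white box idea follows: a small decision tree whose learned rules can be printed and read directly. The dataset is a standard example set, chosen only for illustration.

```python
# A "white box" model: the learned decision rules are directly readable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Unlike a neural network's weight matrices, these rules are legible:
print(export_text(tree, feature_names=list(data.feature_names)))
```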

Some have been led to believe that white box AI cannot make decisions with the same accuracy as black box AI, but research has increasingly challenged that assumption; interpretable models can match black box performance on many tasks. Explainable AI may be more complicated to develop, but the extra time and effort are well worth it, especially from an ethical standpoint. With white box AI, developers can verify that their model is reasoning soundly and audit it for bias. This transparency and accountability are the keys to increasing the trustworthiness and reliability of AI.

The Future of Friendly AI

AI has a lot to offer the world and will be instrumental in shaping the technological innovations of the future. The cases where AI goes wrong don’t have to put an end to the technology before it has a chance to develop fully. Unlocking AI’s black box may seem impossible, but researchers have already laid the foundation for new models that not only open the black box but translate what’s inside. By designing new AI to be explainable, the AI of tomorrow can be friendlier, more accurate, and more reliable.
