
The Complicated Relationship Between Artificial Intelligence and Ethics

January 18, 2021 • Devin Partida

Artificial intelligence (AI) is one of the most exciting technologies under development today. It’s also one of the most enigmatic and potentially dangerous.

The promise of AI lies in its ability to apply human-like cognition to analyzing and responding to datasets. Artificial intelligence can make predictions about future events, anticipate disruptions, streamline workflows and deliver personalized recommendations on almost any topic.

As more industries — agriculture, banking, manufacturing, logistics and many others — turn over more and more data to artificial intelligence, we’re approaching a reckoning on some of the less ethical uses of AI. And even when it’s working as intended, it still takes a lot of work to ensure the results don’t show bias.

Here’s a rundown of some of the biggest open-ended questions right now when it comes to artificial intelligence and ethics.

Social Engineering

One of the most appealing applications of machine learning and AI is receiving personalized recommendations for music, movies, TV shows and even news articles.

It’s the idea of personalized newsfeeds that brings the greatest worry. Confirmation bias and filter bubbles are well-documented and highly destructive phenomena in which individuals seek out and believe only information that fits their existing biases and expectations.

Turning over news aggregation to AI is an incredibly dangerous trend. So is having AI write the actual news stories. The problem is that healthy public discourse and a thriving democratic process depend on citizens casting a wide net for perspectives and facts. News feeds personalized by AI only deepen our misconceptions, preconceptions and biases.

Political polarization will only grow more pronounced if engineers don’t develop ways to democratize the algorithms powering AI news aggregation systems.

Wealth Distribution and Unemployment

Technology naturally delivers opportunities to unburden ourselves of repetitive labor. Global economic output surged dramatically in the years following World War II thanks to technological breakthroughs, and it continues to rise slowly but steadily.

Automation and now artificial intelligence have led to substantial improvements in how much work can be performed per hour of human labor. The ethical question arises at the point where a job can be performed more cheaply and quickly by artificial intelligence or robotics than by a human worker.

Most people on earth rely on selling the majority of their time to make a living. And yet, some 78% of predictable physical labor is at high risk of automation, along with 25% of unpredictable labor. There are major unanswered questions about what happens to people’s livelihoods in the coming years if these predictions pan out.

As many as 800 million jobs could be on the line by 2030, which would radically remake the economy and global workforce. Some are calling for heavy taxes on companies that adopt automation, so that we can continue to invest in communities, social programs and, potentially, some kind of universal basic income.

There’s a chance to deliver ourselves from busywork using AI — but success depends on keeping the owners of the robots and algorithms accountable to the workers they’re displacing.

Racism and Bias

Human beings have innumerable flaws. Not surprisingly, we’ve been developing artificial intelligence in our own image. AI has a tremendous capacity to study huge sets of data and draw conclusions or make decisions. But when it comes to making decisions that directly affect human lives, AI appears to struggle with ideas like fairness and equality.

Examples already abound of artificial intelligence displaying signs of racism when asked to predict the likelihood of future crimes. In a now-famous example, one AI demonstrated bias against people of color when asked to predict the likelihood that an offender would commit a second crime.

Other examples show that artificial intelligence cannot yet be counted on to show objectivity in rating customers’ potential creditworthiness. Indeed, as Deloitte points out, “AI systems are only as good as the data we put into them.”

It would appear that AI developers need to substantially improve their methods for ensuring that racial, gender and ideological biases don’t make their way into AI software.

Humane Treatment of AI

We’d be remiss if we didn’t flip the script to look at the artificial intelligence and ethics problem from the other way around. What constitutes ethical treatment of an AI?

Even the simplest animals share several behavior mechanisms with humans, such as aversion and reward. As artificial intelligence becomes more lifelike and more capable of responding to a wide range of stimuli, we will almost certainly face increasing pressure to recognize some examples of AI as fledgling forms of life.

Will this prove as controversial as extending human rights to our most closely related animal cousins? The Great Ape Project is one example of a movement pressuring nations to extend basic rights to the great apes, including chimpanzees, gorillas, bonobos and orangutans.

Artificial intelligences will only become more convincingly human in their cognition, speech patterns and demeanor. The future will likely see more questions like these extended to them, advancing debates about the point at which an AI achieves true sentience.

Towards an Ethical Future for AI

There’s little question that AI is changing the world as we know it, but the questions surrounding artificial intelligence and ethics aren’t going anywhere. AI has the potential to make us vastly safer and more productive, and even to mitigate human suffering. It could make life better for all the peoples of the earth by allocating resources more effectively.

To get there, we must ensure that human frailties or actors with questionable motives don’t contaminate the data or otherwise subvert these positive visions of the future.
