Sunday, October 29, 2017

Scientists Find Way to Reduce Racism and Sexism in Robots


In 2016, Microsoft released onto Twitter a “playful” chatbot named Tay, designed to show off the tech giant’s burgeoning artificial intelligence research. Within 24 hours, it had become one of the internet’s ugliest experiments.

By learning from its interactions with other Twitter users, Tay quickly went from tweeting that “humans are super cool” to claiming “Hitler was right I hate the jews.”


While it was a public relations disaster for Microsoft, Tay demonstrated an important issue with artificial intelligence built on machine learning: robots can be as racist, sexist and prejudiced as humans if they acquire knowledge from text written by humans.

Fortunately, scientists may now have discovered a way to better understand the decision-making process of artificial intelligence algorithms, which could help prevent such bias.

AI researchers sometimes refer to the opaque process machine learning algorithms go through when reaching a decision as the “black box” problem, because the systems cannot explain the reasoning behind their actions. To better understand it, scientists at Columbia and Lehigh Universities reverse engineered neural networks so they could be debugged and error-checked.

“You can think of our testing process as reverse engineering the learning process to understand its logic,” said Suman Jana, a computer scientist at Columbia Engineering and a co-developer of the system. “This gives you some visibility into what the system is doing and where it’s going wrong.”

In order to understand the errors made, Jana and his fellow developers tricked an AI algorithm used in self-driving cars into making mistakes. This is a particularly pressing issue given the technology’s recent adoption: last year, a Tesla operating autonomously collided with a truck its software failed to distinguish from the brightly lit sky, killing the driver.



A debugging tool developed by researchers at Columbia and Lehigh generates real-world test images meant to expose logic errors in deep neural networks. The darkened photo at right tricked one set of neurons into telling the car to turn into the guardrail. After catching the mistake, the tool retrains the network to fix the bug. Columbia Engineering

By feeding confusing, real-world inputs to deep learning neural networks, Jana and his team were able to expose flawed reasoning within the decision-making process. DeepXplore, the tool developed to do this, was also able to automatically retrain the neural network and fix the bugs it exposed.
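The approach behind the tool is essentially differential testing: generate realistic perturbations of an input, such as the darkened road photo above, run them through more than one network that should behave the same way, and treat any disagreement as a sign of a logic error. The following is a minimal Python sketch of that idea; the stand-in “steering models,” the darkening perturbation and all function names are illustrative assumptions for this article rather than the DeepXplore code, and the neuron-coverage component of the real system is omitted.

import numpy as np

# Two stand-in "steering models" that are supposed to behave the same.
# DeepXplore tests actual deep networks; these toy functions exist only
# so the sketch runs on its own.
def steering_model_a(image):
    # Steers toward the brighter half of the road image.
    half = image.shape[1] // 2
    return 1.0 if image[:, half:].mean() > image[:, :half].mean() else -1.0

def steering_model_b(image):
    # A second model that also reacts to overall brightness, so very
    # dark scenes can push it toward a different answer.
    if image.mean() < 0.2:
        return -1.0
    return steering_model_a(image)

def darken(image, factor):
    # Realistic perturbation: simulate dusk or heavy shadow.
    return np.clip(image * factor, 0.0, 1.0)

def find_error_inducing_inputs(image, models, factors=(1.0, 0.6, 0.3, 0.1)):
    # Perturb the input and keep every version on which the models
    # disagree; disagreement is treated as evidence of a logic error.
    suspicious = []
    for f in factors:
        perturbed = darken(image, f)
        outputs = {m.__name__: m(perturbed) for m in models}
        if len(set(outputs.values())) > 1:
            suspicious.append((f, perturbed, outputs))
    return suspicious

road = np.linspace(0.4, 0.9, 64).reshape(8, 8)  # stand-in road image, brighter to the right
for factor, _, outputs in find_error_inducing_inputs(road, [steering_model_a, steering_model_b]):
    print(f"darkening factor {factor}: models disagree -> {outputs}")

# In DeepXplore, error-inducing inputs like these are fed back into
# training (with corrected labels) so the network can be retrained.

In the real system, the error-inducing inputs are also chosen to activate as many previously unexercised neurons as possible, which is how it surfaces mistakes that random testing misses.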

DeepXplore was tested on 15 state-of-the-art neural networks, including self-driving networks developed by Nvidia. The software discovered thousands of bugs that had been missed by previous error-spotting techniques.

Beyond self-driving cars, the researchers say DeepXplore can be applied to the artificial intelligence behind air traffic control systems, as well as to antivirus software that must uncover malware disguised as benign code.

The technology may also prove useful in eliminating racism and other discriminatory assumptions embedded within predictive policing and criminal sentencing software.


Earlier this year, a separate team of researchers from Princeton University and the University of Bath in the U.K. warned of artificial intelligence replicating the racial and gender prejudices of humans.

“Don’t think that AI is some fairy godmother,” said study co-author Joanna Bryson. “AI is just an extension of our existing culture.”


A roof-mounted camera and radar system is shown on a self-driving car during a demonstration in Pittsburgh on September 13, 2016. Aaron Josefczyk/Reuters

Learning from data supplied by humans, AI can absorb assumptions about everything from crime to the makeup of the labor force. For example, a 2004 study published in The American Economic Review found that identical resumés received fewer callbacks from employers when they carried African-American rather than European-American names, and the Princeton and Bath researchers found that AI trained on human language reproduces the same preference for European-American names.
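To see how that kind of learned association can be measured, here is a rough sketch in the spirit of the word-embedding association tests the Princeton and Bath team used; the tiny hand-made vectors and names below are invented for illustration, and a real analysis would use embeddings learned from large text corpora such as GloVe or word2vec.

import numpy as np

# Invented toy embeddings; a real test would load vectors trained on
# billions of words of human-written text.
embeddings = {
    "emily":      np.array([0.9, 0.1, 0.2]),   # target words (names)
    "lakisha":    np.array([0.2, 0.8, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),   # attribute words
    "unpleasant": np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    # Positive value: the word sits closer to "pleasant" than "unpleasant".
    v = embeddings[word]
    return cosine(v, embeddings["pleasant"]) - cosine(v, embeddings["unpleasant"])

for name in ("emily", "lakisha"):
    print(f"{name}: association with pleasantness = {association(name):+.3f}")

# A systematic gap between groups of names is the kind of learned bias
# the researchers warn about.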

“We plan to keep improving DeepXplore to open the black box and make machine learning systems more reliable and transparent,” said Columbia graduate student and co-developer Kexin Pei.

“As more decision-making is turned over to machines, we need to make sure we can test their logic so that outcomes are accurate and fair.”
