Artificial intelligence (AI) systems now influence everything from social media moderation and hiring decisions to policy and governance. Companies are building AI systems that predict how COVID-19 will develop and inform healthcare decisions. But when these systems are built and the data they learn from is selected, there is a real risk that human bias seeps in and gets amplified.

To better understand how trust in AI systems is built, we spoke with Gargi Dasgupta, Director of IBM Research India, Sameep Mehta, a distinguished engineer at IBM, and Dr. Vivienne Ming, an AI expert and founder of Socos Labs, a California-based AI incubator, to find some answers.

First, how does bias creep into AI systems?

Dr. Ming, a neuroscientist and the founder of Socos Labs, an incubator that applies AI to messy human problems, explained that bias becomes a problem when AI is trained on data that is already biased. “As an academic, I have had the opportunity to collaborate a lot with Google, Amazon and other companies,” she said.

“If you really want to build a system that can solve a problem, it is important to start by looking at the problem itself. Simply piling up a large amount of data while misunderstanding the problem doesn’t get you there.”

IBM’s Dasgupta added: “Special tools and techniques are needed to ensure that we are not biased. We need to take extra care to eliminate bias so that it is not inherently passed on to the model.”

Because machine learning learns from past data, algorithms readily pick up correlations and treat them as causation. A model can mistake noise and random fluctuations for core concepts, and when new data arrives without those same fluctuations, the model’s predictions break down.
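
This is the familiar overfitting problem. As a purely illustrative sketch (not tied to any system discussed here), the snippet below shows a flexible model “learning” labels that are nothing but coin flips: it scores perfectly on the data it memorised and no better than chance on fresh data.

```python
# Illustrative only: a model mistaking noise for signal.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 20))      # random features, no real structure
y_train = rng.integers(0, 2, size=100)    # random labels: pure noise
X_test = rng.normal(size=(100, 20))
y_test = rng.integers(0, 2, size=100)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("Train accuracy:", model.score(X_train, y_train))  # ~1.0, noise memorised
print("Test accuracy: ", model.score(X_test, y_test))    # ~0.5, no better than chance
```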

“How do we build an AI recruitment system that is not biased against women? Amazon wanted me to build something like this, and I told them that the way they were doing it wouldn’t work,” Dr. Ming explained. “They were just training AI on a lot of hiring history. They have a huge data set of past employees. But, as should surprise none of us, for many reasons almost all hiring history is biased towards men.”

“It’s not that they are bad people; the bias is already baked into the data. AI is not magic. If humans cannot figure out sexism, racism, or caste systems, then artificial intelligence will not do it for us.”

How do we eliminate bias in AI systems and build trust?

Rather than regulating AI systems, Dr. Ming prefers auditing them. “I am not a big advocate of regulation. Companies, large and small, need to be audited,” she said, describing audits of AI, algorithms, and data that would work much like audits in the financial industry.

Dr. Ming explained: “If we want an AI recruitment system to be impartial, then we need to be able to see what actually makes someone a good employee, not what correlates with the good employees of the past.”

“The correlations are easy to find: elite schools, certain genders, certain races. At least in some parts of the world, those are already part of the recruitment process. But when you do a causal analysis, going to an elite school no longer explains why people are good at their jobs. Most people who never attended an elite school do their work just as well as those who did. In a data set of about 122 million people, we typically find ten times, and in some cases about 100 times, as many qualified people who never attended an elite university.”

Solving this problem means first understanding whether an AI model is biased and how that bias arises, and then developing algorithms to eliminate it.

According to Mehta, “The story has two parts: one is to understand whether the AI model is biased. If so, the next step is to provide an algorithm to eliminate this bias.”

The IBM Research team has released a series of tools designed to address and mitigate bias in AI. One of them is IBM’s AI Fairness 360 Toolkit, an open-source library for checking data sets and machine learning models for unwanted bias; it offers roughly 70 different metrics for quantifying bias in AI.
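
As a minimal sketch of what such a check might look like, the snippet below feeds a small, entirely invented hiring table into AI Fairness 360, computes two of its bias metrics, and applies one of its mitigation algorithms (Reweighing). The column names and numbers are hypothetical; real data sets and metric choices will differ.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical historical hiring data: hired 1/0, sex 1 = male, 0 = female.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.4, 0.8, 0.9, 0.7, 0.4, 0.8],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Quantify bias in the raw data.
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# One mitigation option: reweigh examples so outcomes are less tied to sex.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
repaired = rw.fit_transform(dataset)
```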

Dasgupta said that in many cases the IBM team can detect that a system is biased. “After we detect the bias, how the customer integrates that into the remediation process is in the customer’s hands.”

The IBM Research team also developed the AI Explainability 360 Toolkit, a collection of algorithms that support the interpretability of machine learning models. Dasgupta explained that this lets customers understand their systems and further improve and iterate on them.
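
AI Explainability 360 bundles a range of explanation algorithms whose APIs are not covered in this article, so as a stand-in illustration the sketch below uses scikit-learn’s permutation importance to ask the same kind of question: which features does a (synthetic) hiring model actually rely on? All names and data here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500
experience = rng.normal(size=n)
test_score = rng.normal(size=n)
# Synthetic labels driven mostly by test_score.
hired = (test_score + 0.2 * experience + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([experience, test_score])
model = LogisticRegression().fit(X, hired)

# How much does accuracy drop when each feature is shuffled?
result = permutation_importance(model, X, hired, n_repeats=20, random_state=1)
for name, drop in zip(["experience", "test_score"], result.importances_mean):
    print(f"{name}: mean accuracy drop when shuffled = {drop:.3f}")
```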

Part of this effort is what IBM calls FactSheets, which work much like nutrition labels or Apple’s recently introduced App Privacy labels.

A FactSheet answers questions such as “Why was this AI built?”, “How was it trained?”, “What are the characteristics of the training data?”, “Is the model fair?” and “Can the model be explained?”. This standardization also makes it easier to compare two AIs with each other.
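
IBM’s published FactSheet format is not reproduced here, but the idea of a standardized label is easy to picture; the hypothetical Python record below simply captures the fields listed above so that two models could be compared side by side.

```python
# Hypothetical FactSheet-style record; field names are illustrative, not IBM's schema.
factsheet = {
    "purpose": "Shortlist job applications for interviews",
    "training_method": "Gradient-boosted trees on historical hiring records",
    "training_data": "2015-2020 applications (hypothetical)",
    "fairness": {"disparate_impact": 0.94, "statistical_parity_difference": -0.02},
    "explainability": "Per-decision feature attributions available",
}

def compare(sheet_a, sheet_b, field):
    """Return the same standardized field from two FactSheets for comparison."""
    return sheet_a.get(field), sheet_b.get(field)
```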

IBM also recently introduced new capabilities for Watson, its AI platform. Mehta said that IBM’s AI Fairness 360 Toolkit and Watson OpenScale have been deployed with multiple customers to help them make decisions.