Important Steps to Take to Address Bias in AI

New advances in AI technology offer promising opportunities, but they also carry the risk of inherent bias.


As we have mentioned previously, bias is a serious problem in machine learning, and there are important steps we can take to mitigate it going forward.

Regardless of how culturally, socially, or environmentally aware people consider themselves to be, bias is an inherent trait that everyone has. We are naturally attracted to facts that confirm our own beliefs. Most of us tend to believe that younger people will perform certain tasks better than their older colleagues, or vice versa. Countless studies reveal that physically attractive candidates have a better shot at getting hired than unattractive ones. The list goes on.

We, as humans, can't confidently say that our decision-making is bias-free. The root of the problem is that bias creeps in unconsciously, leaving us unable to tell whether the decisions we made were biased or not.

This is why the notion of biased artificial intelligence algorithms shouldn't be surprising: the whole point of an AI system is to replicate human decision-making patterns. To build a functional AI system, developers train it on countless examples of how real people solved a particular problem.

For example, to build an AI system that helps sort job applications, engineers would show the algorithm many examples of accepted and rejected CVs. The AI system would then figure out the main factors that drive those decisions, and developers would test the system's accuracy and deploy it. In this simplified scenario, two problems can emerge: the HR specialists whose decisions make up the data may have been biased to begin with, and the training dataset may be unrepresentative of a certain gender, age group, race, etc. It may be, for example, that a company has historically, if unintentionally, hired only men for the frontend developer position, prompting the AI to rule women out before they even get a chance to be interviewed.
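The toy sketch below shows how easily this happens. The data and column names are made up for illustration; because every hired candidate in this history is a man, the classifier learns gender as a shortcut for the decision.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical hiring history: every positive example happens to be male
history = pd.DataFrame({
    "years_experience": [5, 7, 2, 6, 3, 4, 8, 1],
    "is_male":          [1, 1, 0, 1, 0, 1, 1, 0],
    "hired":            [1, 1, 0, 1, 0, 1, 1, 0],
})

model = LogisticRegression().fit(
    history[["years_experience", "is_male"]], history["hired"]
)

# The model assigns a positive weight to 'is_male': gender now acts
# as a proxy for the hiring decision
print(dict(zip(["years_experience", "is_male"], model.coef_[0])))
```

This leads us to the first method of eliminating bias from AI.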

Data Fairness

AI has been important in solving many challenges. However, the data behind it must be well structured and as free of bias as possible.

In the majority of cases, especially for inexperienced developers or small companies, the biggest source of AI unfairness lies in the training data. A dataset diverse enough to account for every demographic and every other critical attribute is something data scientists can only dream of. That's why you should approach AI development as if your training data were inherently biased and account for this at every stage of the process.
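A quick audit of group representation and historical outcome rates is a reasonable first step: it shows how skewed the data is before any modeling begins. The dataset and column names below are illustrative.

```python
import pandas as pd

# Made-up application history with a 'gender' and a 'hired' column
df = pd.DataFrame({
    "gender": ["m", "m", "m", "m", "m", "m", "f", "f"],
    "hired":  [1,   1,   0,   1,   1,   0,   0,   0],
})

# How is each group represented in the data? (here: 75% m, 25% f)
print(df["gender"].value_counts(normalize=True))

# How often did each group historically get the positive outcome?
# (here: m ~0.67, f 0.0; a gap this large is a warning sign)
print(df.groupby("gender")["hired"].mean())
```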

The Alan Turing Institute has introduced a method called 'counterfactual fairness', aimed at revealing dataset problems. Let's get back to our example of a company that uses AI to hire a frontend developer. To check that the algorithm is fair, developers can run a simple test: let the AI system evaluate two candidates with identical skill sets and experience, differing only in gender or some other non-essential variable. An unbiased AI would rate both candidates equally, while an unfair one would assign a higher score to the men, indicating that readjustments need to be made.
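In code, the test can be as simple as the sketch below, which reuses the toy model from the earlier hiring example; the function and column names are our own illustration, not a standard API.

```python
import pandas as pd

def counterfactual_gap(model, candidate: pd.DataFrame,
                       sensitive: str = "is_male") -> float:
    """Score gap caused solely by flipping a binary (0/1) sensitive
    attribute for a single-row candidate frame."""
    flipped = candidate.copy()
    flipped[sensitive] = 1 - flipped[sensitive]
    gap = model.predict_proba(candidate)[:, 1] - model.predict_proba(flipped)[:, 1]
    return float(gap[0])

# Two otherwise identical candidates: a fair model yields a gap near 0,
# while the biased toy model from above yields a clearly positive gap
candidate = pd.DataFrame({"years_experience": [6], "is_male": [1]})
print(counterfactual_gap(model, candidate))
```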

The Institute has produced a set of guidelines designed to help AI developers ensure model fairness. Here at Itransition, we believe that such initiatives will play an increasingly important role in tackling bias in AI.

Adversarial Learning

Besides flawed datasets, bias can also creep in during the model training stage. To mitigate this, many developers now opt for adversarial training. The idea is that alongside your main model (e.g., the one that sorts applications), you train a second, adversarial model that tries to infer the sensitive variable (age, gender, race, etc.) from the main model's outputs. If the main model is bias-free, the adversary won't be able to recover the sensitive attribute. Data scientists cite this technique as one of the most effective and easiest to use because, unlike conventional reweighing, adversarial learning can be applied to the majority of modeling approaches.
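Below is a conceptual PyTorch sketch of this setup on dummy data; the architecture, penalty weight, and training schedule are placeholders rather than recommendations.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# The main model scores candidates; the adversary tries to recover the
# sensitive attribute from that score alone
main = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_main = torch.optim.Adam(main.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # weight of the fairness penalty (illustrative)

# Dummy stand-ins for candidate features, hire labels, and e.g. gender
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256, 1)).float()
s = torch.randint(0, 2, (256, 1)).float()

for step in range(1000):
    # 1) Update the adversary: predict s from the (frozen) main score
    adv_loss = bce(adversary(main(X).detach()), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Update the main model: stay accurate on y while *maximizing*
    #    the adversary's loss (hence the minus sign)
    score = main(X)
    main_loss = bce(score, y) - lam * bce(adversary(score), s)
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()

# If this works, the adversary's accuracy on s drops to chance level:
# the main model's scores no longer leak the sensitive attribute.
```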

Reject Option-based Classification

Lastly, there are also a number of post-processing techniques that can help mitigate bias. The appeal of such methods is that neither engineers nor data scientists need to tweak the model or change the datasets; only the model's outputs are modified.

Reject option-based classification is among the most popular post-processing techniques. In essence, bias is reduced by refusing to trust the predictions the model is least confident about: any prediction whose certainty falls below a chosen threshold is flagged as potentially biased rather than acted upon.
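Here is a simplified sketch of the idea. For a binary classifier, certainty is commonly measured as max(p, 1 - p), the confidence in whichever label the model picked, which never falls below 0.5; the 0.6 threshold and the function name below are our own illustration.

```python
import numpy as np

def reject_option(probs: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """probs: positive-class probabilities. Returns 1/0 labels, with -1
    marking low-confidence predictions flagged for re-examination."""
    confidence = np.maximum(probs, 1 - probs)  # confidence in the chosen label
    labels = (probs >= 0.5).astype(int)
    labels[confidence < threshold] = -1        # the 'critical region'
    return labels

print(reject_option(np.array([0.95, 0.55, 0.42, 0.10])))  # -> [ 1 -1 -1  0]
```

In practice, flagged predictions are typically reassigned in favor of the disadvantaged group or routed to a human reviewer rather than simply discarded.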

Team Diversity

Navigating the AI landscape depends on understanding the business context more than is generally appreciated. No doubt, data science is closely associated with number crunching, but understanding what's behind those numbers is equally important. And even then, data scientists' unconscious prejudices can play a critical role in how bias finds its way into their algorithms. This is why, more often than not, combating bias in AI is closely tied to hiring people of different races, genders, and backgrounds.

To enable more thoughtful hiring, companies need to adopt more objective interviewing techniques. Especially in large enterprises, too many interviews are limited to traditional CV screening. Forward-looking companies now make real-world, project-based data analysis a centerpiece of their interview process: they assess not only how well candidates perform the analysis itself, but also whether they can explain their findings in a business context.

With AI being a driving force behind many business transformations, it's imperative that we establish definitive frameworks for tackling bias in AI. It's also important to accept that bias can never be eliminated completely. Still, it's far more attainable to control prejudice in algorithms than in humans.
