AI Can Improve Financing Access and Equity for Disadvantaged Socioeconomic Groups

Artificial intelligence can help make financial access more equitable for some disadvantaged demographics.


Artificial intelligence has helped businesses across many industries tackle the challenges they have faced in recent years. The merits of artificial intelligence are usually discussed in the context of maximizing profits. However, AI technology can deliver more holistic benefits as well.

We have talked about how big data and AI are changing the way we evaluate economics. These technologies can deliver tremendous benefits when used properly. One potential benefit of AI is that it can reduce inequality.

This is especially important in banking, where there is considerable inequity in financing opportunities. AI can help people of all genders and races gain access to home equity lines of credit.

AI creates the potential to address lending concerns for minorities by neutralizing actuarial discrimination

The practice of redlining was banned under the Fair Housing Act. However, there is still tremendous inequity in the lending process in many areas.

Atlanta is a good case study of the challenges minorities face when seeking funding. Policy Map published heat maps detailing the rate at which loan applications were denied in predominantly white and predominantly black neighborhoods. In the vast majority of black neighborhoods, over 21% of loan applications were denied, while the vast majority of white neighborhoods had rejection rates under 10%.

Although it is tempting to blame racial bias outright for the discrepancy, the reality is more complicated and harder to solve. Part of the issue stems from other factors caused by racial inequality, such as lower earnings. However, racial discrimination also plays a role. Some lending models may rely on seemingly neutral data that has been tainted by bias, such as historical lending criteria.

These problems have to be addressed carefully. Artificial intelligence might play an important role in dealing with the inequity of the lending process implemented throughout the financial industry. It is one of the biggest examples of ways that big data and AI are changing consumer lending.

Could AI really help improve the fairness of the financial industry?

There has long been speculation that algorithmic financial decisions should be fairer than those made by humans. The theory is that machines lack the capacity to discriminate and focus entirely on objective actuarial criteria.

Writing in Harvard Business Review, Sian Townson noted that this theory is backed by common sense but has not stood up to reality. Lending decisions made with algorithms often rely heavily on biased data sets, which has tended to amplify inequity rather than resolve it.

However, there is growing evidence that a new approach to algorithmic lending could prove much fairer. The trick is to revamp machine learning algorithms to focus on relevant data points. They must be programmed to qualify or disqualify applicants based entirely on their likelihood of meeting their financial obligations, rather than on irrelevant data that would prove discriminatory.
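The idea above can be sketched in a few lines of code. This is a minimal illustration, not a real underwriting system: the feature names, the applicant record, and the allow-list are all hypothetical, and a production model would need a far more rigorous treatment of proxy variables.

```python
# Minimal sketch: restrict a lending model's inputs to repayment-relevant
# features only. All feature names below are hypothetical examples.

# Features plausibly tied to the likelihood of meeting financial obligations.
RELEVANT_FEATURES = {"income", "debt_to_income", "payment_history_score"}

def select_model_inputs(applicant: dict) -> dict:
    """Keep only features tied to repayment likelihood, dropping
    protected attributes and common proxies (e.g. zip code)."""
    return {k: v for k, v in applicant.items() if k in RELEVANT_FEATURES}

applicant = {
    "income": 54000,
    "debt_to_income": 0.31,
    "payment_history_score": 0.92,
    "zip_code": "30310",   # dropped: zip code can act as a proxy for race
    "gender": "F",         # dropped: protected attribute
}

print(select_model_inputs(applicant))
```

Note that simply dropping protected attributes is only a starting point; correlated proxies such as zip code or school attended can reintroduce the same bias, which is why the fields a model is allowed to see must be audited, not just filtered.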

What can lenders do to improve fairness in their AI algorithms?

There are several steps financial institutions can take to make the machine learning process fairer. These steps need to be followed carefully in order to reduce bias.

The most important thing for lenders to keep in mind is that they can't be too dependent on prior lending decisions. Women and minorities were often denied loans in the past, which means lenders have a smaller sample of data to evaluate when looking at prospective borrowers in these protected classes. If they look only at historical loan data, they will end up making unfair assumptions that have been prejudiced by older models riddled with bias.

This means that actuarial lending decisions need to be reworked. They must remove bias from consideration, which means not relying too heavily on historical lending decisions.

Banks might also have to modify historical data to be more in line with fairer outcomes. For example, one financial institution discovered that women historically needed to earn 30% more than men to qualify for the same loan. They were able to make their lending decisions fairer by adjusting the earnings of female applicants with a multiplier.
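The multiplier adjustment described above amounts to simple arithmetic. The sketch below illustrates it with a hypothetical 1.3 factor mirroring the 30% gap the article cites; a real institution would calibrate any such factor from its own historical data and legal review.

```python
# Hypothetical sketch of the earnings-multiplier adjustment described above.
# The 1.30 factor corresponds to the 30% historical qualification gap cited
# in the article; it is an illustrative assumption, not a recommended value.

EARNINGS_MULTIPLIER = 1.30

def adjusted_income(income: float, historically_disadvantaged: bool) -> float:
    """Scale reported income to offset a measured historical
    qualification gap for the affected group."""
    if historically_disadvantaged:
        return income * EARNINGS_MULTIPLIER
    return income

# A female applicant earning $50,000 is evaluated as if she earned $65,000,
# offsetting the 30% higher bar imposed by the historical data.
print(adjusted_income(50_000, historically_disadvantaged=True))   # 65000.0
print(adjusted_income(50_000, historically_disadvantaged=False))  # 50000
```

The design choice here is that the correction is applied to the input data rather than the decision threshold, so the same scoring model can be used for all applicants while the known historical distortion is neutralized upstream.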

Townson said there is still a possibility of inequity in the lending process even after developing a seemingly neutral algorithm. However, it is still a good start, and the program can be continually modified to reflect the need for greater fairness.
