OFFICIAL PUBLICATION OF THE COMMUNITY BANKERS ASSOCIATION OF KANSAS

Pub. 4 2023 Issue 6

AI in Lending Decisioning and Unintended Discrimination

With the advancement of artificial intelligence (AI) technology, businesses around the world are considering how they can use AI to improve efficiency and advance business goals. Financial institutions are no exception. But while AI can bring many efficiencies to the way business is conducted, the highly regulated financial services industry presents many considerations that institutions seeking to use AI must address.

In the context of lending, many credit decisioning technology platforms are advertised as improving and automating credit decisioning while eliminating bias. The issue of bias, however, is not so straightforward, and regulatory agencies are not backing away from it. The Consumer Financial Protection Bureau (CFPB) stated, “Tech marketed as ‘artificial intelligence’ and as taking bias out of decision-making has the potential to produce outcomes that result in unlawful discrimination.”1

On April 25, 2023, the CFPB and other federal agencies released a joint statement regarding the use of advanced technologies, including AI.2 CFPB Director Rohit Chopra stated, “Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision-making.”

The Equal Credit Opportunity Act (ECOA) of 1974, which is implemented by Regulation B, applies to all lenders. The statute prohibits financial institutions and other firms engaged in the extension of credit from discriminating against a borrower on the basis of sex, marital status, race, color, religion, national origin, age (provided the applicant has the capacity to contract), because all or part of the applicant’s income derives from any public assistance program, or because the applicant has, in good faith, exercised any right under the Consumer Credit Protection Act.

So how could AI, which is designed to create efficiencies, improve fairness and streamline the lending process, run afoul of the ECOA? To answer this question, we must consider the data being used to make lending decisions. These technology platforms rely on voluminous datasets to power their algorithmic decision-making. We have all heard the adage “bad data in, bad data out”: flawed input data produces flawed results. Algorithmic bias describes systematic errors in a technology system that produce unintentionally unfair outcomes.
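
Before turning to the legal standard, it may help to see that mechanism concretely. The sketch below is purely illustrative, built on synthetic data and assuming a scikit-learn environment: a model trained on historically biased approval decisions relearns the bias through a neutral-looking proxy variable, even though the protected characteristic itself is never an input. Every name here (the group labels, zip_risk, the bias penalty) is hypothetical, not drawn from any real platform.

```python
# A minimal, hypothetical sketch of "bad data in, bad data out": a model is
# trained on historically biased approval decisions and reproduces the bias,
# even though the protected characteristic is never given to the model.
# All data is synthetic; the bias mechanism is purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # protected class (0 or 1), illustrative
credit_score = rng.normal(680, 50, n)    # same score distribution in both groups

# A neutral-looking feature that happens to proxy for group membership,
# e.g., a geography-derived risk score (hypothetical).
zip_risk = np.where(rng.random(n) < 0.9, group, 1 - group).astype(float)

# Historical labels carry human bias: at the same credit score, group 1
# was approved less often. This is the bad data going in.
logit = (credit_score - 660) / 25 - 1.0 * (group == 1)
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on score and the proxy; the protected attribute is excluded.
X = np.column_stack([credit_score, zip_risk])
model = LogisticRegression().fit(X, approved)

# Yet predicted approval rates still differ by group, because the proxy
# lets the model relearn the historical bias.
pred = model.predict(X)
print(f"group 0 approval rate: {pred[group == 0].mean():.1%}")
print(f"group 1 approval rate: {pred[group == 1].mean():.1%}")
```

The point of the sketch is that simply dropping the protected attribute from the inputs does not remove the bias; the model recovers it from correlated data.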

As applied to lending, algorithmic bias could result in one group of applicants receiving some advantage or disadvantage when compared to other applicants, even where there is no relevant difference between the two groups. This bias arises from erroneous assumptions baked into the machine-learning process. When algorithmic bias produces disparate treatment of, or disparate impact on, applicants based on characteristics prohibited by the ECOA, the result is algorithmic discrimination, which violates the ECOA even when it is generated by a technology platform.
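
One way to make “some advantage or disadvantage” measurable is disparity testing: comparing outcome rates across groups. The sketch below uses hypothetical figures; the adverse impact ratio and the four-fifths screen it applies are common fair lending analytics heuristics, not a legal test under the ECOA.

```python
# A minimal sketch of a disparity test on lending outcomes: compare approval
# rates across two groups using the adverse impact ratio (AIR). An AIR below
# 0.8 (the "four-fifths rule") is a common screening heuristic, not a legal
# standard. All figures here are hypothetical.

def adverse_impact_ratio(approved_a: int, total_a: int,
                         approved_b: int, total_b: int) -> float:
    """Ratio of group A's approval rate to group B's (B = most-favored group)."""
    return (approved_a / total_a) / (approved_b / total_b)

# Hypothetical decisioning results for two applicant groups.
air = adverse_impact_ratio(approved_a=312, total_a=500,   # 62.4% approved
                           approved_b=410, total_b=500)   # 82.0% approved
print(f"adverse impact ratio: {air:.2f}")                 # 0.76, below the 0.8 screen
if air < 0.8:
    print("Flag for fair lending review: possible disparate impact.")
```

A failing screen like this would not itself establish an ECOA violation, but it is the kind of result an institution would want surfaced and explained before a platform goes into production.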

If your financial institution wants to take advantage of the latest innovations in AI, what steps need to be taken to avoid ECOA violations? The federal government has provided guidance for designers, developers and deployers of these technologies on protecting against algorithmic discrimination:

“Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.”3
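
Of the protections listed above, “protection against proxies for demographic features” lends itself to a simple first-pass check: measure how strongly each candidate model input correlates with a protected characteristic in a validation sample. The sketch below is a minimal, hypothetical version of that screen; the column names are invented, and real proxy analysis would go well beyond pairwise correlation.

```python
# A minimal sketch of one proxy check suggested by the quoted guidance:
# flag candidate model inputs that correlate strongly with a protected
# characteristic in a validation sample. Column names are hypothetical.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected: str,
                        threshold: float = 0.3) -> dict[str, float]:
    """Return features whose |correlation| with the protected column meets the threshold."""
    corr = df.corr(numeric_only=True)[protected].drop(protected)
    return {col: round(r, 2) for col, r in corr.items() if abs(r) >= threshold}

# Hypothetical validation sample: zip_risk_score turns out to track the
# protected attribute and should be examined before deployment.
sample = pd.DataFrame({
    "protected_class": [1, 1, 0, 0, 1, 0, 1, 0],
    "credit_score":    [640, 700, 655, 710, 705, 650, 660, 695],
    "zip_risk_score":  [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.8, 0.2],
})
print(flag_proxy_features(sample, protected="protected_class"))
```

In this toy sample only zip_risk_score is flagged; a flagged feature is a prompt for further analysis and documentation, not automatic proof of discrimination.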

A financial institution utilizing these technologies must conduct appropriate due diligence on the technology service provider, including a review of the third party’s algorithmic impact assessments and the disparity testing results and mitigation information they contain. In their June 9, 2023, Interagency Guidance on Third-Party Relationships: Risk Management, the federal regulatory agencies made clear that financial institutions have heightened responsibilities when adopting new technologies, given the increased risk posed by such technologies and third-party relationships, to ensure the services being provided comply with applicable laws and regulations.

Failure to complete a thorough due diligence review can expose an institution to serious consequences, including supervisory criticism and enforcement action, especially if the technology is later found to produce algorithmic discrimination.

Shelli J. Clarkston is an Of Counsel attorney in the Kansas City, Missouri office of Spencer Fane, LLP. She can be reached at (816) 292-8893 and sclarkston@spencerfane.com.

  1. Consumer Financial Protection Bureau, CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior, April 25, 2023.
  2. Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, April 25, 2023.
  3. The White House, Algorithmic Discrimination Protections, Blueprint for an AI Bill of Rights, August 22, 2023.