
InfoBytes Blog

Financial Services Law Insights and Observations



  • CFPB asks tech workers to report AI lending discrimination

    Federal Issues

    On December 15, the CFPB released a blog post calling on technology workers to report potential violations of federal consumer financial laws, including violations related to artificial intelligence (AI), as part of the Bureau’s efforts to adapt to the evolving financial landscape. According to the Bureau, AI has become a part of nearly every consumer financial market, creating the potential for both intentional and unintentional discrimination in the decision-making process. As an example, while algorithmic mortgage underwriting has the potential to reduce discrimination, the Bureau warned that “researchers found discriminatory effects of these new technologies, as Black and Hispanic families have been more likely to be denied a mortgage compared to similarly situated white families.” The Bureau asked tech workers, including engineers, data scientists, and others with detailed knowledge of these algorithms and technologies, to report potential discrimination or other misconduct to the Bureau to help ensure these technologies are not misused or abused. “Tech workers may have entered the field to change the world for the better, but then discover their work being misused or abused for unlawful ends,” CFPB Chief Technologist Erie Meyer stated. The Bureau updated its whistleblower webpage to provide additional information on the whistleblower submission process, noting that fair lending experts and technologists will review submitted tips. The webpage also describes the type of information the Bureau is seeking and outlines whistleblower protections.

    Federal Issues CFPB Artificial Intelligence Fintech Whistleblower Fair Lending Consumer Finance

  • Senate launches Financial Innovation Caucus

    Federal Issues

    On May 25, Senators Cynthia Lummis (R-WY) and Kyrsten Sinema (D-AZ), along with several other Senators from both parties, announced the creation of the U.S. Senate Financial Innovation Caucus to highlight “responsible innovation in the United States financial system, and how financial technologies can improve markets to be more inclusive, safe and prosperous for all Americans.” The Senate will use the caucus “to discuss domestic and global financial technology issues, and to launch legislation to empower innovators, protect consumers and guide regulators, while driving U.S. financial leadership on the international stage.” The press release notes that the caucus is timely because of the “growing regulatory focus on digital assets,” which includes efforts by the Federal Reserve Board, the SEC, and foreign governments to create digital currencies. The caucus will focus on critical issues pertaining to the future of banking and U.S. competitiveness on the global stage, including: (i) distributed ledger technology (blockchain); (ii) artificial intelligence and machine learning; (iii) data management; (iv) consumer protection; (v) anti-money laundering; (vi) faster payments; (vii) central bank digital currencies; and (viii) financial inclusion and opportunity for all.

    Federal Issues Fintech U.S. Senate Digital Assets Artificial Intelligence Finance Federal Reserve SEC Bank Regulatory Central Bank Digital Currency

  • FDIC chairman addresses the importance of innovation

    Fintech

    On May 11, FDIC Chairman Jelena McWilliams spoke at the Federalist Society Conference about the Dodd-Frank Act in a post-Covid-19 environment and the future of financial regulation. Among other topics, McWilliams emphasized the importance of promoting innovation through inclusion, resilience, amplification, and protecting the future of the banking sector. McWilliams pointed out that “alternative data and AI can be especially important for small businesses, such as sole proprietorships and smaller companies owned by women and minorities, which often do not have a long credit history” and that “these novel measures of creditworthiness, like income streams, can provide critical access to capital” that may otherwise be out of reach. McWilliams also discussed an interagency request for information announced by the FDIC and other regulators in March (covered by InfoBytes here), which seeks input on financial institutions’ use of AI and asks whether additional regulatory clarity may be helpful, and she noted that rapid prototyping helps banks begin reporting more granular data effectively. Additionally, McWilliams addressed the agency’s efforts to expand fintech partnerships through several initiatives intended to facilitate cooperation between fintech groups and banks, promote accessibility to new customers, and offer new products. Concerning the direct cost to any one institution of developing and deploying technology, McWilliams added that “there are things that we can do to foster innovation across all banks and to reduce the regulatory cost of innovation.”

    Fintech FDIC Covid-19 Dodd-Frank Artificial Intelligence Bank Regulatory

  • FDIC announces FDItech virtual ‘Office Hours’

    Fintech

    On April 29, the FDIC’s technology lab, FDiTech, announced that it will host a series of virtual “office hours” to hear from a variety of stakeholders in the business of banking concerning current and evolving technological innovations. The office hours will be hour-long, one-on-one sessions that will provide insight into the contributions that innovation has made in reshaping banks and enabling regulators to manage their oversight efficiently. According to the FDIC, “FDiTech seeks to evaluate and promote the adoption of innovative and transformative technologies in the financial services sector and to improve the efficiency, effectiveness, and stability of U.S. banking operations, services, and products; to support access to financial institutions, products, and services; and to better serve consumers.” FDiTech’s goal is to contribute to the transformation of banking by supporting “the adoption of technological innovations through increased collaboration with market participants.” In the first series of office hour sessions, the FDIC and FDiTech are seeking participants’ outlook on artificial intelligence and machine learning related to: (i) automation of back office processes; (ii) Bank Secrecy Act/Anti-Money Laundering compliance; (iii) credit underwriting decisions; and (iv) cybersecurity.

    FDiTech anticipates hosting approximately 15 one-hour sessions each quarter. Interested parties seeking to participate in these sessions must contact the FDIC by May 24.

    Fintech FDiTech Artificial Intelligence Bank Secrecy Act FDIC Bank Regulatory

  • House Financial Services Committee reauthorizes fintech, AI task forces

    Federal Issues

    On April 30, the House Financial Services Committee announced the reauthorization of the Task Forces on Financial Technology and Artificial Intelligence. According to Chairwoman Maxine Waters (D-CA), the “Task Forces will investigate whether these technologies are serving the needs of consumers, investors, small businesses, and the American public, which is needed especially as we recover from the COVID-19 pandemic.” Representative Stephen Lynch (D-MA) will chair the Task Force on Financial Technology, which will continue to monitor the opportunities and challenges posed by fintech applications for lending, payments, and money management and offer insight on how Congress can ensure Americans’ data and privacy are protected. Representative Bill Foster (D-IL) will chair the Task Force on Artificial Intelligence, which will examine how AI is changing the way Americans operate in the marketplace, think about identity security, and interact with financial institutions. The task forces will also examine issues related to algorithms, digital identities, and combatting fraud. As previously covered by InfoBytes, these task forces were set to expire in December 2019.

    House GOP members also released a report that highlights the efforts of the Task Forces on Financial Technology and on Artificial Intelligence and includes recommendations on how to harness innovation. According to the report, the two “key takeaways” are that “Congress must (1) promote greater financial inclusion and expanded access to financial services, and (2) ensure that the federal government does not hinder the United States’ role as a global leader in financial services innovation.” The report also recommends that regulators and Congress: (i) decide how to assist innovation, especially in the private sector; (ii) use the power of data and machine learning to fight fraud, streamline compliance, and make better underwriting decisions; and (iii) “keep up with technology to better protect consumers.”

    Federal Issues House Financial Services Committee Fintech Artificial Intelligence

  • FTC provides AI guidance

    Federal Issues

    On April 19, the FTC’s Bureau of Consumer Protection published a blog post identifying lessons for managing the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, the Commission has addressed the challenges presented by the use of AI and algorithms to make decisions about consumers over the years, and it has taken many enforcement actions against companies for allegedly violating laws such as the FTC Act, FCRA, and ECOA when using AI and machine learning technology. The FTC stated that it has used its expertise with these laws to: (i) report on big data analytics and machine learning; (ii) conduct a hearing on algorithms, AI, and predictive analytics; and (iii) issue business guidance on AI and algorithms. To assist companies navigating AI, the FTC provided the following guidance:

    • Start with the right foundation. From the beginning, companies should consider ways to enhance data sets, design models to account for data gaps, and confine where or how models are used. The FTC advised that if a “data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.” 
    • Watch out for discriminatory outcomes. It is vital for companies to test algorithms, both before use and periodically thereafter, to prevent discrimination based on race, gender, or other protected classes; a minimal illustration of such a test follows this list.
    • Embrace transparency and independence. Companies should consider how to embrace transparency and independence, such as “by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening . . . data or source code to outside inspection.”
    • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, company “statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence.”
    • Data transparency. In guidance on AI issued last year, as previously covered by InfoBytes, the FTC warned companies to be careful about how they obtain the data that powers their models.
    • Do more good than harm. Companies are warned that if their models cause “more harm than good—that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition—the FTC can challenge the use of that model as unfair.”
    • Importance of accountability. The FTC stresses the importance of transparency and independence and cautions companies to hold themselves accountable, or the FTC may do it for them.
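
    The testing point above lends itself to a concrete illustration. The Python sketch below applies one common screening heuristic: it compares approval rates across groups and flags any group whose rate falls below four-fifths of the most-favored group’s rate. The heuristic, group labels, and data are illustrative assumptions only, not an FTC-prescribed methodology, and real fair lending testing involves far more than a single ratio.

        # A minimal disparate-impact screen: compare approval rates across
        # groups and flag any group whose rate falls below 80% of the highest
        # rate (the "four-fifths rule," used here as a rough heuristic only).
        from collections import defaultdict

        def approval_rates(decisions):
            """decisions: iterable of (group_label, approved_bool) pairs."""
            counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
            for group, approved in decisions:
                counts[group][0] += int(approved)
                counts[group][1] += 1
            return {g: a / t for g, (a, t) in counts.items()}

        def flag_disparate_impact(decisions, threshold=0.8):
            rates = approval_rates(decisions)
            best = max(rates.values())
            # Ratio of each group's approval rate to the most-favored group's rate.
            return {g: (r / best, r / best < threshold) for g, r in rates.items()}

        # Hypothetical model decisions for two applicant groups.
        sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
                  + [("group_b", True)] * 55 + [("group_b", False)] * 45)

        for group, (ratio, flagged) in flag_disparate_impact(sample).items():
            print(f"{group}: impact ratio {ratio:.2f}" + (" <- review" if flagged else ""))

    In practice a screen like this would run on a model’s actual decisions, both before deployment and on an ongoing basis, with any flagged disparity triggering deeper statistical and legal review.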

    Federal Issues Big Data FTC Artificial Intelligence FTC Act FCRA ECOA Consumer Protection Fintech

  • Prudential regulators exploring how institutions use AI

    Agency Rule-Making & Guidance

    On March 29, the FDIC, Fed, OCC, CFPB, and NCUA issued a request for information (RFI) seeking input on financial institutions’ use of artificial intelligence (AI), which may include AI-based tools and models used for (i) fraud prevention to identify unusual transactions for Bank Secrecy Act/anti-money laundering investigations; (ii) personalization of customer services; (iii) credit underwriting; (iv) risk management; (v) textual analysis; and (vi) cybersecurity. The RFI also solicits information on challenges financial institutions face in developing, adopting, and managing AI, as well as on appropriate governance, risk management, and controls over AI when providing services to customers. Additionally, the agencies seek input on whether it would be helpful to provide additional clarification on using AI in a safe and sound manner and in compliance with applicable laws and regulations. According to FDIC FIL-20-2021, while the agencies support responsible innovation by financial institutions and believe that new technologies, including AI, have “the potential to augment decision-making and enhance services available to consumers and businesses, . . . identifying and managing risks are key.” Comments on the RFI are due 60 days after publication in the Federal Register.

    Agency Rule-Making & Guidance Federal Issues Artificial Intelligence Federal Reserve FDIC OCC CFPB NCUA Fintech Bank Regulatory

  • FTC provides annual ECOA summary to CFPB

    Federal Issues

    On February 3, the FTC announced it recently provided the CFPB with its annual summary of work on ECOA-related policy issues, focusing specifically on the Commission’s activities with respect to Regulation B during 2020. The summary discusses, among other things, the following FTC research and policy development initiatives:

    • The FTC submitted a comment letter in response to the CFPB’s request for information on ways to provide additional clarity under ECOA (covered by InfoBytes here). Among other things, the FTC noted that Regulation B explicitly incorporates disparate impact and offered suggestions should the Bureau choose to provide additional detail regarding its approach to disparate impact analysis. The FTC also urged the Bureau to remind entities offering credit to small businesses that ECOA and Regulation B may apply based “on the facts and circumstances involved” and that entities cannot avoid application of these statutes based solely on how they characterize a transaction or the benefits they claim to provide.
    • The FTC hosted the 13th Annual FTC Microeconomics Conference, which focused on the use of machine-learning algorithms when making decisions in areas such as credit access.
    • The FTC’s Military Task Force continued to work on military consumer protection issues, including military consumers’ “rights to various types of notifications as applicants for credit, including for adverse action, and information about the anti-discrimination provisions, in ECOA and Regulation B.”
    • The FTC continued to participate in the Interagency Task Force on Fair Lending, along with the CFPB, DOJ, HUD, and the federal banking regulatory agencies. The Commission also joined the newly formed Interagency Fair Lending Methodologies Working Group with the aforementioned agencies in order “to coordinate and share information on analytical methodologies used in enforcement of and supervision for compliance with fair lending laws, including ECOA.”

    The summary also highlights FTC ECOA enforcement actions, business and consumer education efforts on fair lending issues, as well as blog posts discussing fair lending safeguards and the use of artificial intelligence in automated decision-making.

    Federal Issues FTC Enforcement CFPB ECOA Fair Lending Artificial Intelligence Regulation B

  • Brainard weighs benefits and risks of using AI in financial services industry

    Federal Issues

    On January 12, Federal Reserve Governor Lael Brainard spoke at the AI Academic Symposium hosted by the Fed’s Board about the increased use of artificial intelligence (AI) in the financial services industry. Brainard reflected that since she first shared early observations on the use of AI in 2018 (covered by InfoBytes here), the Fed has been exploring ways to better understand the use of AI, as well as how banking regulators can best manage risk through supervision while supporting the responsible use of AI and providing equitable outcomes. “Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve,” Brainard noted, adding that the Fed is currently collaborating with the other federal banking agencies on a potential request for information on the risk management of AI applications in financial services.

    Emphasizing the “wide ranging” scope of AI applications, Brainard commented that financial services firms have been using AI for operational risk management, customer-facing applications, and fraud prevention and detection. She suggested that machine learning-based fraud detection tools have the potential to identify suspicious activity “with greater accuracy and speed,” potentially enabling firms to respond in real time, and she acknowledged AI’s potential to improve the accuracy and fairness of credit decisions and to expand overall credit availability.

    However, Brainard also discussed AI’s challenges, including the “black box problem” that can arise with machine learning models that “operate at a level of complexity” difficult to fully understand. This lack of model transparency is a central challenge, she noted, stressing that financial services firms must understand the basis on which a machine learning model determines creditworthiness, as well as the potential for AI models to “reflect or amplify bias.” With respect to safety and soundness, Brainard stated that “bank management needs to be able to rely on models’ predictions and classifications to manage risk. They need to have confidence that a model used for crucial tasks such as anticipating liquidity needs or trading opportunities is robust and will not suddenly become erratic.”
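
    Brainard’s “black box” concern is often addressed in practice with post-hoc transparency techniques. As one illustration only, and not a method the Fed prescribes, the Python sketch below fits a shallow, human-readable surrogate decision tree to a complex model’s predictions so a reviewer can see an approximation of what drives its decisions; the scikit-learn library, the synthetic data, and the feature names are all assumptions made for the example.

        # Surrogate-model sketch: approximate an opaque model with a small,
        # readable decision tree. All data here is synthetic.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Stand-in for an opaque underwriting model trained on synthetic data.
        X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
        black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # Fit a shallow tree to mimic the black box's outputs, not the raw labels.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        # Fidelity: how often the surrogate agrees with the black box.
        fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
        print(f"surrogate fidelity: {fidelity:.2%}")
        print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))

    The fidelity figure matters: a surrogate that rarely agrees with the underlying model explains little, so in practice the tree’s depth is tuned against how faithfully it reproduces the model’s behavior.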

    Federal Issues Federal Reserve Artificial Intelligence Fintech Bank Regulatory

  • CFPB issues automated underwriting NAL to Fintech

    Federal Issues

    On November 30, the Bureau issued a no-action letter (NAL) to a fintech covering its automated underwriting and pricing model that facilitates the origination of unsecured, closed-end loans made by third-party lenders. The NAL states that the Bureau will not bring supervisory or enforcement actions against the lender concerning alleged discrimination on a prohibited basis arising from its use of the automated model for unsecured, closed-end loans under (i) Section 701(a) of ECOA and Sections 1002.4(a) and (b) of Regulation B; or (ii) its authority to prevent unfair, deceptive, or abusive acts or practices. According to the lender’s application, after applicants meet initial eligibility requirements, the automated model, which uses artificial intelligence techniques and alternative data, is designed “to assess the individual risk profile of [eligible] applicants…and is responsible for assigning the maximum amount an applicant can borrow and the appropriate interest rate based on that risk assessment.” If the model’s assigned interest rate “falls within the parameters of a lending partner’s loan program,” the applicant will be approved. The NAL expires after 36 months.

    Federal Issues CFPB No Action Letter Fintech Artificial Intelligence Underwriting
