InfoBytes Blog

Financial Services Law Insights and Observations

  • CFPB highlights problems with chatbots in banking

    Federal Issues

    On June 6, the CFPB released an Issue Spotlight exploring the adoption and use of chatbots by financial institutions. According to the report, financial institutions implement chatbots to reduce the costs of customer service, but poorly deployed chatbots can lead to customer frustration, reduced trust, and even violations of the law.

    The report found that the use of chatbots raises several risks, including: (i) noncompliance with federal consumer financial protection laws; (ii) diminished customer service and trust; and (iii) harm to customers. The Bureau said it has received several complaints from customers who claimed they could not get the answers they needed from such chatbots. The agency reported that about 37 percent of the U.S. population has interacted with chatbots, a figure projected to grow, and cautioned that chatbots should not serve as the primary channel for customer service when it is reasonably clear that a chatbot is unable to meet customer needs.

    The Bureau said it will continue to monitor the market and encourages people who are having trouble getting the answers they need due to lack of human interaction to submit their complaints to the agency. It also encourages financial institutions to ensure new technology is increasing the quality of customer care.

    Federal Issues CFPB Fintech Consumer Finance Artificial Intelligence

  • FTC proposes changes to Health Breach Notification Rule

    Agency Rule-Making & Guidance

    On May 18, the FTC issued a notice of proposed rulemaking (NPRM) and request for public comment on changes to its Health Breach Notification Rule (Rule), following a notice issued last September (covered by InfoBytes here) warning health apps and connected devices collecting or using consumers’ health information that they must comply with the Rule and notify consumers and others if a consumer’s health data is breached. The Rule also ensures that entities not covered by HIPAA are held accountable in the event of a security breach. The NPRM proposed several changes to the Rule, including modifying the definition of “[personal health records (PHR)] identifiable health information,” clarifying that a “breach of security” would include the unauthorized acquisition of identifiable health information, and specifying that “only entities that access or send unsecured PHR identifiable health information to a personal health record—rather than entities that access or send any information to a personal health record—qualify as PHR related entities.” The modifications would also authorize the expanded use of email and other electronic methods for providing notice of a breach to consumers and would expand the required content for notices “to include information about the potential harm stemming from the breach and the names of any third parties who might have acquired any unsecured personally identifiable health information.” Comments on the NPRM are due 60 days after publication in the Federal Register.

    The same day, the FTC also issued a policy statement warning businesses against making misleading claims about the accuracy or efficacy of biometric technologies like facial recognition. The FTC emphasized that the increased use of consumers’ biometric information and biometric information technologies (including those powered by machine learning) raises significant consumer privacy and data security concerns and increases the potential for bias and discrimination. The FTC stressed that it intends to combat unfair or deceptive acts and practices related to these issues and outlined several factors used to determine potential violations of the FTC Act.

    Agency Rule-Making & Guidance Federal Issues Privacy, Cyber Risk & Data Security FTC Consumer Protection Biometric Data Artificial Intelligence Unfair Deceptive UDAP FTC Act

  • Federal agencies reaffirm commitment to confront AI-based discrimination

    Federal Issues

    On April 25, the CFPB, DOJ, FTC, and Equal Employment Opportunity Commission released a joint statement reaffirming their commitment to protect the public from bias in automated systems and artificial intelligence (AI). “America’s commitment to the core principles of fairness, equality, and justice are deeply embedded in the federal laws that our agencies enforce to protect civil rights, fair competition, consumer protection, and equal opportunity,” the agencies said, emphasizing that existing authorities apply to the use of new technologies and innovation just as they do to any other conduct. The agencies have previously expressed concerns about potentially harmful AI applications, including black box algorithms, algorithmic marketing and advertising, abusive uses of AI technology, digital redlining, and repeat offenders’ use of AI, which may contribute to unlawful discrimination and bias and violate consumers’ rights.

    “We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” FTC Chair Lina M. Khan said. “Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition,” Khan added.

    CFPB Director Rohit Chopra echoed Khan’s sentiments and said the Bureau, along with other agencies, is taking measures to address unchecked AI. “While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening,” Chopra said. “When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias,” Chopra added. He concluded by noting that the Bureau will continue to collaborate with other agencies to enforce federal consumer financial protection laws, regardless of whether violations occur through traditional means or advanced technologies.

    Additionally, Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division noted that “[a]s social media platforms, banks, landlords, employers and other businesses [] choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result.”

    Federal Issues FTC CFPB DOJ Artificial Intelligence EEOC Discrimination Consumer Finance Racial Bias Fintech

  • Fed governor weighs tokenization and AI

    On April 20, Federal Reserve Governor Christopher J. Waller spoke on innovation and the future of finance during remarks at the Global Interdependence Center. Commenting that “[i]nnovation is a double-edged sword, with costs and benefits, and different effects on different groups of people,” Waller stressed the importance of considering whether innovation is creating new efficiencies, helping to mitigate risks, and increasing financial inclusion, or whether it is creating new risks or exacerbating existing ones. Waller’s remarks focused on two specific areas of innovation that he believes may have the potential to deliver substantial benefits to the banking industry: tokenization and artificial intelligence (AI).

    With respect to tokenization and tokenized assets, Waller flagged several advantages of innovations in this space that use blockchain over traditional transaction approaches, including (i) the ability to offer faster or “even near-real time transfers,” which can, among other things, give parties precise control over settlement times and reduce liquidity risks; and (ii) “smart contract” functionalities, which can help mitigate settlement and counterparty credit risks by constructing and executing transactions based on the meeting of specified conditions. He acknowledged, however, that both innovations introduce risks of their own, including potential cyber vulnerabilities.
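
    For illustration only, the following is a minimal Python sketch of the conditional-settlement idea behind the “smart contract” functionality Waller described: a transfer executes only once every pre-agreed condition is met. It is not any blockchain’s actual API; the class, method, and condition names are assumptions invented for this example.

    ```python
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Callable, List

    # Illustrative sketch only: both legs of a tokenized transaction are committed
    # up front, and settlement executes atomically once all agreed conditions hold.
    # Names here are hypothetical, not drawn from any real smart contract platform.

    @dataclass
    class TokenizedSettlement:
        conditions: List[Callable[[], bool]] = field(default_factory=list)
        payment_locked: bool = False
        asset_locked: bool = False
        settled: bool = False

        def lock_legs(self) -> None:
            # Both counterparties commit their legs before settlement can occur,
            # which is what reduces counterparty credit risk in this description.
            self.payment_locked = True
            self.asset_locked = True

        def try_settle(self) -> bool:
            # Settlement happens only when every specified condition evaluates to
            # True, giving the parties precise control over settlement timing.
            if self.payment_locked and self.asset_locked and all(c() for c in self.conditions):
                self.settled = True
            return self.settled

    # Example: settle only on or after an agreed date.
    contract = TokenizedSettlement(conditions=[lambda: date.today() >= date(2023, 4, 20)])
    contract.lock_legs()
    print(contract.try_settle())
    ```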

    Waller also addressed the banking industry’s use of AI to increase the range of marketing possibilities, expand customer service applications, monitor fraud, and refine credit underwriting processes and analysis, but cautioned that AI also presents “novel risks,” as these models rely on high volumes of data, which can complicate efforts to detect problems or biases in datasets. There is also the “black box” problem, Waller stated, in which it becomes difficult to explain how outputs are derived and even AI developers struggle to understand exactly how the technology works. “All of these innovations will have their champions, who make claims about how their innovation will change the world; and I think it’s important to view such claims critically,” Waller said. “But it’s equally important to challenge the doubters, who insist that these innovations are much ado about nothing, or that they will end in disaster.”
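
    As a rough, hedged illustration of the “black box” challenge, the sketch below probes an opaque scoring function from the outside by scrambling one input at a time and measuring how much the output moves (a simple permutation-importance idea). The model, feature names, and data are invented for the example and do not reflect any technique referenced in Waller’s remarks.

    ```python
    import random
    from typing import Callable, Dict, List

    # Illustrative sketch: treat the model as a black box and estimate how much each
    # input feature drives its output by shuffling that feature across a sample.
    # The "model" and feature names below are invented for illustration.

    def opaque_score(applicant: Dict[str, float]) -> float:
        # Stand-in for a complex model whose internals we pretend not to know.
        return (0.6 * applicant["utilization"]
                + 0.3 * applicant["delinquencies"]
                + 0.1 * applicant["inquiries"])

    def permutation_importance(model: Callable[[Dict[str, float]], float],
                               sample: List[Dict[str, float]]) -> Dict[str, float]:
        baseline = [model(a) for a in sample]
        importances: Dict[str, float] = {}
        for feature in sample[0]:
            shuffled = [a[feature] for a in sample]
            random.shuffle(shuffled)
            perturbed = [{**a, feature: v} for a, v in zip(sample, shuffled)]
            scores = [model(a) for a in perturbed]
            # Mean absolute change in output when this feature is scrambled.
            importances[feature] = sum(abs(s - b) for s, b in zip(scores, baseline)) / len(sample)
        return importances

    applicants = [{"utilization": random.random(), "delinquencies": random.random(),
                   "inquiries": random.random()} for _ in range(200)]
    print(permutation_importance(opaque_score, applicants))
    ```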

    Bank Regulatory Federal Issues Federal Reserve Digital Assets Fintech Cryptocurrency Tokens Artificial Intelligence

  • DFPI cracks down on crypto platforms’ AI claims

    State Issues

    On April 19, the California Department of Financial Protection and Innovation (DFPI) announced enforcement actions against five separate entities and an individual for allegedly offering and selling unqualified securities and making material misrepresentations and omissions to investors in violation of California securities laws. According to DFPI, the desist and refrain orders allege that the subjects (which touted themselves as cryptocurrency trading platforms) engaged in a variety of unlawful and deceptive practices, including promising investors high-yield returns through the use of artificial intelligence to trade crypto assets, falsely representing that an insurance fund would prevent investor losses, and using investor funds to pay purported profits to other investors. The subjects also allegedly took measures to make the scams appear to be legitimate businesses by creating professional websites and social media accounts where influencers and investors shared testimonials about the money they were supposedly making. The orders require the subjects to stop offering, selling, buying, or offering to buy securities in the state, and DFPI said they reflect its continued crackdown on high-yield investment programs.

    State Issues Securities Enforcement California State Regulators Digital Assets DFPI Artificial Intelligence Cryptocurrency

  • FTC provides 2022 ECOA summary to CFPB

    Federal Issues

    On February 9, the FTC announced it recently provided the CFPB with its annual summary of activities related to ECOA enforcement, focusing specifically on the Commission’s activities with respect to Regulation B. The summary discussed, among other things, the following FTC enforcement, research, and policy development initiatives:

    • Last June, the FTC released a report to Congress discussing the use of artificial intelligence (AI), and warning policymakers to use caution when relying on AI to combat the spread of harmful online conduct. The report also raised concerns that AI tools can be biased, discriminatory, or inaccurate, could rely on invasive forms of surveillance, and may harm marginalized communities. (Covered by InfoBytes here.)
    • The FTC continued to participate in the Interagency Task Force on Fair Lending, along with the CFPB, DOJ, HUD, and federal banking regulatory agencies. The Commission also continued its participation in the Interagency Fair Lending Methodologies Working Group to “coordinate and share information on analytical methodologies used in enforcement of and supervision for compliance with fair lending laws, including the ECOA.”
    • The FTC initiated an enforcement action last April against an Illinois-based multistate auto dealer group for allegedly adding junk fees for unwanted “add-on” products to consumers’ bills and discriminating against Black consumers. In October, the FTC initiated a second action against a different auto dealer group and two of its officers for allegedly engaging in deceptive advertising and pricing practices and discriminatory and unfair financing. (Covered by InfoBytes here and here.)
    • The FTC engaged in consumer and business education on fair lending issues, and reiterated that credit discrimination is illegal under federal law for banks, credit unions, mortgage companies, retailers, and companies that extend credit. The FTC also issued consumer alerts discussing enforcement actions involving racial discrimination and disparate impact, as well as agency initiatives centered around racial equity and economic equality.   

    Federal Issues CFPB FTC ECOA Regulation B Fair Lending Enforcement Artificial Intelligence Consumer Finance Auto Finance Discrimination

  • Barr says AI should not create racial disparities in lending

    On February 7, Federal Reserve Board Vice Chair for Supervision, Michael S. Barr, delivered remarks during the “Banking on Financial Inclusion” conference, where he warned financial institutions to make sure that using artificial intelligence (AI) and algorithms does not create racial disparities in lending decisions. Banks “should review the underlying models, such as their credit scoring and underwriting systems, as well as their marketing and loan servicing activities, just as they should for more traditional models,” Barr said, pointing to findings that show “significant and troubling disparities in lending outcomes for Black individuals and businesses relative to others.” He commented that “[w]hile research suggests that progress has been made in addressing racial discrimination in mortgage lending, regulators continue to find evidence of redlining and pricing discrimination in mortgage lending at individual institutions.” Studies have also found persistent discrimination in other markets, including auto lending and lending to Black-owned businesses. Barr further commented that despite significant progress over the past 25 years in expanding access to banking services, a recent FDIC survey found that the unbanked rate for Black households was 11.3 percent as compared to 2.1 percent for White households.

    Barr suggested several measures for addressing these issues and eradicating discrimination. Banks should actively analyze data to identify where racial disparities occur, conduct on-the-ground testing to identify discriminatory practices, and review AI or other algorithms used in making lending decisions, Barr advised. Banks should also devote resources to stamp out unfair, abusive, or illegal practices, and find opportunities to support and invest in low- and moderate-income (LMI) communities, small businesses, and community infrastructure. Meanwhile, regulators have a clear responsibility to use their supervisory and enforcement tools to make sure banks resolve consumer protection weaknesses, Barr said, adding that regulators should also ensure that rules provide appropriate incentives for banks to invest in LMI communities and lend to such households.
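
    As one hedged illustration of the kind of data analysis Barr suggests, the sketch below computes approval rates by demographic group and compares each group’s rate to the most-favored group’s rate. The field names, sample data, and the idea of flagging low ratios are assumptions made for this example, not a methodology drawn from Barr’s remarks or any regulatory guidance cited in the post.

    ```python
    from collections import defaultdict
    from typing import Dict, List, Tuple

    # Illustrative sketch: compute per-group approval rates from (group, approved)
    # records and express each rate as a ratio to the highest-rate group.

    def approval_rates(decisions: List[Tuple[str, bool]]) -> Dict[str, float]:
        totals: Dict[str, int] = defaultdict(int)
        approvals: Dict[str, int] = defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        return {g: approvals[g] / totals[g] for g in totals}

    def disparity_ratios(decisions: List[Tuple[str, bool]]) -> Dict[str, float]:
        rates = approval_rates(decisions)
        benchmark = max(rates.values())
        # Ratios well below 1.0 flag groups whose approval rate lags the benchmark
        # and warrant a closer look at the underlying model, data, and practices.
        return {g: rate / benchmark for g, rate in rates.items()}

    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(disparity_ratios(sample))
    ```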

    Bank Regulatory Federal Issues Federal Reserve Supervision Discrimination Artificial Intelligence Algorithms Consumer Finance Fair Lending

  • NIST releases new AI framework to help organizations mitigate risk

    Privacy, Cyber Risk & Data Security

    On January 26, the National Institute of Standards and Technology (NIST) released voluntary guidance to help organizations that design, deploy, or use artificial intelligence (AI) systems mitigate risk. The Artificial Intelligence Risk Management Framework (developed in close collaboration with the private and public sectors pursuant to a Congressional directive under the National Defense Authorization Act for Fiscal Year 2021) “provides a flexible, structured and measurable process that will enable organizations to address AI risks,” NIST explained. The framework breaks down the process into four high-level functions: govern, map, measure, and manage. These functions, among other things, (i) provide guidance on how to evaluate AI for legal and regulatory compliance and ensure policies, processes, procedures, and practices are transparent, robust, and effective; (ii) outline processes for addressing AI risks and benefits arising from third-party software and data; (iii) describe the mapping process for collecting information to establish the context to frame AI-related risks; (iv) provide guidance for employing and measuring “quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts”; and (v) set forth a proposed process for managing and allocating risk management resources. Examples are also provided within the framework to help organizations implement the guidance.
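
    As a rough illustration of how an organization might organize its work around the framework’s four functions, the sketch below builds a minimal AI risk register keyed to govern, map, measure, and manage. The data structure, field names, and workflow are assumptions made for this example; they are not prescribed by NIST’s guidance.

    ```python
    from dataclasses import dataclass, field
    from typing import Dict, List

    # Illustrative sketch: a minimal risk register keyed to the AI RMF's four
    # high-level functions. Fields and workflow are assumed for this example.

    FUNCTIONS = ("govern", "map", "measure", "manage")

    @dataclass
    class RiskEntry:
        description: str      # e.g., "training data may underrepresent protected groups"
        function: str         # which of the four high-level functions the activity supports
        owner: str            # accountable role (governance: policies and accountability)
        metric: str = ""      # measure: how the risk is quantified or monitored
        mitigation: str = ""  # manage: how resources are allocated to treat the risk

    @dataclass
    class RiskRegister:
        entries: List[RiskEntry] = field(default_factory=list)

        def add(self, entry: RiskEntry) -> None:
            if entry.function not in FUNCTIONS:
                raise ValueError(f"unknown function: {entry.function}")
            self.entries.append(entry)

        def by_function(self) -> Dict[str, List[RiskEntry]]:
            return {f: [e for e in self.entries if e.function == f] for f in FUNCTIONS}

    register = RiskRegister()
    register.add(RiskEntry("third-party model lacks documentation", "map", "model risk team",
                           metric="documentation completeness score",
                           mitigation="require vendor model cards before deployment"))
    print({f: len(entries) for f, entries in register.by_function().items()})
    ```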

    “This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” Deputy Commerce Secretary Don Graves said in the announcement. “It should accelerate AI innovation and growth while advancing—rather than restricting or damaging—civil rights, civil liberties and equity for all.” 

    Privacy, Cyber Risk & Data Security NIST Artificial Intelligence Risk Management

  • DOJ, HUD say Fair Housing Act extends to algorithm-based tenant screening

    Federal Issues

    On January 9, the DOJ and HUD announced they filed a joint statement of interest in a pending action alleging discrimination under the Fair Housing Act (FHA) against Black and Hispanic rental applicants based on the use of an algorithm-based tenant screening system. The lawsuit, filed in the U.S. District Court for the District of Massachusetts, alleged that Black and Hispanic rental applicants who use housing vouchers to pay part of their rent were denied rental housing due to their “SafeRent Score,” which is derived from the defendants’ algorithm-based screening software. The plaintiffs claimed that the algorithm relies on factors that disproportionately disadvantage Black and Hispanic applicants, such as credit history and debts unrelated to tenancy, and fails to consider that the use of HUD-funded housing vouchers makes such tenants more likely to pay their rent. Through the statement of interest, the agencies seek to clarify two questions of law they claim the defendants erroneously represented in their motions to dismiss: (i) the appropriate standard for pleading disparate impact claims under the FHA; and (ii) the types of companies to which the FHA applies.

    The agencies first argued that the defendants did not apply the proper pleading standard for a disparate impact claim under the FHA. To establish such a claim, the agencies explained, “plaintiffs must show ‘the occurrence of certain outwardly neutral practices’ and ‘a significantly adverse or disproportionate impact on persons of a particular type produced by the defendant’s facially neutral acts or practices.’” The agencies disagreed with the defendants’ assertion that the plaintiffs “must also allege specific facts establishing that the policy is ‘artificial, arbitrary, and unnecessary.’” This contention, the agencies said, “conflates the burden-shifting framework for proving disparate impact claims with the pleading burden.” The agencies also rejected arguments that the plaintiffs must challenge the entire “formula” of the scoring system, rather than a single element, in order to allege a statistical disparity, and that they must provide “statistical findings specific to the disparate impact of the scoring system.” According to the agencies, the plaintiffs adequately identified an “essential nexus” between the algorithm’s scoring system and the disproportionate effect on certain rental applicants based on race.

    The agencies also explained that residential screening companies, including the defendants, fall under the FHA’s purview. While the defendants argued that the FHA does not apply to companies “that are not landlords and do not make housing decisions, but only offer services to assist those that do make housing decisions,” the agencies contended that this misconstrues the clear statutory language of the FHA and presented case law affirming that FHA liability reaches “a broad array of entities providing housing-related services.”

    “Housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities,” Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division stressed. “This filing demonstrates the Justice Department’s commitment to ensuring that the Fair Housing Act is appropriately applied in cases involving algorithms and tenant screening software.”

    Federal Issues Courts DOJ Fair Housing Act Artificial Intelligence HUD Algorithms Discrimination Disparate Impact

  • FTC’s annual PrivacyCon focuses on consumer privacy and security issues

    Privacy, Cyber Risk & Data Security

    On November 1, the FTC held its annual PrivacyCon event, which hosted research presentations on a wide range of consumer privacy and security issues. Opening the event, FTC Chair Lina Khan stressed the importance of hearing from the academic community on topics related to a range of privacy issues that the FTC and other government bodies may miss. Khan emphasized that regulators cannot wait until new technologies fully emerge to think of ways to implement new laws for safeguarding consumers. “The FTC needs to be on top of this emerging industry now, before problematic business models have time to solidify,” Khan said, adding that the FTC is consistently working on privacy matters and is “prioritizing the use of creative ideas from academia in [its] bread-and-butter work” to craft better remedies that reflect what is actually happening. She highlighted a recent enforcement action taken against an online alcohol marketplace and its CEO for failing to take reasonable steps to prevent two major data breaches (covered by InfoBytes here). Khan noted that while the settlement’s requirements, such as imposing multi-factor authentication requirements and destroying unneeded user data, may not sound “very cutting-edge,” they serve as a big step forward for government enforcers.

    Chief Technology Officer Stephanie Nguyen, who is responsible for leading the charge to integrate technologists across the FTC’s various lines of work, including consumer privacy, discussed the work of these technologists (including AI and security experts, software engineers, designers, and data scientists) to help develop remedies in data security-related enforcement actions and to push companies not just to do the minimum to remediate problems like unreasonable data security, but to model best practices for the industry. “We want to see bad actors face real consequences,” Nguyen said, adding that the FTC wants to hold corporate leadership accountable as it did in the enforcement action Khan cited. Nguyen further stressed that there is also a need to address systemic risk by making companies delete illegally collected data and destroy any algorithms derived from that data.

    The one-day conference featured several panel sessions covering a number of topics related to consumer surveillance, automated decision-making systems, children’s privacy, devices that listen to users, augmented/virtual reality, interfaces and dark patterns, and advertising technology. Topics addressed during the panels include (i) requiring data brokers to provide accurate information; (ii) understanding how data inaccuracies can disproportionately affect minorities and those living in poverty, and why relying on this data can lead to discriminatory practices; (iii) examining bias and discrimination risks when engaging in emotional artificial intelligence; (iv) understanding automated decision making systems and how the quality of these systems impact populations they are meant to represent; (v) recognizing the lack of transparency related to children’s data collection and use, and the impact various privacy laws, including the Children’s Online Privacy Protection Rule, the General Data Protection Regulation, and the California Consumer Privacy Act, have on the collection/use/sharing of personal data; (vi) recognizing challenges related to cookie-consent interfaces and dark patterns; and (vii) examining how targeted online advertising both in the U.S. and abroad affects consumers.

    Privacy, Cyber Risk & Data Security FTC Consumer Protection Artificial Intelligence Dark Patterns Enforcement
