
InfoBytes Blog

Financial Services Law Insights and Observations


  • SEC proposes rules for addressing conflicts of interest raised by predictive data analytics

    Agency Rule-Making & Guidance

    On July 26, the SEC issued proposed rules under the Securities Exchange Act of 1934 and the Investment Advisers Act of 1940 to address certain conflicts of interest associated with the use of predictive data analytics, including artificial intelligence (AI) and similar technologies, “that optimize for, predict, guide, forecast, or direct investment-related behaviors or outcomes.” The SEC explained that broker-dealers and investment advisers (collectively, “firms”) are increasingly using AI to improve efficiency and returns, but cautioned that, because these technologies are scalable and allow firms to reach a large audience quickly, any resulting conflicts of interest could harm investors in a more pronounced way and on a broader scale than previously possible.

    Based on existing legal standards, the proposed rules generally would require a firm to identify and eliminate, or neutralize, the effects of conflicts of interest that result in the firm’s (or associated persons) interests being placed ahead of investors’ interests. Firms, however, would be permitted to employ tools that they believe would address such risks and that are specific to the particular technology being used. Firms that use covered technology for investor interactions would also be required to have written policies and procedures in place to ensure compliance with the proposed rules, the SEC said. These policies and procedures must include a process for evaluating the use of covered technology in investor interactions and addressing any conflicts of interest that may arise. Firms must also maintain books and records related to these requirements. Comments on the proposed rules are due 60 days after publication in the Federal Register.

    Agency Rule-Making & Guidance Federal Issues Securities SEC Third-Party Risk Management Artificial Intelligence Securities Exchange Act Investment Advisers Act

  • CFPB, FTC to conduct inquiry into high housing costs for renters

    Federal Issues

    On July 25, CFPB Director Rohit Chopra shared prepared remarks for the Community Table on a White House Blueprint for a Renters Bill of Rights to address high housing costs for renters. Chopra raised concerns about corporate investors imposing high rents, charging renters what he described as “junk fees,” and using other aggressive tactics. He mentioned that corporate investor owners, including private equity firms, are more likely to evict tenants, even when controlling for other factors, and that corporate investor ownership of rental units has risen to over 45 percent. Chopra also emphasized the growing use of artificial intelligence and social scoring in the rental process, stating that such changes can lead to rent hikes and denials of housing based on an algorithm’s definition of “high-quality tenants.” The remarks suggested that tenants are not being given an appropriate opportunity to correct inaccurate information in their background checks, despite the legal requirement that companies inform consumers when such information is used to make adverse rental decisions. The speech also stressed the CFPB’s commitment to identifying inaccurate AI and illegal practices that lead to misleading data, and clarified that name-only matching, a common but illegal screening practice, can produce inaccurate information that disproportionately affects individuals with common last names. To address these issues, Chopra announced a joint inquiry with the FTC to collect public feedback on consumers’ experiences with tenant screening.

    Federal Issues CFPB FTC Consumer Finance Artificial Intelligence Landlords

  • Fed’s Barr raises concerns about AI redlining

    Federal Issues

    On July 18, Federal Reserve Vice Chair for Supervision Michael Barr delivered a speech addressing the Fair Housing Act and ECOA in light of the increasing relevance of artificial intelligence. Barr explained that the digital economy offers significant benefits, such as assessing the creditworthiness of individuals without credit histories and facilitating wider access to credit for those who might otherwise be excluded. But the digital economy also carries negative implications, Barr cautioned: these technologies can potentially violate the fair lending laws and may perpetuate existing disparities and inaccuracies, among other things. Barr highlighted Special Purpose Credit Programs as a tool to address discrimination and bias in mortgage credit transactions. In addition, Barr highlighted two recent initiatives taken by the Fed to tackle appraisal discrimination and bias in housing mortgage credit transactions: one involved inviting public feedback on a proposed rule to uphold credibility and integrity in automated valuation models, and the other sought input on guidance addressing risks related to deficient home appraisals, emphasizing “reconsiderations of value” in the process. (Covered by InfoBytes here and here.) Barr also commented that, through its supervisory process, the Fed is evaluating whether firms have proper risk management and controls in place, including with respect to these new technologies.

    Federal Issues Fintech Federal Reserve Fair Housing Act ECOA Artificial Intelligence Fair Lending Redlining Consumer Finance

  • Gensler highlights challenges of AI-based models

    Securities

    On July 17, SEC Chair Gary Gensler spoke before the National Press Club, where he discussed opportunities and challenges stemming from the use of artificial intelligence (AI)-based models. While Gensler acknowledged that AI has the potential to promote greater financial inclusion and enhance user experience, he warned that there are also challenges associated with AI advancements that need to be considered at both the individual and broader economic levels. At the individual (micro) level, Gensler explained that AI’s predictive capabilities allow for personalized communication, product offerings, and pricing. However, this individualized approach (also known as “narrowcasting”) also raises questions about how individuals will respond to tailored messages and offers, he said, pointing out that when AI models are used to make important decisions such as job selection, loan approvals, credit decisions, and healthcare allocation, issues related to explainability, bias, and robustness become a concern. Gensler elaborated that AI models often produce unexplainable decisions and outcomes due to their nonlinear and hyper-dimensional nature. Furthermore, AI may also make it more difficult to ensure fairness and can inadvertently perpetuate biases present in historical data or use latent features that act as proxies for protected characteristics, Gensler said, adding that “the challenges of explainability may mask underlying systemic racism and bias in AI predictive models.”

    Gensler explained that these data analytics challenges are not new and that, in the late 1960s and early 1970s, the Fair Housing Act, FCRA, and ECOA were, in part, driven by similar issues. He warned advisers and brokers that, as they incorporate these technologies into their services, they must ensure that any advice and recommendations they offer (whether or not based on AI) serve the best interests of their clients and retail customers and do not place the firm’s interests ahead of investors’ interests.

    Securities Federal Issues Fintech Consumer Finance Risk Management Artificial Intelligence

  • CFPB, EU start talks on AI, digital finance

    Federal Issues

    On July 17, CFPB Director Rohit Chopra and Commissioner for Justice and Consumer Protection of the European Commission Didier Reynders issued a joint statement announcing the start of new dialogue on consumer financial protection with a primary focus on digital developments in the financial sector and ways to improve policy and regulatory cooperation.

    Chopra and Reynders stressed that there are significant implications for both businesses and households from the digitalization of the financial services sector, including impacts on pricing, customer service, competition, and privacy. They noted that financial institutions are increasingly deploying automated decision-making processes, leveraging artificial intelligence technologies, and developing and introducing new financial products and services, such as Buy Now, Pay Later. Chopra and Reynders also commented that digital payments are becoming “increasingly offered and controlled by Big Tech.” They warned these developments, if not properly regulated, “could increase consumers’ exposure to fraud and manipulation, limit their product options over time, threaten their control over their own data, and force them to accept more expensive personalized pricing for the same products and services compared to other consumers.” Chopra and Reynders also cautioned that policymakers must do more to keep pace with evolving markets and ensure consumer protection.

    The dialogue will address topics relating to:

    • The deployment of automated decision-making and data processing and implications for consumers;
    • Risks associated with emerging credit options, including the potential risks of over-consumption and over-indebtedness for consumers who use these products;
    • Measures for exploring ways to assist over-indebted consumers in managing and repaying their debt sustainably;
    • Digital transformation and access to fair financial services, including to unbanked and underbanked consumers, as well as those who prioritize protecting their personal data; and
    • Competition, privacy, security, and financial stability implications associated with big tech companies that offer financial services.

    Chopra and Reynders will meet informally at least once per year to share insights and experiences on consumer financial issues. According to the statement, the dialogue will also involve staff discussions, bilateral meetings with subject matter experts, and roundtables with stakeholders. The cooperation and exchanges within the informal dialogue are expected “to occur in parallel with other forms of cooperation and exchanges between the European Union and the United States on various digital and financial services policies and regulations,” the joint statement said.

    Federal Issues Fintech CFPB Of Interest to Non-US Persons EU Artificial Intelligence Consumer Finance Buy Now Pay Later

  • Senators demand that CFPB address voice-cloning risks

    Privacy, Cyber Risk & Data Security

    On July 6, four Democrats on the Senate Banking Committee sent a letter to CFPB Director Rohit Chopra, in which they expressed their concerns about the emergence of voice cloning technology. The senators observed that “voice cloning, the process of reproducing an individual’s voice with high accuracy using AI and machine learning techniques, has seen remarkable advancements in recent years, and is increasingly being used in malicious ways.” The letter noted the “particularly alarming” use of voice cloning in financial scams, in which scammers use the technology to convincingly impersonate family, friends, and even financial advisors or bank employees. Many times, the letter mentioned, scammers target consumers “who often have no reimbursement recourse from banks and peer-to-peer payment apps.” The senators also highlighted the threat that this technology poses to financial institutions that utilize voice authentication services. The senators urged Chopra and the Bureau to review the risks posed by voice cloning technology and implement measures to effectively address the emerging threat to unsuspecting consumers.

    Privacy, Cyber Risk & Data Security Federal Issues CFPB Senate Banking Committee Artificial Intelligence Consumer Protection

  • Highlights from the CFPB’s 2022 fair lending report

    Federal Issues

    On June 29, the CFPB issued its annual fair lending report to Congress, which outlines the Bureau’s efforts in 2022 to fulfill its fair lending mandate. Much of the Bureau’s work in 2022 was directed toward addressing unlawful discrimination in the home appraisal industry and redlining. According to the report, the CFPB also focused its efforts on fair access to credit, including gaining insight into the factors affecting consumers’ credit profiles. The report highlights one fair lending enforcement action from 2022, in which the CFPB and DOJ filed a joint complaint and proposed consent order against a company for allegedly violating ECOA, Regulation B, and the CFPA by discouraging prospective applicants from applying for credit. Notably, under section 706(g) of ECOA, the Bureau must refer to the DOJ any matter in which it has reason to believe a creditor has engaged in a “pattern or practice of lending discrimination.” According to the report, the FDIC, NCUA, Federal Reserve Board, and CFPB collectively made 23 such referrals to the DOJ in 2022, a 91 percent increase from 2020. Five of the 23 referrals came from the CFPB: four involved alleged racial discrimination in redlining, and one involved alleged discrimination in underwriting based on receipt of public assistance income. The report also discusses the CFPB’s risk-based prioritization process, which resulted in initiatives concerning small business lending, policies and procedures on exclusions in underwriting, and the use of artificial intelligence. Moving forward, the Bureau will continue its collaborative approach with other agencies and prioritize areas such as combating bias in home appraisals, redlining, and the use of advanced technologies in financial services. Additionally, the report states that by focusing on restorative outcomes, comprehensive remedies, and equal economic opportunities, the CFPB aims to create a fair, equitable, and nondiscriminatory credit market for consumers.

    Federal Issues CFPB Fair Lending DOJ ECOA Enforcement Consumer Finance Redlining Artificial Intelligence Supervision

  • Biden administration launches NIST working group on AI

    Federal Issues

    On June 22, the Biden administration announced that the National Institute of Standards and Technology (NIST) launched a new public working group on generative AI. The Public Working Group on Generative AI will reportedly help NIST develop guidance on the special risks posed by generative AI and support organizations’ efforts to address the opportunities and challenges associated with AI-generated code, text, images, videos, and music. “The public working group will draw upon volunteers, with technical experts from the private and public sectors, and will focus on risks related to this class of AI, which is driving fast-paced changes in technologies and marketplace offerings,” NIST stated. NIST also outlined the immediate, midterm, and long-term goals for the group. Initially, the working group will research how the NIST AI Risk Management Framework can be used to support AI technology development. The group’s midterm goal is to support NIST in testing, evaluation, and measurement related to generative AI. In the long term, the group will explore how generative AI can be applied to address challenges in health, the environment, and climate change. NIST encourages those interested in joining the working group to submit a form no later than July 9.

    Federal Issues Biden Artificial Intelligence NIST Risk Management

  • Hsu tells banks to approach AI cautiously

    Bank Regulatory

    On June 16, Acting Comptroller of the Currency Michael J. Hsu warned that the unpredictability of artificial intelligence (AI) can pose significant risks to the financial system. During remarks presented at the American Bankers Association’s Risk and Compliance Conference, Hsu cautioned that banks must manage risks when adopting technologies such as tokenization and AI. Although Hsu reiterated his skepticism of cryptocurrency (covered by InfoBytes here), he acknowledged that AI and blockchain technology (where most tokenization efforts are currently focused) have the potential to present “significant” benefits to the financial system. He explained that trusted blockchains may improve settlement efficiency through tokenization of real-world assets and liabilities by minimizing lags and thereby reducing related frictions, costs, and risks. However, he warned that legal frameworks and risk and compliance capabilities for tokenizing real-world assets and liabilities at scale require further development, especially considering cross-jurisdictional situations and ownership and property rights.

    With respect to banks’ adoption of AI, Hsu flagged AI’s “potential to reduce costs and increase efficiencies; improve products, services and performance; strengthen risk management and controls; and expand access to credit and other bank services.” But there are also significant challenges, Hsu said, including bias and discrimination in consumer lending, fraud, and risks created by the use of “generative” AI. Alignment also poses a core challenge, Hsu said, explaining that because AI systems are built to learn and may not do what they are programmed to do, governance and accountability may become an issue. “Who can and should be held accountable for misaligned, unexpected, and harmful outcomes?” Hsu asked, pointing to banks’ use of third parties to develop and support their AI systems as an area of concern.

    Hsu advised banks to approach innovation “responsibly and purposefully” and to proceed cautiously while keeping in mind three principles for managing risks: (i) innovate in stages, expand only when ready, and monitor, adjust and repeat; (ii) “build the brakes while building the engine” and ensure risk and compliance professionals are part of the innovation process; and (iii) engage with regulators early and often during the process and ask for permission, not forgiveness.

    Bank Regulatory Federal Issues Fintech OCC Artificial Intelligence Tokens Compliance Risk Management Blockchain

  • Chopra testifies at congressional hearings

    Federal Issues

    On June 13, CFPB Director Rohit Chopra testified before the Senate Banking Committee to discuss the Bureau’s most recent semi-annual report to Congress. Covering the period beginning April 1, 2022, and ending September 30, 2022, the semi-annual report addressed a wide range of issues, including the adoption of significant rules and orders, supervisory and enforcement actions, and actions taken by states relating to federal consumer financial law. The report also stated that the Bureau received approximately 1.237 million consumer complaints, of which roughly 75 percent pertained to credit or consumer reporting. With respect to the Bureau’s mandated objectives, Chopra’s prepared statement highlighted rulemaking progress on several topics, including small business lending data collection and PACE lending. He also emphasized the agency’s heightened focus on supervising nonbank financial firms and reiterated that the Bureau will continue to shift its enforcement focus from small businesses to repeat offenders.

    Committee Chair Sherrod Brown (D-OH) praised Chopra’s leadership in his opening statement, highlighting actions taken by the Bureau since Chopra’s last hearing appearance and disagreeing with the U.S. Court of Appeals for the Fifth Circuit’s decision that the agency’s funding authority violates the Constitution’s Appropriations Clause and the separation of powers. However, Ranking Member Tim Scott (R-SC) argued that Chopra “has created uncertainty in the marketplace by attempting to regulate through speeches and blog posts under the guise of ‘clarifying guidance,’” and continues to mislabel payment incentives as “junk fees” or “illegal fees.” Scott also took issue with the Bureau’s small business lending rule and asked why the agency should be trusted to collect a large amount of lending data when it had itself experienced a data breach in which an employee transferred sensitive consumer data to a personal email account without authorization.

    During the hearing, Chopra addressed concerns accusing him of bypassing regulatory review by issuing policy changes through agency guidance and press announcements. “The things we hear from small firms is they really want to know how existing law applies,” Chopra said. “We have so many changes in technology, and these small firms don’t have the ability to hire so many lawyers[,] [s]o I’ve actually continued a practice of my predecessor, Director Kraninger to issue these advisory opinions and other guidance documents. They do not create any new obligations. They simply restate what the existing laws are.”

    Chopra also answered questions relating to the Bureau’s proposal to limit credit card late fees and, among other things, adjust the safe harbor dollar amount for late fees to $8 for any missed payment (issuers are currently able to charge late fees of up to $41). (Covered by InfoBytes here.) Chopra explained that the proposed rule still allows recovery of costs but said the agency is trying to make the process “more rigorous and make sure it reflects market realities.” “[I]ssuers tell us that they don’t want to profit off of late fees,” Chopra added. “That’s exactly the goal here, because the law says those penalty fees are supposed to be reasonable and proportional. We’re trying to make it more clear about the way we can do that, while also making the market more competitive.”

    Republican senators expressed concerns with the proposal during the hearing, with Scott commenting that no one wants to pay the late fee, but that “the truth of the matter is that fee is going to be paid just in a different form . . . whether it’s through increased interest rates or increased cost of products, it doesn’t go away.” Senator Elizabeth Warren (D-MA) countered that “if there’s an $8 cap on credit card late fees, unless the banks can show that their costs are higher, in which case they can charge more, all that will happen, as best I can tell is that the banks will have slightly lower profit margins.”

    Chopra faced similar questions during a hearing held the next day before the House Financial Services Committee. Among other topics, committee members raised questions relating to technology risks presented by artificial intelligence and how existing law applies to machine learning. Chopra was also accused of overseeing an unconstitutional agency and flouting the notice-and-comment rulemaking process. Also discussed during the hearing was a recently introduced joint resolution to nullify the Bureau’s small business lending rule. (Covered by InfoBytes here.) Representative Roger Williams (R-TX) stressed that community banks are “concerned that the complicated reporting requirements will tie up loan officers and increase compliance costs plus compliance officers, which will be passed down to the consumer.”

    Federal Issues CFPB Senate Banking Committee House Financial Services Committee Section 1071 Consumer Finance Artificial Intelligence Junk Fees Funding Structure Credit Cards Student Lending
