InfoBytes Blog

Financial Services Law Insights and Observations



  • NIST releases new AI framework to help organizations mitigate risk

    Privacy, Cyber Risk & Data Security

    On January 26, the National Institute of Standards and Technology (NIST) released voluntary guidance to help organizations that design, deploy, or use artificial intelligence (AI) systems mitigate risk. The Artificial Intelligence Risk Management Framework, developed in close collaboration with the private and public sectors pursuant to a Congressional directive under the National Defense Authorization Act for Fiscal Year 2021, “provides a flexible, structured and measurable process that will enable organizations to address AI risks,” NIST explained. The framework breaks the process down into four high-level functions: govern, map, measure, and manage. These functions, among other things, (i) provide guidance on how to evaluate AI for legal and regulatory compliance and ensure policies, processes, procedures, and practices are transparent, robust, and effective; (ii) outline processes for addressing AI risks and benefits arising from third-party software and data; (iii) describe the mapping process for collecting information to establish the context in which to frame AI-related risks; (iv) provide guidance for employing and measuring “quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts”; and (v) set forth a proposed process for managing and allocating risk management resources. The framework also provides examples to help organizations implement the guidance.

    “This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” Deputy Commerce Secretary Don Graves said in the announcement. “It should accelerate AI innovation and growth while advancing—rather than restricting or damaging—civil rights, civil liberties and equity for all.” 

    Privacy, Cyber Risk & Data Security NIST Artificial Intelligence Risk Management

  • DOJ, HUD say Fair Housing Act extends to algorithm-based tenant screening

    Federal Issues

    On January 9, the DOJ and HUD announced they filed a joint statement of interest in a pending action alleging discrimination under the Fair Housing Act (FHA) against Black and Hispanic rental applicants based on the use of an algorithm-based tenant screening system. The lawsuit, filed in the U.S. District Court for the District of Massachusetts, alleged that Black and Hispanic rental applicants who use housing vouchers to pay part of their rent were denied rental housing due to their “SafeRent Score,” which is derived from the defendants’ algorithm-based screening software. The plaintiffs claimed that the algorithm relies on factors that disproportionately disadvantage Black and Hispanic applicants, such as credit history and non-tenancy-related debts, and fails to consider that the use of HUD-funded housing vouchers makes such tenants more likely to pay their rent. Through the statement of interest, the agencies seek to clarify two questions of law they claim the defendants erroneously represented in their motions to dismiss: (i) the appropriate standard for pleading disparate impact claims under the FHA; and (ii) the types of companies to which the FHA applies.

    The agencies first contended that the defendants did not apply the proper pleading standard for a disparate impact claim under the FHA. Explaining that to establish an FHA disparate impact claim, “plaintiffs must show ‘the occurrence of certain outwardly neutral practices’ and ‘a significantly adverse or disproportionate impact on persons of a particular type produced by the defendant’s facially neutral acts or practices,’” the agencies disagreed with the defendants’ assertion that the plaintiffs “must also allege specific facts establishing that the policy is ‘artificial, arbitrary, and unnecessary.’” This contention, the agencies said, “conflates the burden-shifting framework for proving disparate impact claims with the pleading burden.” The agencies also rejected arguments that, to allege a statistical disparity, the plaintiffs must challenge the entire “formula” of the scoring system rather than a single element, and must provide “statistical findings specific to the disparate impact of the scoring system.” According to the agencies, the plaintiffs adequately identified an “essential nexus” between the algorithm’s scoring system and the disproportionate effect on certain rental applicants based on race.

    The agencies also explained that residential screening companies, including the defendants, fall within the FHA’s purview. While the defendants argued that the FHA does not apply to companies “that are not landlords and do not make housing decisions, but only offer services to assist those that do make housing decisions,” the agencies countered that this reading misconstrues the FHA’s plain statutory language and cited case law affirming that FHA liability reaches “a broad array of entities providing housing-related services.”

    “Housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities,” Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division stressed. “This filing demonstrates the Justice Department’s commitment to ensuring that the Fair Housing Act is appropriately applied in cases involving algorithms and tenant screening software.”

    Federal Issues Courts DOJ Fair Housing Act Artificial Intelligence HUD Algorithms Discrimination Disparate Impact

  • FTC’s annual PrivacyCon focuses on consumer privacy and security issues

    Privacy, Cyber Risk & Data Security

    On November 1, the FTC held its annual PrivacyCon event, which hosted research presentations on a wide range of consumer privacy and security issues. Opening the event, FTC Chair Lina Khan stressed the importance of hearing from the academic community on privacy topics that the FTC and other government bodies may miss. Khan emphasized that regulators cannot wait until new technologies fully emerge to think of ways to implement new laws for safeguarding consumers. “The FTC needs to be on top of this emerging industry now, before problematic business models have time to solidify,” Khan said, adding that the FTC is consistently working on privacy matters and is “prioritizing the use of creative ideas from academia in [its] bread-and-butter work” to craft better remedies that reflect what is actually happening. She highlighted a recent enforcement action taken against an online alcohol marketplace and its CEO for failing to take reasonable steps to prevent two major data breaches (covered by InfoBytes here). Khan noted that while the settlement’s requirements, such as imposing multi-factor authentication and destroying unneeded user data, may not sound “very cutting-edge,” they are a big step forward for government enforcers. Chief Technology Officer Stephanie Nguyen, who leads the effort to integrate technologists across the FTC’s various lines of work, including consumer privacy, discussed how these technologists (including AI and security experts, software engineers, designers, and data scientists) help develop remedies in data security-related enforcement actions and push companies not just to do the minimum to remediate areas like unreasonable data security but to model best practices for the industry. “We want to see bad actors face real consequences,” Nguyen said, adding that the FTC wants to hold corporate leadership accountable as it did in the enforcement action Khan cited. Nguyen further stressed the need to address systemic risk by making companies delete illegally collected data and destroy any algorithms derived from that data.

    The one-day conference featured several panel sessions covering topics related to consumer surveillance, automated decision-making systems, children’s privacy, devices that listen to users, augmented/virtual reality, interfaces and dark patterns, and advertising technology. Topics addressed during the panels included (i) requiring data brokers to provide accurate information; (ii) understanding how data inaccuracies can disproportionately affect minorities and those living in poverty, and why relying on this data can lead to discriminatory practices; (iii) examining bias and discrimination risks when deploying emotional artificial intelligence; (iv) understanding automated decision-making systems and how the quality of these systems affects the populations they are meant to represent; (v) recognizing the lack of transparency surrounding children’s data collection and use, and the impact that various privacy laws, including the Children’s Online Privacy Protection Rule, the General Data Protection Regulation, and the California Consumer Privacy Act, have on the collection, use, and sharing of personal data; (vi) recognizing challenges related to cookie-consent interfaces and dark patterns; and (vii) examining how targeted online advertising, both in the U.S. and abroad, affects consumers.

    Privacy, Cyber Risk & Data Security FTC Consumer Protection Artificial Intelligence Dark Patterns Enforcement

  • White House proposes AI “Bill of Rights”

    Federal Issues

    Recently, the Biden administration’s Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights. The blueprint’s proposed framework identifies five principles for guiding the design, use, and deployment of automated systems to protect the public as the use of artificial intelligence grows. The principles center on stronger safety measures, such as (i) ensuring systems are safe and effective; (ii) implementing proactive protections against algorithmic discrimination; (iii) incorporating built-in privacy protections, including giving the public control over how data is used and ensuring that data collection meets reasonable expectations and is necessary for the specific context in which the data is collected; (iv) providing notice and explanation as to how an automated system is being used, as well as the resulting outcomes; and (v) ensuring the public is able to opt out of automated systems in favor of a human alternative and has access to a person who can quickly help remedy problems. According to the announcement, the proposed framework’s principles should be incorporated into policies governing systems with “the potential to meaningfully impact” an individual’s or community’s rights or access to resources and services related to education, housing, credit, employment, health care, government benefits, and financial services, among others.

    Federal Issues Privacy, Cyber Risk & Data Security Biden Artificial Intelligence Fintech

  • OCC reports on cybersecurity and financial system resilience

    Privacy, Cyber Risk & Data Security

    Recently, the OCC released its annual report on cybersecurity and financial system resilience, which describes its cybersecurity policies and procedures, including those adopted in accordance with the Federal Information Security Modernization Act. According to the report, cybersecurity and operational resilience are “top issues for the federal banking system.” The OCC also noted that it has implemented regulations and standards requiring banks to implement information security programs and protect confidential information. For example, the Interagency Guidelines Establishing Standards for Safety and Soundness “require insured banks to have internal controls and information systems appropriate for the size of the institution and for the nature, scope, and risk of its activities and that provide for, among other requirements, effective risk assessment and adequate procedures to safeguard and manage assets.” OCC regulations also, among other things, require banks to file Suspicious Activity Reports when they detect a known or suspected violation of federal law, a suspicious transaction related to illegal activity, or a violation of the Bank Secrecy Act. With regard to examinations, the OCC noted that it uses a risk-based supervision process to evaluate banks’ risk management, identify material and emerging concerns, and require banks to take corrective action when warranted. The report also discussed current and emerging cybersecurity and resilience threats to the banking sector, including ransomware, account takeover, supply chain risks, and geopolitical threats. Additionally, the OCC noted that it “monitor[s] longer-term technology developments, which may affect cybersecurity and resilience in the future.” The use of artificial intelligence, including machine learning, is one such development that may affect cybersecurity, according to the OCC.

    Privacy, Cyber Risk & Data Security OCC Bank Regulatory Bank Secrecy Act Artificial Intelligence

  • FTC issues report to Congress on use of AI

    Privacy, Cyber Risk & Data Security

    On June 16, the FTC issued a report to Congress regarding the use of artificial intelligence (AI), warning that policymakers should use caution when relying on AI to combat the spread of harmful online conduct. In the 2021 Appropriations Act, Congress directed the FTC to study and report on whether and how AI “may be used to identify, remove, or take any other appropriate action necessary to address” a wide variety of specified “online harms,” referring specifically to content that is deceptive, fraudulent, manipulated, or illegal. The report suggests that adoption of AI could be problematic, as AI tools can be biased, discriminatory, or inaccurate, and could rely on invasive forms of surveillance. To avoid introducing these additional harms, the report suggests lawmakers instead focus on developing legal frameworks to ensure that AI tools used by major technology platforms and others do not cause additional harm. The report further suggests that Congress, regulators, platforms, scientists, and others focus their attention on creating frameworks that address the following considerations, among others: (i) the need for human intervention in connection with monitoring the use and decisions of AI tools intended to address harmful content; (ii) the need for meaningful transparency, “which includes the need for it to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used”; and (iii) the need for accountability with respect to the data practices and results of platforms’ and other companies’ use of AI tools. Other recommendations include the use of authentication tools, responsible use of inputs and outputs by data scientists, and the use of interventions, such as tools that slow the viral spread of, or otherwise limit the impact of, certain harmful content.

    The Commission voted 4-1 at an open meeting to send the report to Congress. Commissioner Noah Joshua Phillips issued a dissenting statement, finding that the report provides “short shrift to how and why AI is being used to combat the online harms identified by Congress,” and instead “reads as a general indictment of the technology itself.”

    Privacy, Cyber Risk & Data Security Federal Issues FTC Artificial Intelligence Congress

  • OCC discusses use of AI

    On May 13, OCC Deputy Comptroller for Operational Risk Policy Kevin Greenfield testified before the House Financial Services Committee Task Force on Artificial Intelligence (AI), discussing banks’ use of AI and innovation in technology services. Among other things, Greenfield addressed the OCC’s approach to innovation and supervisory expectations, as well as the agency’s ongoing efforts to update its technological framework to support its bank supervision mandate. According to Greenfield’s written testimony, the OCC “recognizes the paramount importance of protecting sensitive data and consumer privacy, particularly given the use of consumer data and expanded data sets in some AI applications.” He noted that many banks use AI technologies and are investing in AI research and applications to automate, augment, or replicate human analysis and decision-making tasks. Therefore, the agency “is continuing to update supervisory guidance, examination programs and examiner skills to respond to AI’s growing use.” Greenfield also pointed out that the agency follows a risk-based supervision model focused on safe, sound, and fair banking practices, as well as compliance with laws and regulations, including fair lending and other consumer protection requirements. This risk-based approach includes developing supervisory strategies based upon an individual bank’s risk profile and examiners’ review of new, modified, or expanded products and services. Greenfield further noted that “the OCC is focused on educating examiners on a wide range of AI uses and risks including risks associated with third parties, information security and resilience, compliance, BSA, credit underwriting, and fair lending and data governance, as part of training courses and other educational resources.” According to Greenfield’s oral statement, “banks need effective risk management and controls for model validation and explainability, data management, privacy, and security regardless of whether a bank develops AI tools internally or purchases through a third party.”

    Bank Regulatory Federal Issues OCC House Financial Services Committee Privacy, Cyber Risk & Data Security Artificial Intelligence Third-Party Risk Management Fintech

  • DOJ and EEOC address AI employment decision disability discrimination

    Federal Issues

    On May 12, the DOJ and the Equal Employment Opportunity Commission (EEOC) released technical assistance documents addressing disability discrimination when using artificial intelligence (AI) and other software tools to make employment decisions. According to the announcement, the DOJ’s guidance document, Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring, provides a broad overview of rights and responsibilities in plain language and, among other things, (i) provides examples of technological tools used by employers; (ii) clarifies that employers must consider the impact on different disabilities when designing or choosing technological tools; (iii) describes employers’ obligations under the Americans with Disabilities Act (ADA) when using algorithmic decision-making tools; and (iv) provides information for employees on actions they may take if they believe they have experienced discrimination. The EEOC also released a technical assistance document, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, which focuses on preventing discrimination against job seekers and employees with disabilities.

    Federal Issues DOJ EEOC Artificial Intelligence Americans with Disabilities Act Discrimination

  • FHFA releases AI/ML risk management guidance for GSEs

    Federal Issues

    On February 10, FHFA released Advisory Bulletin (AB) 2022-02 to Fannie Mae and Freddie Mac (GSEs) on managing risks related to the use of artificial intelligence and machine learning (AI/ML). While recognizing that the use of AI/ML has rapidly grown among financial institutions to support a wide range of functions, including customer engagement, risk analysis, credit decision-making, fraud detection, and information security, FHFA warned that AI/ML may also expose a financial institution to heightened compliance, financial, operational, and model risk. In releasing AB 2022-02 (the first publicly released guidance by a U.S. financial regulator that specifically focuses on AI/ML risk management), FHFA advised that the GSEs should adopt a risk-based, flexible approach to AI/ML risk management that is able “to accommodate changes in the adoption, development, implementation, and use of AI/ML.” Diversity and inclusion (D&I) should also factor into the GSEs’ AI/ML processes, stated a letter released the same day by FHFA’s Office of Minority and Women Inclusion, which outlined its expectations for the GSEs “to embed D&I considerations throughout all uses of AI/ML” and “address explicit and implicit biases to ensure equity in AI/ML recommendations.” The letter also emphasized the distinction between D&I and fairness and equity, explaining that D&I “requires additional deliberation because it goes beyond the equity considerations of the impact of the use of AI/ML and requires an assessment of the tools, mechanisms, and applications that may be used in the development of the systems and processes that incorporate AI/ML.”

    Additionally, AB 2022-02 outlined four areas of heightened risk in the use of AI/ML: (i) model risk related to bias that may lead to discriminatory or unfair outcomes (includes “black box risk” where a “lack of interpretability, explainability, and transparency” may exist); (ii) data risk, including concerns related to the accuracy and quality of datasets, bias in data selection, security of data from manipulation, and unfamiliar data sources; (iii) operational risks related to information security and IT infrastructure, among other things; and (iv) regulatory and compliance risks concerning compliance with consumer protection, fair lending, and privacy laws. FHFA provided several key control considerations and encouraged the GSEs to strengthen their existing risk management frameworks where heightened risks are present due to the use of AI/ML.

    Federal Issues FHFA Fintech Artificial Intelligence Mortgages GSEs Risk Management Fannie Mae Freddie Mac Diversity

  • FTC comments on CFPB’s big tech payments inquiry

    Federal Issues

    On December 21, FTC Chair Lina M. Khan submitted a comment in response to the CFPB’s Notice and Request for Comment regarding the CFPB’s October orders issued to six large U.S. technology companies seeking information and data on their payment system business practices. (Covered by InfoBytes here.) In her comment, Khan identified three areas of concern that she hopes will help inform the CFPB’s inquiry: that big tech companies’ (i) “participation in payments and financial services could enable them to entrench and extend their market positions and privileged access to data and AI techniques in potentially anticompetitive and exploitative ways”; (ii) “use of algorithmic decision-making in financial services amplifies concerns of discrimination, bias, and opacity”; and (iii) “increasingly commingled roles as payment and authentication providers could concentrate risk and create single points of failure.” Khan noted that “[t]he potential risks created by Big Tech’s expansion into payments and financial services are notable and demand close scrutiny,” and stated that she will be monitoring “this inquiry and the findings it produces to help inform the FTC’s work.”

    Federal Issues FTC CFPB Payments Artificial Intelligence Discrimination
