InfoBytes Blog

Financial Services Law Insights and Observations

  • FSOC report highlights AI, climate, banking, and fintech risks; CFPB comments

    Privacy, Cyber Risk & Data Security

    On December 14, the Financial Stability Oversight Council (FSOC) released its 2023 Annual Report on vulnerabilities to U.S. financial stability and recommendations to mitigate those risks. The report was cited in a statement by CFPB Director Rohit Chopra to the Secretary of the Treasury, in which Chopra said “[i]t is not enough to draft reports [on cloud infrastructure and artificial intelligence], we must also act” on plans to focus on ensuring financial stability with respect to digital technology in the upcoming year. In its report, the FSOC notes the U.S. banking system “remains resilient overall” despite the stress in the banking sector earlier this year. The FSOC’s analysis assesses the health of large and regional banks by reviewing each bank’s capital and profitability, credit quality and lending standards, and liquidity and funding. On regional banks specifically, the FSOC highlights that they carry greater exposure to commercial real estate loans than large banks, an exposure made riskier by higher interest rates.

    In addition, the FSOC views climate-related financial risk as a threat to U.S. financial stability, presenting both physical and transition risks. Physical risks are acute events such as floods, droughts, wildfires, or hurricanes, which can impose risk-reduction costs, force firm relocations, or threaten access to fair credit. Transition risks include technological changes, policy shifts, or changes in consumer preference, any of which can force firms to take on additional costs. The FSOC notes that, as of September 2023, the U.S. had experienced 24 climate disaster events with losses exceeding $1 billion each, more than the prior five-year annual average of 18 events (2018 to 2022). The FSOC also notes that member agencies should monitor how third-party service providers, like fintech firms, address risks in core processing, payment services, and cloud computing. To support the need for oversight of these partnerships, the FSOC cites a study finding that 95 percent of cloud breaches result from human error. The FSOC highlights that fintech firms face compliance, financial, operational, and reputational risks, particularly when they are not subject to the same compliance standards as banks.

    Notably, the FSOC is the first top regulator to state that the use of artificial intelligence (AI) presents an “emerging vulnerability” in the U.S. financial system. The report notes that firms may use AI for fraud detection and prevention, as well as for customer service. The FSOC observes that AI offers benefits for financial institutions, including reducing costs, improving efficiency, identifying complex relationships, and improving performance. The FSOC states that while “AI has the potential to spur innovation and drive efficiency,” it requires “thoughtful implementation and supervision” to mitigate potential risks.

    Privacy, Cyber Risk & Data Security Bank Regulatory FSOC CFPB Artificial Intelligence Banks Fintech

  • EU Commission, Council, and Parliament agree on details of AI Act

    Privacy, Cyber Risk & Data Security

    On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council regarding the proposed Artificial Intelligence Act (AI Act). The agreement is provisional and subject to finalization of the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems and impose fines for non-compliant use. The objective of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights, while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination and transparency and to “supervise the implementation and enforcement of the new rules.” According to an EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.

    Except for foundation models, the AI Act adopts a risk-based approach, classifying AI systems into three categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk because they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certification of risk-mitigation systems, data governance, activity logging, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include those used in utility infrastructure, medical devices, institutional admissions, law enforcement, biometric identification and categorization, and emotion recognition. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).
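
    For illustration, a compliance team might model this tiered scheme as a simple lookup. The following Python sketch is hypothetical: the tier names and example obligations are paraphrased from the summary above, and the final text of the AI Act will control.

        from enum import Enum

        class RiskTier(Enum):
            MINIMAL = "minimal risk"
            HIGH = "high risk"
            UNACCEPTABLE = "unacceptable risk"

        # Illustrative obligations per tier, paraphrased from the provisional
        # agreement; the actual obligations depend on the final text.
        OBLIGATIONS = {
            RiskTier.MINIMAL: ["no new obligations beyond existing law"],
            RiskTier.HIGH: [
                "risk-mitigation system",
                "data governance",
                "activity logging",
                "documentation",
                "transparency to users",
                "human oversight",
                "cybersecurity standards",
            ],
            RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
        }

        def obligations_for(tier: RiskTier) -> list[str]:
            """Return the illustrative obligations for a given risk tier."""
            return OBLIGATIONS[tier]

        print(obligations_for(RiskTier.HIGH))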

    Sanctions for breach of the law will range from a low of €7.5 million or 1.5 percent of a company’s global annual revenue to as high as €35 million or 7 percent of that revenue. Once adopted, the law will take effect in early 2026 at the earliest. Compliance will be challenging (the law targets AI systems made available in the EU), and companies should assess whether their use and/or development of such systems will be affected.
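
    To make the penalty range concrete, the short Python sketch below computes both bounds for a hypothetical company. It assumes the fine is the greater of the fixed amount and the revenue percentage, which is how the provisional agreement has been widely reported; the final text will control.

        def ai_act_fine(global_revenue_eur: float, fixed_eur: float, pct: float) -> float:
            """Greater of a fixed amount and a share of global revenue.
            The 'whichever is higher' rule is an assumption based on
            reporting on the provisional agreement, not final text."""
            return max(fixed_eur, global_revenue_eur * pct)

        revenue = 2_000_000_000  # hypothetical €2 billion in global revenue
        low = ai_act_fine(revenue, 7_500_000, 0.015)   # €30M: 1.5% exceeds €7.5M
        high = ai_act_fine(revenue, 35_000_000, 0.07)  # €140M: 7% exceeds €35M
        print(f"low tier: €{low:,.0f}; high tier: €{high:,.0f}")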

    Privacy, Cyber Risk & Data Security Privacy European Union Artificial Intelligence Privacy/Cyber Risk & Data Security Of Interest to Non-US Persons

  • NYDFS settles with title insurance company for $1 million

    Privacy, Cyber Risk & Data Security

    On November 27, NYDFS entered into a consent order with a title insurance company, requiring the company to pay $1 million for failing to maintain and implement an effective cybersecurity policy and to correct a cybersecurity vulnerability. The vulnerability allowed members of the public to access other individuals’ nonpublic information, including driver’s license numbers, Social Security numbers, and tax and banking information. The consent order indicates the company discovered the vulnerability as early as 2018. Its failure to remediate the vulnerability violated Section 500.7 of NYDFS’s Cybersecurity Regulation.

    In May 2019, a cybersecurity journalist published an article on a vulnerability in the title insurance company’s application that led to the public exposure of 885 million documents, some of which surfaced in search engine results. The journalist noted that “replacing the document ID in the web page URL… allow[ed] access to other non-related sessions without authentication.” Following the article, and as required by Section 500.17(a) of the Cybersecurity Regulation, the company notified NYDFS of the vulnerability, at which point NYDFS investigated further. The company has been ordered to pay the penalty no later than ten days after the effective date of the consent order.
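
    The flaw described, in which changing a document ID in a URL returns another person’s records, is commonly known as an insecure direct object reference (IDOR). The minimal Python sketch below is hypothetical (the names and data are illustrative, not drawn from the company’s system); it shows the defect and the standard fix, which is to verify that the authenticated session actually owns the requested document.

        # Hypothetical in-memory store mapping document IDs to owner accounts.
        DOCUMENTS = {
            "900001": {"owner": "alice", "body": "escrow wire instructions"},
            "900002": {"owner": "bob", "body": "tax and banking records"},
        }

        def fetch_document_vulnerable(doc_id: str) -> str:
            # IDOR: any caller who guesses or increments doc_id gets the record.
            return DOCUMENTS[doc_id]["body"]

        def fetch_document_fixed(doc_id: str, session_user: str) -> str:
            # Fix: authorize the lookup against the authenticated session.
            doc = DOCUMENTS.get(doc_id)
            if doc is None or doc["owner"] != session_user:
                raise PermissionError("not authorized for this document")
            return doc["body"]

        print(fetch_document_fixed("900001", "alice"))   # allowed
        # fetch_document_fixed("900002", "alice")        # raises PermissionError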

    Privacy, Cyber Risk & Data Security State Issues Securities NYDFS Auto Insurance Enforcement

  • FTC orders prison contractor to fix security exposures after data breach

    Privacy, Cyber Risk & Data Security

    On November 16, the FTC issued a proposed order against an integrated technology services company, alleging a violation of Section 5(a) of the Federal Trade Commission Act. According to the order, the company offered various products and services to jails, prisons, and detention facilities, including means of communication between incarcerated and non-incarcerated individuals and, among other things, the ability for non-incarcerated individuals to deposit funds into the accounts of incarcerated individuals. According to the complaint, and due to the nature of its operations, the company collected individuals’ sensitive personally identifiable information, including names, addresses, passport numbers, driver’s license numbers, Social Security numbers, and financial account information, some of which was exposed in an August 2020 data breach caused by a misconfiguration in the company’s cloud storage environment.

    In its decision, the FTC ordered the company to, among other things, (i) implement a comprehensive data security program, including “change management” measures and multifactor authentication; (ii) notify users affected by the data breach who had not yet received notice, and offer them credit monitoring and identity protection products; (iii) inform consumers and facilities of future data breaches within 30 days; and (iv) notify the FTC within 10 days of reporting any security incident to local, state, or federal authorities.

    Privacy, Cyber Risk & Data Security Federal Issues FTC Data Enforcement

  • CFTC speech highlights new executives, dataset use, and AI Task Force

    Privacy, Cyber Risk & Data Security

    On November 16, CFTC Chairman Rostin Behnam delivered a speech at the 2023 U.S. Treasury Market Conference in New York, where he outlined the CFTC’s plans to make better use of data and to roll out an internal AI task force. One initiative is the hiring of two new executive-level roles: a Chief Data Officer and a Chief Data Scientist. These executives will manage how the CFTC uses AI tools and oversee current processes, including understanding large datasets, cleaning those datasets, identifying and monitoring pockets of stress, and combating spoofing.

    The CFTC also unveiled its plans to create an AI Task Force and to “gather[] information about the current and potential uses of AI by our registered entities, registrants, and market participants in areas such as trading, risk management, and cybersecurity.” The Commission plans to obtain feedback for the AI Task Force through a formal Request for Comment process in 2024. The CFTC hopes these comments will help the agency create a rulemaking policy on “safety and security, mitigation of bias, and customer protection.”

    Privacy, Cyber Risk & Data Security CFTC Big Data Artificial Intelligence Spoofing

  • Minnesota amends health care provisions in extensive new law

    Privacy, Cyber Risk & Data Security

    On November 9, Minnesota enacted Chapter 70 (S.F. No. 2995), a large bill amending certain sections of its health care laws. The bill makes extensive changes to health care provisions, covering prescription contraceptives, hearing aids, mental health, long COVID, and childcare, among many other areas.

    One of the significant new provisions requires a hospital to check whether a patient’s bill is eligible for charity care before referring it to a third-party collection agency. The bill also places new requirements on hospitals collecting on medical debt before they can “garnish wages or bank accounts” of an individual. In addition, a hospital wishing to use a third-party collection agency must first complete an affidavit attesting that it has checked whether the patient is eligible for charity care, confirmed proper billing, given the patient the opportunity to apply for charity care, and, under certain circumstances, offered a reasonable payment plan to a patient unable to pay in one lump sum.

    Privacy Privacy, Cyber Risk & Data Security Minnesota Health Care Medical Debt Debt Collection

  • FTC approves amendment to Safeguards Rule requiring nonbanks to report data breaches

    Privacy, Cyber Risk & Data Security

    On October 27, the FTC approved an amendment to the Safeguards Rule requiring nonbanks to report data breaches. Under the amended rule, nonbank financial institutions, including mortgage brokers, motor vehicle dealers, and payday lenders, must notify the FTC of a data breach as soon as possible, and no later than 30 days after discovery of an incident involving at least 500 consumers. Notice is required if unencrypted consumer information was acquired without the consumers’ authorization; the FTC noted that encrypted consumer information is unlikely to cause consumer harm. The FTC will provide an online form for reporting certain information, including the type of information involved in the security event and the number of consumers affected or potentially affected. Additionally, the amended rule requires nonbanks “to develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.” As previously covered by InfoBytes, the FTC recently extended the compliance deadline for some Safeguards provisions finalized in October 2021 (covered by InfoBytes here) to June of this year.
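
    As a rough illustration of the amended rule’s reporting trigger, the Python sketch below encodes the conditions summarized above (unencrypted information acquired without authorization, at least 500 consumers) and the 30-day outer deadline. This is a simplification for illustration only; the rule’s actual text controls.

        from datetime import date, timedelta

        def ftc_notice_deadline(discovered: date, consumers_affected: int,
                                data_was_encrypted: bool,
                                acquired_without_authorization: bool) -> date | None:
            """Return the latest permissible FTC notice date under the amended
            Safeguards Rule as summarized above, or None if no notice is
            required. A simplified illustration, not the rule's full text."""
            if (not data_was_encrypted and acquired_without_authorization
                    and consumers_affected >= 500):
                # Notice is due "as soon as possible," and no later than
                # 30 days after discovery of the incident.
                return discovered + timedelta(days=30)
            return None

        print(ftc_notice_deadline(date(2024, 1, 2), 12_000,
                                  data_was_encrypted=False,
                                  acquired_without_authorization=True))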

    The commission voted 3-0 to publish the amendment, which will become effective 180 days after its publication in the Federal Register.

    Privacy, Cyber Risk & Data Security Federal Issues Data Breach FTC Safeguards Rule Nonbank Supervision

  • President Biden issues Executive Order targeting AI safety

    Federal Issues

    On October 30, President Biden issued an Executive Order (EO) outlining how the federal government can promote artificial intelligence (AI) safety and security and protect U.S. citizens’ rights by: (i) directing AI developers to share critical information and test results with the U.S. government; (ii) developing standards for safe and secure AI systems; (iii) protecting citizens from AI-enabled fraud; (iv) establishing a cybersecurity program; and (v) creating a National Security Memorandum, developed by the National Security Council, to address AI security.

    President Biden also called on Congress to act by passing “bipartisan data privacy legislation” that (i) prioritizes federal support for privacy preservation; (ii) strengthens privacy technologies; (iii) evaluates agencies’ information collection processes for AI risks; and (iv) develops guidelines for federal agencies to evaluate privacy-preserving techniques. The EO additionally encourages agencies to use existing authorities to protect consumers and promote equity. As previously covered by InfoBytes, the FCC recently proposed using AI to block unwanted robocalls and texts. The order further outlines how the U.S. can continue to lead in AI innovation by catalyzing AI research, promoting a fair and competitive AI ecosystem, and expanding the highly skilled workforce by streamlining visa review.

    Federal Issues Privacy, Cyber Risk & Data Security White House Artificial Intelligence Biden Executive Order Consumer Protection

  • 7th Circuit: Court upholds dismissal of FDCPA lawsuit over debt information sharing

    Courts

    On October 23, the U.S. Court of Appeals for the Seventh Circuit affirmed the dismissal of a consumer’s putative class action lawsuit alleging that a collection agency violated the FDCPA by sharing the consumer’s debt information with a third-party vendor. The court ruled that the consumer lacked standing because she did not sustain an injury from the sharing of her information.

    To collect a defaulted credit-card debt, the defendant collection agency used a third-party vendor to print and mail a collection letter to the consumer. The consumer alleged that the collection agency violated the FDCPA by disclosing her personal information to the vendor, and that the disclosure was analogous to the tort of invasion of privacy. The appeals court disagreed, reasoning that sharing a debtor’s data with a third-party mail vendor to populate and send a form collection letter caused no cognizable harm, legally speaking. The court also noted that the U.S. Courts of Appeals for the Tenth and Eleventh Circuits have reached similar conclusions: “The transmission of information to a single ministerial intermediary does not remotely resemble the publicity element of the only possibly relevant variant of the privacy tort.”

    Courts Privacy, Cyber Risk & Data Security Seventh Circuit FDCPA Class Action Appellate Credit Cards

  • SEC announces 2024 examination priorities, excludes ESG

    Securities

    On October 16, the SEC’s Division of Examinations announced that its 2024 examination priorities will focus on key risk areas related to information security and operational resiliency, crypto assets and emerging financial technology, Regulation Systems Compliance and Integrity (Regulation SCI), and anti-money laundering. SEC registrants, including investment advisers, investment companies, broker-dealers, self-regulatory organizations, clearing agencies, and other market participants, are reminded of their obligations to address, manage, and mitigate these key risks. Notably, ESG was a “significant focus area[]” in 2022 (covered by InfoBytes here) and 2023, but it is not directly mentioned in the 2024 examination priorities.

    According to the report, examiners plan to increase their engagement to support the evolving market and new regulatory requirements. Regarding information security and operational resiliency, examiners will focus on registrants’ procedures surrounding “internal controls, oversight of third-party vendors (where applicable), governance practices, and responses to cyber-related incidents, including those related to ransomware attacks.” Regarding crypto assets and emerging fintech, examiners will focus on registrants’ compliance practices, risk disclosures, and operational resiliency. The SEC also notes in the “Crypto Assets and Emerging Financial Technology” section of the report that it will assess registrants’ preparations for the recently adopted rule shortening the standard settlement cycle for broker-dealer transactions to one business day after the trade (previously two days), which has a compliance date of May 28, 2024. Among other things, the SEC will also examine whether systems covered by Regulation SCI are “reasonably designed” to ensure security, including the physical security of systems housed in data centers.
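
    For registrants checking operational readiness, the shortened cycle is straightforward date arithmetic: settlement moves from two business days after the trade date (T+2) to one (T+1). The Python sketch below skips weekends only; real settlement calendars also exclude market holidays, which this illustration ignores.

        from datetime import date, timedelta

        def settlement_date(trade_date: date, business_days: int) -> date:
            """Advance a trade date by N business days, skipping weekends.
            Market holidays are ignored in this simplified illustration."""
            d = trade_date
            while business_days > 0:
                d += timedelta(days=1)
                if d.weekday() < 5:  # Monday=0 ... Friday=4
                    business_days -= 1
            return d

        trade = date(2024, 5, 31)            # a Friday
        print(settlement_date(trade, 2))     # T+2: the following Tuesday
        print(settlement_date(trade, 1))     # T+1: the following Monday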

    SEC Chair Gary Gensler said that the Division of Examinations plays an important role in “protecting investors and facilitating capital formation,” adding that the Commission will focus on “enhancing trust” in the changing markets.

    Securities SEC Examination Digital Assets Fintech Compliance Privacy, Cyber Risk & Data Security
