The Consumer Financial Protection Bureau (CFPB) has launched the process for independent standard-setting bodies to receive formal recognition, as part of its efforts to shift towards open banking in the United States.

On June 5, 2024, the CFPB finalized a rule outlining the minimum attributes that standard-setting bodies must exhibit to issue standards in compliance with CFPB’s proposed Personal Financial Data Rights Rule.  The Personal Financial Data Rights Rule, proposed in October 2023, is the first federal legal framework for open banking under Section 1033 of the 2010 Consumer Financial Protection Act.  This previously untapped legal authority gives consumers the right to control their personal financial data and assigns the task of implementing personal financial data sharing standards and protections to the CFPB.

As currently drafted, the Personal Financial Data Rights Rule would allow companies to use technical standards developed by standard-setting organizations recognized by the CFPB.  “Industry standard-setting bodies that operate in a fair, open, and inclusive manner have a critical role to play in ensuring a safe, secure, reliable, and competitive data access framework,” the CFPB stated in the proposal.

Under the rule launching the approval process, industry standard-setting bodies can apply to be recognized by the CFPB.  Those seeking approval must demonstrate the following attributes:

  • Openness: A standard-setting organization’s sources, procedures, and processes must be open to all interested parties, including public interest groups, consumer advocates, and app developers;
  • Balanced Decision-Making: The decision-making power to set standards must be balanced across all interested parties. There must be meaningful representation for large and small commercial entities, and balanced representation must be reflected at all levels of the standard-setting body;
  • Due Process: The standard-setting body must use documented and publicly available policies and procedures to provide a fair and impartial process. An appeals process must also be available for the impartial handling of procedural appeals;
  • Consensus: Standards development must proceed by consensus but not necessarily unanimity; and
  • Transparency: Procedures must be transparent to participants and publicly available.

The CFPB also outlined the application process, which begins with a request for recognition and may be followed by requests for additional information from the CFPB and, in some cases, public comment.  Next, the CFPB will review the available information against the attributes listed above, make a decision on the application, and, if approved, officially recognize the organization as a standard-setting body.

The CFPB also has the power to (a) revoke standard-setters’ recognition if they fail to meet the qualifications and (b) impose a maximum recognition duration of five years, after which recognized standard-setters will have to apply for re-recognition.  This rule will take effect 30 days after its publication in the Federal Register.

Fair standards issued by standard-setters outside the agency will help companies comply with the proposed Personal Financial Data Rights Rule.  Interested standard-setters are encouraged to begin ensuring their adherence to this new rule.

In a reminder that open source products can carry significant risks beyond intellectual property, a vulnerability in a compression tool commonly used by developers has triggered widespread concerns. 

XZ Utils (“XZ”) is an open source data compression utility, first published in 2009 and widely used in Linux and macOS systems. The tool handles data compression and decompression and may reside on routers and switches, VPN concentrators, and firewalls. XZ is also used on many smartphones, televisions, and most web servers on the internet. Despite its wide adoption, XZ, like many open source tools, was primarily maintained by a single volunteer, who maintained the software for free.

When this volunteer ran into some personal matters, he turned over the maintenance responsibilities to JiaT75, known as Jia Tan (or what many now believe was a group of hackers working under this alias). In February 2024, Jia Tan released updates to XZ in versions 5.6.0 and 5.6.1 that included malicious code. This malicious code could be used to create a backdoor on infected devices, including the potential for stealing encryption keys or installing malware. Jia Tan took several steps to obfuscate the addition of malicious code. For example, the malicious code was not included in the public GitHub repository but instead was included only in tarball releases. Further, the backdoor was deployed only in certain environments to avoid detection.

On March 29, 2024, a security researcher stumbled onto a software bug that led him to discover and report the XZ attack. The Cybersecurity & Infrastructure Security Agency (CISA) issued an alert recommending that users downgrade to an uncompromised version of XZ, and Linux vendors promptly issued press releases and remediation efforts to minimize the effects of the attack.
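
For organizations that want a quick way to confirm whether an affected XZ build is present on a given host, a simple version check can be a useful first step. The sketch below is illustrative only: it assumes the xz binary is on the system PATH and that its version output follows the usual "xz (XZ Utils) X.Y.Z" format, and it is not a substitute for vendor-issued remediation guidance.

```python
# Illustrative only: flag locally installed XZ Utils builds reported as compromised.
# Assumes the xz binary is on PATH and prints a version line like "xz (XZ Utils) 5.4.6".
import re
import subprocess

COMPROMISED_VERSIONS = {"5.6.0", "5.6.1"}  # releases identified as backdoored


def installed_xz_version():
    """Return the installed xz version string, or None if xz is not available."""
    try:
        output = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    match = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", output)
    return match.group(1) if match else None


if __name__ == "__main__":
    version = installed_xz_version()
    if version is None:
        print("xz not found; nothing to check.")
    elif version in COMPROMISED_VERSIONS:
        print(f"WARNING: xz {version} is a known compromised release; follow current remediation guidance.")
    else:
        print(f"xz {version} is not one of the known compromised releases.")
```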

The XZ attack has raised questions within the open source community about the risks of having critical software maintained and governed by unpaid volunteers. The attack also serves as a reminder that even widely adopted software is at risk of attack, and companies should prepare for future attacks on their own or their third-party vendors’ software. As part of that process, companies should:

Know your dependencies. Be able to quickly identify, or ensure that third-party vendors can quickly identify, whether a certain software package with a known vulnerability is used in any of a company’s software.
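
As an illustration of what such a dependency check might look like in practice, the short sketch below scans a CycloneDX-style software bill of materials (SBOM) for known-bad package versions. The SBOM file name, package name, and version list are assumptions for illustration; companies should rely on their vendors’ advisories and their own inventory tooling.

```python
# A minimal sketch of an automated dependency check against a CycloneDX-style SBOM (JSON).
# The SBOM file name, package name, and affected versions below are illustrative assumptions.
import json

KNOWN_BAD = {"xz-utils": {"5.6.0", "5.6.1"}}  # package name -> versions to flag


def find_affected_components(sbom_path):
    """Return (name, version) pairs in the SBOM that match a flagged release."""
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)
    hits = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "")
        if version in KNOWN_BAD.get(name, set()):
            hits.append((name, version))
    return hits


if __name__ == "__main__":
    for name, version in find_affected_components("sbom.cyclonedx.json"):
        print(f"Flagged dependency: {name} {version}")
```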

Develop disaster recovery and incident response plans. Response plans are typically more comprehensive and more effective when they are created before an incident. Companies should consult with a variety of subject matter experts, internally and externally, to evaluate the size and scope of their business activities, the amount and type of personal information that is collected and stored, the locations of their operations, and applicable federal, state, and sector-specific regulatory requirements.

Apply technology mitigation strategies. Limit access to critical software systems to only those employees and independent contractors who need access to such systems and monitor usage for any unusual behavior.

Review applicable policies and procedures often. Outside counsel can review policies and procedures to ensure compliance with the fast-changing regulatory landscape.

Review vendor contracts for liability protections. Scrutinize vendor contracts for appropriate risk shifting terms. Indemnification clauses may be appropriate and might include claims related to cybersecurity incidents, data breaches and regulatory liability. Representations and warranties might be used to represent that certain mitigation strategies will be deployed and compliance standards will be maintained. Insurance can be a helpful tool, and covenants to purchase adequate insurance might be considered. Further, it may be appropriate to carve out some cybersecurity incidents from a liability cap. In-house and outside counsel can review risk shifting terms in light of current market and legal trends.

Minnesota is poised to become the latest state to pass legislation regulating the processing and controlling of personal data (HF 4757 / SF 4782). If signed into law by Governor Tim Walz, the Minnesota Consumer Data Privacy Act, or MCDPA, would go into effect on July 31, 2025, and would provide various consumer data privacy rights and impose obligations on entities that control or process Minnesota residents’ personal data.

The MCDPA applies to entities that control or process the personal data of 100,000 or more consumers, or that derive over 25% of their revenue from the sale of personal data and control or process the personal data of 25,000 or more consumers. Following in the footsteps of Texas and Nebraska, the MCDPA exempts small businesses as defined by the United States Small Business Administration. The law also contains targeted data-level exemptions for health and financial data processing, but not entity-level exemptions.

In addition to the rights of access, rectification, erasure, and portability, and the right to opt out of targeted advertising, the sale of personal data, and profiling, consumers under the MCDPA would also have the novel right to question the result of a profiling decision and to request additional information from a controller regarding that decision.

The MCDPA outlines several responsibilities with which data controllers must comply, some of which are new obligations that go beyond those contained in other laws. For example, the MCDPA requires that a “controller shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data, including the maintenance of an inventory of the data that must be managed to exercise these responsibilities.” The maintenance of this type of inventory is a first under U.S. state privacy law.

There are also obligations relating to transparency in privacy notices and disclosures; limitations on the use of data in connection with processing and physical data security practices; nondiscrimination in the processing of personal data; and an obligation to appoint a chief privacy officer or privacy lead for the organization.

Enforcement would fall under the purview of the Minnesota Attorney General, and businesses would have a thirty-day right-to-cure period; the cure provision expires January 31, 2026.

Assuming it is signed into law as expected, Minnesota will join the ranks of the seventeen other states (eighteen counting Florida) that have passed comprehensive consumer privacy acts. As with each of the other states’ acts, Minnesota’s bill shares some similarities with these other acts while also containing some unique provisions. Businesses in Minnesota would do well to start reviewing procedures and processes in preparation for the MCDPA.

Colorado has become the first state in the United States to pass legislation (SB24-205) regulating the use of artificial intelligence (AI). The legislation is designed to address the ethical, legal, and social implications of AI technology across various sectors.

The bill applies to any person doing business in Colorado, including developers and deployers of high-risk AI systems that are intended to interact with consumers. The bill defines a high-risk AI system as any AI system that is a substantial factor in making a consequential decision. Notably, the definition excludes (among other things) anti-fraud technology that does not use facial recognition, anti-malware tools, data storage, databases, video games, and chat features, so long as they do not make consequential decisions.

The bill includes a comprehensive framework governing the use of AI within government, education, and business, with a focus on promoting ethical standards, transparency, and accountability in AI development and deployment. The bill requires disclosure of the use of AI in decision-making processes, sets out ethical standards to guide AI development, and provides mechanisms of recourse and oversight in cases of AI-related bias or error. These recourse mechanisms include opportunities for consumers to correct any incorrect personal data processed by a high-risk AI system, as well as an opportunity to appeal an adverse decision made by such a system, with human review where possible.  The disclosure requirements will apply to developers, requiring a publicly available statement that describes methods used to manage risks of algorithmic discrimination.

The bill requires the development of several compliance mechanisms if an entity uses high-risk AI systems. These include impact assessments, risk management policies and programs, and annual review of the high-risk systems. These mechanisms are designed to promote transparency in the development and use of these systems.

The passage of this bill positions Colorado at the forefront of AI regulation in the US, setting a precedent for other states and jurisdictions grappling with similar challenges.

Judges are beginning to address the increasing use of AI tools in court filings—including reacting to instances of abuse by lawyers using them for generative purposes and requiring disclosures regarding the scope of AI use in the drafting of legal submissions. Now JAMS, the largest private provider of alternative dispute resolution services worldwide, has issued rules—effective immediately—designed to address the use and impact of AI.

The Upshot

  • Alternative Dispute Resolution (ADR) providers are now joining courts in trying to grapple with the impact of AI on the practice of law.
  • As they should when dealing with courts, litigants need to pay close attention to the rules of a particular forum as they pertain to AI.
  • When choosing an arbitration forum in agreements, there may be reasons to choose a forum with AI rules and to specify that those procedures be followed. 

The Bottom Line

AI is reshaping the legal landscape and compelling the industry to adapt. Staying up-to-date with these changes has become as fundamental to litigation as other procedural rules. The Artificial Intelligence Team at Ballard Spahr monitors developments in AI and is advising clients on required disclosures, risk mitigation, the use of AI tools, and other evolving issues. 

Judges are beginning to address the increasing use of artificial intelligence (AI) tools in court filings—including reacting to instances of abuse by lawyers using them for generative purposes and requiring disclosures regarding the scope of AI use in the drafting of legal submissions—by issuing standing orders, as detailed here. Staying up-to-date with these changes will soon be as fundamental to litigation as other procedural rules.

In line with court trends, Judicial Arbitration and Mediation Services (JAMS), an alternative dispute resolution (ADR) services company, recently released new rules for cases involving AI. JAMS emphasized that the purpose of the guidelines is to “refine and clarify procedures for cases involving AI systems,” and to “equip legal professionals and parties engaged in dispute resolution with clear guidelines and procedures that address the unique challenges presented by AI, such as questions of liability, algorithmic transparency, and ethical considerations.”

Although courts have not yet settled on a definition for AI, JAMS took deliberate steps to define AI specifically as “a machine-based system capable of completing tasks that would otherwise require cognition.” This definition makes the scope of the rules clearer. Additionally, the rules encompass an electronically stored information (ESI) protocol approved for AI cases, along with procedures for overseeing the examination of AI systems, materials, and experts to accommodate instances where the existing ADR process lacks adequate safeguards for handling the intricate and proprietary nature of such data.

Specifically, the procedures dictate that before any preliminary conference, each party must voluntarily exchange their non-privileged and relevant documents and other ESI. JAMS suggests that prior to such exchange, the parties should enter into their AI Disputes Protective Order to protect each party’s confidential information.

The form protective order, which is not provided under JAMS’s regular rules, limits the disclosure of certain designated documents and information to the following specific parties: counsel, named parties, experts, consultants, investigators, the arbitrator, court reporters and staff, witnesses, the mediator, the author or recipient of the document, other persons after notice to the other side, and “outside photocopying, microfilming or database service providers; trial support firms; graphic production services; litigation support services; and translators engaged by the parties during this Action to whom disclosure is reasonably necessary for this Action.” The list of parties privy to such confidential information does not include any generative AI services, a choice that is consistent with the broad concern that confidential client information is currently unprotected in this new AI world.

The rules further provide that in cases where the AI systems themselves are under dispute and require production or inspection, the disclosing party must provide access to the systems and corresponding materials to at least one expert in a secured environment established by that party. The expert is prohibited from removing any materials or information from this designated environment.

Additionally, experts providing opinions on AI systems during the ADR process must be mutually agreed upon by the parties or designated by the arbitrator in cases of disagreement. Moreover, the rules confine expert testimony on technical issues related to AI systems to a written report addressing questions posed by the arbitrator, supplemented by testimony during the hearing. These changes recognize the general need for both security and technical expertise in this area so that the ADR process can remain digestible to the arbitrator/mediator, who likely has no, or limited, prior experience in the area.

While JAMS claims to be the first ADR services company to issue such guidance, other similar organizations have advertised that their existing protocols are already suited to the current AI landscape and court rules.

Indeed, how to handle AI in the ADR context has been top of mind for many in the field. Last year, the Silicon Valley Arbitration & Mediation Center (SVAMC), a nonprofit organization focused on educating about the intersection of technology and ADR, released its proposed “Guidelines on the Use of Artificial Intelligence in Arbitration.” SVAMC recommends that those participating in ADR use its guidelines as a “model” for navigating the procedural aspects of ADR related to AI, which may involve incorporating such guidelines into some form of protective order.

In part, the clauses (1) require that the parties familiarize themselves with the relevant AI tool’s uses, risks, and biases, (2) make clear that the parties of record remain subject to “applicable ethical rules or professional standards” and that parties are required to verify the accuracy of any work product that AI generates, as that party will be held responsible for any inaccuracies, and (3) provide that disclosure regarding the use of AI should be determined on a case-by-case basis.

The SVAMC guidelines also focus on confidentiality considerations, requiring that the parties redact privileged information before inputting it into AI tools in certain instances. SVAMC even goes so far as to make clear that arbitrators cannot substitute AI for their own decision-making power. SVAMC’s Guidelines are a useful tool for identifying the significant factors that parties engaging in ADR should contemplate, and that the ADR community at large is contemplating.

As courts provide additional legal guidance, and more AI-use issues arise, we expect that more ADR service companies will move in a similar direction as JAMS, and potentially adopt versions of SVAMC’s guidance, as the procedures and technology continue to evolve.

Newly effective regulations governing confidentiality of Substance Use Disorder (SUD) records now more closely mirror regulations implementing the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and other federal law. The new measures ease the administrative burden on programs by aligning regulations governing the privacy of Part 2 SUD records with the regulatory framework governing HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act. Part 2 programs have until February 16, 2026, to implement any necessary changes.

The Upshot

  • Jointly, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) and the Substance Abuse and Mental Health Services Administration (SAMHSA) revised 42 C.F.R. Part 2 (Part 2), which governs certain patient-identifying substance use disorder (SUD) records, effective April 16, 2024.
  • Significant regulatory changes implement Section 3221 of the Coronavirus Aid, Relief, and Economic Security (CARES) Act (relating to “confidentiality and disclosure of records relating to substance use disorder”), which required HHS to update both HIPAA and Part 2 regulations in order to better align respective privacy protections.
  • The changes consist of new and revised Part 2 regulations governing, in part: patient consent for SUD record use, disclosure, and redisclosure; individual patient rights relating to notices, accountings of disclosures, and complaint procedures; and increased penalties for noncompliance. 
  • Part 2 programs may now maintain, use, and disclose records in a manner more consistent with HIPAA regulations.
  • As a result, Part 2 programs have expanded flexibility in utilizing Part 2 records, but must carefully note additional compliance responsibilities and civil penalties for noncompliance.

The Bottom Line

Part 2 violations will now be subject not only to criminal penalties, but also to the civil monetary penalties established by HIPAA, HITECH, and their implementing regulations. These regulations (along with the CARES Act) fundamentally alter the potential cost of noncompliance for Part 2 programs and may ultimately result in increased enforcement activity.

Read the full alert here.

In a regulatory filing, Reddit announced that the FTC is probing Reddit’s proposal to sell, license and share user-generated content with third parties to train artificial intelligence (AI) models.  This move underscores the growing scrutiny over how online platforms harness the vast amounts of data they collect, particularly in the context of AI development. 

The investigation brings to light several legal considerations that could have far-reaching consequences. Chief among them, it underscores the need for clear and transparent user agreements, including terms of service and privacy policies. Users must be fully aware of how their data is used, especially when it contributes to the development of AI technologies. This approach tracks with the FTC’s stance that companies seeking to use consumer personal data to train AI models should notify consumers meaningfully rather than surreptitiously change user agreements.

The FTC’s actions signal a more aggressive stance on data privacy and usage, particularly in relation to AI. For the tech industry, this could mean a shift towards more stringent data handling and consent practices. Companies may need to reassess their data collection and usage policies to ensure compliance with emerging legal standards. Furthermore, this investigation could pave the way for new regulations specifically addressing the use of personal data in AI development.

The UN General Assembly has adopted a landmark resolution focusing on the safe, secure, and trustworthy use of Artificial Intelligence (AI). This resolution, led by the United States and supported by over 120 Member States, marks the first time the Assembly has adopted a resolution on regulating AI. The resolution calls for protection of rights both online and offline, urges cooperation to bridge the digital divide, and aims for AI to advance sustainable development globally. While the UN Resolution generally consists of high level principles as opposed to specific compliance steps, it is an important reminder that there is a growing legal and regulatory consensus on a responsible AI governance framework.

The UN Resolution emphasizes the development of AI systems in a way that is safe, secure, trustworthy, and sustainable. It recognizes the potential of AI to progress the Sustainable Development Goals (SDGs) and underlines the importance of human-centric, reliable, and ethical AI systems. It stresses the need for global consensus on AI governance and capacity building in developing countries, ensuring AI benefits are shared globally. The UN Resolution also highlights the urgency of developing effective safeguards and standards for AI, promoting transparent, inclusive, and equitable use of AI while respecting intellectual property rights and privacy.

The UN Resolution specifically encourages all Member states to be cognizant of data security issues when promoting AI systems by “[f]ostering the development, implementation and disclosure of mechanisms of risk monitoring and management, mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate, across the life cycle of artificial intelligence systems.” The UN Resolution suggests that businesses need to develop and implement comprehensive risk monitoring and management systems for their AI technologies. This includes securing data throughout the AI lifecycle, ensuring robust personal data protection, and regularly conducting privacy and impact assessments. Essentially, companies should be proactive in identifying and managing potential risks associated with AI use, particularly regarding data privacy. This approach is crucial for compliance with emerging international standards and for maintaining trust in AI systems and applications.

The UN Resolution also encourages Member states to consider data privacy when promoting AI systems by “[s]afeguarding privacy and the protection of personal data when testing and evaluating systems, and for transparency and reporting requirements in compliance with applicable international, national and subnational legal frameworks, including on the use of personal data throughout the life cycle of artificial intelligence systems.” The UN Resolution implies a need to prioritize privacy in all stages of AI system development and usage. Transparency should be ensured in how personal data is handled and the relevant legal frameworks should be complied with at all levels. This includes establishing clear policies and procedures for data privacy, regularly reviewing and reporting on AI system operations in relation to personal data use, and staying updated with evolving regulations in different jurisdictions. Adherence to these standards is essential for legal compliance and maintaining consumer trust.

In some ways, after the passage of the EU AI Act, the UN Resolution could be seen as less significant because it does not carry a regulatory enforcement threat. However, at the very least, the UN Resolution should serve as a warning to companies around the globe—including those that operate only in the United States—that regulators everywhere are looking for certain core governance positions when it comes to AI.  Companies should be sure to document those governance steps or risk becoming the focus of enforcement actions under myriad theories.

In this month’s webcast, Greg Szewczyk and Kelsey Fayer of Ballard Spahr’s privacy & data security group discuss new state consumer health data laws in Connecticut, Nevada, and Washington, highlighting the laws’ scope, obligations for regulated entities, and enforcement mechanisms.

On March 7, 2024, a bipartisan coalition of 43 state attorneys general sent a letter to the Federal Trade Commission (“FTC”) urging the FTC to update the regulations (the “COPPA Rule”) implementing the Children’s Online Privacy Protection Act (“COPPA”).

COPPA authorizes state attorneys general to bring actions as parens patriae to protect their citizens from harm.  In the March 7 letter, the AGs noted that it has been more than ten years since the COPPA Rule was amended and that the digital landscape is much different now than it was in 2013, pointing specifically to the rise of mobile devices and social networking.

Unsurprisingly, the AGs recommend that the COPPA Rule contain stronger protections.  For example, the AGs recommend that the definition of “personal information” be updated to include avatars generated from a child’s image even if no photograph is uploaded to the site or service; biometric identifiers and data derived from voice, gait, and facial data; and government-issued identifiers such as student ID numbers.

Additionally, the AGs suggest that the phrase “concerning the child or parents of that child,” which is contained in the definition of “personal information,” should be clarified.  Specifically, the AGs state that if companies link the profiles of both parent and child, the aggregated information of both profiles can indirectly expose the child’s personal information, such as their home address, even when it was not originally submitted by the child.  To “clos[e] this gap,” the AGs suggest amending the definition of personal information to include the phrase “which may otherwise be linked or reasonably linkable to personal information of the child.”

In addition to broadening the definition of personal information, the AGs also suggest two revisions that could materially impact how businesses use data.  First, the AGs suggest narrowing the exception for uses that support the “internal operations of the website or online services” to limit personalization to user-driven actions and to prohibit the use of personal information to maximize user engagement.  Second, the AGs suggest that the FTC limit businesses’ ability to use persistent identifiers for contextual advertising.  While the focus in general privacy laws is on cross-contextual advertising – i.e., advertising that is based on activity across businesses – the AGs argue that advancements in technology allow operators to serve sophisticated and precise contextual ads, often leveraging AI.

While it remains to be seen how the FTC will amend the COPPA Rule, it is safe to say that the bipartisan AG letter should be interpreted as a signal that AGs across the country are increasingly focused on how businesses process children’s data.  Especially in states where comprehensive privacy laws are in effect (or going into effect soon), we should expect aggressive enforcement on the issue.