Judges are beginning to address the increasing use of AI tools in court filings, including by reacting to instances of abuse by lawyers who use AI for generative purposes and by requiring disclosures regarding the scope of AI use in the drafting of legal submissions. Now JAMS, the largest private provider of alternative dispute resolution services worldwide, has issued rules, effective immediately, designed to address the use and impact of AI.

The Upshot

  • Alternative dispute resolution (ADR) providers are now joining courts in grappling with the impact of AI on the practice of law.
  • Just as they must with courts, litigants need to pay close attention to a particular forum's rules as they pertain to AI.
  • When designating an arbitration forum in agreements, there may be reasons to select a forum with AI-specific rules and to specify that those procedures be followed.

The Bottom Line

AI is reshaping the legal landscape and compelling the industry to adapt. Staying up-to-date with these changes has become as fundamental to litigation as other procedural rules. The Artificial Intelligence Team at Ballard Spahr monitors developments in AI and is advising clients on required disclosures, risk mitigation, the use of AI tools, and other evolving issues. 

Judges are beginning to address the increasing use of artificial intelligence (AI) tools in court filings by issuing standing orders, as detailed here, including in reaction to instances of abuse by lawyers using AI for generative purposes and by requiring disclosures regarding the scope of AI use in the drafting of legal submissions. Staying up-to-date with these changes will soon be as fundamental to litigation as other procedural rules.

In line with court trends, Judicial Arbitration and Mediation Services (JAMS), an alternative dispute resolution (ADR) services company, recently released new rules for cases involving AI. JAMS emphasized that the purpose of the guidelines is to “refine and clarify procedures for cases involving AI systems,” and to “equip legal professionals and parties engaged in dispute resolution with clear guidelines and procedures that address the unique challenges presented by AI, such as questions of liability, algorithmic transparency, and ethical considerations.”

Although courts have not yet settled on a definition of AI, JAMS took the deliberate step of defining AI as “a machine-based system capable of completing tasks that would otherwise require cognition,” a definition that clarifies the scope of its rules. Additionally, the rules include an electronically stored information (ESI) protocol approved for AI cases, along with procedures for overseeing the examination of AI systems, materials, and experts in instances where the existing ADR process lacks adequate safeguards for handling the intricate and proprietary nature of such data.

Specifically, the procedures dictate that, before any preliminary conference, each party must voluntarily exchange relevant, non-privileged documents and other ESI. JAMS suggests that, prior to such exchange, the parties enter into the JAMS AI Disputes Protective Order to protect each party’s confidential information.

The form protective order, which is not provided under JAMS’s regular rules, limits the disclosure of certain designated documents and information to the following parties: counsel, named parties, experts, consultants, investigators, the arbitrator, court reporters and staff, witnesses, the mediator, the author or recipient of the document, other persons after notice to the other side, and “outside photocopying, microfilming or database service providers; trial support firms; graphic production services; litigation support services; and translators engaged by the parties during this Action to whom disclosure is reasonably necessary for this Action.” The list of parties privy to such confidential information does not include any generative AI services, a choice consistent with the broad concern that confidential client information is currently unprotected in this new AI world.

The rules further provide that in cases where the AI systems themselves are under dispute and require production or inspection, the disclosing party must provide access to the systems and corresponding materials to at least one expert in a secured environment established by that party. The expert is prohibited from removing any materials or information from this designated environment.

Additionally, experts providing opinions on AI systems during the ADR process must be mutually agreed upon by the parties or designated by the arbitrator in cases of disagreement. Moreover, the rules confine expert testimony on technical issues related to AI systems to a written report addressing questions posed by the arbitrator, supplemented by testimony during the hearing. These changes recognize the general need for both security and technical expertise in this area so that the ADR process remains digestible to the arbitrator or mediator, who likely has limited or no prior experience in the area.

While JAMS claims to be the first ADR services company to issue such guidance, other similar organizations have advertised that their existing protocols are already suited to the current AI landscape and court rules.

Indeed, how to handle AI in the ADR context has been top of mind for many in the field. Last year, the Silicon Valley Arbitration & Mediation Center (SVAMC), a nonprofit organization focused on educating about the intersection of technology and ADR, released its proposed “Guidelines on the Use of Artificial Intelligence in Arbitration.” SVAMC recommends that those participating in ADR use its guidelines as a “model” for navigating the procedural aspects of ADR related to AI, which may involve incorporating the guidelines into some form of protective order.

In part, the guidelines (1) require that the parties familiarize themselves with the relevant AI tool’s uses, risks, and biases; (2) make clear that the parties of record remain subject to “applicable ethical rules or professional standards” and must verify the accuracy of any AI-generated work product, as the party will be held responsible for any inaccuracies; and (3) provide that disclosure regarding the use of AI should be determined on a case-by-case basis.

The SVAMC guidelines also focus on confidentiality considerations, requiring that in certain instances the parties redact privileged information before inputting it into AI tools. SVAMC even goes so far as to make clear that arbitrators cannot delegate their own decision-making authority to AI. SVAMC’s Guidelines are a useful tool for identifying the significant factors that parties engaging in ADR should contemplate, and that the ADR community at large is contemplating.

As courts provide additional legal guidance and more AI-use issues arise, we expect that more ADR service companies will move in a direction similar to JAMS’s, and potentially adopt versions of SVAMC’s guidance, as the procedures and technology continue to evolve.

Newly effective regulations governing confidentiality of Substance Use Disorder (SUD) records now more closely mirror regulations implementing the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and other federal law. The new measures ease the administrative burden on programs by aligning regulations governing the privacy of Part 2 SUD records with the regulatory framework governing HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act. Part 2 programs have until February 16, 2026, to implement any necessary changes.

The Upshot

  • Jointly, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) and the Substance Abuse and Mental Health Services Administration (SAMHSA) revised 42 C.F.R. Part 2 (Part 2), which governs certain patient-identifying substance use disorder (SUD) records, effective April 16, 2024.
  • Significant regulatory changes implement Section 3221 of the Coronavirus Aid, Relief, and Economic Security (CARES) Act (relating to “confidentiality and disclosure of records relating to substance use disorder”), which required HHS to update both HIPAA and Part 2 regulations in order to better align respective privacy protections.
  • The changes consist of new and revised Part 2 regulations governing, in part: patient consent for SUD record use, disclosure, and redisclosure; individual patient rights relating to notices, accountings of disclosures, and complaint procedures; and increased penalties for noncompliance. 
  • Part 2 programs may now maintain, use, and disclose records in a manner more consistent with HIPAA regulations.
  • As a result, Part 2 programs have expanded flexibility in utilizing Part 2 records, but must carefully note additional compliance responsibilities and civil penalties for noncompliance.

The Bottom Line

Part 2 violations will now be subject not only to criminal penalties, but also to the civil monetary penalties established by HIPAA, HITECH, and their implementing regulations. These regulations (along with the CARES Act) fundamentally alter the potential cost of noncompliance for Part 2 programs and may ultimately result in increased enforcement activity.

Read the full alert here.

In a regulatory filing, Reddit announced that the FTC is probing Reddit’s proposal to sell, license and share user-generated content with third parties to train artificial intelligence (AI) models.  This move underscores the growing scrutiny over how online platforms harness the vast amounts of data they collect, particularly in the context of AI development. 

The investigation brings to light several legal considerations that could have far-reaching consequences. In particular, it underscores the importance of clear and transparent user agreements, including terms of service and privacy policies. Users must be fully aware of how their data is used, especially when it contributes to the development of AI technologies. This approach tracks the FTC’s stance that companies seeking to use consumer personal data to train AI models should notify consumers meaningfully rather than surreptitiously change user agreements.

The FTC’s actions signal a more aggressive stance on data privacy and usage, particularly in relation to AI. For the tech industry, this could mean a shift towards more stringent data handling and consent practices. Companies may need to reassess their data collection and usage policies to ensure compliance with emerging legal standards. Furthermore, this investigation could pave the way for new regulations specifically addressing the use of personal data in AI development.

The UN General Assembly has adopted a landmark resolution focusing on the safe, secure, and trustworthy use of Artificial Intelligence (AI). This resolution, led by the United States and supported by over 120 Member States, marks the first time the Assembly has adopted a resolution on regulating AI. The resolution calls for protection of rights both online and offline, urges cooperation to bridge the digital divide, and aims for AI to advance sustainable development globally. While the UN Resolution generally consists of high level principles as opposed to specific compliance steps, it is an important reminder that there is a growing legal and regulatory consensus on a responsible AI governance framework.

The UN Resolution emphasizes the development of AI systems in a way that is safe, secure, trustworthy, and sustainable. It recognizes the potential of AI to advance the Sustainable Development Goals (SDGs) and underlines the importance of human-centric, reliable, and ethical AI systems. It stresses the need for global consensus on AI governance and capacity building in developing countries, ensuring AI benefits are shared globally. The UN Resolution also highlights the urgency of developing effective safeguards and standards for AI, promoting transparent, inclusive, and equitable use of AI while respecting intellectual property rights and privacy.

The UN Resolution specifically encourages all Member states to be cognizant of data security issues when promoting AI systems by “[f]ostering the development, implementation and disclosure of mechanisms of risk monitoring and management, mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate, across the life cycle of artificial intelligence systems.” The UN Resolution suggests that businesses need to develop and implement comprehensive risk monitoring and management systems for their AI technologies. This includes securing data throughout the AI lifecycle, ensuring robust personal data protection, and regularly conducting privacy and impact assessments. Essentially, companies should be proactive in identifying and managing potential risks associated with AI use, particularly regarding data privacy. This approach is crucial for compliance with emerging international standards and for maintaining trust in AI systems and applications.

The UN Resolution also encourages Member states to consider data privacy when promoting AI systems by “[s]afeguarding privacy and the protection of personal data when testing and evaluating systems, and for transparency and reporting requirements in compliance with applicable international, national and subnational legal frameworks, including on the use of personal data throughout the life cycle of artificial intelligence systems.” The UN Resolution implies a need to prioritize privacy in all stages of AI system development and usage. Transparency should be ensured in how personal data is handled and the relevant legal frameworks should be complied with at all levels. This includes establishing clear policies and procedures for data privacy, regularly reviewing and reporting on AI system operations in relation to personal data use, and staying updated with evolving regulations in different jurisdictions. Adherence to these standards is essential for legal compliance and maintaining consumer trust.

In some ways, after the passage of the EU AI Act, the UN Resolution could be seen as less significant because it does not carry a regulatory enforcement threat. At the very least, however, the UN Resolution should serve as a warning to companies around the globe, including those that operate only in the United States, that regulators everywhere are looking for certain core governance positions when it comes to AI. Companies should be sure to document those governance steps or risk becoming the focus of enforcement actions under myriad theories.

In this month’s webcast, Greg Szewczyk and Kelsey Fayer of Ballard Spahr’s privacy and data security group discuss new state consumer health data laws in Connecticut, Nevada, and Washington, highlighting the laws’ scope, obligations for regulated entities, and enforcement mechanisms.

On March 7, 2024, a bipartisan coalition of 43 state attorneys general sent the Federal Trade Commission (“FTC”) a letter urging the FTC to update the regulations (the “COPPA Rule”) implementing the Children’s Online Privacy Protection Act (“COPPA”).

Under COPPA, state attorneys general are authorized to bring actions as parens patriae to protect their citizens from harm. In the March 7 letter, the AGs noted that it has been more than ten years since the COPPA Rule was last amended, and the digital landscape is much different now than it was in 2013. The AGs specifically point to the rise of mobile devices and social networking.

Unsurprisingly, the AGs recommend that the COPPA Rule contain stronger protections. For example, the AGs recommend that the definition of “personal information” be updated to include avatars generated from a child’s image even if no photograph is uploaded to the site or service; biometric identifiers and data derived from voice, gait, and facial data; and government-issued identifiers such as student ID numbers.

Additionally, the AGs suggest that the phrase “concerning the child or parents of that child,” which is contained in the definition of “personal information,” should be clarified. Specifically, the AGs state that if companies link the profiles of both parent and child, the aggregated information of both profiles can indirectly expose the child’s personal information, such as a home address, even when it was not originally submitted by the child. To “clos[e] this gap,” the AGs suggest amending the definition of personal information to include the phrase “which may otherwise be linked or reasonably linkable to personal information of the child.”

In addition to broadening the definition of personal information, the AGs also suggest two revisions that could materially impact how businesses use data. First, the AGs suggest narrowing the exception for uses that support the “internal operations of the website or online services” so that personalization is limited to user-driven actions and the use of personal information to maximize user engagement is prohibited. Second, the AGs suggest that the FTC limit businesses’ ability to use persistent identifiers for contextual advertising. While general privacy laws focus on cross-contextual advertising (i.e., advertising based on activity across businesses), the AGs argue that advancements in technology allow operators to serve sophisticated and precise contextual ads, often leveraging AI.

While it remains to be seen how the FTC will amend the COPPA Rule, it is safe to say that the bipartisan AG letter should be interpreted as a signal that AGs across the country are increasingly focused on how businesses process children’s data. Especially in states where comprehensive privacy laws are in effect (or going into effect soon), we should expect aggressive enforcement on the issue.

On March 7, 2024, the California Privacy Protection Agency (CPPA) released new materials for review and discussion at the agency’s March 8, 2024, board meeting. Among the materials released were draft risk assessment and automated decision-making regulations, updates to existing regulations, and enforcement updates and priorities.

We will be exploring the details of these various updates in the coming weeks, and more changes are likely before the regulations are finalized. But companies should take note that the CPPA is moving quickly, and apparently aggressively.

Even before the new regulations are finalized, companies should pay particular attention to the areas flagged as enforcement priorities, including privacy policies and the implementation of consumer requests. Especially in light of California Attorney General Bonta’s promise of aggressive enforcement without the opportunity to cure, it is safe to assume that these areas will be in the crosshairs.

The FTC published guidance warning companies that “[i]t may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”  In other words, the long-standing practice of simply updating the privacy policy may not provide sufficient notice to consumers depending on the nature of the changes.

As companies turn to consumer data to power AI systems, the FTC signaled that such practices constitute material changes to a company’s data practices. These changes require companies to square new business goals with existing privacy commitments. The FTC made clear that companies cannot simply do away with existing privacy commitments by changing their privacy policies and terms to apply retroactively; instead, businesses must inform consumers before adopting permissive data practices such as using personal data for AI training. Therefore, companies seeking to share data with AI developers, or to process data in-house in ways that are not reflected in current privacy policies and terms, should update both and notify consumers of the updates as a prerequisite to taking on new processing activities such as AI training.

However, although the announcement focused on the use of data to train AI, the FTC’s warning went noticeably broader by specifically referencing sharing personal data with third parties.

It is worth noting that the FTC’s stance is generally in line with some state privacy laws that require notification to consumers of any material change in their privacy policies.  For example, under the Colorado Privacy Act, certain types of changes require notice to consumers beyond simply updating the privacy policy—even if the policy states that changes are effective upon posting.  Similarly, if the change constitutes a secondary use, affirmative consent may be required.

Given the changing landscape, companies should be particularly diligent in assessing what type of notice must be given—and when it must be given—before engaging in a new processing activity with data that has already been collected.  Or as the FTC punnily puts it, “there’s nothing intelligent about obtaining artificial consent.”

On February 21, California Attorney General (AG) Rob Bonta announced a settlement with DoorDash for violations of the California Consumer Privacy Act (CCPA) and the California Online Privacy Protection Act (CalOPPA) relating to DoorDash’s participation in a marketing co-operative. This action represents only the second public enforcement action since the CCPA went into effect in 2020.

According to the complaint and settlement, DoorDash participated in a marketing co-operative, as part of which unrelated businesses contribute personal information of their customers for the purpose of advertising their own products to customers from other participating businesses.  According to AG Bonta, this was an exchange of personal information for the benefit of DoorDash and therefore a “sale” under the CCPA.  As a sale, DoorDash was required under California law to provide notice of the sale as well as the opportunity to opt out of the sale.  AG Bonta alleged that DoorDash failed to provide the necessary notice and opt-out rights. 

While participation in such a marketing co-operative is largely recognized as a sale under the CCPA at this point, the enforcement action is notable for a couple of reasons. First, the complaint takes positions that arguably require disclosures in privacy policies that go beyond the plain language of the regulations. So, even companies that feel confident they comply with the regulations would be wise to assess their policies in light of the allegations.

Second, the conduct at issue occurred in 2020 and 2021. While the complaint notes that DoorDash did not cure when provided with a notice of violation in 2020, it indicates that curing may not have been possible, because a cure would require making affected consumers whole by restoring them to the same position they would have been in had their data never been sold. Additionally, AG Bonta states in his press release that “The CCPA has been in effect for over four years now, and businesses must comply with this important law. Violations cannot be cured, and my office will hold businesses accountable if they sell data without protecting consumers’ rights.”

There are many lessons to learn from this action, but perhaps the most important is that businesses should prepare for what may be an increasingly aggressive enforcement policy without the opportunity to cure. To do so, businesses should assess not only where they have gaps and how they can close them, but also what can be done to best position themselves against any arguments about past non-compliance.

On February 9, 2024, California’s Third District Court of Appeal reinstated the California Privacy Protection Agency’s (“CPPA”) ability to enforce the California Privacy Rights Act of 2020 (“CPRA”) regulations. The CPRA regulations aim to enhance consumer privacy rights and protections in an increasingly digital age.

The Court of Appeal’s decision comes after the California Chamber of Commerce filed a lawsuit in 2023 challenging the CPPA’s authority to enforce the CPRA regulations, citing government overreach, conflicts with existing law, and the imposition of unnecessary burdens on businesses; that suit resulted in the trial court imposing a 12-month delay on enforcement. Holding that the trial court erred in imposing the one-year delay, the Court of Appeal reaffirmed the CPPA’s role in overseeing compliance with the state’s privacy laws, noting that no “explicit and forceful” language mandates that the CPPA wait one year from the date the final regulations were approved to begin enforcement. It remains to be seen whether the California Chamber of Commerce will seek a rehearing or review.

This development is significant for both consumers and businesses. Consumers will continue to have significant rights (with the backing of the CPPA) related to their personal information. For businesses operating and doing business in California, the delay on CPPA enforcement activities that was once in place is no longer a reality; the February 9 decision serves as an important reminder to covered businesses to ensure their privacy practices comply with the CPRA regulations.

As consumer privacy rights continue to expand in an increasingly digital environment and data privacy remains an important issue, it is essential for covered businesses to stay informed and adhere to the CPRA regulations.