Minnesota is the latest state to pass legislation regulating the controlling and processing of personal data (HF 4757 / SF 4782). If signed into law by Governor Tim Walz, the Minnesota Consumer Data Privacy Act (MCDPA) would go into effect on July 31, 2025, providing various consumer data privacy rights and imposing obligations on entities that control or process Minnesota residents’ personal data.

The MCDPA applies to entities that control or process the personal data of 100,000 or more consumers, or that derive over 25% of their revenue from the sale of personal data and control or process the personal data of 25,000 or more consumers. Following in the footsteps of Texas and Nebraska, the MCDPA exempts small businesses as defined by the United States Small Business Administration. The law also contains targeted data-level exemptions for health and financial data processing, but not entity-level exemptions.

In addition to the rights of access, rectification, erasure, portability, and opt-out of targeted advertising, the sale of personal data, and profiling, consumers would also have, under the MCDPA, the novel right to question the result of a profiling decision and to request additional information from a controller regarding that decision.

The MCDPA outlines several responsibilities with which controllers of data must comply, some of which are new obligations beyond those contained in other state laws. For example, the MCDPA requires that a “controller shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data, including the maintenance of an inventory of the data that must be managed to exercise these responsibilities.” The maintenance of this type of inventory is a first under U.S. state privacy law.

There are also obligations relating to transparency in privacy notices and disclosures; limitations on the use of data in relation to processing and physical data security practices; nondiscrimination in the processing of personal data; and the appointment of a chief privacy officer or privacy lead for the organization.

Enforcement would fall under the purview of the Minnesota Attorney General, and businesses would have a thirty-day right-to-cure period, which sunsets on January 31, 2026.

Assuming it is signed into law as expected, Minnesota will join the ranks of the seventeen other states (eighteen counting Florida) that have passed comprehensive consumer privacy acts. Like each of those acts, Minnesota’s bill shares similarities with existing laws while also containing some unique provisions. Businesses in Minnesota would do well to start reviewing their procedures and processes in preparation for the MCDPA.

Colorado has become the first state in the United States to pass legislation (SB24-205) regulating the use of artificial intelligence (AI). The legislation is designed to address the ethical, legal, and social implications of AI technology across various sectors.

The bill applies to any person doing business in Colorado, including developers or deployers of high-risk AI systems that are intended to interact with consumers. The bill defines a high-risk AI system as any AI system that is a substantial factor in making a consequential decision. Notably, the definition excludes (among other things) anti-fraud technology that does not use facial recognition, anti-malware, data storage, databases, video games, and chat features, so long as they do not make consequential decisions.

The bill includes a comprehensive framework governing the use of AI within government, education, and business with a focus on promoting ethical standards, transparency, and accountability in AI development and deployment. The bill requires disclosure for the use of AI in decision-making processes, sets out ethical standards to guide AI development, and provides mechanisms of recourse and oversight in cases of AI-related biases or errors. These recourse mechanisms include opportunities for consumers to correct any incorrect personal data processed by a high-risk AI system as well as an opportunity to appeal an adverse decision made by a system with human review (if possible).  The disclosure requirements will apply to developers, requiring a publicly available statement that describes methods used to manage risks of algorithmic discrimination.  

The bill requires the development of several compliance mechanisms if an entity uses high-risk AI systems. These include impact assessments, risk management policies and programs, and annual review of the high-risk systems. These mechanisms are designed to promote transparency in the development and use of these systems.

The passage of this bill positions Colorado at the forefront of AI regulation in the US, setting a precedent for other states and jurisdictions grappling with similar challenges.

Judges are beginning to address the increasing use of AI tools in court filings—including reacting to instances of abuse by lawyers using AI for generative purposes and requiring disclosures regarding the scope of AI use in the drafting of legal submissions. Now JAMS, the largest private provider of alternative dispute resolution services worldwide, has issued rules—effective immediately—designed to address the use and impact of AI.

The Upshot

  • Alternative Dispute Resolution (ADR) providers are now joining courts in trying to grapple with the impact of AI on the practice of law.
  • As they should when dealing with courts, litigants need to pay close attention to the rules of a particular forum as they pertain to AI.
  • When choosing an arbitration forum in agreements, there may be reasons to choose a forum with AI rules and to specify that those procedures be followed. 

The Bottom Line

AI is reshaping the legal landscape and compelling the industry to adapt. Staying up-to-date with these changes has become as fundamental to litigation as other procedural rules. The Artificial Intelligence Team at Ballard Spahr monitors developments in AI and is advising clients on required disclosures, risk mitigation, the use of AI tools, and other evolving issues. 

Judges are beginning to address the increasing use of artificial intelligence (AI) tools in court filings—including reacting to instances of abuse by lawyers using it for generative purposes and requiring disclosures regarding scope of use in the drafting of legal submissions—by issuing standing orders, as detailed here. Staying up-to-date with these changes will soon be as fundamental to litigation as other procedural rules.

In line with court trends, Judicial Arbitration and Mediation Services (JAMS), an alternative dispute resolution (ADR) services company, recently released new rules for cases involving AI. JAMS emphasized that the purpose of the guidelines is to “refine and clarify procedures for cases involving AI systems,” and to “equip legal professionals and parties engaged in dispute resolution with clear guidelines and procedures that address the unique challenges presented by AI, such as questions of liability, algorithmic transparency, and ethical considerations.”

Although courts have not yet settled on a definition for AI, JAMS took deliberate steps to define AI specifically as “a machine-based system capable of completing tasks that would otherwise require cognition.” This definition makes the scope of its rules clearer. Additionally, the rules encompass an electronically stored information (ESI) protocol approved for AI cases, along with procedures for overseeing the examination of AI systems, materials, and experts to accommodate instances where the existing ADR process lacks adequate safeguards for handling the intricate and proprietary nature of such data.

Specifically, the procedures dictate that before any preliminary conference, each party must voluntarily exchange their non-privileged and relevant documents and other ESI. JAMS suggests that, prior to such exchange, the parties enter into the AI Disputes Protective Order to protect each party’s confidential information.

The form protective order, which is not provided under JAMS’s regular rules, limits the disclosure of certain designated documents and information to the following specific parties: counsel, named parties, experts, consultants, investigators, the arbitrator, court reporters and staff, witnesses, the mediator, the author or recipient of the document, other persons after notice to the other side, and “outside photocopying, microfilming or database service providers; trial support firms; graphic production services; litigation support services; and translators engaged by the parties during this Action to whom disclosure is reasonably necessary for this Action.” The list of parties privy to such confidential information does not include any generative AI services, a choice that is consistent with the broad concern that confidential client information is currently unprotected in this new AI world.

The rules further provide that in cases where the AI systems themselves are under dispute and require production or inspection, the disclosing party must provide access to the systems and corresponding materials to at least one expert in a secured environment established by that party. The expert is prohibited from removing any materials or information from this designated environment.

Additionally, experts providing opinions on AI systems during the ADR process must be mutually agreed upon by the parties or designated by the arbitrator in cases of disagreement. Moreover, the rules confine expert testimony on technical issues related to AI systems to a written report addressing questions posed by the arbitrator, supplemented by testimony during the hearing. These changes recognize the general need for both security and technical expertise in this area so that the ADR process can remain digestible to the arbitrator or mediator, who likely has little or no prior experience in the area.

While JAMS claims to be the first ADR services company to issue such guidance, other similar organizations have advertised that their existing protocols are already suited to the current AI landscape and court rules.

Indeed, how to handle AI in the ADR context has been top of mind for many in the field. Last year, the Silicon Valley Arbitration & Mediation Center (SVAMC), a nonprofit organization focused on educating about the intersection of technology and ADR, released its proposed “Guidelines on the Use of Artificial Intelligence in Arbitration.” SVAMC recommends that those participating in ADR use its guidelines as a “model” for navigating the procedural aspects of ADR related to AI, which may involve incorporating such guidelines into some form of protective order.

In part, the clauses (1) require that the parties familiarize themselves with the relevant AI tool’s uses, risks, and biases; (2) make clear that the parties of record remain subject to “applicable ethical rules or professional standards” and that parties are required to verify the accuracy of any work product that AI generates, as that party will be held responsible for any inaccuracies; and (3) provide that disclosure regarding the use of AI should be determined on a case-by-case basis.

The SVAMC guidelines also focus on confidentiality considerations, requiring in certain instances that the parties redact privileged information before inputting it into AI tools. SVAMC even goes so far as to make clear that arbitrators cannot substitute AI’s output for their own decision-making power. SVAMC’s Guidelines are a useful tool for identifying the significant factors that parties engaging in ADR should contemplate, and that the ADR community at large is contemplating.

As courts provide additional legal guidance, and as more AI-use issues arise, we expect that more ADR service companies will move in a similar direction to JAMS, and potentially adopt versions of SVAMC’s guidance, as the procedures and technology continue to evolve.

Newly effective regulations governing confidentiality of Substance Use Disorder (SUD) records now more closely mirror regulations implementing the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and other federal law. The new measures ease the administrative burden on programs by aligning regulations governing the privacy of Part 2 SUD records with the regulatory framework governing HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act. Part 2 programs have until February 16, 2026, to implement any necessary changes.

The Upshot

  • Jointly, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) and the Substance Abuse and Mental Health Services Administration (SAMHSA) revised 42 C.F.R. Part 2 (Part 2), which governs certain patient-identifying substance use disorder (SUD) records, effective April 16, 2024.
  • Significant regulatory changes implement Section 3221 of the Coronavirus Aid, Relief, and Economic Security (CARES) Act (relating to “confidentiality and disclosure of records relating to substance use disorder”), which required HHS to update both HIPAA and Part 2 regulations in order to better align respective privacy protections.
  • The changes consist of new and revised Part 2 regulations governing, in part: patient consent for SUD record use, disclosure, and redisclosure; individual patient rights relating to notices, accountings of disclosures, and complaint procedures; and increased penalties for noncompliance. 
  • Part 2 programs may now maintain, use, and disclose records in a manner more consistent with HIPAA regulations.
  • As a result, Part 2 programs have expanded flexibility in utilizing Part 2 records, but must carefully note additional compliance responsibilities and civil penalties for noncompliance.

The Bottom Line

Part 2 violations will now be subject not only to criminal penalties, but also the civil monetary penalties established by HIPAA, HITECH, and their implementing regulations. These regulations (along with the CARES Act) fundamentally alter the potential cost of noncompliance for Part 2 programs and may ultimately result in increased enforcement activity.

Read the full alert here.

In a regulatory filing, Reddit announced that the FTC is probing Reddit’s proposal to sell, license and share user-generated content with third parties to train artificial intelligence (AI) models.  This move underscores the growing scrutiny over how online platforms harness the vast amounts of data they collect, particularly in the context of AI development. 

The investigation brings to light several legal considerations that could have far-reaching consequences. Importantly, it highlights the importance of clear and transparent user agreements including terms of service and privacy policies. Users must be fully aware of how their data is used, especially when it contributes to the development of AI technologies. This approach tracks with the FTC’s stance that companies seeking to use consumer personal data to train AI models should notify consumers meaningfully rather than surreptitiously change user agreements.

The FTC’s actions signal a more aggressive stance on data privacy and usage, particularly in relation to AI. For the tech industry, this could mean a shift towards more stringent data handling and consent practices. Companies may need to reassess their data collection and usage policies to ensure compliance with emerging legal standards. Furthermore, this investigation could pave the way for new regulations specifically addressing the use of personal data in AI development.

The UN General Assembly has adopted a landmark resolution focusing on the safe, secure, and trustworthy use of Artificial Intelligence (AI). This resolution, led by the United States and supported by over 120 Member States, marks the first time the Assembly has adopted a resolution on regulating AI. The resolution calls for protection of rights both online and offline, urges cooperation to bridge the digital divide, and aims for AI to advance sustainable development globally. While the UN Resolution generally consists of high level principles as opposed to specific compliance steps, it is an important reminder that there is a growing legal and regulatory consensus on a responsible AI governance framework.

The UN Resolution emphasizes the development of AI systems in a way that is safe, secure, trustworthy, and sustainable. It recognizes the potential of AI to progress the Sustainable Development Goals (SDGs) and underlines the importance of human-centric, reliable, and ethical AI systems. It stresses the need for global consensus on AI governance and capacity building in developing countries, ensuring AI benefits are shared globally. The UN Resolution also highlights the urgency of developing effective safeguards and standards for AI, promoting transparent, inclusive, and equitable use of AI while respecting intellectual property rights and privacy.

The UN Resolution specifically encourages all Member states to be cognizant of data security issues when promoting AI systems by “[f]ostering the development, implementation and disclosure of mechanisms of risk monitoring and management, mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate, across the life cycle of artificial intelligence systems.” The UN Resolution suggests that businesses need to develop and implement comprehensive risk monitoring and management systems for their AI technologies. This includes securing data throughout the AI lifecycle, ensuring robust personal data protection, and regularly conducting privacy and impact assessments. Essentially, companies should be proactive in identifying and managing potential risks associated with AI use, particularly regarding data privacy. This approach is crucial for compliance with emerging international standards and for maintaining trust in AI systems and applications.

The UN Resolution also encourages Member states to consider data privacy when promoting AI systems by “[s]afeguarding privacy and the protection of personal data when testing and evaluating systems, and for transparency and reporting requirements in compliance with applicable international, national and subnational legal frameworks, including on the use of personal data throughout the life cycle of artificial intelligence systems.” The UN Resolution implies a need to prioritize privacy in all stages of AI system development and usage. Transparency should be ensured in how personal data is handled and the relevant legal frameworks should be complied with at all levels. This includes establishing clear policies and procedures for data privacy, regularly reviewing and reporting on AI system operations in relation to personal data use, and staying updated with evolving regulations in different jurisdictions. Adherence to these standards is essential for legal compliance and maintaining consumer trust.

In some ways, after the EU AI Act’s passing, the UN Resolution could be seen as less significant because it does not carry a regulatory enforcement threat. However, at the very least, the UN Resolution should serve as a warning to companies around the globe, including those that operate only in the United States, that all regulators are looking for certain core governance positions when it comes to AI. Companies should be sure to document those governance steps or risk becoming the focus of enforcement actions under myriad theories.

In this month’s webcast, Greg Szewczyk and Kelsey Fayer of Ballard Spahr’s privacy & data security group discuss new state consumer health data laws in Connecticut, Nevada, and Washington, highlighting the laws’ scope, obligations for regulated entities, and enforcement mechanisms.

On March 7, 2024, a bipartisan coalition of 43 state attorneys general sent a letter to the Federal Trade Commission (“FTC”) urging the FTC to update the regulations (the “COPPA Rule”) implementing the Children’s Online Privacy Protection Act (“COPPA”).

COPPA authorizes state attorneys general to bring actions as parens patriae in order to protect their citizens from harm.  In the March 7 letter, the AGs noted that it has been more than ten years since the COPPA Rule was last amended and that the digital landscape is much different now than it was in 2013, pointing specifically to the rise of mobile devices and social networking.

Unsurprisingly, the AGs recommend that the COPPA Rule contain stronger protections.  For example, the AGs recommend that the definition of “personal information” be updated to include avatars generated from a child’s image even if no photograph is uploaded to the site or service; biometric identifiers and data derived from voice, gait, and facial data; and government-issued identifiers such as student ID numbers.

Additionally, the AGs suggest that the phrase “concerning the child or parents of that child,” which is contained in the definition of “personal information,” should be clarified.  Specifically, the AGs state that if companies link the profiles of both parent and child, the aggregated information of both profiles can indirectly expose the child’s personal information, such as a home address, even when it was not originally submitted by the child.  To “clos[e] this gap,” the AGs suggest amending the definition of personal information to include the phrase “which may otherwise be linked or reasonably linkable to personal information of the child.”

In addition to broadening the definition of personal information, the AGs also suggest two revisions that could materially impact how businesses use data.  First, the AGs suggest narrowing the exception for uses that support the “internal operations of the website or online services” so that it limits personalization to user-driven actions and prohibits the use of personal information to maximize user engagement.  Second, the AGs suggest that the FTC limit businesses’ ability to use persistent identifiers for contextual advertising.  While the focus in general privacy laws is on cross-contextual advertising – i.e., advertising that is based on activity across businesses – the AGs argue that advancements in technology allow operators to serve sophisticated and precise contextual ads, often leveraging AI.

While it remains to be seen how the FTC will amend the COPPA Rule, the bipartisan AG letter should be read as a signal that AGs across the country are increasingly focused on how businesses process children’s data.  Especially in states where comprehensive privacy laws are in effect (or going into effect soon), we should expect aggressive enforcement on the issue.

On March 7, 2024, the California Privacy Protection Agency (CPPA) released new materials for review and discussion at the agency’s board meeting today, March 8, 2024. Among the materials released were draft risk assessment and automated decision making regulations, updates to existing regulations, and enforcement updates and priorities.

We will be exploring the details of these various updates in the coming weeks, and more changes are likely to come before the regulations are finalized. But companies should take note that the CPPA is moving quickly—and apparently aggressively.

Even before the new regulations are finalized, companies should pay particular attention to the areas flagged as enforcement priorities, including privacy policies and the implementation of consumer requests. Especially in light of California Attorney General Bonta’s promise of aggressive enforcement without the opportunity to cure, it is safe to assume that these areas will be in the crosshairs.

The FTC published guidance warning companies that “[i]t may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”  In other words, the long-standing practice of simply updating the privacy policy may not provide sufficient notice to consumers depending on the nature of the changes.

As companies turn to leveraging consumer data to power AI systems, the FTC signaled that such practices constitute material changes to their data practices.  These changes require companies to square new business goals with existing privacy commitments.  The FTC made clear that companies cannot simply do away with existing privacy commitments by changing their privacy policies and terms to apply retroactively; instead, businesses must inform consumers before adopting permissive data practices such as using personal data for AI training.  Therefore, companies seeking to share data with AI developers or to process data in-house in ways that are not reflected in current privacy policies and terms should update both and notify consumers of those updates as a prerequisite to taking on new processing activities such as AI.

However, although the announcement focused on the use of data to train AI, the FTC’s warning swept noticeably broader by specifically referencing the sharing of personal data with third parties.

It is worth noting that the FTC’s stance is generally in line with some state privacy laws that require notification to consumers of any material change in their privacy policies.  For example, under the Colorado Privacy Act, certain types of changes require notice to consumers beyond simply updating the privacy policy—even if the policy states that changes are effective upon posting.  Similarly, if the change constitutes a secondary use, affirmative consent may be required.

Given the changing landscape, companies should be particularly diligent in assessing what type of notice must be given—and when it must be given—before engaging in a new processing activity with data that has already been collected.  Or as the FTC punnily puts it, “there’s nothing intelligent about obtaining artificial consent.”