In a regulatory filing, Reddit announced that the FTC is probing its proposal to sell, license, and share user-generated content with third parties to train artificial intelligence (AI) models.  This move underscores the growing scrutiny over how online platforms harness the vast amounts of data they collect, particularly in the context of AI development.

The investigation brings to light several legal considerations that could have far-reaching consequences. Importantly, it highlights the need for clear and transparent user agreements, including terms of service and privacy policies. Users must be fully aware of how their data is used, especially when it contributes to the development of AI technologies. This approach tracks with the FTC’s stance that companies seeking to use consumer personal data to train AI models should notify consumers meaningfully rather than surreptitiously change user agreements.

The FTC’s actions signal a more aggressive stance on data privacy and usage, particularly in relation to AI. For the tech industry, this could mean a shift towards more stringent data handling and consent practices. Companies may need to reassess their data collection and usage policies to ensure compliance with emerging legal standards. Furthermore, this investigation could pave the way for new regulations specifically addressing the use of personal data in AI development.

The UN General Assembly has adopted a landmark resolution focusing on the safe, secure, and trustworthy use of Artificial Intelligence (AI). This resolution, led by the United States and supported by over 120 Member States, marks the first time the Assembly has adopted a resolution on regulating AI. The resolution calls for protection of rights both online and offline, urges cooperation to bridge the digital divide, and aims for AI to advance sustainable development globally. While the UN Resolution generally consists of high-level principles as opposed to specific compliance steps, it is an important reminder that there is a growing legal and regulatory consensus on a responsible AI governance framework.

The UN Resolution emphasizes the development of AI systems in a way that is safe, secure, trustworthy, and sustainable. It recognizes the potential of AI to advance the Sustainable Development Goals (SDGs) and underlines the importance of human-centric, reliable, and ethical AI systems. It stresses the need for global consensus on AI governance and capacity building in developing countries, ensuring AI benefits are shared globally. The UN Resolution also highlights the urgency of developing effective safeguards and standards for AI, promoting transparent, inclusive, and equitable use of AI while respecting intellectual property rights and privacy.

The UN Resolution specifically encourages all Member States to be cognizant of data security issues when promoting AI systems by “[f]ostering the development, implementation and disclosure of mechanisms of risk monitoring and management, mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate, across the life cycle of artificial intelligence systems.” The UN Resolution suggests that businesses need to develop and implement comprehensive risk monitoring and management systems for their AI technologies. This includes securing data throughout the AI lifecycle, ensuring robust personal data protection, and regularly conducting privacy and impact assessments. Essentially, companies should be proactive in identifying and managing potential risks associated with AI use, particularly regarding data privacy. This approach is crucial for compliance with emerging international standards and for maintaining trust in AI systems and applications.

The UN Resolution also encourages Member States to consider data privacy when promoting AI systems by “[s]afeguarding privacy and the protection of personal data when testing and evaluating systems, and for transparency and reporting requirements in compliance with applicable international, national and subnational legal frameworks, including on the use of personal data throughout the life cycle of artificial intelligence systems.” The UN Resolution implies a need to prioritize privacy at every stage of AI system development and use, to be transparent about how personal data is handled, and to comply with the relevant legal frameworks at all levels. This includes establishing clear policies and procedures for data privacy, regularly reviewing and reporting on AI system operations in relation to personal data use, and staying current with evolving regulations across jurisdictions. Adherence to these standards is essential for legal compliance and maintaining consumer trust.

In some ways, after the EU AI Act’s passage, the UN Resolution could be seen as less significant because it does not carry the threat of regulatory enforcement. At the very least, however, the UN Resolution should serve as a warning to companies around the globe—including those that operate only in the United States—that all regulators are looking for certain core governance positions when it comes to AI.  Companies should be sure to document those governance steps or risk becoming the focus of enforcement actions under myriad theories.

In this month’s webcast, Greg Szewczyk and Kelsey Fayer of Ballard Spahr’s privacy & data security group discuss new state consumer health data laws in Connecticut, Nevada, and Washington, highlighting the laws’ scope, obligations for regulated entities, and enforcement mechanisms.

On March 7, 2024, a bipartisan coalition of 43 state attorneys general sent a letter to the Federal Trade Commission (“FTC”) urging the FTC to update the regulations (the “COPPA Rule”) implementing the Children’s Online Privacy Protection Act (“COPPA”).

COPPA authorizes state attorneys general to bring actions as parens patriae in order to protect their citizens from harm.  In the March 7 letter, the AGs noted that it has been more than ten years since the COPPA Rule was amended, and the digital landscape is much different now than it was in 2013.  The AGs specifically point to the rise of mobile devices and social networking.

Unsurprisingly, the AGs recommend that the COPPA Rule contain stronger protections.  For example, the AGs recommend that the definition of “personal information” be updated to include avatars generated from a child’s image even if no photograph is uploaded to the site or service; biometric identifiers and data derived from voice, gait, and facial data; and government-issued identifiers such as student ID numbers.

Additionally, the AGs suggest that the phrase “concerning the child or parents of that child,” which is contained in the definition of “personal information,” should be clarified.  Specifically, the AGs state that if companies link the profiles of both parent and child, the aggregated information of the two profiles can indirectly expose the child’s personal information, such as a home address, even when it was not originally submitted by the child.  To “clos[e] this gap,” the AGs suggest amending the definition of personal information to include the phrase “which may otherwise be linked or reasonably linkable to personal information of the child.”

In addition to broadening the definition of personal information, the AGs also suggest two revisions that could materially impact how businesses use data.  First, the AGs suggest narrowing the exception for uses that support the “internal operations of the website or online services” to limit personalization to user-driven actions and to prohibit the use of personal information to maximize user engagement.  Second, the AGs suggest that the FTC limit businesses’ ability to use persistent identifiers for contextual advertising.  While the focus in general privacy laws is on cross-contextual advertising – i.e., advertising that is based on activity across businesses – the AGs argue that advancements in technology allow operators to serve sophisticated and precise contextual ads, often leveraging AI.

While it remains to be seen how the FTC will amend the COPPA Rule, it is safe to say that the bipartisan AG letter should be interpreted as a signal that AGs across the country are increasingly focused on how businesses process children’s data.  Especially in states where comprehensive privacy laws are in effect (or going into effect soon), we should expect aggressive enforcement on the issue.

On March 7, 2024, the California Privacy Protection Agency (CPPA) released new materials for review and discussion at the agency’s board meeting on March 8, 2024. Among the materials released were draft risk assessment and automated decisionmaking regulations, updates to existing regulations, and enforcement updates and priorities.

We will be exploring the details of these various updates in the coming weeks, and more changes are likely to come before the regulations are finalized. But companies should take note that the CPPA is moving quickly—and apparently aggressively.

Even before the new regulations are finalized, companies should pay particular attention to the areas flagged as enforcement priorities, including privacy policies and the implementation of consumer requests. Especially in light of California Attorney General Bonta’s promise of aggressive enforcement without the opportunity to cure, it is safe to assume that these areas will be in the crosshairs.

The FTC published guidance warning companies that “[i]t may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”  In other words, the long-standing practice of simply updating the privacy policy may not provide sufficient notice to consumers depending on the nature of the changes.

As companies turn to leverage consumer data to power AI systems, the FTC signaled that such practices constitute material changes to a company’s data practices.  These changes require companies to square new business goals with existing privacy commitments.  The FTC made clear that companies cannot simply do away with existing privacy commitments by changing their privacy policies and terms to apply retroactively; instead, businesses must inform consumers before adopting more permissive data practices such as using personal data for AI training.  Therefore, companies seeking to share data with AI developers or to process data in-house in ways that are not reflected in current privacy policies and terms should update both and notify consumers of the updates as a prerequisite to taking on new processing activities such as AI training.

However, although the announcement focused on the use of data to train AI, the FTC’s warning was noticeably broader, specifically referencing the sharing of personal data with third parties.

It is worth noting that the FTC’s stance is generally in line with some state privacy laws that require notification to consumers of any material change in their privacy policies.  For example, under the Colorado Privacy Act, certain types of changes require notice to consumers beyond simply updating the privacy policy—even if the policy states that changes are effective upon posting.  Similarly, if the change constitutes a secondary use, affirmative consent may be required.

Given the changing landscape, companies should be particularly diligent in assessing what type of notice must be given—and when it must be given—before engaging in a new processing activity with data that has already been collected.  Or as the FTC punnily puts it, “there’s nothing intelligent about obtaining artificial consent.”

On February 21, 2024, California Attorney General (AG) Rob Bonta announced a settlement with DoorDash for violations of the California Consumer Privacy Act (CCPA) and the California Online Privacy Protection Act (CalOPPA) relating to its participation in a marketing co-operative.  This action represents only the second public enforcement action since the CCPA went into effect in 2020.

According to the complaint and settlement, DoorDash participated in a marketing co-operative, as part of which unrelated businesses contribute personal information of their customers for the purpose of advertising their own products to customers from other participating businesses.  According to AG Bonta, this was an exchange of personal information for the benefit of DoorDash and therefore a “sale” under the CCPA.  As a sale, DoorDash was required under California law to provide notice of the sale as well as the opportunity to opt out of the sale.  AG Bonta alleged that DoorDash failed to provide the necessary notice and opt-out rights. 

While participation in such a marketing co-operative is largely recognized as a sale under the CCPA at this point, the enforcement action is notable for a couple of reasons.  First, the complaint takes positions that arguably require disclosures in privacy policies that go beyond the plain language of the regulations.  So, even companies that feel confident that they comply with the regulations would be wise to assess their policies in light of the allegations.

Second, the conduct at issue occurred in 2020 and 2021.  While the complaint notes that DoorDash did not cure when provided with a notice of violation in 2020, it also indicates that curing may not have been possible because a cure would mean making affected consumers whole by restoring them to the same position they would have been in had their data never been sold.  Additionally, AG Bonta states in his press release that “The CCPA has been in effect for over four years now, and businesses must comply with this important law.  Violations cannot be cured, and my office will hold businesses accountable if they sell data without protecting consumers’ rights.”

There are many lessons to learn from this action, but perhaps the most important is that businesses should prepare for an increasingly aggressive enforcement policy without the opportunity to cure.  To do so, businesses should assess not only where they have gaps and how they can close them, but also what can be done to best position themselves against arguments about past non-compliance.

On February 9, 2024, California’s Third District Court of Appeal reinstated the California Privacy Protection Agency’s (“CPPA”) ability to enforce the California Privacy Rights Act of 2020 (“CPRA”) regulations. The CPRA regulations aim to enhance consumer privacy rights and protections in an increasingly digital age.

The court of appeal’s decision comes after the California Chamber of Commerce filed a lawsuit in 2023 challenging the CPPA’s authority to enforce the CPRA regulations, citing government overreach, conflicts with existing law, and the imposition of unnecessary burdens on businesses; that lawsuit resulted in the trial court imposing a 12-month delay on enforcement. Holding that the trial court erred in imposing the one-year delay, the court of appeal reaffirmed the CPPA’s role in overseeing compliance with the state’s privacy laws, noting that no “explicit and forceful” language mandates that the CPPA wait one year from the date the final regulations were approved to begin enforcement. It remains to be seen whether the California Chamber of Commerce will seek a rehearing or review.

This development is significant for both consumers and businesses. Consumers will continue to have significant rights (with the backing of the CPPA) related to their personal information. For businesses operating in California, the delay in CPPA enforcement that was once a possibility is no longer a reality; the February 9 decision serves as an important reminder to covered businesses to ensure their privacy practices comply with the CPRA regulations.

As consumer privacy rights continue to expand in an increasingly digital environment and data privacy remains an important issue, it is essential for covered businesses to stay informed and adhere to the CPRA regulations.

On Thursday, February 8, the Federal Communications Commission (FCC) finalized its plan to ban robocalls that feature voices generated by artificial intelligence, aiming to stem the tide of AI-generated scams and misinformation campaigns.  The FCC’s declaratory ruling formalized its position that the Telephone Consumer Protection Act (TCPA)—specifically, the provision prohibiting the initiation of calls “using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party”—applies to the use of AI-generated voices.  Hence, just as the TCPA requires businesses to obtain prior express written consent from consumers before robocalling them, businesses must now obtain consent for automated telemarketing calls using AI-generated voices.  Businesses seeking to deploy AI in their marketing calls using automated dialing systems should therefore consider reviewing and, if necessary, updating applicable disclosures and consents to account for the FCC’s new ruling and limit potential liability under the TCPA.

On February 1, 2024, the Connecticut Office of the Attorney General (“OAG”) submitted to the Connecticut General Assembly its report on the first six months of the Connecticut Data Privacy Act (“CTDPA”).  While the report includes important information about its enforcement efforts to date, the most noteworthy aspect may be its recommendation to the legislature to remove various exemptions from the CTDPA. 

The report notes that the OAG received more than thirty consumer complaints in the first six months after the CTDPA went into effect on July 1, 2023, many of which involved consumers’ attempts to exercise their new rights.  The OAG noted, however, that around one-third of the complaints involved data or entities that were exempt under the CTDPA.

With respect to enforcement, the report provides summaries of four different areas:  privacy policies, sensitive data, teens’ data, and data brokers.  The report notes different types of enforcement activities for each, but two are worth highlighting.  First, the report notes that the OAG is actively reviewing companies’ privacy policies to assess compliance, which has resulted in the issuance of ten cure notices on the topic.  Clearly, companies subject to the CTDPA should ensure that their public-facing documents are at least facially sufficient.

Second, the report notes that the OAG has sent a cure notice to “a popular car brand” based on reports that its connected vehicles may be collecting sensitive personal data.  This focus on sensitive data is in line with what we have seen from other regulators, such as Colorado.  But it also demonstrates that public reports on privacy issues can direct regulators to focus on specific industries.

Finally, the OAG makes several legislative recommendations.  One such recommendation is to scale back entity-level exemptions, specifically the non-profit, GLBA, and HIPAA exemptions.  The OAG also recommends adding a right to know specific third parties with which controllers share personal data, similar to the Oregon law that goes into effect later this year. 

Overall, the OAG’s report shows that regulators across states are taking generally similar approaches to enforcement, which appear to include reviewing companies’ privacy policies and opt-out mechanisms as an initial check on compliance.  Businesses should expect more of the same, and they would be wise to update their public-facing policies and practices accordingly.