On October 13, 2025, California Governor Gavin Newsom vetoed S.B. 7, which would have required human oversight of certain types of employment decisions made solely by automated decision systems (“ADS”). Had Gov. Newsom signed the bill, it would have required California employers using automated systems for actions such as hiring, firing, and disciplining to implement human oversight and explain certain decisions made by AI. The bill would have also required robust notices and granted employees and contractors access rights to data used by ADS.

In his letter notifying the California State Senate of the veto, Gov. Newsom cited concerns that S.B. 7 would have imposed “overly broad restrictions” on employer deployment of ADS. For example, the requirements could be interpreted to extend to “innocuous” technology such as scheduling and workflow management tools. Industry groups opposing the bill argued it would have also carried massive compliance costs, particularly for small businesses.

Gov. Newsom wrote that he shared the bill author’s concerns about the unregulated use of ADS and the need to afford employees protections with respect to ADS, but stated that legislatures “should assess the efficacy of [such] regulations to address these concerns.” Still, California employers face restrictions on certain uses of ADS under the regulations recently finalized by the California Privacy Protection Agency.

On September 30, 2025, the California Privacy Protection Agency (CPPA) issued a $1.35 million fine, the largest in the CPPA’s history, against Tractor Supply Company, the nation’s largest rural lifestyle retailer. The fine was based on allegations that the company violated its obligations under the California Consumer Privacy Act (CCPA). The CPPA described its decision against Tractor Supply as “the first to address the importance of CCPA privacy notices and privacy rights of job applicants.”

Below, we have provided additional information on the CPPA’s decision and the key takeaways for businesses subject to the CCPA in light of the decision.

Background

The CPPA first opened an investigation into Tractor Supply after it received a complaint from a consumer in Placerville, California. Based on its investigation, the CPPA alleged that Tractor Supply:

  • Failed to maintain an adequate privacy policy notifying consumers of their rights;
  • Failed to notify California job applicants of their privacy rights and how to exercise them;
  • Failed to provide consumers with an effective mechanism to opt out of the selling and sharing of their personal information, including through opt-out preference signals such as the Global Privacy Control; and
  • Disclosed personal information to other companies without entering into contracts that contain the requisite privacy protections.

In addition to issuing the record $1.35 million penalty for these violations, the CPPA also required Tractor Supply to implement broad remedial measures, which include, but are not limited to, the following:

  • Updating its privacy notices and notifying employees and job applicants of the updated notices;
  • Modifying the methods it provides consumers to submit requests to opt out of targeted advertising;
  • Ensuring all required contractual terms are in place with all external recipients of personal information;
  • Recognizing opt-out preference signals like the Global Privacy Control (see the illustrative sketch following this list);
  • Regularly auditing its tracking technologies; and
  • Designating a compliance officer to certify its compliance for the next four (4) years.
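For context on the opt-out preference signal items above: browsers with Global Privacy Control (GPC) enabled transmit a “Sec-GPC: 1” request header (and expose a navigator.globalPrivacyControl flag to page scripts), and the CCPA regulations require businesses that sell or share personal information to honor that signal as a valid opt-out request. The sketch below is a minimal illustration only, assuming a Node.js/Express web application and a hypothetical optOutOfSaleAndSharing() helper; it is not drawn from the CPPA’s order.

```typescript
// Minimal sketch: honoring the Global Privacy Control signal server-side.
// Assumptions (not from the CPPA order): an Express app and a hypothetical,
// business-specific optOutOfSaleAndSharing() helper.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Hypothetical, application-specific handler: record the opt-out and
// suppress any sale/sharing (e.g., third-party advertising tags) for this visitor.
function optOutOfSaleAndSharing(req: Request): void {
  // ...business-specific logic...
}

// Browsers with GPC enabled send "Sec-GPC: 1" on every request.
app.use((req: Request, _res: Response, next: NextFunction) => {
  if (req.header("Sec-GPC") === "1") {
    optOutOfSaleAndSharing(req); // treat the signal as a CCPA opt-out
  }
  next();
});

app.get("/", (_req: Request, res: Response) => {
  res.send("ok");
});

app.listen(3000);
```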

Key Takeaways

This decision underscores several practical points for businesses subject to the CCPA:

  • Employees and job applicants are treated like consumers under the CCPA, so businesses should ensure their CCPA compliance programs adequately cover California workforce members and applicants;
  • A single consumer complaint can trigger a broad investigation; and
  • Investigations by the CPPA may result not only in remediation, but also in significant monetary penalties.

On September 23, 2025, the California Privacy Protection Agency (CPPA) announced the approval of final regulations under the California Consumer Privacy Act (CCPA) covering cybersecurity audits, risk assessments, and automated decisionmaking technology (ADMT). The new rules, effective January 1, 2026, introduce significant new compliance obligations for businesses subject to the CCPA/CPRA, with phased deadlines for certain requirements.

Key requirements include:

  • Cybersecurity Audits: Businesses must conduct annual, independent cybersecurity audits if they (1) derive 50% or more of annual revenue from selling or sharing consumers’ personal information, or (2) meet the annual gross revenue threshold in the CCPA and process the personal information of 250,000 or more consumers or the sensitive personal information of 50,000 or more consumers. Audit certifications are due to the CPPA on a phased schedule: April 1, 2028 (for businesses with over $100 million in revenue), April 1, 2029 (for $50–100 million), and April 1, 2030 (for less than $50 million). Audits must be performed by qualified, objective, independent professionals and must assess a comprehensive set of technical and organizational safeguards, including authentication, encryption, access controls, vulnerability management, incident response, and more. Service providers and contractors must cooperate with the audit process.
  • Risk Assessments: Covered businesses must conduct and document risk assessments before engaging in processing activities that present significant risks to consumers’ privacy or security, such as selling or sharing personal information, processing sensitive personal information, using ADMT for significant decisions, or using personal information to train ADMT. Risk assessment compliance begins January 1, 2026, with attestation and summary submissions due by April 1, 2028. Assessments must document the purpose, categories of data, operational elements, benefits, risks, and mitigation measures, and must be reviewed and updated at least every three years or upon material changes.
  • Automated Decisionmaking Technology (ADMT): The regulations define ADMT as any technology that processes personal information and uses computation to replace or substantially replace human decisionmaking. Businesses using ADMT to make significant decisions about consumers (such as those affecting financial services, housing, employment, or healthcare) must, by January 1, 2027, provide clear pre-use notices, offer consumers the right to opt out, and respond to access requests with meaningful information about the logic, key parameters, and effects of the ADMT. The rules require plain-language explanations, transparency about the role of human involvement, and prohibit retaliation against consumers exercising their rights. Exceptions and specific requirements apply for certain employment and admissions uses.

These regulations significantly expand the compliance landscape for California businesses, requiring new documentation, consumer-facing notices, and ongoing governance. Businesses should review their data processing activities, update privacy notices and contracts, and ensure robust audit and risk assessment procedures are in place to meet the new standards.

In a reminder that the FTC’s new enforcement priorities will likely drive additional litigation risks, days after the DOJ settlement described below was announced, Disney Worldwide Services and Disney Entertainment Operations, LLC (together, “Disney”) were named as defendants in two class action complaints brought on behalf of putative classes of minors. The first case, S.K. et al. v. Disney Worldwide Servs., Inc., No. 2:25-cv-08410, was filed on September 5, 2025, in the United States District Court for the Central District of California. The second case, captioned Does 1-3 ex rel. Sobalvarro v. Disney Worldwide Servs., was filed on September 9, 2025, in the Superior Court of California for the County of Los Angeles.

The complaints allege that Disney failed to appropriately mark certain videos it uploaded to the YouTube platform as “Made for Kids” between 2020 and September 2025—a designation necessary to ensure that automatic data collection practices are disabled on the platform—thus leading to the unlawful collection of the minors’ data. Plaintiffs in both cases brought causes of action for common law intrusion upon seclusion, invasion of privacy, trespass to chattels, unjust enrichment, and negligence. The California state court plaintiffs brought additional claims for violation of the California Unfair Competition Law and for invasion of privacy in contravention of the California Constitution. Both complaints seek actual, general, special, incidental, statutory, punitive, and consequential damages in excess of $5 million.

Both complaints were filed less than a week after Disney and the DOJ filed a proposed order authorizing a settlement for alleged violations of the Children’s Online Privacy Protection Act (COPPA) arising out of the same conduct alleged in the complaints. The proposed order was filed contemporaneously with the DOJ’s complaint, which the DOJ brought upon notification from the FTC, and requires Disney to pay a $10 million civil penalty, create a “Mandated Audience Designation Program” to ensure that all Disney videos are appropriately marked when uploaded, and submit to ten years of compliance reporting. 

Both the federal court and the state court complaints allege that the proposed settlement would not adequately remedy the putative classes’ injuries.


While FTC settlements always carry the risk of copycat litigation, these new developments further emphasize how serious this risk can be in the privacy field—and especially children’s privacy—and that plaintiffs’ firms that are active in other privacy fields are looking for new areas in which to expand. Given the FTC’s stated change in enforcement priorities, companies need to reassess their positions not just for “traditional” compliance, but also for enforcement and litigation mitigation.

The Food and Drug Administration (FDA) issued final guidance Monday that explains how medical device manufacturers can use a Predetermined Change Control Plan (PCCP) to update AI-enabled device software functions (AI-DSFs) after clearance or approval without submitting a new marketing application for each covered change.

The guidance is a practical how‑to for getting the FDA to preauthorize a playbook for future updates to AI medical software. The FDA calls the playbook a Predetermined Change Control Plan (PCCP).  The applicant submits the PCCP with the 510(k), De Novo, or PMA, and the FDA reviews it along with the device. If the FDA authorizes the PCCP, the company may later make the listed updates without filing a new submission, provided it follows the plan’s steps for data, training, testing, labeling, cybersecurity, and deployment under a quality system. The authorized PCCP becomes part of the device description, so updates must be implemented exactly as specified. If a change is not in the plan, or cannot meet the plan’s methods or acceptance criteria, a new submission will be needed. The guidance is nonbinding and is grounded in FDORA section 515C. It applies to AI‑enabled device software functions and explains what belongs in a PCCP, how the FDA evaluates it, and how users should be informed about updates. Figure 1 on page 18 illustrates the decision path for using an authorized PCCP to implement changes.

What a Compliant PCCP Looks Like:

  • Description of Modifications. List the specific, limited, verifiable changes you intend to make over time (e.g., improved quantitative performance, expanded input compatibility, or performance for a defined subpopulation). Specify whether changes are automatic vs. manual and global vs. local, and how frequently updates may occur. Changes must remain within intended use (and, generally, indications).
  • Modification Protocol. For each planned change, provide (1) data management practices (representative training/tuning/test data; multisite, sequestered test sets; bias‑mitigation strategies and reference‑standard processes); (2) retraining practices (what parts of the model may change; triggers; overfitting controls); (3) performance evaluation (study designs, metrics, acceptance criteria, statistical plans; verification that non‑targeted specs do not degrade); and (4) update procedures (deployment mechanics, user communication, labeling updates, cybersecurity validation, real‑world monitoring, and rollback criteria). A traceability table should map each proposed change to its supporting methods.
  • Impact Assessment. Analyze benefits and risks—including risks of harm and unintended bias—for each change individually and in combination, and explain how the protocol’s verification/validation and mitigations ensure continued safety and effectiveness across intended populations and environments.

Labeling and Transparency Requirements

The FDA may require labeling that informs users that the device contains machine learning and has an authorized PCCP; as updates roll out, labeling should summarize the implemented change, the data/evidence supporting it, impacted inputs/outputs, and how users will be informed (e.g., release notes/version history). Public‑facing device summaries (SSED/510(k) Summary/De Novo decision summary) should include a high‑level PCCP description. New unique device identifiers (UDIs) are required when a new version/model is created.

Cybersecurity and Post-Market Monitoring

Update procedures should cover cybersecurity risk management and validation; describe user communications; and outline real‑world performance monitoring (including triggers, frequency, and rollback plans) to detect adverse events, drifts, or subpopulation performance changes.

Quality‑System Expectations

All implementation under a PCCP must occur within the manufacturer’s quality system. The guidance reiterates record‑retention and design‑control duties and notes the FDA’s 2024 rule aligning Part 820 with ISO 13485 effective February 2, 2026 (QMSR). For PMAs, the FDA must deny approval if manufacturing controls do not conform; for 510(k)s, clearance may be withheld if QSR failures pose serious risk.

Using (and Not Misusing) a PCCP

The flowchart on page 18 (Figure 1) depicts the logic: If a contemplated modification is (1) listed in the PCCP’s Description of Modifications and (2) implemented exactly per the Protocol’s methods/specifications, document it under the Quality Management System (QMS)—no new submission. Otherwise, evaluate it under the FDA’s device‑modification rules. In most cases, a new submission will be required. Deviations from an authorized PCCP may render a device adulterated/misbranded.

Examples: What’s In vs. Out

Appendix B (pp. 38–45) walks through six scenarios: e.g., retraining a patient‑monitoring model to reduce false alarms (in‑scope) vs. adding a new predictive claim (out‑of‑scope); extending a skin‑lesion tool to additional smartphones meeting minimum camera specs (in‑scope) vs. adding thermography or turning the product patient‑facing (out‑of‑scope); and similar analyses for ventilator‑setting software, ultrasound acquisition aids, X‑ray triage, and a device‑led combination product.

What Companies Should Do Now

  1. Decide If a PCCP Fits the Product Roadmap. Identify foreseeable AI model updates (performance, inputs, defined subpopulations) that can be specified, validated, and governed in advance.
  2. Design the Protocol First. Build out data pipelines (representative, sequestered test sets; reference‑standard methods), retraining triggers, acceptance criteria, and cybersecurity validation.
  3. Plan Labeling and User Communications. Draft version histories, release‑note templates, and instructions that reflect how updates may change device behavior; prepare for UDI/version control impacts.
  4. Align QMS and Documentation. Ensure design controls, change control, bias‑monitoring, and record‑retention processes can support PCCP implementation; prepare for the ISO‑13485‑aligned QMSR effective February 2, 2026.
  5. Engage the FDA Early. Use the Q‑Submission program to vet scope, methods, and evidence, especially for higher‑risk devices, automatic/local adaptations, and device‑led combination products.
  6. Think Predicate Strategy. If you will rely on a predicate with a PCCP, be prepared to compare to the predicate’s pre‑PCCP version; consider timing of subsequent submissions so your updated device can become a predicate.

The lawyers in Ballard Spahr’s multidisciplinary Health Care Industry, Technology Industry, and Life Sciences Industry teams advise med‑tech, digital health, AI, and life sciences companies on regulatory compliance and the range of issues related to federal and state health care laws and regulations. We help clients develop and maintain the corporate infrastructure required to address these laws and regulations as they apply to telemedicine and other digital health products and services. We are monitoring the FDA’s implementation and related federal and state activity. Please reach out to your Ballard Spahr contact with questions.

A federal district court has vacated a federal regulation under HIPAA that provided special restrictions on the disclosure of reproductive health information.

The Upshot

  • The Biden administration issued regulations under HIPAA that prohibited the disclosure of protected health information relating to reproductive health for impermissible purposes, such as prosecutorial investigation.
  • Reproductive health information includes information about certain politically volatile subjects, such as abortion, transgender care, and fertility treatments.
  • The court order vacates the regulation nationwide, although it leaves intact amendments to HIPAA’s notice of privacy practices requiring certain disclosures relating to information regarding substance use disorders.

The Bottom Line

Employers will need to watch for relevant developments and consider how they affect their HIPAA compliance measures. They will, in any event, need to consider certain amendments to their notice of privacy practices.

The U.S. District Court for the Northern District of Texas has vacated a Biden administration regulation limiting the disclosure of reproductive health information (including information on highly sensitive subjects, such as abortion, transgender care, and fertility treatments) by covered entities under HIPAA. The regulation generally restricted health plans and health care providers from disclosing such information for impermissible purposes, such as prosecutorial or administrative investigation. The restrictions applied only when the care was lawfully provided. The regulation also required HIPAA notices of privacy practices to be amended to reflect the new restrictions by February 16, 2026.

The case came before the court in an unusual posture. The Trump administration, which is currently reviewing the regulation, challenged only the standing of the plaintiffs and the scope of relief. However, finding that some of the administration’s arguments addressed the merits of the regulation, the court undertook a substantive review of the regulation’s validity. It found that the regulation exceeded the authority conferred on the Department of Health and Human Services in view of the major (and politically charged) questions it addressed. The court also found that the regulation unlawfully redefined certain statutory terms and failed to account appropriately for a statutory exception to HIPAA’s privacy restrictions for the reporting of child abuse.

Many health plans and health care providers have already taken at least modest measures to comply with the regulations, which took effect on December 23, 2024. They should be watching to see how the Trump administration responds to the ruling and what comes of its review of the regulation generally. They may also look to see if the Supreme Court issues guidance on the authority of federal district courts to wholly vacate regulations and otherwise issue orders of nationwide effect. In any event, health plans and health care providers will need to consider revisions to their notice of privacy practices for changes that the court left intact regarding the privacy of certain information relating to substance use disorders.

Attorneys in our Employee Benefits and Executive Compensation Group and Health Care Team are monitoring developments on this issue and are available for counsel.

There have been numerous developments in the online safety and data privacy space, particularly for minors, over the last few months. Here we cover notable decisions in the federal courts and cases with nationwide implications, as well as final and pending legislative and regulatory action by the federal government.

Notable Court Decisions

The Salesforce Decision

A recent decision by the Fifth Circuit held that a suit brought by sex trafficking victims against Salesforce for allegedly participating in a sex trafficking venture could move forward. The Court ruled that under certain circumstances, companies such as Salesforce that provide web-based business services to entities or individuals engaged in sex trafficking may be civilly liable as a beneficiary of a sex trafficking venture. The decision interprets Section 230 of the Communications Decency Act (“Section 230”), which generally protects web platform hosts from liability for content created by users. This is the most recent in a series of decisions limiting Section 230’s protections for entities that fail to take measures to prevent the use of their services by criminal actors engaged in sex trafficking.

The plaintiffs in Doe v. Salesforce are a group of sex trafficking victims who were trafficked through Backpage.com (“Backpage”), a Craigslist-type platform notorious for its permissiveness toward and encouragement of sex trafficking advertisements. They seek to hold Salesforce civilly liable under 18 U.S.C. § 1595, which creates a cause of action for victims against anyone who “knowingly benefits … from participation in a [sex trafficking] venture.” Salesforce allegedly provided Backpage with cloud-based software tools and related services, including customer relationship management support. The plaintiffs allege that Salesforce was aware that Backpage was engaged in sex trafficking, citing, inter alia, emails between Salesforce employees and a highly publicized Congressional report that found that Backpage actively facilitated prostitution and child sex trafficking.

Salesforce moved for summary judgment, arguing that Section 230 served as a complete bar to liability. While courts have generally interpreted Section 230 broadly in dismissing claims against internet platform hosts that are premised on the ways in which others use those platforms, the statute has been increasingly under fire by legislators and courts alike. Lawmakers on both sides of the aisle have discussed amending or repealing Section 230 in recent years and courts have slowly chipped away at the broad immunity by interpreting the statute more narrowly. This trend has been especially stark in cases dealing with sex trafficking and child sexual abuse. The Fifth Circuit’s decision in Doe v. Salesforce is a prime example of this, and a substantial step away from the breadth of protections afforded under earlier interpretations of Section 230.

The Fifth Circuit rejected a “but-for” test, which would shield a defendant if a cause of action would not have accrued without content created and posted by a third party. Salesforce advocated for what the court dubbed the “only-link” test, which would protect defendants when the only link between the defendant and the victims is the publication of third-party content. The Court rejected that argument, instead ruling that “the proper standard is whether the duty the defendant allegedly violated derives from their status as a publisher or speaker or requires the exercise of functions traditionally associated with publication.” The key question is whether the claim treats the defendant as a publisher or speaker. The Fifth Circuit found that the duty the plaintiffs alleged Salesforce breached was “a statutory duty to not knowingly benefit from participation in a sex-trafficking venture.” Because this duty is unrelated to traditional publishing functions, Section 230 does not serve as a shield. This decision underscores the need for companies to establish processes that will identify potential dangers of trafficking in, or in relation to, their businesses, including but not limited to the facilitation of trafficking using online platforms. Without proper safeguards, even businesses providing neutral tools and operations support may be held civilly liable for the harms the users of their services perpetrate.

Garcia v. Character Technologies, Inc. et al.

The mother of a fourteen-year-old boy largely defeated a motion to dismiss her lawsuit against Character Technologies, Google, Alphabet, and two individual defendants in connection with the suicide of her child. The plaintiff alleged that her son was a user of Character A.I., which the Court describes as “an app that allows users to interact with various A.I. chatbots, referred to as ‘Characters.’” The Court also describes these “Characters” as “anthropomorphic; users’ interactions with Characters are meant to mirror interactions a user might have with another user on an ordinary messaging app.” In other words, the app is intended to give, and does give, the user the impression that he is communicating with a real person. The plaintiff alleged that the app had its intended impact on her child; she asserted that her son was addicted to the app and could not go one day without communicating with his Characters, resulting in severe mental health issues and problems in school. When his parents threatened to take away his phone, he took his own life. The plaintiff filed suit asserting numerous tort claims, along with a claim under Florida’s Deceptive and Unfair Trade Practices Act and a theory of unjust enrichment.

In denying the motion to dismiss, Judge Anne Conway, District Court Judge for the Middle District of Florida, made several notable rulings. Among them, she found that the plaintiff had adequately pled that Google is liable for the “harms caused by Character A.I. because Google was a component part manufacturer” of the app, deeming it sufficient that the plaintiff pled that Google “substantially participated in integrating its models” into the app, which allegedly was necessary to build and maintain the platform. She also found that the plaintiff sufficiently pled that Google was potentially liable for aiding and abetting the tortious conduct because the amended complaint supported a “plausible inference” that Google possessed actual knowledge that Character’s product was defective. The Court further found that the app was a product, not a service, and that Character A.I.’s output is not speech protected by the First Amendment. The Court determined that the plaintiff had sufficiently pled all of her tort claims except her claim for intentional infliction of emotional distress, and allowed her claims under Florida’s Deceptive and Unfair Trade Practices Act and her unjust enrichment theory to go forward.

New York v. TikTok

In October 2024, the Attorney General for the State of New York filed suit against TikTok to hold it “accountable for the harms it has inflicted on the youngest New Yorkers by falsely marketing and promoting” its products. The following day, Attorney General James released a statement indicating that she was co-leading a coalition of 14 state Attorneys General, each filing suit against TikTok for allegedly “misleading the public” about the safety of the platform and harming the mental health of children. Lawsuits were filed individually by each member of the coalition, and all allege that TikTok violated the law “by falsely claiming its platform is safe for young people.” The press release can be found here.

The New York complaint includes allegations regarding the addictive nature of the app and its marketing and targeting of children, causing substantial mental health harm to minors. The complaint additionally includes allegations that TikTok resisted safety improvements to its app to boost profits, made false statements about the safety of the app for minors, and misrepresented the efficacy of certain safety features. The complaint asserts nine causes of action, including violations of New York law relating to fraudulent business conduct, deceptive business practices, and false advertising, along with claims asserting design defects, failure to warn, and ordinary negligence. In late May, Supreme Court Justice Anar Rathod Patel mostly denied TikTok’s motion to dismiss in a brief order that did not include her reasoning, allowing the case to proceed.

Federal Legislative and Regulatory Developments

President Trump Signs the TAKE IT DOWN Act; The Kids Online Safety Act (KOSA) is reintroduced

President Trump signed the “TAKE IT DOWN Act” on May 19, 2025. The bill criminalizes the online posting of nonconsensual intimate visual images of adults and minors and the publication of digital forgeries, defined as the intimate visual depiction of an identifiable individual created through various digital means that, when viewed as a whole, is indistinguishable from an authentic visual depiction. The statute also criminalizes threats to publish such images. The bill additionally requires online platforms to establish, no later than one year from enactment, clear processes by which individuals can notify companies of the existence of these images, and requires that the images be removed “as soon as possible, but not later than 48 hours” after receiving a request. The bill in its entirety can be found here.

Also in May, the Kids Online Safety Act (KOSA) was reintroduced in the Senate by a bipartisan group of legislators. In connection with their announcement of the revised version of KOSA, Senators Blackburn and Blumenthal thanked Elon Musk and others at X for their partnership in modifying KOSA’s language to “strengthen the bill while safeguarding free speech online and ensuring it’s not used to stifle expression” and noted the support of Musk and X to pass the legislation by the end of 2025. In their May announcement, the senators noted that the legislation is supported by over 250 national, state, and local organizations and has also gained the support of Apple. KOSA provides that platforms “shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” listed harms to minors where those harms were reasonably foreseeable. Those harms include eating disorders, depressive and anxiety disorders, compulsive use, online harassment, and sexual and financial exploitation. It requires that platforms provide minors (and parents) with readily accessible and easy-to-use safety tools that limit communication with the minor and limit by default minors’ access to and use of certain design features. The legislation further mandates reporting tools for users and the establishment of internal processes to receive and substantively review all reports. The current version of KOSA is lengthy and contains numerous additional mandates and notice requirements, including third-party audits and public reporting regarding compliance. The most recent version of KOSA can be found here.

New COPPA Rule Takes Effect June 23, 2025

The Federal Trade Commission (FTC) has amended the Children’s Online Privacy Protection Rule (“COPPA Rule”), effective June 23, 2025. COPPA imposes obligations on entities operating online that collect the personal information of children under the age of thirteen. The new COPPA Rule seeks to address new challenges in the digital landscape.

Under the new COPPA Rule, the FTC will consider additional evidence in determining whether a website or online service is directed at children. COPPA applies wherever children under the age of thirteen are a website or service’s intended or actual audience, and the FTC applies a multifactor test for assessing this. Under the new COPPA Rule, the FTC will now consider “marketing or promotional materials or plans, representations to consumers or to third parties, reviews by users or third parties, and the age of users on similar websites or services.” While the FTC has stated that this amendment simply serves to clarify how it analyzes the question of whether a website is child-directed (rather than acting as a change in policy), online operators should note that whether they are subject to COPPA depends in part on elements outside of their control, such as online reviews and the age of users of their peer websites and services.

The type of information protected by COPPA will also expand. COPPA mandates that websites and online services directed at children under the age of thirteen obtain verifiable parental consent before collecting, using, or disclosing any personal information from children. To date, this has included details like names, addresses, phone numbers, email addresses, and other identifiable data. The new COPPA Rule expands this definition to include biometric identifiers “that can be used for the automated or semi-automated recognition of an individual, such as finger prints; handprints; retina patterns; iris patterns; genetic data, including a DNA sequence; voiceprints; gait patterns; facial templates; or faceprints[.]” The definition will also include government identifiers such as social security or passport numbers, and birth certificates.

Data security requirements have also been enhanced. Operators subject to COPPA must maintain a written data security program, designate one or more employees to coordinate it, and conduct an annual assessment of risks. If they share any protected data with third parties, the disclosing party must ensure that the third party has sufficient capability and policies in place to maintain the data securely and within the bounds of COPPA regulations. Notably, the new COPPA Rule forbids indefinite retention of data, requiring that operators only retain protected information as long as is reasonably necessary to serve the purposes for which it was collected.

The new COPPA Rule contains a number of other policy changes, such as enhanced requirements for parental notice and control regarding the data collected, stored, and shared with third parties, new mechanisms for obtaining parental consent, and changes to an exception to the bar on collecting children’s data without parental consent for the limited purpose of determining whether a user is a child under the age of thirteen.

Entities operating a business or service online that may be used by children under the age of thirteen—even where children are not the intended audience—should carefully review the new rule, and take steps to ensure they are in full compliance. The new rule underscores the FTC’s continued interest in this space and its desire to take action against online services for practices it views as posing unacceptable risks to children’s privacy and online safety.

Senate Judiciary Committee, Subcommittee on Privacy, Technology, and the Law Holds Hearing on AI-Generated Deep Fakes

On May 21, the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law held a hearing titled, “The Good, the Bad, and the Ugly: AI-Generated Deep Fakes in 2025.” Witnesses included representatives of the Recording Industry Association of America, Consumer Reports, and YouTube, along with multi-award-winning musician Martina McBride. They all testified about the potential benefits of AI, but also the potential harms to creators, including musicians, and different but substantial harms to consumers. The witnesses discussed specific examples of the images and voices of both known and lesser-known innocent individuals used to defraud and exploit others, impacting reputations and livelihoods. A representative from the National Center on Sexual Exploitation (NCOSE) also testified about the pervasive and harmful impact of deep fakes on adults and children when their images are used to create pornography, which is then spread worldwide and unchecked on the internet. All of the witnesses testified in support of the NO FAKES Act of 2025, a bipartisan bill and a complement to the TAKE IT DOWN Act. The language of the current legislation can be found here. The bill currently provides for a civil cause of action with a detailed penalty regime for individuals who have their image or voice used without their permission and protects online service providers from liability if those providers have systems in place to identify and address the publication and dissemination of deep fakes. The bill also provides a legal process for individuals to obtain information from providers regarding the source of the published materials. The current version additionally endeavors to preempt state law, stating that the “rights established under this Act shall preempt any cause of action under State law for the protection of an individual’s voice and visual likeness rights in connection with a digital replica, as defined in this Act, in an expressive work.”

On June 11, 2025, Connecticut passed Senate Bill 01295 (SB 01295).  If signed by the governor, SB 01295 will amend the existing Connecticut Data Privacy Act (CTDPA) in several important ways, with the amendments going into effect on July 1, 2026.

Expanded Scope: In what is seen as a general trend, SB 01295 broadens the reach of the CTDPA by lowering applicability thresholds: the law will apply to organizations that control or process the personal data of 35,000 or more consumers, control or process any sensitive data, or engage in the sale of personal data. The bill also expands the definition of sensitive data, thereby increasing the number of covered entities.

Signaling another important trend, the amendment would remove the entity-level exemption for financial institutions under the Gramm-Leach-Bliley Act (GLBA), and instead only exempt data subject to the GLBA.  Notably, however, certain types of financial institutions may continue to enjoy entity-level exemptions.

Stricter Regulations for Minors: Social media platforms and online services targeting minors (individuals under 18) would also be subject to heightened obligations and standards, including restrictions on processing minors’ personal data in connection with certain risks and automated decisions.

Additional Changes: The amendments would also impose additional responsibilities on data controllers, including those related to consumer rights requests, data protection assessments, and privacy notices and disclosures.

***

Although this legislative season has not seen revolutionary new laws passed, amendments in states like Connecticut, Colorado, and Montana are important reminders that changes to existing laws can have significant impacts, both in broadening the scope of their application and in their effect on current compliance regimes.

On June 4, 2025, the Digital Advertising Alliance (“DAA”), the self-regulatory body that sets and enforces privacy standards for digital advertising, announced it is launching a process to determine if it is necessary to issue new guidance to address how the DAA’s Self-Regulatory Principles apply to the use of artificial intelligence systems and tools that leverage interest-based advertising (“IBA”) data. 

The DAA intends to meet with relevant stakeholders, such as trade associations, advertisers, publishers, and ad tech companies, over the coming weeks to consider the following issues:

  • the appropriate industry participants;
  • the current and anticipated use cases for IBA data by AI systems and tools;
  • consumer expectations around the collection and use of such data; and
  • the legal and regulatory gaps/overlaps with any such guidance.

While it is too early to tell what specific guidance will entail, the CEO of the DAA stated in the DAA’s announcement that the goal of the review is to “look at the steps companies can take to ensure they are providing appropriate information and control to consumers around the collection and use of IBA data by those [artificial intelligence] systems.”

On February 21, 2025, representatives in the California legislature introduced California Assembly Bill 1355, also known as the California Location Privacy Act (“AB 1355”).  AB 1355 seeks to amend the California Consumer Privacy Act (the “CCPA”) by imposing several new restrictions on the collection and use of consumer location data. 

Under AB 1355, “location data” means device information that reveals, directly or indirectly, where a person or device is or has been within the State of California, with precision sufficient to identify the street-level location of such person or device within a range of five miles or less.  AB 1355 provides examples including, but not limited to:

  • An IP address capable of revealing the physical or geographical location of an individual;
  • GPS coordinates;
  • Cell-site location information;
  • Information captured by an automated license plate recognition system that could be used to identify the specific location of an automobile at a point in time;
  • Information or image captured by a speed safety system or other traffic monitoring system that could be used to identify the specific location of an automobile at a point in time; and
  • A video or photographic image that is used as a probe image in a facial recognition technology system that could be used to identify the specific location of an individual at a point in time.

AB 1355 would impose the following restrictions on this broad category of location data:

  • Opt-In Consent:  Prior to collecting or using an individual’s location data, a covered entity would be required to obtain the individual’s express opt-in consent to collect and use their location data for the purpose of providing the goods or services requested.
  • Restrictions on Use & Disclosure:  Even if consent is collected, covered entities would be prohibited from (i) collecting more precise location data than necessary to provide the goods or services requested, (ii) retaining location data for longer than necessary to provide the goods or services requested, (iii) selling, renting, trading, or leasing location data to third parties, (iv) deriving or inferring from location data any information that is not necessary to provide the goods or services requested, or (v) disclosing the location data to any government agency without a valid court order.  The intent of these restrictions is to create “no-go zones” where data revealing visits to certain locations, such as reproductive health clinics or places of worship, cannot be used for discriminatory or otherwise improper or unlawful purposes.
  • Location Privacy Policy:  A covered entity would be required to maintain a “location privacy policy” that is presented to consumers at the point of collecting such location information.  The location privacy policy would be required to include, among other things, (i) the type of location data collected, (ii) the disclosures required to provide the requested goods or services, (iii) the identities of service providers and third parties to whom the location data is disclosed or could be disclosed, (iv) whether the location data is used for targeted advertising, and (v) the data security, retention, and deletion policies.
  • Changes to Location Privacy Policy:  A covered entity would be required to provide notice of any change to its location privacy policy at least twenty (20) business days in advance.
  • Enforcement & Penalties:  The California Attorney General, along with district attorneys, would be able to bring a civil action against a covered entity for violations of AB 1355, which may result in a civil penalty up to $25,000 per offense.

These proposed changes are similar to the approach to consumer location data already adopted under Maryland’s Online Data Privacy Act, which takes effect October 1, 2025. If enacted, however, AB 1355 would represent a significant departure from the opt-out framework currently set forth under the CCPA, where businesses can generally sell and share sensitive personal information, such as geolocation information, unless the consumer opts out and directs the business to limit its use.