There have been numerous developments in the online safety and data privacy space, particularly for minors, over the last few months. Here we cover notable decisions in the federal courts, cases with nationwide implications, and final and pending legislative and regulatory action by the federal government.
Notable Court Decisions
The Salesforce Decision
A recent decision by the Fifth Circuit held that a suit brought by sex trafficking victims against Salesforce for allegedly participating in a sex trafficking venture could move forward. The Court ruled that under certain circumstances, companies such as Salesforce that provide web-based business services to entities or individuals engaged in sex trafficking may be civilly liable as a beneficiary of a sex trafficking venture. The decision interprets Section 230 of the Communications Decency Act (“Section 230”), which generally protects web platform hosts from liability for content created by users. This is the most recent in a series of decisions limiting Section 230’s protections for entities that fail to take measures to prevent the use of their services by criminal actors engaged in sex trafficking.
The plaintiffs in Doe v. Salesforce are a group of sex trafficking victims who were trafficked through Backpage.com (“Backpage”), a Craigslist-type platform notorious for its permissiveness and encouragement of sex trafficking advertisements. They seek to hold Salesforce civilly liable under 18 U.S.C. § 1595, which creates a cause of action for victims against anyone who “knowingly benefits … from participation in a [sex trafficking] venture.” Salesforce allegedly provided Backpage with cloud-based software tools and related services, including customer relationship management support. The plaintiffs allege that Salesforce was aware that Backpage was engaged in sex trafficking, citing, inter alia, emails between Salesforce employees and a highly publicized Congressional report that found that Backpage actively facilitated prostitution and child sex trafficking.
Salesforce moved for summary judgment, arguing that Section 230 served as a complete bar to liability. While courts have generally interpreted Section 230 broadly in dismissing claims against internet platform hosts that are premised on the ways in which others use those platforms, the statute has come increasingly under fire from legislators and courts alike. Lawmakers on both sides of the aisle have discussed amending or repealing Section 230 in recent years, and courts have slowly chipped away at the broad immunity by interpreting the statute more narrowly. This trend has been especially stark in cases dealing with sex trafficking and child sexual abuse. The Fifth Circuit’s decision in Doe v. Salesforce is a prime example of this, and a substantial step away from the breadth of protections afforded under earlier interpretations of Section 230.
The Fifth Circuit rejected a “but-for test,” which would shield a defendant if a cause of action would not have accrued without content created and posted by a third party. Salesforce advocated for what the court dubbed the “only-link” test, which would protect defendants when the only link between the defendant and the victims is the publication of third-party content. The Court rejected that argument, instead ruling that “the proper standard is whether the duty the defendant allegedly violated derives from their status as a publisher or speaker or requires the exercise of functions traditionally associated with publication.” The key question is whether the claim treats the defendant as a publisher or speaker. The Fifth Circuit found that the duty the plaintiffs alleged Salesforce breached was “a statutory duty to not knowingly benefit from participation in a sex-trafficking venture.” Because this duty is unrelated to traditional publishing functions, Section 230 does not serve as a shield. This decision underscores the need for companies to establish processes that will identify potential dangers of trafficking in, or in relation to, their businesses, including but not limited to the facilitation of trafficking through online platforms. Without proper safeguards, even businesses providing neutral tools and operations support may be held civilly liable for the harms the users of their services perpetrate.
Garcia v. Character Technologies, Inc. et al.
The mother of a fourteen-year-old boy largely defeated a motion to dismiss her lawsuit against Character Technologies, Google, Alphabet, and two individual defendants in connection with the suicide of her child. The plaintiff alleged that her son was a user of Character A.I., which the Court describes as “an app that allows users to interact with various A.I. chatbots, referred to as ‘Characters.’” The Court also describes these “Characters” as “anthropomorphic; users’ interactions with Characters are meant to mirror interactions a user might have with another user on an ordinary messaging app.” In other words, the app is intended to give, and does give, users the impression that they are communicating with a real person. The plaintiff alleged that the app had its intended impact on her child; she asserted that her son was addicted to the app and could not go one day without communicating with his Characters, resulting in severe mental health issues and problems in school. When his parents threatened to take away his phone, he took his own life. The plaintiff filed suit asserting numerous tort claims, along with a claim under Florida’s Deceptive and Unfair Trade Practices Act and a theory of unjust enrichment.
In largely denying the motion to dismiss, Judge Anne Conway, District Court Judge for the Middle District of Florida, made several notable rulings. Among them, she found that the plaintiff had adequately pled that Google is liable for the “harms caused by Character A.I. because Google was a component part manufacturer” of the app, deeming it sufficient that the plaintiff pled that Google “substantially participated in integrating its models” into the app, which allegedly was necessary to build and maintain the platform. She also found that the plaintiff sufficiently pled that Google was potentially liable for aiding and abetting the tortious conduct because the amended complaint supported a “plausible inference” that Google possessed actual knowledge that Character’s product was defective. The Court further found that the app was a product, not a service, and that Character A.I.’s output is not speech protected by the First Amendment. The Court determined that the plaintiff had sufficiently pled all of her tort claims except her claim for intentional infliction of emotional distress, and allowed her claims under Florida’s Deceptive and Unfair Trade Practices Act and her unjust enrichment theory to proceed.
New York v. TikTok
In October 2024, the Attorney General for the State of New York filed suit against TikTok to hold it “accountable for the harms it has inflicted on the youngest New Yorkers by falsely marketing and promoting” its products. The following day, Attorney General James released a statement indicating that she was co-leading a coalition of 14 state Attorneys General, each filing suit against TikTok for allegedly “misleading the public” about the safety of the platform and harming the mental health of children. Lawsuits were filed individually by each member of the coalition, and all allege that TikTok violated the law “by falsely claiming its platform is safe for young people.” The press release can be found here.
The New York complaint includes allegations regarding the addictive nature of the app and its marketing and targeting of children, causing substantial mental health harm to minors. The complaint additionally includes allegations that TikTok resisted safety improvements to its app to boost profits, made false statements about the safety of the app for minors, and misrepresented the efficacy of certain safety features. The complaint asserts nine causes of action, including violations of New York law relating to fraudulent business conduct, deceptive business practices, and false advertising, along with claims asserting design defects, failure to warn, and ordinary negligence. In late May, New York Supreme Court Justice Anar Rathod Patel largely denied TikTok’s motion to dismiss in a brief order that did not state her reasoning, allowing the case to proceed.
Federal Legislative and Regulatory Developments
President Trump Signs the TAKE IT DOWN Act; The Kids Online Safety Act (KOSA) is reintroduced
President Trump signed the “TAKE IT DOWN Act” on May 19, 2025. The bill criminalizes the online posting of nonconsensual intimate visual images of adults and minors and the publication of digital forgeries, defined as the intimate visual depiction of an identifiable individual created through various digital means that, when viewed as a whole, is indistinguishable from an authentic visual depiction. The statute also criminalizes threats to publish such images. The bill additionally requires online platforms, no later than one year from enactment, to establish clear processes by which individuals can notify companies of the existence of these images and to remove the images “as soon as possible, but not later than 48 hours” after receiving a request. The bill in its entirety can be found here.
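For platform operators, the operative constraint in the notice-and-removal provision is the 48-hour clock. The sketch below is a minimal, hypothetical illustration of how a platform might track takedown requests against that statutory deadline; the class, field names, and workflow are our own assumptions for illustration and are not prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The Act requires removal "as soon as possible, but not later than 48 hours"
# after a request is received; this tracking structure is a hypothetical
# illustration, not a statutory requirement.
REMOVAL_DEADLINE = timedelta(hours=48)

@dataclass
class TakedownRequest:
    request_id: str
    content_url: str
    received_at: datetime                # when the platform received the notice
    removed_at: datetime | None = None   # set once the image is taken down

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_DEADLINE

    def is_overdue(self, now: datetime | None = None) -> bool:
        """True if the image is still up past the 48-hour deadline."""
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.deadline
```

In practice, any request flagged as overdue would need to be escalated for immediate removal; the 48-hour deadline is the only element of this sketch that the statute itself fixes.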
Also in May, the Kids Online Safety Act (KOSA) was reintroduced in the Senate by a bipartisan group of legislators. In connection with their announcement of the revised version of KOSA, Senators Blackburn and Blumenthal thanked Elon Musk and others at X for their partnership in modifying KOSA’s language to “strengthen the bill while safeguarding free speech online and ensuring it’s not used to stifle expression” and noted the support of Musk and X to pass the legislation by the end of 2025. In their May announcement, the senators noted that the legislation is supported by over 250 national, state, and local organizations and has also gained the support of Apple. KOSA provides that platforms “shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” listed harms to minors where those harms were reasonably foreseeable. Those harms include eating disorders, depressive and anxiety disorders, compulsive use, online harassment, and sexual and financial exploitation. It requires that platforms provide minors (and parents) with readily accessible and easy-to-use safety tools that limit communication with the minor and limit by default access to and use of certain design features by minors. The legislation further mandates reporting tools for users and the establishment of internal processes to receive and substantively review all reports. The current version of KOSA is lengthy and contains numerous additional mandates and notice requirements, including third-party audits and public reporting regarding compliance. The most recent version of KOSA can be found here.
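As a purely hypothetical illustration of the bill’s “protective defaults” concept (limiting communication and certain design features for minors by default), the sketch below shows one way a platform might represent such defaults; the specific settings and names are assumptions for illustration and do not appear in the bill text.

```python
from dataclasses import dataclass

@dataclass
class AccountSafetySettings:
    # Hypothetical settings, named for illustration; KOSA does not prescribe
    # a specific schema, only that protective defaults apply to minors.
    direct_messages_from_strangers: bool
    autoplay_enabled: bool
    personalized_recommendations: bool
    visible_to_search: bool

def default_settings(is_minor: bool) -> AccountSafetySettings:
    """Accounts identified as minors receive the most protective defaults."""
    if is_minor:
        return AccountSafetySettings(
            direct_messages_from_strangers=False,
            autoplay_enabled=False,
            personalized_recommendations=False,
            visible_to_search=False,
        )
    return AccountSafetySettings(
        direct_messages_from_strangers=True,
        autoplay_enabled=True,
        personalized_recommendations=True,
        visible_to_search=True,
    )
```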
New COPPA Rule Takes Effect June 23, 2025
The Federal Trade Commission (FTC) has amended the Children’s Online Privacy Protection Rule (“COPPA Rule”) effective June 23, 2025. COPPA imposes obligations on entities operating online that collect the personal information of children under the age of thirteen. The new COPPA Rule seeks to address new challenges in the digital landscape.
Under the new COPPA Rule, the FTC will consider additional evidence in determining whether a website or online service is directed at children. COPPA applies wherever children under the age of thirteen are a website or service’s intended or actual audience, and the FTC applies a multifactor test for assessing this. Under the new COPPA Rule, the FTC will now consider “marketing or promotional materials or plans, representations to consumers or to third parties, reviews by users or third parties, and the age of users on similar websites or services.” While the FTC has stated that this amendment simply serves to clarify how it analyzes the question of whether a website is child-directed (rather than acting as a change in policy), online operators should note that whether they are subject to COPPA depends in part on elements outside of their control—such as online reviews and the age of users of their peer websites and services.
The type of information protected by COPPA will also expand. COPPA mandates that websites and online services directed at children under the age of thirteen obtain verifiable parental consent before collecting, using, or disclosing any personal information from children. To date, this has included details like names, addresses, phone numbers, email addresses, and other identifiable data. The new COPPA Rule expands this definition to include biometric identifiers “that can be used for the automated or semi-automated recognition of an individual, such as finger prints; handprints; retina patterns; iris patterns; genetic data, including a DNA sequence; voiceprints; gait patterns; facial templates; or faceprints[.]” The definition will also include government identifiers such as social security or passport numbers, and birth certificates.
Data security requirements have also been enhanced. Operators subject to COPPA must maintain a written data security program, designate one or more employees to coordinate it, and conduct an annual assessment of risks. If they share any protected data with third parties, the disclosing party must ensure that the third party has sufficient capability and policies in place to maintain the data securely and within the bounds of COPPA regulations. Notably, the new COPPA Rule forbids indefinite retention of data, requiring that operators only retain protected information as long as is reasonably necessary to serve the purposes for which it was collected.
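To make the retention limit concrete, the following sketch shows one hypothetical way an operator might flag records held past a purpose-specific retention period. The purposes and periods shown are assumptions for illustration; the rule itself states the principle (no indefinite retention, and retention only as long as reasonably necessary for the purpose of collection) rather than fixed time limits.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per collection purpose; the COPPA Rule does
# not set fixed periods, only that data may be kept no longer than reasonably
# necessary for the purpose for which it was collected.
RETENTION_BY_PURPOSE = {
    "account_management": timedelta(days=365),
    "support_ticket": timedelta(days=90),
}

def records_due_for_deletion(records, now=None):
    """Return records held longer than the retention period for their purpose.

    Each record is assumed (for illustration) to be a dict with 'purpose' and
    'collected_at' (a timezone-aware datetime) keys.
    """
    now = now or datetime.now(timezone.utc)
    due = []
    for record in records:
        limit = RETENTION_BY_PURPOSE.get(record["purpose"])
        if limit is not None and now - record["collected_at"] > limit:
            due.append(record)
    return due
```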
The new COPPA Rule contains a number of other policy changes, such as enhanced requirements for parental notice and control regarding the data collected, stored, and shared with third parties, new mechanisms for obtaining parental consent, and changes to an exception to the bar on collecting children’s data without parental consent for the limited purpose of determining whether a user is a child under the age of thirteen.
Entities operating a business or service online that may be used by children under the age of thirteen—even where children are not the intended audience—should carefully review the new rule and take steps to ensure they are in full compliance. The new rule underscores the FTC’s continued interest in this space and its desire to take action against online services for practices it views as posing unacceptable risks to children’s privacy and online safety.
Senate Judiciary Committee, Subcommittee on Privacy, Technology, and the Law Holds Hearing on AI-Generated Deep Fakes
On May 21, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing titled “The Good, the Bad, and the Ugly: AI-Generated Deep Fakes in 2025.” Witnesses included representatives of the Recording Industry Association of America, Consumer Reports, and YouTube, along with multi-award-winning musician Martina McBride. They all testified about the potential benefits of AI, but also the potential harms to creators, including musicians, and different but substantial harms to consumers. The witnesses discussed specific examples of the images and voices of both known and lesser-known innocent individuals used to defraud and exploit others, impacting reputations and livelihoods. A representative from the National Center on Sexual Exploitation (NCOSE) also testified about the pervasive and harmful impact of deep fakes on adults and children when their images are used to create pornography, which is then spread worldwide and unchecked on the internet. All of the witnesses testified in support of the NO FAKES Act of 2025, a bipartisan bill and a complement to the TAKE IT DOWN Act. The language of the current legislation can be found here. The bill currently provides for a civil cause of action with a detailed penalty regime for individuals who have their image or voice used without their permission and protects online service providers from liability if those providers have systems in place to identify and address the publication and dissemination of deep fakes. The bill also provides a legal process for individuals to obtain information from providers regarding the source of the published materials. The current version additionally endeavors to preempt state law, stating that the “rights established under this Act shall preempt any cause of action under State law for the protection of an individual’s voice and visual likeness rights in connection with a digital replica, as defined in this Act, in an expressive work.”