There have been numerous developments over the last few months in the online safety and data privacy space, particularly as it affects minors. Here we cover notable decisions in the federal courts and cases with nationwide implications, as well as final and pending legislative and regulatory action by the federal government.

Notable Court Decisions

The Salesforce Decision

A recent decision by the Fifth Circuit held that a suit brought by sex trafficking victims against Salesforce for allegedly participating in a sex trafficking venture could move forward. The Court ruled that under certain circumstances, companies such as Salesforce that provide web-based business services to entities or individuals engaged in sex trafficking may be civilly liable as beneficiaries of a sex trafficking venture. The decision interprets Section 230 of the Communications Decency Act (“Section 230”), which generally protects web platform hosts from liability for content created by users. This is the most recent in a series of decisions limiting Section 230’s protections for entities that fail to take measures to prevent the use of their services by criminal actors engaged in sex trafficking.

The plaintiffs in Doe v. Salesforce are a group of sex trafficking victims who were trafficked through Backpage.com (“Backpage”), a Craigslist-type platform notorious for its permissiveness and encouragement of sex trafficking advertisements. They seek to hold Salesforce civilly liable under 18 U.S.C. § 1595, which creates a cause of action for victims against anyone who “knowingly benefits … from participation in a [sex trafficking] venture.” Salesforce allegedly provided Backpage cloud-based software tools and related services, including customer relationship management support. The Plaintiffs allege that Salesforce was aware that Backpage was engaged in sex trafficking, citing, inter alia, emails between Salesforce employees and a highly publicized Congressional report that found that Backpage actively facilitated prostitution and child sex trafficking.

Salesforce moved for summary judgment, arguing that Section 230 served as a complete bar to liability. While courts have generally interpreted Section 230 broadly in dismissing claims against internet platform hosts that are premised on the ways in which others use those platforms, the statute has been increasingly under fire by legislators and courts alike. Lawmakers on both sides of the aisle have discussed amending or repealing Section 230 in recent years and courts have slowly chipped away at the broad immunity by interpreting the statute more narrowly. This trend has been especially stark in cases dealing with sex trafficking and child sexual abuse. The Fifth Circuit’s decision in Doe v. Salesforce is a prime example of this, and a substantial step away from the breadth of protections afforded under earlier interpretations of Section 230.

The Fifth Circuit rejected a “but-for test,” which would shield a defendant if a cause of action would not have accrued without content created and posted by a third party. Salesforce advocated for what the court dubbed the “only-link” test, which would protect defendants when the only link between the defendant and the victims is the publication of third-party content. The Court rejected that argument, instead ruling that “the proper standard is whether the duty the defendant allegedly violated derives from their status as a publisher or speaker or requires the exercise of functions traditionally associated with publication.” The key question is whether the claim treats the defendant as a publisher or speaker. The Fifth Circuit found that the duty the plaintiffs alleged Salesforce breached was “a statutory duty to not knowingly benefit from participation in a sex-trafficking venture.” Because this duty is unrelated to traditional publishing functions, Section 230 does not serve as a shield. This decision underscores the need for companies to establish processes that will identify potential dangers of trafficking in or in relation to their businesses, including but not limited to the facilitation of trafficking using online platforms. Without proper safeguards, even businesses providing neutral tools and operations support may be held civilly liable for the harms that the users of their services perpetrate.

Garcia v. Character Technologies, Inc. et al.

The mother of a fourteen-year-old boy largely defeated a motion to dismiss her lawsuit against Character Technologies, Google, Alphabet, and two individual defendants in connection with the suicide of her child. The plaintiff alleged that her son was a user of Character A.I., which the Court describes as “an app that allows users to interact with various A.I. chatbots, referred to as ‘Characters.’” The Court also describes these “Characters” as “anthropomorphic; users’ interactions with Characters are meant to mirror interactions a user might have with another user on an ordinary messaging app.” In other words, the app is intended to give, and does give, the user the impression that he is communicating with a real person. The plaintiff alleged that the app had its intended impact on her child; she asserted that her son was addicted to the app and could not go one day without communicating with his Characters, resulting in severe mental health issues and problems in school. When his parents threatened to take away his phone, he took his own life. The plaintiff filed suit asserting numerous tort claims, along with a claim under Florida’s Deceptive and Unfair Trade Practices Act and a theory of unjust enrichment.

In largely denying the motion to dismiss, Judge Anne Conway, District Court Judge for the Middle District of Florida, made several notable rulings. Among them, she found that the plaintiff had adequately pled that Google is liable for the “harms caused by Character A.I. because Google was a component part manufacturer” of the app, deeming it sufficient that plaintiff pled that Google “substantially participated in integrating its models” into the app, which allegedly was necessary to build and maintain the platform. She also found that the plaintiff sufficiently pled that Google was potentially liable for aiding and abetting the tortious conduct because the amended complaint supported a “plausible inference” that Google possessed actual knowledge that Character’s product was defective. The Court further found that the app was a product, not a service, and that Character A.I.’s output is not speech protected by the First Amendment. The Court determined that the plaintiff had sufficiently pled all of her tort claims except intentional infliction of emotional distress, and allowed her claims under Florida’s Deceptive and Unfair Trade Practices Act and her unjust enrichment theory to go forward.

New York v. TikTok

In October 2024, the Attorney General for the State of New York filed suit against TikTok to hold it “accountable for the harms it has inflicted on the youngest New Yorkers by falsely marketing and promoting” its products. The following day, Attorney General James released a statement indicating that she was co-leading a coalition of 14 state Attorneys General each filing suit against TikTok for allegedly “misleading the public” about the safety of the platform and harming the mental health of children. Lawsuits were filed individually by each member of the coalition and all allege that TikTok violated the law “by falsely claiming its platform is safe for young people.” The press release can be found here.

The New York complaint includes allegations regarding the addictive nature of the app and its marketing and targeting of children, causing substantial mental health harm to minors. The complaint additionally includes allegations that TikTok resisted safety improvements to its app to boost profits, made false statements about the safety of the app for minors, and misrepresented the efficacy of certain safety features. The complaint asserts nine causes of action, including violations of New York law relating to fraudulent business conduct, deceptive business practices, and false advertising, along with claims asserting design defects, failure to warn, and ordinary negligence. In late May, Supreme Court Justice Anar Rathod Patel mostly denied TikTok’s motion to dismiss in a brief order that did not include her reasoning, allowing the case to proceed.

Federal Legislative and Regulatory Developments

President Trump Signs the TAKE IT DOWN Act; The Kids Online Safety Act (KOSA) Is Reintroduced

President Trump signed the “TAKE IT DOWN Act” on May 19, 2025. The bill criminalizes the online posting of nonconsensual intimate visual images of adults and minors and the publication of digital forgeries, defined as the intimate visual depiction of an identifiable individual created through various digital means that, when viewed as a whole, is indistinguishable from an authentic visual depiction. The statute also criminalizes threats to publish such images. The bill additionally requires online platforms, no later than one year from enactment, to establish clear processes by which individuals can notify companies of the existence of these images and to remove the images “as soon as possible, but not later than 48 hours” after receiving a request. The bill in its entirety can be found here.

Also in May, the Kids Online Safety Act (KOSA) was reintroduced in the Senate by a bipartisan group of legislators. In connection with their announcement of the revised version of KOSA, Senators Blackburn and Blumenthal thanked Elon Musk and others at X for their partnership in modifying KOSA’s language to “strengthen the bill while safeguarding free speech online and ensuring it’s not used to stifle expression” and noted the support of Musk and X to pass the legislation by the end of 2025. In the May announcement, the senators noted that the legislation is supported by over 250 national, state, and local organizations and has further gained the support of Apple. KOSA provides that platforms “shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” listed harms to minors where those harms were reasonably foreseeable. Those harms include eating disorders, depressive and anxiety disorders, compulsive use, online harassment, and sexual and financial exploitation. It requires that platforms provide minors (and parents) with readily accessible and easy-to-use safety tools that limit communication with the minor and limit by default access to and use of certain design features by minors. The legislation further mandates reporting tools for users and the establishment of internal processes to receive and substantively review all reports. The current version of KOSA is lengthy and contains numerous additional mandates and notice requirements, including third-party audits and public reporting regarding compliance. The most recent version of KOSA can be found here.

New COPPA Rule Takes Effect June 23, 2025

The Federal Trade Commission (FTC) has amended the Children’s Online Privacy Protection Rule (“COPPA Rule”) effective June 23, 2025. COPPA imposes obligations on entities operating online that collect the personal information of children under the age of thirteen. The new COPPA Rule seeks to address new challenges in the digital landscape.

Under the new COPPA Rule, the FTC will consider additional evidence in determining whether a website or online service is directed at children. COPPA applies wherever children under the age of thirteen are a website or service’s intended or actual audience, and the FTC applies a multifactor test for assessing this. Under the new COPPA Rule, the FTC will now consider “marketing or promotional materials or plans, representations to consumers or to third parties, reviews by users or third parties, and the age of users on similar websites or services.” While the FTC has stated that this amendment simply serves to clarify how it analyzes the question of whether a website is child-directed (rather than acting as a change in policy), online operators should note that whether they are subject to COPPA depends in part on elements outside of their control—such as online reviews and the age of users of their peer websites and services.

The type of information protected by COPPA will also expand. COPPA mandates that websites and online services directed at children under the age of thirteen obtain verifiable parental consent before collecting, using, or disclosing any personal information from children. To date, this has included details like names, addresses, phone numbers, email addresses, and other identifiable data. The new COPPA Rule expands this definition to include biometric identifiers “that can be used for the automated or semi-automated recognition of an individual, such as fingerprints; handprints; retina patterns; iris patterns; genetic data, including a DNA sequence; voiceprints; gait patterns; facial templates; or faceprints[.]” The definition will also include government identifiers such as Social Security or passport numbers, and birth certificates.

Data security requirements have also been enhanced. Operators subject to COPPA must maintain a written data security program, designate one or more employees to coordinate it, and conduct an annual assessment of risks. If an operator shares any protected data with third parties, it must ensure that the third party has sufficient capability and policies in place to maintain the data securely and within the bounds of COPPA regulations. Notably, the new COPPA Rule forbids indefinite retention of data, requiring that operators retain protected information only as long as reasonably necessary to serve the purposes for which it was collected.

The new COPPA Rule contains a number of other policy changes, such as enhanced requirements for parental notice and control regarding the data collected, stored, and shared with third parties, new mechanisms for obtaining parental consent, and changes to an exception to the bar on collecting children’s data without parental consent for the limited purpose of determining whether a user is a child under the age of thirteen.

Entities operating a business or service online that may be used by children under the age of thirteen—even where children are not the intended audience—should carefully review the new rule, and take steps to ensure they are in full compliance. The new rule underscores the FTC’s continued interest in this space and its desire to take action against online services for practices it views as posing unacceptable risks to children’s privacy and online safety.

Senate Judiciary Committee, Subcommittee on Privacy, Technology, and the Law Holds Hearing on AI-Generated Deep Fakes

On May 21, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing titled, “The Good, the Bad, and the Ugly: AI-Generated Deep Fakes in 2025.” Witnesses included representatives of the Recording Industry Association of America, Consumer Reports, and YouTube, along with multi-award-winning musician Martina McBride. They all testified about the potential benefits of AI, but also the potential harms to creators, including musicians, and different but substantial harms to consumers. The witnesses discussed specific examples of the images and voices of both known and lesser-known innocent individuals used to defraud and exploit others, impacting reputations and livelihoods. A representative from the National Center on Sexual Exploitation (NCOSE) also testified about the pervasive and harmful impact of deep fakes on adults and children when their images are used to create pornography, which is then spread worldwide and unchecked on the internet. All of the witnesses testified in support of the NO FAKES Act of 2025, a bipartisan bill and a complement to the TAKE IT DOWN Act. The language of the current legislation can be found here. The bill currently provides for a civil cause of action with a detailed penalty regime for individuals who have their image or voice used without their permission and protects online service providers from liability if those providers have systems in place to identify and address the publication and dissemination of deep fakes. The bill also provides a legal process for individuals to obtain information from providers regarding the source of the published materials. The current version additionally endeavors to preempt state law, stating that the “rights established under this Act shall preempt any cause of action under State law for the protection of an individual’s voice and visual likeness rights in connection with a digital replica, as defined in this Act, in an expressive work.”

On June 11, 2025, Connecticut passed Senate Bill 01295 (SB 01295).  If signed by the governor, SB 01295 will amend the existing Connecticut Data Privacy Act (CTDPA) in several important ways, with the amendments going into effect on July 1, 2026.

Expanded Scope: In what is seen as a general trend, SB 01295 broadens the reach of the CTDPA by lowering exemption thresholds: the law will apply to organizations that control or process the personal data of 35,000 or more consumers, control or process any sensitive data, or engage in the sale of personal data. The bill also expands the definition of sensitive data, thereby increasing the number of covered entities.

Signaling another important trend, the amendment would remove the entity-level exemption for financial institutions under the Gramm-Leach-Bliley Act (GLBA), and instead only exempt data subject to the GLBA.  Notably, however, certain types of financial institutions may continue to enjoy entity-level exemptions.

Stricter Regulations for Minors: Social media platforms and online services targeting minors (individuals under 18) would also be subject to heightened obligations and standards, including restrictions on processing minors’ personal data in connection with certain risks and automated decisions.

Additional Changes: The amendments would also place additional responsibilities on data controllers, including those related to consumer rights requests, data protection assessments, and privacy notices and disclosures.

***

Although this legislative season has not seen revolutionary new laws passed, amendments in states like Connecticut, Colorado, and Montana are important reminders that changes to existing laws can have significant impacts, both by broadening the scope of their application and by altering current compliance regimes.

On June 4, 2025, the Digital Advertising Alliance (“DAA”), the self-regulatory body that sets and enforces privacy standards for digital advertising, announced it is launching a process to determine if it is necessary to issue new guidance to address how the DAA’s Self-Regulatory Principles apply to the use of artificial intelligence systems and tools that leverage interest-based advertising (“IBA”) data. 

The DAA intends to meet with relevant stakeholders, such as trade associations, advertisers, publishers, and ad tech companies, over the coming weeks to consider the following issues:

  • the appropriate industry participants;
  • the current and anticipated use cases for IBA data by AI systems and tools;
  • consumer expectations around the collection and use of such data; and
  • the legal and regulatory gaps/overlaps with any such guidance.

While it is too early to tell what specific guidance will entail, the CEO of the DAA stated in the DAA’s announcement that the goal of the review is to “look at the steps companies can take to ensure they are providing appropriate information and control to consumers around the collection and use of IBA data by those [artificial intelligence] systems.”

On February 21, 2025, representatives in the California legislature introduced California Assembly Bill 1355, also known as the California Location Privacy Act (“AB 1355”).  AB 1355 seeks to amend the California Consumer Privacy Act (the “CCPA”) by imposing several new restrictions on the collection and use of consumer location data. 

Under AB 1355, “location data” means device information that reveals, directly or indirectly, where a person or device is or has been within the State of California, with precision sufficient to identify the street-level location of such person or device within a range of five miles or less.  AB 1355 provides examples including, but not limited to:

  • An IP address capable of revealing the physical or geographical location of an individual;
  • GPS coordinates;
  • Cell-site location information;
  • Information captured by an automated license plate recognition system that could be used to identify the specific location of an automobile at a point in time;
  • Information or image captured by a speed safety system or other traffic monitoring system that could be used to identify the specific location of an automobile at a point in time; and
  • A video or photographic image that is used as a probe image in a facial recognition technology system that could be used to identify the specific location of an individual at a point in time.

AB 1355 would impose the following restrictions on this broad category of location data:

  • Opt-In Consent:  Prior to collecting or using an individual’s location data, a covered entity would be required to obtain the individual’s express opt-in consent to collect and use their location data for the purpose of providing the goods or services requested.
  • Restrictions on Use & Disclosure:  Even if consent is collected, covered entities would be prohibited from (i) collecting more precise location data than necessary to provide the goods or services requested, (ii) retaining location data for longer than necessary to provide the goods or services requested, (iii) selling, renting, trading, or leasing location data to third parties, (iv) deriving or inferring from location data any information that is not necessary to provide the goods or services requested, or (v) disclosing the location data to any government agency without a valid court order.  The intent of these restrictions is to create “no-go zones” where data revealing visits to certain locations, such as reproductive health clinics or places of worship, cannot be used for discriminatory or otherwise improper or unlawful purposes.
  • Location Privacy Policy:  A covered entity would be required to maintain a “location privacy policy” that is presented to consumers at the point of collecting such location information.  The location privacy policy would be required to include, among other things, (i) the type of location data collected, (ii) the disclosures required to provide the requested goods or services, (iii) the identities of service providers and third parties to whom the location data is disclosed or could be disclosed, (iv) whether the location data is used for targeted advertising, and (v) the data security, retention, and deletion policies.
  • Changes to Location Privacy Policy:  A covered entity would be required to provide notice of any change to its location privacy policy at least twenty (20) business days in advance.
  • Enforcement & Penalties:  The California Attorney General, along with district attorneys, would be able to bring a civil action against a covered entity for violations of AB 1355, which may result in a civil penalty up to $25,000 per offense.

These proposed changes are similar to the approach to consumer location data already adopted under Maryland’s Online Data Privacy Act, which takes effect October 1, 2025.  If enacted, however, AB 1355 would represent a significant departure from the opt-out framework currently set forth under the CCPA, under which businesses can generally sell and share sensitive personal information, such as geolocation information, unless the consumer opts out and directs the business to limit its use.

On February 12, 2025, House Energy and Commerce Committee Chair Brett Guthrie (R-Ky) and Vice Chair John Joyce (R-Pa) announced the formation of a 12-member working group tasked with developing comprehensive data privacy legislation to establish a national privacy framework governing how companies can collect, use, and share personal data.

The announcement of the working group comes shortly after the U.S. Chamber of Commerce submitted a letter to House and Senate leaders urging Congress to take legislative action to pass a comprehensive national privacy law.  As set forth in the letter, in the absence of a federal standard, a growing number of states have attempted to fill the gap by passing their own privacy laws.  According to the letter, the current situation has left businesses grappling with a confusing and inconsistent patchwork of rules and regulations that vary from state to state.

Previous attempts to pass comprehensive federal privacy legislation have all failed.  Most recently, lawmakers introduced the American Data Privacy and Protection Act in 2022 and the American Privacy Rights Act in 2024, but neither garnered sufficient support to even proceed to a floor vote.

The working group now seeks to craft a bill that it claims will address issues that prevented the prior bills from garnering enough support to pass. However, it will face the same substantive and political roadblocks that have plagued attempts at a national privacy law in the past—including the fact that there will be pressure on California Republicans to object to a bill that preempts the CCPA.

Stakeholders interested in engaging with the working group can reach out to PrivacyWorkingGroup@mail.house.gov.

Recently, a federal court issued the first ruling on the closely watched issue of fair use in a copyright infringement case involving AI. The court ruled in favor of the plaintiff on its direct infringement claim and held that the defendant’s use of the plaintiff’s material to train its AI model was not a fair use.

The Upshot

  • On February 11, 2025, the court in Thomson Reuters v. Ross Intelligence reconsidered its prior decision that the question of fair use needed to be decided by the jury and instead ruled on renewed summary judgment motions that defendant’s use was not fair use.
  • The case involved defendant’s alleged infringement of Thomson Reuters’ Westlaw headnotes. Ross licensed “Bulk Memos” from a third party to train Ross’s AI-based legal search engine. The “Bulk Memos” were created using Westlaw headnotes.
  • The court found that the headnotes were original and copyrightable, and granted summary judgment to Thomson Reuters on direct infringement for certain headnotes.
  • On Ross’s fair use defense, the court found that the use was commercial and not transformative. It also found that the use impacted both the legal research market and the market for data to train AI tools. Overall, the fair use analysis favored Thomson Reuters.
  • Courts are just starting to reach decisions in AI-based copyright cases. The fair use analysis provides guidance for how future courts will think about these issues.

The Bottom Line

This closely watched decision is significant as it’s the first of its kind so far in the landscape of AI-related copyright litigation. While the infringement finding is fairly specific to the facts of the case, the fair use ruling will likely be influential for future courts’ analysis of this defense, particularly its discussion of the purpose and market impact of using copyrighted materials to train AI models. Ballard Spahr lawyers closely monitor this area of law to advise clients on issues of artificial intelligence and copyright infringement.

On February 11, 2025, Third Circuit Judge Stephanos Bibas, sitting by designation in the District of Delaware, issued a summary judgment decision in the closely watched copyright infringement dispute between Thomson Reuters and Ross Intelligence concerning Ross’s AI-based legal search engine. The court granted most of Thomson Reuters’ motion on direct copyright infringement, and held that Ross’s defenses, including fair use, failed as a matter of law. This case is significant as it’s the first of its kind to address fair use in connection with artificial intelligence, though the court was careful to point out that this matter, unlike many others working their way through the court system, involved a non-generative AI system.

The underlying case concerns Ross’s AI-based legal search engine and Thomson Reuters’ claim that the use of Thomson Reuters’ Westlaw headnotes as training material for the AI tool constituted copyright infringement. Thomson Reuters’ Westlaw platform contains editorial content and annotations, like headnotes, that guide users to key points of law and case holdings. Ross, a competitor to Westlaw, made a legal research search engine based on artificial intelligence, and initially asked to license Westlaw content to train its product. When Thomson Reuters refused, Ross turned to a third party, LegalEase, which provided training data in the form of “Bulk Memos” consisting of legal questions and answers. The Bulk Memos were created using Westlaw headnotes.

Thomson Reuters brought claims of copyright infringement based on this use. In 2023, the court largely denied Thomson Reuters’ motions for summary judgment on copyright infringement and fair use, and held that those issues were properly decided by a jury. After reflection, the court “realized that [its] prior summary-judgment ruling had not gone far enough,” and invited the parties to renew their summary judgment briefing. This time, the court largely ruled in Thomson Reuters’ favor.

First, the court held that Thomson Reuters’ headnotes were sufficiently original to be copyrightable, even if they were based on the text of underlying court cases. The court found that “[i]dentifying which words matter and chiseling away the surrounding mass expresses the editor’s idea about what the important point of law from the opinion is,” and therefore has enough of a “creative spark” to overcome the low bar presented by the originality requirement. Similarly, Westlaw’s Key Numbering System was also sufficiently original, as Thomson Reuters had chosen a particular way to organize these legal topics, even if it was not a novel one. The court then turned to actual copying and substantial similarity and granted summary judgment to Thomson Reuters on headnotes which “very closely track[ ] the language of the Bulk Memo question but not the language of the case opinion.” Other headnotes and the Key Numbering System were left for trial.

On fair use, the court granted summary judgment for Thomson Reuters, finding that Ross’s use was not fair. On the first fair use factor, the purpose and character of the use, the court found that Ross’s use was commercial and served the same purpose as Thomson Reuters’ product: a legal research tool. In the parlance of fair use law, Ross’s use was not “transformative.” The court also rejected Ross’s analogy to earlier computer programming cases where intermediate copying was necessary, and rejected Ross’s argument that the copying was allowed because the text of the headnotes was not reproduced in the final product.

The second and third factors (the nature of the material and how much was used) went to Ross, but the fourth factor, the likely effect on the market for the original work and “the single most important” of the four factors, went to Thomson Reuters. The court looked at both the current market for the original work and potential derivative ones, and found that Ross’s use impacted both the original market for legal research and the derivative market for data to train AI tools. The court found that it did “not matter whether Thomson Reuters has used the data to train its own legal search tools; the effect on a potential market for AI training data is enough.” Altogether, the four fair use factors favored Thomson Reuters, and it was granted summary judgment on fair use.

Looking beyond this dispute, the opinion is the first decision to substantively address fair use in the context of artificial intelligence, so it will be an important guidepost for the multiple cases pending across the country, many of which involve companies that have used copyrighted works to train generative AI models. However, the opinion has an important caveat, which is that “only non-generative AI” was at issue in the case. Generative AI models use their training data set to create new text, image, video, or other outputs. Non-generative models, by contrast, analyze and classify data based on patterns learned from their training data. The cases involving generative AI may involve a different analysis of the fair use factors, such as transformativeness and the nature of the original works, but the opinion’s commentary on current and potential markets, as well as its willingness to weigh the four factors on summary judgment, may be highly applicable.

In short, this is an important decision but much remains unsettled in the law applying copyright to artificial intelligence. Ballard Spahr lawyers closely monitor developments concerning artificial intelligence and intellectual property, including copyright infringement and fair use. Our AI Legislation and Litigation Tracker provides a comprehensive view of AI-related legislative activities and important information about litigation matters with significant potential impact on clients.

On January 6, 2025, the U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) published a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act (“HIPAA”) Security Rule. The proposed changes, if enacted, would represent the first update to the HIPAA Security Rule since 2013.

The proposed updates, which apply to covered entities and business associates (collectively, “Regulated Entities”), aim to enhance cybersecurity measures within the healthcare sector, addressing the increasing frequency and sophistication of cyberattacks that threaten patient safety and the confidentiality of electronic protected health information (“ePHI”).

Below are some of the key proposals set forth in the NPRM:

  1. Strengthened Security Requirements: HHS proposes eliminating the current distinction between “required” and “addressable” provisions of the Security Rule, thereby requiring compliance with all implementation specifications in the future.  For example, with certain exceptions, ePHI would now be required to be encrypted at rest and in transit.  Regulated Entities would no longer be permitted to merely document rationale for noncompliance with “addressable” implementation specifications. HHS also proposes new implementation specifications.  As such, Regulated Entities would be required to strengthen and adopt security standards to ensure robust cybersecurity practices that keep pace with technological advancements and emerging threats, including by deploying anti-malware solutions, removing unnecessary software, disabling unused network ports, implementing multi-factor authentication for systems that handle ePHI, and conducting vulnerability scans every six months and annual penetration tests.
  2. Technology Asset Inventory and Network Map: Regulated Entities would be required to develop and maintain an inventory of their technology assets and create a network map illustrating the movement of ePHI within the Regulated Entities’ systems, which must be updated annually or when significant changes in the organizations’ operations or environment occur.
  3. Enhanced Risk Analyses: Regulated Entities would be required to include greater specificity when conducting a risk analysis, including, among other things:
    • “A review of the technology asset inventory and network map.
    • Identification of all reasonably anticipated threats to the confidentiality, integrity, and availability of ePHI.
    • Identification of potential vulnerabilities and predisposing conditions to the regulated entity’s relevant electronic information systems; [and]
    • An assessment of the risk level for each identified threat and vulnerability, based on the likelihood that each identified threat will exploit the identified vulnerabilities.”
    • The written risk assessment would need to be reviewed, verified, and updated at least every 12 months, with evaluations conducted when there are changes in the environment or operations. A written risk management plan must be maintained and reviewed annually.
  4. Contingency and Incident Response Plans with Notification Procedures: Regulated Entities would be required to implement detailed plans for restoring systems within 72 hours, prioritizing critical systems, and to establish and regularly test written security incident response plans; business associates and subcontractors would be required to notify covered entities within 24 hours of activating their contingency plans.
  5. Verification of Business Associates’ Safeguards: Business associates would be required to verify at least once every 12 months that they have deployed technical safeguards required by the Security Rule to protect ePHI through a written analysis of the business associate’s relevant electronic information systems by a subject matter expert and a written certification that the analysis has been performed and is accurate. Based on these written verifications, Regulated Entities would be required to conduct an assessment of the risks posed by new and existing business associate arrangements.

Along with the NPRM, OCR published a fact sheet that provides additional details on the proposed updates.

Public comments to the proposed rule are due on or before March 7, 2025, although it is possible that the change in Administrations later this month could affect the progress of this and other proposed rules. While HHS undertakes the rulemaking, the current Security Rule remains in effect.

The Dutch Data Protection Authority (the “Dutch DPA”) imposed a €4.75 million (approximately $5 million USD) fine on Netflix in connection with a data access investigation that started in 2019.  The investigation arose out of a complaint filed by the nonprofit privacy and digital rights organization noyb, which is run by European privacy campaigner Max Schrems.

In a press release dated December 18, 2024, the Dutch DPA stated that Netflix “did not give customers sufficient information about what the company does with their personal data between 2018 and 2020.”  In particular, the Dutch DPA alleged Netflix’s privacy notice was not clear about the following:

  • the purposes of and the legal basis for collecting and using personal data;
  • which personal data are shared by Netflix with other parties, and why precisely this is done;
  • how long Netflix retains the data; and
  • how Netflix ensures that personal data remain safe when the company transmits them to countries outside Europe.

Furthermore, the Dutch DPA stated that customers did not receive sufficient information when they asked Netflix what data the company collects about them.  According to the Dutch DPA, Netflix has since updated its privacy statement to improve the relevant disclosures.

Netflix has objected to the fine.

On December 3, 2024, the Consumer Financial Protection Bureau (CFPB) published its long anticipated proposed rule aimed at regulating data brokers under the Fair Credit Reporting Act (FCRA).  Although the CFPB’s future is uncertain under the upcoming administration, if implemented, the rule would significantly expand the reach of the FCRA. 

In the accompanying press release, the CFPB stated that its “proposal would ensure data brokers comply with federal law and address critical threats from current data broker practices, including” national security and surveillance risks; criminal exploitation; and violence, stalking, and personal safety threats to law enforcement personnel and domestic violence survivors.  The CFPB expanded on these stated risks in a separate fact sheet.

To address these risks, the proposed rule would treat data brokers like credit bureaus and background check companies: Companies that sell data about income or financial tier, credit history, credit score, or debt payments would be considered consumer reporting agencies required to comply with the FCRA, regardless of how the information is used.  So, the rule would turn data brokers’ disclosure of such information into the communication of consumer reports subject to FCRA’s regulation.  The CFPB did not propose any express exceptions for use of credit header data for fraud prevention, identity verification, compliance with Bank Secrecy Act or Know-Your-Customer requirements, or law enforcement uses.    

If enacted, the proposed rule would significantly impact the data broker industry and restrict the information that data brokers can sell to third parties.  It would also likely increase compliance costs for all data brokers—regardless of the types of data in which they deal.  Unsurprisingly, as with other CFPB initiatives of late, industry reactions were immediate and clear.  For example, the Consumer Data Industry Association (CDIA) expressed concerns that the proposed rule could have “severe unintended consequences for public safety, law enforcement, and the consumer economy.”  Specifically, the CDIA noted that the proposed rule could make “it harder to identify and prevent fraudulent schemes” and that it “may become more difficult for police to identify and track fugitives or locate missing and exploited children.”  It therefore called “on the CFPB to engage in a more collaborative approach with industry stakeholders and lawmakers to address data privacy concerns without compromising the integrity and efficiency of the credit reporting system that has long been the envy of the world.”

In any event, the proposed rule has a 90-day comment period, meaning that the comment period will run until March 3, 2025.  Based on the incoming Trump administration’s apparent position towards the CFPB and FCRA, it seems unlikely that the rule will go into effect as proposed.  But until anything becomes formal, companies that would be impacted by the proposed rule should still consider submitting comments to ensure that their interests are protected.

On December 5, 2024, the Colorado Department of Law adopted amended rules to the Colorado Privacy Act (CPA). 

The DOL had released the first set of the proposed amended rules—which relate to the interpretative guidance and opinion letter process, biometric identifier consent, and additional requirements for the personal data of minors—on September 13, 2024. The Attorney General discussed the proposed rules at the 2024 Annual Colorado Privacy Summit, sought and received comments from the public, and revised the rules. The adopted rules will now be sent to the Attorney General, who will issue a formal opinion. After that formal opinion is issued, the rules will be filed with the Secretary of State, and they will become effective 30 days after they are published in the state register.