On February 21, 2025, representatives in the California legislature introduced California Assembly Bill 1355, also known as the California Location Privacy Act (“AB 1355”).  AB 1355 seeks to amend the California Consumer Privacy Act (the “CCPA”) by imposing several new restrictions on the collection and use of consumer location data. 

Under AB 1355, “location data” means device information that reveals, directly or indirectly, where a person or device is or has been within the State of California, with precision sufficient to identify the street-level location of such person or device within a range of five miles or less.  AB 1355 provides examples including, but not limited to:

  • An IP address capable of revealing the physical or geographical location of an individual;
  • GPS coordinates;
  • Cell-site location information;
  • Information captured by an automated license plate recognition system that could be used to identify the specific location of an automobile at a point in time;
  • Information or image captured by a speed safety system or other traffic monitoring system that could be used to identify the specific location of an automobile at a point in time; and
  • A video or photographic image that is used as a probe image in a facial recognition technology system that could be used to identify the specific location of an individual at a point in time.

AB 1355 would impose the following restrictions on this broad category of location data:

  • Opt-In Consent:  Prior to collecting or using an individual’s location data, a covered entity would be required to obtain the individual’s express opt-in consent to collect and use their location data for the purpose of providing the goods or services requested.
  • Restrictions on Use & Disclosure:  Even if consent is collected, covered entities would be prohibited from (i) collecting more precise location data than necessary to provide the goods or services requested, (ii) retaining location data for longer than necessary to provide the goods or services requested, (iii) selling, renting, trading, or leasing location data to third parties, (iv) deriving or inferring from location data any information that is not necessary to provide the goods or services requested, or (v) disclosing the location data to any government agency without a valid court order.  The intent of these restrictions is to create “no-go zones” where data revealing visits to certain locations, such as reproductive health clinics or places of worship, cannot be used for discriminatory or otherwise improper or unlawful purposes.
  • Location Privacy Policy:  A covered entity would be required to maintain a “location privacy policy” that is presented to consumers at the point of collecting such location information.  The location privacy policy would be required to include, among other things, (i) the type of location data collected, (ii) the disclosures required to provide the requested goods or services, (iii) the identities of service providers and third parties to whom the location data is disclosed or could be disclosed, (iv) whether the location data is used for targeted advertising, and (v) the data security, retention, and deletion policies.
  • Changes to Location Privacy Policy:  A covered entity would be required to provide notice of any change to its location privacy policy at least twenty (20) business days in advance.
  • Enforcement & Penalties:  The California Attorney General, along with district attorneys, would be able to bring a civil action against a covered entity for violations of AB 1355, which may result in a civil penalty up to $25,000 per offense.

These proposed changes are similar to the approach to consumer location data already adopted under Maryland’s Online Data Privacy Act, which takes effect October 1, 2025.  If enacted, however, AB 1355 would represent a significant departure from the opt-out framework currently set forth under the CCPA, under which businesses can generally sell and share sensitive personal information, such as geolocation information, unless the consumer opts out and directs the business to limit its use.

On February 12, 2025, House Energy and Commerce Committee Chair Brett Guthrie (R-KY) and Vice Chair John Joyce (R-PA) announced the formation of a 12-member working group tasked with developing comprehensive data privacy legislation to establish a national privacy framework governing how companies can collect, use, and share personal data.

The announcement of the working group comes shortly after the U.S. Chamber of Commerce submitted a letter to House and Senate leaders urging Congress to pass a comprehensive national privacy law.  The letter notes that, in the absence of a federal standard, a growing number of states have attempted to fill the gap by passing their own privacy laws, leaving businesses grappling with a confusing and inconsistent patchwork of rules and regulations that vary from state to state.

Previous attempts to pass comprehensive federal privacy legislation have all failed.  Most recently, lawmakers introduced the American Data Privacy and Protection Act in 2022 and the American Privacy Rights Act in 2024, but neither garnered sufficient support to even proceed to a floor vote.

The working group now seeks to craft a bill that it claims will address issues that prevented the prior bills from garnering enough support to pass. However, it will face the same substantive and political roadblocks that have plagued attempts at a national privacy law in the past—including the fact that there will be pressure on California Republicans to object to a bill that preempts the CCPA.

Stakeholders interested in engaging with the working group can reach out to PrivacyWorkingGroup@mail.house.gov.

Recently, a federal court issued the first ruling on the closely watched issue of fair use in copyright infringement involving AI. The court ruled in favor of the plaintiff on its direct infringement claim, and ruled that the defendant’s use of plaintiff’s material to train its AI model was not a fair use.

The Upshot

  • On February 11, 2025, the court in Thomson Reuters v. Ross Intelligence reconsidered its prior decision that the question of fair use needed to be decided by the jury and instead ruled on renewed summary judgment motions that defendant’s use was not fair use.
  • The case involved defendant’s alleged infringement of Thomson Reuters’ Westlaw headnotes. Ross licensed “Bulk Memos” from a third party to train Ross’s AI-based legal search engine. The “Bulk Memos” were created using Westlaw headnotes.
  • The court found that the headnotes were original and copyrightable, and granted summary judgment to Thomson Reuters on direct infringement for certain headnotes.
  • On Ross’s fair use defense, the court found that the use was commercial and not transformative. It also found that the use impacted both the legal research market and the market for data to train AI tools. Overall, the fair use analysis favored Thomson Reuters.
  • Courts are just starting to reach decisions in AI-based copyright cases. The fair use analysis provides guidance for how future courts will think about these issues.

The Bottom Line

This closely watched decision is significant as it’s the first of its kind so far in the landscape of AI-related copyright litigation. While the infringement finding is fairly specific to the facts of the case, the fair use ruling will likely be influential for future courts’ analysis of this defense, particularly its discussion of the purpose and market impact of using copyrighted materials to train AI models. Ballard Spahr lawyers closely monitor this area of law to advise clients on issues of artificial intelligence and copyright infringement.

On February 11, 2025, Third Circuit Judge Stephanos Bibas, sitting by designation in the District of Delaware, issued a summary judgment decision in the closely watched copyright infringement dispute between Thomson Reuters and Ross Intelligence concerning Ross’s AI-based legal search engine. The court granted most of Thomson Reuters’ motion on direct copyright infringement, and held that Ross’s defenses, including fair use, failed as a matter of law. This case is significant as it’s the first of its kind to address fair use in connection with artificial intelligence, though the court was careful to point out that this matter, unlike many others working their way through the court system, involved a non-generative AI system.

The underlying case concerns Ross’s AI-based legal search engine and Thomson Reuters’ claim that the use of Thomson Reuters’ Westlaw headnotes as training material for the AI tool constituted copyright infringement. Thomson Reuters’ Westlaw platform contains editorial content and annotations, like headnotes, that guide users to key points of law and case holdings. Ross, a competitor to Westlaw, made a legal research search engine based on artificial intelligence, and initially asked to license Westlaw content to train its product. When Thomson Reuters refused, Ross turned to a third party, LegalEase, which provided training data in the form of “Bulk Memos” consisting of legal questions and answers. The Bulk Memos were created using Westlaw headnotes.

Thomson Reuters brought claims of copyright infringement based on this use. In 2023, the court largely denied Thomson Reuters’ motions for summary judgment on copyright infringement and fair use, and held that those issues were properly decided by a jury. After reflection, the court “realized that [its] prior summary-judgment ruling had not gone far enough,” and invited the parties to renew their summary judgment briefing. This time, the court largely ruled in Thomson Reuters’ favor.

First, the court held that Thomson Reuters’ headnotes were sufficiently original to be copyrightable, even if they were based on the text of underlying court cases. The court found that “[i]dentifying which words matter and chiseling away the surrounding mass expresses the editor’s idea about what the important point of law from the opinion is,” and therefore has enough of a “creative spark” to overcome the low bar presented by the originality requirement. Similarly, Westlaw’s Key Numbering System was also sufficiently original, as Thomson Reuters had chosen a particular way to organize these legal topics, even if it was not a novel one. The court then turned to actual copying and substantial similarity and granted summary judgment to Thomson Reuters on headnotes that “very closely track[ ] the language of the Bulk Memo question but not the language of the case opinion.” Other headnotes and the Key Numbering System were left for trial.

On fair use, the court granted summary judgment for Thomson Reuters, finding that Ross’s use was not fair. On the first fair use factor, the purpose and character of the use, the court found that Ross’s use was commercial and served the same purpose as Thomson Reuters’ product: a legal research tool. In the parlance of fair use law, Ross’s use was not “transformative.” The court also rejected Ross’s analogy to earlier computer programming cases where intermediate copying was necessary, and rejected Ross’s argument that the copying was allowed because the text of the headnotes was not reproduced in the final product.

The second and third factors (the nature of the material and how much was used) went to Ross, but the fourth factor, the likely effect on the market for the original work, which the court described as “the single most important” of the four factors, went to Thomson Reuters. The court looked at both the current market for the original work and potential derivative ones, and found that Ross’s use impacted both the original market for legal research and the derivative market for data to train AI tools. The court found that it did “not matter whether Thomson Reuters has used the data to train its own legal search tools; the effect on a potential market for AI training data is enough.” Altogether, the four fair use factors favored Thomson Reuters, and it was granted summary judgment on fair use.

Looking beyond this opinion, it is the first decision to substantively address fair use in the context of artificial intelligence, so it will be an important guidepost for the multiple cases pending across the country, many of which involve companies that have used copyrighted works to train generative AI models. However, the opinion has an important caveat: “only non-generative AI” was at issue in the case. Generative AI models use their training data set to create new text, image, video, or other outputs. Non-generative models, by contrast, analyze and classify data based on patterns learned from their training data. The cases involving generative AI may call for a different analysis of fair use factors like transformativeness and the nature of the original works, but the opinion’s commentary on current and potential markets, as well as its willingness to weigh the four factors on summary judgment, may be highly applicable.

In short, this is an important decision but much remains unsettled in the law applying copyright to artificial intelligence. Ballard Spahr lawyers closely monitor developments concerning artificial intelligence and intellectual property, including copyright infringement and fair use. Our AI Legislation and Litigation Tracker provides a comprehensive view of AI-related legislative activities and important information about litigation matters with significant potential impact on clients.

On January 6, 2025, the U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) published a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act (“HIPAA”) Security Rule. The proposed changes, if enacted, would represent the first update to the HIPAA Security Rule since 2013.

The proposed updates, which apply to covered entities and business associates (collectively, “Regulated Entities”), aim to enhance cybersecurity measures within the healthcare sector, addressing the increasing frequency and sophistication of cyberattacks that threaten patient safety and the confidentiality of electronic protected health information (“ePHI”).

Below are some of the key proposals set forth in the NPRM:

  1. Strengthened Security Requirements: HHS proposes eliminating the current distinction between “required” and “addressable” provisions of the Security Rule, thereby requiring compliance with all implementation specifications.  For example, with certain exceptions, ePHI would be required to be encrypted at rest and in transit, and Regulated Entities would no longer be permitted to merely document a rationale for noncompliance with “addressable” implementation specifications.  HHS also proposes new implementation specifications.  As a result, Regulated Entities would be required to adopt and strengthen security standards to ensure robust cybersecurity practices that keep pace with technological advancements and emerging threats, including by deploying anti-malware solutions, removing unnecessary software, disabling unused network ports, implementing multi-factor authentication for systems that handle ePHI, and conducting vulnerability scans every six months and annual penetration tests.
  2. Technology Asset Inventory and Network Map: Regulated Entities would be required to develop and maintain an inventory of their technology assets and create a network map illustrating the movement of ePHI within the Regulated Entities’ systems, which must be updated annually or when significant changes in the organizations’ operations or environment occur.
  3. Enhanced Risk Analyses: Regulated Entities would be required to include greater specificity when conducting a risk analysis, including, among other things:
    • “A review of the technology asset inventory and network map.
    • Identification of all reasonably anticipated threats to the confidentiality, integrity, and availability of ePHI.
    • Identification of potential vulnerabilities and predisposing conditions to the regulated entity’s relevant electronic information systems; [and]
    • An assessment of the risk level for each identified threat and vulnerability, based on the likelihood that each identified threat will exploit the identified vulnerabilities.”
    • The written risk assessment would need to be reviewed, verified, and updated at least every 12 months, with evaluations conducted when there are changes in the environment or operations. A written risk management plan must be maintained and reviewed annually.
  4. Contingency and Incident Response Plans with Notification Procedures: Regulated Entities would be required to implement detailed plans for restoring systems within 72 hours, prioritize the restoration of critical systems, and establish and regularly test written security incident response plans.  In addition, business associates and subcontractors would be required to notify covered entities within 24 hours of activating their contingency plans.
  5. Verification of Business Associates’ Safeguards: Business associates would be required to verify at least once every 12 months that they have deployed technical safeguards required by the Security Rule to protect ePHI through a written analysis of the business associate’s relevant electronic information systems by a subject matter expert and a written certification that the analysis has been performed and is accurate. Based on these written verifications, Regulated Entities would be required to conduct an assessment of the risks posed by new and existing business associate arrangements.

Along with the NPRM, OCR published a fact sheet that provides additional details on the proposed updates.

Public comments to the proposed rule are due on or before March 7, 2025, although it is possible that the change in Administrations later this month could affect the progress of this and other proposed rules. While HHS undertakes the rulemaking, the current Security Rule remains in effect.

The Dutch Data Protection Authority (the “Dutch DPA”) imposed a €4.75 million (approximately $5 million USD) fine on Netflix in connection with a data access investigation that began in 2019.  The investigation arose out of a complaint filed by the nonprofit privacy and digital rights organization noyb, which is run by European privacy campaigner Max Schrems.

In a press release dated December 18, 2024, the Dutch DPA stated that Netflix “did not give customers sufficient information about what the company does with their personal data between 2018 and 2020.”  In particular, the Dutch DPA alleged Netflix’s privacy notice was not clear about the following:

  • the purposes of and the legal basis for collecting and using personal data;
  • which personal data are shared by Netflix with other parties, and why precisely this is done;
  • how long Netflix retains the data; and
  • how Netflix ensures that personal data remain safe when the company transmits them to countries outside Europe.

Furthermore, the Dutch DPA stated that customers did not receive sufficient information when they asked Netflix what data the company collects about them.  According to the Dutch DPA, Netflix has since updated its privacy statement to improve the relevant disclosures.

Netflix has objected to the fine.

On December 3, 2024, the Consumer Financial Protection Bureau (CFPB) published its long anticipated proposed rule aimed at regulating data brokers under the Fair Credit Reporting Act (FCRA).  Although the CFPB’s future is uncertain under the upcoming administration, if implemented, the rule would significantly expand the reach of the FCRA. 

In the accompanying press release, the CFPB stated that its “proposal would ensure data brokers comply with federal law and address critical threats from current data broker practices, including” national security and surveillance risks; criminal exploitation; and violence, stalking, and personal safety threats to law enforcement personnel and domestic violence survivors.  The CFPB expanded on these stated risks in a separate fact sheet.

To address these risks, the proposed rule would treat data brokers like credit bureaus and background check companies: Companies that sell data about income or financial tier, credit history, credit score, or debt payments would be considered consumer reporting agencies required to comply with the FCRA, regardless of how the information is used.  So, the rule would turn data brokers’ disclosure of such information into the communication of consumer reports subject to FCRA’s regulation.  The CFPB did not propose any express exceptions for use of credit header data for fraud prevention, identity verification, compliance with Bank Secrecy Act or Know-Your-Customer requirements, or law enforcement uses.    

If enacted, the proposed rule would significantly impact the data broker industry and restrict the information that data brokers can sell to third parties.  It would also likely increase compliance costs for all data brokers—regardless of the types of data in which they deal.  Unsurprisingly, as with other CFPB initiatives of late, industry reactions were immediate and clear.  For example, the Consumer Data Industry Association (CDIA) expressed concerns that the proposed rule could have “severe unintended consequences for public safety, law enforcement, and the consumer economy.”  Specifically, the CDIA noted that the proposed rule could make “it harder to identify and prevent fraudulent schemes” and that it “may become more difficult for police to identify and track fugitives or locate missing and exploited children.”  It therefore called “on the CFPB to engage in a more collaborative approach with industry stakeholders and lawmakers to address data privacy concerns without compromising the integrity and efficiency of the credit reporting system that has long been the envy of the world.”

In any event, the proposed rule has a 90-day comment period, meaning that the comment period alone will run until March 3, 2025.  Based on the incoming Trump administration’s apparent position toward the CFPB and the FCRA, it seems unlikely that the rule will go into effect as proposed.  But until anything becomes formal, companies that would be impacted by the proposed rule should still consider submitting comments to ensure that their interests are protected.

On December 5, 2024, the Colorado Department of Law adopted amended rules to the Colorado Privacy Act (CPA). 

The Department of Law had released the first set of proposed amended rules—which relate to the interpretative guidance and opinion letter process, biometric identifier consent, and additional requirements for the personal data of minors—on September 13, 2024. The Attorney General discussed the proposed rules at the 2024 Annual Colorado Privacy Summit, sought and received comments from the public, and revised the rules. The adopted rules will now be sent to the Attorney General, who will issue a formal opinion. After that formal opinion is issued, the rules will be filed with the Secretary of State, and they will become effective 30 days after they are published in the state register.

On November 7, 2024, Michigan lawmakers in the Senate introduced the Reproductive Data Privacy Act (“RDPA”), also known as Senate Bill 1082 (SB 1082).  The bill aims to strengthen privacy protections for sensitive reproductive health data, including information on menstrual cycles, fertility, and contraception. 

The RDPA is largely modeled after Washington’s My Health, My Data Act, but it more narrowly applies to organizations that provide reproductive health-related products or services, such as diagnostic testing, fertility apps, or abortion care.  The bill regulates these organizations’ collection and processing of “reproductive health data,” which is defined to mean information that is linked or reasonably linkable to an individual and that identifies the individual’s past, present, or future reproductive health status.  The RDPA includes the following notable provisions:

  1. Consumer Control and Consent:  Entities must notify individuals and obtain explicit consent before collecting or processing their reproductive health data.  Additionally, consumers have the right to access and delete their reproductive health data and to revoke consent for its sharing or sale.
  2. Restrictions on Data Use and Disclosure:  Data sharing with third parties or government agencies is prohibited without a warrant, legal obligation, or the individual’s consent.  The bill bans geofencing practices around reproductive health service locations to prevent tracking or targeting individuals. 
  3. Data Minimization:  The RDPA mandates that information may only be collected for one of the following enumerated purposes:
    • To provide a product, service, or service feature to the individual to whom the reproductive health data pertains when that individual requested the product, service, or service feature by subscribing to, creating an account with, or otherwise contracting with the covered entity or service provider;
    • To initiate, manage, execute, or complete a financial or commercial transaction or to fulfill an order for a specific product or service requested by an individual to whom the reproductive health data pertains, including, but not limited to, associated routine administrative, operational, and account servicing activity such as billing, shipping, storage, and accounting;
    • To comply with an obligation under a law of Michigan or federal law; or
    • To protect public safety or public health.

      Entities are prohibited from retaining reproductive health data for longer than necessary to achieve these purposes.
  4. Enforcement and Penalties:  The Michigan Attorney General would oversee enforcement, and individuals could sue for damages ranging from $100 to $750 per violation.  Additional remedies like injunctions and declaratory relief are also included.

Supporters seek to pass the legislation before the year’s end, prior to President-elect Donald Trump assuming office.  The bill, however, must first pass through the Senate Committee on Housing and Human Services before it can be advanced to the Senate floor for potential amendment and vote.  If approved by the Senate, it would then be referred to the House for further consideration.

On November 12, 2024, the Consumer Financial Protection Bureau (CFPB) released a report examining the carve outs and limitations contained in comprehensive state privacy laws relating to financial institutions.  In an accompanying press release, the CFPB stated that in its assessment, “privacy protections for financial information now lag behind safeguards in other sectors of the economy.”

As the CFPB’s report notes, eighteen states have passed comprehensive privacy laws (nineteen, counting Florida, which has particular thresholds).  However, all of these state privacy laws have some level of carve outs or limitations for financial institutions.  Some state laws have a full entity-level exemption, where financial institutions regulated by the Gramm-Leach-Bliley Act (GLBA) are entirely exempt from the scope of the law.  Under other laws, non-public personal information (NPI) regulated by the GLBA is exempted from the scope of the state privacy law.  Additionally, state privacy laws also contain exemptions for information regulated by the Fair Credit Reporting Act (FCRA).  Accordingly, financial information processed by financial institutions is, in large part, exempted from state privacy laws.

The CFPB report goes on to describe that the federal laws regulating financial information do not contain the same consumer privacy rights that are contained in state privacy laws—rights such as the right to know what data businesses have about them, to correct inaccurate information, or to request the business delete the information about them. 

Importantly, the report’s conclusion is that state policymakers should assess gaps in existing state privacy laws and consider whether their consumers are adequately protected under those laws.  Seen in the context of the recent election, this advice is not surprising.  Indeed, recent CFPB initiatives like the Open Banking Rule—which would afford consumers rights similar to those offered under state privacy laws—could be halted by the new administration through the Congressional Review Act or enjoined through ongoing litigation.  It is therefore expected that the current CFPB leadership would look for ways to secure its achievements through other avenues.

What is notable, however, is how this change would reshape the scope of state privacy laws.  To date, the discussion of financial institution exemptions has focused on entity-level versus data-level carve outs.  No state has adopted a comprehensive privacy law that fully covers NPI already regulated by the GLBA.  But, with the report, the CFPB now argues that the GLBA’s general preemption provision would not prohibit such application.  If a state takes the CFPB up on its suggestion, it would mark a radical shift in privacy law—and operational changes—in the financial world.

On November 14, 2024, the California Privacy Protection Agency (“CPPA”), which is tasked with enforcing the California Consumer Privacy Act (the “CCPA”), announced it settled with two data brokers, Growbots, Inc. and UpLead LLC, for failing to register and pay required fees under Senate Bill 362, also known as the Delete Act. The companies will each pay fines—$35,400 for Growbots and $34,400 for UpLead—and agree to cover the CPPA’s legal costs for violations that occurred between February and July 2024.

The Delete Act, signed into law in 2023, mandates that data brokers register with the CPPA and pay an annual fee to fund the California Data Broker Registry.  The Delete Act imposes fines of $200 per day for failing to register by the deadline.  The registration fees are used to fund efforts like the development of the Data Broker Requests and Opt-Out Platform (“DROP”), which is a first-of-its-kind deletion mechanism that will allow consumers to request data deletion from all brokers with a single action. The CPPA expects that DROP will be available to consumers in 2026 via the CPPA website.
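For context, at the Delete Act’s statutory rate of $200 per day, the $35,400 and $34,400 penalties would correspond to roughly 177 and 172 days of delinquent registration, respectively, which appears consistent with the February-to-July 2024 violation period cited in the settlements.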

These recent settlements, in addition to newly adopted regulations by the CPPA (which further clarify data broker registration requirements under the Delete Act and require data brokers to disclose specific information about their exempt data collection practices), highlight the CPPA’s continued focus on the privacy practices of data brokers.