On April 22, 2026, the House Energy & Commerce Committee released the “Securing and Establishing Consumer Uniform Rights and Enforcement over Data Act” (the “SECURE Data Act”). The SECURE Data Act seeks to establish a comprehensive federal framework for consumer privacy rights and the protection of personal data. Subject to certain exemptions, the SECURE Data Act applies to businesses subject to the FTC Act, or common carriers subject to Title II of the Communications Act of 1934, that either (a) collect and process the personal data of more than 200,000 consumers annually and have annual gross revenue of $25 million or more, or (b) collect and process the personal data of 100,000 consumers annually and “derive[] 25 percent or more of the[ir] annual gross revenue . . . from the sale of such personal data.” If enacted, the SECURE Data Act’s framework would require operational changes for many businesses, including those already complying with state privacy laws. Below is an overview of several material provisions of the SECURE Data Act.

Consumer Privacy Rights

Section 2 of the SECURE Data Act grants consumers the right to access, correct, delete, and obtain a copy of their personal data. It further grants consumers the right to opt out of the processing of their personal data for the purposes of targeted advertising, the sale of their personal data, and “[r]eliance on profiling to make a decision that had a legal or similarly significant effect on the consumer.” Controllers must establish and disclose in a privacy notice the means by which a consumer may submit a request to exercise these rights. 

Further, the SECURE Data Act prohibits controllers from processing sensitive data of a consumer without first obtaining the consumer’s consent.

Controller Data Use and Minimization Obligations

Section 3 of the SECURE Data Act requires controllers to provide a privacy notice to consumers that identifies, among other things, “[e]ach category of personal data processed by the controller,” “[e]ach purpose for processing personal data,” and “[e]ach category of personal data the controller shares with any other controller or any governmental entity.” Controllers also are required to disclose to consumers the sale of their personal data.

Section 3 further requires controllers to limit the collection of personal data to what is “adequate, relevant, and reasonably necessary” in relation to the controller’s disclosed data processing purposes. The SECURE Data Act also restricts the processing of personal data for purposes beyond those originally disclosed unless the controller first obtains the consumer’s consent.

State Preemption

The SECURE Data Act preempts all state laws that “relate[] to the provisions of this Act.” The SECURE Data Act, however, permits state attorneys general to bring civil actions on behalf of their residents in federal court to enjoin violations of the act, enforce compliance with the act, and seek damages and equitable relief.

Key Takeaways

The SECURE Data Act, if enacted, would represent a significant shift in the U.S. data privacy landscape by establishing a single federal standard that preempts the current patchwork of state privacy laws. Businesses that have already invested in compliance with state frameworks such as the California Consumer Privacy Act, as amended by the California Privacy Rights Act, should evaluate whether their existing programs satisfy the SECURE Data Act’s requirements, particularly with respect to its data broker registration requirement, data use and minimization obligations, and consumer rights provisions.

A recent decision from the Northern District of California reminds corporate defendants in internet tracking cases that strategies to defeat class certification based on individualized issues can be just as critical as merits-based defenses.

In In re Meta Pixel Tax Filing Cases, No. 22-cv-07557-PCP (N.D. Cal. Mar. 30, 2026), a group of plaintiffs sought to certify classes of individuals who visited tax-preparation websites where the Meta Pixel was deployed, alleging that user data—including URLs, browsing behavior, and potentially sensitive financial information—was transmitted to Meta in violation of the California Invasion of Privacy Act (CIPA), among other statutes. Plaintiffs’ original complaint defined the class to include individuals whose “tax filing information” was collected. But in their certification motion, plaintiffs sought to define the class as anyone whose data from visiting the websites appeared in Meta’s internal data tables—a significantly broader group.
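For readers less familiar with the mechanics, the sketch below illustrates, in generic terms with invented parameter names and an invented endpoint rather than Meta’s actual Pixel API, how a tracking pixel can carry the visited URL, and any sensitive data embedded in it, to a third party’s servers:

```typescript
// Illustrative sketch only: the parameter names and endpoint are
// hypothetical and do not reflect Meta's actual Pixel API. A tracking
// pixel typically fires a request from the visited page that carries
// the page URL (and any sensitive path or query data) to a third party.
function firePixelSketch(trackerId: string): void {
  const payload = new URLSearchParams({
    id: trackerId,
    pageUrl: window.location.href, // e.g. ".../refund-estimate?filingStatus=..."
    event: "PageView",
  });
  // The browser requests a 1x1 image; the data rides along in the URL.
  const img = new Image(1, 1);
  img.src = `https://tracker.example.com/collect?${payload.toString()}`;
}
```

Because the transmitted record contains whatever the page URL happened to include, whether a given visitor’s “tax filing information” was actually collected can vary page by page and visit by visit.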

The court held that by broadening the class, plaintiffs swept in putative class members whose claims were likely barred by CIPA’s one-year statute of limitations. Under American Pipe, class action filings toll the statute of limitations only for individuals who fall within the original class definition. Because the expanded classes included individuals from whom no tax-filing data was allegedly collected, those individuals were not entitled to tolling and their claims could be time-barred. Critically, resolving whether each class member fell within the original definition would require individualized inquiries—potentially a line-by-line review of terabytes of data—that would overwhelm common questions and defeat predominance under Rule 23(b)(3).

For companies facing internet tracking litigation, this decision underscores the importance of using discovery not only to support technical defenses but also to highlight individualized issues that might defeat class certification. Pay close attention to how plaintiffs define their proposed classes—particularly when definitions shift from the complaint to the certification stage. Expansions may create tolling gaps and undercut commonality arguments. Class certification is not a foregone conclusion in tracking technology cases, and rigorous attention to procedural requirements can yield significant results for defendants.

On April 7, 2026, the Alabama legislature unanimously passed House Bill 351, the Alabama Personal Data Protection Act, sending it to Governor Kay Ivey for approval. The bill cleared the Alabama House 104-0 and the Alabama Senate 34-0, and if Governor Ivey signs the bill, Alabama will join the growing list of states that have enacted a comprehensive consumer privacy statute. If enacted, the law would take effect on May 1, 2027.

On its surface, the bill largely follows the Virginia-model framework, setting out core consumer rights, AG-exclusive enforcement with no private right of action, and a 45-day cure period. The Alabama bill, however, differs in a number of key respects.

1. Low Applicability Threshold

The Act sets out one of the lowest data thresholds in the country. Specifically, the law applies to entities that control or process the personal data of more than 25,000 Alabama consumers. Separately, the law applies to any business that earns at least 25% of its revenue from selling personal data, regardless of consumer count.

2. Definition of “Sale”

The Act defines a “sale” as the exchange of personal data for monetary or other valuable consideration where the controller receives a material benefit and the third party is unrestricted in its use. This definition is narrower than the CCPA’s but broader than those of states such as Virginia and Iowa, which limit “sale” to monetary consideration. More importantly, the Act carves out two exceptions for data transfers that appear in no other state law: disclosures for “providing analytics services” and for “providing marketing services solely to the controller.”

First, if a business shares consumer data with a third-party analytics provider, that transfer is not considered a “sale,” even if the analytics company keeps and uses the data. Second, if a business shares consumer data with a third party that provides marketing services back to that business, such as a firm running targeted ad campaigns on the business’s behalf, that transfer is also excluded. The result is that a large volume of data sharing that would give consumers opt-out rights in states like California, Colorado, or Connecticut falls entirely outside the scope of Alabama’s Personal Data Protection Act.

3. Exemptions

Entity Exemptions: Businesses with fewer than 500 employees and nonprofits with fewer than 100 employees are exempt, provided they do not engage in the sale of personal data. The Act also exempts defined political organizations, sidestepping an issue that has derailed privacy legislation in other states such as Maine.

Data Exemptions: The Act exempts data already governed by federal law, as well as HR and B2B data. Specifically, the following categories of federally regulated data are carved out:

• HIPAA-regulated health data
• FCRA-covered consumer reports
• DPPA-protected motor vehicle records
• FERPA-covered education records
• Farm Credit Act data
• Airline Deregulation Act data

Children’s Data: Alabama sets the “known child” threshold at under 13 and treats COPPA compliance as sufficient for parental consent obligations under the Act. Consent is required for targeted advertising or sale of data for consumers ages 13 to 15, but, unlike Colorado, Connecticut, and Virginia, which have added heightened protections for minors beyond the COPPA baseline, the Alabama Act stops there.

4. Enforcement Framework

The Act sets out a lighter compliance burden: it does not require data protection impact assessments or recognition of universal opt-out signals, and its cure period is permanent. Under Alabama’s law, there will always be a chance to fix violations before facing enforcement.

The Act also does not require opt-outs when targeted ads are based on pseudonymous data—such as alphanumeric mobile device identifiers—as long as that data is stored separately from identifiable information. Most state privacy laws require opt-outs for behavioral targeting regardless of pseudonymity; Alabama joins only Kentucky, Iowa, and Tennessee in creating this gap. For the ad-tech industry, this is a welcome carveout; for consumer advocates, it is one of the bill’s biggest loopholes.
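What “stored separately” means in practice is likely to be contested. The sketch below, using invented names and assuming a simple two-store design, illustrates the kind of separation the carveout appears to contemplate:

```typescript
// Hypothetical illustration of the separation the Alabama carveout
// appears to contemplate: ad-targeting data is keyed only by a
// pseudonymous device identifier, while the mapping back to an
// identifiable person lives in a separate, access-restricted store.
// All names here are invented for illustration.
interface PseudonymousProfile {
  deviceId: string;           // e.g., alphanumeric mobile ad identifier
  interestSegments: string[]; // behavioral data used for ad targeting
}

interface IdentityRecord {
  deviceId: string; // join key, held apart from the targeting store
  name: string;
  email: string;
}

const targetingStore = new Map<string, PseudonymousProfile>();
const identityStore = new Map<string, IdentityRecord>(); // separately controlled

// Ad selection reads only the pseudonymous store and never joins
// against identity data.
function segmentsForDevice(deviceId: string): string[] {
  return targetingStore.get(deviceId)?.interestSegments ?? [];
}
```

Whether a given architecture keeps the two stores “separate” enough to qualify will presumably turn on access controls and on whether the business ever performs that join in practice.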

Lastly, civil penalties are capped at $15,000 per violation, making this one of the softest enforcement postures in the state privacy landscape.

5. Industry and Advocacy Response

Consumer Reports has urged Governor Ivey to veto the bill, calling it a “lowest-common-denominator approach to privacy” riddled with loopholes, including, but not limited to, the weak “sale” and “targeted advertising” definitions, the lack of universal opt-out or authorized agent provisions, and the pseudonymous data gap. On the other hand, the bill’s sponsor, Representative Mike Shaw, has framed it as a practical approach shaped by two years of collaboration with the attorney general’s office.

6. What Businesses Should Do Now

Companies that assumed they were too small for state privacy law should take a closer look. The 25,000-consumer threshold is one of the lowest in the country, and businesses with any meaningful contact with Alabama residents may well be covered. The separate 25%-of-revenue trigger could also sweep in niche data brokers with relatively few Alabama contacts. Before May 1, 2027, companies that touch consumer data should evaluate whether they cross the 25,000-consumer line, whether their data-sharing arrangements genuinely fit within the analytics and marketing carveouts rather than relying on loopholes that may not hold up under AG scrutiny, and whether their pseudonymous data practices are truly pseudonymous enough to qualify for the targeted-advertising gap. The Act’s enforcement posture is lighter than most states, but $15,000-per-violation penalties still accumulate quickly.

When the CCPA was first enacted, it seemed clear that its private right of action would be limited to traditional data breaches. Over the past two years, however, some courts have called this interpretation into question by extending the CCPA’s private right of action beyond the traditional breach scenario to alleged privacy violations. A recent holding from the Northern District of California may signal that more of those claims will be tacked onto the wiretap cases already flooding dockets.

In many ways, Allison v. PHH Mortgage is a fairly standard website tracking case predicated on allegations that tracking devices on a business’s website disclosed users’ personal information without their knowledge or consent. However, in addition to CIPA, ECPA, and the usual accompanying claims, the plaintiffs also brought a claim under the CCPA. On March 27, 2026, the Northern District of California denied PHH Mortgage’s motion to dismiss the CCPA claim, finding that the express language of the statute does not limit private rights of action to traditional data breaches. The court held that “[n]othing in the plain language of the provision limits its application to data breaches by third parties.” Instead, the court held that the CCPA’s private right of action covers unauthorized disclosure of personal information regardless of whether the disclosure was intentional or negligent, and regardless of whether it was made by a third party or the business’s own agents.

Although earlier cases such as Shah v. Capital One Financial Corp. and M.G. v. Therapymatch Inc. reached similar outcomes, the Allison holding shows that courts continue to broaden the scope of the CCPA’s private right of action, and that they are doing so in increasingly reasoned opinions. Businesses with an online presence should audit their use of third-party tracking technologies and privacy disclosures now to help ensure privacy compliance and to make conscious decisions about risk moving forward.

On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence. This Framework contains a sweeping set of legislative recommendations intended to establish a coherent, nationally unified approach to AI governance. While the Framework does not itself create binding legal obligations, it is likely to shape federal AI legislation in the months and years ahead. This post summarizes the Framework’s key areas of focus and considers what its influence could mean for the current state regulatory landscape.

1. Protecting Children and Empowering Parents

The Framework recommends that Congress establish privacy protections and age-verification requirements for AI services likely to be accessed by children, including providing parents with tools to manage their children’s privacy settings, screen time, and content exposure. The Framework also urges Congress to require AI platforms to implement features that reduce the risks of sexual exploitation and self-harm to minors and to continue enforcing prohibitions on nonconsensual disclosures of intimate depictions. Notably, the Framework recommends that any federal legislation should not preempt states from enforcing their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material. It also contemplates strengthening existing state-level restrictions on the use of children’s data for training AI models and targeted advertising.

2. Safeguarding and Strengthening American Communities

The Framework’s second goal focuses on enabling continued growth of AI infrastructure while protecting communities from associated harms. It recommends streamlining federal permitting for the construction and operation of AI facilities and supports AI developers’ ability to deploy on-site power generation, while also protecting residential ratepayers from increased energy costs related to AI data centers, providing AI resources to small businesses, and augmenting law enforcement tools to combat AI-enabled impersonation scams and fraud.

3. Respecting Intellectual Property Rights and Supporting Creators

The Framework recommends that Congress provide protections for individuals affected by the unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes, while exempting parody, satire, news reporting, and other expressive works protected by the First Amendment. The Framework also recommends that Congress consider enabling collective licensing frameworks that would allow rights holders to negotiate compensation from AI providers.

4. Preventing Censorship and Protecting Free Speech

The Framework recommends that Congress take action to prevent the federal government from coercing AI providers to suppress or alter content based on partisan or ideological agendas and establish mechanisms for seeking redress where federal agencies attempt to censor expression on AI platforms.

5. Enabling Innovation and Ensuring American AI Dominance

The Administration recommends establishing regulatory sandboxes to support AI development and deployment, including making federal datasets accessible in AI-ready formats for use in model training. Significantly, the Framework expressly recommends against creating any new federal rulemaking body to regulate AI, calling instead for AI to be governed through existing regulatory agencies with subject-matter expertise and industry-led standards.

6. Educating Americans and Developing an AI-Ready Workforce

The Framework recommends that Congress incorporate AI training into existing education and workforce development programs, expand federal efforts to study trends in AI, and bolster capabilities at land-grant institutions to provide technical assistance, launch demonstration projects, and develop youth-centered AI programs.

7. Establishing a Federal Policy Framework and Preempting State AI Laws

The Framework’s most consequential section for the current regulatory landscape is its recommendation for federal preemption of state AI laws. The Administration recommends that Congress preempt state AI laws that “impose undue burdens,” with the stated goal of establishing a single, minimally burdensome national standard rather than fifty discordant ones.

The Framework does, however, carve out several categories of state law from preemption. States would retain their powers to enforce generally applicable laws against AI developers and users, exercise zoning authority, and regulate states’ own uses of AI for law enforcement or other public services. Outside of these limited carve-outs, the Framework recommends that states not be permitted to regulate AI development, penalize AI developers for third-party unlawful conduct involving their models, or burden the use of AI for activities that would be lawful if performed without AI.

Several states have already taken action to regulate AI development and deployment. Examples include Colorado’s AI Act, which is set to take effect later in 2026, and California’s amendments to the California Consumer Privacy Act regulating automated decision-making technologies. The Framework’s interaction with these laws will depend heavily on how Congress translates the Administration’s recommendations into legislation and how broadly any preemption provision is drawn. If broad preemption language is adopted to prohibit state regulation of “AI development,” these and similar statutes could be rendered unenforceable.

Though the Framework provides insight into the Administration’s priorities and indicates a clear direction for future AI legislation, businesses should continue to closely monitor both state and federal legislative developments moving forward.

On March 20, 2026, Oklahoma’s governor signed S.B. 546, making Oklahoma the latest state to enact a comprehensive state privacy law. The law, effective January 1, 2027, applies to organizations doing business in Oklahoma or targeting Oklahoma residents that either (i) process the personal data of 100,000 Oklahoma consumers or (ii) process the personal data of 25,000 Oklahoma consumers and derive more than half of their revenue from selling personal data.

The law imposes notice, consumer rights, and vendor management obligations similar to those set forth in many other state comprehensive privacy laws. For example, under the law, Oklahomans can request to access, correct, delete, and obtain copies of their personal data, as well as opt out of the sale of their personal data and certain targeted advertising practices.

There are, however, some notable differences between Oklahoma’s law and other state privacy laws. Unlike the approach adopted by most other states, Oklahoma narrowly defines “sale” as exchanges of personal data involving monetary consideration, while other states more broadly define sales to include exchanges of personal data for any valuable consideration. Additionally, Oklahoma, similar to Minnesota, has adopted a definition of “biometric data” that includes information generated from photographs, audio, and video when that data is used to identify a specific individual. In contrast, most other states with comprehensive privacy laws expressly exclude this type of information from their definitions of biometric data.

The law will be enforced exclusively by the Oklahoma Attorney General. If an entity cures a violation within 30 days of receiving a notice of violation from the Oklahoma Attorney General, the Attorney General will not bring a formal action.

With the Colorado legislative session in its waning days, many have been eagerly awaiting proposed amendments to the Colorado AI Act. Absent an amendment, the Colorado AI Act will go into effect as-is on June 30, 2026. This week, the AI Policy Working Group (“Working Group”) released its proposed bill. The Working Group’s proposed framework would still need to be turned into a formal bill, introduced, and passed by the legislature before taking effect.

In connection with its release, Governor Polis expressed his support, stating he was “very grateful to the hardworking members of the Colorado AI Policy Working Group that have reached a unanimous agreement on AI policy to protect consumers and support innovation in our state.”

Some members of the Working Group, however, were less enthusiastic about the proposal even though it was advanced unanimously. For example, one of the original Colorado AI Act sponsors, Rep. Brianne Titone (D) of Arvada, stated, “while the voting members did agree, there were many caveats to their ‘yes’ votes. It’s a meaningful step forward, but only if the proposed bill can stay on this trajectory.”

Substantively, the Working Group’s proposal limits the scope to automated decision-making technology that processes personal data and takes a more streamlined approach for AI deployers than the current version of the Colorado AI Act, but it also scales back some exemptions. The new approach will almost certainly be the subject of heavy debate in the Colorado legislature.

On the national front, on March 18, 2026, Sen. Marsha Blackburn (R-TN) released a discussion draft intended to spark congressional negotiations on a federal AI framework that prioritizes children’s online safety and creators’ copyright and publicity interests. The draft folds together provisions drawn from the Kids Online Safety Act (KOSA) and earlier Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES) proposals. In doing so, it also proposes requirements such as age verification, chatbot disclosures, provenance/watermarking standards, third-party bias audits, and a private right of action for certain harms to children.

The draft is explicitly a “discussion draft” intended to provide a negotiating position and to harmonize various existing proposals, so any expectations of quick passage of a federal bill should be curbed. But, given the federal government’s focus on preempting state laws (like the Colorado AI Act), the timing of Blackburn’s announcement highlights the upcoming clash between federal and state efforts to regulate the quickly advancing use of AI. At least for the foreseeable future, companies will need to keep an eye on these inevitable changes.

Following the release of the Trump Administration’s new National Cyber Strategy, National Cyber Director Sean Cairncross noted in a virtual interview that the administration is considering changes to the existing cyber incident reporting rules previously promulgated by the Cybersecurity and Infrastructure Security Agency (CISA). According to Cairncross, the administration wants to ensure the rules “make[] sense for industry” while still providing the government with actionable threat intelligence.

Federal agencies, including CISA, are still gathering feedback on proposals for changes to the rules, and the administration has not yet committed to any specific changes. Still, given Cairncross’s recent comments, companies should expect to see changes to the rules in the near future and be prepared to promptly conform their existing incident response plans and reporting protocols accordingly.

In the span of just a couple of days, the California Privacy Protection Agency (CalPrivacy) announced two significant privacy enforcement actions, highlighting the increasing scrutiny of companies’ handling of personal data. These actions underscore the agency’s commitment to ensuring that businesses comply with privacy laws designed to protect individuals’ rights, particularly with respect to transparency and ease of data control for consumers. The cases involve a youth sports media company and the automotive giant Ford, both of which were alleged to have engaged in practices that violated consumers’ opt-out rights.

In the action against PlayOn Sports, CalPrivacy took particular issue with the fact that PlayOn directed users to opt out through the Network Advertising Initiative and the Digital Advertising Alliance as opposed to providing its own opt-out mechanism. CalPrivacy also alleged a failure to recognize opt-out signals and insufficient privacy notices. In its public announcement, CalPrivacy’s head of enforcement stated that “[s]tudents trying to go to prom or a high school football game shouldn’t have to leave their privacy rights at the door.” PlayOn was fined $1.1 million and agreed to modify its practices.
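For context on what recognizing opt-out signals involves technically, below is a minimal sketch of detecting the Global Privacy Control (GPC) signal in the browser; the applyOptOut handler is a hypothetical placeholder for a business’s own opt-out logic:

```typescript
// Minimal sketch: client-side detection of the Global Privacy Control
// (GPC) signal, one of the opt-out preference signals California's
// regulations require businesses to honor. Server-side, the same signal
// arrives as the "Sec-GPC: 1" request header.
declare global {
  interface Navigator {
    // Draft-spec property; not yet in all TypeScript lib definitions.
    globalPrivacyControl?: boolean;
  }
}

function applyOptOut(): void {
  // Hypothetical placeholder: suppress sale/sharing of this visitor's
  // personal data, e.g., by withholding third-party ad-tech tags.
}

if (navigator.globalPrivacyControl === true) {
  applyOptOut();
}

export {};
```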

In a separate action, CalPrivacy alleged that Ford added unnecessary friction to the opt-out process, making it cumbersome for consumers to exercise their right by requiring email verification. The agency acknowledged that Ford “didn’t intend” to require consumers to verify their identities, but it stated that the action shows it “will pursue violations regardless of intent.” As part of the settlement, Ford will pay a fine and has committed to streamlining its opt-out procedures.

These enforcement actions serve as an important reminder that regulators remain intensely focused on the public-facing aspects of privacy regimes, especially the granular details of opt-out mechanisms. Companies should review their processes carefully.

A new bill introduced in Connecticut—Connecticut Senate Bill 117, An Act Concerning Breaches of Security Involving Electronic Personal Information—would create mandatory forensic examination requirements for entities that experience a “massive breach of security,” defined as a data breach affecting at least 100,000 Connecticut residents, and would impose substantial penalties for noncompliance.

SB 117 would require entities that experience a “massive breach of security” to:

• Immediately retain a qualified third-party forensic examiner to conduct a forensic examination of the computer or computer system that was the subject of the data breach and to prepare a detailed forensic report disclosing how the breach occurred and its root causes; and
• Submit the detailed forensic report to the Connecticut Attorney General within 90 days of discovering the breach.

Entities that fail to comply would face civil penalties of $100,000 for small businesses and $500,000 for other entities.

SB 117 would grant the Connecticut Attorney General authority to retain a qualified third party to perform the forensic examination and prepare the forensic report if an entity fails to do so. Either way, the entity that experiences the massive breach bears the cost of the examination and report, whether it retains the third party itself or the Connecticut Attorney General retains one on its behalf.

If enacted, Connecticut would be the first state to impose automatic forensic examination and forensic reporting requirements triggered by a numerical threshold. The bill also raises serious issues regarding the disclosure of confidential, proprietary, and privileged information.

In any event, given the scale of the potential penalties and the mandatory nature of the new requirements, entities that collect, store, or process personal information of Connecticut residents should closely monitor SB 117’s progress through the General Assembly. If it passes, companies should establish protocols for engaging qualified third-party forensic examiners immediately upon discovery of a massive data breach and ensure their incident response plans accommodate the 90-day reporting deadline to the Connecticut Attorney General.