China’s internet regulatory authority and top prosecutors recently released a series of enforcement actions and cases highlighting enforcement priorities in the data security realm over the last year. In 2025, enforcement under the Chinese Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Regulations on the Security Management of Network Data focused primarily on data security compliance, illegal cross-border transfers, and violations of personal information rights.

Companies were cited for violations related to over-collection of data, noncompliance with network data security rules, improper storage, allowing unrestricted access to sensitive information, and violating consent requirements.

Chinese authorities have stated that enforcement priorities in 2026 will include increased oversight of the “digital sphere,” including the illegal collection and misuse of personal data. Enforcement actions in 2026 are also expected to focus on data leaks and improper network governance, with increased penalties for data breaches.

To avoid becoming the focus of China’s regulatory authorities, companies should prioritize:

  • Ensuring adequate safeguards are in place to protect data in transit and at rest (an illustrative encryption sketch follows this list);
  • Completing cybersecurity classification and grading, where required;
  • Implementing effective security management policies and access policies, including strong password requirements;
  • Completing necessary assessments or certifications where required for cross-border transfers; and
  • Publishing adequate notices disclosing personal information collection and processing practices, and not exceeding those disclosed practices.
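
On the first point, no single encryption standard is prescribed, but authenticated encryption is a common baseline for protecting data at rest. The following is a minimal, illustrative sketch using Node.js’s built-in crypto module (AES-256-GCM); the in-memory key is a placeholder, and a real deployment would load and rotate keys through a KMS or secrets manager.

```typescript
// A minimal sketch of encryption at rest using Node.js's built-in crypto module
// (AES-256-GCM). The in-memory key is a placeholder: production systems would load
// keys from a KMS or secrets manager and handle rotation.
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

const key = randomBytes(32); // placeholder; use managed key material in practice

interface EncryptedRecord {
  iv: string;   // 96-bit nonce, unique per message
  tag: string;  // GCM authentication tag
  data: string; // ciphertext
}

function encryptAtRest(plaintext: string): EncryptedRecord {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("hex"),
    tag: cipher.getAuthTag().toString("hex"),
    data: data.toString("hex"),
  };
}

function decryptAtRest(record: EncryptedRecord): string {
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(record.iv, "hex"));
  decipher.setAuthTag(Buffer.from(record.tag, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(record.data, "hex")),
    decipher.final(),
  ]).toString("utf8");
}
```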

While specific compliance requirements will vary depending on the business, any data collection, storage, or processing occurring in China will be subject to increased scrutiny in 2026, requiring businesses to take a closer look to ensure proper data security is in place.

Importantly, although required safeguards may overlap across different international laws and standards, cross-border transfer and data localization requirements present harder operational challenges for businesses. With enforcement ramping up, businesses should carefully consider their global compliance strategy.

Navigating the 2026 CCPA Updates

As forecasted, effective January 1, 2026, businesses that are subject to the California Consumer Privacy Act (CCPA) must comply with newly updated regulations. For some businesses, complying with these updates will require implementing, or updating, policies and procedures related to, among other things, risk assessments, cybersecurity audits, and the use of Automated Decision-Making Technologies. Businesses should review the updated regulations to determine whether they might be affected and, if so, implement a plan to promptly ensure compliance.

Outlined below are just a few of the most notable CCPA updates businesses should be aware of:

  1. Risk Assessments
    • Businesses that engage in certain processing activities, including selling or sharing personal information, will need to conduct risk assessments. For new processing activities (beginning after January 1, 2026), risk assessments must be conducted before the new activity commences. A business that conducts a risk assessment in a given calendar year must submit an attestation of the risk assessment to the California Privacy Protection Agency (“Agency”) by April 1 of the following year.
    • For activity that occurred prior to January 1, 2026, and continued thereafter, businesses must conduct a risk assessment by December 31, 2027, and provide attestation on or before April 1, 2028.
    • Risk assessments completed under other state laws can demonstrate compliance if they also satisfy the requirements of the CCPA regulations.
  2. Automated Decision-Making Technology (ADMT) Rules
    • The updated regulations impose new requirements for businesses that use ADMT to make “significant decisions” about consumers. Those requirements will take effect in 2027. “Significant decisions” include granting or denying services like financial or lending products, housing, educational admissions or opportunities, job or contracting opportunities and compensation, or healthcare services.
    • Obligations associated with ADMT use include:
      • Providing consumers with pre-use notice of and access to information describing the manner in which ADMT is used and informing them of their associated opt-out and access rights;
      • Offering consumers ADMT opt-out, unless an exception applies;
      • Conducting risk assessments, as applicable; and
      • Updating privacy notices, as applicable.
  3. Cybersecurity Audits
    • The CCPA regulations will require certain businesses to conduct mandatory cybersecurity audits. The scope of the audits contains a long list of specifics, which generally tracks established audit standards such as the NIST Cybersecurity Framework. Businesses will also have to submit annual certifications of completion to the Agency.
    • While the deadlines for submitting certifications begin in 2028, businesses should be aware that implementing compliant cybersecurity programs—which often must include, among other things, incident response management, access controls, data inventory, retention and disposal procedures, and vendor oversight—often requires collaboration across businesses and can be very time consuming.
  4. Broadened Definition of “Sensitive Personal Information”
    • The updated regulations incorporate the statutory definition of “sensitive personal information” and add “personal information collected and analyzed concerning a consumer’s health, sex life, or sexual orientation,” and “personal information of consumers that the business has actual knowledge [or willfully disregards] are less than 16 years of age.”
    • With the broadened definition, businesses should reassess their need for notices, opt-outs, and back-end procedures related to the Right to Limit.
  5. Other Notable Updates
    • Stricter requirements relating to the use of dark patterns, highlighting the need for careful consideration relating to cookie banners and opt-out menus;
    • Additional notice requirements for businesses that disclose personal information collected through augmented or virtual reality devices;
    • Updates to data subject rights procedures; and
    • New transparency requirements, including an in-app privacy policy posting requirement.

Although many of the deadlines outlined above may seem distant, businesses should audit their current processing activities for compliance now rather than discovering potential issues on the eve of the applicable deadlines.

The past year set up a clear clash between federal deregulatory efforts and state-level AI rulemaking, and 2026 is poised to be the year that conflict materializes in earnest.  The Trump Administration signaled a strong preference for scaling back AI-specific rules while exploring avenues to preempt state and local measures, even as a growing number of states moved forward with their own frameworks. In short, 2025 laid the groundwork, and 2026 is likely to deliver the confrontation.

On the federal side, the Administration’s posture included both legislative and policy initiatives aimed at limiting state restrictions on AI. Although a proposed 10-year moratorium on enforcing state AI laws was removed during negotiations over the One Big Beautiful Bill Act, America’s AI Action Plan soon followed, instructing agencies to consider using preemption to curb “burdensome” state AI regulations. In the final weeks of 2025, the White House issued an executive order, Ensuring a National Policy Framework for Artificial Intelligence (the “Executive Order”). The Executive Order directs the U.S. Attorney General to establish an AI Litigation Task Force to challenge state AI laws deemed unconstitutional or preempted and tasks the Administration with developing a national AI legislative framework that would preempt conflicting state rules.

States, however, did not stand still.  California’s SB 53 established a first-in-the-nation set of standardized safety disclosure and governance obligations for developers of frontier AI systems, underscoring state willingness to regulate despite federal headwinds. Colorado’s Anti-Discrimination in AI Law remained intact through the 2025 session and is scheduled to take effect in June 2026, setting a near-term compliance deadline that will shape risk assessments and product planning.  Even traditionally deregulatory states like Texas pursued aggressive enforcement under existing biometric laws against alleged AI-driven facial recognition practices.

Looking ahead, expect 2026 to feature litigation over the scope of preemption, increased enforcement actions from federal agencies, and a push toward a federal legislative framework, alongside continued state innovation in AI governance.  Despite the uncertainty, companies should continue to comply with applicable state AI laws because the Executive Order itself cannot overturn state law. Only Congress and the courts have the power to do so, and until then, state laws remain enforceable. For companies, that means preparing for a two-track reality: monitor and implement state obligations while tracking federal moves that could reshape, narrow, or delay those obligations. The result is likely to be a dynamic, contested compliance environment throughout 2026, rather than quick regulatory convergence.

Over the last few years, businesses, nonprofits, and other website operators have seen thousands of lawsuits and arbitrations filed under the California Invasion of Privacy Act (CIPA) alleging that the use of ubiquitous cookies and pixels on websites violates CIPA’s wiretap and pen register provisions. The California legislature considered curbing that explosion of litigation with SB 690, which was introduced in the 2025 session with some enthusiasm but ultimately stalled.

Thus, although there is some hope of relief as the legislative session picks back up in 2026, organizations are left in the same situation as before—balancing business needs and value with risk mitigation.

SB 690 at a Glance

CIPA was originally enacted in 1967 to address traditional wiretapping and eavesdropping concerns, which at the time primarily involved telephonic communications. It therefore did not contemplate the digital technologies and adtech now used for website marketing. Nonetheless, in recent years, several thousand lawsuits and arbitrations have sought to apply CIPA in precisely that context.

In 2025, California lawmakers advanced SB 690 to modernize CIPA for the internet era. The amendment aims to tamp down lawsuits targeting routine website technologies by adding a broad “commercial business purpose” exemption to CIPA’s wiretap and pen-register provisions, thus clarifying that use of these technologies for business purposes would not trigger CIPA liability. Early versions of the bill included a retroactivity provision that would have applied to all cases pending as of January 1, 2026, but that provision was removed in the Senate amid criticism from consumer advocacy groups.

Why SB 690 Stalled in 2025

Despite unanimous passage in the Senate, SB 690 did not clear the Assembly before the 2025 legislative session adjourned, and it was thus designated a two-year bill. Reports indicate the bill stalled in the Assembly Judiciary process, with lawmakers citing the need for further stakeholder dialogue and competing policy priorities, leaving businesses to face continued litigation through at least 2026.

The primary resistance to the bill came from privacy and consumer groups, who argued that SB 690’s proposed exemption was too broad and would shield opaque tracking practices that CCPA enforcement has not yet addressed. Removal of the bill’s retroactivity provision also reduced the prospect of immediate litigation relief, and plaintiffs’ filings continued apace through the close of the year amid split trial court decisions on CIPA’s scope.

What’s Next for SB 690 in 2026

Notably, it was SB 690’s author and primary sponsor who paused the bill in the Assembly in 2025, citing opposition from consumer privacy advocates and attorneys’ groups. It therefore remains to be seen whether SB 690 will advance at all, at least in its current form, in 2026.

SB 690 is eligible for reconsideration as a two-year bill in the 2026 session, which reconvenes January 5. The last day to introduce bills is February 20, and the final day to pass bills is August 31. Expect policy committee activity in the spring, fiscal deadlines in May, and floor-only periods in late May and in the final August push. If revived, SB 690 is likely to remain prospective only, reinforcing the importance of near-term compliance and risk mitigation for existing site technologies.

What Businesses and Organizations Should Do Now

Until reform is enacted, CIPA suits over pixels, analytics, chatbots, search bar functions, and replay tools will continue, complete with statutory damages of $5,000 per alleged violation and inconsistent early-dismissal outcomes. Further, even if SB 690 eventually passes in its current form, relief would not be coming until at least 2027, if not longer. Organizations therefore need to take steps now to assess their risk.

Risk mitigation for each specific organization will necessarily vary, but it will generally involve assessing practices, identifying any higher risk processing activities, and updating disclosures, consents, and vendor governance as appropriate.
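
One common mitigation step, sketched below, is to gate non-essential pixels, analytics, and session-replay scripts behind affirmative consent so that those technologies never load for visitors who have not agreed. The sketch is illustrative only; the cookie name, loader, and script URL are hypothetical stand-ins for a site’s actual consent management platform.

```typescript
// A minimal sketch of consent-gated loading: non-essential trackers load only after
// affirmative consent. The cookie name and script URL are hypothetical placeholders
// for whatever consent management platform and vendors a site actually uses.
function hasTrackingConsent(): boolean {
  // e.g., a value written by the site's cookie banner when the user opts in
  return document.cookie.split("; ").includes("tracking_consent=granted");
}

function loadTracker(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// Load analytics/pixels only for consenting visitors
if (hasTrackingConsent()) {
  loadTracker("https://example.com/pixel.js"); // placeholder URL
}
```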

On December 19, 2025, New York Governor Kathy Hochul vetoed the New York Health Information Privacy Act (NY HIPA), a health data privacy bill that would have afforded consumer protections to non-HIPAA health data.

Although NY HIPA resembled existing laws, like Washington’s My Health My Data Act, it had several important differences that would have greatly expanded its impact—including by applying to employee data, data held by financial institutions subject to the Gramm-Leach-Bliley Act, and data that had been de-identified in accordance with HIPAA. NY HIPA would have also required regulated entities to maintain a publicly available retention schedule and dispose of an individual’s regulated health information pursuant to that schedule subject to certain regulatory requirements.

Unsurprisingly, NY HIPA was the subject of intense lobbying on both sides of the debate. Ultimately, Governor Hochul stated in her veto memo that the legislation, as written, is too broad, “creating potentially significant uncertainty about the information subject to regulation and compliance challenges.”

NY HIPA could still technically pass into law if two-thirds of the members of each house vote to override the Governor’s veto. While NY HIPA did originally pass with that level of support, the likelihood of overriding a veto is very slim from a historical standpoint.

In many ways, 2025 was a relatively tame year for privacy legislation, and the NY HIPA veto is a fitting conclusion. As we move into 2026, companies should carefully monitor whether states push more aggressively, as well as the emerging fight between the states and federal government over the authority to regulate AI and privacy issues.

On October 13, 2025, California Governor Gavin Newsom vetoed S.B. 7, which would have required human oversight of certain types of employment decisions made solely by automated decision systems (“ADS”). Had Gov. Newsom signed the bill, it would have required California employers using automated systems for actions such as hiring, firing, and disciplining to implement human oversight and explain certain decisions made by AI. The bill would have also required robust notices and granted employees and contractors access rights to data used by ADS.

In his letter notifying the California State Senate of the veto, Gov. Newsom cited concerns that S.B. 7 would have imposed “overly broad restrictions” on employers’ deployment of ADS. For example, the requirements could be interpreted to extend to “innocuous” technologies such as scheduling and workflow management tools. Industry groups opposing the bill also argued it would have carried massive compliance costs, particularly for small businesses.

Gov. Newsom shared the bill author’s concerns about unregulated use of ADS and the importance of affording employees protections with respect to ADS, but wrote that legislatures “should assess the efficacy of [such] regulations to address these concerns.” Still, California employers face restrictions with respect to certain uses of ADS under the regulations recently finalized by the California Privacy Protection Agency.

On September 30, 2025, the California Privacy Protection Agency (CPPA) issued a $1.35 million fine, the largest in the CPPA’s history, against Tractor Supply Company, the nation’s largest rural lifestyle retailer. The fine was issued based on allegations that the company violated its obligations under the California Consumer Privacy Act (CCPA). The CPPA characterized its decision against Tractor Supply as “the first to address the importance of CCPA privacy notices and privacy rights of job applicants.”

Below, we provide additional information on the CPPA’s decision and key takeaways for businesses subject to the CCPA.

Background

The CPPA first opened an investigation into Tractor Supply after it received a complaint from a consumer in Placerville, California. Based on its investigation, the CPPA alleged Tractor Supply:

  • Failed to maintain an adequate privacy policy notifying consumers of their rights;
  • Failed to notify California job applicants of their privacy rights and how to exercise them;
  • Failed to provide consumers with an effective mechanism to opt out of the selling and sharing of their personal information, including through opt-out preference signals such as the Global Privacy Control; and
  • Disclosed personal information to other companies without entering into contracts that contain the requisite privacy protections.

In addition to issuing the record $1.35 million penalty for these violations, the CPPA also required Tractor Supply to implement broad remedial measures, which include, but are not limited to, the following:

  • Updating its privacy notices and notifying employees and job applicants of the updated notices;
  • Modifying the methods it provides for consumers to submit requests to opt out of targeted advertising;
  • Ensuring all required contractual terms are in place with all external recipients of personal information;
  • Recognizing opt-out preference signals like the Global Privacy Control (a brief technical sketch of honoring such signals follows this list);
  • Regularly auditing its tracking technologies; and
  • Designating a compliance officer to certify its compliance for the next four (4) years.
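
On the opt-out preference signal point, the Global Privacy Control specification has participating browsers expose a navigator.globalPrivacyControl property (and send a Sec-GPC: 1 request header). A minimal client-side sketch of honoring the signal might look like the following; the suppression logic is a hypothetical placeholder for a site’s own handling.

```typescript
// Minimal sketch: honoring a Global Privacy Control (GPC) opt-out preference signal
// in the browser. Per the GPC proposal, browsers expose navigator.globalPrivacyControl
// (and send a "Sec-GPC: 1" request header, which can be checked server-side).
// The opt-out handler below is a hypothetical placeholder.
function disableSaleAndSharing(): void {
  // e.g., suppress third-party ad pixels and flag the user record as opted out
  window.localStorage.setItem("ccpa_opt_out", "true");
}

const gpcEnabled =
  (navigator as Navigator & { globalPrivacyControl?: boolean })
    .globalPrivacyControl === true;

if (gpcEnabled) {
  disableSaleAndSharing(); // treat the signal as a valid opt-out of sale/sharing
}
```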

Key Takeaways

This decision underscores several practical points for businesses subject to the CCPA:

  • Employees and job applicants are treated like consumers under the CCPA, so businesses should ensure their CCPA compliance programs adequately cover California workforce members and applicants;
  • A single consumer complaint can trigger a broad investigation; and
  • Investigations from the CPPA may not only result in remediation, but also significant monetary penalties.

On September 23, 2025, the California Privacy Protection Agency (CPPA) announced the approval of final regulations under the California Consumer Privacy Act (CCPA) covering cybersecurity audits, risk assessments, and automated decisionmaking technology (ADMT). The new rules, effective January 1, 2026, introduce significant new compliance obligations for businesses subject to the CCPA/CPRA, with phased deadlines for certain requirements.

Key requirements include:

  • Cybersecurity Audits: Businesses must conduct annual, independent cybersecurity audits if they (1) derive 50% or more of annual revenue from selling or sharing consumers’ personal information, or (2) meet the annual gross revenue threshold in the CCPA and process the personal information of 250,000 or more consumers or the sensitive personal information of 50,000 or more consumers (a simplified sketch of this applicability test appears after this list). Audit certifications are due to the CPPA on a phased schedule: April 1, 2028 (for businesses with over $100 million in revenue), April 1, 2029 (for $50–100 million), and April 1, 2030 (for less than $50 million). Audits must be performed by qualified, objective, independent professionals and must assess a comprehensive set of technical and organizational safeguards, including authentication, encryption, access controls, vulnerability management, incident response, and more. Service providers and contractors must cooperate with the audit process.
  • Risk Assessments: Covered businesses must conduct and document risk assessments before engaging in processing activities that present significant risks to consumers’ privacy or security, such as selling or sharing personal information, processing sensitive personal information, using ADMT for significant decisions, or using personal information to train ADMT. Risk assessment compliance begins January 1, 2026, with attestation and summary submissions due by April 1, 2028. Assessments must document the purpose, categories of data, operational elements, benefits, risks, and mitigation measures, and must be reviewed and updated at least every three years or upon material changes.
  • Automated Decisionmaking Technology (ADMT): The regulations define ADMT as any technology that processes personal information and uses computation to replace or substantially replace human decisionmaking. Businesses using ADMT to make significant decisions about consumers (such as those affecting financial services, housing, employment, or healthcare) must, by January 1, 2027, provide clear pre-use notices, offer consumers the right to opt out, and respond to access requests with meaningful information about the logic, key parameters, and effects of the ADMT. The rules require plain-language explanations, transparency about the role of human involvement, and prohibit retaliation against consumers exercising their rights. Exceptions and specific requirements apply for certain employment and admissions uses.
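
As a simplified illustration of the cybersecurity audit applicability test described above, the sketch below encodes the two prongs. It is illustrative only, and the CCPA gross-revenue threshold is passed in as a flag because that figure is periodically adjusted.

```typescript
// A simplified sketch of the audit-applicability test summarized above. The
// gross-revenue threshold is supplied by the caller because the CCPA's figure
// is adjusted over time.
interface BusinessProfile {
  shareOfRevenueFromSellingOrSharingPI: number; // 0.0-1.0
  meetsCcpaGrossRevenueThreshold: boolean;
  consumersWithPIProcessed: number;
  consumersWithSensitivePIProcessed: number;
}

function mustConductCybersecurityAudit(b: BusinessProfile): boolean {
  // Prong 1: 50% or more of annual revenue from selling/sharing personal information
  const revenueProng = b.shareOfRevenueFromSellingOrSharingPI >= 0.5;
  // Prong 2: meets the CCPA revenue threshold AND processes PI of 250,000+ consumers
  // or sensitive PI of 50,000+ consumers
  const volumeProng =
    b.meetsCcpaGrossRevenueThreshold &&
    (b.consumersWithPIProcessed >= 250_000 ||
      b.consumersWithSensitivePIProcessed >= 50_000);
  return revenueProng || volumeProng;
}
```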

These regulations significantly expand the compliance landscape for California businesses, requiring new documentation, consumer-facing notices, and ongoing governance. Businesses should review their data processing activities, update privacy notices and contracts, and ensure robust audit and risk assessment procedures are in place to meet the new standards.

In a reminder that the FTC’s new enforcement priorities will likely drive additional litigation risk, just days after Disney’s COPPA settlement with the DOJ was announced (discussed below), Disney Worldwide Services and Disney Entertainment Operations, LLC (together, “Disney”) were named as defendants in two class action complaints brought on behalf of putative classes of minors. The first case, S.K. et al. v. Disney Worldwide Servs., Inc., No. 2:25-cv-08410, was filed on September 5, 2025, in the United States District Court for the Central District of California. The second case, captioned Does 1-3 ex rel. Sobalvarro v. Disney Worldwide Servs., was filed on September 9, 2025, in the Superior Court of California for the County of Los Angeles.

The complaints allege that Disney failed to appropriately mark certain videos it uploaded to the YouTube platform as “Made for Kids” between 2020 and September 2025—a designation necessary to ensure that automatic data collection practices are disabled on the platform—thus leading to the unlawful collection of the minors’ data. Plaintiffs in both cases brought causes of action for common law intrusion upon seclusion, invasion of privacy, trespass to chattels, unjust enrichment, and negligence. The California state court plaintiffs brought additional claims for violation of the California Unfair Competition Law and for invasion of privacy in contravention of the California Constitution. Both complaints seek actual, general, special, incidental, statutory, punitive, and consequential damages in excess of $5 million.

Both complaints were filed less than a week after Disney and the DOJ filed a proposed order authorizing a settlement for alleged violations of the Children’s Online Privacy Protection Act (COPPA) arising out of the same conduct alleged in the complaints. The proposed order was filed contemporaneously with the DOJ’s complaint, which the DOJ brought upon notification from the FTC, and requires Disney to pay a $10 million civil penalty, create a “Mandated Audience Designation Program” to ensure that all Disney videos are appropriately marked when uploaded, and submit to ten years of compliance reporting. 

Both the federal court and the state court complaints allege that the proposed settlement would not adequately remedy the putative classes’ injuries.

While FTC settlements always carry the risk of copycat litigation, these new developments further emphasize how serious this risk can be in the privacy field—and especially children’s privacy—and that plaintiffs’ firms that are active in other privacy fields are looking for new areas in which to expand. Given the FTC’s stated change in enforcement priorities, companies need to reassess their positions not just for “traditional” compliance, but also for enforcement and litigation mitigation.

The Food and Drug Administration (FDA) issued final guidance Monday that explains how medical device manufacturers can use a Predetermined Change Control Plan (PCCP) to update AI-enabled device software functions (AI-DSFs) after clearance or approval without submitting a new marketing application for each covered change.

The guidance is a practical how‑to for getting the FDA to preauthorize a playbook for future updates to AI medical software. The FDA calls the playbook a Predetermined Change Control Plan (PCCP).  The applicant submits the PCCP with the 510(k), De Novo, or PMA, and the FDA reviews it along with the device. If the FDA authorizes the PCCP, the company may later make the listed updates without filing a new submission, provided it follows the plan’s steps for data, training, testing, labeling, cybersecurity, and deployment under a quality system. The authorized PCCP becomes part of the device description, so updates must be implemented exactly as specified. If a change is not in the plan, or cannot meet the plan’s methods or acceptance criteria, a new submission will be needed. The guidance is nonbinding and is grounded in FDORA section 515C. It applies to AI‑enabled device software functions and explains what belongs in a PCCP, how the FDA evaluates it, and how users should be informed about updates. Figure 1 on page 18 illustrates the decision path for using an authorized PCCP to implement changes.

What a Compliant PCCP Looks Like:

  • Description of Modifications. List the specific, limited, verifiable changes you intend to make over time (e.g., improved quantitative performance, expanded input compatibility, or performance for a defined subpopulation). Specify whether changes are automatic vs. manual and global vs. local, and how frequently updates may occur. Changes must remain within intended use (and, generally, indications).
  • Modification Protocol. For each planned change, provide (1) data management practices (representative training/tuning/test data; multisite, sequestered test sets; bias‑mitigation strategies and reference‑standard processes); (2) retraining practices (what parts of the model may change; triggers; overfitting controls); (3) performance evaluation (study designs, metrics, acceptance criteria, statistical plans; verification that non‑targeted specs do not degrade); and (4) update procedures (deployment mechanics, user communication, labeling updates, cybersecurity validation, real‑world monitoring, and rollback criteria). A traceability table should map each proposed change to its supporting methods. (A simplified sketch of such a deployment gate appears after this list.)
  • Impact Assessment. Analyze benefits and risks—including risks of harm and unintended bias—for each change individually and in combination, and explain how the protocol’s verification/validation and mitigations ensure continued safety and effectiveness across intended populations and environments.
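
To make the Modification Protocol’s performance evaluation concrete, the sketch below shows one way a deployment gate might be encoded: a candidate update ships only if each metric meets its acceptance criterion and no non‑targeted specification degrades beyond a tolerance. The metric names, values, and tolerance are hypothetical and are not drawn from the guidance.

```typescript
// A minimal sketch of a Modification Protocol deployment gate: the candidate model
// deploys only if every metric meets its acceptance criterion AND no non-targeted
// specification regresses beyond a tolerance. All names and numbers are hypothetical.
interface MetricResult {
  name: string;
  baseline: number;            // authorized model's performance on the sequestered test set
  candidate: number;           // retrained model's performance on the same set
  targeted: boolean;           // is this the metric the update is meant to improve?
  acceptanceCriterion: number; // minimum acceptable value per the authorized PCCP
}

function canDeploy(results: MetricResult[], tolerance = 0.01): boolean {
  return results.every((m) => {
    if (m.candidate < m.acceptanceCriterion) return false; // fails the protocol
    // non-targeted specs must not degrade beyond the stated tolerance
    if (!m.targeted && m.candidate < m.baseline - tolerance) return false;
    return true;
  });
}

// Example: an alarm-accuracy update that must not degrade sensitivity
const ok = canDeploy([
  { name: "alarm_accuracy", baseline: 0.90, candidate: 0.94, targeted: true, acceptanceCriterion: 0.92 },
  { name: "sensitivity", baseline: 0.96, candidate: 0.955, targeted: false, acceptanceCriterion: 0.95 },
]);
// ok === true: criteria met and the non-targeted metric stays within tolerance
```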

Labeling and Transparency Requirements

The FDA may require labeling that informs users that the device contains machine learning and has an authorized PCCP; as updates roll out, labeling should summarize the implemented change, the data/evidence supporting it, impacted inputs/outputs, and how users will be informed (e.g., release notes/version history). Public‑facing device summaries (SSED/510(k) Summary/De Novo decision summary) should include a high‑level PCCP description. New unique device identifiers (UDIs) are required when a new version/model is created.

Cybersecurity and Post-Market Monitoring

Update procedures should cover cybersecurity risk management and validation; describe user communications; and outline real‑world performance monitoring (including triggers, frequency, and rollback plans) to detect adverse events, drifts, or subpopulation performance changes.
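
As one illustration of such monitoring, the sketch below compares a rolling window of field performance estimates to the validated baseline and flags a rollback review when drift exceeds a predefined trigger; the metric, window, and threshold are hypothetical.

```typescript
// A minimal sketch of real-world performance monitoring: compare a rolling window of
// production metrics to the validated baseline and trigger a rollback review when
// drift exceeds a predefined threshold. Window size and thresholds are hypothetical.
function meanOf(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function checkDrift(
  recentScores: number[],  // e.g., weekly sensitivity estimates from the field
  validatedBaseline: number,
  maxDrop = 0.03,          // rollback trigger defined in the update procedures
): "ok" | "rollback-review" {
  const current = meanOf(recentScores);
  return validatedBaseline - current > maxDrop ? "rollback-review" : "ok";
}

// Example: baseline sensitivity 0.95; recent field estimates average 0.91
console.log(checkDrift([0.92, 0.90, 0.91], 0.95)); // "rollback-review"
```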

Quality‑System Expectations

All implementation under a PCCP must occur within the manufacturer’s quality system. The guidance reiterates record‑retention and design‑control duties and notes the FDA’s 2024 rule aligning Part 820 with ISO 13485 effective February 2, 2026 (QMSR). For PMAs, the FDA must deny approval if manufacturing controls do not conform; for 510(k)s, clearance may be withheld if QSR failures pose serious risk.

Using (and Not Misusing) a PCCP

The flowchart on page 18 (Figure 1) depicts the logic: If a contemplated modification is (1) listed in the PCCP’s Description of Modifications and (2) implemented exactly per the Protocol’s methods/specifications, document it under the Quality Management System (QMS)—no new submission. Otherwise, evaluate it under the FDA’s device‑modification rules. In most cases, a new submission will be required. Deviations from an authorized PCCP may render a device adulterated/misbranded.
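
That decision path reduces to a simple conditional, sketched below for illustration (the outcome labels are ours, not the FDA’s).

```typescript
// A minimal sketch of the Figure 1 decision path for a contemplated modification.
// Outcome labels are illustrative shorthand, not the FDA's terminology.
function pccpDecision(
  listedInDescriptionOfModifications: boolean,
  implementedExactlyPerProtocol: boolean,
): "document under QMS - no new submission" | "evaluate under device-modification rules" {
  return listedInDescriptionOfModifications && implementedExactlyPerProtocol
    ? "document under QMS - no new submission"
    : "evaluate under device-modification rules";
}
```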

Examples: What’s In vs. Out

Appendix B (pp. 38–45) walks through six scenarios: e.g., retraining a patient‑monitoring model to reduce false alarms (in‑scope) vs. adding a new predictive claim (out‑of‑scope); extending a skin‑lesion tool to additional smartphones meeting minimum camera specs (in‑scope) vs. adding thermography or turning the product patient‑facing (out‑of‑scope); and similar analyses for ventilator‑setting software, ultrasound acquisition aids, X‑ray triage, and a device‑led combination product.

What Companies Should Do Now

  1. Decide If a PCCP Fits the Product Roadmap. Identify foreseeable AI model updates (performance, inputs, defined subpopulations) that can be specified, validated, and governed in advance.
  2. Design the Protocol First. Build out data pipelines (representative, sequestered test sets; reference‑standard methods), retraining triggers, acceptance criteria, and cybersecurity validation.
  3. Plan Labeling and User Communications. Draft version histories, release‑note templates, and instructions that reflect how updates may change device behavior; prepare for UDI/version control impacts.
  4. Align QMS and Documentation. Ensure design controls, change control, bias‑monitoring, and record‑retention processes can support PCCP implementation; prepare for the ISO‑13485‑aligned QMSR effective February 2, 2026.
  5. Engage the FDA Early. Use the Q‑Submission program to vet scope, methods, and evidence, especially for higher‑risk devices, automatic/local adaptations, and device‑led combination products.
  6. Think Predicate Strategy. If you will rely on a predicate with a PCCP, be prepared to compare to the predicate’s pre‑PCCP version; consider timing of subsequent submissions so your updated device can become a predicate.

The lawyers in Ballard Spahr’s multidisciplinary Health Care Industry, Technology Industry, and Life Sciences Industry teams advise med‑tech, digital health, AI, and life sciences companies on regulatory compliance and the range of issues related to federal and state health care laws and regulations. We help clients develop and maintain the corporate infrastructure required to address these laws and regulations as they apply to telemedicine and other digital health products and services. We are monitoring the FDA’s implementation and related federal and state activity. Please reach out to your Ballard Spahr contact with questions.