The California Privacy Protection Agency (CPPA) recently published two new sets of draft regulations addressing a range of cutting-edge data protection issues. Although the Agency has not officially started the formal rulemaking process, the Draft Cybersecurity Audit Regulations and the Draft Risk Assessment Regulations will serve as the foundation for the process moving forward. Discussion of the draft regulations will be a central topic of the Agency’s upcoming September 8th meeting.

Among the noteworthy aspects of the draft regulations are (1) a proposed definition of “artificial intelligence” that differentiates the technology from automated decision-making; (2) transparency obligations for companies that train AI to be used by consumers or other businesses; and (3) a significant list of potential harms for businesses to consider when conducting risk assessments.

The Draft Cybersecurity Audit Regulations make both modifications and additions to the existing California Consumer Privacy Act (“CCPA”) regulations. At a high level, the draft regulations: 

  • Outline the requirement for annual cybersecurity audits for businesses “whose processing of consumers’ personal information presents significant risk to consumers’ security”;
  • Outline potential standards used to determine when processing poses a “significant risk”;
  • Propose options specifying the scope and requirements of cybersecurity audits; and
  • Propose new mandatory contractual terms for inclusion in Service Provider data protection agreements.

Similarly, the Draft Risk Assessment Regulations propose both modifications and additions to the existing CCPA regulations. The draft regulations:

  • Propose new and distinct definitions for Artificial Intelligence and Automated Decision-making technologies;
  • Identify specific processing activities that present a “significant” risk of harm to consumers and therefore require a risk assessment. These activities include:
    • Selling or sharing personal information;
    • Processing sensitive personal information (outside of the traditional employment context);
    • Using automated decision-making technologies;
    • Processing the information of children under the age of 16;
    • Using technology to monitor the activity of employees, contractors, job applicants, or students; or
    • Processing personal information of consumers in publicly accessible places using technology to monitor behavior, location, movements, or actions.
  • Propose standards for stakeholder involvement in risk assessments;
  • Propose risk assessment content and review requirements;
  • Require that businesses that train AI for use by consumers or other businesses conduct a risk assessment and include with the software a plain statement of the appropriate uses of the AI; and
  • Outline new disclosure requirements for businesses that implement automated decision-making technologies.

Anyone who would like to submit comments or learn more about attending the CPPA’s September 8 meeting should click here.  We will continue to provide updates on these draft regulations as they become available.

California continues to be at the vanguard of data privacy rights.  The latest effort by California legislators to protect consumer privacy rights focuses on data brokers, who under the proposed California Senate Bill 362, aka the “Delete Act,” would be required to recognize and honor opt-out signals from Californians.  The law seeks to expand on the deletion and opt-out rights provided under the CCPA, which currently requires Californians to submit their deletion and opt-out requests on a company-by-company basis. The “Delete Act” seeks to change this by implementing a single opt-out request that would apply to all data brokers, associated service providers, and contractors. The Delete Act would essentially create a California “do not sell” list for data brokers, akin to the do-not-call list in the telemarketing context.

Application of the Delete Act

The Delete Act would apply to “data brokers,” which the Act defines as a “business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship.” Importantly, the definition exempts those entities covered by the FCRA, GLBA, or the Insurance Information and Privacy Protection Act. Non-exempt data brokers would be required to register with the California Privacy Protection Agency (the “CPPA”), pay a registration fee, and provide the CPPA with detailed information, including whether or not the data broker collects personal information of minors, precise geolocation data of consumers, or reproductive health care data.

By January 1, 2026, the Delete Act would require the CPPA to establish an accessible deletion mechanism that does all of the following:

  1. Allows a consumer to request – through a single verifiable method – that every data broker delete any personal information related to that consumer held by the data broker, associated service provider, or contractor;
  2. Allows a consumer to selectively exclude specific data brokers from a request;
  3. Allows a consumer to undo or alter a previous request, after at least 31 days have passed since the consumer’s last request under the Act; and
  4. Implements and maintains reasonable security procedures and practices.

Additionally, the Delete Act would require the deletion mechanism to, in part:

  1. Allow a consumer to request the deletion of all personal information related to that consumer through a single deletion request, without a fee;
  2. Permit a consumer to securely submit information in one or more privacy-protecting ways determined by the California Privacy Protection Agency to aid in the deletion request;
  3. Allow data brokers registered with the California Privacy Protection Agency to determine whether an individual has submitted a verifiable consumer request to delete the personal information related to that consumer, without disclosing any additional personal information when the data broker accesses the deletion mechanism;
  4. Allow a consumer to make a request in any language spoken by any consumer;
  5. Support the ability of a consumer’s authorized agents to aid in the deletion request; and
  6. Allow the consumer, or their authorized agent, to verify the status of the consumer’s deletion request.

Once the deletion mechanism is in place, data brokers would be required to begin complying with deletion requests on August 1, 2026, by accessing the mechanism at least once every 31 days. Unless the personal information is reasonably necessary to fulfill a purpose described under the CCPA’s Right to Delete exemptions (See Section 1798.105(d)), the data broker would be required to process the deletion request, direct all service providers or contractors associated with the data broker to also process the request, and send an affirmative representation to the CPPA indicating the number of records deleted by the data broker, service providers, and contractors.

After processing a deletion request, the data broker is prohibited from selling or sharing new personal information of the consumer and must continually delete all of the consumer’s personal data at least once every 31 days, unless the consumer requests otherwise.

Enforcement & Reporting

While the draft of the Delete Act that passed the Senate was enforceable by both the Attorney General and the California Privacy Protection Agency, the Assembly has since struck the enforcement provisions tied to the Attorney General. As the draft currently stands, the CPPA retains sole enforcement authority and may issue administrative fines of $200 per day for a data broker’s failure to register, plus an additional $200 per day for each deletion request a data broker fails to properly comply with.  However, the Act bars administrative actions for violations that are more than five years old. The Act would not provide for a private right of action.

In addition to the enforcement provisions, the Delete Act would require that data brokers compile annual reports containing:

  1. The number of deletion requests received under the Act;
  2. The number of deletion requests that were complied with and the number that were denied;
  3. The number of deletion requests deemed to be unverifiable, to have not been made by a consumer, or which called for the deletion of exempt information; and
  4. The median and the mean number of days it took the data broker to substantively respond to a request.

The above metrics must be disclosed on the data broker’s website, along with a link to their privacy policy, by January 31 of each year. The Act also forbids the use of dark patterns on the data broker’s website.

Beginning on January 1, 2028, and every three years thereafter, data brokers must also undergo an audit by an independent third party to determine compliance with the Act. While this audit will not be automatically submitted to the CPPA, a data broker must be able to provide the CPPA a copy within five days of a request from the agency. In addition, starting in 2029, a data broker would have to annually provide the CPPA with the last date on which an audit occurred.

Status of the Delete Act

The Delete Act was passed by the California State Senate on May 31, and then unanimously passed out of the Assembly’s Committee on Privacy and Consumer Protection in June.  The bill is currently referred to the Assembly’s Committee on Appropriations. On August 16, the Delete Act was placed on the Assembly’s “suspense file” calendar. Suspense file bills are considered at a single hearing – without public comment or attendance – where the Committee on Appropriations compares the anticipated costs of a bill against the state’s available revenue. There is currently no public date set for next steps on the Delete Act.

Privacy advocates and data brokers will be carefully monitoring the progress of this proposed law, which goes further than any U.S. law to date in regulating the data broker industry.  If passed, the law – in combination with already existing consumer opt-out rights and Apple App Store requirements that consumers opt in to online tracking – will further challenge the ad tech industry’s business model.

After an extensive comment period, the SEC announced on July 26 that it was formally adopting new rules for public companies governing cybersecurity disclosures. The rules had generated significant backlash from public companies, which criticized both the new reporting deadlines for data security incidents and the mandatory cyber-risk disclosures.

Adoption of the new cybersecurity rules will create immediate compliance challenges for public companies. For companies whose fiscal year closes on December 31, 2023, the new cyber-risk disclosures will be mandatory for their upcoming annual report filings. The new breach reporting deadlines are likely to trigger a wave of scrutiny for public companies that suffer material security incidents, making it essential for public companies to carefully consider both the content of their risk disclosures and the maturity of their information security programs.

What the New SEC Cybersecurity Rules Require

The SEC Cybersecurity Rules strive to enhance and standardize disclosures regarding cybersecurity incidents, risk management, strategy, and governance. Public companies subject to the reporting requirements of the Securities Exchange Act of 1934 will be subject to new disclosure requirements regarding (1) cybersecurity incidents, and (2) cybersecurity risk management, strategy, and governance. The rules also significantly expand cyber compliance obligations for registered investment advisers (RIAs), investment companies and broker-dealers.

Public Companies

Breach Reporting

Beginning with the incident disclosure requirements, the rule amends Form 8-K to require disclosure of material cybersecurity incidents within four business days of determining that a material incident has occurred. The definition of “materiality” has not been changed in the new rule, and continues to follow prior SEC guidance in this area. The rule also adds new items to Regulation S-K and Form 20-F that require public companies to provide updated disclosures relating to previously disclosed cybersecurity incidents. Further, these additions will require disclosure when a series of previously undisclosed and individually immaterial incidents becomes material in the aggregate. Finally, the rule amends Form 6-K to add cybersecurity incidents as a reporting topic.

The four-business-day reporting deadline proved the most controversial of the new requirements, generating significant pushback during the comment period. Many commenters questioned whether such a short deadline would impair ongoing FBI investigations and force companies to make rushed, incomplete public disclosures that would only open them up to further second-guessing and potential liability.

Cyber Risk Management

The new rules create a swath of new reporting requirements regarding cybersecurity risk management, strategy, and governance. Specifically, the amendments to Regulation S-K and Form 20-F require a registrant to describe its policies and procedures, if any, for the identification and management of risks from cybersecurity threats. This includes disclosure of whether the company considers cybersecurity as part of its business strategy, financial planning, and capital allocation, and how management implements cybersecurity policies, procedures, and strategies. The SEC Rule also requires disclosure concerning whether the company has a chief information security officer (CISO) as well as policies and procedures targeted to identify and manage cyber risk.

RIAs, Investment Companies and Broker-Dealers

The new SEC rules also impose significant new compliance requirements on RIAs, investment companies and broker-dealers. More specifically, the new rules:

  • Require RIAs and investment companies to adopt and implement written policies and procedures that are reasonably tailored to address cybersecurity risks, and to engage in periodic risk assessments, security monitoring, and vulnerability management;
  • Require RIAs to report “significant cybersecurity incidents” to the SEC within 48 hours of discovery, including incidents related to the adviser or registered funds or private funds managed by the adviser. Unlike reporting by public companies, these reports would be deemed confidential;
  • Require broker-dealers, RIAs, and investment companies to implement written policies and procedures for incident response programs, including requiring covered institutions to provide notice within 30 days to affected individuals whose sensitive customer information was accessed or used without authorization.

Timing for Compliance With New Rules

The new cybersecurity rules will become effective 30 days following the publication of the adopting release in the Federal Register.

Incident Reporting

Companies must begin reporting material cybersecurity incidents on Form 8-K or Form 6-K on the later of 90 days after the publication of the final rules in the Federal Register or December 18, 2023. Smaller reporting companies have an additional 180 days and must begin reporting incidents on the later of 270 days after the date of publication or June 15, 2024. If a company is unsure whether it will qualify as a smaller reporting company, best practice is to assume that the earlier deadline for other companies applies.

Once the new rules come into effect, any cybersecurity incident a company deems material must be disclosed on new Item 1.05 of Form 8-K within four business days after determining the incident is material — rather than the date the company discovers the incident. The SEC has clarified that the materiality determination must be made “without unreasonable delay” following discovery. A company may delay disclosure for up to 30 days if the U.S. Attorney General notifies the Commission that immediate disclosure may pose a significant risk to public safety or national security, with an additional 60-day delay available for extraordinary circumstances.

Annual Reporting

Companies must annually disclose cybersecurity risk management, strategy, and governance on Form 10-K or Form 20-F, starting with annual reports for fiscal years ending on or after December 15, 2023. This effective date means companies with calendar-end fiscal years will be among the first to comply with these new disclosure requirements.

Analysis and Recommendations

The newly adopted rules aim to provide more transparency to investors by regulating disclosure requirements concerning a company’s cybersecurity incidents, risk management, strategy and governance. Many companies will need to make a significant effort in the coming months to elevate cybersecurity from an operational issue to a board-level issue. To ensure compliance, boards should carefully consider potential cybersecurity risk procedures and establish strategies for meeting annual disclosure requirements and reporting material incidents within four business days.

Given the short turnaround period – particularly for companies filing annual reports for the calendar-end fiscal year – boards must act quickly to implement new disclosure controls and ensure proper disclosure. Companies must home in on their cybersecurity risk management and governance processes as auditors expand their internal control analysis to pick up the new disclosure rules this fall. Public companies must be particularly mindful to develop additional disclosure controls to ensure timely and accurate reporting under the new rules relating to cybersecurity risk management, strategy, governance, and incidents.

Beyond the accelerated reporting requirements, the SEC’s new cybersecurity procedures pose numerous challenges for public companies, including enhanced regulatory scrutiny, SEC enforcement actions for non-compliance, and shareholder and customer lawsuits. Publicly disclosing incidents can lead to reputational damage and vulnerability to bad actors obtaining potentially sensitive information about a company’s cybersecurity procedures.

We recommend companies work closely with legal counsel experienced in cybersecurity matters and SEC disclosure to implement board cybersecurity training, develop internal reporting mechanisms, assess the materiality of incidents and ensure compliance with the new disclosure rules.

Llama? Vicuña? Alpaca? You might be asking yourself, “what do these camelids have to do with licensing LLM artificial intelligence?” The answer is, “a lot.”

LLaMa, Vicuña, and Alpaca are the names of three recently developed large language models (LLMs). LLMs are a type of artificial intelligence (AI) that uses deep learning techniques and large data sets to understand, summarize, generate, and predict content (e.g., text). These and other LLMs are the brains behind the generative chatbots showing up in our daily lives, grabbing headlines, and sparking debate about generative artificial intelligence. The LLaMa model was developed by Meta (the parent company of Facebook). Vicuña is the result of a collaboration between UC Berkeley, Stanford University, UC San Diego, and Carnegie Mellon University. And Alpaca was developed by a team at Stanford. LLaMa was released in February 2023; Alpaca was released on March 13, 2023; and Vicuña was released two weeks later on March 30, 2023.

LLMs like these are powerful tools and present attractive opportunities for businesses and researchers alike. Potential applications of LLMs are virtually limitless, but typical examples are customer service interfaces, content generation (both literary and visual), content editing, and text summarization.
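
To make one such application concrete, below is a minimal sketch of text summarization using the open-source Hugging Face transformers library. The library’s pipeline API is real; the model checkpoint, sample text, and length parameters are our illustrative choices, not anything tied to the models discussed in this post.

    # Minimal sketch: text summarization with an open-source model via the
    # Hugging Face "transformers" pipeline API. The model checkpoint and
    # length parameters are illustrative choices, not recommendations.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

    article = (
        "The California Privacy Protection Agency published two new sets of "
        "draft regulations addressing cybersecurity audits and risk "
        "assessments, which will be discussed at its upcoming meeting."
    )

    # Generate a short summary; the length bounds are tunable parameters.
    result = summarizer(article, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])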

While powerful, these tools present risks. Different models have diverse technical strengths and weaknesses. For example, the team that developed Vicuña recognizes “it is not good at tasks involving reasoning or mathematics, and it may have limitations in accurately identifying itself or ensuring the factual accuracy of its outputs.” Thus, Vicuña might not be the best choice for a virtual math tutor. Moreover, in a general sense, one long-standing architecture for language modeling – the recurrent neural network (RNN) – is well-suited for modeling sequential data, but suffers from the “vanishing gradient problem” (i.e., as more layers using certain activation functions are added to a neural network, the gradients of the loss function approach zero, making the network hard to train). Meanwhile, transformers (the “T” in GPT) excel at capturing long-range dependencies, which helps with translation-style tasks, but are limited in their ability to perform complex compositional reasoning.
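
The vanishing gradient problem can be shown with a few lines of arithmetic. The sketch below fixes a single pre-activation value and omits weight factors purely for illustration; it demonstrates how the chain-rule product of sigmoid derivatives (each at most 0.25) collapses toward zero as layers are added.

    # Numeric sketch of the vanishing gradient problem: backpropagating
    # through a stack of sigmoid activations multiplies the gradient by
    # sigmoid'(z) (at most 0.25) once per layer, so the product shrinks
    # toward zero as depth grows. Weights are omitted and z is fixed,
    # purely for illustration.
    import math

    def sigmoid(z: float) -> float:
        return 1.0 / (1.0 + math.exp(-z))

    def sigmoid_prime(z: float) -> float:
        s = sigmoid(z)
        return s * (1.0 - s)  # peaks at 0.25 when z = 0

    grad = 1.0
    z = 0.5  # a fixed pre-activation value, for illustration
    for layer in range(1, 51):
        grad *= sigmoid_prime(z)  # one chain-rule factor per layer
        if layer in (1, 10, 25, 50):
            print(f"after {layer:2d} layers, gradient factor ~ {grad:.3e}")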

Beyond understanding such technical differences, businesses must understand that using these tools may create legal liabilities. Decision makers must understand the differences in the terms of use (including licensing terms) under which various LLMs (and/or associated chatbots) are made available. For example, the terms of use of GPT-3 (by OpenAI), LaMDA (by Google), and LLaMa are all different. Some terms may overlap or be similar, but the organizations developing the models may have different objectives or motives and therefore may place different restrictions on the use of the models.

For example, Meta believes that “[b]y sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating [] problems in large language models,” and thus Meta released LLaMa “under a noncommercial license focused on research use cases,” where “[a]ccess to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.” Thus, generally speaking, LLaMa is available for non-commercial purposes (e.g., research). Similarly, Vicuña, which is a fine-tuned LLaMa model that was trained on approximately 70,000 user-shared conversations from ChatGPT, is also available for non-commercial uses. On the other hand, OpenAI’s GPT terms of service tell users “you can use Content (e.g., the inputs of users and outputs generated by the system) for any purpose, including commercial purposes such as sale or publication…” Meanwhile, the terms of use of Google’s Bard (which relies on the LaMDA model developed by Google), as laid out in the “Generative AI Additional Terms of Service,” make it plain that users “may not use the Services to develop machine learning models or related technology.” As is standard in the industry, any misuse of the service gives the LLM’s owner or operator grounds to terminate the user’s access and likely creates exposure to civil liability under contract law and related theories.

The waters are muddied further when these large corporations begin licensing and sharing their LLMs with one another. Reports of partnerships with Amazon and Microsoft indicate that Meta is opening up access to its LLaMa model beyond the world of academia. For example, Meta’s LLaMa large language model is now available to Microsoft Azure users.

Thus, in selecting LLMs for various purposes, users must weigh the technical advantages and drawbacks of the different models (e.g., network architecture, weights and biases of algorithms, performance parameters, computing budget, and the actual data on which the model was trained) against the legal liabilities that may arise from using them. Critically, before investing too much time or resources into a product or service that makes use of an LLM, business leaders must review the terms associated with the model in order to fully understand the scope of legally permissible use, and take action to ensure compliance with those terms so as to avoid liability.

On July 10, 2023, the European Commission adopted its adequacy decision for the EU-US Data Privacy Framework (Framework).  The adequacy decision concludes a long process and opens a new means by which companies may transfer personal data from the European Economic Area (EEA) to the United States.

The Framework will be administered by the US Department of Commerce, which will process applications for certification and monitor whether participating companies continue to meet the certification requirements.  Compliance by US companies will be enforced by the Federal Trade Commission.

The Framework will certainly face legal challenges.  But for now, given the challenges to the sufficiency of the Standard Contractual Clauses—including the recent decision by the Irish Data Protection Commission against Meta—it is a bit of welcome news.

Shortly before the July Fourth holiday, the California Superior Court issued an important but subtly complex ruling that pushes back the date when the California Privacy Protection Agency (CPPA) may begin enforcing the latest round of privacy regulations.  These regulations were finalized in March 2023 and implement provisions of the California Privacy Rights Act (CPRA), which amended the CCPA. Because of the hybrid manner in which the Court pushed back enforcement of some, though not all, CPRA-related obligations, the degree to which businesses will benefit from delayed enforcement is not at all clear.

Background

The CPRA was passed via ballot initiative in November 2020, and amended the CCPA in significant ways. One of the CPRA amendments enabled the creation of the CPPA, which is authorized to enforce the provisions of the CCPA beginning on July 1, 2023.  The CPRA, however, required that the CPPA issue regulations implementing the new amendments no later than July 1, 2022.  The CPPA was unable to meet this deadline and issued final regulations for 12 of the 15 substantive areas of law covered by the CPRA on March 29, 2023 – nine (9) months after the deadline.

This delay led to an immediate lawsuit, filed by the Chamber of Commerce, seeking an injunction to prevent the CPPA from enforcing the March 2023 regulations.  The main argument advanced by the Chamber of Commerce was that the CPRA implicitly, if not explicitly, contemplates a 12-month period for companies to prepare for enforcement of the law.  Because the regulations were not finalized until March 2023, the Chamber argued that the CPPA could not enforce them until March 2024, at the earliest.

In an apparent victory for regulated entities, the Court agreed with the Chamber and held that the CPPA cannot begin enforcement of the March 2023 regulations until March 2024 – 12 months after the regulations were finalized.  Future amendments to the CCPA regulations may not be enforced until 12 months after such regulations are finalized. 

The Good, The Bad and the Complicated

The good news for many U.S. businesses is that, in theory, they will have an additional nine (9) months to prepare for enforcement of the regulations finalized in March 2023. The bad news is that enforcement of the CPRA itself, as well as of those regulations that predated passage of the CPRA, is not affected by the ruling.  The CPPA may commence enforcement of these provisions as of July 1, 2023.  And indeed, the CPPA has already publicly taken this position.

All of this raises the complicated question of which obligations imposed by the CPRA are subject to the nine-month enforcement delay and which are not.  For example, Section 1798.135(a) of the CPRA includes a requirement that businesses provide a “Do Not Sell/Share” link on the homepage of their websites.  This provision is arguably outside the scope of the recent California Superior Court ruling and may be enforced by the CPPA. But Sections 7010 and 7026 of the March 2023 regulations provide significant detail concerning the operational requirements for implementing the link, and those requirements would not be enforceable until March 2024.  In other words, the extent to which the CPPA must delay enforcement of violations of the obligation to provide a Do Not Sell/Share link turns on the specific violations alleged, making it difficult for businesses to assess when enforcement will commence.  Similarly, in the August 2022 Sephora consent decree, the California Attorney General took the position that businesses subject to the CCPA must recognize the Global Privacy Control (GPC) opt-out signal. But the operational requirements for recognizing GPC were set forth in the March 2023 regulations (§ 7025).
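
As a technical aside, participating browsers transmit GPC as a simple HTTP request header (Sec-GPC: 1), which is why the operational questions center on how a business detects and honors the signal. Below is a minimal server-side sketch of such detection; Flask and the record_opt_out helper are illustrative and hypothetical choices on our part, since the regulations prescribe the obligation, not any particular implementation.

    # Minimal sketch of detecting the Global Privacy Control signal, which
    # participating browsers send as the HTTP request header "Sec-GPC: 1".
    # Flask and record_opt_out() are illustrative/hypothetical choices;
    # the CCPA regulations do not prescribe any particular implementation.
    from flask import Flask, request

    app = Flask(__name__)

    def record_opt_out(visitor_id: str) -> None:
        # Hypothetical helper: persist the opt-out and suppress any
        # sale/sharing of this visitor's personal information downstream.
        pass

    @app.route("/")
    def index():
        if request.headers.get("Sec-GPC") == "1":
            record_opt_out(request.remote_addr)  # stand-in visitor identifier
            return "GPC opt-out signal received and honored."
        return "No GPC opt-out signal present."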

Other notable provisions of the CPRA that the March 2023 regulations clarified include: the prohibition on dark patterns (§ 7004), obligations to notify service providers and third parties of deletion requests (§ 7022), operational requirements for honoring right-to-correction requests (§ 7023), the January 1, 2022 front-end date for right-to-know requests (§ 7024), operational requirements for honoring requests to limit use of sensitive personal information (§ 7027), and revised contractual requirements for service provider agreements (§§ 7050–7051) and third-party contracts (§ 7052), among others.  Again, it is not clear to what degree these new obligations are subject to the March 2024 enforcement date, because many of them stem, in full or in part, from the CCPA or the CPRA itself.

For many U.S. businesses these complexities may be moot because they were already striving to be fully compliant by July 1, 2023, if not earlier. But for companies that have yet to fully comply with the CPRA, it is unclear how much of a reprieve the California Superior Court ruling really provides. Much will depend on whether the CPPA appeals the ruling and, if so, whether it prevails.  The CPPA has scheduled a public meeting for the week of July 14, 2023, at which it will provide an update on enforcement.

Even if the CPPA does not appeal the Court’s ruling, compliance will turn on the specifics of CPPA enforcement, in particular the degree to which the CPPA ties an enforcement action to a CPRA regulation or a statutory obligation.  Businesses subject to the CCPA need to carefully monitor the CPPA’s position on this issue, both via its enforcement activities and its public statements.

One of the most significant trends in privacy law this year has been the surge in online child protection laws in U.S. states.  In a recent article for the Cybersecurity Law Report, Ballard Spahr privacy attorneys Phil Yannella, Greg Szewczyk, Tim Dickens and Emily Klode explore the legal and practical complexities associated with these laws — particularly requirements for the use of online age-verification technologies.

The European Parliament has approved a revised version of the EU Artificial Intelligence Act (AIA), which appears to be on a path to adoption by the EU later this year.  The AIA is the most comprehensive legislation in the world to address the risks associated with the use of artificial intelligence.  A final version of AIA will next be the subject of trilogue negotiations between the European Commission, European Council and the European Parliament.

While numerous other countries, including the United States and China, have expedited their efforts to regulate the rapidly evolving world of artificial intelligence, the EU is the furthest down the road to implementing legislation.  First proposed in 2021, the AIA borrows heavily from the privacy toolbox established by the GDPR to regulate artificial intelligence, with a heavy emphasis on transparency, consent mechanisms, data subject rights, and technical and organizational safeguards.  At its core, the law takes a risk-based approach toward the use of artificial intelligence by requiring risk assessments and other controls for “high-risk” artificial intelligence.

The AIA also would ban certain technologies altogether.  For example, one area of significant concern for EU regulators has been the use of facial recognition technologies, particularly live facial recognition in public spaces, which the AIA would ban.

Notably, the most recent version of the AIA includes provisions that address ChatGPT — a technology that had not become commercially available when the law was first proposed. The latest version of the AIA would require that companies conspicuously label the outputs of ChatGPT and other forms of generative AI, and specifically disclose the inclusion of any copyrighted materials in training datasets — a requirement that has raised concerns among AI developers.

Passage of the AIA by the EU Parliament came more quickly than many commentators had expected, and suggests that the EU seeks to become a global leader in the regulation of artificial intelligence. In the same way that passage of the GDPR set the stage for worldwide promulgation of data privacy laws, including in the U.S., the AIA may become a standard for other countries to follow in regulating artificial intelligence.

We have previously done a podcast covering the AIA.  For more details on the AIA, and artificial intelligence generally, continue to follow this blog.

On May 28, Texas became the sixth state this year to pass a comprehensive data protection law.  Although the Texas Data Privacy and Security Act (“TDPSA”) is largely in line with the Virginia Consumer Data Protection Act and other recently passed state privacy laws, it has a few key distinctions that may cause headaches for larger businesses.  The TDPSA becomes effective July 1, 2024.

Applicability: The TDPSA eschews the revenue and volume criteria implemented by other comprehensive state privacy laws.  Instead, the TDPSA applies to any person that:

  • Conducts business in Texas or produces products or services consumed by Texas residents;
  • Processes or engages in the sale of personal data; and
  • Does not qualify as a “Small Business” as defined by the United States Small Business Administration (“SBA”).

This final prong is unique to the TDPSA.  Whether a business qualifies as a small business may depend on its number of employees, average annual revenue, and industry.  The SBA has provided guidance that “most manufacturing companies with 500 employees or fewer, and most non-manufacturing businesses with average annual receipts under $7.5 million, will qualify as a small business.” However, each business will have to review the relevant industry standards to determine applicability.

One important compliance point is that the carve-out for small businesses is not total. Regardless of their size, revenue, or the volume of information they process, companies that qualify as small businesses are still prohibited under the TDPSA from selling sensitive personal data without receiving the relevant consumer’s prior consent. Sensitive data means a category of personal data that:

  • Reveals racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexuality, or citizenship or immigration status;
  • Is genetic or biometric data processed for the purpose of uniquely identifying an individual;
  • Is personal data collected from a known child (under 13); or
  • Is precise geolocation data.

The TDPSA specifically exempts financial institutions covered under the GLBA, covered entities and business associates covered under HIPAA, nonprofit organizations, and institutions of higher education. It also exempts data subject to the GLBA and HIPAA—a key distinction for potential processors under the law.

Data Subject Rights and Impact Assessments: The data subject rights and impact assessment requirements provided under the TDPSA are in line with those provided under Virginia law.  Namely, the TDPSA provides the rights to:

  • Access and portability;
  • Correction;
  • Deletion; and
  • Opt-out of:
    • The sale of personal information for monetary or other valuable consideration;
    • Targeted advertising; and
    • Profiling in furtherance of significant decision-making.

Additionally, like Virginia, the TDPSA requires that controllers conduct impact assessments prior to processing personal data in a manner that could pose a heightened risk of harm to consumers. This includes the sale of personal data and the processing of personal data for targeted advertising or profiling.

Enforcement: The TDPSA does not include a private right of action.  Instead, it is enforced exclusively by the Texas Attorney General.  The TDPSA does provide for a 30-day cure period, which, unlike the cure periods in some other state privacy laws, is not scheduled to expire.

In sum, while the TDPSA is largely in line with its contemporaries, its novel applicability criteria are likely to cause compliance headaches.  Businesses will have to review relevant industry standards to determine the scope of their obligations under the law.

On April 24, the Governor of Kansas signed into law Kansas Senate Bill 44, which enacts the Financial Institutions Information Security Act (the “Act”). The Act requires credit services organizations, mortgage companies, supervised lenders, money transmitters, trust companies, and technology-enabled fiduciary financial institutions to comply with the requirements of the GLBA’s Safeguards Rule, as in effect on July 1, 2023. (16 C.F.R. § 314.1 et seq.). The only available exemption from the Act’s requirements is for entities that are directly regulated by a federal banking agency.

The Act requires covered entities in Kansas to create standards regarding the development, implementation, and maintenance of reasonable safeguards to protect the security, confidentiality, and integrity of customer information. For purposes of the Act, “customer information” is broadly defined as “any record containing nonpublic personal information about a customer of a covered entity, whether in paper, electronic or other form, that is handled or maintained by or on behalf of the covered entity or its affiliates.” However, the Act also requires that an entity’s customer information standards be consistent with, and made pursuant to, the GLBA’s Safeguards Rule.

The Safeguards Rule is a regulation stemming from the GLBA that requires non-banking financial institutions to develop, implement, and maintain a comprehensive security program to protect the information of their customers. Updated requirements under the Safeguards Rule are set to become effective on June 9, 2023; we previously covered them in greater detail on the CyberAdviser blog (see here and here). The Safeguards Rule lays out three main objectives for information security programs: (1) insure the security and confidentiality of customer information; (2) protect against any anticipated threats or hazards to the security or integrity of such information; and (3) protect against unauthorized access to or use of such information that could result in substantial harm or inconvenience to any customer.

As of June 9, those objectives will require applicable companies to, in part: (1) designate a qualified individual to oversee their information security program; (2) develop a written risk assessment; (3) limit and monitor who can access customer information; (4) encrypt information in transit and at rest; (5) train security personnel; (6) develop a written incident response plan; and (7) implement multifactor authentication whenever anyone accesses customer information. However, the Safeguards Rule does not fully apply to financial institutions that fit within certain exceptions or have primary regulators other than the FTC. Those entities in particular should assess whether the Act may require them to comply with the Safeguards Rule.  And, whereas covered entities subject to the FTC’s Safeguards Rule have been working for months if not years to comply, the Kansas Act will require compliance within a matter of months.

Additionally, the Act requires covered entities to develop and organize their information security program “into one or more readily accessible parts,” and to maintain that program in accordance with the covered entity’s books and records retention requirements. Lastly, the Act gives the Kansas Office of the State Bank Commissioner discretionary authority to issue regulations implementing the Act.