Llama? Vicuña? Alpaca? You might be asking yourself, “what do these camelids have to do with licensing artificial intelligence models?” The answer is, “a lot.”

LLaMa, Vicuña, and Alpaca are the names of three recently developed large language models (LLMs). LLMs are a type of artificial intelligence (AI) that uses deep learning techniques and large data sets to understand, summarize, generate, and predict content (e.g., text). These and other LLMs are the brains behind the generative chatbots showing up in our daily lives, grabbing headlines, and sparking debate about generative artificial intelligence. The LLaMa model was developed by Meta (the parent company of Facebook). Vicuña is the result of a collaboration between UC Berkeley, Stanford University, UC San Diego, and Carnegie Mellon University. And Alpaca was developed by a team at Stanford. LLaMa was released in February 2023; Alpaca was released on March 13, 2023; and Vicuña was released about two weeks later, on March 30, 2023.

LLMs like these are powerful tools and present attractive opportunities for businesses and researchers alike. Potential applications of LLMs are virtually limitless, but typical examples are customer service interfaces, content generation (both literary and visual), content editing, and text summarization.

While powerful, these tools present risks. Different models have different technical strengths and weaknesses. For example, the team that developed Vicuña recognizes that “it is not good at tasks involving reasoning or mathematics, and it may have limitations in accurately identifying itself or ensuring the factual accuracy of its outputs.” Thus, Vicuña might not be the best choice for a virtual math tutor. More generally, the architectures underlying these models differ. Recurrent neural networks (RNNs) – long a standard architecture for modeling sequential data such as text – suffer from the “vanishing gradient problem” (i.e., as more layers using certain activation functions are added to a neural network, the gradients of the loss function approach zero, making the network hard to train). Transformers (the “T” in GPT), meanwhile, handle long-range dependencies well, which helps with translation-style tasks, but are limited in their ability to perform complex compositional reasoning.
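To make the vanishing gradient problem concrete, the short Python sketch below multiplies the per-layer derivative of a sigmoid activation across a stack of layers and prints how quickly the surviving gradient shrinks. It is a deliberately simplified illustration under stated assumptions (random pre-activations, weight matrices ignored, an arbitrary depth of 50 layers), not a model of any particular LLM.

```python
# Simplified illustration of the vanishing gradient problem.
# Assumptions: 50 layers, random pre-activations, weight matrices ignored.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
pre_activations = rng.normal(size=50)  # one pre-activation per layer / time step

# Backpropagation multiplies one local derivative per layer; the sigmoid's
# derivative never exceeds 0.25, so the product shrinks rapidly with depth.
gradient = 1.0
for depth, x in enumerate(pre_activations, start=1):
    s = sigmoid(x)
    gradient *= s * (1.0 - s)
    if depth % 10 == 0:
        print(f"after {depth:2d} layers, surviving gradient ~ {gradient:.3e}")
```

Running the sketch shows the gradient collapsing toward zero within a few dozen layers, which is why early layers of a deep recurrent network can be nearly impossible to train; gated architectures such as LSTMs and, later, transformers were developed in part to sidestep this effect.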

Beyond these technical differences, businesses must recognize that using these tools may create legal liabilities. Decision makers must understand the differences in the terms of use (including licensing terms) under which various LLMs (and/or associated chatbots) are made available. For example, the terms of use of GPT-3 (by OpenAI), LaMDA (by Google), and LLaMa are all different. Some terms overlap or are similar, but the organizations developing the models may have different objectives or motives and therefore may place different restrictions on the use of their models.

For example, Meta believes that “[b]y sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating [] problems in large language models,” and thus Meta released LLaMa “under a noncommercial license focused on research use cases,” where “[a]ccess to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.” Thus, generally speaking, LLaMa is available only for non-commercial purposes (e.g., research). Similarly, Vicuña, a fine-tuned LLaMa model trained on approximately 70,000 user-shared ChatGPT conversations, is also limited to non-commercial uses. On the other hand, OpenAI’s GPT terms of service tell users “you can use Content (e.g., the inputs of users and outputs generated by the system) for any purpose, including commercial purposes such as sale or publication…” Meanwhile, the terms of use of Google’s Bard (which relies on the LaMDA model developed by Google), as laid out in the “Generative AI Additional Terms of Service,” make it plain that users “may not use the Services to develop machine learning models or related technology.” As is standard in the industry, any misuse of the service gives the LLM’s owner and operator grounds to terminate the user’s access and likely exposes the user to civil liability under contract law and related theories.

The waters are muddied further when these large corporations begin sharing access to their LLMs with one another. Reports of partnerships with Amazon and Microsoft indicate that Meta is opening up access to its LLaMa model beyond the world of academia; for example, the LLaMa model is now available to Microsoft Azure users.

Thus, in selecting LLMs for various purposes, users must weigh the technical advantages and drawbacks of the different models (e.g., network architecture, weights and biases of algorithms, performance parameters, computing budget, and the actual data on which the model was trained) against the legal liabilities that may arise from using these LLMs. Critically, before investing too much time or resources into a product or service that makes use of an LLM, business leaders must review the terms associated with the model in order to fully understand the scope of legally permissible use and take actions to ensure compliance with those terms so as to avoid liabilities.

On July 10, 2023, the European Commission adopted its adequacy decision for the EU-US Data Privacy Framework (Framework). The adequacy decision concludes a long process and opens a new means by which companies can transfer personal data from the European Economic Area (EEA) to the United States.

The Framework will be administered by the US Department of Commerce, which will process applications for certification and monitor whether participating companies continue to meet the certification requirements.  Compliance by US companies will be enforced by the Federal Trade Commission.

The Framework will certainly face legal challenges. But for now, given the recent challenges to the sufficiency of the Standard Contractual Clauses—including the Irish Data Protection Commission’s recent decision against Meta—it is a bit of welcome news.

Shortly before the July Fourth holiday, the California Superior Court issued an important, but subtly complex, ruling that pushes back the date on which the California Privacy Protection Agency (CPPA) may begin enforcing the latest round of privacy regulations. These regulations were finalized in March 2023 and implement provisions of the California Privacy Rights Act (CPRA), which amended the CCPA. Because of the hybrid manner in which the Court pushed back enforcement of some, though not all, CPRA-related obligations, the degree to which businesses will benefit from the delayed enforcement is not at all clear.

Background

The CPRA was passed via ballot initiative in November 2020 and amended the CCPA in significant ways. One of the CPRA’s provisions enabled the creation of the CPPA, which is authorized to enforce the provisions of the CCPA beginning on July 1, 2023. The CPRA, however, required the CPPA to issue regulations implementing the new amendments no later than July 1, 2022. The CPPA was unable to meet this deadline and issued final regulations for 12 of the 15 substantive areas of law covered by the CPRA on March 29, 2023 – nine (9) months after the deadline.

This delay led to an immediate lawsuit, filed by the Chamber of Commerce, seeking an injunction to prevent the CPPA from enforcing the March 2023 regulations. The Chamber’s main argument was that the CPRA implicitly, if not explicitly, contemplates a 12-month period for companies to prepare for enforcement of the law. Because the regulations were not finalized until March 2023, the Chamber argued that the CPPA could not enforce them until March 2024, at the earliest.

In an apparent victory for regulated entities, the Court agreed with the Chamber and held that the CPPA cannot begin enforcement of the March 2023 regulations until March 2024 – 12 months after the regulations were finalized.  Future amendments to the CCPA regulations may not be enforced until 12 months after such regulations are finalized. 

The Good, the Bad, and the Complicated

The good news for many U.S. businesses is that, in theory, they will have an additional nine (9) months to prepare for enforcement of the regulations finalized in March 2023. The bad news is that enforcement of the CPRA itself, as well as of those regulations that predated passage of the CPRA, is not affected by the ruling. The CPPA may commence enforcement of these provisions as of July 1, 2023. And indeed, the CPPA has already publicly taken this position.

All of this raises the complicated question of which obligations imposed by the CPRA are subject to the 9-month enforcement delay and which are not. For example, Section 1798.135(a) of the CPRA includes a requirement that businesses provide a “Do Not Sell/Share” link on the homepage of their website. This provision is arguably outside the scope of the recent California Superior Court ruling and may be enforced by the CPPA. But Sections 7010 and 7026 of the March 2023 regulations provide significant detail concerning the operational requirements for implementing the link, and those requirements would not be enforceable until March 2024. In other words, the extent to which the CPPA must delay enforcement of violations of the obligation to provide a Do Not Sell/Share link turns on the specific violations alleged, making it difficult for businesses to assess when enforcement will commence. Similarly, in the August 2022 Sephora consent decree, the California Attorney General took the position that businesses subject to the CCPA need to recognize the Global Privacy Control (GPC) opt-out signal. But the operational requirements for recognizing GPC were set forth by the March 2023 regulations (§7025).
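For context on how the GPC signal works mechanically, the Python sketch below shows one way a website backend might detect the signal, which browsers send (per the GPC specification) as the HTTP request header “Sec-GPC: 1”. The handler and record_opt_out function are hypothetical placeholders; what a business must actually do upon receiving the signal is governed by the CCPA/CPRA and the §7025 regulations, not by this sketch.

```python
# Hypothetical sketch: detecting a Global Privacy Control (GPC) opt-out signal.
# Per the GPC specification, browsers with the setting enabled send "Sec-GPC: 1".

def gpc_opt_out_requested(request_headers: dict[str, str]) -> bool:
    """Return True if the request carries an enabled GPC signal."""
    return request_headers.get("Sec-GPC", "").strip() == "1"

def record_opt_out(consumer_id: str, reason: str) -> None:
    # Placeholder: a real implementation would suppress sale/sharing (e.g.,
    # disable third-party ad-tech tags) and persist the opt-out for this consumer.
    print(f"opt-out recorded for {consumer_id}: {reason}")

def handle_request(request_headers: dict[str, str], consumer_id: str) -> None:
    if gpc_opt_out_requested(request_headers):
        record_opt_out(consumer_id, reason="GPC signal")

# Example: a request from a browser with GPC enabled
handle_request({"Sec-GPC": "1"}, consumer_id="consumer-123")
```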

Other notable provisions of the CPRA that the March 2023 regulations clarified include: the prohibition on dark patterns (§7004), obligations to notify service providers and third parties of deletion requests (§7022), operational requirements for honoring right-to-correction requests (§7023), the January 1, 2022 front-end date for right-to-know requests (§7024), operational requirements for honoring requests to limit use of sensitive personal information (§7027), and revised contractual requirements for service provider agreements (§§7050, 7051) and third-party contracts (§7052), among others. Again, it is not clear to what degree these new obligations are subject to March 2024 enforcement because many of them stem, in full or in part, from the CCPA or the CPRA itself.

For many U.S. businesses these complexities may be moot because they were already striving to be fully compliant by July 1, 2023, if not earlier. But for companies that have yet to fully comply with the CPRA, it is unclear how much of a reprieve the California Superior Court ruling really provides. Much will depend on whether the CPPA appeals the ruling and prevails on appeal. The CPPA has scheduled a public meeting for the week of July 14, 2023, at which it will provide an update on enforcement.

Even if the CPPA does not appeal the Court’s ruling, compliance will turn on the specifics of CPPA enforcement, in particular the degree to which the CPPA ties enforcement to a CPRA regulation or a statutory obligation. Businesses subject to the CCPA should carefully monitor the CPPA’s position on this issue, both through its enforcement activities and its public statements.

One of the most significant trends in privacy law this year has been the surge in online child protection laws in U.S. states. In a recent article for the Cybersecurity Law Report, Ballard Spahr privacy attorneys Phil Yannella, Greg Szewczyk, Tim Dickens and Emily Klode explore the legal and practical complexities associated with these laws — particularly requirements for the use of online age-verification technologies.

The European Parliament has approved a revised version of the EU Artificial Intelligence Act (AIA), which appears to be on a path to adoption by the EU later this year. The AIA is the most comprehensive legislation in the world to address the risks associated with the use of artificial intelligence. A final version of the AIA will next be the subject of trilogue negotiations between the European Commission, the European Council, and the European Parliament.

While numerous other countries, including the United States and China, have expedited their efforts to regulate the rapidly evolving world of artificial intelligence, the EU is the furthest down the road to implementing legislation. First proposed in 2021, the AIA borrows heavily from the privacy toolbox established by the GDPR to regulate artificial intelligence, with a heavy emphasis on transparency, consent mechanisms, data subject rights, and technical and organizational safeguards. At its core, the law takes a risk-based approach toward the use of artificial intelligence by requiring risk assessments and other controls for “high-risk” artificial intelligence.

The AIA also would ban certain technologies altogether. For example, one area of significant concern for EU regulators has been the use of facial recognition technologies, particularly the use of live facial recognition in public spaces, which the AIA would ban.

Notably, the most recent version of the AIA includes some provisions that address ChatGPT — a technology that was not commercially available when the law was first proposed. The latest version of the AIA would require companies to conspicuously label the outputs of ChatGPT and other forms of generative AI and to specifically disclose the inclusion of any copyrighted materials in training datasets, a requirement that has raised concerns among AI developers.

Passage of the AIA by the EU Parliament came more quickly than many commentators had expected, and suggests that the EU seeks to become a global leader in the regulation of artificial intelligence. In the same way that passage of the GDPR set the stage for worldwide promulgation of data privacy laws, including in the U.S., the AIA may become a standard for other countries to follow in regulating artificial intelligence.

We have previously done a podcast covering the AIA.  For more details on the AIA, and artificial intelligence generally, continue to follow this blog.

On May 28, Texas became the sixth state this year to pass a comprehensive data protection law.  Although the Texas Data Privacy and Security Act (“TDPSA”) is largely in line with the Virginia Consumer Data Protection Act and other recently passed state privacy laws, it has a few key distinctions that may cause headaches for larger businesses.  The TDPSA becomes effective July 1, 2024.

Applicability: The TDPSA eschews the revenue and volume criteria implemented by other comprehensive state privacy laws.  Instead, the TDPSA applies to any person that:

  • Conducts business in Texas or produces products or services consumed by Texas residents;
  • Processes or engages in the sale of personal data; and
  • Does not qualify as a “Small Business” as defined by the United States Small Business Administration (“SBA”).

This final prong is unique to the TDPSA. Whether a business qualifies as a small business may depend on its number of employees, average annual revenue, and industry. The SBA has provided guidance that “most manufacturing companies with 500 employees or fewer, and most non-manufacturing businesses with average annual receipts under $7.5 million, will qualify as a small business.” However, each business will have to review the relevant industry standards to determine applicability.

One important compliance point is that the carve-out for small businesses is not total. Companies are prohibited under the TDPSA from selling sensitive personal data without first receiving the relevant consumer’s consent, regardless of their size, revenue, or the volume of information they process; this prohibition applies even to companies that qualify as small businesses. Sensitive data means:

  • Personal data that reveals racial or ethnic origin, religious beliefs, a mental or physical health diagnosis, sexuality, or citizenship or immigration status;
  • Genetic or biometric data that is processed for the purpose of uniquely identifying an individual;
  • Personal data collected from a known child (under 13); or
  • Precise geolocation data.

The TDPSA specifically exempts financial institutions covered under the GLBA, covered entities and business associates covered under HIPAA, nonprofit organizations, and institutions of higher education. It also exempts data subject to the GLBA and HIPAA—a key distinction for potential processors under the law.

Data Subject Rights and Impact Assessments: The data subject rights and impact assessment requirements under the TDPSA are in line with those under Virginia law. Namely, the TDPSA provides consumers the rights to:

  • Access and portability;
  • Correction;
  • Deletion; and
  • Opt-out of:
    • The sale of personal information for monetary or other valuable consideration;
    • Targeted advertising; and
    • Profiling in furtherance of significant decision-making.

Additionally, like Virginia, the TDPSA requires that controllers conduct impact assessments prior to processing personal data in a manner that could pose a heightened risk of harm to consumers. This includes the sale of personal data and the processing of personal data for targeted advertising or profiling.

Enforcement: The TDPSA does not include a private right of action. Instead, it is enforced exclusively by the Texas Attorney General. The TDPSA does provide for a 30-day cure period, which, unlike the cure periods in some other state privacy laws, is not scheduled to sunset.

In sum, while the TDPSA is largely in line with its contemporaries, its novel applicability criteria are likely to cause compliance headaches.  Businesses will have to review relevant industry standards to determine the scope of their obligations under the law.

On April 24, the Governor of Kansas signed into law Kansas Senate Bill 44, which enacts the Financial Institutions Information Security Act (the “Act”). The Act requires credit services organizations, mortgage companies, supervised lenders, money transmitters, trust companies, and technology-enabled fiduciary financial institutions to comply with the requirements of the GLBA’s Safeguards Rule, as in effect on July 1, 2023. (16 C.F.R. § 314.1 et seq.). The only available exemption from the Act’s requirements is for entities that are directly regulated by a federal banking agency.

The Act requires covered entities in Kansas to create standards regarding the development, implementation, and maintenance of reasonable safeguards to protect the security, confidentiality, and integrity of customer information. For purposes of the Act, “customer information” is broadly defined as “any record containing nonpublic personal information about a customer of a covered entity, whether in paper, electronic or other form, that is handled or maintained by or on behalf of the covered entity or its affiliates.” However, the Act also requires that an entity’s customer information standards be consistent with, and made pursuant to, the GLBA’s Safeguards Rule.

The Safeguards Rule is a regulation stemming from the GLBA that requires non-banking financial institutions to develop, implement, and maintain a comprehensive security program to protect their customers’ information. New requirements under the Safeguards Rule are set to become effective on June 9, 2023; we previously covered them in greater detail on the CyberAdviser blog (see here and here). The Safeguards Rule lays out three main objectives for information security programs: (1) Insure the security and confidentiality of customer information; (2) Protect against any anticipated threats or hazards to the security or integrity of such information; and (3) Protect against unauthorized access to or use of such information that could result in substantial harm or inconvenience to any customer.

As of June 9, those objectives will require applicable companies to, in part: (1) Designate a qualified individual to oversee their information security program; (2) Develop a written risk assessment; (3) Limit and monitor who can access customer information; (4) Encrypt information in transit and at rest; (5) Train security personnel; (6) Develop a written incident response plan; and (7) Implement multifactor authentication whenever anyone accesses customer information. However, the Safeguards Rule does not fully apply to financial institutions that fit within certain exceptions or have primary regulators other than the FTC. Those entities in particular should assess whether the Act may require them to comply with the Safeguards Rule. And, whereas covered entities subject to the FTC’s Safeguards Rule have been working for months if not years to comply, the Kansas Act will require compliance within a matter of months.
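As a purely illustrative example of one control on this list – encryption of customer information at rest – the Python sketch below uses the Fernet interface from the third-party “cryptography” package. It is a minimal sketch under simplifying assumptions (a locally generated key and a toy record); in practice keys would live in a key-management system, and nothing here should be read as a statement of what the Safeguards Rule or the Kansas Act requires.

```python
# Illustrative only: symmetric encryption of a customer record at rest.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Assumption: in a real system the key would come from a key-management service.
key = Fernet.generate_key()
fernet = Fernet(key)

customer_record = b'{"name": "Jane Doe", "account": "12-3456"}'
ciphertext = fernet.encrypt(customer_record)   # what would be stored at rest
plaintext = fernet.decrypt(ciphertext)         # recoverable only with the key

assert plaintext == customer_record
print("stored ciphertext (truncated):", ciphertext[:32], b"...")
```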

Additionally, the Act requires covered entities to develop and organize their information security program “into one or more readily accessible parts” and to maintain that program in accordance with the books and records retention requirements of the covered entity. Lastly, the Act grants the Kansas Office of the State Bank Commissioner discretionary authority to issue regulations implementing the Act.

Following recent Senate testimony in which OpenAI CEO Sam Altman proposed additional Congressional oversight for the development of artificial intelligence (AI), Colorado Senator Michael Bennet has re-introduced the Digital Platform Commission Act, a bill that would enable the creation of a federal agency to oversee the use of AI by digital platforms. The proposed Federal Digital Platform Commission (FDPC) would have a broad mandate to “protect consumers from deceptive, unfair, unjust and unreasonable or abusive practices committed by digital platforms.”

Under the proposed bill, the Commission would have specific power to regulate the use of algorithmic systems by “systemically important digital platforms.” The bill delegates to the FDPC rulemaking authority to designate a platform as systemically important based on a number of factors, including whether the platform is available to the public and has “significant nationwide economic, social or political impacts”, the platform’s market power, its unique daily users, and “the dependence of business users of the platform to reach customers.” Digital platforms that qualify as systemically important could face new rules requiring fairness and transparency of AI processes, as well as risk assessments and third-party audits to assess harmful content and anti-competitive bias.

According to media reports, the proposed bill includes updated definitions that specifically address AI, and in particular generative AI. These changes include a revised definition of “algorithmic processes”, which now includes computational processes that use personal data to generate content or make decisions. Media reports also claim that the new bill would expand the definition of a digital platform to include companies that “offer content primarily generated by algorithmic processes.”

The proposed bill contains some of the hallmarks of other proposed AI regulation, such as the EU AI Act. Lawmakers worldwide appear to be focused on fairness and transparency of AI processes, safety and trust issues, and the potential for algorithmic bias. Lawmakers also appear to be coalescing around the idea of mandating third-party assessments for high-risk or systemically important AI.

One notable aspect of the Digital Platform Commission Act is its definition of AI, which does not provide exceptions for automated processes that include human decision-making authority, nor does it require that the automated processes have a legal or substantially similar effect. This approach differs from other laws that regulate AI, such as the EU’s General Data Protection Regulation (GDPR) and the Colorado Privacy Act, which are more limited in their definitions of the “profiling” or “automated processing” that triggers compliance obligations and which establish different obligations based on the level of human involvement. The scope of rulemaking for different kinds of AI is currently under consideration by the California Privacy Protection Agency, which has sought public comment on this question. How regulators address the threshold issue of what kind of AI triggers compliance obligations is a key question, with potentially significant impact.

Whether Congress moves forward on the Digital Platform Commission Act remains an open question. As with other proposed bills regulating AI, lawmakers appear wary of stifling technological innovation that is moving forward at a lightning pace. On the other hand, there appears to be some bipartisan recognition of the potential power and danger of wholly unregulated AI technologies, and an interest in creating a new executive agency with oversight responsibilities for AI.

On May 17, 2023, Montana Governor Greg Gianforte signed into law a bill banning the use of the popular app TikTok by the general public within the state. Absent court intervention, the ban takes effect on January 1, 2024. While users of the popular app, which is owned by Chinese company ByteDance, can breathe a little easier knowing they will not be liable for accessing the app, TikTok (and mobile app stores offering the app to users within the state) will be fined $10,000 for every day the platform operates on devices in Montana. It is unclear from the law’s current text exactly how the State intends to enforce removal of the app from Montana residents’ devices on which it has already been installed. Use of TikTok by law enforcement and for security research purposes is exempt from the statewide ban.

Governor Gianforte tweeted that the protection of Montana residents’ personal and private data was a reason for the ban and further called out the “Chinese Communist Party” using TikTok as a spy tool to violate Americans’ privacy. The bill’s text, authored by Montana Attorney General Austin Knudsen, called out similar concerns and noted the need to protect minors from the dangerous activities being promoted on the app, such as pouring hot wax on a user’s face, placing metal objects in electrical outlets, and taking excessive amounts of medication. According to the newly enacted law, if TikTok is acquired by a company “not incorporated in any other country designated as a foreign adversary,” the ban would be void.

The reaction by TikTok to the statewide ban has, unsurprisingly, been negative, with a spokesperson for the app questioning the constitutionality and the mechanics of enforcing the ban. The ACLU and tech trade groups have called the constitutionality of the ban into question, citing First Amendment rights and constitutionally protected speech. Keegan Medrano, policy director at the ACLU of Montana, has raised free speech concerns and Carl Szabo, Vice President and General Counsel at NetChoice, noted his disappointment in Governor Gianforte for signing a “plainly unconstitutional bill.” 

Montana is the first state to ban the app’s use by the general public; however, such bans are already in place on government devices and networks throughout the country. To date, the U.S. government and a number of states have enacted TikTok bans on government devices. It remains to be seen which states will follow suit and enact similar bans on use of the app by the general public on personal devices. What does appear clear is that, despite the current administration’s ongoing negotiations with ByteDance to resolve national security concerns, the question of whether and how TikTok protects user privacy and data security is far from resolved.

In a ruling published May 4, the U.S. District Court for the District of Idaho granted defendant data broker Kochava’s motion to dismiss a complaint filed by the Federal Trade Commission (“FTC”). In its complaint, the FTC alleged that Kochava’s sale of precise consumer geolocation data constituted an unfair act or practice in violation of Section 5 of the FTC Act. Despite dismissing the complaint, the Court was not convinced that its deficiencies could not be cured, and it therefore granted the FTC 30 days to amend.

In its ruling, the Court rejected a number of the defendant’s arguments. It found that the FTC had reason to believe that Kochava is violating, or is about to violate, the FTC Act and that the agency was not “only challenging past practices.” Next, it found that, contrary to the defendant’s claim, the FTC need not allege a predicate violation of law or policy to state a claim under Section 5(a). Finally, it found that the FTC was not obligated to allege that the defendant’s practices were immoral, unethical, oppressive, or unscrupulous.

Despite these findings, the Court held that the FTC failed to allege a sufficient likelihood of substantial consumer injury. On this point, the FTC put forth two theories of consumer injury. First, it argued that “a company could substantially injure consumers by selling their sensitive location information and thereby subjecting them to a significant risk of suffering concrete harms at the hands of third parties.” While the Court found this theory plausible, it found that the “FTC has not alleged that consumers are suffering or are likely to suffer such secondary harms.” The mere possibility of secondary harms was insufficient to establish standing. Second, the FTC alleged that the non-obvious tracking itself constituted a “substantial injury” under the Act. Although the Court recognized that an invasion of privacy alone can constitute such an injury, it found that the facts alleged did not support that conclusion in this case.

In a separate ruling on the same matter, the Court rejected the defendant’s attempt to dismiss the case under the Declaratory Judgment Act, describing the motion as “awkwardly” raising issues without identifying any relevant cause of action or adequate remedy at law.

The opinions demonstrate that the law surrounding data brokers and the collection and sale of tracking information is still very much in development. Any company that is considering sharing personal data—whether sensitive or not—should therefore ensure that it complies with any relevant disclosure and choice obligations, or risk being in the crosshairs of the next regulatory enforcement action.