On November 21, the Federal Trade Commission (“FTC”) approved, by a 3-0 vote, a resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that involve or claim to involve Artificial Intelligence (AI). 

Compulsory process is akin to a subpoena; it allows the FTC to request the production of information, documents, or testimony relevant to an investigation.  The FTC reports that the omnibus resolution will streamline FTC staff’s ability to issue civil investigative demands (CIDs), while retaining the Commission’s authority to determine when demands are issued.  The resolution will be in effect for 10 years.

While the resolution will have a clear impact on companies that develop AI, it will also affect all companies that offer products or services that involve or claim to involve AI.  Indeed, given the FTC’s prior warnings relating to misleading advertising about AI, it should be expected that the FTC will use compulsory process to investigate such advertising. 

In any event, the resolution should also be seen as a general indication that the FTC plans to focus on regulating AI and that it will seek the investigative tools it deems necessary.  Companies should therefore ensure that they have proper AI governance plans in place to assess and defend their practices. 

On November 27, 2023, the California Privacy Protection Agency (CPPA) published proposed Automated Decision-Making Rules to be discussed by the CPPA Board at its upcoming meeting on December 8, 2023.  While the proposed rules are far from final—indeed, they are not even official draft rules—they signal that the CPPA is considering rules that would have a significant impact on businesses subject to the California Consumer Privacy Act (CCPA).

The proposed rules define “automated decisionmaking technology” broadly as “any system, software, or process—including one derived from machine-learning, or other data-processing or artificial intelligence—that processes personal information and uses computation as a whole or part of a system to make or execute a decision or facilitate human decisionmaking.”  Automated decisionmaking technology includes, but is not limited to, “profiling,” defined to mean any form of automated processing of personal information to evaluate, predict or analyze a person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. 

The proposed rules require companies to provide pre-use notice, the ability to opt-out, and a right of access with respect to automated decisionmaking technologies in six specific scenarios:

  1. For decisions that produce legal or similarly significant effects concerning a consumer;
  2. Profiling a consumer in their capacity as an employee, independent contractor, job applicant, or student;
  3. Profiling a consumer while they are in a publicly accessible place;
  4. Profiling a consumer for behavioral advertising (listed as a discussion topic);
  5. Profiling a consumer that the business has actual knowledge is under the age of 16 (listed as an additional option for board discussion); and
  6. Processing personal information to train automated decisionmaking technology (listed as an additional option for board discussion).

The application to employees will be particularly important, as the only other state rules on automated decision-making (Colorado’s) do not apply in the employment context.  Further, the proposed rules make clear that profiling employees would include the use of keystroke loggers, productivity or attention monitors, video or audio recording or live-streaming, facial- or speech-recognition or -detection, automated emotion assessment, location trackers, speed trackers, and web-browsing, mobile-application, or social-media monitoring tools.  In other words, the proposed rules would significantly affect common technologies used in the employment context, many of which may not currently be configured to support opt-outs or the required disclosures.

With respect to the right of access, companies would have to disclose not only that automated decision-making technology is used and how its decisions affect the individual, but also provide details on the system’s logic and possible range of outcomes, as well as how human decision-making influences the final outcome.  These requirements will be difficult to meet in practice and, as with the Colorado Privacy Act’s regulations and the CPPA’s proposed regulations on risk assessments, should already be shaping the nature and amount of information that companies require from vendors that leverage AI. 

As noted above, the proposed rules have a long way to go.  Additionally, various exceptions are incorporated into the rules that may mitigate the operational burden in some contexts.  However, the proposed rules will almost certainly result in expanded regulatory obligations for subject companies over what they currently face.  While compliance efforts may be premature, companies should start assessing whether they could, if necessary, comply with the proposed rules from an operational standpoint. 

On November 16, the Federal Communications Commission (“FCC”) and Federal Trade Commission (“FTC”) announced new, independent initiatives regarding the use and implications of AI technologies for consumers in the context of telephone and voice communications. Learn more about these initiatives on our sister blog, the Consumer Finance Monitor.

The Colorado Department of Law (“DoL”) has published a shortlist of potential universal opt-out mechanisms (“UOOMs”).  Beginning on July 1, 2024, companies will be required to allow consumers to opt out of the sale of their personal data, or the use of their personal data for targeted advertising, through any UOOM ultimately included in the final list.

The shortlist includes only three UOOMs:  OptOut Code, the Global Privacy Control (“GPC”), and Opt-Out Machine.  The inclusion of these three UOOMs likely signals the DoL’s desire to cover the spectrum of contexts in which sales or targeted advertising typically arise.  Specifically, the GPC essentially operates as a do-not-track signal for website browsing, OptOut Code states that its UOOM function applies across a multitude of Internet of Things scenarios, and the Opt-Out Machine is email-based for more traditional data sales and data broker activities.  Notably, the GPC is already a requirement under the CCPA, which should be welcome news for businesses hoping to leverage their existing privacy programs for CPA compliance.
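
For context on how the GPC signal works mechanically, the published GPC specification has participating browsers attach a “Sec-GPC: 1” request header (and expose a corresponding JavaScript property) to convey the consumer’s opt-out preference. The short sketch below is purely illustrative—the function and variable names are hypothetical and not drawn from any regulation or official reference implementation—and simply shows how a web application might detect that header.

```python
# Illustrative sketch only: detecting the Global Privacy Control (GPC)
# signal, which participating browsers send as the "Sec-GPC: 1" request
# header. Function and variable names are hypothetical.

def gpc_opt_out_requested(request_headers: dict) -> bool:
    """Return True if the incoming request carries a GPC opt-out signal."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value.strip() for name, value in request_headers.items()}
    return normalized.get("sec-gpc") == "1"


if __name__ == "__main__":
    example_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}
    if gpc_opt_out_requested(example_headers):
        # A covered business would treat this as an opt-out of the sale of
        # personal data and of processing for targeted advertising.
        print("GPC signal present: suppress sale / targeted-advertising processing")
```

In a real deployment this check would feed into a consent-management platform and ad-tech configuration rather than a simple print statement, but the detection step itself is this straightforward.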

The inclusion of these three UOOMs on the shortlist represents a creative approach by the DoL to address a practical reality—although the Colorado Privacy Act calls for a mechanism that universally opts a consumer out of all sales and targeted ads, that technology does not fully exist because targeted advertising cookies are usually not associated with the personally identifying information used for traditional data sales.  By selecting a list that addresses these three different scenarios, the DoL is creating a structure that allows a consumer to universally opt out, even if it requires the use of three UOOMs rather than one.  Additionally, by including only one UOOM for each scenario, the DoL is adhering to its practical-minded approach and not overloading businesses with numerous UOOMs.

Businesses and consumers now have until December 11, 2023, to submit additional feedback and comments on the shortlist, and the DoL must publish the final list by January 1, 2024. 

On October 27, the Federal Trade Commission (“FTC”) unanimously voted to amend the Safeguards Rule to require non-banking financial institutions, such as mortgage brokers, motor vehicle dealers, and payday lenders, to report data breaches and security events to the Agency. This amendment will become effective 180 days after its publication in the Federal Register.

Under the amended rule, financial institutions subject to the authority of the FTC will be required to notify the Agency as soon as possible, and no later than 30 days after discovery, of a “Notification Event” impacting 500 or more consumers. A Notification Event is defined as any acquisition of unencrypted customer information without the authorization of the data subject. Information is presumed unencrypted if the relevant encryption key was accessed by an unauthorized person.

Importantly, unauthorized access to customer information will be presumed to constitute unauthorized acquisition unless there is “reliable evidence showing that there has not been, or could not reasonably have been, unauthorized acquisition.” This presumption is likely to expand the number of security incidents that qualify as Notification Events and cuts directly against the ‘risk of harm’ exemption present in many state data breach notification laws. 

Notice to the Agency will be provided through the FTC’s website. After review by the Agency, notices will be made publicly available through an online database. Notice to the FTC must include:

  • The name and contact information of the reporting entity;
  • A description of the types of information impacted;
  • The date or date range of the event, if it can be determined;
  • The number of consumers impacted;
  • A general description of the event; and
  • Whether any law enforcement has requested a delay of public notification.

We will continue to monitor this amendment as it develops. To learn more, the FTC’s announcement is available here and the final rule is available here.

On October 19, 2023, the Consumer Financial Protection Bureau (“CFPB”) released a proposed rule that, if enacted, would grant consumers greater access rights to the data their financial institutions hold. Under the proposed Personal Financial Data Rights Rule (the “Proposed Rule”), bank customers nationwide would have privacy rights similar to those afforded under the dozen state privacy laws enacted in recent years.  Comments on the Proposed Rule are due on or before December 29, 2023, and the Proposed Rule would likely go into effect in the fall of 2024.

The Proposed Rule would provide consumers the right to request information related to their account transactions, balances, and third-party bill payments from their financial institutions. Consumers may also request the information used to initiate ACH transactions, information regarding the terms and conditions of financial products and services (such as applicable fee schedules, APRs, and rewards program terms), and basic account verification information. Notably, the access right excludes information that would constitute confidential business information. Information concerning mortgages, auto loans, and student loans is similarly out of scope of the Proposed Rule, although the CFPB has indicated its intent to broaden the Proposed Rule’s coverage in future rulemaking. Financial institutions must make covered data available in a readily usable electronic form to the consumer and, if applicable, a third party authorized by the consumer (such as a competing financial institution) upon request at no cost.

The Proposed Rule signals the first step toward the implementation of the “open banking” regulations initially required under the 2010 Dodd-Frank Act and is the first proposal to implement Section 1033 of the Consumer Financial Protection Act (the “CFPA”). Under the CFPA, the CFPB was tasked with implementing personal financial data sharing standards and protections (refer to Ballard Spahr’s Consumer Finance Monitor articles and podcast regarding Section 1033 rulemaking here, here, here, and here). CFPB Director Rohit Chopra noted that, by facilitating access to consumers’ personal information and removing certain bureaucratic hurdles currently involved in switching to a new financial services provider, the Proposed Rule “will help accelerate the shift” to a more decentralized financial market structure while guarding consumers’ personal data against abuse and misuse.

Although there will be operational costs, the CFPB believes that the Proposed Rule would actually foster competition by benefitting smaller financial institutions, fintechs, and startups.  For example, the ability to transfer consumer transaction history with greater ease, speed, and efficiency may cut down on the administrative costs that smaller industry players and startups face when onboarding new customers. According to CFPB Director Chopra, “jumpstarting competition in banking and consumer finance” will incentivize companies to provide better customer service and products. Future rulemaking that widens the scope of the Proposed Rule to include other types of covered data will, as CFPB Director Chopra pointed out, “continue to foster more competition and consumer choice throughout the market.”

The Proposed Rule would also prohibit companies from using consumer account data for purposes other than providing the requested services and products. Financial institutions would therefore be prohibited from using the subject data for targeted advertising and marketing purposes or to sell to data brokers. Upon termination of the customer relationship, financial institutions would be required to delete subject data in their possession (subject to applicable law and retention requirements). Screen scraping—a form of data collection that requires the use of consumers’ log-in credentials—would also be prohibited.

Not surprisingly, financial institutions’ reactions have been mixed. American Bankers Association President and CEO Rob Nichols issued a statement that commended the Proposed Rule for advancing the banking industry’s and the CFPB’s common goal of “enhancing consumers’ access to their financial data and allowing them to share it safely with companies of their own choosing,” while also expressing concerns over the scope of the Proposed Rule, whether it adequately addresses liability, and implementation costs. Nichols also questioned whether the CFPB’s parallel efforts to amend the Fair Credit Reporting Act create ambiguity under the Proposed Rule.

Consumer Bankers Association (CBA) President and CEO Lindsey Johnson released a statement noting that the CBA “looks forward to working with” the CFPB to develop a final Section 1033 rule that fosters access to consumers’ own personal financial data and provides uniform protection of such data across banks and non-banks. Similarly, Paige Pidano Paridon, senior associate general counsel for the Bank Policy Institute, a nonpartisan public policy, research and advocacy group representing US banks, issued a statement on the Proposed Rule calling for the CFPB to “prioritize data security in its rulemaking process, put an end to unsafe practices like screen scraping, and require fintechs to adhere to the same data privacy and security standards that already apply to banks.”

While the Proposed Rule attempts to establish a framework that makes access to third-party financial products and services easier and further regulates the collection of consumer financial data, whether it will accomplish all of the goals outlined in Director Chopra’s statement remains to be seen.

Those who wish to comment on the proposed rule have until December 29, 2023 to do so. The CFPB has indicated its intent to finalize the rule by the fall of 2024. Compliance dates will vary depending on the asset size and type of financial institution.

The California Privacy Protection Agency (CPPA) recently published two new sets of draft regulations addressing a range of cutting-edge data protection issues. Although the Agency has not officially started the formal rulemaking process, the Draft Cybersecurity Audit Regulations and the Draft Risk Assessment Regulations will serve as the foundation for the process moving forward. Discussion of the draft regulations will be a central topic of the Agency’s upcoming September 8th meeting.

Among the noteworthy aspects of the draft regulations are (1) a proposed definition of “artificial intelligence” that differentiates the technology from automated decision-making; (2) transparency obligations for companies that train AI to be used by consumers or other businesses; and (3) a significant list of potential harms to be considered by businesses when conducting risk assessments.  

The Draft Cybersecurity Audit Regulations make both modifications and additions to the existing California Consumer Privacy Act (“CCPA”) regulations. At a high level, the draft regulations: 

  • Outline the requirement for annual cybersecurity audits for businesses “whose processing of consumers’ personal information presents significant risk to consumers’ security”;
  • Outline potential standards used to determine when processing poses a “significant risk”;
  • Propose options specifying the scope and requirements of cybersecurity audits; and
  • Propose new mandatory contractual terms for inclusion in Service Provider data protection agreements.

Similarly, the Draft Risk Assessment Regulations propose both modifications and additions to the existing CCPA regulations. The draft regulations:

  • Propose new and distinct definitions for Artificial Intelligence and Automated Decision-making technologies;
  • Identify specific processing activities that present a “significant” risk of harm to consumers, requiring a risk assessment. These activities include:
    • Selling or sharing personal information;
    • Processing sensitive personal information (outside of the traditional employment context);
    • Using automated decision-making technologies;
    • Processing the information of children under the age of 16;
    • Using technology to monitor the activity of employees, contractors, job applicants, or students; or
    • Processing personal information of consumers in publicly accessible places using technology to monitor behavior, location, movements, or actions.
  • Propose standards for stakeholder involvement in risk assessments;
  • Propose risk assessment content and review requirements;
  • Require that businesses that train AI for use by consumers or other businesses conduct a risk assessment and include with the software a plain statement of the appropriate uses of the AI; and
  • Outline new disclosure requirements for businesses that implement automated decision-making technologies.

Anyone who would like to submit comments or learn more about attending the CPPA’s September 8 meeting should click here.  We will continue to provide updates on these draft regulations as they become available. 

California continues to be at the vanguard of data privacy rights.  The latest effort by California legislators to protect consumer privacy focuses on data brokers, who, under proposed California Senate Bill 362, aka the “Delete Act,” would be required to recognize and honor opt-out signals from Californians.  The law seeks to expand on the deletion and opt-out rights provided under the CCPA, which currently requires Californians to submit their deletion and opt-out requests on a company-by-company basis. The Delete Act seeks to change this by implementing a single opt-out request that would apply to all data brokers, associated service providers, and contractors. The Delete Act would essentially create a California “do not sell” list for data brokers, akin to a do-not-call list in the telemarketing context.

Application of the Delete Act

The Delete Act would apply to “data brokers,” which the Act defines as a “business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship.” Importantly, the definition exempts those entities covered by the FCRA, GLBA, or the Insurance Information and Privacy Protection Act. Non-exempt data brokers would be required to register with the California Privacy Protection Agency (the “CPPA”), pay a registration fee, and provide the CPPA with detailed information, including whether or not the data broker collects personal information of minors, precise geolocation data of consumers, or reproductive health care data.

Prior to January 1, 2026, the Delete Act would require the CPPA to establish an accessible deletion mechanism that does all of the following:

  1. Allows a consumer to request – through a single verifiable method – that every data broker delete any personal information related to that consumer held by the data broker, associated service provider, or contractor;
  2. Allows a consumer to selectively exclude specific data brokers from a request;
  3. Allows a consumer to make a request to undo or alter a previous request made, after at least 31 days have passed since the consumer’s last request under the Act; and
  4. Implements and maintains reasonable security procedures and practices.

Additionally, the Delete Act would require the deletion mechanism to, in part:

  1. Allow a consumer to request the deletion of all personal information related to that consumer through a single deletion request, without a fee;
  2. Permit a consumer to securely submit information in one or more privacy-protecting ways determined by the California Privacy Protection Agency to aid in the deletion request;
  3. Allow data brokers registered with the California Privacy Protection Agency to determine whether an individual has submitted a verifiable consumer request to delete the personal information related to that consumer, without allowing the disclosure of any additional personal information when the data broker accesses the accessible deletion mechanism;
  4. Allow a consumer to make a request in any language spoken by any consumer;
  5. Support the ability of a consumer’s authorized agents to aid in the deletion request; and
  6. Allow the consumer, or their authorized agent, to verify the status of the consumer’s deletion request.

Once the deletion mechanism is in place, data brokers would be required to begin complying with deletion requests on August 1, 2026, by accessing the mechanism at least once every 31 days. Unless the personal information is reasonably necessary to fulfill a purpose described under the CCPA’s Right to Delete exemptions (See Section 1798.105(d)), the data broker would be required to process the deletion request, direct all service providers or contractors associated with the data broker to also process the request, and send an affirmative representation to the CPPA indicating the number of records deleted by the data broker, service providers, and contractors.

After processing a deletion request, the data broker is prohibited from selling or sharing new personal information of the consumer and must continually delete all of the consumer’s personal data at least once every 31 days, unless the consumer requests otherwise.

Enforcement & Reporting

While the draft of the Delete Act that passed the Senate was enforceable by both the Attorney General and the California Privacy Protection Agency, the Assembly has since struck the enforcement provisions tied to the Attorney General. As the draft currently stands, the CPPA retains sole enforcement authority and may issue administrative fines of $200 per day for a data broker’s failure to register, plus an additional $200 per day for each deletion request a data broker fails to properly comply with.  However, the Act imposes a five-year statute of limitations on administrative actions. The Act would not provide for a private right of action.

In addition to the enforcement provisions, the Delete Act would require that data brokers compile annual reports containing:

  1. The number of deletion requests received under the Act;
  2. The number of deletion requests that were complied with and the number that were denied;
  3. The number of deletion requests deemed to be unverifiable, to have not been made by a consumer, or which called for the deletion of exempt information; and
  4. The median and the mean number of days it took the data broker to substantively respond to a request.

The above metrics must be disclosed on the data broker’s website, along with a link to their privacy policy, by January 31 of each year. The Act also forbids the use of dark patterns on the data broker’s website.

Beginning on January 1, 2028, and every three years thereafter, data brokers must also undergo an audit by an independent third party to determine compliance with the Act. While this audit will not be automatically submitted to the CPPA, a data broker must be able to provide the CPPA a copy within five days of a request from the agency. However, starting in 2029, a data broker would have to annually provide the CPPA with the last date that an audit occurred.

Status of the Delete Act

The Delete Act was passed by the California State Senate on May 31, and then unanimously passed out of the Assembly’s Committee on Privacy and Consumer Protection in June.  The bill is currently referred to the Assembly’s Committee on Appropriations. On August 16, the Delete Act was placed on the Assembly’s “suspense file” calendar. Suspense file bills are considered at a single hearing – without public comment or attendance – where the Committee on Appropriations compares the anticipated costs of a bill against the state’s available revenue. There is currently no public date set for next steps on the Delete Act.

Privacy advocates and data brokers will be carefully monitoring the progress of this proposed law, which goes further than any U.S. law to date in regulating the data broker industry.  If passed, the law – in combination with already existing consumer opt-out rights and Apple’s App Store requirement that consumers opt in to app tracking – will further challenge the ad tech industry’s business model. 

After an extensive comment period, the SEC announced on July 26 that it was formally adopting new rules governing cybersecurity disclosures by public companies. The rules had generated significant backlash from public companies, which criticized the new reporting deadlines for data security incidents as well as the cyber-risk disclosures the rules mandate.

Adoption of the new cybersecurity rules will create immediate compliance challenges for public companies. For companies whose fiscal year closes on December 31, 2023, the new cyber-risk disclosures will be mandatory for their upcoming annual report filings. The new breach reporting deadlines are likely to trigger a wave of scrutiny for public companies that suffer material security incidents, making it essential for public companies to carefully consider both the content of the risk disclosures as well as the maturity of their information security programs.

What the New SEC Cybersecurity Rules Require

The SEC Cybersecurity Rules strive to enhance and standardize disclosures regarding cybersecurity incidents, risk management, strategy, and governance. Public companies subject to the reporting requirements of the Securities Exchange Act of 1934 will be subject to new disclosure requirements regarding (1) cybersecurity incidents, and (2) cybersecurity risk management, strategy, and governance. The rules also significantly expand cyber compliance obligations for registered investment advisers (RIAs), investment companies and broker-dealers.

Public Companies

Breach Reporting

Beginning with the incident disclosure requirements, the rule amends Form 8-K to require disclosure of material cybersecurity incidents within four business days of determining that a material incident has occurred. The definition of “materiality” has not been changed in the new rule and continues to follow prior SEC guidance in this area. The rule also adds new items to Regulation S-K and Form 20-F that require public companies to provide updated disclosures relating to previously disclosed cybersecurity incidents. Further, these additions will require disclosure when a series of previously undisclosed and individually immaterial incidents becomes material in the aggregate. Finally, the rule amends Form 6-K to add cybersecurity incidents as a reporting topic.

The four-business-day reporting deadline proved the most controversial of the new requirements, generating significant pushback during the comment period. Many commenters questioned whether such a short deadline would impair ongoing FBI investigations and force companies to make rushed, incomplete public disclosures that would only open them up to further second-guessing and potential liability.

Cyber Risk Management

The new rules create a swath of new reporting requirements regarding cybersecurity risk management, strategy, and governance. Specifically, the amendments to Regulation S-K and Form 20-F require a registrant to describe its policies and procedures, if any, for the identification and management of risks from cybersecurity threats. This includes disclosure of whether the company considers cybersecurity as part of its business strategy, financial planning, and capital allocation, and how management implements cybersecurity policies, procedures, and strategies. The SEC Rule also requires disclosure concerning whether the company has a chief information security officer (CISO) as well as policies and procedures targeted to identify and manage cyber risk.

RIAs, Investment Companies and Broker-Dealers

The new SEC rules also impose significant new compliance requirements on RIAs, investment companies and broker-dealers. More specifically, the new rules:

  • Require RIAs and investment companies to adopt and implement written policies and procedures reasonably tailored to address cybersecurity risks, and to engage in periodic risk assessments, security monitoring, and vulnerability management;
  • Require RIAs to report “significant cybersecurity incidents” to the SEC within 48 hours of discovery, including incidents related to the adviser or registered funds or private funds managed by the adviser. Unlike reporting by public companies, these reports would be deemed confidential;
  • Require broker-dealers, RIAs, and investment companies to implement written policies and procedures for incident response programs, including requiring covered institutions to provide notice within 30 days to affected individuals whose sensitive customer information was accessed or used without authorization.

Timing for Compliance With New Rules

The new cybersecurity rules will become effective 30 days following the publication of the adopting release in the Federal Register.

Incident Reporting

Companies must begin reporting material cybersecurity incidents on Form 8-K or Form 6-K on the later of 90 days after the publication of the final rules in the Federal Register or December 18, 2023. Smaller reporting companies have an additional 180 days and must begin reporting incidents on the later of 270 days after the date of publication or June 15, 2024. If a company is unsure whether it will qualify as a smaller reporting company, the best practice is to assume that the compliance date for companies other than smaller reporting companies applies.

Once the new rules come into effect, any cybersecurity incident a company deems material must be disclosed on new Item 1.05 of Form 8-K within four business days after determining the incident is material — rather than the date the company discovers the incident. The SEC has clarified that the materiality determination must be made “without unreasonable delay” following discovery. A company may delay disclosure for up to 30 days if the U.S. Attorney General notifies the Commission that immediate disclosure may pose a significant risk to public safety or national security, with an additional 60-day delay available for extraordinary circumstances.
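
To illustrate the deadline arithmetic, the sketch below is a purely illustrative estimate of the four-business-day Form 8-K window measured from the materiality determination. It assumes weekends are the only non-business days; federal holidays, which also do not count, are omitted for simplicity, and the function name and dates are hypothetical.

```python
# Illustrative sketch only: estimating the Form 8-K filing deadline of
# four business days after a materiality determination. Weekends are the
# only non-business days considered here; federal holidays would also
# extend the deadline but are omitted for simplicity.

from datetime import date, timedelta

def form_8k_deadline(materiality_determination: date, business_days: int = 4) -> date:
    """Return the date falling the given number of business days later."""
    current = materiality_determination
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return current

if __name__ == "__main__":
    # Hypothetical example: materiality determined on a Thursday.
    determined = date(2024, 1, 4)
    print("Form 8-K due by:", form_8k_deadline(determined))  # 2024-01-10 (Wednesday)
```

As the example shows, a determination made late in the week can push the filing into the middle of the following week, which is why disclosure controls need to route incident escalations to the disclosure committee without delay.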

Annual Reporting

Companies must annually disclose their cybersecurity risk management, strategy, and governance on Form 10-K or Form 20-F, starting with annual reports for fiscal years ending on or after December 15, 2023. This effective date means companies with calendar-year fiscal years will be among the first to comply with these new disclosure requirements.

Analysis and Recommendations

The newly adopted rules aim to provide more transparency to investors by standardizing disclosure requirements concerning a company’s cybersecurity incidents, risk management, strategy, and governance. Many companies will need to undertake a significant effort in the coming months to elevate cybersecurity from an operational issue to a board-level issue. To ensure compliance, boards should carefully consider their cybersecurity risk procedures and establish strategies for meeting the annual disclosure requirements and reporting material incidents within four business days.

Given the short turnaround period – particularly for companies filing annual reports for a calendar-year fiscal year – boards must act quickly to implement new disclosure controls and ensure proper disclosure. Companies must home in on their cybersecurity risk management and governance processes as auditors expand their internal control analysis to cover the new disclosure rules this fall. Public companies must be particularly mindful to develop additional disclosure controls to ensure timely and accurate reporting of the new disclosures relating to cybersecurity risk management, strategy, governance, and cybersecurity incidents.

Beyond the accelerated reporting requirements, the SEC’s new cybersecurity procedures pose numerous challenges for public companies, including enhanced regulatory scrutiny, SEC enforcement actions for non-compliance, and shareholder and customer lawsuits. Publicly disclosing incidents can lead to reputational damage and vulnerability to bad actors obtaining potentially sensitive information about a company’s cybersecurity procedures.

We recommend companies work closely with legal counsel experienced in cybersecurity matters and SEC disclosure to implement board cybersecurity training, develop internal reporting mechanisms, assess the materiality of incidents and ensure compliance with the new disclosure rules.

Llama? Vicuña? Alpaca? You might be asking yourself, “what do these camelids have to do with licensing LLM artificial intelligence?” The answer is, “a lot.”

LLaMa, Vicuña, and Alpaca are the names of three recently developed large language models (LLMs). LLMs are a type of artificial intelligence (AI) that uses deep learning techniques and large data sets to understand, summarize, generate, and predict content (e.g., text). These and other LLMs are the brains behind the generative chatbots showing up in our daily lives, grabbing headlines, and sparking debate about generative artificial intelligence. The LLaMa model was developed by Meta (the parent company of Facebook). Vicuña is the result of a collaboration between UC Berkeley, Stanford University, UC San Diego, and Carnegie Mellon University. And Alpaca was developed by a team at Stanford. LLaMa was released in February 2023; Alpaca was released on March 13, 2023; and Vicuña was released roughly two weeks later, on March 30, 2023.

LLMs like these are powerful tools and present attractive opportunities for businesses and researchers alike. Potential applications of LLMs are virtually limitless, but typical examples are customer service interfaces, content generation (both literary and visual), content editing, and text summarization.

While powerful, these tools present risks. Different models have distinct technical strengths and weaknesses. For example, the team that developed Vicuña recognizes “it is not good at tasks involving reasoning or mathematics, and it may have limitations in accurately identifying itself or ensuring the factual accuracy of its outputs.” Thus, Vicuña might not be the best choice for a virtual math tutor. Moreover, at the architecture level, recurrent neural networks (RNNs), long a mainstay of language modeling, are well-suited for modeling sequential data but suffer from something called the “vanishing gradient problem” (i.e., as more layers using certain activation functions are added to a neural network, the gradients of the loss function approach zero, making the network hard to train). Meanwhile, transformers (the “T” in GPT), the architecture underlying most of today’s LLMs, handle long-range dependencies well, which helps with translation-style tasks, but are limited in their ability to perform complex compositional reasoning.
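
To make the vanishing gradient point concrete, the toy sketch below is a purely illustrative numerical example (not drawn from any particular model): the backpropagated gradient through a long chain of sigmoid activations is a product of per-layer derivatives, each at most 0.25, so the overall gradient factor collapses toward zero as the chain grows.

```python
# Illustrative sketch only: a toy numerical picture of the "vanishing
# gradient problem." In a long chain of sigmoid activations, the
# backpropagated gradient is a product of per-layer derivatives, each of
# which is at most 0.25, so the product shrinks rapidly with depth.
import math

def sigmoid_derivative(x: float) -> float:
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)  # peaks at 0.25 when x = 0

gradient_factor = 1.0
for layer in range(1, 31):
    gradient_factor *= sigmoid_derivative(0.0)  # best case: 0.25 per layer
    if layer in (1, 10, 20, 30):
        print(f"after {layer:2d} layers, gradient factor ≈ {gradient_factor:.2e}")

# Even in this best case, 30 layers leave a factor of roughly 8.7e-19,
# which is why very deep, or very long recurrent, chains are hard to train.
```

This is the intuition behind the architectural trade-off described above: transformers sidestep much of this problem for long-range dependencies, which is one reason they now dominate, while RNN-style chains struggle as sequences and depth grow.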

Beyond understanding such technical differences, businesses must understand that using these tools may create legal liabilities. Decision makers must understand the differences in the terms of use (including licensing terms) under which various LLMs (and/or associated chatbots) are made available. For example, the terms of use of GPT-3 (by OpenAI), LaMDA (by Google), and LLaMa are all different. Some terms may overlap or be similar, but the organizations developing the models may have different objectives or motives and therefore may place different restrictions on the use of the models.

For example, Meta believes that “[b]y sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating [] problems in large language models,” and thus Meta released LLaMa “under a noncommercial license focused on research use cases,” where “[a]ccess to the model will be granted on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.” Thus, generally speaking, LLaMa is available for non-commercial purposes (e.g., research). Similarly, Vicuña, which is a fine-tuned LLaMa model trained on approximately 70,000 user-shared conversations from ChatGPT, is also available for non-commercial uses. On the other hand, OpenAI’s GPT terms of service tell users “you can use Content (e.g., the inputs of users and outputs generated by the system) for any purpose, including commercial purposes such as sale or publication…” Meanwhile, the terms of use of Google’s Bard (which relies on the LaMDA model developed by Google), as laid out in the “Generative AI Additional Terms of Service,” make it plain that users “may not use the Services to develop machine learning models or related technology.” As is standard in the industry, any misuse of the service gives the LLM’s owner and operator the right to terminate the user’s access, and likely creates exposure to civil liability under contract law and other related theories.

The waters are muddied further when these large corporations begin sharing access to their LLMs with one another. There are indications that Meta is opening up access to its LLaMa model beyond the world of academia as reports surface about partnerships with Amazon and Microsoft. For example, Meta’s LLaMa large language model is now available to Microsoft Azure users.

Thus, in selecting LLMs for various purposes, users must weigh the technical advantages and drawbacks of the different models (e.g., network architecture, algorithm weights and biases, performance parameters, computing budget, and the actual data on which the model was trained) against the legal liabilities that may arise from using them. Critically, before investing significant time or resources into a product or service that makes use of an LLM, business leaders must review the terms associated with the model to fully understand the scope of legally permissible use and take steps to ensure compliance with those terms so as to avoid liability.