On February 21, California Attorney General (AG) Rob Bonta announced a settlement with DoorDash for violations of the California Consumer Privacy Act (CCPA) and the California Online Privacy Protection Act (CalOPPA) relating to its participation in a marketing co-operative.  This action represents only the second public CCPA enforcement action since the law went into effect in 2020.

According to the complaint and settlement, DoorDash participated in a marketing co-operative in which unrelated businesses contribute their customers’ personal information so that each participant can advertise its own products to the other participants’ customers.  According to AG Bonta, this exchange of personal information for DoorDash’s benefit was a “sale” under the CCPA.  As a sale, California law required DoorDash to provide notice of the sale as well as the opportunity to opt out.  AG Bonta alleged that DoorDash failed to provide either the necessary notice or the opt-out rights.

While participation in such a marketing co-operative is largely recognized as a sale under the CCPA at this point, the enforcement action is notable for a couple of reasons.  First, the complaint takes positions that arguably require disclosures in privacy policies that go beyond the plain language of the regulations.  So, even companies that feel confident that they comply with the regulations would be wise to reassess their policies in light of the allegations.

Second, the conduct at issue occurred in 2020 and 2021.  While the complaint notes that DoorDash did not cure when provided with a notice of violation in 2020, it also suggests that a cure may not have been possible, because curing would mean making affected consumers whole by restoring them to the same position they would have been in if their data had never been sold.  Additionally, AG Bonta states in his press release that “The CCPA has been in effect for over four years now, and businesses must comply with this important law.  Violations cannot be cured, and my office will hold businesses accountable if they sell data without protecting consumers’ rights.”

There are many lessons to learn from this action, but perhaps the most important is that businesses should prepare for what may be an increasingly aggressive enforcement policy without the opportunity to cure.  To do so, businesses should not only assess where they have gaps and how they can close those gaps, but also what can be done to best position for any arguments about past non-compliance.  

On February 9, 2024, California’s Third District Court of Appeal reinstated the California Privacy Protection Agency’s (“CPPA”) ability to enforce the California Privacy Rights Act of 2020 (“CPRA”) regulations. The CPRA regulations aim to enhance consumer privacy rights and protections in an increasingly digital age.

The court of appeal’s decision comes after the California Chamber of Commerce filed a lawsuit in 2023 challenging the CPPA’s authority to enforce the CPRA regulations, citing government overreach, conflicts with existing law, and the imposition of unnecessary burdens on businesses. That lawsuit resulted in the trial court imposing a 12-month delay on enforcement. Holding that the trial court erred in imposing the one-year delay, the court of appeal reaffirmed the CPPA’s role in overseeing compliance with the state’s privacy laws, noting that no “explicit and forceful” language mandates that the CPPA wait one year from the date the final regulations were approved to begin enforcement. It remains to be seen whether the California Chamber of Commerce will seek a rehearing or review.

This development is significant for both consumers and businesses. Consumers will continue to have significant rights (with the backing of the CPPA) related to their personal information. For businesses operating and doing business in California, the stay on CPPA enforcement activities that was once a possibility is no longer on the table; the February 9 decision serves as an important reminder to covered businesses to ensure their privacy practices comply with the CPRA regulations.

As consumer privacy rights continue to expand in an increasingly digital environment and data privacy remains an important issue, it is essential for covered businesses to stay informed and adhere to the CPRA regulations.

On Thursday, February 8, the Federal Communications Commission (FCC) finalized its plan to ban robocalls that feature voices generated by artificial intelligence, aiming to stem the tide of AI-generated scams and misinformation campaigns.  The FCC’s declaratory ruling formalized its position that the Telephone Consumer Protection Act (TCPA)—specifically, the provision prohibiting the initiation of calls using an “artificial or prerecorded voice” to deliver a message without the prior express consent of the called party—applies to the use of AI-generated voices.  Hence, just as the TCPA requires businesses to obtain prior express written consent from consumers before robocalling them, businesses must now obtain consent for automated telemarketing calls using AI-generated voices.  Businesses seeking to deploy AI in their marketing calls using automated dialing systems should therefore consider reviewing and, if necessary, updating applicable disclosures and consents to account for the FCC’s new ruling and limit potential liability under the TCPA.

On February 1, 2024, the Connecticut Office of the Attorney General (“OAG”) submitted to the Connecticut General Assembly its report on the first six months of the Connecticut Data Privacy Act (“CTDPA”).  While the report includes important information about its enforcement efforts to date, the most noteworthy aspect may be its recommendation to the legislature to remove various exemptions from the CTDPA. 

The report notes that the OAG received more than thirty consumer complaints in the first six months of the CTDPA, which went into effect on July 1, 2023, many of which involved consumers’ attempts to exercise their new rights.  The OAG noted, however, that around one-third of the complaints involved data or entities that were exempt under the CTDPA.

With respect to enforcement, the report provides summaries of four different areas:  privacy policies, sensitive data, teens’ data, and data brokers.  The report notes different types of enforcement activities for each, but two are worth highlighting.  First, the report notes that the OAG is actively reviewing companies’ privacy policies to assess compliance, which has resulted in ten cure notices on the topic.  Clearly, companies subject to the CTDPA should ensure that their public-facing documents are at least facially sufficient.

Second, the report notes that the OAG has sent a cure notice to “a popular car brand” based on reports that its connected vehicles may be collecting sensitive personal data.  This focus on sensitive data is in line with what we have seen from other regulators, such as Colorado.  But it also demonstrates that public reports on privacy issues can direct regulators to focus on specific industries.

Finally, the OAG makes several legislative recommendations.  One such recommendation is to scale back entity-level exemptions, specifically the non-profit, GLBA, and HIPAA exemptions.  The OAG also recommends adding a right to know specific third parties with which controllers share personal data, similar to the Oregon law that goes into effect later this year. 

Overall, the OAG’s report shows that regulators across states are taking generally similar approaches to enforcement, which appears to include a component of looking at companies’ privacy policies and opt-out mechanisms as an initial check on compliance.  Businesses should expect more of the same, and they would be wise to update accordingly. 


In this month’s webcast, “Financial Services 2024 Privacy and Cybersecurity Preview,” Greg Szewczyk and Sarah Dannecker give an overview of how the privacy and cybersecurity landscape is evolving in the financial sector.  From more specific data security reporting requirements to potential data subject rights to the use of artificial intelligence, the members of Ballard Spahr’s PDS group will highlight key developments and provide practical insights on how to stay ahead of the regulatory, litigation, and hacker threats.

You are the HIPAA privacy official of a hospital or health plan (a covered entity under HIPAA). You receive an email from a vendor that handles protected health information (a business associate), informing you that one month ago an unauthorized actor infiltrated its information systems. The intruder may have gained access to information about your organization. The vendor learned about the incident two weeks ago and immediately shut off that access, implemented patches to its systems to prevent further intrusion, and launched a forensic analysis to determine the customers and individuals affected by the incident and the nature of the information that was accessed. The vendor does not know how long that will take, but expects it will be months.

How do you respond to this news in view of HIPAA’s requirement to provide timely notice of the breach to affected individuals?

Timing Requirements for Business Associates

HIPAA requires a business associate to notify a covered entity of a breach without unreasonable delay, but within 60 days of the date the business associate discovers the breach. 45 CFR 164.410(b). Discovery occurs not only when the business associate actually learns of the breach, but when it would have learned of the breach had it exercised reasonable diligence. 45 CFR 164.410(a)(2). Note, however, that some business associates will contractually agree to notify the covered entity sooner than the maximum deadline required by HIPAA; see “The Business Associate Agreement” below for more information on how your business associate agreement may impact this deadline.

The preamble to the HIPAA regulations recognizes that a business associate may not know all of the information that it is required to disclose to the covered entity when it learns of the breach, but states that “a business associate should not delay its initial notification to the covered entity of the breach in order to collect information needed for the notification to the individual.” 78 Fed. Reg. 5566, 5656. The business associate is to supplement its initial notice at a later time, even if it provides the additional information after the 60-day period has elapsed and after notice has already been provided to affected individuals.

But neither the regulations nor the preamble address the situation where the missing information involves who is affected. Until that information is known, neither the business associate nor the covered entity knows who needs to be notified. 

Timing Requirements for Covered Entities

A covered entity must follow a timeframe similar to the one that applies to business associates. It is required to notify affected individuals without unreasonable delay, but within 60 days of discovering the breach. 45 CFR 164.404(b). In this case, your discovery occurs when the business associate informs you of the breach.

The Dilemma

Assuming that it has worked with appropriate diligence, the business associate may have met its initial notification obligations under HIPAA, but it places you in a situation where you need to make important decisions. HIPAA sets an outside limit of 60 days for you to notify affected individuals, but you probably will not know who has been affected by the breach at that time.

Do you promptly provide a general notice to all who may have been affected by the breach, even if the number actually affected was small or the information accessed relatively trivial? That notice may cause undue anxiety and trigger many phone calls and questions that neither you nor your vendor can answer. Whether an individual was affected or not, that individual may seek reassurance and may expect immediate protection, such as credit monitoring, but your vendor may be prepared to pay for that protection only for those who were substantially affected.

On the other hand, if you wait for further information, some individuals will remain unaware that their information has been exposed, delaying the time when they could take their own protective measures.

Mitigation

The HIPAA regulations impose a duty on covered entities to “mitigate, to the extent practicable, any harmful effect that is known to the covered entity” of a breach. 45 CFR 164.530(f). In that regard, the covered entity may consider what is known and what is practical in this situation.

Based on the limited information provided by your business associate in this example, you do not know that any particular individual has been affected by the breach until you receive more information. And you do not know the sensitivity of the information that may have been accessed. The disclosure of a Social Security Number, for example, is of much greater concern than the disclosure of an address or birthdate. You should continue to reach out to your vendor to learn what you can. Even if the investigation has not determined all of the individuals affected, it may have narrowed the affected group or determined that Social Security Numbers and credit card information were or were not revealed.

You do know that a suspicious actor was involved, which is a cause for concern and a factor that would favor an earlier, if incomplete, notice.

The Business Associate Agreement

Your business associate agreement with the vendor may play a large role in how you address the breach. Although HIPAA makes a covered entity responsible for notifying affected individuals, the business associate agreement may contractually place that obligation on the business associate. This is especially true when the business associate has a direct relationship with individuals affected by the breach.

The business associate agreement may also shorten timeframes for notification, sometimes requiring notice within a few days or a period measured by 24-hour increments. While your vendor may be meeting the standard for notification set forth in the HIPAA regulations, it may have failed to meet its obligations under the business associate agreement. Because the clock for providing notice to affected individuals starts with the notice that you receive from your vendor, an earlier notice requirement in the business associate agreement could advance the date that you or the business associate have to notify affected individuals.
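The interaction of these clocks can be sketched with simple date arithmetic. The following is a minimal illustration using hypothetical dates and a hypothetical five-day contractual notice window; it shows how an earlier BAA notice requirement can advance the date by which affected individuals must be notified, but it is not legal advice on how any particular deadline applies.

```python
from datetime import date, timedelta

# Hypothetical dates for illustration only.
ba_discovery = date(2024, 1, 1)   # business associate discovers the breach
ce_notice = date(2024, 1, 15)     # business associate actually notifies the covered entity

# HIPAA outer limits: 60 days from each party's discovery.
# 45 CFR 164.410(b) (business associate); 45 CFR 164.404(b) (covered entity).
ba_regulatory_deadline = ba_discovery + timedelta(days=60)
ce_regulatory_deadline = ce_notice + timedelta(days=60)

# A business associate agreement may shorten the first clock -- here,
# a hypothetical 5-day contractual notice window.
baa_window_days = 5
baa_deadline = ba_discovery + timedelta(days=baa_window_days)

# If the BAA had been honored, the covered entity's 60-day clock would
# have started on the earlier contractual notice date, advancing the
# date by which individuals must be notified.
ce_deadline_if_baa_honored = baa_deadline + timedelta(days=60)

print(ba_regulatory_deadline)      # 2024-03-01
print(ce_regulatory_deadline)      # 2024-03-15
print(baa_deadline)                # 2024-01-06
print(ce_deadline_if_baa_honored)  # 2024-03-06
```

As the sketch shows, a vendor that waits the full regulatory period while ignoring a shorter contractual window can push individual notification weeks later than the parties bargained for.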

The business associate agreement may have other implications, for example, setting forth the business associate’s duty to mitigate harm or to serve as the contact person for a breach. It may establish whether the business associate or covered entity has responsibility to determine whether an incident actually constitutes a HIPAA breach.

Additional Complications

The responsibilities of the parties can be complicated by a number of additional factors. For example:

  • If the business associate is the agent of the covered entity, the covered entity will be deemed to have discovered the breach when the business associate discovers the breach. It will not have an additional 60 days to provide notice.
  • If the business associate has engaged a subcontractor, and the subcontractor is responsible for the breach, there is another layer of responsibility, notifications, and potential delays added to the mix.
  • If the breach is large enough, HIPAA will require the covered entity to notify the Department of Health and Human Services (HHS) and the media when it notifies affected individuals. If the breach is small, no notice to the media is required, and notice to HHS may be logged and submitted in an annual report of all the year’s breaches within 60 days after the end of the year.
  • State laws may also require breach notification.
  • If law enforcement is involved and believes that notice will impede its investigation into the incident, the business associate or covered entity may need to delay providing notice. This delay may serve to provide the business associate more time to identify the affected individuals.

Conclusion

No one likes to deliver bad news, and there is a natural and practical reluctance to cause unnecessary worry if you don’t know whether someone has been affected. Those practicalities need to be weighed against the risks of the harm that may come from a delay and of a potential failure to meet applicable HIPAA rules and business associate agreement obligations. In situations like this, you should carefully review your business associate agreement to assess where responsibility lies and make sure that your decisions are based on up-to-date information, so you can make reasoned decisions. As with so many actions under HIPAA, you should document the basis for your actions.

On November 14, 2023, the Colorado Division of Insurance’s AI insurance regulations went into effect.  Colorado is now the first state in the nation to adopt regulations specifically aimed at insurance algorithms.

Colorado’s regulation requires life insurance companies to report how they review AI models and use External Consumer Data and Information Sources (ECDIS), which include nontraditional data—such as social media posts, shopping habits, Internet of Things data, biometric data, and occupation information—that does not have a direct relationship to mortality.  Life insurance companies are also required to develop a governance and risk management framework that includes thirteen specific components.

While this regulation is specific to life insurance, Colorado has also proposed adopting it for auto insurers, which has met resistance from auto insurance trade groups.  The deadline for comments on adopting the framework for the auto insurance industry is December 1.

Although Colorado is the first state to adopt such regulations, several other regulators—including those in California, Connecticut, New York, and Washington, D.C.—have issued warnings or notices requiring carriers to demonstrate that their models and data aren’t discriminatory, and New Jersey has introduced similar legislation.  Subject entities should therefore pay close attention to the changing regulatory landscape.

On November 21, the Federal Trade Commission (“FTC”) approved in a 3-0 vote a resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that involve or claim to involve Artificial Intelligence (AI). 

Compulsory process is akin to a subpoena, and it allows the FTC to request the production of information, documents, or testimony relevant to an investigation.  The FTC reports that the omnibus resolution will streamline FTC staff’s ability to issue civil investigative demands (CIDs), while retaining the Commission’s authority to determine when demands are issued.  The resolution will be in effect for 10 years.

While the resolution will have a clear impact on companies that develop AI, it will also have an impact on all companies that offer products or services that involve or claim to involve AI.  Indeed, given the FTC’s prior warnings about misleading advertising of AI practices, it should be expected that the FTC will use compulsory process to investigate such claims.

In any event, the resolution should also be seen as a general indication that the FTC plans to focus on regulating AI, and it will seek the investigative tools it deems necessary.  Companies should therefore ensure that they have the proper AI governance plans in place to assess and defend their practices. 

On November 27, 2023, the California Privacy Protection Agency (CPPA) published proposed Automated Decision-Making Rules to be discussed by the CPPA board at its upcoming meeting on December 8, 2023.  While the proposed rules are far from final—indeed, they are not even official draft rules—they signal that the CPPA is considering rules that would have significant impact on businesses subject to the California Consumer Privacy Act (CCPA).

The proposed rules define “automated decisionmaking technology” broadly as “any system, software, or process—including one derived from machine-learning, or other data-processing or artificial intelligence—that processes personal information and uses computation as a whole or part of a system to make or execute a decision or facilitate human decisionmaking.”  Automated decisionmaking technology includes, but is not limited to, “profiling,” defined to mean any form of automated processing of personal information to evaluate, predict or analyze a person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. 

The proposed rules require companies to provide pre-use notice, the ability to opt-out, and a right of access with respect to automated decisionmaking technologies in six specific scenarios:

  1. For decisions that produce legal or similarly significant effects concerning a consumer;
  2. Profiling a consumer in their capacity as an employee, independent contractor, job applicant, or student;
  3. Profiling a consumer while they are in a publicly accessible place;
  4. Profiling a consumer for behavioral advertising (listed as a discussion topic);
  5. Profiling a consumer that the business has actual knowledge is under the age of 16 (listed as an additional option for board discussion); and
  6. Processing personal information to train automated decisionmaking technology (listed as an additional option for board discussion).

The application to employees will be particularly important, as the only other rules on automated decision-making (Colorado’s) do not apply in the employment context.  Further, the proposed rules make clear that profiling employees would include keystroke loggers, productivity or attention monitors, video or audio recording or live-streaming, facial- or speech-recognition or -detection, automated emotion assessment, location trackers, speed trackers, and web-browsing, mobile-application, or social-media monitoring tools.  In other words, the proposed rules would have big impacts on common technologies used in the employment context—which may not currently be configured in a way where opt-outs or information could be easily shared.

With respect to the right of access, companies would have to disclose not only that automated decision-making technology is used and how its decisions affect the individual, but also details on the system’s logic and possible range of outcomes, as well as how human decision-making influences the final outcome.  These requirements will be difficult in practice and, as with the Colorado Privacy Act regulations and the CPPA’s proposed regulations on risk assessments, should already be influencing the nature and amount of information that companies require from vendors that leverage AI.

As noted above, the proposed rules have a long way to go.  Additionally, various exceptions are incorporated into the rules that may mitigate the operational burden in some contexts.  However, the proposed rules will almost certainly result in expanded regulatory obligations for subject companies over what they currently face.  While compliance efforts may be premature, companies should start assessing whether they could, if necessary, comply with the proposed rules from an operational standpoint. 

On November 16th, the Federal Communications Commission (“FCC”) and Federal Trade Commission (“FTC”) announced new independent initiatives regarding the use and implications of AI technologies on consumers in the context of telephone and voice communications. Learn more about these initiatives on our sister blog, the Consumer Finance Monitor.