On November 15, 2022, the FTC announced that it was extending by six months the deadline for companies to comply with some portions of the updated Safeguards Rule. The extension comes as a welcome relief to companies racing to meet the rapidly nearing effective date.

The FTC approved changes to the longstanding Safeguards Rule in October 2021.  The updated rule includes several components that could require significant operational modifications, such as encryption at rest and multifactor authentication whenever nonpublic personal information is accessed.  While some components went into effect 30 days after publication, the most substantive changes were set to go into effect on December 9, 2022. 

The FTC voted unanimously to extend that December 9 date to June 9, 2023.  Accordingly, subject companies will have an additional six months to:

  • Designate a qualified individual to oversee their information security program;
  • Develop a written risk assessment;
  • Limit and monitor who can access customer information;
  • Encrypt information in transit and at rest;
  • Train security personnel;
  • Develop a written incident response plan; and
  • Implement multifactor authentication whenever anyone accesses customer information.

While the new deadline certainly provides breathing room, companies should not take it as an opportunity to delay.  Indeed, between the holidays and state law compliance initiatives, the new deadline will arrive quickly. 

On November 9, 2022, New York Department of Financial Services (NYDFS) Superintendent Adrienne Harris announced that the NYDFS formally proposed an updated cybersecurity regulation.  Although the updates had previously been released in draft form, the formal announcement commences the 60-day comment period. 

The proposed regulations would create three different tiers of companies based on their size, operations, and the nature of their businesses.  The compliance obligations of those different tiers would vary, but in general, the proposed regulations would:

  • Enhance governance requirements in an attempt to increase accountability for cybersecurity at the Board and C-Suite levels;   
  • Require controls to prevent initial unauthorized access to technology systems and to prevent or mitigate the spread of an attack;   
  • Require more regular risk and vulnerability assessments, as well as more robust incident response, business continuity and disaster recovery planning; and   
  • Direct companies to invest in regular training and cybersecurity awareness programs that are relevant to their business model and personnel.   

Companies now have 60 days to submit comments on these proposed regulations, after which NYDFS will either propose a revised version or adopt the final regulation.  However, even though the final regulation may still change, companies should be assessing their current compliance regimes – including whether their policies are properly documented in a fashion likely to satisfy regulators. 

As we discussed in a recent webcast, there has been a surge in litigation focused on companies’ use of Meta Pixel, which is tracking code that enables the sharing of user online activity with Facebook.  Recent litigation has alleged that use of Meta Pixel with online videos violates the Video Privacy Protection Act (VPPA).  An even more recent variant of Meta Pixel litigation alleges that use of Meta Pixel constitutes wiretapping under federal and state wiretapping laws.  These recent cases have focused on hospitals that have enabled Meta Pixel for patient portals, allegedly allowing Meta to eavesdrop on and/or wiretap private doctor-patient communications in violation of HIPAA, state healthcare privacy laws, and a host of other state privacy laws – in addition to wiretapping laws.
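
For context, the Pixel is a small piece of JavaScript that a site operator embeds on its pages; once loaded, it reports visitor activity back to Meta through calls like those sketched below.  This is only an illustrative sketch of the publicly documented call pattern – the pixel ID and the custom event here are hypothetical placeholders, not taken from any complaint:

```typescript
// Illustrative sketch of the Meta Pixel's client-side call pattern.
// The loader script defines a global `fbq` function; we declare it here
// so the sketch is self-contained.  Pixel ID and event details are
// hypothetical placeholders.
declare function fbq(command: string, ...args: unknown[]): void;

fbq('init', '0000000000000000'); // hypothetical pixel ID
fbq('track', 'PageView');        // standard event fired on each page load

// Operators can also report richer, page-specific activity as events;
// this kind of transmission is what the VPPA and wiretap suits focus on:
fbq('trackCustom', 'VideoWatched', { videoTitle: 'Example Title' });
```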

A recently filed case, Stewart v. Advocate Aurora Health, Inc. (N.D. Ill.), is a good example of this recent trend in Meta Pixel litigation.  The case alleges that Advocate configured Meta Pixel to track patient communications through its patient portal, allowing Meta to capture the medical information of Advocate patients in real time.  The complaint alleges that Advocate recently disclosed its sharing of data through Meta Pixel under applicable data breach notification laws, which Plaintiff alleges is evidence that Advocate and Meta had not obtained patient consent to the alleged interception/eavesdropping of communications.

The lawsuit asserts claims against Advocate and Meta under the Electronic Communications Privacy Act (which provides liquidated damages of up to $10,000 per person for wiretap violations) and the Stored Communications Act (SCA), as well as Illinois state law claims for breach of contract, breach of implied duty of confidentiality, and invasion of privacy.  There have been at least half a dozen similar class action lawsuits filed in the past two months against hospitals and Meta under similar theories of liability, and it is likely that plaintiffs’ attorneys will file more in the coming months.  Indeed, a recent study by The Markup asserts that one third of the 100 largest hospitals in the U.S. share patient PHI with Meta through their websites. 

This recent variant of Meta Pixel litigation combines allegations about the usage of Meta Pixel, prevalent in recent VPPA cases, with recent favorable rulings in the Third and Ninth Circuits finding that tools like Meta Pixel intercept communications under state wiretap laws.  The litigation against Advocate raises a number of thorny legal issues, such as whether Meta’s terms of service immunize Meta from liability for a website operator’s configuration of Meta Pixel, the extent to which patient consent can be inferred from website disclosures, and the availability of liquidated damages in the absence of actual damages under the SCA.  In the short term, however, it is likely that plaintiffs’ lawyers will continue to pursue wiretap claims against hospitals allowing Meta Pixel to run on patient portals.  Hospitals would be wise to review their usage of tools like Meta Pixel and to consider changing their privacy policies and/or obtaining formal patient consent. 

On October 20, 2022, Texas Attorney General Ken Paxton brought suit in Texas district court against Google for alleged violations of the Texas Capture or Use of Biometric Identifier Act (“CUBI”).  The petition claims that Google violated CUBI by collecting, analyzing, and storing the facial geometry of individuals who appear in photos that have been uploaded to the Google Photos app, and additionally, that Google violates CUBI via its Nest platform by recording and forming voiceprints of individuals.  While Illinois’ BIPA garners the headlines due to its private right of action, the Texas action is a reminder that biometric data implicates laws nationwide.

CUBI (Tex. Bus. & Com. Code § 503) restricts the collection and use of “biometric identifiers,” defined as a “retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.”  Under the Act, it is unlawful to “capture” an individual’s biometric identifier for a commercial purpose unless the individual is informed and consents to having such information collected before capture.  Further, CUBI mandates that those who are in possession of individuals’ biometric identifiers must not disclose the information without consent and must destroy such information “within a reasonable time, but not later than the first anniversary of the date the purpose for collecting the [biometric] identifier expires.”  Each violation is potentially subject to a civil penalty of up to $25,000.

Texas alleges that Google, since at least 2015, has violated CUBI by collecting photos and biometric data of individuals’ faces through a feature in Google Photos that it calls “Face Grouping.”  Face Grouping employs facial-recognition technology that allegedly detects an individual’s face and creates a record (or “face template”) for that specific face.  Texas states that Google evaluates whether the faces detected in each new photo or video uploaded are similar to face templates Google has previously recorded from other photos and videos.  Google then groups together any photos and videos depicting similar faces—known as “face groups”—based, in part, on the similarity of face geometry.  Texas further argues that a face template is created for any minor child or any passerby who happens to be in the background of the photo.
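
To make the alleged mechanics concrete, a face-grouping pipeline of the kind the petition describes can be sketched as comparing numeric “face templates” and clustering sufficiently similar ones.  The sketch below is a simplified illustration under those assumptions – the template format, similarity measure, and threshold are hypothetical, not drawn from the petition:

```typescript
// Illustrative sketch of grouping faces by template similarity.
// Real systems use learned face embeddings; this is a toy model.
type FaceTemplate = number[]; // numeric vector describing face geometry

function cosineSimilarity(a: FaceTemplate, b: FaceTemplate): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Assign each newly detected face to an existing group if it is close
// enough to that group's first template; otherwise start a new group.
function assignToGroup(
  face: FaceTemplate,
  groups: FaceTemplate[][],
  threshold = 0.8, // hypothetical similarity cutoff
): void {
  for (const group of groups) {
    if (cosineSimilarity(face, group[0]) >= threshold) {
      group.push(face);
      return;
    }
  }
  groups.push([face]); // no match: a new "face group"
}
```

The petition’s bystander allegation maps onto this sketch: any detected face – consenting user or not – would receive a template and be assigned to a group.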

With respect to Nest, Texas argues that the platform violates CUBI via its “Voice Match” feature.  Specifically, Texas alleges that “Google uses each voice inflection, mumble, stutter, accent, pattern, and whisper to identify speakers and, through machine learning, to improve Google’s products and service.”  The petition states that the Nest platform accomplishes this by “listening to and analyzing every voice it hears, without regard to whether a speaker has consented to Google’s indiscriminate voice printing,” and then storing those voiceprints indefinitely.  Further, Texas claims that Google subcontractors “may have reviewed millions of recordings made by unsuspecting individuals speaking near a device with Google Assistant.  In other words, Google had human beings listen to the most intimate conversations about everything that people discuss in the safety of their own home including sex, religion, politics, and health.”

This suit follows a prior case brought by Texas against Meta earlier this year, and a $100 million settlement in a class action lawsuit against Google LLC for alleged violations of the Illinois Biometric Information Privacy Act.

While it may be tempting to dismiss the Texas action as focused solely on big tech, it should serve as an alarm bell to companies that regulators are seriously focused on biometric identifiers.  Accordingly, companies should carefully review their technology—especially security cameras, timekeeping procedures, and headsets or smartphone functionalities—to ensure that they are not unintentionally stepping into the crosshairs. 

The “Highlights” — To Russia, With Crypto

The Financial Crimes Enforcement Network (“FinCEN”) issued on November 1 a Financial Trend Analysis regarding ransomware-related Bank Secrecy Act (“BSA”) filings during the second half of 2021 (the “Report”).  This publication follows up on a similar ransomware trend analysis issued by FinCEN regarding the first half of 2021, on which we blogged here.  

In the most recent analysis, FinCEN found that both the number of ransomware-related Suspicious Activity Reports (“SARs”) filed and the dollar amounts at issue nearly tripled from 2020 to 2021.  The notable takeaways from the Report include:

  • Ransomware-related SARs were the highest ever in 2021 (both in number of SARs and in dollar amounts of activity reported).
  • Ransomware-related SARs reported amounts totaling almost $1.2 billion in 2021.
  • Approximately 75% of ransomware-related incidents between June 2021 and December 2021 were connected to Russia-related ransomware variants.

The Report, which stated that the majority of these ransomware payments were made in Bitcoin, serves as a particular reminder to cryptocurrency exchanges of their role in both identifying and reporting ransomware-related transactions facilitated through their platforms.  The Report stresses that SAR filings play an essential role in helping FinCEN identify ransomware trends.

Ransomware Trends and SAR Data

Ransomware is malicious software that encrypts a victim’s files and holds the data hostage until a ransom is paid, generally in cryptocurrency like Bitcoin.  Over the past two years, FinCEN has noted a shift in ransomware strategy from high-volume, opportunistic attacks to more selective ransomware attacks, targeting larger enterprises and bigger payouts.  This included an increase in “double extortion” tactics, in which ransomware operators not only hold the victim’s data hostage, but also threaten to publish the stolen data if ransom demands are not met.  FinCEN also noted that the ransomware “business model” has expanded to include Ransomware-as-a-Service (“RaaS”), in which ransomware creators sell user-friendly ransomware kits on the dark web in exchange for a percentage of the ransom.

FinCEN observed a staggering increase in the number and monetary amount of ransomware-related SAR filings in 2021.  In 2020, 487 ransomware-related SARs were filed, totaling nearly $416 million.  In 2021, 1,489 ransomware-related SARs were filed, totaling nearly $1.2 billion.  On average, there were 132 ransomware-related incidents per month in the second half of 2021.  This increase in filings may have resulted from FinCEN’s and Treasury’s Office of Foreign Assets Control’s (“OFAC”) Fall 2021 advisories promoting reporting of ransomware-related incidents (here, here, and here).  As we have blogged, OFAC has indicated that it may impose civil penalties for sanctions violations resulting from ransomware payments based on strict liability – i.e., a company can be held liable even if it did not know or have reason to know that it was engaging in a transaction that was prohibited by OFAC – although OFAC states that it applies a self-imposed presumption of non-enforcement, which it still may disregard in any particular case.

To highlight the upward trend in ransomware-related SARs, FinCEN provided two graphs in the Report (Figures 1 and 2) charting the increase in filings and the increase in dollar value, respectively.

According to FinCEN, the “[f]iling date data is slightly higher than incident date data in Figures 1 and 2 because filing date data can include ransomware events that occurred outside the timeframe covered by the report.  Filing date reflects both detection and compliance, where incident date reflects the actual date of payments or demanded payments associated with ransomware events.”

Although the above figures are attention-grabbing, they of course represent only incidents reported in filed SARs – which raises the question of how many incidents are not being identified and reported.

Uptick in Russia-Related Ransomware Variants

FinCEN reported that in the second half of 2021 alone, roughly 75% of ransomware-related SARs, and 69% of ransomware incident value, were connected to Russia-related ransomware variants.  Although it is difficult to attribute malware, these variants were identified as using Russian-language code, as being specifically coded not to attack Russia or post-Soviet states, or as advertising primarily on Russian-language sites.  Combined, the top five Russia-related ransomware variants were connected to 376 ransomware incidents, totaling $219.5 million.

Role of Cryptocurrency Exchanges in Facilitating Ransomware Payments

In its November 8, 2021 Advisory on Ransomware and the Use of the Financial System to Facilitate Ransomware Payments, FinCEN outlined the typical flow of funds in a ransomware attack, highlighting the role that financial institutions, including money services businesses (“MSB”), play in facilitating these ransom payments.  Most ransomware payments involve a victim transmitting funds via wire transfer, ACH transfer, or credit card payment to a convertible virtual currency (“CVC”) exchange, in order to purchase the type and amount of CVC specified by the perpetrator.  The victim then sends the CVC, often from a virtual wallet hosted by the cryptocurrency exchange, directly to the perpetrator’s designated account or CVC address.  The perpetrator then launders the funds to convert them into other CVCs.  Cyber insurance companies (“CIC”) and digital forensic incident response companies (“DFIR”) may also play a role in ransomware transactions.  CICs may reimburse victim policyholders for remediation services, including hiring DFIRs to negotiate with cybercriminals and facilitate payments.  
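
The flow FinCEN outlines can be summarized in a short data-model sketch.  This is merely a restatement of the prose above in code form – the party names and step notes are informal labels, not FinCEN terminology:

```typescript
// Simplified model of the typical ransomware flow of funds described above.
type Party = 'Victim' | 'CVC Exchange' | 'Perpetrator' | 'Laundering Service';

interface FlowStep {
  from: Party;
  to: Party;
  asset: 'fiat' | 'CVC';
  note: string;
}

const typicalFlow: FlowStep[] = [
  { from: 'Victim', to: 'CVC Exchange', asset: 'fiat',
    note: 'wire, ACH, or credit card payment to purchase the demanded CVC' },
  { from: 'CVC Exchange', to: 'Perpetrator', asset: 'CVC',
    note: 'sent from a hosted wallet to the designated CVC address' },
  { from: 'Perpetrator', to: 'Laundering Service', asset: 'CVC',
    note: 'converted into other CVCs to obscure the trail' },
];
```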

While ransom payments are most commonly requested in Bitcoin, cybercriminals are increasingly incentivizing victims to pay in Anonymity-Enhanced Cryptocurrencies (“AEC”), such as monero, in order to reduce transparency of financial flows through anonymizing features.  Monero recently received a specific “shout out” in the FinCEN enforcement action against Bittrex, which described monero as including “features that prevent tracking by using advanced programming to purposefully insert false information into every transaction on its private blockchain.”

In its November 8, 2021 Advisory on ransomware, FinCEN also provided a flowchart highlighting the typical movement of CVC in ransomware attacks.

Ransomware incidents may also trigger OFAC-related restrictions if payments involve sanctioned persons or jurisdictions.  In October 2021, OFAC issued a 28-page sanctions compliance guide for the virtual currency industry, explaining reporting instructions, consequences for non-compliance, and best practices.

Detection, Mitigation, and Reporting

Ransomware continues to pose a significant threat to the U.S. critical infrastructure sectors, businesses, and the public.  Financial institutions and MSBs dealing in CVCs play an important role in protecting the U.S. financial system from these types of attacks, through compliance with BSA and OFAC obligations.  To detect and mitigate ransomware attacks, FinCEN recommends:

  • Incorporating indicators of compromise (“IOC”) from threat data sources into intrusion detection and security alert systems to enable blocking and reporting (see the sketch after this list).
  • Contacting law enforcement immediately upon identifying ransomware-related activity, and contacting OFAC where the ransom involves sanctioned payments.
  • Reporting suspicious activity to FinCEN by highlighting the presence of “Cyber Event Indicators,” and including IOCs like suspicious email addresses, file names, hashes, domains, and IP addresses on the SAR form.
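
As an illustration of the first recommendation above, the sketch below matches log events against an in-memory indicator set.  All indicator values are placeholders (drawn from documentation ranges), and the field names are hypothetical:

```typescript
// Minimal sketch of matching log events against an IOC feed.
const iocs = new Set<string>([
  'malicious-domain.example',          // hypothetical domain indicator
  '198.51.100.23',                     // documentation-range IP address
  'e3b0c44298fc1c149afbf4c8996fb924',  // placeholder file hash
]);

interface LogEvent {
  sourceIp: string;
  domain?: string;
  fileHash?: string;
}

// Returns true when any field of the event matches a known indicator,
// enabling the blocking-and-reporting workflow FinCEN recommends.
function flagsIoc(event: LogEvent): boolean {
  return iocs.has(event.sourceIp)
    || (event.domain !== undefined && iocs.has(event.domain))
    || (event.fileHash !== undefined && iocs.has(event.fileHash));
}
```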

FinCEN also reminds financial institutions to review the potential “red flag” financial indicators of ransomware in FinCEN’s November 8, 2021 Advisory (a toy screening sketch follows the list), which are:

  • A financial institution or its customer detects IT enterprise activity that is connected to ransomware cyber indicators or known cyber threat actors. Malicious cyber activity may be evident in system log files, network traffic, or file information.
  • When opening a new account or during other interactions with the financial institution, a customer provides information that a payment is in response to a ransomware incident.
  • A customer’s CVC address, or an address with which a customer conducts transactions, is connected to ransomware variants, payments, or related activity. These connections may appear in open sources or commercial or government analyses.
  • An irregular transaction occurs between an organization, especially an organization from a sector at high risk for targeting by ransomware (e.g., government, financial, educational, healthcare), and a DFIR or CIC, especially one known to facilitate ransomware payments.
  • A DFIR or CIC customer receives funds from a counterparty and shortly after receipt of funds sends equivalent amounts to a CVC exchange.
  • A customer shows limited knowledge of CVC during onboarding or via other interactions with the financial institution, yet inquires about or purchases CVC (particularly in a large amount or on a rush basis), which may indicate the customer is a victim of ransomware.
  • A customer that has no or limited history of CVC transactions sends a large CVC transaction, particularly when outside a company’s normal business practices.
  • A customer that has not identified itself to the CVC exchanger, or registered with FinCEN as a money transmitter, appears to be using the liquidity provided by the exchange to execute large numbers of offsetting transactions between various CVCs, which may indicate that the customer is acting as an unregistered MSB.
  • A customer uses a foreign-located CVC exchanger in a high-risk jurisdiction lacking, or known to have inadequate, AML/CFT regulations for CVC entities.
  • A customer receives CVC from an external wallet, and immediately initiates multiple, rapid trades among multiple CVCs, especially AECs, with no apparent related purpose, followed by a transaction off the platform. This may be indicative of attempts to break the chain of custody on the respective blockchains or further obfuscate the transaction.
  • A customer initiates a transfer of funds involving a mixing service.
  • A customer uses an encrypted network (e.g., the onion router) or an unidentified web portal to communicate with the recipient of the CVC transaction.
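
By way of a toy screening sketch, two of these indicators – the address-linkage flag and the limited-history/large-transaction flag – could be expressed as simple rules.  The thresholds and field names below are hypothetical, not drawn from the Advisory:

```typescript
// Toy rule-based red-flag screen, loosely modeled on two of the
// FinCEN indicators above.  Thresholds and fields are hypothetical.
interface CvcTransaction {
  customerTenureDays: number;            // length of customer's CVC history
  amountUsd: number;
  counterpartyLinkedToRansomware: boolean; // address appears in analyses
}

function redFlags(tx: CvcTransaction): string[] {
  const flags: string[] = [];
  if (tx.counterpartyLinkedToRansomware) {
    flags.push('counterparty address linked to ransomware activity');
  }
  if (tx.customerTenureDays < 30 && tx.amountUsd > 100_000) {
    flags.push('large CVC transaction from customer with little CVC history');
  }
  return flags; // a non-empty result would prompt escalation and SAR review
}
```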

In a recent enforcement action against online alcohol delivery service Drizly and its CEO, James Rellas, the Federal Trade Commission (FTC) made clear its focus on data minimization and limitations on the secondary uses of data. Although the action arose out of a common security failure—the sort that has been the subject of numerous prior FTC consent decrees—the enforcement requirements extend beyond the standard implementation of an information security program. Indeed, the FTC’s order focuses on data minimization principles—a potential harbinger of how existing data security laws and new privacy laws may be converging.  It therefore emphasizes the need for businesses to harmonize the roles and responsibilities of data privacy and security professionals, which are connected but frequently siloed.

In its Complaint, the FTC alleged that both Drizly and its CEO were aware of security issues exposed during a prior data security incident as early as 2018.  It further alleged that Drizly’s failure to take adequate steps to address its known security vulnerabilities resulted in a second hack involving the theft of customer data. Specifically, the FTC alleged that Drizly and its CEO:

  • Failed to implement basic security measures, including two-factor authentication, role-based access provisioning, written security policies and procedures, and employee training;
  • Stored critical database information on an unsecured platform, storing login credentials on GitHub contrary to the platform’s guidance and despite “well-publicized security incidents involving GitHub”; and
  • Neglected to monitor network security threats, failing to put a senior executive in charge of data security and failing to monitor its network for unauthorized access attempts.

To address these deficiencies, the FTC’s proposed order requires Drizly and its CEO to:

  • Destroy unnecessary data, including any personal data “that is not necessary for [Drizly] to provide products or services to consumers,” which must be both documented and reported to the FTC;
  • Limit future data collection, by “refraining from collecting or storing personal information unless it is necessary for specific purposes outlined in a retention schedule,” information about which Drizly must publish on its website; and
  • Implement an information security program, designed to address the issues identified in the complaint, which must include security training for employees, designation of a high-level employee to oversee the information security program, implementation of access controls, and implementation of multifactor authentication (MFA) on systems containing consumer data.

The Drizly enforcement action’s data minimization requirements go above and beyond the traditional information security program requirements contained in prior FTC enforcement actions. Data minimization is critical to the security of consumer data—in the words of Commissioner Slaughter—because “hackers cannot steal data that companies did not collect in the first place.”  Additionally, these requirements represent the next step in the FTC’s continued focus on what it refers to as “commercial surveillance,” and are likely to be a signpost for continued discussions around the FTC’s Advance Notice of Proposed Rulemaking.

The FTC’s increasing focus on data minimization is consistent with overall regulatory awareness of the dangers of over-collection and over-retention, a focus reflected in new U.S. state privacy laws that likewise mandate data minimization standards.  Businesses should consider reviewing their data management practices and implementing data minimization principles.  

In the past several months, plaintiffs’ lawyers have filed dozens of class action lawsuits under state wiretap laws, some of which provide for statutory damages of $5,000 per occurrence or more.  The lawsuits focus on the use of chatbots, “session replay” software, and tracking code embedded in websites.  Plaintiffs contend these tools enable the surreptitious sharing of personal information with third parties and are illegal wiretaps.  In this webcast, Phil Yannella and Greg Szewczyk will explore the reasons for this surge in litigation, discuss the status of pending cases, and address potential defenses. 

The Third Circuit recently became the first federal appellate court since the Supreme Court’s 2021 decision in TransUnion v. Ramirez to address whether the victim of a data breach has Article III standing to bring a claim for damages based on the fear of identity theft.  The Third Circuit, in Clemens v. ExecuPharm Inc., found that the plaintiff had established an injury in fact sufficient to satisfy federal standing requirements despite TransUnion’s holding that the mere fear of future harm is generally insufficient to establish a claim for monetary damages under Article III.  The Third Circuit’s opinion may provide a roadmap for plaintiffs’ attorneys seeking to bring future data breach claims premised on the fear of identity theft.

The plaintiff in Clemens was a former ExecuPharm employee whose sensitive personal data, including her social security number, was stolen by threat actors in a 2020 ransomware attack against ExecuPharm.  After ExecuPharm declined to pay the ransom, the threat actors published the stolen data on the dark web.  When plaintiff became aware of the theft and publication of her sensitive personal information, she purchased credit monitoring, placed fraud alerts on her accounts, and spent time monitoring those accounts for signs of fraud or identity theft.  Plaintiff brought suit against ExecuPharm in 2020 in the Eastern District of Pennsylvania, asserting claims for negligence, negligence per se and breach of implied contract.  The District Court dismissed the case, relying on a pre-TransUnion Third Circuit case – Reilly v. Ceridian Corp. – which held that the risk of future harm was too speculative to establish Article III standing.  However, notwithstanding the TransUnion decision, the Third Circuit came to the opposite conclusion here and reversed.

Most of the Court’s standing analysis was focused on the “injury in fact” requirement of Article III.  This requires that a plaintiff demonstrate that he or she has suffered an injury in fact that is concrete, particularized, and actual or imminent.  The Third Circuit noted that there are a number of factors that serve as guideposts to determining whether an injury is “imminent or certainly impending” in the data breach context, including whether the breach was intentional, whether data was misused, as well as the nature of the information accessed through the breach, with sensitive data that could be used to commit fraud or identity theft increasing the likelihood of harm. 

Applying these guideposts, the Third Circuit noted that the breach had been perpetrated by a known ransomware gang – and was thus clearly intentional – involved sensitive data such as the plaintiff’s social security number, and that the data had already been misused through publication on the dark web, which serves as a marketplace for the illegal sale of personal data by hackers and fraudsters.  Based on these facts, the Court found that the plaintiff’s risk of harm was “imminent or certainly impending.”

The most potentially impactful part of the Third Circuit’s opinion, however, was its analysis of whether the plaintiff’s injury was sufficiently concrete.  Citing TransUnion, the Court focused on whether the plaintiff’s asserted harm “has a close relationship to a harm traditionally recognized as providing a basis for a lawsuit in American courts[.]”  The Court found that the plaintiff’s alleged harm was analogous to harms contemplated by privacy torts “well-ensconced in the fabric of American law.”  Though intangible, the Court found the plaintiff’s asserted harm to be concrete.

TransUnion, however, held that in lawsuits premised on the “mere risk of future harm,” courts also need to consider the type of relief sought.  TransUnion held that where plaintiffs, like Clemens, seek monetary damages, something more than mere risk of future harm is necessary to establish standing.  The Third Circuit noted that TransUnion recognized that a plaintiff can satisfy the concreteness inquiry where “the exposure to the risk of future harm itself causes a separate concrete harm.”  The Court found that the plaintiff had asserted several concrete present harms that she had already experienced as a result of the data breach, including emotional distress and the time and money spent mitigating the fallout of the data breach.  Accordingly, she had established an injury in fact. 

In many ways, the Third Circuit’s opinion is not significantly different from those of other Courts of Appeals that have likewise found Article III standing in data breach cases by focusing on whether the breach was the result of a malicious hack, the nature of the data accessed, and allegations of misuse.  The Second Circuit, for example, has articulated a very similar test for determining whether a data breach plaintiff has established Article III standing.  What is notable about the Clemens opinion is that it is post-TransUnion, a decision that generally raised the bar for plaintiffs seeking to establish federal standing.  The Third Circuit’s methodology for assessing standing under TransUnion’s heightened standards will likely be studied by plaintiffs’ attorneys seeking to establish standing in data breach class actions.  It would not be surprising, for example, to see plaintiffs allege emotional distress in future breach claims in order to satisfy the concreteness requirement.

It is too early to assess whether other Circuits will follow the Third Circuit’s lead.  Standing in breach cases remains a highly fact-intensive analysis, and in the wake of TransUnion and now Clemens it is doubtful that a data breach plaintiff can establish standing if the data breach did not involve malicious hacking, the acquisition of sensitive data, or misuse of such data.  

On October 17, the California Privacy Protection Agency (“CPPA”) published the first revisions to the CPRA regulations. This draft includes an extensive list of proposed changes in advance of the CPPA Board public hearing, scheduled to begin on October 21. In addition to the newest draft regulations, the CPPA published a 16-page change log explaining its reasoning. Of note, the CPPA incorporates more of a purpose-driven approach, with restrictions on use based on the purpose of collection—an approach similar to the draft Colorado rules. The CPPA also eased a range of third party restrictions, including disclosure requirements, for the purpose of “simplifying implementation.” However, notably absent from the rules are privacy risk assessments (known as data protection impact assessments under the GDPR and data protection assessments under the Colorado Privacy Act) and profiling.  Accordingly, this modified draft may not be the final version of the rules.

We will be following up with in-depth analysis in coming weeks, including on how these rules compare to the draft Colorado rules.  

Although the replacement for the Privacy Shield has garnered bigger headlines, the United States government also took another step towards a more coordinated international privacy framework by entering into a data access agreement (the “Data Access Agreement”) with the United Kingdom.  While increasingly harmonized laws are likely a positive development for businesses in the long run, the Data Access Agreement demonstrates that companies need to keep apprised of the changing legal landscape.

The Data Access Agreement stems from the 2018 Clarifying Lawful Overseas Use of Data Act (the “CLOUD Act”), which allows the U.S. to enter into executive agreements with foreign governments for access to data held abroad by U.S.-based electronic service providers.  Pursuant to the Data Access Agreement, the United States and the United Kingdom may now demand, with much greater speed and ease, user data held overseas by service providers. Such data will, as the DOJ stated in its October 3, 2022 press release, “greatly enhance the ability of the United States and the United Kingdom to prevent, detect, investigate, and prosecute serious crime, including terrorism, transnational organized crime, and child exploitation, among others.”

Under the Data Access Agreement, U.S. and U.K. officials must meet “numerous requirements” in order to demand user data from service providers overseas.  Orders submitted by investigators cannot target specific individuals in the other country and must relate to serious crime.  Service providers that receive “qualifying, lawful orders” will be afforded certain protections.  Moreover, the two entities tasked with overseeing the implementation of the Data Access Agreement in their respective countries—the DOJ’s Office of International Affairs and the U.K. Home Office—will likely need to coordinate efforts as requests under the agreement get underway.  

This international cooperation between the United States and the United Kingdom is not an isolated occurrence:  Other countries are working on similar bilateral data-access agreements under the authorization of the CLOUD Act.  Indeed, at the end of last year, Australia and the United States announced their intent to enter into a similar data access arrangement, and Canada is seemingly not far behind, as Canadian officials issued a joint statement with the DOJ announcing their plans to begin negotiating their own bilateral agreement.  Discussions are also underway with the European Union.

How these bilateral agreements will interact with other international privacy initiatives—including concerns relating to investigative access to personal data—remains to be seen.  However, what is clear is that technology companies need to start preparing for the impending intercept orders, and all companies need to keep an eye on the frequently changing landscape.