On January 6, 2021, a bipartisan group of New York state lawmakers introduced Assembly Bill 27 (AB 27), the New York Biometric Privacy Act.  If New York passes AB 27, it will join Illinois, Texas, and Washington as states that have adopted laws strictly regulating the notice, collection, and handling of biometric information.  Significantly, however, New York would join Illinois as only the second state to provide a private right of action with statutory damages for violations.

The proposed bill is similar to those of the three other states with biometric-specific laws in that it would prohibit businesses from collecting biometric identifiers or information—defined to include retina or iris scans, fingerprints, voiceprints, or scans of hand or face geometry, and any information derived therefrom—without first receiving written consent from the individual or their authorized representative.  AB 27 would also prohibit businesses from selling, leasing, trading, or otherwise profiting from a person’s biometric information, and would require businesses to develop a publicly available written retention and destruction policy.  Notably, AB 27 follows the Illinois model of enforcement by affording individuals a private right of action with statutory damages of up to $1,000 for negligent violations and $5,000 for intentional or reckless violations, as well as reasonable attorneys’ fees.  As we have discussed in other posts involving lawsuits under the Illinois law, these statutory damages can quickly reach significant sums when violations involve large numbers of individuals: a negligent violation affecting 100,000 people, for example, could expose a business to as much as $100 million in statutory damages.

This is not the first time New York lawmakers have attempted to pass a biometric privacy bill.  Indeed, there have been no fewer than three attempts since 2018, none of which have succeeded.  There is therefore reason to believe that AB 27 will face a similar fate.  Nevertheless, businesses should pay close attention, as its passage would carry serious compliance and litigation consequences.

On December 14, 2020, the Federal Trade Commission (FTC) announced in a press release that it is issuing orders under Section 6(b) of the FTC Act to the following nine social media and video streaming companies: Amazon.com, Inc., ByteDance Ltd. (which operates the short video service TikTok), Discord Inc., Facebook, Inc., Reddit, Inc., Snap Inc., Twitter, Inc., WhatsApp Inc., and YouTube LLC.

The FTC made publicly available samples of the letter and order sent to each company. Specifically, the FTC is seeking privacy policies, procedures, and practices related to:

  • how social media and video streaming services collect, use, track, estimate, or derive personal and demographic information;
  • how they determine which ads and other content are shown to consumers;
  • whether they apply algorithms or data analytics to personal information;
  • how they measure, promote, and research user engagement; and
  • how their practices affect children and teens.

The FTC voted 4-1 to issue the orders with a majority of the commissioners releasing a joint statement saying that the FTC’s study is timely and important as “concerns mount regarding the impact of the tech companies on Americans’ privacy and behavior.” However, Commissioner Noah Joshua Phillips issued a dissenting statement, stating that “[t]he breadth of the inquiry, the tangential relationship of its parts, and the dissimilarity of the recipients combine to render these orders unlikely to produce the kind of information the public needs, and certain to divert scarce Commission resources better directed elsewhere.”

On December 18, 2020, the Office of the Comptroller of the Currency (OCC), Federal Reserve Board (FRB), and Federal Deposit Insurance Corporation (FDIC) announced an interagency notice of proposed rulemaking that would require supervised banking organizations to notify their primary federal regulator of significant computer security incidents.  Under the proposed rule, for incidents that could result in a banking organization’s inability to deliver services to a material portion of its customer base, jeopardize the viability of key operations of a banking organization, or impact the stability of the financial sector, the banking organization must notify its primary federal regulator no later than 36 hours after determining that an incident has occurred.  Additionally, service providers to banking organizations would be required to notify at least two individuals at each affected banking organization customer immediately after the service provider experiences a computer security incident that it believes in good faith could disrupt, degrade, or impair services for four or more hours.

By requiring notice of these computer security incidents, the proposed rule broadens the type of events that banking organizations and their service providers must report to federal agencies.  The agencies stated that “current reporting requirements related to cyber incidents are neither designed nor intended to provide timely information to regulators regarding such incidents.”  Specifically, the agencies noted that the filing of Suspicious Activity Reports under the Bank Secrecy Act does not provide the agencies with sufficiently timely information about every notification incident, and that notices under the Gramm-Leach-Bliley Act focus on incidents that compromise sensitive customer information and do not cover incidents that disrupt operations.

Comments on the proposal must be received within 90 days of publication in the Federal Register.

A recent settlement between the U.S. Department of Justice (DOJ) and a media conglomerate underscores the importance of implementing robust Telephone Consumer Protection Act (TCPA) compliance measures, including for third-party vendors.  In 2017, DISH Network LLC was found liable for its vendors’ violations of the Telemarketing Sales Rule (TSR) and the TCPA, as well as several state statutes.  Earlier this year, the Seventh Circuit affirmed DISH’s liability but vacated the award and remanded for a recalculation of damages.

Now, following the Seventh Circuit’s remand, the DOJ’s Civil Division has announced a $210 million settlement with DISH for over sixty-six million telemarketing calls that DISH and the retailers marketing its products and services made in violation of the TSR, the TCPA, and other statutes.  The Stipulated Judgment requires DISH to pay $126 million in civil penalties to the United States for TSR violations (the largest civil penalty ever paid in such a case) and $84 million to four states (California, Illinois, North Carolina, and Ohio) for TCPA and various state statutory violations.  The settlement also obligates DISH to continue the robust compliance measures imposed by the court in 2017, including prohibitions on future telemarketing violations and significant restrictions on DISH’s telemarketing practices.  DISH has also been ordered to prepare a telemarketing plan, submit compliance materials to the DOJ and the Federal Trade Commission for review through 2027, and provide compliance reports upon request by the agencies.

The lawsuit was originally filed in 2009, with the federal and state governments alleging that DISH repeatedly initiated calls to phone numbers on the National Do Not Call (DNC) Registry, violated the TSR’s prohibition on abandoned calls, enabled and assisted telemarketing violations by the retailers marketing DISH’s products and services, violated the TCPA’s ban on autodialed phone calls, and violated a handful of state statutes.  The district court awarded the plaintiffs a record-breaking $280 million in civil penalties and damages in 2017.  The Seventh Circuit affirmed DISH’s liability on appeal but vacated the $280 million award and remanded for recalculation.  DISH, the DOJ, and the states reached this expansive settlement on remand.

The government’s suit, and this historic settlement, underscore the necessity of a strong compliance regime, including with respect to third-party vendors, to ensure that “unscrupulous sales persons” do not use “illegal practices to sell” products.

The California Attorney General’s Office recently released a fourth set of proposed modifications to the regulations implementing the California Consumer Privacy Act (the “CCPA”).

As background, the Attorney General’s Office had given notice of a third set of modifications only recently, on October 12, 2020.  That third set revised the regulations relating to the notice of a consumer’s right to opt out of the sale of their personal information.  Our previous post detailed the specific changes in the third set of modifications.

The Attorney General’s Office received around 20 comments in response to the third set of modifications, which have not yet been accepted and finalized.  The fourth set of modifications responds to those comments and is intended to clarify and conform the proposed regulations to existing law.  The changes include:

  • Revisions to section 999.306, subd. (b)(3), clarifying that a business that sells personal information collected from consumers in the course of interacting with them offline shall inform those consumers of their right to opt out of the sale of their personal information by an offline method.
  • Proposed section 999.315, subd. (f), which reinstates the requirement that a uniform opt-out button be used “in addition to . . . but not in lieu of . . . a ‘Do Not Sell My Personal Information’ link.”

The Attorney General’s Office is accepting written comments regarding the fourth set of proposed modifications until December 28, 2020.

The recently proposed modifications show that the Attorney General has no intention of slowing the rollout of CCPA regulations after the recent voter approval of the California Privacy Rights Act (the “CPRA”), which further modifies and strengthens the existing protections in the CCPA.  Notably, the Attorney General is also authorized to issue regulations under the CPRA until that power is ultimately transferred to the newly created California Privacy Protection Agency.

On November 17, 2020, the Senate unanimously passed H.R. 1668, the “Internet of Things Cybersecurity Improvement Act of 2020.”  The bill is now on its way to President Trump for signature or veto.

The bill would require the National Institute of Standards and Technology (“NIST”) and the Office of Management and Budget (“OMB”) to take certain steps to increase cybersecurity for Internet of Things (“IoT”) devices.  IoT refers to the extension of internet connectivity into physical devices and everyday objects.  Examples of IoT devices include internet-connected appliances, thermostats, locks, and smoke detectors, but such devices are now pervasive across virtually all types of retail products.

The bill would specifically require NIST to develop minimum or baseline IoT cybersecurity standards. The OMB would then be tasked with issuing guidelines to agencies in consultation with NIST.

Notably, the bill also requires federal agencies to use only devices that meet the NIST standards and expressly prohibits the government from entering into any contract that would prevent compliance with those standards.  Because the bill would, in effect, bar the government from contracting for the purchase or use of IoT devices that do not comply with the NIST standards, it is likely to encourage manufacturers of such products to adopt those standards.

On November 4, 2020, California voters approved the ballot initiative Proposition 24, more commonly known as the California Privacy Rights Act (the “CPRA”).  The CPRA goes into effect on January 1, 2023, and will expand several of the existing protections in the California Consumer Privacy Act (the “CCPA”).

As background, the original CCPA emerged in 2018 as a compromise between legislators and the advocacy group Californians for Consumer Privacy, which had secured a ballot measure vote for its proposed privacy law.  Californians for Consumer Privacy withdrew the ballot measure upon the passage of the CCPA.  However, the group became concerned that amendments to the CCPA had diluted its privacy protections, and it thereafter secured a spot on the 2020 ballot for California citizens to vote on the CPRA.

As mentioned in our prior posts, the CPRA creates the following new rights and requirements, among others:

  • Right to restrict use of “sensitive personal information”;
  • Right to correct data;
  • Storage limitation: right to prevent companies from storing information longer than necessary and right to know the length of time a business intends to retain each category of personal information;
  • Data minimization: right to prevent companies from collecting more information than necessary;
  • Right to opt out of advertisers using precise geolocation (accurate to less than one-third of a mile);
  • Penalties if an email address and corresponding password are stolen due to negligence;
  • Restrictions on onward transfers of personal information;
  • Establishment of the California Privacy Protection Agency to protect consumers;
  • A requirement that high-risk data processors perform regular cybersecurity audits and risk assessments; and
  • A requirement that a chief auditor be appointed with the power to audit businesses’ data practices.

The CPRA mandates a minimum of $10 million in annual funding to the newly created Privacy Protection Agency.  The Privacy Protection Agency has the power to draft additional regulations, which may provide further clarity or raise new questions on the CPRA’s scope.  Businesses will therefore need to stay apprised of changes over the coming months and years in order to fully understand their compliance obligations.

The Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Department of Health and Human Services have jointly issued an advisory warning hospitals and other health care providers about the threat of malicious attacks on their information systems.  At least six hospitals across the United States were recently victimized by attacks using Trickbot malware within a 24-hour period.  These attacks have led to ransom demands to release data, data theft, and the disruption of services.

The advisory describes how the malware works, identifies indicators that a system may have been infected, and sets forth measures that health care providers may take to prevent and minimize damage from an attack and to respond if one occurs.  With the surge in coronavirus hospitalizations, the disruptions that such threats may cause raise increasingly serious concerns, and health care providers should be on heightened alert.

Assaults on Section 230 of the Communications Decency Act (the “CDA”)—which shields online platforms from civil liability for third-party content on their services—are abundant these days.  On October 15, 2020, FCC Chairman Ajit Pai announced that his agency, at the request of President Trump, will draft rules clarifying when platforms’ efforts to moderate user-posted content will leave them exposed to potential liability.  Two days earlier, Justice Thomas issued a scathing critique of courts’ prevailing interpretation of Section 230, arguing for a much more limited reading that would drastically narrow the liability shield.

Most of the discussion has focused on concerns relating to free speech, the spread of misinformation, and accusations of biases in moderation practices.  However, the case in which Justice Thomas issued his statement demonstrates another important issue at stake—the ability of platforms to use privacy and information security screening tools.

Section 230(c)(2)(A) protects good-faith decisions to remove “objectionable” content, while Section 230(c)(2)(B) protects software providers who give internet users the technical means to screen or filter such content.  It is the latter provision that was at issue in Enigma Software Group USA, LLC v. Malwarebytes, Inc., which involved two companies that both provide software enabling individuals to filter unwanted or malicious content, such as malware.  Enigma sued Malwarebytes, alleging that Malwarebytes engaged in anticompetitive conduct by configuring its product to make it difficult for consumers to download and use Enigma products.  In its defense, Malwarebytes invoked Section 230(c)(2)(B).

The Ninth Circuit had previously held in Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169 (9th Cir. 2009), that providers of software filtering tools (like Enigma and Malwarebytes) were in fact protected by Section 230(c)(2) because those tools allowed users to block objectionable content, such as malware.  The Zango court did not, however, address whether there were limitations on a provider’s discretion to declare online content objectionable.

The Ninth Circuit rejected Malwarebytes’ defense under Section 230, finding that “filtering decisions that are driven by anticompetitive animus are not entitled to immunity under section 230(c)(2).”  946 F.3d 1040, 1047 (9th Cir. 2019).  The Ninth Circuit explained that, in passing the CDA, Congress wanted to encourage the development of filtration technologies, not to enable software developers to drive each other out of business.  Accordingly, the Ninth Circuit found that this filtering function was not protected.  The Supreme Court denied Malwarebytes’ petition for certiorari; Justice Thomas wrote his statement advocating a narrower reading of Section 230 in connection with that denial.

The Ninth Circuit’s opinion and the Supreme Court’s denial of certiorari mark the first chip in the immunity armor for makers of anti-malware software and other filtering tools.  Indeed, various cybersecurity experts, technology think tanks, and law and computer science professors submitted amicus curiae briefs in connection with the certiorari petition, arguing that leaving the Ninth Circuit’s opinion intact would open the door to litigation against makers of malware screening tools—and not just for allegedly anticompetitive behavior.

The Ninth Circuit’s decision, now left intact by the Supreme Court, could have a chilling effect on innovation in malware detection and filtration systems.  Makers of these filtering and screening tools may now have to spend resources assessing the litigation risks of developing software that identifies and quarantines threats.  To minimize those risks and costs, these companies may take a more conservative approach toward flagging software whose maker might plausibly claim to be a rival.  An approach that errs against classifying potential rival software as a threat is particularly problematic because malware already often actively disguises itself as legitimate software.

The data security implications could be significant.  Malware detection and filtration systems must constantly keep up with the evolution of malware itself.  These tools can alert users to potentially unwanted programs, which slow down the overall performance of the user’s computer and ultimately create additional access points for hackers.  Likewise, malware detection and filtration systems are vital to businesses, which use these tools to protect company and customer data from hacker attacks that utilize malware—for example, ransomware.  The privacy implications could also be significant, as many individuals use filtration tools to screen unwanted spam or content, the opening of which can lead to online tracking, the placement of cookies, or other unwanted content.

While the recent assaults on the CDA’s liability shield focus largely on the First Amendment implications of content filtering and removal by social media giants like Facebook and Twitter, an unintended consequence of these assaults could be an overall decrease in privacy and data security protections for us all.