On April 24, the Governor of Kansas signed into law Kansas Senate Bill 44, which enacts the Financial Institutions Information Security Act (the “Act”). The Act requires credit services organizations, mortgage companies, supervised lenders, money transmitters, trust companies, and technology-enabled fiduciary financial institutions to comply with the requirements of the GLBA’s Safeguards Rule (16 C.F.R. § 314.1 et seq.), as in effect on July 1, 2023. The only available exemption from the Act’s requirements is for entities that are directly regulated by a federal banking agency.

The Act requires covered entities in Kansas to create standards regarding the development, implementation, and maintenance of reasonable safeguards to protect the security, confidentiality, and integrity of customer information. For purposes of the Act, “customer information” is broadly defined as “any record containing nonpublic personal information about a customer of a covered entity, whether in paper, electronic or other form, that is handled or maintained by or on behalf of the covered entity or its affiliates.” However, the Act also requires that an entity’s customer information standards be consistent with, and made pursuant to, the GLBA’s Safeguards Rule.

The Safeguards Rule is a regulation promulgated under the GLBA that requires non-banking financial institutions to develop, implement, and maintain a comprehensive security program to protect the information of their customers. New requirements under the Safeguards Rule are set to become effective on June 9, 2023; we previously covered them in greater detail on the CyberAdviser blog here and here. The Safeguards Rule lays out three main objectives for information security programs: (1) Insure the security and confidentiality of customer information; (2) Protect against any anticipated threats or hazards to the security or integrity of such information; and (3) Protect against unauthorized access to or use of such information that could result in substantial harm or inconvenience to any customer.

As of June 9, those objectives will require applicable companies to, in part: (1) Designate a qualified individual to oversee their information security program; (2) Develop a written risk assessment; (3) Limit and monitor who can access customer information; (4) Encrypt information in transit and at rest; (5) Train security personnel; (6) Develop a written incident response plan; and (7) Implement multifactor authentication whenever anyone accesses customer information. However, the Safeguards Rule does not fully apply to financial institutions that fit within certain exceptions or have primary regulators other than the FTC. Those entities in particular should assess whether the Act may require them to comply with the Safeguards Rule.  And, whereas covered entities subject to the FTC’s Safeguards Rule have been working for months if not years to comply, the Kansas Act will require compliance within a matter of months.

Additionally, the Act requires covered entities to develop and organize their information security program “into one or more readily accessible parts,” and to maintain that program in accordance with the covered entity’s books and records retention requirements. Lastly, the new Act gives the Kansas Office of the State Bank Commissioner discretionary authority to issue regulations to implement the Act.

Following recent Senate testimony in which OpenAI CEO Sam Altman proposed additional Congressional oversight for the development of artificial intelligence (AI), Colorado Senator Michael Bennet has re-introduced the Digital Platform Commission Act, a bill that would enable the creation of a federal agency to oversee the use of AI by digital platforms.  The proposed Federal Digital Platform Commission (FDPC) would have a broad mandate to “protect consumers from deceptive, unfair, unjust and unreasonable or abusive practices committed by digital platforms.”

Under the proposed bill, the Commission would have specific power to regulate the use of algorithmic systems used by “systemically important digital platforms.”  The bill delegates to the FDPC rulemaking authority to designate a platform as systemically important based on a number of factors, including whether a platform is available to the public and has “significant nationwide economic, social or political impacts”, the platform’s market power, unique daily users, and “the dependence of business users of the platform to reach customers.”  Digital platforms that qualify as systemically important could face new rules to require fairness and transparency of AI processes, as well as risk assessments and third-party audits to assess harmful content and anti-competitive bias.

According to media reports, the proposed bill includes updated definitions to specifically address AI, and in particular generative AI.  These changes include a revised definition of “algorithmic processes”, which now covers computational processes that use personal data to generate content or make decisions.  Media reports also claim that the new bill would expand the definition of a digital platform to include companies that “offer content primarily generated by algorithmic processes.”

The proposed bill contains some of the hallmarks of other proposed AI regulation, such as the EU AI Act.  Lawmakers worldwide appear to be focused on fairness and transparency of AI processes, safety and trust issues, and the potential for algorithmic bias.  Lawmakers also appear to be coalescing around the idea of mandating third-party assessments for high-risk or systemically important AI.

One notable aspect of the Digital Platform Commission Act is its definition of AI, which does not provide exceptions for automated processes that include human decision-making authority, or a requirement that the automated processes have a legal or substantially similar effect.  This approach differs from other laws that regulate AI and automated processing, such as the EU’s General Data Protection Regulation and the Colorado Privacy Act, which are more limited in their definitions of the “profiling” or “automated processing” that trigger compliance obligations and establish different obligations based on the level of human involvement.  The scope of rulemaking for different kinds of AI is currently under consideration by the California Privacy Protection Agency, which has sought public comment on this question.  How regulators address the threshold issue of what kind of AI triggers compliance obligations is a key issue, with potentially significant impact.

Whether Congress moves forward on the Digital Platform Commission Act remains an open question.  As with other proposed bills regulating AI, lawmakers appear wary of stifling technological innovation that is moving forward at a lightning pace.  On the other hand, there appears to be some bipartisan recognition of the potential power and danger of wholly unregulated AI technologies and an interest in the creation of a new executive agency with oversight responsibilities for AI.

On May 17, 2023, Montana Governor Greg Gianforte signed into law a bill banning the use of the popular app, TikTok, by the general public within the state. Absent court intervention, the ban takes effect on January 1, 2024. While users of the popular app, which is owned by Chinese company ByteDance, can breathe a little easier knowing they will not be liable for accessing the app, TikTok (and mobile stores offering the app to users within the state) will be fined $10,000 for every day its platform operates on devices in Montana. It is unclear from the law’s current text exactly how the State intends to enforce the removal of the app from Montana residents’ devices on which it has already been installed. Use of TikTok by law enforcement and for security research purposes is exempt from the statewide ban.

Governor Gianforte tweeted that the protection of Montana residents’ personal and private data was a reason for the ban and further called out the “Chinese Communist Party” using TikTok as a spy tool to violate Americans’ privacy. The bill’s text, authored by Montana Attorney General Austin Knudsen, called out similar concerns and noted the need to protect minors from the dangerous activities being promoted on the app, such as pouring hot wax on a user’s face, placing metal objects in electrical outlets, and taking excessive amounts of medication. According to the newly enacted law, if TikTok is acquired by a company “not incorporated in any other country designated as a foreign adversary,” the ban would be void.

The reaction by TikTok to the statewide ban has, unsurprisingly, been negative, with a spokesperson for the app questioning the constitutionality and the mechanics of enforcing the ban. The ACLU and tech trade groups have called the constitutionality of the ban into question, citing First Amendment rights and constitutionally protected speech. Keegan Medrano, policy director at the ACLU of Montana, has raised free speech concerns and Carl Szabo, Vice President and General Counsel at NetChoice, noted his disappointment in Governor Gianforte for signing a “plainly unconstitutional bill.” 

Montana is the first state to ban the app’s use by the general public; however, such bans are already in place on government devices and networks throughout the country. To date, the U.S. government and a number of states have enacted TikTok bans on government devices. It remains to be seen which states will follow suit and enact similar bans on the general public’s use of the app on personal devices. What does appear clear is that, despite the current administration’s ongoing negotiations with ByteDance to resolve national security concerns, the question of whether and how TikTok protects users’ privacy and data security is far from resolved.

In a ruling published May 4, the U.S. District Court for the District of Idaho granted defendant data broker Kochava’s motion to dismiss a complaint filed by the Federal Trade Commission (“FTC”).  In its complaint, the FTC alleged that Kochava’s sale of precise consumer geolocation data constituted an unfair act or practice in violation of Section 5 of the FTC Act. Although it dismissed the complaint, the Court was not convinced that the deficiencies could not be cured, and it therefore granted the FTC 30 days to amend.

In its ruling, the Court rejected a number of the defendant’s arguments. It found that the FTC had reason to believe that Kochava is violating, or is about to violate, the FTC Act and was not “only challenging past practices.” Next, it found that the FTC need not allege a predicate violation of law or policy to state a claim under Section 5(a), as the defendant had claimed.  Finally, it found that the FTC was not obligated to allege that the defendant’s practices were immoral, unethical, oppressive, or unscrupulous.

Despite these findings, the Court held that the FTC failed to allege a sufficient likelihood of substantial consumer injury.  On this point, the FTC put forth two theories of consumer injury.  First, it argued that “a company could substantially injure consumers by selling their sensitive location information and thereby subjecting them to a significant risk of suffering concrete harms at the hands of third parties.” While the Court found this plausible, it found that the “FTC has not alleged that consumers are suffering or are likely to suffer such secondary harms.” The mere possibility of secondary harms was insufficient to establish standing. Second, the FTC alleged that the non-obvious tracking itself constituted a “substantial injury” under the Act. Although the Court recognized that an invasion of privacy alone can constitute such an injury, it found that the facts alleged did not support that conclusion in this case.

In a separate ruling on the same matter, the Court rejected the defendant’s attempt to dismiss the case under the Declaratory Judgment Act, describing it as “awkwardly” raising issues without identifying any relevant cause of action or adequate remedy at law.

The opinions demonstrate the reality that the laws surrounding data brokers and the collection and sale of tracking information are still very much in development.  Any company that is considering sharing personal data—whether sensitive or not—should therefore ensure that it complies with any relevant disclosure and choice obligations, or risk being in the crosshairs of the next regulatory enforcement action.

As we have previously posted, it has been an active year on the state privacy law front.  Indeed, the number of states with privacy laws is about to nearly double in a matter of months, as Iowa, Indiana, Montana, and Tennessee have already passed, or are about to pass, comprehensive privacy laws.

Perhaps not surprisingly, the House Energy and Commerce Committee announced that the Subcommittee on Innovation, Data, and Commerce will hold a hearing titled “Addressing America’s Data Privacy Shortfalls: How a National Standard Fills Gaps to Protect Americans’ Personal Information” on April 27.  At that hearing, it is expected that the Subcommittee will reconsider whether to move the American Data Privacy and Protection Act (ADPPA).  The ADPPA would largely preempt the growing patchwork of state privacy laws with a comprehensive national law.

However, while the patchwork of state privacy laws continues to grow, it is not clear that it will put enough pressure on Congress to overcome the hurdles that led to the ADPPA stalling last year.  Indeed, while the laws that have passed or will likely soon pass do have important differences, they have all generally followed the model set by Virginia and Colorado (as opposed to California).  And, to date, only California and Colorado have created rulemaking authority, which in practice mitigates the compliance burden.  So, Congress may not be facing strong political pressure from business lobbyists.

Further, the reported reason for the ADPPA stalling last year still exists—California leaders and representatives have argued that the ADPPA does not go far enough in providing the same privacy protections as the CCPA, and the CCPA should therefore not be preempted.  Although the Speaker of the House has changed, he is still from California and likely faces similar pressures.  And, even if it reaches the House floor and passes, it will still need to clear Senate Commerce Chair Maria Cantwell, who has criticized the bill in the past.  So, although the ADPPA may still have strong bipartisan support, it is not clear that its fate will be different this time around. 

Nonetheless, the fact that the House Energy and Commerce Committee is focusing on privacy reinforces what the action at the state level already shows: this is an issue important to legislators.  Businesses should expect regulators to enforce these laws aggressively.

The State of Washington appears close to enacting a new law that regulates the privacy of consumer health information.  If passed, the new law – the My Health My Data Act (MHMDA) – would take effect March 31, 2024 and apply to non-governmental entities that collect, process, share, or sell health information that can be linked to an individual, if that individual is a Washington resident or the information is collected in the State.  Health information is defined to cover broad categories, such as symptoms, conditions, treatments, bodily functions, and testing, as well as more specific matters, such as behavioral interventions, gender-affirming and reproductive care, biometric and genetic data, and precise location or other data that identifies an individual as seeking health care services. The law would apply to any organization that does business in Washington or targets Washington consumers and alone, or jointly with others, determines the purpose and means of collecting, processing, sharing, or selling consumer health data.

The New Rules.  Entities that are subject to the MHMDA must disclose:

  • What types of consumer health data they collect, why they collect it, and how it will be used;

  • The sources of the consumer health data they collect;

  • The types of consumer health data that they share;

  • The specific affiliates and the types of third parties with whom they share consumer health data; and

  • The ways in which consumers can exercise their rights with respect to their own health data, including the right to: (i) confirm the consumer health data that is being collected and shared; (ii) withdraw their consent to the use of the data; and (iii) have their data deleted by the entity and others with whom the entity has shared the data.

A regulated entity generally may not collect, use, or share a consumer’s health data in a manner that has not been disclosed without first obtaining the consumer’s informed consent.

No later than June 30, 2024, regulated entities must take certain actions to protect the consumer health data they maintain.  They must restrict access to consumer health data to those who need it to fulfill an appropriate purpose, and they must also implement appropriate safeguards to protect the confidentiality, integrity, and accessibility of consumer health data.

Certain requirements extend to vendors engaged by a regulated entity to process data.  A processor must have a binding contract with the regulated entity that sets forth processing instructions and limits.  The processor must act in accordance with that contract and otherwise assist the regulated entity in meeting its privacy obligations under the MHMDA.

The MHMDA applies more broadly to prohibit any person from selling a consumer’s health data without obtaining the consumer’s written authorization and to ban “geofences,” which use spatial or location detection technology to establish a virtual boundary around a physical location or locate a consumer within a virtual boundary.
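For readers less familiar with the underlying technology, a geofence is typically implemented as a simple distance test: a device’s reported coordinates are compared against a virtual boundary, often a radius drawn around a point of interest. The short sketch below is purely illustrative and is not drawn from the MHMDA or any particular vendor’s implementation; the coordinates, radius, and function names are hypothetical.

```python
# Illustrative only: a minimal geofence check using a great-circle distance.
# All values below (coordinates, radius) are hypothetical examples.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def inside_geofence(device_lat: float, device_lon: float,
                    fence_lat: float, fence_lon: float, radius_m: float) -> bool:
    """True if the device's reported location falls within the virtual boundary."""
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m


# Hypothetical example: a 500-meter virtual boundary around a health care facility.
print(inside_geofence(47.6097, -122.3331, 47.6105, -122.3340, radius_m=500))  # True
```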

The law carves out exemptions for certain entities and types of information.  Perhaps most significantly, information that is protected by certain other privacy laws, including HIPAA, is exempt from the requirements.

Implications.  The MHMDA was initially proposed to ensure the privacy of reproductive health information in the wake of the Dobbs decision.  But the law has practical, real-world effects that go beyond that initial purpose.

To begin, the proposed law is fairly broad. Any company that maintains an app that gathers individuals’ health data, other than as a business associate of a health care provider or health plan, will generally be subject to the new rules if it does business in Washington or targets Washington consumers. The “doing business” trigger is a staple of state privacy laws, like the California Consumer Privacy Act (CCPA), and state courts have typically interpreted the provision fairly broadly.  Moreover, unlike the CCPA and other state privacy laws, the MHMDA does not require that covered entities satisfy other requirements, such as a monetary threshold, for the law to apply.

Although not as detailed as HIPAA in various respects, the rules extend beyond HIPAA in other ways.  For example, the Washington law gives individuals the right to delete data, which the HIPAA rules do not (and, for obvious reasons, would not) require of health care providers and health plans.

As drafted, the MHMDA would also complicate the use of website tracking technologies by covered entities to the extent those technologies capture health data.  This has been a recent focus of the FTC, which has issued a pair of consent decrees against health tech companies that used tracking technologies to share health-related information with advertising partners.  If such sharing is deemed a “sale” – as it would be under California law – the use of third-party tracking technologies like the Meta Pixel would require consumer consent.  Even if not a sale, the use of tracking technologies by covered entities to collect consumer health data would require written disclosures.

Entities have approximately one year to comply with most of the new requirements. The rules will be enforced by the Washington Attorney General.  Importantly, the proposed law would deem a violation to be an unfair and deceptive trade practice under Washington law, which would enable consumers to pursue a private right of action.

On March 30, 2023, the Financial Crimes Enforcement Network (FinCEN) issued a Financial Trend Analysis focusing on business email compromise (BEC) trends and patterns in the real estate sector (referred to as “RE BEC”). The report is required under Section 6206 of the Anti-Money Laundering Act of 2020 (AMLA), which requires FinCEN to periodically publish threat pattern and trend information derived from Bank Secrecy Act (BSA) filings. To date, FinCEN has published four other such reports. BEC attacks and scams continue to rise, and FinCEN has issued several pieces of guidance in recent years, including an updated advisory and a fact sheet regarding the Rapid Response Program (RRP), which assists victims of BEC attacks.

The real estate sector is not immune from BEC attacks and is particularly vulnerable given the high dollar value of transactions and the numerous entities involved. This vulnerability was likely exacerbated by the significant increase in average home prices during the review period. FinCEN previously reported in 2019 that the real estate sector was the third most targeted sector for BEC attacks. BEC attackers target businesses and organizations that conduct wire transfers and rely on email communications regarding those transfers, typically compromising a key email account to fraudulently direct funds to the attacker.

The analysis is based on data filed with FinCEN between January 2020 and December 2021. During that period, there were a total of 2,260 filings reporting $893 million in RE BEC incidents.

Key highlights of the analysis include:

  • Four money laundering typologies were identified: money mules used to obfuscate ties to attackers, money mules recruited through romance scams, ties to other fraud types, and the use of alternative payment systems (such as convertible virtual currency) to convert illicit proceeds.
  • The average value of RE BEC incidents increased in 2021, with an average monthly value of $116,233.
  • Nearly 88% of incidents involved initial domestic transfers of funds to accounts at U.S. depository institutions. The top three international destinations of transfers were Hong Kong, China, and Mexico.
  • The report could not fully analyze fund recovery success rates, as some filings did not include this information or recovery efforts had been initiated but not yet resolved. Of the filings that did include this information, roughly 22.21% of depository institutions recovered the full amount of the funds and 20.37% indicated that no funds could be recovered.
  • Title companies and closing entities were the most frequently impersonated parties, followed by investors and realtors.

The report also highlighted the importance of detecting and mitigating RE BEC attacks by assessing system vulnerabilities and taking action to increase resiliency against attacks. In addition, FinCEN encourages the adoption of a multi-faceted transaction verification process, as well as training and awareness programs to identify and avoid phishing attempts.

In the press release accompanying the analysis, FinCEN noted that “[t]oday’s report emphasizes the critical role of timely reporting of cyber-enabled crime to enable FinCEN and law enforcement to interdict, freeze, and recover stolen funds through cyber-enabled fraud, such as BEC, through FinCEN’s Rapid Response Program (RRP).” As indicated in the report, the success rates for recovering funds are mixed, but FinCEN has had greater success in identifying and freezing funds when victims or financial institutions report unauthorized and fraudulent BEC wire transfers to law enforcement within 72 hours of the transaction. The report also promoted the use of information sharing under a Section 314(b) program and the continued reporting of RE BEC attacks through SAR filings.

The emergence of tools like ChatGPT has demonstrated the tremendous business potential for artificial intelligence.  At the same time, businesses need to be aware of the growing patchwork of laws and regulations in the U.S. and EU governing the development and use of AI.  In this webinar, Ballard Spahr privacy & data security lawyers Phil Yannella, Greg Szewczyk, John Kerkorian and Tim Dickens will provide an overview of the current regulatory landscape for AI in the U.S. and EU and identify some best practices for businesses to employ as they consider use of AI tools.

Utah Governor Spencer Cox is expected to sign SB 152, which was passed by both the House and Senate as of March 13, 2023.  With this bill, Utah would join California in passing legislation designed to protect children from online harms.  However, unlike the California Age-Appropriate Design Code, the Utah law will provide a private right of action with statutory damages—potentially leading to a flood of litigation.

Under the Utah law, beginning on March 1, 2024, social media companies must verify the age of existing or new Utah account holders.  A “social media company” is defined to mean any person or entity that provides a social media platform with at least 5 million account holders worldwide.  Utahns under the age of 18 will need parental consent to open an account.  Parents must also be provided with their own credentials for a minor’s account, which will allow the parents to view all posts made by the minor, messages sent by the minor, and responses received by the minor.  Social media companies must also limit minors’ access during nighttime hours unless a parent changes the permissions.

Notably, the Utah law will provide a private right of action with statutory damages of $2,500 for each incident of violation, plus attorneys’ fees.  Accordingly, social media companies can expect class action litigation similar to what we have seen with other privacy laws containing similar enforcement rights.

The Utah law will almost certainly be challenged in court.  For example, tech industry groups told Governor Cox in a letter that they believe the law will violate the First Amendment and lead to frivolous lawsuits.  In any event, regardless of whether the Utah law stands, it is likely a harbinger of other state laws focused on protecting children from online harms.