At its July 16 meeting, the California Privacy Protection Agency (“CPPA”) discussed new enforcement focuses in addition to its current goals.  While the new focuses are largely in line with general trends, they also serve as a reminder that specific and nuanced compliance decisions can make a big difference.

As the CPPA has made clear in multiple statements, its focus over the past year has largely been on monitoring privacy notices and policies, the right of deletion, and the handling and implementation of consumer requests.  For example, the CPPA issued a formal enforcement advisory in April emphasizing that businesses must not request unnecessary personal information when consumers opt out of data sales.

The discussion at the July 16 meeting continues this trend but adds four new focus areas: dark patterns, honoring consumer opt-out requests, providing proper notice of information sales and sharing, and the agency’s prioritization of cases that affect vulnerable groups.

The CPPA notes that investigations typically span about 18 months and are ongoing across a variety of sectors, encouraging businesses to continue to be proactive in their privacy compliance efforts.  The agency hopes to continue issuing enforcement advisories as it collaborates with other states and federal partners in addressing privacy issues.

Additionally, the agency is considering new draft regulations that would address a variety of privacy issues, including privacy rights around automated technology and artificial intelligence (“AI”).  These regulations include increased audit requirements and rules guiding companies on using AI in their businesses. These proposals will be closely watched, as they may impact common practices, including those relating to employee monitoring.

In other words, although the state legislative season has ended for the year, the privacy compliance landscape continues to evolve.

Over the course of the past few months, the Office of Civil Rights (OCR) and the Office of the National Coordinator for Health Information Technology (ONC), both of which are divisions of the U.S. Department of Health and Human Services (HHS), have issued a series of new regulations and guidance related to the Health Insurance Portability and Accountability Act of 1996 (HIPAA).

The Upshot

  • OCR issued a final rule that modifies HIPAA to support reproductive health care privacy.
  • OCR issued new guidance which clarifies and revises how the HIPAA rules apply to a Regulated Entity’s use of tracking technologies, although a recent court decision struck down a significant portion of that guidance.
  • OCR published frequently asked questions to address notice and breach procedure questions related to the Change Healthcare cyber attack.
  • ONC issued a final rule that requires Health IT Modules to provide an “internet-based method” for an individual to request a restriction on the use or disclosure of their PHI.

The Bottom Line

Covered entities under HIPAA (including employer-sponsored health benefit plans), as well as their business associates, should be aware of these new rules and guidance in order to maintain compliance with HIPAA. Attorneys in Ballard Spahr’s Health Care Industry Group are continuously tracking the developments and are available for counsel.

In the first half of 2024, OCR and ONC have issued rules and guidance related to HIPAA on four topics of importance to health plans, health care clearinghouses, and health care providers that are subject to HIPAA, as well as their business associates (collectively “Regulated Entities”).

Reproductive Health Care Privacy Final Rule

On April 22, 2024, OCR issued a final rule to modify HIPAA to support reproductive health care privacy. The final rule makes a number of significant changes to the HIPAA regulations. For example, the new rule:

  • Prohibits the use or disclosure of Reproductive Health Care Information (RHI) by Regulated Entities for the purpose of investigating or imposing liability on any person for the mere act of seeking, obtaining, providing, or facilitating reproductive health care that is lawful under the circumstances in which it was provided, or to identify any person for such purposes. These prohibited purposes include, but are not limited to, law enforcement investigations, third-party investigations in furtherance of civil proceedings, state licensure proceedings, criminal prosecutions, and family law proceedings.
  • Requires Regulated Entities to obtain a signed attestation that certain requests, including subpoenas, for RHI are not for these prohibited purposes.
  • Requires Regulated Entities to modify their Notice of Privacy Practices to address reproductive health care privacy.
  • Includes a presumption that, for HIPAA purposes, reproductive health care was lawful unless the Regulated Entity has “actual knowledge” that the care was not lawful under the circumstances.

Compliance is required by Dec. 23, 2024, except for updates to the Notice of Privacy Practices, which are required by Feb. 16, 2026.

OCR Guidance Regarding the Use of Tracking Technologies

On March 18, 2024, OCR issued new guidance on how the HIPAA rules apply to a Regulated Entity’s use of third-party tracking technologies, such as cookies and pixels. The new publication updates guidance that OCR originally published on these technologies in December 2022 and includes a number of significant revisions and clarifications. For example, the new guidance:

  • Clarifies that not all data elements collected by website tracking technologies constitute PHI. In order to constitute PHI, the information must be related to an individual’s past, present, or future health, health care, or payment for health care.
  • Suggests an alternative solution for dealing with a technology vendor who will not sign a Business Associate Agreement (BAA): the Regulated Entity can establish a BAA with a Customer Data Platform vendor, who would then de-identify online tracking information that includes PHI. The Customer Data Platform vendor can then only disclose de-identified information to tracking technology vendors.
  • Emphasizes that OCR is going to prioritize compliance with the HIPAA Security Rule in investigations into the use of online tracking technologies.

However, on June 20, 2024, the U.S. District Court for the Northern District of Texas vacated a significant portion of OCR’s tracking technology guidance on the grounds that it exceeded OCR’s statutory authority under HIPAA. Specifically, the court stated that metadata from a user’s search of a provider’s public-facing web page does not meet the definition of “individually identifiable health information” under HIPAA. As of now, the tracking technology guidance is still on the HHS website, but HHS has stated that it is evaluating its next steps in light of this recent decision.

OCR Updates and FAQs Regarding the Change Healthcare Cyber Attack

On April 19, 2024, OCR published a webpage with frequently asked questions (FAQs) concerning the Change Healthcare (a unit of UnitedHealth Group (UHG)) cybersecurity incident which occurred in late February 2024. OCR then updated the FAQs on May 31, 2024, to address additional concerns. In summary, the FAQs explain that:

  • OCR has initiated an investigation into the Change Healthcare cybersecurity incident to determine whether a breach of unsecured PHI occurred and into Change Healthcare’s and UHG’s compliance with the HIPAA Rules.
  • OCR is not prioritizing investigations of the covered entities and business associates engaged with Change Healthcare and UHG. However, the guidance reminds these other entities of their obligation to have BAAs in place and to provide timely breach notifications to HHS and the affected individuals if and when they receive notice from Change Healthcare.
  • If a covered entity receives notice that it has been affected by a breach by Change Healthcare, it may delegate to Change Healthcare the task of providing the required HIPAA breach notifications on its behalf. Only one entity – which could be the covered entity itself, UHG, or Change Healthcare – needs to complete breach notifications to affected individuals and HHS, and a covered entity and Change Healthcare may cooperatively satisfy any breach obligations under HIPAA.

Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing Final Rule

On February 8, 2024, ONC issued a final rule that, in part, supports the HIPAA Privacy Rule. Under the HIPAA Privacy Rule, covered entities are required to allow individuals to request a restriction on the use or disclosure of their PHI for treatment, payment, or health care operations and to have policies in place by which to accept or deny such requests. However, the HIPAA Privacy Rule does not specify a particular process to be used by individuals to make such requests or for the entity to accept or deny the request. In guidance that addresses various technical standards applicable to electronic health information, the ONC sets forth a standard that requires Health IT Modules to support an internet-based method for an individual to request such a restriction.

The authors express their thanks to Summer Associate Sofia E. Reed for her efforts in the preparation of this Briefing.

The Consumer Financial Protection Bureau (CFPB) has launched the process for independent standard-setting bodies to receive formal recognition, as part of its efforts to shift towards open banking in the United States.

On June 5, 2024, the CFPB finalized a rule outlining the minimum attributes that standard-setting bodies must exhibit to issue standards in compliance with CFPB’s proposed Personal Financial Data Rights Rule.  The Personal Financial Data Rights Rule, proposed in October 2023, is the first federal legal framework for open banking under Section 1033 of the 2010 Consumer Financial Protection Act.  This previously untapped legal authority gives consumers the right to control their personal financial data and assigns the task of implementing personal financial data sharing standards and protections to the CFPB.

As currently drafted, the Personal Financial Data Rights Rule would grant companies the ability to utilize technical standards developed by standard-setting organizations recognized by the CFPB.  “Industry standard-setting bodies that operate in a fair, open, and inclusive manner have a critical role to play in ensuring a safe, secure, reliable, and competitive data access framework,” the CFPB stated in the proposal.

Under the rule launching the approval process, industry standard-setting bodies can apply to be recognized by the CFPB.  Those seeking approval must demonstrate the following attributes:

  • Openness: A standard-setting organization’s sources, procedures, and processes must be open to all interested parties, including public interest groups, consumer advocates, and app developers;
  • Balanced-Decision Making: The decision-making power to set standards must be balanced across all interested parties. There must be meaningful representation for large and small commercial entities, and balanced representation must be reflected at all levels of the standard-setting body;
  • Due Process: The standard-setting body must use documented and publicly available policies and procedures to provide a fair and impartial process, and an appeals process must be available for the impartial handling of procedural appeals;
  • Consensus: Standards development must proceed by consensus but not necessarily unanimity; and
  • Transparency: Procedures must be transparent to participants and publicly available.

The CFPB also outlined the application process, which involves requesting recognition, followed by additional information requests from the CFPB and, potentially, public comment.  Next, the CFPB will review the available information against the requirements listed above, make a decision on the application, and, if approved, officially recognize the organization as a standard-setting body.

The CFPB also has the power to (a) revoke standard-setters’ recognition if they fail to meet the qualifications and (b) impose a maximum recognition duration of five years, after which recognized standard-setters will have to apply for re-recognition.  This rule will take effect 30 days after its publication in the Federal Register.

Fair standards issued by standard-setters outside the agency will help companies comply with the proposed Personal Financial Data Rights Rule.  Interested standard-setters are encouraged to begin ensuring their adherence to this new rule.

In a reminder that open source products can carry significant risks beyond intellectual property, a vulnerability in a compression tool commonly used by developers has triggered widespread concerns. 

XZ Utils (“XZ”) is an open source data compression utility, first published in 2009 and widely used in Linux and macOS systems. The tool is primarily used for data compression and decompression and may reside on routers and switches, VPN concentrators, and firewalls. XZ is used on many smartphones, televisions, and most web servers on the internet. Despite its wide adoption, XZ, like many open source tools, was primarily maintained by a single volunteer, who maintained the software for free.

When this volunteer ran into some personal matters, he turned over the maintenance responsibilities to JiaT75, known as Jia Tan (or what many now believe was a group of hackers working under this alias). In February 2024, Jia Tan updated XZ for versions 5.6.0 and 5.6.1, which included malicious code. This malicious code could be used to create a backdoor on infected devices, including the potential for stealing encryption keys or installing malware. Jia Tan took several steps to obfuscate the addition of malicious code. For example, the malicious code was not included in the public GitHub repository but instead included only in tarball releases. Further, the backdoor was deployed only in certain environments to avoid detection.

On March 29, 2024, a security researcher stumbled onto a software bug that led him to discover and report the XZ attack. The Cybersecurity & Infrastructure Security Agency (CISA) issued an alert recommending that users downgrade to an uncompromised version of XZ, and Linux vendors promptly issued press releases and remediation efforts to minimize the effects of the attack.

The XZ attack has raised questions within the open source community about the risks of having critical software maintained and governed by unpaid volunteers. The attack also serves as a reminder that even widely adopted software is at risk of attack, and companies should prepare for attacks on their own or their third-party vendors’ software. As part of that process, companies should:

Know your dependencies. Be able to quickly identify, or ensure that third-party vendors can quickly identify, whether a certain software package with a known vulnerability is used in any of a company’s software.
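As a minimal illustrative sketch (the inventory contents and helper name below are hypothetical; only the compromised XZ releases 5.6.0 and 5.6.1 come from the incident described above), a dependency inventory check might flag the known-bad versions like this:

```python
# Versions of XZ Utils named as compromised in the CISA alert.
COMPROMISED_XZ = {"5.6.0", "5.6.1"}

def is_compromised_xz(package: str, version: str) -> bool:
    """Return True if this package/version pair matches a known-bad XZ release."""
    return package == "xz" and version in COMPROMISED_XZ

# Hypothetical inventory, e.g., as produced by software bill of materials (SBOM) tooling.
inventory = [("openssl", "3.0.13"), ("xz", "5.6.1"), ("zlib", "1.3.1")]

# Flag any dependency matching a compromised release.
flagged = [(pkg, ver) for pkg, ver in inventory if is_compromised_xz(pkg, ver)]
print(flagged)  # → [('xz', '5.6.1')]
```

The same lookup pattern generalizes to any advisory: maintain a machine-readable inventory, then match it against published lists of vulnerable versions so remediation can begin quickly.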

Develop disaster recovery and incident response plans. Response plans are typically more comprehensive and more effective when they are created before an incident. Companies should consult with a variety of subject matter experts, internally and externally, to evaluate the size and scope of their business activities, the amount and type of personal information collected and stored, the locations of operations, and applicable federal, state, and sector-specific regulatory requirements.

Apply technology mitigation strategies. Limit access to critical software systems to only those employees and independent contractors who need access to such systems and monitor usage for any unusual behavior.

Review applicable policies and procedures often. Outside counsel can review policies and procedures to ensure compliance with the fast-changing regulatory landscape.

Review vendor contracts for liability protections. Scrutinize vendor contracts for appropriate risk shifting terms. Indemnification clauses may be appropriate and might include claims related to cybersecurity incidents, data breaches and regulatory liability. Representations and warranties might be used to represent that certain mitigation strategies will be deployed and compliance standards will be maintained. Insurance can be a helpful tool, and covenants to purchase adequate insurance might be considered. Further, it may be appropriate to carve out some cybersecurity incidents from a liability cap. In-house and outside counsel can review risk shifting terms in light of current market and legal trends.

Minnesota is the latest state to move to pass legislation regulating the processing and controlling of personal data (HF 4757 / SF 4782). If signed into law by Governor Tim Walz, the Minnesota Consumer Data Privacy Act, or MCDPA, would go into effect on July 31, 2025, and provide various consumer data privacy rights and impose obligations on entities that control or process Minnesota residents’ personal data.

The MCDPA applies to entities that control or process the personal data of 100,000 or more consumers, or that derive over 25% of their revenue from the sale of personal data and control or process the personal data of 25,000 or more consumers. Following in the footsteps of Texas and Nebraska, the MCDPA exempts small businesses as defined by the United States Small Business Administration. The law also contains targeted data-level exemptions for health and financial data processing, but not entity-level exemptions.

In addition to the rights of access, rectification, erasure, and portability, and the right to opt out of targeted advertising, the sale of personal data, and profiling, consumers under the MCDPA would also have the novel right to question the result of a profiling decision and to request additional information from a controller regarding that decision.

The MCDPA outlines several responsibilities with which data controllers must comply, some of which are new obligations beyond what is contained in other laws. For example, the MCDPA requires that a “controller shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data, including the maintenance of an inventory of the data that must be managed to exercise these responsibilities.” The maintenance of this type of inventory is a first under U.S. state privacy law.

There are also data obligations relating to transparency in privacy notice and disclosure; limitation of the use of data in relation to processing and physical data security practices; nondiscrimination in the processing of personal data; and an obligation to appoint a chief privacy officer or privacy lead for the organization.

Enforcement would fall under the purview of the Minnesota Attorney General, and businesses would have a thirty-day right-to-cure period, which expires January 31, 2026.

Assuming it is signed into law as expected, Minnesota will join the ranks of the seventeen other states (eighteen counting Florida) that have passed comprehensive consumer privacy acts. As with each of the other states’ acts, Minnesota’s bill shares some similarities with these other acts while also containing some unique provisions. Businesses in Minnesota would do well to start reviewing procedures and processes in preparation for the MCDPA.

Colorado has become the first state to pass legislation (SB24-205) regulating the use of artificial intelligence (AI) within the United States. This legislation is designed to address the influence and implications, ethically, legally, and socially, of AI technology across various sectors.

The bill applies to any person doing business in Colorado, including developers or deployers of high-risk AI systems that are intended to interact with consumers. The bill defines a high-risk AI system as any AI system that is a substantial factor in making a consequential decision. Notably, the definition does not include (among others) anti-fraud technology that does not use facial recognition, anti-malware, data storage, databases, video games, and chat features, so long as they do not make consequential decisions.

The bill includes a comprehensive framework governing the use of AI within government, education, and business with a focus on promoting ethical standards, transparency, and accountability in AI development and deployment. The bill requires disclosure for the use of AI in decision-making processes, sets out ethical standards to guide AI development, and provides mechanisms of recourse and oversight in cases of AI-related biases or errors. These recourse mechanisms include opportunities for consumers to correct any incorrect personal data processed by a high-risk AI system as well as an opportunity to appeal an adverse decision made by a system with human review (if possible).  The disclosure requirements will apply to developers, requiring a publicly available statement that describes methods used to manage risks of algorithmic discrimination.  

The bill requires the development of several compliance mechanisms if an entity uses high-risk AI systems. These include impact assessments, risk management policies and programs, and annual review of the high-risk systems. These mechanisms are designed to promote transparency in the development and use of these systems.

The passage of this bill positions Colorado at the forefront of AI regulation in the US, setting a precedent for other states and jurisdictions grappling with similar challenges.

Judges are beginning to address the increasing use of AI tools in court filings—including reacting to instances of abuse by lawyers using them for generative purposes and requiring disclosures regarding the scope of AI use in the drafting of legal submissions. Now JAMS, the largest private provider of alternative dispute resolution services worldwide, has issued rules—effective immediately—designed to address the use and impact of AI.

The Upshot

  • Alternative Dispute Resolution (ADR) providers are now joining courts in trying to grapple with the impact of AI on the practice of law.
  • As they do when dealing with courts, litigants need to pay close attention to the rules of a particular forum as they pertain to AI.
  • When choosing an arbitration forum in agreements, there may be reasons to choose a forum with AI rules and to specify that those procedures be followed. 

The Bottom Line

AI is reshaping the legal landscape and compelling the industry to adapt. Staying up-to-date with these changes has become as fundamental to litigation as other procedural rules. The Artificial Intelligence Team at Ballard Spahr monitors developments in AI and is advising clients on required disclosures, risk mitigation, the use of AI tools, and other evolving issues. 

Judges are beginning to address the increasing use of artificial intelligence (AI) tools in court filings—including reacting to instances of abuse by lawyers using them for generative purposes and requiring disclosures regarding the scope of use in the drafting of legal submissions—by issuing standing orders, as detailed here. Staying up-to-date with these changes will soon be as fundamental to litigation as other procedural rules.

In line with court trends, Judicial Arbitration and Mediation Services (JAMS), an alternative dispute resolution (ADR) services company, recently released new rules for cases involving AI. JAMS emphasized that the purpose of the guidelines is to “refine and clarify procedures for cases involving AI systems,” and to “equip legal professionals and parties engaged in dispute resolution with clear guidelines and procedures that address the unique challenges presented by AI, such as questions of liability, algorithmic transparency, and ethical considerations.”

Although courts have not yet settled on a definition for AI, JAMS took deliberate steps to define AI specifically as “a machine-based system capable of completing tasks that would otherwise require cognition.” This definition makes the scope of the rules clearer. Additionally, the rules encompass an electronically stored information (ESI) protocol approved for AI cases, along with procedures for overseeing the examination of AI systems, materials, and experts to accommodate instances where the existing ADR process lacks adequate safeguards for handling the intricate and proprietary nature of such data.

Specifically, the procedures dictate that before any preliminary conference, each party must voluntarily exchange their non-privileged and relevant documents and other ESI. JAMS suggests that prior to such exchange, the parties should enter into their AI Disputes Protective Order to protect each party’s confidential information.

The form protective order, which is not provided under JAMS’s regular rules, limits the disclosure of certain designated documents and information to the following specific parties: counsel, named parties, experts, consultants, investigators, the arbitrator, court reporters and staff, witnesses, the mediator, author or recipient of the document, other persons after notice to the other side, and “outside photocopying, microfilming or database service providers; trial support firms; graphic production services; litigation support services; and translators engaged by the parties during this Action to whom disclosure is reasonably necessary for this Action.” The list of parties privy to such confidential information does not include any AI generative services, a choice that is consistent with the broad concern that confidential client information is currently unprotected in this new AI world.

The rules further provide that in cases where the AI systems themselves are under dispute and require production or inspection, the disclosing party must provide access to the systems and corresponding materials to at least one expert in a secured environment established by that party. The expert is prohibited from removing any materials or information from this designated environment.

Additionally, experts providing opinions on AI systems during the ADR process must be mutually agreed upon by the parties or designated by the arbitrator in cases of disagreement. Moreover, the rules confine expert testimony on technical issues related to AI systems to a written report addressing questions posed by the arbitrator, supplemented by testimony during the hearing. These changes recognize the general need for both security and technical expertise in this area so that the ADR process can remain digestible to the arbitrator/mediator, who likely has no, or limited, prior experience in the area.

While JAMS claims to be the first ADR services company to issue such guidance, other similar organizations have advertised that their existing protocols are already suited to the current AI landscape and court rules.

Indeed, how to handle AI in the ADR context has been top of mind for many in the field. Last year, the Silicon Valley Arbitration & Mediation Center (SVAMC), a nonprofit organization focused on educating about the intersection of technology and ADR, released its proposed, “Guidelines on the Use of Artificial Intelligence in Arbitration.” SVAMC recommends that those participating in ADR should use their guidelines as a “model” for navigating the procedural aspects of ADR related to AI, which may involve incorporating such guidelines into some form of protective order.

In part, the clauses (1) require that the parties familiarize themselves with the relevant AI tool’s uses, risks, and biases; (2) make clear that the parties of record remain subject to “applicable ethical rules or professional standards” and that parties are required to verify the accuracy of any work product that AI generates, as that party will be held responsible for any inaccuracies; and (3) provide that disclosure regarding the use of AI should be determined on a case-by-case basis.

The SVAMC guidelines also focus on confidentiality considerations, requiring that the parties redact privileged information before inputting it into AI tools in certain instances. SVAMC even goes so far as to make clear that arbitrators cannot substitute AI for their own decision-making power. SVAMC’s Guidelines are a useful tool for identifying the significant factors that parties engaging in ADR should contemplate, and that the ADR community at large is contemplating.

As courts provide additional legal guidance, and more AI-use issues arise, we expect that more ADR service companies will move in a similar direction as JAMS, and potentially adopt versions of SVAMC’s guidance, as the procedures and technology continue to evolve.

Newly effective regulations governing confidentiality of Substance Use Disorder (SUD) records now more closely mirror regulations implementing the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and other federal law. The new measures ease the administrative burden on programs by aligning regulations governing the privacy of Part 2 SUD records with the regulatory framework governing HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act. Part 2 programs have until February 16, 2026, to implement any necessary changes.

The Upshot

  • Jointly, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) and the Substance Abuse and Mental Health Services Administration (SAMHSA) revised 42 C.F.R. Part 2 (Part 2), which governs certain patient-identifying substance use disorder (SUD) records, effective April 16, 2024.
  • Significant regulatory changes implement Section 3221 of the Coronavirus Aid, Relief, and Economic Security (CARES) Act (relating to “confidentiality and disclosure of records relating to substance use disorder”), which required HHS to update both HIPAA and Part 2 regulations in order to better align respective privacy protections.
  • The changes consist of new and revised Part 2 regulations governing, in part: patient consent for SUD record use, disclosure, and redisclosure; individual patient rights relating to notices, accountings of disclosures, and complaint procedures; and increased penalties for noncompliance. 
  • Part 2 programs may now maintain, use, and disclose records in a manner more consistent with HIPAA regulations.
  • As a result, Part 2 programs have expanded flexibility in utilizing Part 2 records, but must carefully note additional compliance responsibilities and civil penalties for noncompliance.

The Bottom Line

Part 2 violations will now be subject not only to criminal penalties, but also the civil monetary penalties established by HIPAA, HITECH, and their implementing regulations. These regulations (along with the CARES Act) fundamentally alter the potential cost of noncompliance for Part 2 programs and may ultimately result in increased enforcement activity.

Read the full alert here.

In a regulatory filing, Reddit announced that the FTC is probing Reddit’s proposal to sell, license and share user-generated content with third parties to train artificial intelligence (AI) models.  This move underscores the growing scrutiny over how online platforms harness the vast amounts of data they collect, particularly in the context of AI development. 

The investigation brings to light several legal considerations that could have far-reaching consequences. In particular, it highlights the importance of clear and transparent user agreements, including terms of service and privacy policies. Users must be fully aware of how their data is used, especially when it contributes to the development of AI technologies. This approach tracks with the FTC’s stance that companies seeking to use consumer personal data to train AI models should notify consumers meaningfully rather than surreptitiously change user agreements.

The FTC’s actions signal a more aggressive stance on data privacy and usage, particularly in relation to AI. For the tech industry, this could mean a shift towards more stringent data handling and consent practices. Companies may need to reassess their data collection and usage policies to ensure compliance with emerging legal standards. Furthermore, this investigation could pave the way for new regulations specifically addressing the use of personal data in AI development.

The UN General Assembly has adopted a landmark resolution focusing on the safe, secure, and trustworthy use of Artificial Intelligence (AI). This resolution, led by the United States and supported by over 120 Member States, marks the first time the Assembly has adopted a resolution on regulating AI. The resolution calls for protection of rights both online and offline, urges cooperation to bridge the digital divide, and aims for AI to advance sustainable development globally. While the UN Resolution generally consists of high-level principles rather than specific compliance steps, it is an important reminder that there is a growing legal and regulatory consensus on a responsible AI governance framework.

The UN Resolution emphasizes the development of AI systems in a way that is safe, secure, trustworthy, and sustainable. It recognizes the potential of AI to advance the Sustainable Development Goals (SDGs) and underlines the importance of human-centric, reliable, and ethical AI systems. It stresses the need for global consensus on AI governance and capacity building in developing countries, ensuring AI benefits are shared globally. The UN Resolution also highlights the urgency of developing effective safeguards and standards for AI, promoting transparent, inclusive, and equitable use of AI while respecting intellectual property rights and privacy.

The UN Resolution specifically encourages all Member States to be cognizant of data security issues when promoting AI systems by “[f]ostering the development, implementation and disclosure of mechanisms of risk monitoring and management, mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate, across the life cycle of artificial intelligence systems.” The UN Resolution suggests that businesses need to develop and implement comprehensive risk monitoring and management systems for their AI technologies. This includes securing data throughout the AI lifecycle, ensuring robust personal data protection, and regularly conducting privacy and impact assessments. Essentially, companies should be proactive in identifying and managing potential risks associated with AI use, particularly regarding data privacy. This approach is crucial for compliance with emerging international standards and for maintaining trust in AI systems and applications.

The UN Resolution also encourages Member States to consider data privacy when promoting AI systems by “[s]afeguarding privacy and the protection of personal data when testing and evaluating systems, and for transparency and reporting requirements in compliance with applicable international, national and subnational legal frameworks, including on the use of personal data throughout the life cycle of artificial intelligence systems.” The UN Resolution implies a need to prioritize privacy at all stages of AI system development and usage. Transparency should be ensured in how personal data is handled, and the relevant legal frameworks should be complied with at all levels. This includes establishing clear policies and procedures for data privacy, regularly reviewing and reporting on AI system operations in relation to personal data use, and staying updated with evolving regulations in different jurisdictions. Adherence to these standards is essential for legal compliance and maintaining consumer trust.

In some ways, after the EU AI Act’s passage, the UN Resolution could be seen as less significant because it does not carry a regulatory enforcement threat. However, at the very least, the UN Resolution should serve as a warning to companies around the globe, including those that operate only in the United States, that regulators everywhere are looking for certain core governance positions when it comes to AI. Companies should be sure to document those governance steps or risk becoming the focus of enforcement actions under myriad theories.