The UN General Assembly has adopted a landmark resolution focusing on the safe, secure, and trustworthy use of Artificial Intelligence (AI). This resolution, led by the United States and supported by over 120 Member States, marks the first time the Assembly has adopted a resolution on regulating AI. The resolution calls for protection of rights both online and offline, urges cooperation to bridge the digital divide, and aims for AI to advance sustainable development globally. While the UN Resolution generally consists of high-level principles as opposed to specific compliance steps, it is an important reminder that there is a growing legal and regulatory consensus on a responsible AI governance framework.

The UN Resolution emphasizes the development of AI systems in a way that is safe, secure, trustworthy, and sustainable. It recognizes the potential of AI to advance the Sustainable Development Goals (SDGs) and underlines the importance of human-centric, reliable, and ethical AI systems. It stresses the need for global consensus on AI governance and for capacity building in developing countries, ensuring AI benefits are shared globally. The UN Resolution also highlights the urgency of developing effective safeguards and standards for AI, promoting transparent, inclusive, and equitable use of AI while respecting intellectual property rights and privacy.

The UN Resolution specifically encourages all Member States to be cognizant of data security issues when promoting AI systems by “[f]ostering the development, implementation and disclosure of mechanisms of risk monitoring and management, mechanisms for securing data, including personal data protection and privacy policies, as well as impact assessments as appropriate, across the life cycle of artificial intelligence systems.” The UN Resolution suggests that businesses need to develop and implement comprehensive risk monitoring and management systems for their AI technologies. This includes securing data throughout the AI lifecycle, ensuring robust personal data protection, and regularly conducting privacy and impact assessments. Essentially, companies should be proactive in identifying and managing potential risks associated with AI use, particularly regarding data privacy. This approach is crucial for compliance with emerging international standards and for maintaining trust in AI systems and applications.

The UN Resolution also encourages Member States to consider data privacy when promoting AI systems by “[s]afeguarding privacy and the protection of personal data when testing and evaluating systems, and for transparency and reporting requirements in compliance with applicable international, national and subnational legal frameworks, including on the use of personal data throughout the life cycle of artificial intelligence systems.” The UN Resolution implies a need to prioritize privacy in all stages of AI system development and usage. Companies should be transparent about how personal data is handled and should comply with the relevant legal frameworks at every level. This includes establishing clear policies and procedures for data privacy, regularly reviewing and reporting on AI system operations in relation to personal data use, and staying updated with evolving regulations in different jurisdictions. Adherence to these standards is essential for legal compliance and for maintaining consumer trust.

In some ways, following passage of the EU AI Act, the UN Resolution could be seen as less significant because it carries no regulatory enforcement threat. At the very least, however, the UN Resolution should serve as a warning to companies around the globe, including those that operate only in the United States, that regulators everywhere are looking for certain core governance positions when it comes to AI. Companies should be sure to document those governance steps or risk becoming the focus of enforcement actions under myriad theories.