Several international organisations are engaged in the governance of artificial intelligence. We have compiled the most important information for you.
In September 2024, the AI Advisory Body, established within the Office of the Secretary-General’s Envoy on Technology (OSET), published its report titled "Governing AI for Humanity". The report offers a range of perspectives and approaches to the governance of artificial intelligence (AI) in the public interest, with a particular focus on human rights and the Sustainable Development Goals (SDGs). It outlines seven key recommendations to strengthen international AI governance:
On 21 March 2024, the UN General Assembly adopted the first ever global resolution promoting safe and trustworthy artificial intelligence systems. Supported by over 120 UN Member States – including both China and the United States – the resolution is intended to serve as a foundation for future international guidance on AI regulation.
The resolution reaffirms the Assembly’s commitment to the protection and promotion of human rights and fundamental freedoms. It emphasizes that the rights individuals enjoy offline must also be upheld online, throughout the entire lifecycle of AI systems. The General Assembly therefore calls on all Member States and relevant stakeholders to refrain from deploying AI systems that fail to comply with international human rights standards or pose undue risks to human rights.
Although legally non-binding, the resolution underscores the importance of data protection and stresses the need to develop and implement mechanisms for risk monitoring and impact assessments throughout the lifecycle of AI technologies. It also calls for increased investment in effective safeguards, including physical security, AI system security, and risk management frameworks.
Furthermore, the Assembly acknowledges the differing levels of technological development between and within countries and recognizes that developing nations face particular challenges in keeping pace with rapid innovation. It encourages Member States and stakeholders to cooperate with and support developing countries in ensuring inclusive and equitable access, bridging the digital divide, and strengthening digital literacy.
Within the UN system, the International Telecommunication Union (ITU) plays a leading role in the field of artificial intelligence. The organization focuses on harnessing the potential of AI technologies to support and advance the 17 Sustainable Development Goals (SDGs).
In line with its specific mandate, the ITU places particular emphasis on the application of AI in telecommunications and information and communication technologies (ICTs). Furthermore, the ITU has collaborated closely with other UN specialized agencies and programmes through thematic focus groups addressing opportunities and challenges of AI in various domains, including:
In addition, the ITU leads the "AI for Good" initiative, a digital platform that connects AI innovators and stakeholders to identify practical AI solutions that can advance the SDGs. The annual highlight of this initiative is the AI for Good Global Summit.
The ITU also co-leads, together with UNESCO, the "Inter-Agency Working Group on Artificial Intelligence", which brings together expertise across the UN system to support initiatives on AI ethics and strategic capacity-building related to AI governance.
UNESCO focuses on the ethical dimensions of artificial intelligence, particularly in the areas of education and culture. The organization developed the Recommendation on the Ethics of Artificial Intelligence, which serves as a global framework for educational institutions and other stakeholders to ensure the use of AI technologies aligns with fundamental rights and freedoms. A key emphasis is placed on the protection of cultural diversity. UNESCO also convenes the "Global Forum on the Ethics of Artificial Intelligence".
To support Member States in implementing the Recommendation, UNESCO developed the Readiness Assessment Methodology (RAM). This methodology includes a broad set of qualitative and quantitative questions to assess various dimensions of a country’s AI ecosystem, including legal and regulatory, social and cultural, economic, scientific and educational, as well as technological and infrastructural aspects.
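By way of illustration only, the logic of such an assessment can be sketched as coded answers aggregated per dimension. The dimension names in the following Python sketch come from the text above; the questions, example scores, and simple averaging are hypothetical assumptions, not UNESCO's actual scoring method.

```python
# Hypothetical RAM-style aggregation: coded answers to qualitative and
# quantitative questions are grouped by dimension and averaged.
# The scoring scheme and values are illustrative, not UNESCO's.

ram_responses = {
    "legal_and_regulatory":              [1.0, 0.5, 0.0],
    "social_and_cultural":               [0.5, 0.5],
    "economic":                          [1.0, 1.0, 0.5],
    "scientific_and_educational":        [0.0, 0.5],
    "technological_and_infrastructural": [0.5, 1.0],
}

# One readiness score per dimension of the country's AI ecosystem.
readiness = {dim: sum(s) / len(s) for dim, s in ram_responses.items()}

for dim, score in readiness.items():
    print(f"{dim}: {score:.2f}")
```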
Moreover, UNESCO provides a wide range of resources and projects aimed at strengthening AI-related capacities among both Member States and the broader public. In 2023, UNESCO launched a project to enhance the capacities of interested European authorities in the field of artificial intelligence. In this context, extensive training material was developed, including content related to the EU AI Act, aimed at existing and prospective AI market surveillance authorities.
The following training modules are offered:
WIPO plays a central role in global debates on artificial intelligence and intellectual property. Generative AI systems, which process large datasets – including text, images, and other media – to generate content traditionally created by humans, raise complex legal issues. These include potential copyright infringement through the use of protected works in training data, questions of copyright protection for AI-generated content, and the need for adequate patent protection for AI models themselves.
To address these challenges, WIPO promotes international dialogue through its "WIPO Conversation on Intellectual Property and Artificial Intelligence". This platform brings together global experts to discuss the impact of AI on IP law and practice.
UN Global Pulse is an initiative of the UN Secretary-General aimed at leveraging big data and artificial intelligence to improve humanitarian and development efforts. By analyzing large-scale data, Global Pulse seeks to generate insights that support rapid and effective responses to crises. The initiative is considered a model for the use of AI in real-time data analysis for the public good.
The International Monetary Fund focuses on the economic and financial implications of the ongoing development and deployment of artificial intelligence systems. In this context, the IMF has published several studies and papers addressing various relevant areas, including impacts on the financial sector, issues of wealth distribution, competition, and opportunities and challenges in the field of taxation. Notable publications include:
The Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI) was established to advance understanding of AI, robotics, and related technologies, particularly in the context of crime prevention, counter-terrorism, and other security threats. The Centre’s mission is to assist UN Member States in assessing the risks and opportunities associated with these technologies and exploring their potential to strengthen efforts against violence and crime.
In 2022, the International Atomic Energy Agency (IAEA) published a report entitled "Artificial Intelligence for Accelerating Nuclear Applications, Science and Technology". Additionally, the IAEA hosts the "AI for Atoms" platform, which provides comprehensive information on the Agency’s AI-related activities, including relevant initiatives, news, publications, and events.
The UN Refugee Agency (UNHCR) applies artificial intelligence to enhance the effectiveness of humanitarian assistance. AI technologies are used to forecast refugee movements, simulate the spread of diseases such as COVID-19 in refugee camps, and support the coordination of responses. Moreover, AI aids in analyzing text data from social media and other sources to identify protection needs and enable timely interventions.
The OECD Working Party on Artificial Intelligence Governance (AIGO) supports the OECD's Digital Policy Committee in matters related to AI policy and governance. It leads the work programme on AI governance and focuses on the analysis, implementation, monitoring, and evaluation of national AI strategies.
Its core tasks include:
AIGO also supports the implementation of OECD standards, promotes international exchange of best practices, and develops tools such as the OECD.AI Policy Observatory and Globalpolicy.AI. It works in close cooperation with international organisations (e.g. GPAI, UNESCO, World Bank), as well as with civil society and private sector stakeholders.
On 22 May 2019, the OECD Council adopted the Recommendation on Artificial Intelligence, proposed by the Committee on Digital Economy Policy (CDEP). As the first intergovernmental AI policy instrument of its kind, the recommendation seeks to strengthen innovation and trust in AI by promoting responsible practices. It highlights the importance of human rights and democratic values, complementing existing OECD standards in areas such as data protection, digital security risk management, and responsible business conduct.
The recommendation defines five values-based principles for the development and use of AI:
It also provides five policy recommendations to governments:
The Recommendation was revised twice: in 2023, to update the definition of AI systems, and in 2024, to address issues such as the growing importance of combating misinformation and disinformation, refining transparency and disclosure obligations, and strengthening security-related requirements.
The OECD has worked to build consensus on a common definition of an AI system. Member countries recently approved an updated version of the OECD’s definition:

"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
This definition largely aligns with that used in the European Union’s AI Act, which similarly defines an AI system as:
[An AI system is] "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
Launched in February 2020, the OECD.AI Policy Observatory supports evidence-based policy by providing resources, data, and analysis on artificial intelligence. Its activities are driven by a number of expert groups focusing on priority topics, including:
The OECD.AI Observatory operates a real-time database of public AI policies and initiatives: the Global AI Initiatives Navigator (GAIIN). GAIIN offers a global overview of national and international efforts in the field of AI and serves as a central resource for policymakers.
It is continuously updated by official contact points from participating countries, international organisations, and OECD.AI experts.
The OECD Framework for the Classification of AI Systems (Overview | Report) was developed by the OECD.AI Network of Experts to assist policymakers, regulators, legislators, and other stakeholders in assessing the risks and opportunities associated with different types of AI systems.
The framework classifies AI systems and applications along five dimensions: People & Planet, Economic Context, Data & Input, AI Model, and Task & Output. Each dimension contains specific characteristics and subcategories relevant to policy considerations and risk assessment.
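To make the structure concrete, the five dimensions can be read as fields of a single per-system record. The following Python sketch is our own rendering under that assumption; the class name, field comments, and example values are hypothetical and not part of the OECD framework itself.

```python
from dataclasses import dataclass, field

# Illustrative record type mirroring the framework's five dimensions.
# "AISystemProfile" and all example values are hypothetical.

@dataclass
class AISystemProfile:
    people_and_planet: dict = field(default_factory=dict)  # affected people, rights, environment
    economic_context: dict = field(default_factory=dict)   # sector, function, deployment scale
    data_and_input: dict = field(default_factory=dict)     # provenance, collection, structure
    ai_model: dict = field(default_factory=dict)           # model type, how built, how used
    task_and_output: dict = field(default_factory=dict)    # task performed, outputs, autonomy

# Example: profiling a hypothetical triage-recommendation system.
profile = AISystemProfile(
    people_and_planet={"impacted_groups": ["patients"]},
    economic_context={"sector": "healthcare", "scale": "pilot"},
    data_and_input={"provenance": "clinical records"},
    ai_model={"type": "supervised classifier"},
    task_and_output={"task": "recommendation", "oversight": "human-in-the-loop"},
)
```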
The OECD AI Incidents Monitor (AIM) collects and analyses AI-related incidents globally, helping policymakers, experts, and stakeholders better understand associated risks and harms.
An AI incident is defined as:
An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
(a) injury or harm to the health of a person or groups of people;
(b) disruption of the management and operation of critical infrastructure;
(c) violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
(d) harm to property, communities or the environment.
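For illustration, the four harm categories in this definition map naturally onto an enumeration, with an event qualifying as an incident once at least one category applies. The names in this Python sketch are our own hypothetical choices, not OECD terminology.

```python
from enum import Enum, auto

# Illustrative encoding of harm categories (a)-(d) from the definition above.
class HarmType(Enum):
    HEALTH = auto()                   # (a) injury or harm to people
    CRITICAL_INFRASTRUCTURE = auto()  # (b) disruption of critical infrastructure
    RIGHTS_VIOLATION = auto()         # (c) human rights or protected legal obligations
    PROPERTY_OR_ENVIRONMENT = auto()  # (d) property, communities, environment

def is_ai_incident(ai_system_involved: bool, harms: set[HarmType]) -> bool:
    """True if an AI system's development, use or malfunction directly or
    indirectly led to at least one of the enumerated harms."""
    return ai_system_involved and bool(harms)
```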
The AIM is designed to identify recurring patterns and provide insights into the complex nature of AI-related incidents. Initially, the monitor identifies incidents reported in reputable international media sources. These are then classified using machine learning models, based on criteria derived from the OECD AI classification framework — such as severity, affected sectors, relevant AI principles, harm types, and stakeholders involved.
Analysis is based on article headlines, abstracts, and opening paragraphs. Data is sourced from Event Registry, a news intelligence platform that processes over 150,000 English-language articles daily.
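As a rough sketch of what such a classification step could look like (the OECD's actual models, criteria, and thresholds are not reproduced here), an off-the-shelf zero-shot classifier can tag an article excerpt with the harm categories defined above. The model choice and threshold below are assumptions for illustration.

```python
from transformers import pipeline  # pip install transformers

# Illustrative only: label an article excerpt with harm categories using a
# generic zero-shot classifier; not the OECD's actual pipeline.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

HARM_LABELS = [
    "injury or harm to the health of people",
    "disruption of critical infrastructure",
    "violation of human rights or legal obligations",
    "harm to property, communities or the environment",
]

def classify_incident(headline: str, abstract: str, opening: str) -> dict:
    # Mirror the monitor's inputs: headline, abstract, opening paragraphs.
    text = " ".join([headline, abstract, opening])
    result = classifier(text, candidate_labels=HARM_LABELS, multi_label=True)
    # Keep labels above an arbitrary, illustrative confidence threshold.
    return {label: score
            for label, score in zip(result["labels"], result["scores"])
            if score >= 0.5}
```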
The OECD.AI website offers a comprehensive collection of publications and research reports on AI policy, both from within and beyond the OECD. These materials are categorised by policy areas including economy, education, health, competition, and digital policy.
The site also features a continuously updated feed of AI-related news from around the world, with articles classified as positive, negative, or neutral in tone regarding AI.
On 14 March 2024, the Committee on Artificial Intelligence (CAI) of the Council of Europe adopted the draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (Draft Explanatory Report). The Convention was formally adopted by the Committee of Ministers on 17 May 2024 and opened for signature on 5 September 2024 as a legally binding international treaty. Negotiations included all 46 Council of Europe member states, as well as the European Union and observer states including the United States.
The core of the Convention establishes fundamental principles, rules, and rights to ensure that the development and use of AI systems respect human rights, promote democracy, and uphold the rule of law. It applies to the design, development, and use of AI systems, with the aim of regulating them across all stages of their lifecycle. The Convention is applicable to both public and private actors, a point that remained controversial until the final negotiations. Exemptions for national security-related AI systems are included, but only in restricted form.
The Convention extends the existing human rights framework to the field of AI. It includes provisions on data protection and privacy, non-discrimination, and the protection of individual freedom, dignity, and autonomy. Signatory states are encouraged to involve stakeholders in the development and deployment of AI systems and to foster public debate on AI-related issues.
AI-specific obligations include the promotion of AI literacy, human oversight, and the requirement to inform individuals about interactions with AI systems.
The Convention aligns with the EU AI Act and other EU legal instruments, incorporating several key concepts from the EU legislation. It promotes a human-centric approach to AI based on human rights, democracy, and the rule of law, and adopts a risk-based framework. Key principles for trustworthy AI outlined in the Convention include transparency, robustness, safety, data governance, and privacy protection.
In support of the Framework Convention, the Council of Europe developed HUDERIA, a non-binding methodology for the risk and impact assessment of AI systems.
HUDERIA offers a structured approach to help public and private actors identify and manage the risks and impacts that AI systems may pose to human rights, democracy, and the rule of law throughout their lifecycle. It aims to bridge the gap between international human rights standards and existing technical frameworks for AI risk management.
The HUDERIA methodology is composed of four key elements:
At the G20 Summit in Osaka in June 2019, the G20 community engaged in a high-level discussion on artificial intelligence (AI) for the first time. The Heads of State and Government adopted a set of Principles for Responsible Stewardship of Trustworthy AI, which are largely based on the OECD AI Principles. These principles aim to serve as a foundation for national regulatory frameworks and to guide internationally active companies in developing their own standards. AI systems should be designed in a manner that respects the rule of law, human rights, democratic values, and diversity.
In the Leaders' Declaration adopted at the G20 Summit in New Delhi in September 2023, several measures were outlined to promote responsible use of AI:
Under Brazil’s G20 Presidency in 2024, in addition to promoting safe and transparent AI for public good, a special emphasis was placed on addressing inequality.
The Leaders' Declaration from the Rio de Janeiro Summit (18–19 November 2024) highlighted both the opportunities and challenges posed by AI. It emphasized the importance of a responsible, inclusive, and human-centred use of AI, safeguarding human rights, transparency, fairness, and data protection. A pro-innovation regulatory approach is to be pursued to mitigate risks while promoting the technology’s benefits. The declaration also stresses the need for international cooperation to bridge digital divides and support developing countries through capacity building and digital inclusion initiatives.
Particular attention was given to the impact of AI on the world of work. The G20 underlined the importance of fair working conditions, privacy protection, and closing digital gender gaps by 2030. It further encouraged the responsible use of AI to enhance education, healthcare, and women’s empowerment. Businesses were urged to engage in social dialogue with workers when introducing digital technologies, in order to strengthen acceptance. The declaration called for continued international cooperation to ensure that AI develops in a sustainable and equitable manner.
During the South African G20 Presidency in 2025, a new Task Force on Artificial Intelligence, Data Governance and Innovation for Sustainable Development was established.
This task force provides the G20 with a platform to shape the future development and use of AI in the interest of the global public good. It aims to advance the creation of safe, ethical, trustworthy, and resilient AI ecosystems.
Key outcomes include the launch of the "AI for Africa" initiative and the development of a Technology Policy Assistance Facility.
The Hiroshima Process was launched in May 2023 at the G7 Summit in Hiroshima (Leaders' Statement) to advance international dialogue on the opportunities and risks of artificial intelligence (AI).
This process led to the development of the G7 AI Principles and Code of Conduct (AIP&CoC - Guiding Principles for All AI Actors | Guiding Principles for Organisations Developing Advanced AI Systems | Code of Conduct for Organizations Developing Advanced AI Systems). A core element of the AIP&CoC is the G7’s strong commitment to key areas of AI governance. This includes a risk-based approach applied throughout the AI lifecycle, beginning with preventive risk assessments and mitigation measures prior to deployment. The importance of ongoing monitoring, reporting, and response to misuse and incidents is also emphasized. As a precaution, the documents highlight the need for developers and deployers to have policies and processes in place for risk management and robust security controls.
In addition to addressing risks associated with advanced AI systems, the AIP&CoC outline research and development priorities, including content authentication, data rights protection, mitigation of societal and security risks, use of AI to tackle global challenges such as climate change, and the development of technical standards.
The G7 has committed to further develop the principles and code as part of a comprehensive policy framework, in cooperation with other countries, the OECD, the Global Partnership on AI, and a wide range of stakeholders from academia, industry, and civil society.
Under the Italian G7 Presidency, artificial intelligence was a major focus throughout 2024, with particular emphasis on the ethical and socially responsible development of AI.
The Leaders’ Communiqué highlighted the importance of international cooperation and regulatory harmonization to strengthen the safety, transparency, and accountability of AI technologies and applications. It endorsed risk-based regulatory approaches aimed at fostering innovation, inclusive growth, and sustainability.
Leaders agreed to launch an Action Plan for AI in the World of Work and to develop a brand identity to support implementation of the international G7 AI Code of Conduct. The crucial role of robust and resilient semiconductor supply chains for safe and trustworthy AI was also emphasized. To address challenges in this area, the G7 welcomed the creation of a Semiconductors G7 Point of Contact Group.
At the 2025 G7 Leaders’ Summit in Canada, the G7 Leaders’ Statement on AI for Prosperity was adopted.
The statement underscores the use of AI in the public sector to improve service delivery, the support of small and medium-sized enterprises (SMEs) in adopting AI, and the promotion of fair, inclusive labour markets. It also addresses the energy-related challenges posed by AI, including the increased electricity demand from data centres and large language models. These should be addressed through innovation in energy efficiency and sustainable infrastructure.
The G7 emphasized partnerships with emerging markets and developing countries to build local innovation capacity and ensure equitable AI access. Key initiatives advancing these goals include the G7 GovAI Grand Challenge, the G7 AI Adoption Roadmap, and the G7 AI Network (GAIN).
The AI Seoul Summit was another major international initiative in the field of artificial intelligence. It was held on 21–22 May 2024, co-hosted by the Republic of Korea and the United Kingdom, building upon the outcomes of the AI Safety Summit held in Bletchley Park in November 2023.
The summit concluded with the "Ministerial Statement for Advancing AI Safety, Innovation and Inclusivity", in which 27 countries agreed to define common risk thresholds for the development and deployment of AI, and to strengthen scientific collaboration on AI safety. This includes cooperative efforts on safety testing, the development of evaluation guidelines, and the sharing of best practices.
In addition, ten countries and the European Union endorsed the "Seoul Declaration for Safe, Innovative, and Inclusive AI" to support the Ministerial Statement. The declaration aims to establish an international network of publicly backed AI safety institutes to encourage harmonized approaches to regulation, testing, and research, and to accelerate the creation of a global framework for secure AI applications. Signatories included: Australia, Canada, the European Union, France, Germany, Italy, Japan, South Korea, Singapore, the United Kingdom, and the United States.
Alongside political leaders, leading technology companies also participated in the AI Seoul Summit. Sixteen globally recognized firms—including Amazon, Google, Meta, Microsoft, Samsung, and OpenAI—signed the "Frontier AI Safety Commitments". These commitments include pledges not to develop or deploy AI models that pose catastrophic risks, and to implement responsible governance structures and transparency measures regarding their AI safety practices.