S/PV.10005 Security Council

Wednesday, Sept. 24, 2025 — Session 80, Meeting 10005 — New York

Provisional

Adoption of the agenda

The agenda was adopted.
The President: I would like to warmly welcome the Secretary-General and the distinguished presidents, prime ministers and other high-level representatives present in the Security Council Chamber. Their presence today underscores the importance of the subject matter under discussion. Before each member is a list of speakers who have requested to participate, in accordance with rules 37 and 39 of the Council’s provisional rules of procedure, as well as the previous practice of the Council in this regard. I propose that they be invited to participate in this meeting. There being no objection, it is so decided. The Security Council will now begin its consideration of the item on its agenda. I wish to draw the attention of Council members to document S/2025/593, which contains a letter dated 19 September 2025 from the Permanent Representative of the Republic of Korea to the United Nations addressed to the Secretary-General, transmitting a concept note on the item under consideration. I now give the floor to the Secretary-General, His Excellency Mr. António Guterres.
The Secretary-General: I thank the Republic of Korea for convening this high-level open debate at a decisive moment for global cooperation on artificial intelligence (AI). AI is no longer a distant horizon — it is here, transforming daily life, the information space and the global economy at breathtaking speed. The question is not whether AI will influence international peace and security, but how we will shape that influence. Used responsibly, AI can strengthen prevention and protection, anticipating food insecurity and displacement, supporting de-mining, helping to identify potential outbreaks of violence, and so much more. But without guardrails, it can also be weaponized. Recent conflicts have become testing grounds for AI-powered targeting and autonomy. AI-enabled cyberattacks can disrupt or destroy critical infrastructure in minutes. The ability to fabricate and manipulate audio and video threatens information integrity, fuels polarization and can trigger diplomatic crises, and the massive energy and water demands of large-scale models, coupled with competition over critical minerals, are creating new drivers of tension. Innovation must serve humankind, not undermine it. Last month, the General Assembly established an Independent International Scientific Panel on Artificial Intelligence and an annual Global Dialogue on Artificial Intelligence Governance (General Assembly resolution 79/325). This is a recognition of the unique convening power of the United Nations. Together, these initiatives aim to connect science, policy and practice, provide every country a seat at the table and reduce fragmentation. They represent practical tools to make AI safer, more inclusive and more accountable. I will soon launch an open call for nominations for the Scientific Panel. I urge all Member States to nominate eminent, diverse experts and to support its work. Today I wish to focus on four priorities. First, we must ensure human control over the use of force.
Let us be clear: humankind’s fate cannot be left to an algorithm. Humans must always retain authority over life-and-death decisions. The Council and Member States must ensure that military use of AI remains in full compliance with international law and the Charter of the United Nations, and human control and judgment must be preserved in every use of force. I reiterate my call for a ban on lethal autonomous weapons systems operating without human control, with a view to concluding a legally binding instrument by next year. And until nuclear weapons are eliminated, any decision on their use must rest with humans, not machines. Secondly, we must build coherent global regulatory frameworks. From design to deployment to decommissioning, AI systems must always comply with international law. Military uses must be clearly regulated through legal reviews, human accountability and strong safeguards against misuse. We need greater transparency, confidence-building and cooperation to reduce risks, especially in conflict zones. AI must never lower barriers to acquiring or using prohibited weapons or undermine disarmament obligations. I welcome the Responsible Artificial Intelligence in the Military Domain initiative and commend members’ leadership in these efforts. Last December, the General Assembly adopted a resolution on AI in the military domain and its implications for international peace and security (resolution 79/239). Building on that, I presented a report to the General Assembly (A/80/78) recommending that States take concrete steps to initiate a dedicated and inclusive process to address this issue. I urge Member States to take this forward. Thirdly, we must protect information integrity in situations of conflict and insecurity. The United Nations Global Principles for Information Integrity provide a foundation for coordinated action. 
Governments, platforms, media and civil society must cooperate to detect and deter AI-generated deception, from disinformation campaigns to deep fakes, targeting peace processes, humanitarian access and elections. We need transparency in the entire AI life cycle; rapid and verified attribution of information sources and their dissemination; and systematic safeguards to prevent AI systems from spreading disinformation and igniting violence. (spoke in French) Fourthly and lastly, we must close the AI capacity gap. Technology can accelerate sustainable development and foster stability and peace. We must create space for all nations to shape our AI future. This means investing in capacity-building, supporting talent development, ensuring that safe and reliable public infrastructure is in place, promoting data diversity to reduce bias and ensuring equitable access to AI tools, computing power and training. Last month, I presented a report outlining innovative voluntary financing options to support AI capacity‑building in developing countries (A/79/966). I urge members to support those efforts. In fields ranging from nuclear arms control to aviation safety, the international community has risen to the challenge posed by technologies that could destabilize our societies by establishing rules, setting up institutions and prioritizing human dignity. The window for shaping AI for peace, for justice and for humanity is closing. We must act without delay.
I thank the Secretary-General for the briefing. Mr. Bengio: I would like to thank you, Mr. President, Your Excellency Mr. Lee Jae Myung, for offering me the opportunity to speak to the Security Council today. Let me start by looking back a bit. In recent years, the trajectory of artificial intelligence (AI) towards ever-greater capabilities has been surprising even for experts in the field, including myself. The scientific data are very clear. The ability of AI to solve tasks, perform like humans and exercise agency has been advancing rapidly. Many researchers are concerned that, as companies are starting to use AI to advance AI research, this could accelerate even further. Of course, I do not have a crystal ball, and nobody knows what the future advances will be and how fast they will come, but we can look at past trends. If these trends continue, some AIs could surpass most humans across most cognitive tasks in as little as five or perhaps 10 years. This would be a radical change in the history of humankind. Trillions of dollars are being invested to develop AI further and further, and we are now seeing how quickly this technology is evolving, fundamentally transforming our economies and our societies, including the military. Yet scientists still do not know how to design AIs that will not harm people, that will always act according to our instructions and that will comply with human rights and human dignity. We do not know how to do that. In a recent study, OpenAI concluded that AI hallucinations are unavoidable with currently known designs. If we do not learn how to build trustworthy AI, humans will be under threat from AI misuse by bad actors or from the misalignment of AI systems with societal norms and laws. Advances in AI will offer ways to tackle some of society’s biggest challenges, yet they will also introduce major risks to international peace and security.
Frontier AI systems will be extremely powerful if things continue according to current trends, and the misuse or careless use of this power by any individual country or company could create widespread disruption. I am currently the Chair of the International AI Safety Report — an international mandate backed by 30 countries, the European Union, the Organisation for Economic Co-operation and Development and the United Nations. It is the first international report of its kind to set out an up-to-date, science-based understanding of the safety of advanced AI systems. It is being developed by more than 100 independent AI experts, and it is intended to provide a base of scientific evidence to inform policy. It identifies areas of scientific consensus and areas where there are different views or gaps in the current scientific understanding. I want to touch briefly today on the three risks outlined in the report that are particularly relevant to today’s discussion. The first pertains to market and power concentration. AI models, which soon might be as capable as or more capable than humans, could provide a strategic advantage on a global scale. This power could be used to create powerful new technologies and tip the balance in favour of a few companies, countries or individuals, potentially enabling a disproportionate economic, political or military concentration of power. The second risk concerns malicious use. AI could also be used by malicious actors to lower barriers to chemical and biological weapons development, to facilitate cyberattacks, which is already happening, or to help to design sophisticated persuasion and disinformation campaigns. We are already seeing AI capabilities improving fast in that respect. The third risk is related to misalignment and, at its extreme, a loss of human control over advanced AI. Again, we currently do not know how to reliably align and control the most advanced AIs, the ones that already exist.
When they become more capable, this challenge will only grow. We must act now to mitigate all these risks together. This is not a problem confined to a single nation’s borders, but one that threatens all of us. We must mobilize our best minds and make substantial investments to innovate on two fronts: science and governance. On the science and technical side, we need to significantly increase global research endeavours focused on safe and trustworthy AI. I recently launched my own project, LawZero, a non-profit working on Scientist AI, which tries to build AI that is safe by design and provides a guardrail checking that other AIs satisfy our safety specifications and behave according to our instructions. But we need many more such projects. If we want to build effective global agreements on AI use, technical progress of this kind is not sufficient on its own. We also need technical advances in verification technology, just as we did for nuclear technology, because countries may not trust each other. Therefore, if we want to have agreements, they need to be verifiable. Work is currently going on, on both the software side and the hardware side, to allow us to build trust with international partners. On this subject, I would like to refer Council members to the United Nations brief from the Scientific Advisory Board of the Secretary-General on this topic. That covers the technical side. On the governance side, earlier this week, we came together with 200 other experts, including former Heads of State and Nobel laureates, to support the establishment of international red lines to prevent unacceptable AI risks. Governments must have substantial visibility and control over these technological advances instead of letting very important decisions be taken behind closed doors. Transparency is the most important element for increasing public protection and safeguarding global interests.
In addition, we need to innovate beyond the existing mechanisms of accountability and multilateral governance surrounding AI to ensure that we achieve the following three goals: first, that AI is developed safely by everyone on this planet; secondly, that AI is not abused to gain an unfair advantage over other nations or competitors; and thirdly, that the benefits of AI are shared globally, for example, advances in medicine or the environment. If we develop and manage AI safely, it does offer extraordinary opportunities to improve human life and enhance our collective security, as we will have the opportunity to discuss today. I would like to end with a few concluding thoughts that I have taken from the International AI Safety Report, stating the following. First, the future of general-purpose AI, the most advanced or frontier AI, is uncertain. Many scenarios are possible. Secondly, both very positive and very negative outcomes are possible, which means that we need to prepare for all the plausible scenarios, the positive ones and the negative ones. And much depends on how our societies and Governments will act. AI does not happen to us. We, and particularly Council members as decision-makers, have the power to shape its trajectory so that it benefits the world. I thank the Council again for this opportunity, and I look forward to discussing this important topic with Council members today. I now give the floor to Ms. Yejin Choi, Dieter Schwarz Foundation Professor of Computer Science and Senior Fellow, Stanford University Institute for Human-Centered AI. Ms. Choi: It is such a privilege to join the Secretary-General and the members of the Security Council today. As someone who grew up in South Korea and who devoted her career to advancing artificial intelligence (AI), it is deeply meaningful to contribute to this very dialogue under the presidency of the Republic of Korea and at such a pivotal moment of global importance.
I have spent more than two decades as a computer scientist seeking to understand how machines might interpret the world in ways more like we humans do. My work asks whether intelligence can be built in forms that are not just powerful, but genuinely reflective of humanity. We stand today at an extraordinary inflection point. AI dazzles us with achievements that only years ago seemed impossible. In particular, I am deeply motivated by how AI is accelerating scientific discovery — from advancing medicine to exploring the natural world — to expand the horizons of human knowledge. But this promise is accompanied by scientific limits and societal choices. Today I want to talk about one of those choices: choosing intelligence that is not only powerful, but accessible, robust and efficient, because when only a few have the resources to build and benefit from AI, we really leave the rest of the world waiting at the door. Therefore, my message to the Council today is simple: let us expand what intelligence can be and let everyone everywhere have a role in building it. With regard to the need for new scientific frontiers, scientific history shows us that breakthroughs rarely come from staying in the same lane. At critical moments, new methods and new ways of thinking open entire fields. In AI, however, much of the energy and investment has converged on one model of progress, which is scaling. In recent years, developers have sought to build AI systems using ever larger datasets and computing power. This approach has delivered impressive results, while also leading to a reality in which the most advanced models are built by a mere handful of companies in just a few countries. This concentration in the hands of a few really narrows both our science and who gets to shape it. 
When most of the world lacks the resources to experiment at the frontier, we lose a diversity of perspectives from researchers, institutions and societies that could lead to important discoveries and pivots. The task before us, then, is to expand the frontier: to cultivate space for alternative approaches, to encourage curiosity-driven science and to support bold exploration. Science has historically leapt forward when we have taken risks and opened the door for more voices to contribute to discovery. We can and must pursue alternative approaches to AI development that are more adaptive, more resilient and broadly accessible to the global community. In terms of striving for global access and representation in AI, what might those alternative paths mean in practice? Expanding the frontier is not only about new scientific methods for achieving intelligence. It is also about how we choose to think about access. Who can build advanced AI systems? To whom are they accessible? Whose voices do they include? And whose values do they reflect? If AI is to truly benefit humankind, access must be the North Star. That means pushing the frontier along two dimensions. The first dimension is to build AI that is smaller. If we really want to democratize AI, we must rethink our dependence on massive-scale data and computing resources from the outset and design methods that can do more with less. The second dimension is ensuring that AI systems represent and serve all communities. We should recentre what truly matters to humankind: linguistic diversity, cultural breadth and pluralistic values. Today’s leading AI models underperform for many non-English languages and reflect narrow cultural assumptions. These flaws lead to the systematic exclusion of entire communities from AI’s benefits, and they cannot be cured simply by patching gaps after the fact.
What is required is to rethink the foundations — the training data, learning objectives and evaluation methods — so that systems are built from the start to be robust across languages, contexts and perspectives. Accomplishing this will rest on answers to open research questions across disciplines: What does meaningful linguistic fluency in AI look like? How do we measure value alignment across societies? Who must be at the table when decisions are made? These are only some of the questions we must address through interdisciplinary, cross-cultural collaboration. Every country and community has unique expertise to contribute — expertise that is indispensable when building AI systems that truly serve the world. So, here is a call for bold and collective investment. I want to leave members today with three considerations for how all of us can begin to make inroads on these issues. First, we must invest in high-risk, high-reward science. Because market-driven forces tend to favour short-term, profit-driven research and development, Governments and international bodies must fund bold experiments that look uncertain today but promise to open new frontiers tomorrow. Secondly, we must build shared, public AI infrastructure. We need open, multilingual and multimodal data sets; rigorous benchmarks that test for cultural pluralism and real-world applications; and shared compute resources for academic institutions and nations otherwise left out. These are not just nice-to-haves; they are essential foundations to expand the talent and ideas that drive discovery. Thirdly, we must prioritize capacity-building. Fellowships and exchanges that connect researchers across borders, advanced training programmes that equip the next generation with cutting-edge skills and collaborative institutes that sustain long-term partnerships are all crucial. Innovation can emerge from many paths, and the next breakthrough may come from where we least expect it.
There is still so much we can and must do to achieve more global access and representation in AI. Progress on these fronts will not be easy. It demands our willingness to take paths less travelled and to set aside narrow competition in favour of collaborating for the common good. I thank you again, Mr. President, the Secretary-General and distinguished members of the Security Council for the privilege of speaking to the Council today.
I thank Ms. Choi for her briefing. I shall now make a statement in my capacity as President of the Republic of Korea. Let me thank the Secretary-General, Professor Bengio and Professor Choi for their insightful briefings. As I listened, I was reminded of the words of Professor Geoffrey Hinton, who once said that today’s artificial intelligence (AI) is like a very cute tiger cub. This tiger cub before us may well grow into a predator that devours us. AI in particular will bring the most disruptive innovation to the way we process knowledge and information, and it may soon be able to judge and decide for itself like a human being. Therefore, an entirely different future will unfold before us, depending on how wisely we choose to wield this tool called AI. If used well, AI can help us overcome daunting challenges like low growth and high prices, opening a new path to prosperity. It could also provide solutions to various problems in fields such as health, food and education. But if we are dragged along by the changes without being prepared for them, the extreme technological divide may function as a silicon curtain that surpasses even the iron curtain, aggravating global inequality and imbalance. To turn the changes of the AI era, where light and shadow coexist, into opportunities, it is essential for the international community to unite and uphold the principle of responsible use of AI. If, as many experts warn, AI threatens humankind and leads to its downfall, that would probably be because we failed to establish common global norms befitting such a monumental transformation. In an era where AI capabilities are emerging as a key determining factor of national power, in both economic and security terms, it is neither possible nor realistic to attempt to reverse technological progress like the Luddites. The only viable and wise choice would be to compete for national interest while cooperating for the benefit of humankind at the same time.
Governments, academia, industry and civil society must come together and draw on collective wisdom to steer innovation towards AI for all: inclusive and human-centric AI. The role and responsibility of the Security Council are ever more important. In the field of international peace and security, upon which the lives and safety of countless people depend, AI holds both unlimited potential benefits and risks. From intelligence and surveillance to logistics and military planning, AI is strengthening accuracy and precision across the military domain while leading innovation in operational efficiency and command systems. If used well, AI could be a powerful instrument for preventing conflict and maintaining peace, such as by monitoring the proliferation of weapons of mass destruction. It could also contribute to promoting international peace and security by ensuring the swift delivery of humanitarian assistance to those in need. However, if this formidable tool were to escape human control, we would not be able to avoid a dystopian future of rampant mis- and disinformation and surging terrorism and cyberattacks. Instability in security may also deepen due to an AI-driven arms race. The Security Council has actively responded to evolving threats — from terrorism and cyberattacks to pandemics — while guiding the international community with vision and leadership. Now, in the AI era, the Council must once again assess the changing security landscape and seek new collective responses. The Republic of Korea, as a responsible global power, is committed to leading international cooperation to ensure that AI becomes a tool for building a sustainable future for humankind. Already last year, the Republic of Korea presented, together with the Netherlands, the first-ever General Assembly resolution on AI in the military domain (resolution 79/239) and hosted the Responsible Artificial Intelligence in the Military Domain Summit in Seoul.
Furthermore, the Republic of Korea supported the efforts of the United Nations to combat mis- and disinformation against peacekeepers and presented a resolution on emerging technologies and human rights as a member of the Human Rights Council. At the AI Seoul Summit, held in May 2024, we adopted the Seoul Declaration for Safe, Innovative and Inclusive AI. In the face of the great transformation in the history of civilization that AI will bring about, humankind is going through a critical inflection point, when it must safeguard the universal values that it has long upheld throughout history. Human civilization has always responded to new challenges and — because we never lost hope of advancing towards a better world, even in the face of despair — we have been able to achieve the progress that we see today. The shining history of the United Nations, which has constantly sought the path of world peace and shared prosperity amid times of crisis, holds the answer. Let us not shy away from the new historical mission entrusted to us. Let us turn the changes brought by AI into a springboard for humankind to make a renewed leap forward. I now resume my functions as President of the Council. I call on His Excellency Mr. Hassan Sheikh Mohamud, President of Somalia. President Mohamud: Allow me to begin by expressing our appreciation to the Republic of Korea for convening this important debate and for its leadership in building a shared vision on the role of artificial intelligence (AI) in international peace and security. We are also grateful to the Secretary-General for his insight and remarks and to all the briefers for their contribution to this discussion. Artificial intelligence is no longer a distant prospect; it is already rapidly transforming societies, economies and our approach to peace and security. It brings both remarkable promise and profound risk, as experts have reported here.
Subject to responsible governance, its ability to revolutionize early warning, crisis prevention and humanitarian responses is undeniable. Yet, with its capabilities often spread unevenly, the risk of misuse is growing, potentially creating new avenues for instability. Therefore, it is our collective responsibility to ensure that AI advances international peace and security and to guide its use for the benefit of all. To this end, I would like to propose the following three core priorities. First, we must establish clear global standards for the responsible use of AI. AI must operate within a framework of international law and respect for human rights and human dignity. The Security Council should champion efforts to develop comprehensive guidelines that ensure that AI is used ethically and transparently across all peacekeeping and security operations globally. These standards should address issues such as algorithmic bias, privacy and the protection of vulnerable groups. Establishing these principles will require the active participation of all Member States and regional organizations, thereby ensuring that diverse perspectives are reflected and that frameworks are adaptable to different contexts. Secondly, we must ensure equitable access to AI and prevent new forms of technological dependency syndrome. The benefits of AI must not be confined to a privileged few or concentrated in certain regions. Too often, technological advancements and data control are withheld from the most affected communities, perpetuating imbalances in power and opportunities. This risk of digital colonialism, in particular in Africa, can be addressed only through international partnerships with regional organizations such as the African Union and by supporting strategies such as the pan-African AI initiative, which prioritizes data sovereignty, digital literacy and homegrown innovations. 
It is essential to promote inclusive access and support homegrown innovation. Thirdly, we must proactively address the unique risks that AI poses to peace and security and the livelihoods of societies. The misuse of AI, whether through autonomous weapons, disinformation campaigns or interference in fragile peace processes, poses significant threats to stability. These risks demand constant vigilance and proactive measures. The Council should support the regular monitoring and assessment of emerging AI threats, facilitate the sharing of best practices and invest in early-warning systems. Close collaboration with civil society and regional actors is essential, as local knowledge and networks can help to identify signs of destabilization before they escalate. Moreover, the Council should encourage the development of accountability mechanisms to ensure that those who misuse AI are held accountable. In conclusion, the era of artificial intelligence is not a distant prospect but our current reality. The future that we shape with these tools is dependent on our commitment to cooperation, ethical leadership and global solidarity. We are being presented with a defining opportunity to guide this historic transition responsibly so that technological progress serves as a bridge to peace and shared prosperity, not as an instrument of division. The power of artificial intelligence does not entail an inevitable outcome; it is shaped, guided and given meaning by our values and our choices. Somalia is committed to acting together with foresight and resolve so that future generations will look back on this moment with pride. We must unite as nations to guide technological progress for the good of all, while protecting peace, upholding dignity and broadening the horizons of human possibility.
I now call on Her Excellency Ms. Nataša Pirc Musar, President of Slovenia. President Musar: Let me begin by extending our gratitude to the Republic of Korea for convening today’s open debate. I would also like to welcome and thank the Secretary-General, as well as Mr. Bengio and Ms. Choi. In my previous life, I was a lawyer dealing with human rights, especially in connection with data protection, which is so heavily involved in the development of modern technologies. This is why this topic is very close to my heart. I have learned something regarding artificial intelligence (AI): artificial intelligence is neither artificial nor intelligent. It is not artificial because it possesses only the knowledge that we have gained over the centuries, and it is not intelligent because it hallucinates all the time. If artificial intelligence hallucinates while controlling autonomous weapons, then nothing good will come of it. Since my last visit to the Security Council two years ago, the world has witnessed the growing digitalization of warfare. Contemporary conflicts — from Gaza to the Sudan and Ukraine — have become testing grounds for the military use of new technologies. Soldiers on the ground can commit terrible acts against others. Some may develop a guilty conscience over the years. Some former combatants even apologize to those they harmed. Algorithms, armed drones and robots created by humans, however, have no conscience. We cannot appeal to their mercy or beg them to spare our loved ones. This is the world in which technology is increasingly determining humankind’s fate. One must agree with the Secretary-General that artificial intelligence represents both an existential threat and one of the greatest opportunities of our generation. One might even add that AI can also pose a risk to our daily lives. As with many things, excess never does any good.
Just as excessive use of social media can cause harm, so too can excessive reliance on AI. On the other hand, there are countless tasks that AI can perform more accurately and efficiently. Its solutions are based on quantities of data far beyond human capacity to process. In supporting United Nations peace operations, AI can assist with multilingual translation, summarizing field reports, planning routes and optimizing logistics, and consolidating overlapping or fragmented information into a coherent picture. For conflict prevention, AI can be harnessed positively to identify food insecurity and predict displacement caused by extreme events and climate change. We acknowledge and welcome the important discussions taking place within the United Nations on the use of lethal autonomous weapon systems and on AI in the military domain. We thank the Republic of Korea for its leadership on the latter. As an expert has warned, the three biggest risks involved in the development of AI-powered weapons are as follows. First, these weapons may make it easier for countries to become involved in conflicts. Secondly, non-military scientific AI research may be censored or co-opted to support the development of such weapons. Thirdly, militaries may use AI-powered autonomous technology to reduce or deflect human responsibility in decision-making. From the perspective of the United Nations — the cornerstone of the system of collective security — AI has already become a reality with a visible impact not only on peace and security, but also on its other two pillars, namely, human rights and development. It has not only entered the battlefield. AI can be misused in so many ways: through AI-accelerated cyberattacks, its proliferation in terrorism and organized crime or its environmental impact. This further exacerbates threats to international peace and security. This debate is therefore timely.
The far-reaching impact of AI on international peace and security, by its very nature, requires action by the Security Council, which bears the primary responsibility under the Charter of the United Nations for the maintenance of international peace and security. The Security Council must be prepared to respond to the impact of AI on global peace. I trust that today’s meeting is just one of many activities in the Security Council and the United Nations system to discuss the presence of AI in international relations. To this end, Slovenia calls for regular briefings by the Secretary-General to the Security Council on developments in the field of artificial intelligence that affect international peace and security, so that the relevant risks can be recognized, understood and, above all, addressed. We also call for the inclusion of discussions on AI-related risks on the Security Council agenda to ensure the protection of civilians, humanitarian aid and United Nations personnel and the effective implementation of mandates for peace operations. We are also willing to consider concrete actions by the Security Council, including recognizing that decision-making on AI must be guided by international law, in particular international humanitarian law, international human rights law and ethical principles to ensure its responsible use in maintaining and promoting international peace and security. In conclusion, while we address the risks of AI, it is crucial to remember that behind every algorithm and every automated decision stands a human being who wrote the code. This simple fact underscores the central principles of human responsibility and accountability — principles that must remain at the heart of both our national approaches and our multilateral efforts. I thank you, Mr. President, for inviting me here to the Security Council, and I wish you all the best in your future work.
I now call on His Excellency Mr. Kyriakos Mitsotakis, Prime Minister of Greece.
I wish to begin by congratulating the Republic of Korea for convening this important meeting of the Security Council on artificial intelligence (AI) and international peace and security. I also thank the two professors for their very insightful introductory remarks. Today’s debate builds on the Arria formula meeting that Greece co-organized in April, along with France and the Republic of Korea, offering us the opportunity to further reflect on a topic that will undoubtedly shape our discussions for years to come. There is now, I think, a broad consensus on the close interplay between artificial intelligence and the maintenance of international peace and security. The two Responsible Artificial Intelligence in the Military Domain Summits held in The Hague in 2023 and in Seoul in 2024; the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy launched by the United States of America; and the Artificial Intelligence Action Summit held in Paris earlier this year have all contributed to the development of norms, rules and guidelines for the responsible development, deployment and use of AI, including of course in the very sensitive military domain. AI is not just another tool; it is a general-purpose capability with the potential to both empower and destabilize. In the right hands, it can strengthen peacekeeping, improve early-warning systems and accelerate humanitarian relief, just to offer a few examples. In the wrong hands, it can fuel disinformation, it can amplify cyberattacks, and it can certainly lower the threshold for escalation in conflict. AI is by nature a dual-use technology, and it means that our collective security increasingly depends on the choices that we make. Our discussion today is particularly timely, as it comes just weeks after the release of the first report of the Secretary-General on artificial intelligence in the military domain and its implications for international peace and security (A/80/78). 
This was, I think, a milestone in our collective effort to address the profound implications of AI for peace and security. Building on this crucial momentum, we must recognize a fundamental truth: that in order for the rules-based order to remain relevant, it must adapt. Just as past generations built new institutions to govern nuclear energy, nuclear weapons and arms control, so too must we now develop mechanisms to ensure that AI innovation reinforces not only peace and security but also human dignity. This, of course, requires international cooperation, transparency and a renewed commitment to the principles of the Charter of the United Nations. At the same time, we must be candid. Preserving peace does not mean ignoring the realities of power. Malign actors are racing ahead in developing AI capabilities. If we are to protect our citizens, uphold deterrence and maintain stability, we too must responsibly invest in defensive and security applications of AI, but always in line with international law, and always with a commitment to human oversight. We stand at an inflection point. The choices we make on artificial intelligence will not only redefine the balance of power but also determine whether technology becomes a force for human progress or a driver of human peril. The Security Council itself must rise to the occasion. Just as it once rose to meet the challenges of nuclear weapons and the challenges of peacekeeping, so too must it now rise to govern the age of AI. Greece believes the United Nations bears a historic responsibility to chart a path on which innovation strengthens peace, on which responsibility tempers power and on which technology serves humankind’s highest aspirations. Let us ensure that artificial intelligence becomes not a source of rivalry and division, but a cornerstone of a more secure, more just and more peaceful world.
I now call on His Excellency Mr. David Lammy, MP, Deputy Prime Minister of the United Kingdom of Great Britain and Northern Ireland.
There is an urgency to this debate. It was two years ago that the United Kingdom first brought artificial intelligence (AI) to the Council (see S/PV.9381). Since that time, its capabilities have grown exponentially. This is a lightning strike of change. Everyone — diplomat, peacebuilder, terrorist — now carries superhuman expertise in their smartphones: better at maths, better at translation and better at diagnosis than almost any human expert. And now, superintelligence is on the horizon, able to operate, coordinate and act on our behalf. We are staring at a technological frontier of astounding promise and power. No aspect of life, war or peace will escape. Deep AI analysis of situational data holds this promise for peacekeeping: ultra-accurate real-time logistics, ultra-accurate real-time sentiment analysis and ultra-early warning systems. But there are also these challenges for armed conflict: ultranovel chemical and biological weapons, ultra-accessible to malign actors, and ultrarampant distortion and disinformation. And, of course, this is what is at stake for our shared security: the risk of miscalculation, the risk of unintended escalation and the arrival of artificial intelligence-powered chatbots, stirring conflict. The risk of deeper instability is immense. And this is why I so welcome the Secretary-General’s report on military AI (A/80/78). This is an opportunity for collective understanding and for us to build new safeguards and guardrails and reaffirm international law as the bedrock of responsible use. We all know that artificial intelligence use is growing, of course, exponentially, offering us both extraordinary promise and intense challenges. Nowhere is this clearer than in climate. On current trends, artificial intelligence could add the equivalent of a new Japan to world electricity consumption.
Yet, it also promises to utterly transform efficiency and power our green transitions, fine-tuning electrical production to the minute to meet demand and eliminating astonishing levels of waste. This is the power of AI. We are crossing humankind’s most profound technological frontier. Our lives, our world and our politics are about to be flooded with super-powerful AI. There is only one way forward: resilience; learning how to use these tools and embedding them safely in society. This is the United Kingdom’s mission, through our AI Security Institute, with more dedicated researchers than anywhere else in the world, and through the International AI Safety Report, with its secretariat based in the United Kingdom, under the chairmanship of Yoshua Bengio, one of our briefers today.
I now call on His Excellency Mr. Ahmed Attaf, Minister for Foreign Affairs, National Community Abroad and African Affairs of Algeria.
Allow me to express my deepest appreciation to you, Mr. President, for organizing this open debate. Allow me also to express my deepest thanks to His Excellency the Secretary-General and the experts present with us today. I thank them for their valuable briefings on artificial intelligence and its impact on international peace and security. Algeria values holding this meeting because it reflects a growing international awareness of the need to address these modern technologies as an influential factor with multiple and complex dimensions. Artificial intelligence is no longer a technical tool. It is rather an essential geostrategic factor in reshaping the balance of power in the international arena. Artificial intelligence is no longer a mere promising project for humankind. Rather, it poses a number of legal and moral challenges, and even security challenges, that affect the sovereignty and social cohesion of States. Artificial intelligence has undoubtedly become a double-edged sword for peace and security. It is a constructive weapon that contributes to building capacities and promoting sustainable development. At the same time, it is a destructive weapon when it is misused to jeopardize peace and stability and threaten collective security. The global dialogue on artificial intelligence governance cannot achieve its noble purposes if it does not address the following three concerns. First, the sovereignty concern, which is closely linked to the need to respect the principles of the Charter of the United Nations to maintain the sovereignty and territorial integrity of States. We believe that the United Nations Convention against Cybercrime, the negotiations for the adoption of which were led by Algeria, could serve as a legal basis to promote the sovereignty of States over their data and ensure their protection against cyberattacks.
Secondly, the security concern is linked to the need to develop clear rules that regulate the use of artificial intelligence in the military and security domains in order to prevent an uncontrolled new arms race, keep innovative arms beyond the reach of non-State armed groups and ensure that any decisions to use lethal force involving innovative artificial intelligence technologies are taken strictly by humans. The third and last concern — and it is related purely to development — is the fact that States must have equal opportunities to access artificial intelligence technologies to prevent the deepening of the development and digital divides between the North and the South. What is certain is that developing countries are facing tremendous challenges in keeping up with the current digital revolution. For our continent, Africa, these challenges can be summarized in the following key points. First, Internet coverage on the continent remains inadequate, at 38 per cent, at a time when the global average exceeds 68 per cent. Secondly, only 10 of the 55 States members of the African Union have adopted the necessary information technology regulations, which reflects the weakness of legislative and regulatory frameworks in most African countries. Thirdly and lastly, digital sovereignty clearly poses a challenge to the African continent, which is home to 18 per cent of the world’s population. For its part, my country continues to implement its national strategy on digital transformation. We are fully committed to contributing as much as possible to alleviating the challenges facing the African continent in this regard. Our continent is fully aware of the global challenges and opportunities arising from digital and technological revolutions. Africa is making every effort not to be left behind by these revolutions or to miss out on them, as it did with the industrial and the information revolutions because of colonization and its repercussions.
Africa is working responsibly and diligently to build successful and beneficial international partnerships in order to keep up with the current revolutions and contribute towards shaping and governing them. However, Africa strongly opposes being transformed into a guinea pig to test these technologies and their use, especially in the military and security domains. We stand ready to avoid repeating the mistakes of the past and their painful repercussions, which proved costly for our continent.
I now give the floor to His Excellency Mr. Hugh Hilton Todd, Minister for Foreign Affairs and International Cooperation of the Co-operative Republic of Guyana.
I thank you, Mr. President, for convening this open debate to discuss the complexities and impacts of artificial intelligence (AI) and to underscore the need for its responsible use in the context of international peace and security. I also thank His Excellency Secretary-General António Guterres and the briefers for their presentations. The advent of artificial intelligence presents several opportunities for international peace and security, including for conflict prevention, peacebuilding, disaster prediction and response, and cybersecurity. The misuse of AI can also present many challenges to international peace efforts, including through cyberwarfare, the exploitation of weapons of mass destruction, privacy infringements and the proliferation of automated weapons. Maximizing the opportunities and minimizing the risks of the use of AI should continue to be at the forefront of our discussions. In Guyana, we have been preparing ourselves to seize these opportunities. We are integrating AI through our Digital Transformation Strategy and are also actively involved in the development of a regional AI policy road map with the support of the Caribbean AI Task Force. This is to ensure that countries of the Caribbean are not only prepared for the global AI revolution but also actively involved in shaping its direction in alignment with our priorities, values and development goals. The conversations on AI governance are complex. In examining the complexities and promoting the responsible use of AI, I offer the following points. First, harnessing AI to advance peace and security requires a structured approach that involves robust regulation in line with international law. The use of AI must be grounded in inclusive and equitable international standards to strengthen global cooperation and regulate its use in the military domain, among other areas. 
The recent decision by the General Assembly to establish two global AI governance mechanisms is a positive step in this direction. These are good entry points for the Security Council to contribute to efforts aimed at utilizing AI tools in peace and security in order to, inter alia, identify signs of possible conflict, monitor misinformation and disinformation, assess risks in peacekeeping operations and respond to cyberattacks against critical infrastructure. Secondly, the misuse of AI poses a threat to international peace and security. In the current geopolitical context, there has been increased use of AI systems, including by non-State actors, for illegal purposes. These include cyberattacks against critical infrastructure and creating fear by spreading misinformation. Autonomous weapons systems in particular are accessible, inexpensive and easy to produce, making it easier for non-State actors to exploit and use them for violent purposes and to carry out disinformation campaigns in conflict areas. The Security Council therefore has a responsibility to ensure the responsible use of AI. Thirdly, AI has been used by parties to evade obligations established by the Security Council, a trend that is becoming worrisome. It has been used to circumvent sanctions, increase tensions and reduce the chances of securing ceasefire agreements. To counter these threats, the Security Council can utilize AI tools for the implementation of its resolutions. With its predictive and analytical features, AI can help to improve the implementation of sanctions by detecting whether they are being evaded or violated, monitor compliance with resolutions, counter mis- and disinformation that could undermine peace efforts and track ceasefire violations. Fourthly, while noting the destructive use of AI in conflicts, we acknowledge that its responsible use can be helpful in peacebuilding efforts and in peacekeeping operations.
Integrating AI considerations and ethics into peacekeeping mandates would ensure the use of AI tools in peacekeeping operations is transparent, responsible and accountable. The promotion of AI capacity-building for peacekeepers would also enable the missions to be effectively equipped to tackle AI-driven threats, including those targeting critical infrastructure and civilians. In conclusion, briefings such as this one are useful in advancing the discussions on understanding the benefits and risks of the use of AI in advancing global peace and security. To this end, the Council should work proactively towards mitigating the risks to advance international peace and security and ensure a safe and more secure world for all.
I now call on His Excellency Mr. Javier Martínez Acha Vásquez, Minister for Foreign Affairs of Panama.
Panama congratulates the Republic of Korea for convening this meeting. We greet His Excellency Mr. Lee Jae Myung and thank the Secretary-General for his important contribution to this debate. We also thank the briefers for their valuable contributions on one of the most disruptive challenges facing the collective security system in the twenty-first century: the development of artificial intelligence. This technology must continue to grow and evolve for the common good of all humankind, and in that sense, we have a responsibility to ensure that artificial intelligence is a tool that contributes to peace, human development, equity and responsible and legally coherent innovation. Panama welcomes General Assembly resolution 79/325 establishing the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance and recognizes the leadership of Costa Rica and Spain in facilitating this process in a spirit of inclusion, balance and forward-looking vision. These platforms represent a concrete step towards ensuring that decision-making on technology governance is guided by evidence, participation and shared responsibility. Companies that develop and operate large-scale artificial intelligence models also have a great global responsibility. The private development of these technologies cannot continue to escape public scrutiny. The lack of regulation in this area cannot be ignored; it represents a systemic risk that we must address collectively from an inclusive and open perspective. In line with this multilateral approach, Panama is developing its national artificial intelligence strategy, based on principles of responsible use, participatory governance and protection of rights.
This strategy is aimed not only at strengthening our technical capabilities, but also at applying artificial intelligence in critical sectors such as logistics, health and finance, areas in which Panama has not only undeniable strengths, but also the potential to contribute to advancing solutions that benefit the region and the world. We are committed to promoting public policies that reduce technological gaps and strengthen citizens’ trust. Artificial intelligence is also being used to spread disinformation, manipulate narratives and polarize societies. It has been used to create false content, affecting public trust, especially in fragile or conflict contexts. This represents a direct threat to international peace and security. This technology is also being used by extremist groups to radicalize, recruit and finance themselves. Digital platforms and video games have been exploited to identify vulnerable profiles and spread hate propaganda in an automated and dangerous way. This threat is linked to the proliferation of cyberattacks, organized crime and the manipulation of personal data on a massive scale. This problem requires a robust legal framework that combines prevention, international cooperation and institutional transparency. The Security Council must promote international standards that ensure positive human evolution and independent auditing of artificial intelligence systems. You can count on Panama to help ensure that today’s challenges lead to responsible decisions tomorrow. In the military sphere, Panama reaffirms its adherence to the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, adopted in Seoul in 2024. We reaffirm that any application of artificial intelligence in defence contexts must respect the principles of international humanitarian law and the international human rights framework, as well as the principles established in the Charter of the United Nations.
We reject the development and use of autonomous systems that make lethal decisions without effective human intervention. Human oversight must be ensured at all critical stages. Failure to do so is tantamount to delegating legal and moral responsibility to a machine. Artificial intelligence is not just about how we train machines but also about how we decide to govern them. If we do not act now, regulatory gaps will be filled by those who prioritize control over equity, profit over justice and power over peace. In Panama, with our strong humanistic spirit, we are convinced that artificial intelligence represents one of the greatest challenges and opportunities of our time. It can be an engine of progress, social justice and global cooperation or pose a major risk of exclusion, inequality and conflict. For Panama, the key value is clear: artificial intelligence must serve humankind and never the other way around. Human dignity, human rights and peace must always guide any development or application. The Security Council was not created to passively observe changes in the world; it was created to guide their direction. Panama believes that leadership in artificial intelligence is measured not by the amount of data one possesses but by the collective will to put that technology to the service of humankind.
I now call on His Excellency Mr. Musa Timothy Kabba, Minister for Foreign Affairs and International Cooperation of Sierra Leone.
At the outset, I welcome Mr. Lee Jae Myung, President of the Republic of Korea, to the Security Council, and I thank you, Mr. President, and the Republic of Korea for convening this important and timely discussion on artificial intelligence (AI) and international peace and security. Africa has always been in the depths of despair and suffering in the wake of every critical invention, whether it was the advent of naval technology, which saw Africans taken into bondage from the African continent to different parts of the world, or the invention of gunpowder, which decimated Africans in the nineteenth and twentieth centuries. Therefore, this topic is not only important to us as Africans but is also quite visceral. On that note, I thank Secretary-General António Guterres and the two briefers for their insightful presentations. According to the Brookings Institution, the term artificial intelligence refers to machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention. AI does have the ability to understand language, plan, recognize objects and sounds, learn and solve problems. This attribution of human-like capabilities to artificial intelligence is what makes it both promising and alarming, specifically for international peace and security. According to the Secretary-General, in his most recent report of 2 July on innovative voluntary financing options for artificial intelligence capacity-building (A/79/966), AI is a foundational general-purpose technology that will shape economic outcomes, societal well-being and global equity for decades to come.
Indeed, AI holds outstanding potential for supporting inclusivity and reducing inequalities, advancing sustainable development and addressing causes of conflict rooted in delayed development; conversely, by the same token, it carries the risk of further widening the gap between developing and developed nations and further exacerbating the root causes of conflict. In 2023, 120 Member States co-sponsored, and all 193 States Members of the United Nations joined consensus on and adopted, the first General Assembly resolution on AI (General Assembly resolution 78/265), setting out a common framework for safe, secure and trustworthy AI. The resolution recognizes the positive impact of AI on economic, social and environmental aspects and the tremendous potential for AI to help accelerate progress towards the achievement of the Sustainable Development Goals, as well as the imperative of ensuring that AI systems are designed, tested and deployed in a manner consistent with the Charter of the United Nations and the Universal Declaration of Human Rights. The resolution emphasizes that AI should not be used to undermine peace or repress human rights. In addition, the Global Digital Compact, adopted last year, holds much promise for the acceleration of the implementation of the relevant aspirations of the African Union Agenda 2063 and the Sustainable Development Goals through technological innovation. In June 2024, African nations adopted the landmark Continental Artificial Intelligence Strategy and the African Digital Compact. The Strategy envisions the establishment of strong research institutions. Nonetheless, there are serious political, legal and ethical considerations raised by artificial intelligence, as well as peace and security concerns, especially for conflict-prone areas.
Generative AI embedded in lethal autonomous weapons and robotic systems, in nuclear command and control and in military applications beyond weapon systems poses significant threats to peace and security in the absence of sufficient human agency in the process. For this reason, Sierra Leone affirms the African Union’s position that artificial intelligence must be governed by frameworks that are open, inclusive, equitable and trustworthy. The African Union’s Continental Artificial Intelligence Strategy and the African Digital Compact, adopted in 2024, are timely and visionary. They emphasize responsible AI development aligned with regional needs and ethical imperatives. At the African Union’s 1,266th Peace and Security Council meeting, held on 19 March, African member States reiterated the need for multilateral action to ensure that AI promotes peace, security and sustainable development on the continent. Since the majority of AI creation and regulation development is presently concentrated within a few countries, it will be necessary to construct a global governance framework to limit the adverse consequences of a few countries and private companies wielding such immense power over the rest of the world. At the national level, Sierra Leone has taken proactive steps. We are a party to the African Union Convention on Cyber Security and Personal Data Protection, and we have developed a national cybersecurity strategy that aligns with regional and global digital governance norms. This strategy emphasizes ensuring that digital technologies such as AI do not become tools of exploitation or instability. We are also in the process of developing a national artificial intelligence framework with a focus on ethics, inclusion and peace-oriented innovation. This framework aims to strengthen national institutions’ capacity to understand, govern and responsibly deploy artificial intelligence.
To address the complexities, multifaceted impact and responsible use of AI, especially in the peace and security domain, we must prioritize prevention uses, that is, deploying AI for early warning, conflict analysis, climate resilience and humanitarian response, particularly in regions of acute vulnerability. Let me therefore emphasize that the Security Council has a critical role in ensuring the responsible application of AI in the context of maintaining international peace and security. The Council can encourage best practices in peace operations, promote safeguards to retain human agency in military uses and ensure compliance with international law and international humanitarian law. Furthermore, the unregulated weaponization of AI, its diversion to non-State actors and the destabilizing effects of AI-driven disinformation must be addressed through global accountability frameworks and rapid response mechanisms. The Security Council should thus complement wider United Nations efforts, promoting coherence with the General Assembly, the Global Digital Compact and regional frameworks, such as the African Union’s Continental AI Strategy. In the light of the foregoing, let me share four key recommendations. First, in line with the interdisciplinary and multi-stakeholder approach of the Global Digital Compact and the Pact for the Future, we encourage international organizations to deepen engagement with Governments, peacebuilders, civil society, academia and the private sector to co-design AI systems that are tailored to promoting peace, development and resilience, especially in post-conflict settings. Thirdly, we are concerned that much of AI development and regulation remains concentrated in a small number of countries and private entities. We call on the United Nations to lead in shaping an inclusive and equitable governance architecture that addresses this imbalance. 
Measures must be taken to avoid entrenched digital inequality, algorithmic bias and the concentration of AI capabilities in ways that exclude countries in the global South. The Pact for the Future rightly emphasizes capacity-building, technology transfer and equitable access as prerequisites for shared digital progress. Investing in AI literacy, digital skills and ethical innovation ecosystems in developing countries is essential for preventing asymmetries that could widen geopolitical or digital divides. In that context, there is a need to upgrade national cybersecurity capabilities and strategies in line with the impending risks posed by AI solutions, while continually assessing AI’s contributions to modern warfare. Finally, despite the rapid uptake of AI technologies in various sectors, there is a clear lack of empirical data on the extent and impact of AI applications in conflict settings. Sierra Leone calls for a comprehensive global mapping and risk assessment of AI use in the military and peace and security domains. This should be carried out in order to inform appropriate regulation, safeguards and cooperative frameworks, as well as to facilitate continual updates and vigilant monitoring. Sierra Leone also urges that all AI development be rooted in these key fundamental principles: transparency, fairness, inclusivity, accountability and respect for human rights. Those designing the most advanced AI models must be vigilant in addressing algorithmic bias and discriminatory outcomes. Governments, developers and civil society must work in partnership to ensure that new capabilities are aligned with the United Nations Charter and the Sustainable Development Goals. 
In conclusion, Sierra Leone calls for AI to be developed in a manner that fully respects international law, including international humanitarian law, and is guided by ethical principles that place human dignity, agency and equity at the centre, while reaffirming its commitment to working with Member States, regional bodies and the United Nations system to ensure that artificial intelligence becomes a force for peace and progress, not polarization or conflict.
I now call on His Excellency Mr. Khawaja Muhammad Asif, Federal Minister for Defence of the Islamic Republic of Pakistan.
I deeply appreciate the Republic of Korea’s initiative of convening this open debate. It is of special significance that President Lee Jae Myung is personally presiding over our deliberations. I thank Secretary-General Guterres for his insightful and valuable briefing. We also thank the other briefers. Artificial intelligence (AI) is reshaping our world at a breathtaking pace. It is the most consequential dual-use technology of our times, capable of accelerating socioeconomic progress, but equally capable of deepening inequalities and destabilizing the international order. However, we must harness this technology responsibly. AI can drive inclusive growth and shared prosperity, but in the absence of global normative standards and legal guardrails, the AI revolution risks reinforcing digital divides, entrenching new forms of dependency and imperilling peace. The unregulated and irresponsible use of AI enables disinformation campaigns and offensive cyberoperations. In recognition of the importance of this technology, Pakistan developed its first-ever national artificial intelligence policy in July. This transformative framework seeks to build AI infrastructure, train 1 million people and ensure the responsible and ethical use of AI. It lays out six pillars, prioritizing innovation, public awareness, secure systems, sectoral transformation, infrastructure and international partnership. Pakistan also hosted regional consultations in Islamabad, on 17 June, on the responsible use of AI in the military domain, in partnership with the Republic of Korea, the Netherlands and Spain. We appreciate the leadership of the Republic of Korea and the Netherlands in this area. The AI transformation is unfolding in a highly uncertain international environment. The Charter of the United Nations is under strain. There is an attempt to normalize the use of force, arms control frameworks are fraying, strategic competition is intensifying, and technological disparities are deepening.
In such a context, the militarization of AI compounds the sense of insecurity, undermines the principle of equal security for all and erodes trust. In the recent conflict in the subcontinent between India and Pakistan, for the first time, autonomous loitering munitions and high-speed dual-capable cruise missiles were used by one nuclear-armed State against another during the military exchange, which manifests the dangers that AI can pose. These developments raise serious questions for the future of warfare. Three lessons are clear in this regard. First, AI lowers the threshold for the use of force, making wars more politically and operationally feasible. Secondly, AI compresses the time necessary for making decisions, thus narrowing the window for diplomacy and de-escalation. Thirdly, AI blurs domain boundaries, merging cyber, kinetic and informational effects in unpredictable ways. Our collective approach to meet these challenges must rest on the following principles. First, the United Nations Charter and international law must fully govern the development and use of AI applications, and their use without meaningful human control should be prohibited. Secondly, strategic communication on AI-nuclear intersections is vital to reducing risks of miscalculation. Thirdly, States must commit to pre-emptive incentives and to measures that prevent the destabilizing use of AI. Fourthly, developing countries must have the capacity, access and voice to shape AI governance. Fifthly, AI must not become a tool of coercion or technological monopoly. Sixthly, AI governance must be anchored in the legitimacy of the United Nations system. Seventhly, attempts to monopolize strategic advantages will fail. The only sustainable path is cooperation based on mutual respect. The development dimension of AI is crucial. Last year’s adoption of General Assembly resolution 78/311, led by China, was a landmark achievement that established the first multilaterally agreed blueprint for AI capacity-building.
Pakistan also welcomes the establishment of the annual Global Dialogue on AI Governance, along with the Independent International Scientific Panel on Artificial Intelligence. We must ensure that AI is harnessed to promote peace and development, not conflict and instability. Let us work together to shape an AI architecture that is inclusive, equitable and effective. Let us preserve the primacy of human judgment in matters of war and peace, ensuring that, even in an age of intelligent machines, innovation is guided by the principles of morality and humanity.
I would like to thank the Republic of Korea for initiating this high-level open debate. As we speak, the rapid advancement of artificial intelligence (AI) is empowering a wide range of industries globally, while giving rise to new risks and challenges. It is imperative for the international community to build consensus on global AI governance so as to steer AI development in a direction that is beneficial and inclusive, in line with the vision of AI for good. In 2023, President Xi Jinping put forward the Global AI Governance Initiative, providing clear guidance for strengthening global AI governance. The Security Council bears primary responsibility for maintaining international peace and security. Its role in advancing global AI governance and managing non-traditional security risks is indispensable. To that end, we should step up efforts in the following areas. First, we must uphold a people-centred approach and adhere to the principle of developing AI for the greater good. The development of artificial intelligence must always seek to enhance the well-being of humankind and be predicated on respect for the dignity, rights and interests of humans and foster the advancement of human civilization. It is essential to swiftly establish and refine the ethical norms and accountability mechanisms that are to govern AI, clearly delineate the responsibilities and powers of the relevant stakeholders and ensure that AI-related research and development and applications adhere to international law and are aligned with the shared values of humankind. Secondly, we must accelerate empowerment measures in a spirit of fairness, inclusiveness, openness and sharing. 
It is vital to jointly foster an open, inclusive, fair and non-discriminatory environment for technological development; firmly oppose unilateralism and protectionism; refrain from decoupling supply chains; eschew small-yard, high-fence practices; avoid drawing ideological lines or engaging in overly broad interpretations of the concept of national security; and eliminate artificially imposed technological barriers. We must strengthen AI capacity-building and accelerate efforts to bridge the North-South AI divide. China supports the Security Council’s exploration of scenarios in which AI could be applied to peacekeeping and peacebuilding operations, thus harnessing technology to empower peace. Thirdly, we must firmly uphold the fundamental principles of the peaceful, safe and controllable use of AI. It is essential to ensure that AI remains under human control at all times to prevent the emergence of lethal autonomous weapons that operate without human intervention. When it comes to military applications of AI technologies, all countries — and major Powers in particular — should take a prudent and responsible approach to prevent arms races and the malicious use or misuse of AI, which could prove disastrous. The Council should give due priority to the risks posed by the misuse of AI by terrorist groups, extremist forces and transnational criminal networks and should promote enhanced international cooperation to address these threats. Fourthly, we must build a governance system based on peace, shared responsibility and global solidarity. It is essential to step up coordination on strategies, governance rules and technical standards for AI so as to develop AI governance frameworks. China remains committed to advancing global AI governance and international cooperation.
Since last year, China has promoted the consensus-based adoption of General Assembly resolution 78/311, entitled “Enhancing international cooperation on capacity-building of artificial intelligence”. We hosted the 2025 World Artificial Intelligence Conference and the high-level meeting on global AI governance, released the global AI governance action plan and proposed the creation of a global AI cooperation organization. Looking ahead, China will provide 200 capacity-building programmes on the digital economy and AI to developing countries in order to empower the global South in achieving sustainable development. China attaches great importance to the security risks posed by military applications of artificial intelligence and has consistently taken a responsible and constructive approach in participating in global AI governance in the military domain. In 2021, China released its position paper on regulating military applications of artificial intelligence and has since actively promoted the building of international consensus on the responsible development and use of AI in the military domain. This year marks the eightieth anniversary of the founding of the United Nations, which presents a new opportunity to reform and improve the global governance system. Recently, President Xi Jinping solemnly put forward the Global Governance Initiative, calling on all countries to build a more just and equitable global governance system and to work together towards a community with a shared future for humankind. This provides fundamental guidance for global governance in emerging fields, including artificial intelligence. As a permanent member of the Council, China will continue to work with all countries to actively participate in global AI governance and will contribute wisdom and strength to the building of a world of lasting peace and universal security.
I extend my sincere thanks to the Republic of Korea, which is presiding over the Council, and to President Lee, who is serving as its President, as well as to Secretary-General Guterres and our briefers today, Professor Bengio and Professor Choi. The United States is committed to creating, promoting and protecting the most innovative artificial intelligence (AI) ecosystem in the world. AI is set to define the future of economic growth, national security and global competitiveness. The Trump Administration believes it is our responsibility to ensure this technology’s development benefits all our fellow citizens, safeguards their liberty and way of life and protects our next generation. Two months ago, President Trump unveiled America’s AI Action Plan, based on three pillars: accelerating AI innovation, building AI infrastructure and leading international AI diplomacy and security. The United States is resolved to stand at the forefront in developing and deploying frontier AI models and cutting-edge AI applications to address national security risks and usher in a new era of international peace and prosperity. Facilitating better, faster decision-making and enhanced situational awareness for warfighters and statesmen alike, AI technology will have revolutionary applications for war and for peace. But the improper use of AI systems can erode deterrence, create destabilizing effects and reinforce systems of political control and social engineering. Knowing that, we are resolved to ensure that AI technologies used in national security applications are consistent with the highest standards of privacy, civil liberties, transparency and protections found in the laws of the United States. We totally reject all efforts by international bodies to assert centralized control and global governance of AI.
We believe that the responsible diffusion of AI will help pave the way to a flourishing future — one of increased productivity, empowered individuals and revolutions in scientific advancement. The path to this world is found not in bureaucratic management but in the freedom and duty of citizens, the prudence and cooperation of statesmen, and the independence and sovereignty of nations. We believe that broad overregulation incentivizes centralization, stifles innovation and increases the danger that these tools will be used for tyranny and conquest. Ideological fixations on social equity, climate catastrophism and so-called existential risk are dangers to progress and obstacles to responsibly harnessing this technology as an extension of human ingenuity and capacities. The United States is focused on establishing American AI as the global gold standard and enabling allies and trade partners to build their own sovereign AI ecosystems with secure American technology. The President has called for an American AI export programme, which will support the deployment of full-stack AI technology packages globally. Through dealmaking and diplomacy, we can create an AI ecosystem that fosters and promotes mutual prosperity and security. We call on all members of the Council to do the same.
First, let me thank the Secretary-General for his remarks, as well as the two briefers, Professor Bengio and Professor Choi, for sharing their insights with us. I would also like to express my sincere appreciation to the Republic of Korea for convening us today and for its tireless efforts in the domain of artificial intelligence (AI). The unprecedented speed of AI innovation has brought about new opportunities for sustainable development, human rights and peace and security. But we have also seen that AI carries significant risks, especially if not managed or used properly. AI exacerbates the security threats we face, for example, when it amplifies misinformation and disinformation campaigns or malicious cyberactivities. These threats not only destabilize societies and jeopardize democracies but also undermine the legitimacy and efforts of United Nations peacekeepers. AI operates in the virtual space, but it can put real people at real risk. We need to mitigate those risks and counter abuse and misuse, and we need to do so quickly. Together we must ensure that AI benefits all of us. We must work against widening digital divides and against the potential of AI to cause damage and conflict. Denmark’s main priority is to ensure safe and trustworthy AI that is used in compliance with international law, in particular international humanitarian law and international human rights law. This includes AI-enabled weapons. Responsibility and accountability cannot be delegated to machines. Human oversight and human control in decision-making are required. In this regard, Denmark is encouraged by the continued deepening of international cooperation in relation to AI. We support the creation of a cohesive global framework for AI governance, and Denmark has also endorsed the Paris Declaration on Maintaining Human Control in AI-enabled Weapon Systems.
We call for a particular focus on accountability, as well as on a multi-stakeholder approach that involves civil society, technical experts, industry and academia alike. AI technologies can bolster the capabilities of the United Nations in multiple ways, for example, by detecting and addressing cyberthreats, monitoring the implementation of sanctions regimes and countering misinformation, disinformation and hate speech. AI has the potential to enhance the safety and operational effectiveness of United Nations peace operations and support remote monitoring of ceasefire agreements. This is particularly important on the ground, where United Nations personnel are deployed. In conclusion, Denmark stands ready to contribute to ensuring that AI systems remain safe, secure and trustworthy. We welcome the engagement of the Council, and we look forward to further considerations on how AI systems can be leveraged by the United Nations to advance international peace and security.
We are grateful to Secretary-General António Guterres and the briefers for their contributions to our discussion. Today artificial intelligence (AI) is not merely a trendy topic that everyone is talking about; it is, above all, a cornerstone of successful technological development and economic and social progress, and a critical element of security for any State. AI technologies are being applied across the political, social, economic and defence spheres. However, these technologies also pose significant risks and are becoming a new factor that could affect the stability of the entire system of international relations. AI-based tools can sway public opinion and election outcomes by spreading news, publications and fake content on social media and can interfere with the operation of critical infrastructure in other States. Let us be honest and admit that no one in the world fully understands all the risks associated with AI, and this point cannot be ignored. I will start with the positives. AI undoubtedly has immense potential to promote economic development. According to various estimates, if current growth rates continue, the AI industry could contribute up to $15.7 trillion to the global economy by 2030. Furthermore, AI could help to achieve the Sustainable Development Goals; relevant applications are already being developed and implemented to address challenges relating to climate change and healthcare. One of the key features of AI is its potential impact on employment. International Monetary Fund projections suggest that AI could affect nearly 40 per cent of jobs worldwide, either replacing or enhancing human labour. As a result, countries with developed economies will be better positioned to capitalize on the benefits of AI compared with emerging markets and developing countries, which will exacerbate the digital divide and the already significant imbalance in global development. That, in turn, could lead to social tensions and new conflicts.
The benefits of AI and its most advanced forms are universally recognized. However, the so-called AI race  — the ambition to outpace geopolitical rivals by rapidly developing a technology that is still not fully understood or controlled, without sufficient AI safety measures for all stakeholders — could, much like the arms race, endanger humankind’s very existence. This is not the only potential threat associated with AI development. It is therefore no surprise that those who see AI primarily as a source of new opportunities are evenly matched in number with those who view it mainly as a source of potential threats. In any case, I would venture to say that virtually no one today is indifferent to this issue, which is predictably becoming a topic of discussion in various political and expert forums. First, we struggle to understand how the generic theme of AI relates to the Council’s mandate, as clearly defined in the Charter of the United Nations, of maintaining international peace and security. Secondly, and equally important, the Council consists of 15 member States, with an evident and unnatural overrepresentation of Western countries. This creates a real risk that those States, which are eager to maintain and strengthen their technological dominance in this field, may attempt to impose a narrow, self-serving approach on the global community, at the expense of inclusive, specialized forums engaged in practical work in this area. Thus, discussions on cyberattacks involving AI would be more appropriately held within the Global Mechanism, which was established to succeed the open-ended working group on security of and in the use of information and communications technologies (ICT). That negotiating platform is mandated to address key security issues in the field of ICT, including AI. Unlike the Security Council, the Global Mechanism offers all countries the opportunity to participate on an equal footing in decision-making.
There are also multilateral and inclusive specialized forums for discussing the military aspects of AI use, primarily the Group of Governmental Experts on Lethal Autonomous Weapons Systems, which operates under the framework of the Convention on Certain Conventional Weapons, as well as the Disarmament Commission. Incidentally, these platforms have yet to develop a common understanding on even the most basic issues, such as terminology. I think that we all understand that, in this context, it would be premature, to say the least, to transfer discussions on such sensitive issues to other forums, especially the Security Council, not to mention addressing the impact of AI technologies on other aspects of non-proliferation and disarmament. Please do not project our position on the role of the Security Council vis-à-vis AI onto the Organization as a whole. We believe that the United Nations can and should play a coordinating role in AI development, as a counterbalance to various non-inclusive and temporary forums with politicized agendas. At the same time, it is crucial for us to reach universal agreements, with States playing the leading role and engaging in equal dialogue and, of course, with due regard being given to all legitimate interests of participants in the negotiation process. Unfortunately, the countries that I referred to earlier are not only seeking, as I said, to increase their technological lead over the countries of the global South in the field of AI but also organizing non-inclusive “summits on the responsible use of AI for military purposes”, which are destructive in nature. I am referring, in particular, to the so-called summit on AI held in Seoul in 2024 — a closed-door event with a limited number of participants, to which most developing countries were not invited. The conferences in Bletchley in 2023 and Paris in 2025 are quite similar. 
These events and their outcome documents do not reflect the views of all stakeholders and cannot serve as a foundation for further action that would reflect a common understanding of the issue. The supposedly noble goal of proactively developing “responsible” AI governance is noble only in words. In practice, the regulatory system that is being imposed is dangerous and harmful, because it explicitly suggests that AI applications can be categorized as “good” or “bad”. This raises a key question: who will decide what is “responsible” and what is not when it comes to the use of AI? Attempts by individual States to push through rules and guidelines at the Security Council level and to replace international legal instruments, including by promoting non-inclusive formats and coalitions, are unlikely to bring us closer to developing common approaches to addressing the issue of AI, including its “military” aspects. This could ultimately backfire on the international community and have a destructive impact on the maintenance of international peace and security. We would like to caution our colleagues against such reckless steps. Our position remains unchanged. We advocate in support of the United Nations playing a coordinating role vis-à-vis AI. We welcome the establishment of AI governance mechanisms within the framework of the United Nations, such as the Global Dialogue and the Independent International Scientific Panel. We look forward to productive discussions within these mechanisms, based on the principles of respect for State sovereignty and compliance by AI developers with national laws. We hope that everyone in this Chamber shares the conviction that we cannot allow AI to take precedence over human beings and human values.
For our part, we stand ready to continue substantive, equal and mutually respectful work on all aspects of AI in any inclusive specialized platform, with a view to finding mutually agreeable solutions and maximizing the benefits of AI implementation for all countries of the world.
At the outset, I would like to thank the Republic of Korea for the initiative to hold this open debate. I also thank the Secretary-General for his remarks and his written report (A/79/966) and the briefers, Mr. Bengio and Ms. Choi, for their insights into the topic of today’s debate. Artificial intelligence (AI) is the major technological revolution of our century. It is already transforming our societies and economies, and in the coming years it will have a growing impact on international peace and security. The Secretary-General reminded us that this technology must serve peace and the common good. France is committed to this. We launched the Global Partnership on Artificial Intelligence in 2019, alongside Canada. France is implementing the European regulation on high-risk AI uses. France also supported the adoption of the Global Digital Compact at last year’s Summit of the Future. At the AI Action Summit in February, we launched the Current AI initiative, which is aimed at placing AI at the service of the public interest. Three key principles must guide our action. First, the new opportunities offered by AI must strengthen the impact and effectiveness of the work of the United Nations. When used responsibly, AI’s data collection and analysis capabilities can improve the protection of civilian populations and facilitate peace operations. AI can be used for identifying early warning signs of conflict, for planning, for supporting decision-making, for measuring performance and for training. AI can also be used to better anticipate and manage humanitarian risks and climate disasters and to reduce the environmental impact of peace operations. That is why France supports using AI to modernize peacekeeping tools, including decision-making platforms such as Unite Aware and early warning systems in Africa as part of the Silencing the Guns by 2030 initiative. Secondly, the use of AI in matters pertaining to peace and security must be subject to regulation. 
Disinformation and the manipulation of information facilitated by generative AI threaten our democracies, undermine peace operations and endanger the security of Blue Helmets. In cyberspace, artificial intelligence increases the capabilities of those who seek to exploit our vulnerabilities. In the military domain, in accordance with General Assembly resolution 79/239, presented by the Republic of Korea and adopted in December 2024, we must continue at the United Nations to collectively assess the consequences of applying artificial intelligence. In other forums, France will continue the work being carried out by the European Union and the Council of Europe, as well as by the Republic of Korea and the Netherlands, at the Responsible Artificial Intelligence in the Military Domain Summits. Thirdly, we must promote governance around AI that is inclusive, multilateral and respectful of fundamental rights. Linguistic fragmentation would mean new rivalries in the world. Normative fragmentation would mean competition between rival models and would deepen digital divides. Such a prospect is not desirable for any State. A common governance architecture must be built based on international law and respect for human rights. That is why France supports rapid implementation of the Independent International Scientific Panel on AI and the Global Dialogue on AI provided for by the Global Digital Compact. Building on the Paris Artificial Intelligence Action Summit, an AI that is trustworthy, inclusive and environmentally sustainable must be allowed to develop. We share the view of several of our partners that the Security Council has a specific role to play. It must do so in concert with other ongoing processes at the United Nations and elsewhere, by providing added value on issues within its mandate, whether that be conflict prevention, peacekeeping or peacebuilding.
In conclusion, France calls for collective mobilization so that artificial intelligence can remain a tool in the service of peace, sustainable development and human rights. We must act together to ensure that this technological revolution creates shared progress for all and not new divisions. France will continue to make its full contribution to this common effort.
I now give the floor to His Excellency Mr. Karol Nawrocki, President of the Republic of Poland. President Nawrocki: I thank you, Mr. President, for convening this debate. Artificial intelligence (AI) is no longer a laboratory experiment. It is a tool used by millions of people around the world every day in all areas of our life, including national security. Today our security is threatened not only by tanks and missiles but also by cyberattacks, disinformation campaigns, the use of so-called deepfakes and manipulation in the information space. These are also modern weapons, very different from conventional ones. In recent years Poland, like many others, has become the target of increasingly intense attacks from hostile countries, mainly Russia. We are attacked every day. Let me mention that, in the past year alone, more than 100,000 confirmed incidents were reported in Poland. I believe that AI can be our shield on this new cyberbattlefield and gives us a real advantage. Poland is not standing on the sidelines of this revolution. We have enormous potential and great examples from our history in which outstanding Polish scientists influenced the fate of the world. Let me recall the Polish mathematicians Marian Rejewski, Jerzy Różycki and Henryk Zygalski; I know those names are well known. Many innovative technology start-ups are developing in Poland today, creating solutions based on AI. Our research teams and technical universities are involved in international projects and are leading global companies. I am convinced that we must do everything we can to support the development of new technologies, including those that help strengthen our resilience and national security. To achieve that goal, three weeks ago, I signed a legislative initiative to establish a breakthrough technology development fund in Poland. In Poland, we are fully aware that for artificial intelligence to truly serve peace and global development, it must be based on much more than just technology.
It must also be based on transparent principles consistent with human ethics and international regulations. We need a forum in which countries can share their experiences in the responsible implementation of AI, including in the security and defence sectors. I think that the Security Council can become a leader in developing rules for the use of AI in armed conflicts and for standards limiting its use against humans. Artificial intelligence is a tool — no more, no less. It is up to us whether it will be a tool of creation or a tool of destruction. Whether we will use it to protect our values or allow it to be used against them. Poland is ready to cooperate, to share our know-how, to collaborate in developing best practices and to engage in international cooperation in the spirit of ethics, with a view to preserving peace and the principles of international law.
I now give the floor to His Excellency Mr. Marcelo Rebelo de Sousa, President of the Portuguese Republic.
This is a crucial and timely debate. Artificial intelligence (AI) develops faster than our capacity to assess its impact. Its integration into the military domain has particularly profound implications. If misused, it threatens global stability, undermines trust between States and jeopardizes international humanitarian law. This is particularly the case with autonomous weapons systems. Human control, decision and accountability must be at the heart of the use of force. It is a moral, ethical and legal responsibility that cannot and should not be delegated. This is why Portugal strongly supports the Secretary-General’s recommendation for an international treaty banning lethal autonomous weapon systems. We believe results are better achieved through cooperation. Canada and Portugal are co-Chairs of the working group on accountability, in the wake of the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. We are also engaged in the Responsible Military Use of Artificial Intelligence process. Finally, we are investing in digital training so that the benefits of AI can reach all regions on an equal footing. The United Nations-Portugal Digital Fellowship capacity-building programme is the latest example. The Security Council has a decisive role to play in ensuring that AI is used for the common good. Alongside strategic debates, it can take very practical measures within its mandate, like providing missions with the means to confront disinformation campaigns, creating practical guides on AI in conflict scenarios or using AI tools to improve risk prevention and analysis. Innovation must be at the service of humankind, never against it. Artificial intelligence must be an ally of peace, security and the dignity of all citizens. Portugal is fully committed to this common goal and always faithful to the Charter of the United Nations.
Artificial intelligence is a radical change in our modern world. It holds great potential for innovation and development but also poses challenges and risks that may exacerbate instability, especially in countries suffering from protracted conflicts. For decades, our people have suffered from marginalization, conflict and oppression, as well as instability, most recently at the hands of the terrorist Houthi militia supported by Iran. We have seen how artificial intelligence can be used as a tool by the Houthis to spread misinformation and false propaganda, and we have witnessed Iran’s efforts to spread autonomous weapons systems that threaten security and stability. Artificial intelligence is a double-edged sword. Without wise and prudent governance, it can become a tool in the hands of terrorist groups to threaten regional and international security. At the same time, if used responsibly, it can contribute to building peace, promoting good governance and enabling communities to rebuild their institutions and restore basic services. Artificial intelligence should not be treated as a purely technical matter, but rather as a matter of sovereignty and the dignity of peoples. If controlled by foreign parties, it could threaten fragile stability. If used responsibly, however, it could support local governance, provide early warning tools to prevent conflicts and promote economic recovery. To maximize the benefits of artificial intelligence, we must take into account four priorities. First, with regard to development, artificial intelligence can help countries suffering from war, such as Yemen, rebuild their basic services in the health, education, energy and water management sectors. Its agriculture and climate forecasting technologies can also promote adaptation to environmental challenges that could increase instability. Secondly, with regard to local peace and local security, the use of artificial intelligence to promote peace is important.
In my country, that could be done by monitoring the ceasefire, detecting illegal weapons trafficking and promoting the gathering of open-source information about the movements of the Houthi terrorist militia and its propaganda networks. If used wisely, it could strengthen communities’ trust in their local institutions and reassure donors that their assistance is reaching those in need. Thirdly, in terms of regional and international peace and security, the responsible use of advanced technology and artificial intelligence is directly related to regional and international peace and security. We have seen how the Houthi militia’s access to such technology has had a direct impact on the security situation in the region, targeting maritime navigation, the global economy, energy supplies and civilian infrastructure in the region. Therefore, if this technology falls into the hands of illegal terrorist groups, it could fuel conflicts and escalation in a way that we are not able to control in the future. Against that backdrop, we are committed to playing our part in establishing security and stability. We begin by explaining our experience and then transform it into an opportunity for cooperation and collaboration between the Presidential Leadership Council, the Arab Alliance and the international community, in accordance with the deterrence strategy adopted by the Presidential Leadership Council. Fourthly, with regard to achieving justice and respecting the principles of the Charter of the United Nations, artificial intelligence must be a contributing factor to achieve justice and enforce the law, not only locally but also internationally. Artificial intelligence must not negatively affect the aspirations or rights of peoples.
I now give the floor to His Excellency Mr. Sitiveni Ligamamada Rabuka, Prime Minister and Minister for Foreign Affairs, Civil Service and Public Enterprises, and Information of the Republic of Fiji.
I am indeed grateful for this opportunity to speak on this issue. And I sincerely thank the Republic of Korea for its leadership this month. We now live in a world in which our lives are reliant on digital technologies. In line with the Pact for the Future, technological advancements and enhanced artificial intelligence (AI) can support peace and security by improving human capacity to detect, prevent and manage conflict. But AI cannot resolve conflicts on its own. Peace is ultimately a human process involving trust, justice and political will. AI is a digital tool, not a solution in itself. AI can support conflict prevention and early warning, enabling timely intervention by Governments and organizations. AI strengthens defences against cyberattacks and the spread of disinformation. It can also detect online hate speech or extremist recruitment. In conflict zones, including in States affected by climate change and natural disasters, AI enhances the mapping and location of displaced populations. The Security Council therefore needs to consider a global governance architecture through an intergovernmental process on the use of AI in peace and security and to map clearly its risks and challenges. For our part, the Pacific leaders have endorsed the Ocean of Peace Declaration. We have declared our region a zone of peace where sovereignty, respect and rights uphold our way of life  — a region free from military interference, a region that upholds and sustains peace. In closing, I cannot overemphasize the need for the Security Council to act in an inclusive process to examine the potential that AI has to offer, including its risks in peace and security. We need to act swiftly together now.
I now give the floor to His Excellency Mr. Ervin Ibrahimović, Deputy Prime Minister for International Relations and Minister for Foreign Affairs of Montenegro.
I thank the Republic of Korea for convening this important debate at such a critical moment. Across the globe, unresolved conflicts and rising geopolitical tensions are testing our collective ability to safeguard peace. At the same time, the pace of technological transformation brings new and complex risks. Among them, artificial intelligence (AI) stands out, holding not only extraordinary potential to advance humankind but also the power to deepen existing security challenges. AI can act as a powerful force for good, strengthening peacekeeping, early warning, conflict prevention and humanitarian responses. Yet in the wrong hands or without proper safeguards, it can be weaponized to spread disinformation, enable cyberattacks or even design novel weapons. Montenegro is fully aware of both the risks and opportunities that AI brings. By the end of this year, we will finalize our first national strategy on AI. It will provide a comprehensive framework for the responsible development and use of this technology. As a future member of the European Union (EU), we will strive to align it with the highest standards of the EU AI Act. Raising awareness of risk must go hand in hand with unlocking the potential of AI for innovation, security and sustainable development. This requires global and international cooperation. The Security Council has a distinct role to play in ensuring that AI contributes to peace and security rather than undermining them. We encourage collective efforts to establish norms and guidelines, foster trust and transparency and ensure that all countries benefit from this technology in a safe and responsible manner. In this regard, Montenegro welcomes the launch of the Global Dialogue on AI governance. Considering its rapid advancement, including the future of artificial general intelligence, we see it as a unique, inclusive multi-stakeholder platform to advance collective approaches in line with the Global Digital Compact.
In closing, allow me to reaffirm Montenegro’s readiness to contribute constructively to this effort. If we do not act responsibly today, we risk handing future generations a technology that divides rather than unites. But, if we act together, AI can become a force for peace, not a danger.
I now give the floor to His Excellency Mr. Hendrikus Wilhelmus Maria Schoof, Prime Minister of the Kingdom of the Netherlands.
When the Dutch philosopher Cornelis Verhoeven was asked for his opinion on the digital revolution, he said: “Computers will never be able to do what we do because we created them. But then again, that is what the apes said about us.” It might sound like a flippant remark, but there is an element of truth to it, too. We are here today to talk about artificial intelligence (AI) and international peace and security. It is such an important topic that South Korea has put it on the Security Council agenda, which we greatly appreciate. When it comes to AI, opinion is generally divided into two camps: for and against. But if one looks more closely, it is clear that the positive and negative views on AI’s potential are equally valid because AI can indeed help provide solutions to global issues like climate change, food security, water management and collateral damage in military conflicts. But it can also be used for cyberattacks and disinformation campaigns, so there is every reason for us to develop accountability mechanisms. In 2023, the Netherlands and South Korea jointly initiated the broad international debate on the responsible use of AI in the military domain, also known as the REAIM process. As part of this process, the Netherlands launched a global commission on responsible AI in the military domain. This independent body is made up of around 20 international commissioners — from philosophers and lawyers to mathematicians and computer scientists — and is supported by a broad group of experts. It has been working on this issue for the past 18 months. Today it is presenting its final report, with concrete recommendations on how Governments can embed this theme in their policy agendas. But beyond the military domain, AI has already spread more broadly into our day-to-day lives. And while everyone accepts that we need security safeguards for military applications, that is not automatically the case in other areas. 
We therefore need to pay special attention to those areas, too. This is a task that the Netherlands takes very seriously by drawing up frameworks and impact analyses for the use of AI and algorithms, as well as by working with a range of organizations concerned with raising awareness and providing information on AI, including ELSA Labs, which stands for ethical, legal and societal aspects of AI. But, of course, this requires a broader approach, too. The Netherlands supports the European Union vision on jointly building an innovative European ecosystem focused on people, trust and accountability. And now I am keen to hear the views of everyone in the Chamber.
I now give the floor to His Excellency Mr. Sadyr Zhaparov, President of Kyrgyzstan. President Zhaparov (spoke in Kyrgyz; English interpretation provided by the delegation): At the outset, I would like to express my sincere appreciation to the President of the Republic of Korea, Mr. Lee Jae Myung, for organizing this high-level meeting. I consider the inclusion of this topic in the Security Council’s agenda to be both timely and highly relevant. Kyrgyzstan is currently actively working on the introduction of artificial intelligence (AI) technologies in public administration and key sectors of the economy. In accordance with the digital transformation vision for 2024–2028, we are implementing a number of initiatives. These include the creation of a national artificial intelligence platform, a high-performance computing centre and the development of educational programmes for training specialists. In this area, Kyrgyzstan is participating in the initiatives of the Shanghai Cooperation Organization and is engaged in discussions on joint projects with countries of Central Asia and the Commonwealth of Independent States. Artificial intelligence is becoming one of the most advanced and transformative technologies of the twenty-first century. It affects all spheres of our lives. At the same time, we cannot ignore or turn a blind eye to the new challenges, including the militarization of AI, the design and development of autonomous weapon systems, cybersecurity threats and the spread of disinformation. These risks could undermine confidence among members of the international community, jeopardize strategic stability and potentially lead to new forms of conflict. Kyrgyzstan firmly believes that artificial intelligence technologies should serve exclusively the goals of peace, conflict prevention and the protection of human life. In this regard, we call for the development of international standards and regulations for the responsible use of artificial intelligence.
The benefits of these technologies must be available to all countries, not only developed, but also developing ones. Kyrgyzstan has put forward its candidacy for non-permanent membership of the United Nations Security Council for the term 2027–2028. If elected to the Security Council, we are ready to contribute to the efforts of the international community to ensure the use of artificial intelligence for peaceful purposes, as well as to actively participate in joint initiatives.
I now give the floor to the representative of Morocco.
My Foreign Minister has been held up at another meeting. I will read out this statement on his behalf. Allow me to thank the Republic of Korea, the President of the Security Council, for holding this timely debate on artificial intelligence as a tool for preserving international peace and security. We are holding this discussion at a time when the rapid technological advances associated with artificial intelligence are profoundly transforming the dynamics of international peace and security. As a nation committed to peace, stability and responsible technological progress, Morocco would like to highlight four points. First, we must invest in artificial intelligence systems capable of detecting the early signs of instability, particularly in the most fragile regions, because early detection can prevent conflict and save lives. Secondly, artificial intelligence must be used in the fight against digital disinformation, by detecting harmful content such as hate speech, disinformation and incitement to violence. This is particularly important in areas in which peacekeeping operations are deployed, in which such speech threatens the ability to implement mandates, as well as the safety and security of Blue Helmets. Thirdly, we can, and indeed we must, use the capabilities of artificial intelligence to predict and manage environmental challenges such as water shortages, droughts and disruptions to agricultural systems. Fourthly and lastly, when it comes to AI capabilities in the hands of non-State actors and terrorist groups, the Security Council, in accordance with its mandate, could consider an approach similar to that of resolution 1540 (2004) on weapons of mass destruction.
Morocco reaffirms its commitment to working with all partners towards the responsible use of artificial intelligence, guided by principles that guarantee the respect for international law and ethical considerations, the prevention of discrimination, equitable access and technology transfer, environmental responsibility in the deployment of AI infrastructure, and governance that encourages innovation.
I now give the floor to the representative of Cambodia.
It is my honour to represent the Government of Cambodia at this important open debate on the implications of artificial intelligence (AI) for international peace and security. I wish to commend the Republic of Korea for its remarkable leadership as President of the Security Council for this month and to express my appreciation to the Secretary-General and all briefers for their insightful remarks. Artificial intelligence is no longer an abstract concept. It is already reshaping the global security landscape. AI holds immense promise in peacekeeping, conflict prevention, humanitarian action and post-conflict recovery. Yet its dual-use nature also presents profound risks, such as disinformation campaigns, cyberattacks, the development of autonomous weapons and the threat to non-proliferation and human rights. The choice before us is stark. AI can either serve as a catalyst for peace and stability or as a multiplier for insecurity and conflict. In this regard, permit me to highlight the following key points. First, the safe, ethical and responsible use of AI must be a global priority. Making responsible AI a worldwide priority is not optional. It is a duty that requires urgent and coordinated global action, rooted in law and ethical norms, driven by cooperation and guided by a shared commitment to humankind. Secondly, Cambodia supports establishing a dedicated United Nations body on AI, under the Secretary-General, to ensure effective governance, protect peace and promote sustainable development, in line with the New Agenda for Peace. Thirdly, cooperation is indispensable. No country can address the complexity and multifaceted impact of AI alone. Cambodia supports multilateral efforts, including within the United Nations Security Council, to establish common norms, share best practices and enhance safeguards to ensure that AI contributes to stability, prosperity and respect for human dignity.
Against this backdrop, Cambodia is in the process of formulating its first national artificial intelligence strategy, for the years 2025–2030. It has become the fourth country in South-East Asia to complete a UNESCO-led national artificial intelligence readiness assessment, in line with the national digital economy and the 2021–2025 information technology framework. In conclusion, Cambodia reaffirms its commitment to working with all Member States to shape a future in which artificial intelligence promotes peace, security and humanity. We pledge to ensure that the development and use of AI technology follow international law and reflect our shared values, driving a global vision that is inclusive, ethical and grounded in respect for humankind.
I now give the floor to the representative of Bangladesh.
The enormous benefits of artificial intelligence (AI) are self-evident. At the same time, the potential perils of AI are pernicious, for which we need appropriate policy responses, both at the national and international levels. With the exception of city-States, Bangladesh is the most densely populated country in the world, with a large, young population. Such high density means that online disinformation can spread very quickly, with serious offline consequences in the physical world. Especially when that disinformation is conducted from outside our borders with weaponized AI, the policy challenges are great unless we work together internationally across borders. The United Nations has the opportunity to ensure that all Member States play a constructive role in combating motivated disinformation efforts from across borders. The United Nations may also play a catalytic role in ensuring greater cooperation between platform or technology companies and Governments. Member States may also collaborate to share experiences in educating their populations about AI-doctored disinformation. We welcome the ongoing United Nations and multilateral initiatives, including the Secretary-General’s call for a Global Digital Compact, the recent launch of the Global Dialogue on Artificial Intelligence Governance and the establishment of an Independent International Scientific Panel on Artificial Intelligence. These are crucial, not just for peace and security but also to ensure that the fruits of AI are inclusive of all and that the most vulnerable of all are protected from the threats from AI. Bangladesh is ready and eager to contribute to these initiatives.
I now give the floor to the representative of Liechtenstein.
The use of artificial intelligence (AI) tools has become a reality in our daily lives. As with all technologies, it brings potential benefits but also considerable risks. We must ensure that the development and use of AI tools take place in line with the Charter of the United Nations and international law. The use of AI tools in warfare has resulted in serious violations of international humanitarian law. For example, AI is increasingly used to automate the process of targeting. With human life at stake, algorithmic bias in these tools can result in ethically and legally unacceptable results without even the hypothetical prospect of accountability. Civilian harm resulting from the use of AI tools demonstrates the need to redouble our efforts to prohibit fully autonomous weapons in international law, in line with the recent call of the Secretary-General. Life-or-death decisions must never be made without human control. Ensuring meaningful human control over AI tools is the bare minimum standard for compliance with international humanitarian and international human rights law and the only way to ensure that accountability for violations is achievable. Regulation is also needed to ensure that AI tools cannot be used to spread disinformation and hate speech. Such regulation would ideally pave the way to ensuring that those who develop or deploy such tools are held accountable for violations they are responsible for, in accordance with applicable international law. Malicious cyberoperations, including those enabled by AI tools, can cause devastating harm to civilians, and yet there remains a troubling lack of clarity on how international criminal law applies to cyberwarfare. While there is broad agreement that international law governs cyberspace, its practical application remains unsettled, especially in the context of accountability for cyber-related atrocities.
We therefore wish to draw members’ attention to the Council of Advisers’ report on the application of the Rome Statute of the International Criminal Court to cyberwarfare, prepared by Liechtenstein. This report seeks to contribute to a clear understanding of how the Rome Statute applies in the cyberdomain, which is essential not only for the Court’s work but also for guiding the Security Council in its mandate to refer situations involving the commission of the most serious crimes under international law.
There are still a number of speakers remaining on my list for this meeting. I intend, with the concurrence of members of the Council, to suspend the meeting until tomorrow, 25 September, at 3 p.m.
The meeting was suspended at 6.10 p.m.