Workshop participants during one of the breakout sessions of the International Workshop on Responsible Artificial Intelligence for Peace and Security, NYU and United Nations, May 7–9, 2025.

May 21, 2025

SGS PhD student Alexandra Bodrova and Co-Director Alexander Glaser joined the International Workshop on Responsible Artificial Intelligence for Peace and Security, organized by the United Nations Office for Disarmament Affairs (UNODA) and the Stockholm International Peace Research Institute (SIPRI) from May 7–9, 2025, at the NYU Tandon School of Engineering and the United Nations Headquarters in New York City.

The workshop focused on what it might take to ensure that artificial intelligence is developed responsibly in a world of accelerating technological change, dual-use risks, and geopolitical uncertainty. It brought together a diverse, multigenerational, and international group of educators, researchers, and policymakers to explore how responsible innovation in AI can be taught and integrated into STEM education globally. The program also included a high-level roundtable discussion with UN representatives.

Glaser and Bodrova introduced First Law Robotics, a new interdisciplinary effort to understand and address dual-use aspects of robotics, especially those enabled by artificial intelligence. Combining new theoretical and experimental approaches in robotics and AI, the initiative aims to develop means of safeguarding robots and autonomous systems against being repurposed for harmful missions they were never designed or intended to carry out. A key goal is to develop and prototype the concept of an “Asimov Box” operationalizing Asimov’s First Law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The effort grew out of a course developed by Ryo Morimoto and Alexander Glaser (Robots in Human Ecology) and is guided by insights from SGS disarmament science studies on monitoring and verification of nuclear weapons and materials. It brings together researchers from Princeton’s School of Engineering and Applied Science, the School of Public and International Affairs, and the Department of Anthropology.

Participants at the International Workshop on Responsible Artificial Intelligence for Peace and Security, NYU and United Nations, May 7–9, 2025.

Over three days at the workshop and the UN roundtable, participants examined how to empower the next generation of AI developers to anticipate and mitigate societal and security risks. Key themes included how to embed ethical reasoning, dual-use awareness, and social impact assessments into technical AI courses; the importance of bridging the gap between technical and policy domains; and concrete steps that universities, governments, and international organizations can take to support responsible AI education and governance. The workshop also featured an interactive curriculum design session and small-group evaluations of the forthcoming Handbook on Promoting Responsible Innovation in AI for Peace and Security.

As part of the event, NYU's new Center for Robotics and Embodied Intelligence, led by Ludovic Righetti, organized a panel discussion featuring Vincent Boulanin (SIPRI), Virginia Dignum (Umeå University), Peter Asaro (The New School), and Julia Stoyanovich (NYU), highlighting the pressing need for interdisciplinary, people-centered approaches to responsible AI.