On January 28, the Center for Data Innovation and DiploFoundation hosted an event which brought together over 150 people—diplomats, policymakers from EU institutions and member states, researchers, journalists, and others—with an interest in the relationship between artificial intelligence (AI), diplomacy, and foreign policy. The event served as a platform to launch and discuss DiploFoundation’s report, Mapping the challenges and opportunities of artificial intelligence for the conduct of diplomacy, which was commissioned by the Finnish Ministry for Foreign Affairs. The launch was followed by three expert panels, which deepened the discussion of some of the key themes in the report and allowed for a further exchange of ideas.
In her opening remarks, Sini Paukkunen, head of the policy and planning unit at the Finnish Ministry for Foreign Affairs, stressed the timeliness and importance of the topic. She argued that more resources should be devoted to understanding the relationship between AI and diplomacy. Paukkunen also highlighted the relevance of the sustainable development goals (SDGs) and the importance of leaving no one behind in the era of AI.
Katharina Höne, senior researcher, lecturer, and project manager at DiploFoundation, introduced the key elements and recommendations of the report. She emphasized the importance of realizing that AI is an umbrella term and a moving target that escapes easy definitions. She also highlighted that when people talk about AI and diplomacy, they are exclusively talking about cases of specialized AI—AI systems designed for one particular task. Introducing the report in detail, she shared findings from a comparison of various national AI strategies, introduced examples of AI tools being used to support the work of diplomats, and zoomed in on the impact of AI on human rights. Höne concluded with three specific suggestions for ministries of foreign affairs (MFAs) that want to take the first steps to respond to the opportunities and challenges posed by AI:
- Develop input indicators to measure and evaluate the work of ministries in the area of AI;
- Engage in capacity building to ensure a general understanding of AI is shared by everyone in the MFA; and
- Complement the existing organizational structure of the MFA with a small and agile “AI and big data unit” that can explore AI as a tool for diplomacy on the basis of being given the freedom to innovate and experiment.
Questions following the presentation touched on issues related to the potential regulation of AI systems, competition among countries and the danger of an “AI arms race,” the limits of cooperation in AI, and the challenges associated with building the capacities of countries that might otherwise be left behind.
The first panel discussion on “The Geostrategic Impacts of AI” brought together Richard Stirling, (CEO & co-founder, Oxford Insights), Michael Street (head of innovation and data science, NATO, NCI), and Maaike Verbruggen (PhD researcher on international security, Institute for European Studies) under the moderation of Nicole Reynolds (project manager, DiploFoundation).
Stirling warned that many discussions focus too heavily on advanced AI applications and risk overlooking AI used at “lower levels,” which might not look very different from standard data analysis. Autonomous weapons are clearly the area where AI is set to be most impactful in the military field. Yet AI will also be key in supporting logistics, deploying equipment to the right locations at the right time, and distributing skills across various services. Stirling also encouraged a healthy skepticism regarding the promises of AI applications. In his view, AI can act as a power amplifier, and although this could result in more balance among countries, there is still a risk that the existing imbalance of power between producers and consumers of AI will be further entrenched. Countries should ask themselves where their comparative advantage may lie in their AI strategies, and which areas they should invest in.
Street noted the importance of having a clear understanding of what the technology can and cannot do. Here, the quality of the data is fundamental to ensuring the quality of the AI. Street introduced the audience to the concept of “data poisoning”: if it is known that a particular source of data informs a decision, that data can be manipulated. He also touched upon “meaningful human intervention,” a key term in the military community referring to the debate about having a human “in” or “on” the loop—that is, a human making decisions informed by AI, or, where events happen too fast for that, a human able to interrogate and override an automated decision. Street then discussed “meaningful investigations” when using AI for recommendations. AI can act as an advisor recommending a potential course of action to a commander, but the challenge is to preserve the ability to interrogate the system and gain insight into the rationale behind its recommendation, since some aspects may not be captured in an automated, mathematical analysis. Regarding how AI will affect the balance of power between actors, Street expects a change in the speed driving decision-making in the military field. In addition, non-military activities, such as influencing political opinion or mounting economic attacks on countries, are increasingly used for military-like gains. Hybrid warfare and national influence as ways to exert power will have an increasing impact.
Verbruggen cautioned that talk of an “AI arms race” could well become a self-fulfilling prophecy. AI is not a zero-sum game, especially when it comes to economic applications. There are technical benefits to the use of AI in the military field, such as increased speed and efficiency and the ability to deploy operations in dangerous environments. She underlined that competition between powers produces a cascading and accelerating effect on technological development, because military capabilities remain a strong symbol of hard power. This development is moving faster than reflection on the security dilemmas and the appropriate use of some AI applications in the military field. Further, Verbruggen emphasized that there will not be a radical departure from the status quo, or a revolution, any time soon, and she does not expect a major shift in the balance of power. In addition, there are still clear obstacles to the further militarization of AI. Verbruggen also mentioned asymmetrical risks and how the nature of warfare may change, given that some countries may have fewer AI military capabilities than others. Commenting on questions of data privacy protection, she warned that in some cases data privacy might be used as an excuse to avoid sharing data and thereby gain a competitive advantage.
Moderated by Eline Chivot (senior policy analyst, Center for Data Innovation) the second panel on “AI as a Cognitive Tool for Diplomatic Practice” was joined by Jovan Kurbalija (executive director and co-lead, United Nations Secretary-General’s High Level Panel on Digital Cooperation), Andrew-Tony Camilleri (technical attaché, Permanent Representation of Malta to the European Union), Arnaud Gaspart (business intelligence analyst, Management Unit of the Secretary General at Belgium’s Federal Public Service Foreign Affairs, Foreign Trade and Development Cooperation), and Philippe Lorenz (project lead artificial intelligence and foreign policy, Stiftung Neue Verantwortung).
Kurbalija reminded the audience that in an increasingly interdependent world, there will be more, not less, need for diplomacy. However, the diplomacy of the future might look slightly different than it does now. For example, it might increasingly be performed by non-traditional diplomats. He argued that AI is already happening and shifting things around us while we are busy discussing its future. In terms of using AI as a tool, he encouraged everyone to start with simpler applications and focus on low-hanging fruit. Many simple tools are missing. For example, in the multilateral diplomatic world, exchanging candidatures is a huge undertaking given the diversity of UN institutions and agencies. There is as yet no sophisticated system or database that provides an overview of talent available across countries and facilitates transfers and swaps of candidatures. AI can excel in diplomatic services, for instance, by identifying linguistic and cognitive patterns: Kurbalija used the example of patterns and insights generated through the analysis of typical diplomatic speeches transcribed in real time during conferences.
Camilleri described three reasons why AI harbors great potential as a tool for diplomacy. First, AI can act as an equalizer between small and large member states and help in dealing with understaffing. For example, the department in Malta’s foreign affairs ministry charged with examining proposals issued by the Commission consists of one person, compared to two or three people in other countries. Malta could greatly benefit from a tool for processing and performing data-heavy analyses. In addition, using such tools could add speed to managing affairs at the EU level—which may help in solving the societal and economic challenges that arise from slow political decision-making and bureaucratic systems. Second, AI solutions can increase the efficiency of diplomatic practice and support ministries and governments as a whole through more coordinated and consistent approaches. AI could help in dealing with employee turnover, as attachés and other civil servants tend to change postings and national governments succeed one another. This would help retain institutional memory and manage information growth, even as the teams that discuss and negotiate proposals change over time. Finally, AI as a tool for diplomatic practice can support the preparation of negotiations through various AI-enabled research tools.
Gaspart cautioned that, in contrast to the way the Internet economy operates, diplomacy does not work according to the principle “move fast and break things.” He also described the ways in which his ministry already engages with AI tools, mentioning in particular the use of chatbots in consular affairs, which can answer citizens’ questions in real time. AI tools can make text easier to digest and facilitate the adoption of laws. Gaspart also stressed the importance of training the next generation of diplomats to understand AI, while emphasizing the significance of soft skills, which in his view will remain paramount in diplomacy. AI can be an appropriate tool for going through datasets and analyzing language, or for preparing input for diplomatic talks and gathering knowledge, but once someone enters the negotiation room, going tech-less and switching off the phone can be the best way to make progress. AI cannot do this. In addition, the use of AI and robotics is still in the experimental phase within most foreign affairs services.
Lorenz recalled that AI is a key enabling, general-purpose technology, and that some lessons from the Internet age can be applied to the age of AI. He argued that ministries need to be more risk-tolerant, more inventive, and more ambitious in integrating AI technology. However, doing so will require reorganization within MFAs; it is not enough to have one small unit dealing with all things AI. According to Lorenz, the challenge will be to find the right people to integrate AI tools more firmly within MFAs—people who are unlikely to come through the traditional recruiting channels of the public sector. In addition, AI could accelerate and mainstream the collection and delivery of valuable information about other countries, supporting decision-making back in national headquarters.
Speakers on the third panel on “AI, Human Rights, and Ethics in International Relations” included Ricardo Castanheira (public policy, government & European affairs, Permanent Representation of Portugal to the European Union), Brian Parai (deputy director in the results and delivery unit, Strategic Policy Branch, Global Affairs Canada), Nicholas Hodac (government and regulatory affairs executive, IBM), and Nicolas Moës (AI policy researcher, The Future Society—AI Initiative). Tereza Horejsova (project development director, DiploFoundation, and Executive Director, Diplo US) moderated the panel.
Castanheira started by emphasizing that as a starting point for thinking about the ethics of AI, we should not ask what computers can do, but focus on what they should do.
He mentioned the European Commission’s Consultation on the Draft AI Ethics Guidelines and some of the main principles it contains—such as fairness, inclusiveness, transparency and predictability, security and privacy, accountability, and reliability and safety. The context of tech-lash, i.e., growing opposition to certain technologies and tech companies, makes it ever more critical to discuss these issues.
Parai emphasized the need for a human-centric approach, grounded in human rights, inclusion, diversity, innovation, and economic growth when discussing and using AI. The perspective of Global Affairs Canada is that technology can both advance and hinder human rights and that the way it is used will make an important difference. In contrast to debates on ethics, human rights are clearly defined, internationally agreed upon, more tangible, and can be clearly measured. Hence, a human rights framework can accommodate AI issues. Canada has set up several initiatives such as digital inclusion labs, bringing together issues related to privacy, equality, bias, and freedom of expression. Its strategy engages civil society, governmental units, and businesses.
Hodac reflected on the role of the private sector: in his view, embedding an ethical approach to AI into business activities can be a competitive advantage, and business should take a more proactive approach that goes beyond compliance. IBM has principles on data responsibility—including access to data, data ownership, and data security—as well as on the purpose and meaning of ethical AI. He highlighted the importance of the High-Level Expert Group on Artificial Intelligence (AI HLEG) in connection with the European AI Alliance, and of the Partnership on AI, founded by Amazon, DeepMind/Google, Facebook, IBM, and Microsoft. He predicted that 2019 will be a tipping point, moving from philosophical conversation and theory to action and implementation. Commenting on tech-lash, Hodac said that all stakeholders involved in the AI conversation need to address the lack of trust in tech companies.
Moës noted that the multilateral approach between countries is faltering and that the status quo is unsustainable. The lack of multilateral stability and the rise of polarized administrations coming to power are leading to a more brutal international environment, which also explains the weak global governance of AI. To address this, Moës recommends “shifting gears” away from reliance on multilateralism: in his view, a multistakeholder approach could address questions of AI, ethics, and human rights. Moës emphasized the duty of the public and private sectors, civil society, and academia. Finally, he expressed concern that AI might lead to greater inequalities between countries, and that some countries might be left behind.
The event not only reflected the breadth of AI in the context of diplomacy and foreign policy, but also brought together a broad range of stakeholders and allowed for an exchange between experts with diverse views on the topic. Diplomacy will be needed more than ever, but it will need to adapt in a variety of ways. The event also brought home the fact that AI is here to stay and that global and national efforts are needed to address the complexity of the issue.