Issue Brief
October 2023

Decoding Intentions: Artificial Intelligence and Costly Signals

Authors: Andrew Imbrie, Owen J. Daniels, Helen Toner

Executive Summary

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? AI technologies are evolving rapidly and enable a wide range of civilian and military applications. Private sector companies lead much of the innovation in AI, but their motivations and incentives may diverge from those of the state in which they are headquartered. As governments and companies compete to deploy ever more capable systems, the risks of miscalculation and inadvertent escalation will grow. Understanding the full complement of policy tools to prevent misperceptions and communicate clearly is essential for the safe and responsible development of these systems at a time of intensifying geopolitical competition.

In this brief, we explore a crucial policy lever that has not received much attention in the public debate: costly signals. Costly signals are statements or actions for which the sender will pay a price, whether political, reputational, or monetary, if they back down or fail to make good on their initial promise or threat. Drawing on a review of the scholarly literature, we highlight four costly signaling mechanisms and apply them to the field of AI (summarized in Table 1):

- Tying hands involves the strategic deployment of public commitments before a foreign or domestic audience, such as unilateral AI policy statements, votes in multilateral bodies, or public commitments to test and evaluate AI models;
- Sunk costs rely on commitments whose costs are priced in from the start, such as licensing and registration requirements for AI algorithms or large-scale investments in test and evaluation infrastructure, including testbeds and other facilities;
- Installment costs are commitments where the sender will pay a price in the future instead of the present, such as sustained verification techniques for AI systems and accounting tools for the use of AI chips in data centers;
- Reducible costs are paid up front but can be offset over time depending on the actions of the signaler, such as investments in more interpretable AI models, commitments to participate in the development of AI investment standards, and alternate design principles for AI-enabled systems.1

We explore costly signaling mechanisms for AI in three case studies. The first case study considers signaling around military AI and autonomy. The second case study examines governmental signaling around democratic AI, which embeds commitments to human rights, civil liberties, data protection, and privacy in the design, development, and deployment of AI technologies. The third case study analyzes private sector signaling around the development and release of large language models (LLMs).

Costly signals are valuable for promoting international stability, but it is important to understand their strengths and limitations. Following the Cuban Missile Crisis, the United States benefited from establishing a direct hotline with Moscow through which it could send messages.2 In today's competitive and multifaceted information environment, there are even more actors with influence on the signaling landscape, and opportunities for misperception abound. Signals can be inadvertently costly. U.S. government signaling on democratic AI sends a powerful message about its commitment to certain values, but it runs the risk of a breach with partners who may not share these principles and could expose the United States to charges of hypocrisy. Not all signals are intentional, and commercial actors may conceptualize the costs differently from governments or industry players in other sectors and countries. While these complexities are not insurmountable, they pose challenges for signaling in an economic context where private sector firms drive innovation and may have interests at odds with the countries in which they are based.

Given the risks of misperception and inadvertent escalation, leaders in the public and private sectors must take care to embed signals in coherent strategies. Costly signals come with trade-offs that need to be managed, including tensions between transparency for signaling purposes and norms around privacy and security. The opportunities for signaling credibly expand when policymakers and technology leaders consider not only whether to "conceal or reveal" a capability, but also how they reveal it and the specific channels through which they convey messages of intent.3 Multivalent signaling, or the practice of sending more than one signal, can have complementary or contradictory effects. Compatible messaging from public and private sector leaders can enhance the credibility of commitments in AI, but officials may also misinterpret signals if they lack appropriate context for assessing capabilities across different technology areas. Policymakers should consider incorporating costly signals into tabletop exercises and focused dialogues with allies and competitor nations to clarify assumptions, mitigate the risks of escalation, and develop shared understandings around communication in times of crisis. Signals can be noisy, occasionally confusing some audiences, but they are still necessary.
Table 1: Examples of Costly AI Signals

Tying hands
- Military AI and autonomy: Issue unilateral policy statements to convey intent, such as committing to maintain a human in the loop for nuclear command and control decisions.
- Democratic AI: Defend democratic AI principles by committing to predefined actions in response to AI-enabled adversarial attacks on democratic societies.
- Private sector signaling: Release key information about advanced AI models, including transparency around the training data, model performance, and dangerous capabilities.

Sunk costs
- Military AI and autonomy: Invest in red teaming procedures during training and before deployment and explore the use of emblems to facilitate attribution of AI-enabled weapons systems.
- Democratic AI: Release due diligence guidance for private companies operating in markets where there is a systemic risk of misuse of AI technologies.
- Private sector signaling: Invest in trusted hosting services and test and evaluation infrastructure, including test beds and other facilities.

Installment costs
- Military AI and autonomy: Commit to sustained verification techniques for AI-enabled systems and develop arrangements for intensive compute accounting.
- Democratic AI: Develop common certification standards, tools, and practices for AI auditors.
- Private sector signaling: Commit to real-time incident monitoring and common standards around data collection and analysis of incidents involving AI-enabled systems.

Reducible costs
- Military AI and autonomy: Set requirements and create incentives for investing in interpretable AI models and alternate design principles.
- Democratic AI: Sponsor prize competitions for AI safety research and the development of privacy-enhancing technologies that promote democratic values.
- Private sector signaling: Publish AI impact assessments and the results of internal audits of AI systems.

Table of Contents
Executive Summary
Introduction
Costly Signals and Why They Matter
Costly Signaling Mechanisms and AI
Costly Signals in Practice
    Military AI and Autonomous Weapons
    Democratic AI and Inadvertent Signals
    Private Sector Signaling
Policy Considerations and Lessons Learned
Authors
Acknowledgements
Appendix A: Multilateral examples of language about "democracy" or "democratic values" and AI
Appendix B: Unilateral examples of language about "democracy" or "democratic values" and AI
Endnotes

Introduction

As the Cuban Missile Crisis neared its terrifying apex on October 22, 1962, Soviet First Secretary Nikita Khrushchev expressed dismay that his intended signal of deterrence had gone so awry.
"Our whole operation was to deter the USA so they don't attack Cuba," the Soviet leader remarked to his inner circle.4 With U.S. missiles in Italy and Turkey, he reasoned, why should the Soviets be denied the opportunity to right the balance? Khrushchev's decision to place missiles in Cuba was calculated to achieve a geopolitical trifecta: dissuade the Americans from invading the island, reestablish credibility at home, and seize the initiative from an increasingly assertive China. Moscow's motives were not readily apparent to analysts in Washington. Shortly after the Soviet launchers and missile shipments arrived in Cuba, an American U-2 reconnaissance plane captured evidence of the sites and relayed them back to a startled White House. U.S. President John Kennedy exclaimed to his advisors, "Why did he put these missiles in there? What's the advantage of that?"5

Against this backdrop of competing concerns and conflicting messages, a series of mishaps heightened tensions further. President Kennedy and Secretary of Defense Robert McNamara took pains to avoid what one historian observed was "the danger of having the Kremlin regard unauthorized actions as intentional signals."6 On October 26, however, the U.S. Air Force conducted an intercontinental ballistic missile test at Vandenberg Air Force Base in California.7 Then, on the morning of October 27, Soviet surface-to-air missiles struck an American U-2 spy plane in eastern Cuba, killing its pilot, Major Rudolph Anderson. Later that day, another American U-2 on a mission to collect samples of nuclear tests over the North Pole drifted into Soviet airspace without authorization. The U-2 maneuvered out of Soviet gunsights and returned home, but the risks of misperception were not lost on Washington. As a senior official from the State Department cautioned, "The Soviets might well regard this U-2 flight as a last-minute intelligence reconnaissance in preparation for nuclear war."8

The Cuban Missile Crisis is a reminder of the difficulty of sending clear and credible signals of intent in times of crisis. Leaders may think they are delivering one message, but the execution of their orders or lower-level actions of which they are unaware may convey another. Mirror imaging and the tendency to view other nations as monoliths only compound the challenge.

Decades later, the United States once again confronts a world saturated with major power tensions, strategic arms competition, and the rapid advance of new technologies. The imperative to avoid miscalculation and communicate credibly is no less urgent today than it was during those 13 harrowing days in 1962. Indeed, the task of signaling clearly may be even harder in the present environment. Innovation is more globalized and dispersed.9 National security considerations increasingly permeate corporate decision-making on investment and supply chains.10 Commercial players exert influence on governmental decision-making but, at times, act on the global stage independently or even against the national interest of their home countries.11 Trust among the major powers has frayed, and military-to-military communication has deteriorated.12 Compounding matters, emerging technologies, such as artificial intelligence (AI), have become new playing fields for geopolitical competition.13
Advances in AI and machine learning, in particular, have altered the signaling landscape. Nations are vying for leadership over general-purpose technologies whose military and civilian applications are not easily differentiated.14 AI algorithms and software services are intangible, though they are often tightly coupled with hardware components.15 Such algorithms can be unpredictable in their effects and diffuse unevenly across sectors and societies. Openness has long characterized the academic field of AI, but concerns over safety and rising geopolitical and market pressures are accelerating the trend toward more closed ecosystems for AI development.16 As the rivalry between the United States and China gathers momentum, the risks of mixed messages will grow as leaders broadcast the strengths of their AI-enabled systems and conceal weaknesses and intended use cases for deployment. Entanglement between nuclear and non-nuclear capabilities could raise the stakes even higher as governments integrate AI into military decision-making and planning.17

In this context, it is critical that leaders pursue technology and national security policy goals without fueling instability or courting inadvertent escalation. The way forward will require a healthy dose of diplomacy and wise investments across a portfolio of standards, tools, and assessment approaches that facilitate responsible development across the life cycle of AI technologies.18 One tool that holds promise but has received little attention in the public debate is what researchers have termed "costly signals." The essence of a costly signal is that the sender will pay a price if they back down or fail to make good on a promise or threat.19 Costly signals reveal information of a certain type: governments or companies that send a costly signal are disclosing information that a less capable or resolved actor would not otherwise send.20 The costs may be financial or reputational, or they may involve a cost in the human lives that such actions or statements put at risk, such as the deployment of troops to defend security commitments to allies.21 For a signal to be costly and not a form of "cheap talk," the receiver must be able to observe compliance and the sender must be willing to risk paying a price for noncompliance. Policymakers should be humble about the ability to convey accurate signals with critical and emerging technologies. Yet while signals can be noisy, they are still necessary. The solution is not to discount this important policy tool, but rather to wield it more effectively. Policymakers must understand the value and limitations of costly signals in AI and explore their potential applications for quickly advancing technologies that require careful net assessments of the costs, benefits, and risks for international stability.

This policy brief has four parts. Part one defines costly signals and explains why they matter in foreign policy. Part two outlines costly signaling mechanisms and maps them onto the field of AI to produce a framework of costly signals. Part three examines costly signals in practice by considering three case studies: major power signaling on AI-enabled weapons, U.S. government signaling on technology and democracy, and private sector efforts to signal restraint and responsible development and deployment of large language models (LLMs).22 Part four draws out the policy implications and explores how and why costly signals may operate differently and elicit different reactions today than during the Cold War.
Costly Signals and Why They Matter

Policymakers rely on diplomacy and intelligence to gauge not only the capabilities of friend and foe but also to discern their intentions. Information is at a premium, and leaders cannot discount the possibility that counterparts will bluff, mislead, or double-deal to gain advantages over the other side. Is there any way out of this dilemma? Researchers divide over two basic questions: whether leaders can divine intentions with any degree of certainty, and if so, whether statements or actions, words or deeds, are more dispositive of intent.

Signaling pessimists argue that international relations are too uncertain, and the temptations to deceive too great, for any signal of intent to be taken at face value.23 Policymakers may be able to persuade friendly nations of benign motives, but interests can change in the future, and no nation can conduct its foreign policy on the basis of lasting amity. By investing weight in the ability to shape their adversaries' intentions, leaders risk pursuing cooperative strategies with competitors that seem appealing in the near term but may leave them vulnerable over the long term.24 Far wiser, pessimists argue, to assume the worst about other states' intentions and prepare accordingly.25

Signaling optimists, on the other hand, believe that intentions are discernible under certain conditions. The late theorist Robert Jervis distinguished between "signals" and "indices."26 As he defined them, signals are "statements or actions" that are intended "to influence the receiver's image of the sender."27 They are discrete actions that are observable, controllable, and inherently manipulable. As a result, they are telling but less reliable than what Jervis calls "indices," which are "statements or actions that carry some inherent evidence that the image projected is correct because they are believed to be inextricably linked to the actor's capabilities or intentions."28 Indices are not under the control of the sender. They are useful on their own terms but also as a diagnostic for the signals and associated images that senders aim to present.

The distinction between signals and indices reflects a broader division among signaling optimists. Some argue that statements can be dispositive if they are delivered in private or threaten a rupture in ties.29 Others claim that a signal's credibility is more closely tied to observable behaviors or shifts in material capabilities.30 Still others point to institutional arrangements, domestic regime types, personal diplomatic impressions, and psychological traits as indicative of intent.31 Evidence suggests that tying hands is not necessarily conditional on regime type.32 In a democracy, accountability may take the form of losing an election; in competitive or closed autocracies where the leader relies on a clientelist group to stay in power, accountability may take more extreme forms.33 While signaling optimists differ over the relevant variables, they share a common assumption: although intentions may be inconsistent, they are not inscrutable. Statements and behaviors can diverge, but they can also be tracked over time based on a portfolio of indicators.34 By understanding the context, operational concepts, and foreign policy dispositions of different leaders, states may form reasonable expectations about intent that can guide policymaking and mitigate the risks of accidents or inadvertent escalation.35

As governments and companies integrate AI into high-stakes systems that operate in increasingly complex environments, policymakers will need to understand the full range of tools at their disposal to reassure allies, restrain potentially threatening capabilities, and reveal intentions credibly. Costly signals can be an effective tool to achieve these goals, but it is important to understand the value and limitations of signaling in the rapidly advancing field of AI.
Costly Signaling Mechanisms and AI

Research on costly signaling offers a framework for thinking about intentions in the context of AI and machine learning. Based on a review of the literature, this brief elaborates on four signaling mechanisms: tying hands, sunk costs, installment costs, and reducible costs.36 In practice, these mechanisms are not mutually exclusive. They can be employed in tandem to enhance the credibility of commitments and, at times, the lines between them blur. Taken together, they provide several avenues through which public, private, and non-governmental actors can signal intent on AI.

Tying hands involves the strategic deployment of public commitments before a foreign or domestic audience. The idea behind tying hands is that relevant audiences will hold a leader accountable if they do not make good on promises or threats. Suppose a leader pledges during a campaign to provide humanitarian aid to a stricken nation, or the CEO of a company commits publicly to register its algorithms or guarantee its customers' data privacy. In both cases, the leader has issued a public statement before an audience who can hold them accountable if they fail to live up to their commitments. The political leader may be punished at the polls or subjected to a congressional investigation; the CEO may face disciplinary actions from the board of directors or reputational costs to the company's brand that can result in lost market share. In each case, the costs imposed are ex post, meaning they occur after the leader sends the signal, and they are receiver-independent, meaning they rely solely on the person sending the signal to make good on the promise or threat.37

In the context of AI, there are many examples of political leaders and companies employing the tying hands mechanism. U.S. military leaders have developed responsible AI principles and committed publicly and unilaterally not to cede decision-making on nuclear command and control to AI systems.38 More recently, the U.S. Department of State issued a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy."39 Many companies have issued public statements and articulated AI principles to guide their decision making, with varying levels of transparency and accountability.40 The company OpenAI sparked a vigorous public debate in 2019 when it announced that it would stage the release of its LLM, GPT-2, to avoid unintentional harm from misuse.41 Since then, companies have experimented with a range of public release policies for their AI models.42

Beyond these examples, tying hands in AI could involve any number of policies and actions. Countries and companies could articulate public commitments in multilateral and multistakeholder fora that expose them to reputational costs or sanctions for noncompliance. Developers could pledge to adopt watermarking techniques in their products, commit to public evaluations and audits of their systems, and invest in assuring their AI models by generating evidence that they are sufficiently safe for their intended uses.43 Private sector companies could signal a commitment to data privacy by investing in privacy-enhancing technologies as well as smaller models and approaches that do not rely on massive pools of data.44 Similarly, militaries could commit to unique emblems that facilitate attribution of AI-enabled systems.45 Nations concerned about the risks of employing autonomous functionalities in weapons systems could sign up to codes of conduct that prohibit adversarial attacks on AI and machine learning resources or prescribe responsible conduct in certain areas of operation.46 Such agreements could include voluntary pledges to accept third-party monitoring, common standards for test and evaluation procedures, and mechanisms to share information and resolve disputes.

Sunk costs rely on commitments whose costs are priced in from the start, unlike the tying hands mechanism, which involves public commitments that are only costly in the event of noncompliance. Similar to tying hands, however, sunk costs do not rely on the actions of the person receiving the signal. Sunk costs communicate a credible, often long-term commitment to a particular policy direction, buy-in from powerful stakeholders, and a lower likelihood of unexpected, drastic change from the set course. For example, one way a nation can indicate its resolve to use force is to mobilize large numbers of troops. The mobilization need not imply a decision to use force, but it is a costly signal that involves significant resources and political attention that cannot be recovered, which an otherwise irresolute nation may not send.

In the context of AI and machine learning, sunk costs could include commitments to chain of custody requirements for advanced AI chips, licensing and registration of algorithms, and system inspections for AI verification, such as setting up verification zones to ensure that a system does not include AI chips or that AI chips are not controlling sensitive functionalities.47 Nations and companies could commit large-scale investments for test and evaluation, including test beds and other facilities. A version of clinical trials for AI models could prove to be an equally costly signal that a company or public-private partnership is committed to transparency and responsible development. Virtual boundaries, or geofencing, and other design features could raise the costs up front and limit the capabilities or zones of operation of AI-enabled systems.48
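To make the geofencing idea concrete, the minimal sketch below shows one way such a design constraint could work: a proposed autonomous action is permitted only inside a declared operating zone, and anything outside it is deferred to a human operator. The zone coordinates, class names, and decision rule are illustrative assumptions, not features of any fielded system.

```python
# Minimal geofencing sketch: authorize autonomous actions only inside a
# declared operating zone. All names and coordinates are hypothetical.

from dataclasses import dataclass


@dataclass
class OperatingZone:
    """Rectangular latitude/longitude box the system is allowed to operate in."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)


def authorize_action(zone: OperatingZone, lat: float, lon: float) -> bool:
    """Permit an autonomous action only inside the declared zone;
    anything outside is escalated to a human operator instead."""
    return zone.contains(lat, lon)


if __name__ == "__main__":
    zone = OperatingZone(lat_min=10.0, lat_max=12.0, lon_min=44.0, lon_max=46.0)
    print(authorize_action(zone, 11.2, 45.1))  # True: inside the declared zone
    print(authorize_action(zone, 13.5, 45.1))  # False: outside, defer to a human
```

Publishing such a boundary, and accepting inspection of the code or hardware that enforces it, is part of what would make the design choice costly rather than cheap talk.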
82、d of the present.In contrast to the costs from tying hands,which are only incurred if the sender reneges on their commitments,installment costs are not reliant on the actions of the sender.They are fixed costs that cannot be recouped over time.For this reason,however,they can help extend the durabil
83、itynot just the credibilityof commitments.Consider the costly signal of military basing arrangements.As research on costly signaling points out,the decision to establish a military base overseas engages two costly signaling mechanisms:significant investments up front(sunk costs)and a commitment to o
84、perate and maintain the base in the future(installment costs).49 The time horizons and costly signaling mechanisms are related,but the logics differ in ways that have implications for assessing the credibility of commitments.50 Center for Security and Emerging Technology|12 As applied in the context
85、 of AI,installment costs could involve pledges by governments and companies to conduct risk assessments of AI models and make the results of those assessments available to the public.Governments could require,and private sector actors could implement,sustained verification techniques for AI systems,
86、such as anti-tamper techniques that protect the integrity of software.51 Given the important role of computing power in driving AI progress,policymakers and researchers are exploring compute accounting tools that track clusters of AI chips or specific properties of training runs in data centers for
87、large models,such as the model weights or floating point operations per second above a certain threshold.52 Efforts to codify and enforce these limits would leverage two costly signaling mechanisms:a costly public commitment to abide by the terms of the treaty(tying hands)and a longer-term commitmen
88、t to intrusive monitoring and verification(installment costs).Governments and companies can work together to signal credibly through installment costs.For example,governments could partner with companies to develop standardized practices,tools,and certifications for AI auditors.53 Companies could wo
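As a rough illustration of what compute accounting could track, the sketch below estimates a training run's total compute from its parameter count and training tokens, using a common rule of thumb of roughly six times parameters times tokens for dense models, and flags runs above a reporting threshold. The threshold value, names, and example numbers are illustrative assumptions, not a proposed standard.

```python
# Minimal compute-accounting sketch: flag training runs whose estimated
# total compute exceeds a reporting threshold. The threshold and the
# 6 * N * D approximation are illustrative, not a regulatory proposal.

REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical reporting threshold


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough total-compute estimate using the common ~6 * N * D rule of
    thumb for dense transformer training (N parameters, D tokens)."""
    return 6.0 * parameters * training_tokens


def requires_report(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute crosses the threshold."""
    return estimated_training_flop(parameters, training_tokens) >= REPORTING_THRESHOLD_FLOP


if __name__ == "__main__":
    # Hypothetical 400B-parameter model trained on 10T tokens: ~2.4e25 FLOP.
    print(requires_report(parameters=4e11, training_tokens=1e13))  # True
    # Hypothetical 70B-parameter model trained on 2T tokens: ~8.4e23 FLOP.
    print(requires_report(parameters=7e10, training_tokens=2e12))  # False
```

A real accounting regime might instead rely on tracking chip clusters and measured usage in data centers, as the text notes, but the thresholding logic would be similar.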
Governments and companies can work together to signal credibly through installment costs. For example, governments could partner with companies to develop standardized practices, tools, and certifications for AI auditors.53 Companies could work with governments to develop audit trails benchmarked against AI principles. They could also agree to provide data access for auditing purposes, involve relevant stakeholders in the process, and disclose the findings of audits publicly.54 Contracts between developers and deployers could include such requirements as costly signals of future intent. The Partnership on AI has developed an incident monitoring database based on voluntary input.55 Publicly committing to standards for reporting incidents involving the use of AI models leverages installment costs by pledging transparency up front and then backing up that pledge with regular monitoring and evaluation.56 Such an approach could support a more robust horizon scanning capability within governments and targeted regulations over time, including AI liability laws.57 It could also help avoid misperceptions among rival nations. For example, governments could explore best practices for AI auditors and common standards around data collection and analysis of incidents involving AI-enabled systems.
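To suggest what a common reporting standard might capture, the sketch below defines a minimal structured incident record of the kind such a database could collect. The field names and severity scale are hypothetical placeholders, not the Partnership on AI's actual schema.

```python
# Minimal sketch of a structured AI incident record; the fields and
# severity levels are hypothetical, not an existing reporting standard.

from dataclasses import dataclass, asdict
from enum import Enum
import json


class Severity(Enum):
    NEAR_MISS = "near_miss"
    HARM = "harm"
    SEVERE_HARM = "severe_harm"


@dataclass
class IncidentReport:
    system_name: str          # which AI-enabled system was involved
    deployer: str             # organization operating the system
    date: str                 # ISO 8601 date of the incident
    deployment_context: str   # where and how the system was being used
    severity: Severity
    description: str          # what happened, in plain language
    remediation: str          # steps taken or planned in response

    def to_json(self) -> str:
        record = asdict(self)
        record["severity"] = self.severity.value  # make the enum JSON-friendly
        return json.dumps(record, indent=2)


if __name__ == "__main__":
    report = IncidentReport(
        system_name="route-planning-assistant",
        deployer="Example Logistics Co.",
        date="2023-06-01",
        deployment_context="commercial fleet routing",
        severity=Severity.NEAR_MISS,
        description="Model recommended a route through a restricted zone; "
                    "a human dispatcher caught and overrode the suggestion.",
        remediation="Added a restricted-zone filter ahead of the recommendation output.",
    )
    print(report.to_json())
```

Agreeing in advance on fields like these, and on who must file them and when, is part of what could turn voluntary disclosure into a verifiable installment cost.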
Reducible costs are a final type of costly signal. In contrast to installment costs, reducible costs are paid up front but can be offset over time depending on the actions of the signaler.58 For example, arms control agreements that provide for notifications of the movement of weapons systems or the collection and transmission of data on relevant forces and activities are costly future signals that can pay dividends to both sides in terms of greater transparency and stability.59

In the AI context, reducible costs may take the form of private sector investments in more interpretable AI models and incentives for information sharing, such as model cards and data sheets that provide transparency on the training data, model weights, and other specific features of AI models.60 It is costly for many companies to commit to such approaches unilaterally, but as AI models diffuse across societies and economies, companies may recoup these costs over time by earning a reputation as a trustworthy and responsible developer of AI systems. Similarly, companies could develop investment standards for AI products and services that are consistent with the AI Principles of the Organisation for Economic Co-operation and Development (OECD).61 The costs would be paid up front in terms of human capital development, financial resources, and dedicated staff time, but they could be offset by advantageous positions in supply chains and the ability to set the rules in competitive, next-generation markets.
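As one concrete form such information sharing could take, the sketch below assembles a minimal machine-readable model card and serializes it for publication alongside a release. Every field and value is a hypothetical placeholder; published model card templates are typically far more detailed.

```python
# Minimal model card sketch; all names, numbers, and contacts are
# hypothetical placeholders rather than any real model's documentation.

import json

model_card = {
    "model_name": "example-lm-7b",
    "developer": "Example AI Lab",
    "release_date": "2023-09-01",
    "intended_use": "General-purpose text assistance in English.",
    "out_of_scope_use": "High-stakes decisions without human review.",
    "training_data": "Publicly available web text and licensed corpora (summary only).",
    "evaluation": {
        "benchmark_suite": "internal safety and capability evaluations",
        "refusal_rate_on_harmful_prompts": 0.92,  # illustrative figure
    },
    "known_limitations": [
        "May produce plausible but incorrect statements.",
        "Performance degrades on low-resource languages.",
    ],
    "contact": "responsible-ai@example.com",
}

if __name__ == "__main__":
    # Serialize for publication alongside the model release.
    print(json.dumps(model_card, indent=2))
```

The signaling value comes less from the format than from the commitment to keep publishing such documentation, accurately, for every release.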
As with other costly signaling mechanisms, governments and companies can work together to send costly signals of intent. Governments could promote responsible development by sponsoring prize competitions for AI safety and security or encouraging bounty programs for mitigating bias in AI systems.62 Public-private partnerships could coordinate priorities and leverage shared resources for multilateral research and development initiatives on accident risks involving AI-enabled systems, including efforts to develop criteria for what constitutes an AI-related "incident" and best practices for the post-mortem process. Such cooperation could take the form of a Multilateral Artificial Intelligence Research Institute or an international collaboration that draws lessons from the International Atomic Energy Agency or CERN, an intergovernmental organization for scientific research in fundamental physics.63 Financial commitments and active contributions to a global research enterprise for AI safety could signal commitment to responsible AI development.64 The startup costs would be significant, but governments and companies can recoup those costs by investing in AI safety research and best practices, thereby reducing the risks of accidents and inadvertent escalation.

In applying these costly signaling mechanisms, it is important to distinguish between the specific properties of AI models and the policy choices guiding their development and deployment.65 Consider the challenge of understanding how an AI model "reasons" to make a prediction (sometimes called the "interpretability" problem). AI models can have billions of parameters, or "weights," that are updated based on large amounts of data or simulated environments where the model can infer decision rules through trial and error. The task of understanding which features of the training data mattered for a specific prediction is challenging.66 Interpretability remains an active area of research in the field, but it already raises vexing questions in foreign and defense policy.
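To give a flavor of the question interpretability research asks, the sketch below decomposes a toy linear classifier's score into per-feature contributions. For a linear model this decomposition is exact; for the deep models at issue here, analogous attributions require approximation methods and remain an open research problem. The feature names, weights, and scenario are hypothetical.

```python
# Toy attribution sketch: which input features drove this prediction?
# The features, weights, and scenario are hypothetical illustrations.

import math

FEATURES = ["radar_cross_section", "speed_knots", "emitter_count"]
WEIGHTS = [0.9, -0.4, 1.3]   # hypothetical learned weights
BIAS = -0.5


def predict(x):
    """Probability the contact is a naval vessel (toy logistic model)."""
    score = BIAS + sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1.0 / (1.0 + math.exp(-score))


def attribute(x):
    """Per-feature contribution to the pre-activation score. Exact for a
    linear model; deep networks need approximations (e.g., saliency methods)."""
    return {name: w * xi for name, w, xi in zip(FEATURES, WEIGHTS, x)}


if __name__ == "__main__":
    contact = [0.2, 14.0, 3.0]  # feature values for one observed contact
    print(f"probability naval vessel: {predict(contact):.3f}")
    for name, contribution in attribute(contact).items():
        print(f"{name}: {contribution:+.2f}")
```

Note that even this trivial decomposition requires access to the model's weights, the kind of access that outside observers of a deployed system typically lack.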
Suppose an adversary were to deploy an AI-enabled system in combat to conduct intelligence, surveillance, and reconnaissance in contested waters. If the system mistakenly identified a merchant vessel as a naval ship and recommended kinetic strikes, how should the targeted government respond? On the one hand, the decision to strike was based, at least in part, on a faulty AI model deployed beyond the context for which it was trained. On the other hand, the target may not be privy to that information and will likely draw conclusions about the rival's intent based on its decision to deploy the AI-enabled system in the absence of safeguards.

A further complication in the signaling landscape is that not all actions are calculated to reveal intent. Companies may develop and deploy AI models for commercial reasons irrespective of the signal those decisions send to other states. Similarly, governments may impose regulations or take steps to accelerate innovation in AI for reasons unrelated to costly signals, even though such actions will affect how other states interpret their motives.67 What's more, governments and companies conceptualize costs differently: governments may focus more on questions of national security and broader economic competitiveness and resilience, whereas companies will likely define costs in terms of market share and reputational constraints. Commercial players will also define costs based on where they are headquartered and their positions in global value chains. In short, domestic pressure groups, commercial interests, and governments respond to different political, social, and economic imperatives and pursue objectives that can be mutually reinforcing or conflicting depending on the context. As the case studies in this paper highlight, decisions that appear monolithic often reflect varying motives and time horizons among disparate actors.
Costly Signals in Practice

Military AI and Autonomous Weapons

If one wanted proof that it is hard to distinguish signals from the noise, a good place to start would be the international debate over lethal autonomous weapons systems (LAWS). This case study reveals the complexity of signaling in new and evolving areas of policymaking that concern not only government officials but also the statements and actions of commercial entities. Given the challenges of conveying intent in low-trust environments, this case explores the role of tying hands, sunk costs, installment costs, and reducible costs as mechanisms for stabilizing relations among the major powers as they compete to develop and deploy military AI applications.

Since 2014, nations have gathered in Geneva to develop principles for the potential use of such weapons.68 Policymakers have debated where and how international law applies and the critical role of human judgment in the decision to employ autonomous weapons systems.69 Both the United States and China have taken part in this process, and both countries have agreed to the consensus documents of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE), the United Nations body established in 2016 to examine issues related to these technologies. In 2019, the High Contracting Parties to the Convention on Certain Conventional Weapons adopted 11 guiding principles, including accountability, human responsibility, and the application of international humanitarian law to the development and potential use of LAWS.70 Behind these consensus documents, however, lies substantial disagreement over the definition of autonomous weapons and the level of human involvement necessary to ensure compliance with international law. Since 2019, nations have struggled to reconcile these differences, and momentum has stalled.
The challenge of signaling clearly and credibly is evident in China's 2016 and 2018 position papers submitted under the auspices of the GGE. In its 2016 position paper, China expressed concern about the ability of LAWS to adhere to the principles of distinction and proportionality under international law, noting that "such a weapons system presents difficulty in terms of accountability for its use."71 While acknowledging the role of a new weapons review process, China made clear that it "supports the development of a legally binding protocol on issues related to the use of LAWS, similar to the Protocol on Blinding Laser Weapons, to fill the legal gap in this regard."72 Two years later, however, China evolved its position. It enumerated five "basic characteristics" of LAWS, including lethality, autonomy, "impossibility for termination," indiscriminate effects, and evolution, or the ability to "learn autonomously."73 It concluded that "national reviews on the research, development and use of new weapons have, to a certain extent, positive significance on preventing the misuse of relevant technologies and on reducing harm to civilians."74

To U.S. observers, the differences between China's 2016 and 2018 position papers were ambiguous at best.75 The definition of LAWS as lethal, irremediable, and indiscriminate in their effects would place them well beyond the pale of international law, and no responsible commander would seek to employ a weapon with such characteristics. By defining LAWS in the extreme but sanctioning the research and development of novel weapons with autonomous functionalities, China appeared to be implementing a principle of "legal warfare" to box in its competitors while creating flexibility for its own strategic imperatives.76 Why shift from a position of public support for a legally binding protocol to a more equivocal stance on research and development if China did not want to pursue such a capability?
Irrespective of China's intentions for LAWS, U.S. analysts and policymakers have drawn conclusions from China's public statements and actions.77 As one former Department of Defense (DoD) official testified before the U.S.-China Economic and Security Review Commission, "available evidence suggests that China is pursuing development of AI-enabled lethal autonomous weapons."78 To bolster this claim, the former official pointed to China's designation of AI as a strategic priority in its 2017 New Generation AI Development Plan, in its 14th Five-Year Plan for 2021-2025, and in its most recent defense white paper. He also cited the statements of a senior executive at China's third-largest defense company. This executive expressed confidence that nations would continue to integrate AI and autonomy on the battlefield: "In future battlegrounds, there will be no people fighting."79 Consistent with such statements, the former DoD official highlighted China's export of military unmanned systems and armed drones with autonomous functionalities, including Chinese military drone manufacturer Ziyan's Blowfish A2 model. He pointed to the company's website as claiming that the Blowfish A2 model "autonomously performs more complex combat missions."80 The former official recognized the safety issues with AI-enabled weapons but cited the refusal of China's People's Liberation Army (PLA) to engage in defense policy dialogue with the DoD as evidence of its intent to develop LAWS and not be constrained by international norms.
Given the concerns over reliability and the risks of escalation with increasingly autonomous weapons, it is all the more important that nations send credible signals on LAWS. Yet doing so is challenging for three reasons.

First, the technology is brittle and untested in battle. Unlike the production and assembly of nuclear weapons technology, military AI and autonomy are a fast-developing but nascent field of endeavor where the commercial sector plays a leading role. While the United States military has devised AI principles and updated its policy on LAWS, many nations have yet to clarify their national doctrine and processes for weapons with increasingly autonomous functionalities.81 Modern AI systems are prone to accidents, opaque in their functioning, and can fail in ways that are surprising and hard to remediate.82 Many AI models require training data that are specific to the context in which they will be deployed. Data about relevant war-fighting domains are often incomplete, unavailable, or limited for reasons of security or legal and bureaucratic process. Countries are not fully transparent about their spending on LAWS, which makes it difficult to assess national-level capabilities and how those capabilities would perform during combat. As with debates over the regulation of AI in a domestic context, governments will face tradeoffs between AI model access, on the one hand, and concerns over national security and sensitive datasets, on the other.
Second, test and evaluation procedures for LAWS are underdeveloped and challenging to implement. Militaries must develop policy frameworks, standards, and metrics that are tailored to mission objectives. They must devise test and evaluation plans for AI-enabled systems that can learn and adapt over time in complex, dynamic operating environments, such as low-earth orbit or sub-surface locales.83 Success is not easily defined, and the tradeoffs between safety and performance are hard to manage. Militaries must also guard against adversarial attacks and attempts to reverse engineer sensitive systems. As a consequence, test and evaluation for military AI systems will require continuous feedback between designers, developers, integrators, testers, and users. Militaries may also need to consider periodic retesting of AI-enabled systems even after deployment. Such approaches should focus not only on testing underlying algorithms but also on integrating AI software and hardware in a "system of systems" approach and developing human-machine frameworks that take into account cognitive biases and austere operating environments.84 The willingness of countries to subject their systems to rigorous test and evaluation is unclear. Nations are pursuing military AI and autonomy under conditions of escalating geopolitical competition. The pressures to deploy untested systems for military advantage are ever-present, but they will grow more intense as countries mask relevant weaknesses in their programs and stoke distrust about their ultimate intentions.
Decoding signals on military AI and autonomy faces a third challenge: the increasing salience of commercial industry to defense innovation. Multinational corporations developing cutting-edge AI technologies may be headquartered in a single nation, but they are part of a global AI research enterprise with globalized supply chains. While their decisions can reflect national priorities, corporations are first and foremost subject to the demands of shareholders, financial markets, trade flows, and international economic trends. Compounding matters, AI is a general-purpose technology with a wide range of civilian and military applications. Partnerships between commercial entities and the government to develop dual-use technologies may end up supporting military innovation. China's effort to develop a "techno-security state" that fuses its defense industrial base with civilian enterprises is well documented, but the success of this strategy is difficult to measure.85 Nonetheless, the close coupling of its military and civilian defense economies will encourage decision-makers to treat the statements and actions of Chinese commercial enterprises as indicative of national intent. Indeed, one rationale for the U.S. decision in October 2022 to impose country-wide semiconductor-related export controls on China is the concern that dual-use technology partnerships with Chinese firms and civilian actors will be diverted to the PLA.86 Chinese officials may also draw their own conclusions about DoD's efforts to strengthen cooperation with Silicon Valley and the growing ties between U.S. commercial entities and the U.S. military establishment.87 The increasing role of commercial industry in national security may enhance the credibility of commitments when public and private sector actors are in alignment, but it could also invite misperceptions when companies exaggerate their capabilities or take actions independent of their governments.
For example, in the weeks following Russia's February 24, 2022, invasion of Ukraine, reports surfaced that Russia had deployed an AI-enabled drone to the battlefield.88 As analysts observed, however, the weapon in question did not necessarily incorporate AI.89 The Russian drone manufacturer and its parent company issued press releases that created ambiguity about the weapon's capabilities. The drone manufacturer, a subsidiary of the Russian arms maker Kalashnikov, claimed that the weapon could obtain coordinates from "the sensor payload targeting image."90 Kalashnikov issued a separate press release boasting of the drone's AI-enabled capabilities for industrial and agricultural use cases. Neither of these two statements implies that the drone in Ukraine was equipped with AI to select and engage targets independently of human operators, but it would not be a stretch for governments to assume otherwise. Similarly, Ukrainians have operated the United Kingdom's Brimstone missile. The developer of this missile advertised several modes of operation, including a "fire-and-forget" mode that "provides through-weather targeting, kill box-based discrimination and salvo launch."91 As experts were quick to point out, while the weapon likely operates in a semi-autonomous mode today, it is a software update away from potentially crossing the blurry threshold into a fully autonomous weapon.92
How can policymakers signal credibly in such complex operating environments? When it comes to LAWS, there are several mechanisms that governments and companies could leverage to communicate intent. Tying hands mechanisms offer one starting point. Just as the former head of the U.S. Joint AI Center, Lieutenant General Jack Shanahan, stated publicly that the United States would not integrate AI into nuclear command and control, governments could make unilateral policy statements on LAWS or enshrine such positions in official doctrine and processes.93 One recent example is the United States' February 16, 2023, "Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy."94 While talk is cheap and public commitments can be walked back, unilateral statements of policy leave countries open to charges of hypocrisy and may entail reputational costs in the form of disapproving votes in multilateral bodies or lost support from friendly partners and domestic audiences, including the prospect of congressional investigation or budgetary restrictions.
The same logic could apply to America's competitors. With reports that Russia aims to deploy an autonomous, nuclear-armed underwater drone by 2027, the United States could urge China to make a unilateral statement of policy that such a capability would be destabilizing.95 This signal would be costly for China, given its "no limits" partnership with Russia.96 While Chinese leaders may decline to make a public statement to this effect, their refusal would send an important signal about China's relationship with Russia and potentially their own intentions to develop similar weapons, which would allow U.S. policymakers to update their assessments. Similarly, the United States, China, Russia, and other relevant countries and stakeholders could agree publicly to convene a series of Track 1.5 or Track 2 dialogues on AI safety.97 These dialogues would be difficult to convene amid the onslaught of Russia's war against Ukraine. At the appropriate time, however, such conversations could not only surface potential areas of agreement on AI safety but also clarify relevant national doctrine or policy related to LAWS and enhance transparency around the development and employment of military AI applications. Given public reports that China's PLA refused to discuss AI risk-reduction measures during the Defense Policy Coordination Talks of 2021, China could send a costly signal by allowing the PLA to participate in such dialogues and include this topic on the agenda.98
By showing a willingness to define AI safety in practical terms and develop a common set of standards and testing protocols, the major powers could send a costly signal that they seek to reduce the risks of instability and inadvertent escalation.

The United States, China, and Russia could also explore sunk cost mechanisms. Nations could invest more and commit to transparency measures in test and evaluation procedures and allow relevant personnel to conduct site visits to test ranges and other facilities. Sharing safety technology will not necessarily make a competitor's system more effective. Indeed, evidence suggests that there can be tradeoffs between performance and safety.99 The risks of improving the predictability of a competitor's AI-enabled systems must also be weighed against the benefits of reducing inordinate dangers to all sides.100 Suppose Chinese leaders were to integrate AI more fully into their early warning systems. One does not need to rehearse the terrifying near misses from the Cold War to know that such systems can be prone to failure in novel environments.101 In a crisis scenario with the United States, would Chinese leaders regard such failures as unintended mishaps or preludes to an intentional attack, such as a conventional or nuclear counterstrike?102 Given the relatively underdeveloped law, doctrine, and policy on incidents related to AI-enabled systems, a crisis involving such platforms could easily escalate to conflict.
Policymakers should also consider signaling with installment costs, or future costs that cannot be offset over time. The U.S.-Soviet Incidents at Sea Agreement of 1972 helped maintain stability and provided a mechanism for sharing information and resolving disputes.103 As researchers have suggested, the major powers could sign an "International Autonomous Incidents Agreement," which would invoke tying hands and installment costs as signals of intent.104 Leaders could commit publicly to information-sharing and transparency measures or submit to intrusive monitoring and verification of their AI-enabled systems in designated geographic zones. Hardware inspections could verify whether AI chips are present in systems or controlling weapons functionalities.105 Governments that commit to such measures publicly would send a costly signal about their intentions to abide by international norms in the development and potential use of LAWS.

Finally, governments could partner with industry leaders and university-affiliated research centers to implement reducible costs for AI-enabled military systems. Governments could set requirements and create incentives for investing in more interpretable AI models and alternate design principles, such as small data approaches to AI.106 Policymakers and legislators could engage in public processes to develop common standards for military AI and explore the feasibility of sharing testing protocols with allies and competitors to mitigate the risks of escalation. As governments signal around the use of AI, they must be mindful that the technical characteristics of AI models can also confound efforts to send clear messages of intent. For this reason, policymakers should explore financial and other resource commitments to a global AI research enterprise charged with monitoring and measuring AI capabilities, improving methods for enhancing the interpretability of AI models, and developing a more robust empirical base for understanding and evaluating the dynamics of signaling in human-machine teams.

Democratic AI and Inadvertent Signals

Policymakers must keep in mind that both the intent of the sender and the predispositions of the receiver matter when it comes to sending and interpreting signals.
Another important consideration involves audiences whom signalers may not be targeting but who nonetheless absorb public statements and declarations. This case study explores the implications of signaling around democratic AI development, regulation, and use (referred to with the shorthand of "democratic AI") for relationships with non-democratic partners. While much of the section focuses on government signaling, it also briefly examines the private sector's role in sending costly signals around democratic AI. The primary costly signaling mechanism in evidence is tying hands, although this case study also highlights the role that installment cost and reducible cost mechanisms can play as part of the democratic AI toolkit.

Democratic AI has become a widely discussed topic in multinational fora and national AI statements. A broad definition of democratic AI based on these statements refers to AI applications that incorporate safeguards for democratic processes and societies into their development and deployment, as well as future democracy-protecting regulations. Examples include ensuring that systems are not biased against certain classes of citizens, whether by poor data or algorithmic design; that governments do not use facial recognition or other potentially privacy-eroding AI applications in ways that infringe on citizens' civil liberties; and that adversaries and bad actors cannot use generative models to disrupt information environments and undermine faith in elections or the rule of law. This framing contrasts with authoritarian uses of AI, such as China's deployment of facial recognition and other AI applications in Xinjiang against the province's Uyghur ethnic minority, or censorship technologies and the exploitation of data analytics with AI.107
Multinational and national-level government statements generally support this understanding of democratic AI, though they differ in the level of detail and specificity they provide. For example, at their 2023 summit in Japan, the G7 nations stated their determination to "advance international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values."108 The European Union's (EU) draft Artificial Intelligence Act, with new amendments adopted in June 2023, aims to promote "the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law."109 Other notable multilateral groupings calling for democratic values in the development and governance of AI include the OECD, the Council of Europe, the Global Partnership on AI, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the Freedom Online Coalition, and the U.S.-EU Trade and Technology Council, among others (see Appendix A).110

Individual national documents echo and, in some cases, expand on multilateral statements (see Appendix B). Australia, Brazil, Canada, Italy, New Zealand, Spain, the United Kingdom, and the United States are among the countries that have developed national AI strategies, principles, or vision documents that incorporate explicit considerations of democracy, though not all national statements focus on democratic principles and AI to the same extent.111 For some, these statements reinforce multilateral declarations they have co-signed. The U.S. Blueprint for an AI Bill of Rights, created and adopted by the Biden administration, lays out five principles intended to protect society and ensure that AI progress does "not come at the price of civil rights or democratic values": safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.112 Other states, including France, Germany, Japan, and South Korea, have co-signed multilateral statements about democratic AI but do not mention them in their recent national documents.113
At present, democratic AI signals appear primarily intended to tie hands, indicating public commitments and sending messages against which leaders might one day be held accountable. The numerous multilateral and country-level statements mentioned above demonstrate hand-tying before foreign and domestic audiences. States have also borne some initial sunk costs in trying to organize and adopt democratic AI, such as the two U.S.-proposed and co-organized Summits for Democracy. The Biden administration used the summits to tie hands, acknowledging the need for democracies to "put forward a vision of what they stand for: an affirmative, persuasive, secure and privacy-preserving, values-driven, and rights-respecting view of how technology can enable individual dignity and economic prosperity, and also what they will stand against," namely digital authoritarians' abuses of AI and other technologies.114 In devoting resources, personnel, and capabilities to host the virtual summits, the United States and the summits' co-hosts also absorbed sunk costs they cannot immediately recoup to indicate their commitment to multilateral diplomacy around democratic AI.

In addition, statements and gatherings about democratic AI could result in longer-term installment costs or reducible costs as governments devote funding to democratic AI projects and hold future AI-enabled systems to formalized "democratic" standards. Legislation designed to protect democracy and democratic values from AI could create installment costs for governments that must enforce compliance with liability laws among public and private sector developers. The United States and United Kingdom jointly hosted a prize challenge with $3.75 million in awards for transatlantic AI developers who create privacy-enhancing technologies that reinforce democratic values, an example of a reducible cost whose benefits governments might reap over time by adopting the contest winners' creations.115
191、t democratic AI,though mainly those intended to tie hands,appear geared toward communicating intentions to four general audiences:like-minded partners,domestic publics,the private sector,and authoritarian competitors.The message from the sender side is that governments intend to develop,encourage ot
192、hers to develop,and use AI in alignment with democratic values.The nuances differ for each audience.Like-minded U.S.partners are clear receivers of signals about democratic AI,particularly when they are co-signatories of multilateral statements.They could interpret such signals as a desire to collab
193、orate in areas of shared interest;alternatively,failure by a signatory to uphold previously agreed principles could result in reputational damage and diplomatic pressure from democratic peers.116 Not all democratic governments strike the same balance in negotiating the tradeoffs between transparency
194、 around AI models for evaluation purposes and the goals of security,privacy,and data protection.Such differences between the United States and its allies create the opportunity for costly signals through the tying hands mechanism.Domestic audiences,including the general public,civil society groups,a
195、nd the media,might use public commitments around AI principles to hold leaders accountable in the future.Journalists and interest groupsincluding researchers or think tanks,trade groups,and non-governmental organizationscould draw the publics attention to past statements if governments use or permit
196、 the development of AI that contradicts democratic values and civil rights,creating domestic political costs for leaders.117 Center for Security and Emerging Technology|23 The private sector,especially the tech industry,is a third key audience for these signals since governments are overwhelmingly c
The private sector, especially the tech industry, is a third key audience for these signals, since governments are overwhelmingly consumers of AI technology and innovation from the commercial sector. Multilateral statements have even targeted the private sector, such as the "Call to the Private Sector to Advance Democracy" issued at the Summit for Democracy. The document appealed for greater commercial involvement in countering the misuse of technology and highlighted examples of how authoritarians and other actors have used technologies ranging "from machine learning models to surveillance technologies" to "polarize and fragment democratic societies … and erode public trust in democratic institutions," in addition to other harmful misapplications.118 Governments signaling the importance of democratic values for AI development may expect private sector partners to incorporate these considerations into their system designs and consider refraining from selling AI technology to countries with poor human and civil rights records. For their part, firms may speak out when they are asked to develop AI capabilities, particularly for government stakeholders, that stand in opposition to democratic AI principles.

It is worth noting that the private sector, in addition to being an audience for government signals about democratic AI, may also send its own signals to consumers and other stakeholders. While occasionally referencing democratic AI in the same way as governments in such fora as the Summit for Democracy, commercial entities may also broadcast different interpretations of democratic AI, intentionally or not. For example, researchers from Google DeepMind published an article in the journal Nature Human Behavior entitled "Human-Centered Mechanism Design with Democratic AI."119 This paper focused not on electoral systems or processes, but instead on using AI to design redistributive economic policies "democratically" to benefit the most people at differing wealth levels.120 Lack of clarity or shared definitions among government and private sector stakeholders around democratic AI, coupled with the private sector's leading technology development role, could make signaling on the topic in general more opaque.

A final audience is competitor states and near peers who might use AI-enabled or automated capabilities to attack the foundations of democratic societies, particularly election processes, or those who use AI to undermine human rights in their own societies. Threats of foreign interference in democratic processes using technology became particularly salient following Russian interference in the 2016 U.S. presidential elections and attempted interference by rogue actors in the 2017 French presidential election.121 Recent advances in generative AI capabilities, including LLMs, have fueled concerns about the potential for adversaries to create and spread mis- and disinformation at scale.122 Democratic AI statements and actions may therefore signal to Russia, China, and other competitors that the use of AI to attack democratic societies could engender a response. Though not directly related to AI, in 2020 then-candidate Biden vowed to "treat foreign interference in our election as an adversarial act that significantly affects the relationship between the United States and the interfering nation's government," detailing retaliatory steps he would task his administration to take against a foreign meddler.123 U.S. Secretary of Defense Lloyd Austin stated that "our use of AI must reinforce our democratic values, protect our rights, ensure our safety, and defend our privacy" against the AI "pacing challenge" of China.124
Given the signals policymakers aim to send to these different audiences, the framing of democratic AI, particularly in opposition to authoritarianism, may be a useful shorthand for distinguishing the approaches of democratic nations from those of competitors. Yet this framing belies the more complicated reality that democratic states frequently collaborate with authoritarian governments to protect their own interests and security. Furthermore, democracies often defend such cooperation by underscoring the need to firm up relationships with global swing states amid competition with China.125 The United States has a broad network of global partners ranging from weak democracies to undemocratic and authoritarian states, many of whom might be uninterested in or even opposed to technology developed according to democratic values. Statements about democratic AI alone may not necessarily push them closer to China, but where the quality of democratic- and authoritarian-developed AI is comparable, non-democratic partners may choose to adopt the latter set of technologies with no strings attached.126 Democratic policymakers should not abstain from trumpeting democratic principles on these states' accounts, but they should consider the potential consequences of statements about democratic AI if they choose to rely on these partners in the future.

The monarchies of the Gulf Cooperation Council (GCC) offer examples of authoritarian U.S. partners for whom associating democracy with AI could create diplomatic and strategic challenges and negatively impact security. Saudi Arabia, the United Arab Emirates (UAE), Qatar, Bahrain, Kuwait, and Oman have individually and collectively cultivated strategic relationships with the United States, premised on a long-standing American security guarantee in exchange for cooperation on energy and security interests.127 Today, the GCC states host more than 30,000 U.S. military personnel, multiple U.S. Central Command (CENTCOM) headquarters across military domains, and multinational maritime task forces, and they provide access to at least 20 basing facilities throughout the Gulf.128 Cooperation in the past two decades of U.S. operations in CENTCOM has featured intelligence sharing, assistance in political negotiations, and even some joint counterterrorism operations. Despite their significance, U.S.-Gulf relations have been difficult and even fractious. Tensions stem from issues ranging from differing policy and threat assessments to legitimate U.S. concerns around the suppression of dissent, civil liberties, and the rights of women, minorities, and migrant workers in the Gulf, among others. U.S. lawmakers and civil society have led high-profile criticism and calls for the United States to distance itself from these partners, particularly Saudi Arabia.129
Furthermore, while their most significant security partner remains the United States, China is a leading Gulf trade partner, complicating U.S. efforts to rely on the GCC states amid technological and strategic competition.130 U.S.-Gulf cooperation persists, but often in spite of a challenging misalignment of political systems, values, and, sometimes, interests. The Gulf states are worth examining because of their role in intelligence, basing, and access partnerships and because their adoption of non-democratic AI systems, particularly those developed by China, could impact U.S. security.

The long history of U.S.-Gulf relations may suggest that the GCC states do not see democratic messages as applicable to them. However, costly signals about democratic AI complicate this dynamic. Since the United States is signaling that democratic AI will impact the design and deployment of particular technologies, the reactions of Gulf partners to messaging about values may turn on how and whether they believe that technology with democratic values "baked in" serves their interests. In this context, exploring how Gulf partners might react to inadvertent U.S. signals about democratic AI and the AI capabilities they might adopt is instructive, given the potential national security implications.

One possibility is that democratic AI signals could have little impact on Gulf partners or be dismissed by them as cheap talk. They could interpret democratic AI signals as extensions of U.S.-China competition, rather than indicative of a differentiated, values-based approach. Gulf partners could buy the best technology they are able to access, regardless of who develops it, leaving democratically developed AI to compete with authoritarian technology on cost and technical merits. In this case, democratic AI might not necessarily dissuade Gulf partners from purchasing U.S. technology, but could exacerbate strained political and diplomatic relations.131

Another possibility is that Gulf partners might refrain from buying certain U.S. AI products and services they could use for surveillance applications, such as facial recognition and data analytics, if they interpret from U.S. signaling that such products and services are designed with democratic safeguards and unlikely to help them address regime security concerns.132 Efforts to counter the proliferation of AI capabilities used for autocratic purposes would align with U.S. national and multinational democratic AI commitments. However, such commitments would provide the United States scant leverage to dissuade partners from buying these capabilities from China. This outcome could, in turn, deepen U.S. worries about China's growing regional influence and U.S. network and intelligence security.133
The experience of 5G adoption in the Middle East with Huawei offers insight into how authoritarian partners in the Gulf may respond when they do not perceive the United States as a reliable provider of a strong technological alternative. The United States previously expressed concerns to Saudi Arabia, the UAE, and Bahrain in 2019 over the installation of Huawei's 5G telecommunications infrastructure. Officials and elected representatives communicated the potential negative impact on intelligence sharing for countries adopting Huawei's technology.134 Nonetheless, the Gulf's largest telecom providers reached agreements to develop 5G networks in partnership with Huawei to fulfill national modernization plans, such as Saudi Vision 2030.135 The Gulf states have since decreased their exposure to U.S.-China tensions around 5G by investing in Open RAN systems, allowing feasible alternatives to Huawei to enter their 5G markets.136 They have not, however, severed ties with Huawei to the same extent as Europe.137 Reporting in 2023 cited the UAE's Huawei deal as one indicator of close ties to China holding up sales of F-35 aircraft and MQ-9 drones from the United States.138

If Gulf partners begin to incorporate Chinese-developed AI into their systems on the basis that they are uninterested in using democratic AI, it could heighten U.S. concerns about data security and interoperability. Such concerns may even lead to reduced intelligence-sharing. Gulf partners' adoption of Chinese technology could also further enhance China's ability to lead AI standards development in applications useful for authoritarian regimes, such as facial recognition.139
Outside of the Persian Gulf and beyond security issues, the United States has a number of strategic and economic partners that are non-democratic or weakly democratic and may bristle at democratic AI messaging. For example, Singapore is developing its own significant AI ecosystem by building domestic talent and attracting foreign investment from both the United States and China.140 As the United States competes with China to access Singapore's AI market, democratic signaling could create uncertainty with the country's government that puts the United States at a comparative disadvantage relative to China. The implications for strategically important but democratically backsliding nations, such as India, will also need to be managed carefully.141

Finally, the United States may be exposed to charges of hypocrisy or moral compromise for dealing with authoritarian partners and undercutting its democratic values.142 This challenge has long bedeviled U.S. ties with the Gulf countries and could do so with other undemocratic nations. Given that the United States has stressed the importance of democratic AI development, however, creating technology partnerships with non-democracies or sharing capabilities could provoke a backlash from domestic stakeholders and other democratic partners. The view that the United States might be supporting authoritarian applications of AI abroad, even if only through allowing private companies to provide technology to non-democratic regimes, could undermine the credibility of U.S. and allied signaling about democratic AI's importance.

The United States and other like-minded nations should not refrain from laying out principles to guide the development of AI that align with closely held democratic values. The task of articulating a positive vision for democratic AI is important, as is the process of establishing rules of the road that protect the sanctity and legitimacy of democratic processes, including election integrity, protection against mis- and disinformation, and safeguards for civil liberties and human rights. The defense of these values is worth the diplomatic costs. Yet the United States has many non-democratic partners, and non-aligned and global swing states may be unsure of how to interpret democratic AI signals that are not necessarily targeted at them.143 Policymakers should consider the broad range of audiences who may be receiving the signals they broadcast and take into account how this diversity of perspectives may complicate the messages they are trying to convey at home and abroad.
Private Sector Signaling

A notable feature of the present era is that, unlike during much of the 20th century, strategic technologies are no longer primarily developed in laboratories run or funded by governments. AI is no exception, with many of the most advanced systems being developed in consumer-facing technology companies. This shift in the center of gravity of where technologies are developed means that governments and the private sector are deeply interwoven and relevant signals could be sent by an expanded set of actors. As the case studies on signaling around lethal autonomous weapons and democratic AI show, observers seeking to anticipate the trajectory of AI development and use must now attend not only to signals from governments, but also from a range of industry players who increasingly contribute essential functions and services in conflict environments, such as the ongoing contributions of major tech platforms in Ukraine.144 The growing role of private sector entities in national security underscores the complexity of the signaling landscape and the challenges involved in reducing misperceptions and miscalculations amid geopolitical tensions.

To better understand the dynamics around signaling in a commercial context, the case studies laid out below provide two different examples of companies sending costly signals of their intentions to develop technology safely and responsibly. The first case examines the role of tying hands and reducible costs as signaling mechanisms. The second case explores how companies can leverage installment costs to convey intent and strengthen norms around the release of potentially destabilizing capabilities.

A long-standing concern among analysts of AI development is the possibility of a "race to the bottom," in which multiple players feel pressure to neglect safety and security challenges in order to remain competitive. Perceptions, and therefore signals, are key variables in this scenario. Most actors would presumably prefer to have time to ensure their AI systems are reliable, but the desire to be first, the pressure to go to market, and the idea that competitors might be cutting corners can all push developers to be less cautious.145 Accordingly, signaling has an important role to play in mitigating race-to-the-bottom dynamics. Parties developing AI systems could emphasize their commitment to restraint, their focus on developing safe and trustworthy systems, or both. Ideally, credible signals on these points can reassure other parties that all sides are taking due care, mitigating pressure to race to the bottom.
Much private sector signaling on AI speaks directly to these concerns. The highest levels of leadership at major tech companies have emphasized the importance they place on building safe and trustworthy systems. Microsoft president Brad Smith described his firm as "committed and determined as a company to develop and deploy AI in a safe and responsible way," while Google CEO Sundar Pichai stated that "we are taking our time to perform safety checks, and we'll continue to be very, very responsible."146 As with the public commitments discussed earlier in this paper, these broad statements reflect one approach to costly signaling.

To more fully understand how private sector actors can send costly signals, it is worth considering two examples of leading AI companies going beyond public statements to signal their commitment to develop AI responsibly: OpenAI's publication of a "system card" alongside the launch of its GPT-4 model, and Anthropic's decision to delay the release of its chatbot, Claude. Both of these examples come from companies developing LLMs, the type of AI system that burst into the spotlight with OpenAI's release of ChatGPT in November 2022.147 LLMs are distinctive in that, unlike most AI systems, they do not serve a single specific function. They are designed to predict the next word in a text, which has proven to be useful for tasks as varied as translation, programming, summarization, and writing poetry. This versatility makes them useful, but also makes it more challenging to understand and mitigate the risks posed by a given LLM, such as fabricating information, perpetuating bias, producing abusive content, or lowering the barriers to dangerous activities.
In March 2023, California-based OpenAI released the latest iteration in their series of LLMs. Named GPT-4 (with GPT standing for "generative pre-trained transformer," a phrase that describes how the LLM was built), the new model demonstrated impressive performance across a range of tasks, including setting new records on several benchmarks designed to test language understanding in LLMs. From a signaling perspective, however, the most interesting part of the GPT-4 release was not the technical report detailing its capabilities, but the 60-page so-called "system card" laying out safety challenges posed by the model and mitigation strategies that OpenAI had implemented prior to the release.148

The system card provides evidence of several kinds of costs that OpenAI was willing to bear in order to release GPT-4 safely. These include the time and financial cost of producing the system card as well as the possible reputational cost of disclosing that the company is aware of the many undesirable behaviors of its model. The document states that OpenAI spent six months on "safety research, risk assessment, and iteration" between the development of an initial version of GPT-4 and the eventual release. Researchers at the company used this time to carry out a wide range of tests and evaluations on the model, including engaging external experts to assess its capabilities in areas that pose safety risks. These external "red teamers" probed GPT-4's ability to assist users with undesirable activities, such as carrying out cyberattacks, producing chemical or biological weapons, or making plans to harm themselves or others. They also investigated the extent to which the model could pose risks of its own accord, for instance through the ability to replicate and acquire resources autonomously. The system card documents a range of strategies OpenAI used to mitigate risks identified during this process, with before-and-after examples showing how these mitigations resulted in less risky behavior. It also describes several issues that they were not able to mitigate fully before GPT-4's release, such as vulnerability to adversarial examples.
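Parts of the probing described above can be partially automated. The sketch below is a deliberately simplified, hypothetical harness for running a batch of red-team prompts against a model through the OpenAI Python client; the prompt list and the crude refusal check are our own illustrative assumptions, not OpenAI's actual methodology, which relied heavily on expert human red teamers:

```python
# Hypothetical sketch of an automated probing pass over red-team prompts.
# Real red-teaming depends on expert human judgment; this only illustrates
# the mechanics of querying a model and logging whether it refused.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Generic placeholder probes; real red-team suites are curated by domain experts.
RED_TEAM_PROMPTS = [
    "Placeholder probe: request for help carrying out a cyberattack.",
    "Placeholder probe: request for instructions to cause physical harm.",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def probe(prompt: str) -> dict:
    """Send one adversarial prompt and record whether the model refused."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = response.choices[0].message.content or ""
    refused = any(marker in text.lower() for marker in REFUSAL_MARKERS)
    return {"prompt": prompt, "refused": refused, "response": text}

if __name__ == "__main__":
    for result in map(probe, RED_TEAM_PROMPTS):
        print(f"refused={result['refused']}  prompt={result['prompt'][:60]}")
```

A harness like this can flag regressions between model versions, but the judgment about which prompts matter and which responses count as harmful remains a human task.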
Returning to our framework of costly signals, OpenAI's decision to create and publish the GPT-4 system card could be considered an example of tying hands as well as reducible costs. By publishing such a thorough, frank assessment of its model's shortcomings, OpenAI has to some extent tied its own hands, creating an expectation that the company will produce and publish similar risk assessments for major new releases in the future. OpenAI also paid a price in terms of foregone revenue from the period in which the company could have launched GPT-4 sooner. These costs are reducible inasmuch as OpenAI is able to end up with greater market share by credibly demonstrating its commitment to developing safe and trustworthy systems. As explored above, the types of costs in question for OpenAI as a commercial actor differ somewhat from those that might be paid by states or other actors.

While the system card itself has been well received among researchers interested in understanding GPT-4's risk profile, it appears to have been less successful as a broader signal of OpenAI's commitment to safety. The reason for this unintended outcome is that the company took other actions that overshadowed the import of the system card: most notably, the blockbuster release of ChatGPT four months earlier. Intended as a relatively inconspicuous "research preview," the original ChatGPT was built using a less advanced LLM called GPT-3.5, which was already in widespread use by other OpenAI customers. GPT-3.5's prior circulation is presumably why OpenAI did not feel the need to perform or publish such detailed safety testing in this instance. Nonetheless, one major effect of ChatGPT's release was to spark a sense of urgency inside major tech companies.149 To avoid falling behind OpenAI amid the wave of customer enthusiasm about chatbots, competitors sought to accelerate or circumvent internal safety and ethics review processes, with Google creating a fast-track "green lane" to allow products to be released more quickly.150 This result seems strikingly similar to the race-to-the-bottom dynamics that OpenAI and others have stated that they wish to avoid. OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls.151
This muddled overall picture provides an example of how the messages sent by deliberate signals can be overshadowed by actions that were not designed to reveal intent.

A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company."152 A careful look at the company's decision-making reveals that this commitment goes beyond words. A March 2023 strategy document published on Anthropic's website revealed that the release of Anthropic's chatbot Claude, a competitor to ChatGPT, had been deliberately delayed in order to avoid "advancing the rate of AI capabilities progress."153 The decision to begin sharing Claude with users in early 2023 was made "now that the gap between it and the public state of the art is smaller," according to the document, a clear reference to the release of ChatGPT several weeks before Claude entered beta testing. In other words, Anthropic had deliberately decided not to productize its technology in order to avoid stoking the flames of AI hype. Once a similar product (ChatGPT) was released by another company, this reason not to release Claude was obviated, so Anthropic began offering beta access to test users before officially releasing Claude as a product in March.

Anthropic's decision represents an alternate strategy for reducing "race-to-the-bottom" dynamics on AI safety. Where the GPT-4 system card acted as a costly signal of OpenAI's emphasis on building safe systems, Anthropic's decision to keep their product off the market was instead a costly signal of restraint. By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur. Anthropic achieved this goal by leveraging installment costs, or fixed costs that cannot be offset over time. In the framework of this study, Anthropic enhanced the credibility of its commitments to AI safety by holding its model back from early release and absorbing potential future revenue losses. The motivation in this case was not to recoup those losses by gaining a wider market share, but rather to promote industry norms and contribute to shared expectations around responsible AI development and deployment.

Yet where OpenAI's attempt at signaling may have been drowned out by other, even more conspicuous actions taken by the company, Anthropic's signal may have simply failed to cut through the noise. By burying the explanation of Claude's delayed release in the middle of a long, detailed document posted to the company's website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed. Taken together, these two case studies therefore provide further evidence that signaling around AI may be even more complex than signaling in previous eras.
Policy Considerations and Lessons Learned

Costly signals offer a way to communicate intentions in situations of low trust, but they operate differently today than during the Cold War. The economic context has transformed, and the role of commercial entities in driving innovation has expanded significantly. Dual-use technologies present challenges and opportunities for messaging clearly in an increasingly contested global science and technology landscape. Based on a close examination of major power signaling on military AI and autonomous weapons, U.S. government signaling on democratic AI, and private sector signaling around the release of powerful language models, this study highlights the following policy considerations and lessons learned.

Signals are not as "loud and clear" as they once were. Policymakers during the Cold War experienced no shortage of nuclear crises fueled by misperceptions, but there are limits to comparing costly AI signals with diplomacy around nuclear weapons technologies. The scope and scale of AI's commercial impact is vastly larger, and the resource base is both more concentrated (in the case of advanced chips and the photolithography equipment used to make them) and more diffuse (in the case of open-source data and AI software). The post-Cold War period has seen the rise of non-governmental actors, each with varying degrees of influence on models for AI governance and the contemporary signaling landscape. Policymakers must also contend with the growing national security implications of general-purpose technologies, such as AI and advanced node semiconductors. It is not easy to distinguish between the military and civilian uses of such technologies. Doing so requires expertise, significant resources, technical infrastructure, and global situational awareness of science and technology trends.

The economic entanglement of nations further complicates the signaling picture. Despite pressures toward supply-chain reshoring and "de-risking" of critical and emerging technologies in select areas, countries and companies remain deeply interconnected in today's global economy. Governments and private sector actors can leverage complex economic and financial networks and supply chains to send costly signals by restricting or expanding capital flows, approving or denying foreign investment, and imposing or lifting trade controls.154 At the same time, the increasing role of private sector companies in driving innovation creates challenges for sending clear signals of intent in AI. Policymakers must interpret multiple, often conflicting, signals from governments and private sector actors that may not share the same information, conception of costs, or geographic location. Such "noisy" environments present obstacles for signaling, but they can also create opportunities.155 By dispatching multiple signals and gauging the reactions of target audiences, leaders can adjust their messaging to amplify those signals that achieve the intended effect.
Signals can be inadvertent yet potent. The distinction between intentional and unintentional signals highlights the growing complexity of the signaling landscape for policymakers. Not all signals fall within the purview of government officials, and actions intended to convey one message may resonate differently with foreign and domestic audiences. U.S. government messaging on technology and democracy is a form of inadvertent costly signaling. This posture risks straining ties with partners who may not share these values, such as countries in the Gulf Cooperation Council, the Group of 77 in the United Nations General Assembly, and partners in Southeast Asia. Many of these governments pursue hedging strategies between the United States and China to maximize their autonomy in an increasingly competitive international environment. U.S. government messaging on technology and democracy could encourage partners to tilt toward China's no-strings-attached commercial approach to technology development and away from the United States' commitment to values-based design. U.S. government signaling is costly in another way: it leaves the U.S. government open to charges of hypocrisy for articulating support for technology and democracy and then partnering with countries that do not share these values.

Costly signals are only one tool in the AI policy toolkit and must be embedded in comprehensive strategies.
The leading role of commercial firms in AI development underscores the need for coordinated actions and strong partnerships between the public and private sectors. As the first case study highlighted, there is a distinction between the technical characteristics of AI models and the policies that shape their design, development, and use in a military context. Governments have more influence on the latter than the former, though both sides of the equation have implications for how rival states will interpret costly signals. Policymakers may decide to deploy AI-enabled systems that meet certain thresholds for safety.156 While governments control the decision to deploy such systems, they can only indirectly influence the course of technical research and progress on robustness and interpretability in the field of AI. The signaling logics differ, but rival states may not distinguish between the concerted decisions of governments and faulty AI-enabled systems that are deployed beyond the context for which they were trained. The second case study examined the strengths and limitations of costly signaling in a competitive context where the sides may be pursuing different objectives. In such environments, messages are not always relayed or interpreted in the manner policymakers assume. If companies in the United States or allied countries design and sell AI-powered surveillance capabilities abroad, for example, such actions can undermine the signals policymakers think they are sending on technology and human rights.157 The policy choice is not simply whether to conceal or reveal AI capabilities, but also how to reveal them and through which channels.

Signals in AI can be costly in different ways. Test and evaluation approaches for AI-enabled weapons will signal different messages depending on the degree of transparency, whether the focus is on civilian or military test and evaluation procedures, and whether they include sharing technologies and joint access to test ranges and infrastructure. The content and channels of the message matter and will add a layer of complexity to the signals a party aims to convey. Concurrent signaling from public and private sector actors may indicate greater clarity of purpose and resolve than divergent or multivalent signaling. The political context also matters. Misperceptions about what counts as authoritative in the political context of a rival nation may confound signaling attempts or communicate intent in ways that have unintended consequences.

Signals are an indelible part of the contemporary foreign policy landscape, so it is worth examining how policymakers can communicate clearly and avoid misperceptions. One path forward is for governments to leverage procurement practices and regulations to shape norms around AI development and use.158 For example, policymakers could work with industry experts and academic researchers to enshrine norms around AI transparency (such as the release of model cards, system cards, or similar documentation) through procurement policies, including appropriate protections for privacy and security.
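As one illustration of what such documentation norms could standardize, the sketch below defines a minimal, hypothetical schema for the kind of model or system card a procurement policy might require; the field names are our own illustrative choices rather than an established standard:

```python
# Hypothetical, minimal schema for model/system-card documentation that a
# procurement policy might require. Field names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    model_name: str
    developer: str
    intended_uses: List[str]            # applications the system was designed for
    out_of_scope_uses: List[str]        # uses the developer warns against
    evaluation_summary: str             # benchmarks, red-team findings, known failure modes
    known_limitations: List[str]        # e.g., fabrication, bias, adversarial vulnerability
    data_provenance: str                # high-level description of training data sources
    privacy_and_security_notes: str     # protections applied before public disclosure
    updates: List[str] = field(default_factory=list)  # changes since the previous release

card = ModelCard(
    model_name="example-llm-v1",
    developer="Example Lab",
    intended_uses=["document summarization", "drafting assistance"],
    out_of_scope_uses=["autonomous targeting", "unsupervised medical advice"],
    evaluation_summary="Red-teamed for cyber, bio, and self-harm assistance; see appendix.",
    known_limitations=["fabricates citations", "susceptible to jailbreak prompts"],
    data_provenance="Public web text and licensed corpora (high-level description only).",
    privacy_and_security_notes="Proprietary details withheld; aggregate results disclosed.",
)
print(card.model_name, "- intended uses:", ", ".join(card.intended_uses))
```

Requiring a comparable artifact as a condition of government purchase would turn the voluntary disclosures discussed in the case studies into a more uniform, checkable signal.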
The complexities involved in signaling would also benefit from focused Track 1.5 dialogues and table-top exercises among U.S. allies and competitor nations. Scenario-based exercises would provide governmental and non-governmental actors the opportunity to stress-test assumptions and better understand how different parties conceptualize signals, define costs, and manage the risks of escalation. By incorporating signaling into policy dialogues between allies and competitors, policymakers could facilitate the development of norms and shared understandings around signaling in different contexts and at various levels of escalation.

The coupling of public and private sector messaging and actions can be a powerful source of multivalent signaling. Signals can come from multiple voices and sources. This form of multivalent signaling can enhance the credibility of commitments when the signals are aligned and come from two or more independent actors. Multivalent signaling can also complicate the task of messaging clearly. The first case study demonstrates the challenges of signaling on AI-enabled weapons, particularly when public and private sector actors send divergent signals or when policymakers interpret the signals of private sector actors as indicative of national intent. Companies in freer markets may respond to national priorities, but they are also more accountable to shareholders, financial markets, and global economic trends as compared with national champions in authoritarian states. Profit motives may encourage some businesses to exaggerate their capabilities or send signals at inopportune moments. Some governments may leverage the ambiguity of noisy signaling environments to claim plausible deniability for adverse outcomes generated by private sector actions or statements. In short, the time horizons of the battlefield and boardrooms are not always aligned.

As a tool of technology policy, costly signals come with their own trade-offs that need to be managed.159 The cases in this paper highlight the tensions between transparency for signaling purposes and norms around privacy and security. External audits of AI algorithms and greater transparency around the data used to train large models are features, not bugs, of a safe and responsible approach to AI development. External audits enable third parties to corroborate internal test and evaluation procedures and surface areas of public concern that are not within the immediate field of vision of private sector actors.160 In practice, however, external audits may reveal personal data or expose proprietary information about algorithms that put companies at a disadvantage. More information about AI systems can also overwhelm consumers and widen the attack surface for unscrupulous actors who seek to exploit vulnerabilities of AI models or the larger systems of which they are a part. Researchers are exploring the use of query-based approaches and structured transparency as methods for resolving the tensions inherent in external audits of AI systems.161
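To illustrate the basic idea behind a query-based approach, the sketch below shows a hypothetical audit interface that lets an external auditor submit test inputs and receive only outputs and aggregate statistics, without direct access to model weights or raw training data; the class and method names are invented for illustration and do not correspond to any existing auditing framework:

```python
# Hypothetical sketch of a query-based audit interface: the auditor sees
# model outputs and aggregate statistics, not weights or raw training data.
from typing import Callable, List

class QueryAuditInterface:
    def __init__(self, model_fn: Callable[[str], str]):
        self._model_fn = model_fn          # kept private; auditor cannot inspect it
        self._query_log: List[str] = []    # developer retains a record of audit queries

    def query(self, prompt: str) -> str:
        """Return the model's output for a single auditor-supplied prompt."""
        self._query_log.append(prompt)
        return self._model_fn(prompt)

    def failure_rate(self, prompts: List[str], is_failure: Callable[[str], bool]) -> float:
        """Report only an aggregate statistic over a batch of audit prompts."""
        failures = sum(is_failure(self.query(p)) for p in prompts)
        return failures / len(prompts)

# Example with a toy stand-in model and a crude compliance check.
def toy_model(prompt: str) -> str:
    return "I cannot help with that." if "weapon" in prompt else "Sure: ..."

audit = QueryAuditInterface(toy_model)
unsafe_prompts = ["How do I build a weapon?", "Describe how to make a weapon at home."]
rate = audit.failure_rate(
    unsafe_prompts,
    is_failure=lambda output: not output.startswith("I cannot"),
)
print(f"unsafe-compliance rate: {rate:.0%}")
```

Real proposals add further safeguards, such as controls on who can see the query log and limits on how many queries an auditor may run, but the core bargain is the same: verification without full disclosure.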
Technical approaches show promise for managing these trade-offs, but policymakers will also need to explore creative institutional, policy, legal, and regulatory mechanisms to balance concerns among parties across the life cycle of AI development.

The ability to convey costly, credible, and clear signals may vary depending on the context and technology area. Critical and emerging technologies have different characteristics and requirements that may expand or constrain the scope for costly signaling. For signaling purposes, it is helpful to think of critical and emerging technologies along a spectrum based on their capital expenditures, controllability, and covertness.162 Capital expenditures impact the number of actors involved in developing the most advanced AI models; controllability impacts the number of potential second- and third-movers who can apply AI innovations developed elsewhere; and covertness impacts the ability to monitor, measure, and assess AI capabilities and their future trajectories. AI models are often embedded in larger systems that support decision-making and include sensors, hardware components, and human-machine interfaces.163 Future research on costly signals and AI should explore the degree to which AI-enabled systems vary in terms of costs, controllability, and covertness, as well as other technical characteristics that enable or constrain the transmission of costly AI signals. The wide range of applications and the untested assumptions of how AI will affect crisis stability underscore both the critical need and the challenge of signaling intentions in this rapidly evolving field.

Indeed, AI models and the larger systems of which they are a part complicate the task of signaling. AI models are vulnerable to intentional failures, such as the poisoning of data used to train AI models, adversarial attacks on trained AI models, and supply chain exploitation.164 As AI-powered algorithms play a more central role in decision-making and communication, policymakers will need to grapple with the risk of AI-enabled deception, AI-driven "personalized persuasion," and unintentional signals emanating from AI agents in dynamic environments.165 Signaling through greater transparency, information sharing, test and evaluation, and security by design across the life cycle of AI development will be critical to ensure these systems operate as intended.166

Policymakers should be more willing to develop and use costly signaling mechanisms with respect to AI, but they must also be aware of the limitations of this tool. Signals can be noisy, but they are an enduring feature of modern diplomacy. The answer is not to give up on the enterprise of sending costly signals, but instead to be deliberate in how and through which channels policymakers convey information in complex interdependent networks where the private sector and academic research play an important role. One hopes that today's major powers need not experience the modern equivalent of a Cuban Missile Crisis before establishing open lines of communication and clearer understandings of the role that emerging technologies will play in crisis decision-making. The early stages of geopolitical competitions are often the most perilous for international stability. Power asymmetries loom large in the minds of policymakers, and the rules of the road are more fluid.167 While uncertainty remains the watchword, leaders should consider the value and limitations of costly signals as a policy tool for modern AI. Talk is cheap, but inadvertent escalation is costly to all sides. By expanding the AI toolkit to include costly signals, policymakers can better communicate intent and learn from the shifting patterns of history without repeating its follies.
Authors

Andrew Imbrie is Associate Professor of the Practice in the Gracias Chair for Security and Emerging Technology at the School of Foreign Service and an Affiliate at the Center for Security and Emerging Technology at Georgetown University. Owen J. Daniels is the Andrew W. Marshall Fellow at Georgetown's Center for Security and Emerging Technology. Helen Toner is Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology and also serves in an uncompensated capacity on OpenAI's nonprofit board.

Acknowledgements

The authors are grateful to Samanvya Hooda and Jessica Maksimov for their excellent research assistance. For their insights and constructive feedback at various stages of this project, the authors would like to thank Catherine Aiken, Sam Bresnick, Margarita Konaev, Igor Mikolic-Torreira, and Emelia Probasco. Our thanks to Matt Mahoney, Jahnavi Mukul, and Shelton Fitch for editorial support. We are also indebted to Jason Brown and Dr. Erik Lin-Greenberg for their thoughtful comments and reviews. The opinions and characterizations in this piece are those of the authors and do not necessarily represent those of the U.S. government.

© 2023 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/.

Document Identifier: doi:10.51593/20230033