REPORT | NOV 2023

Toward international cooperation on foundational AI models: An expanded role for trade agreements and international economic policy

Joshua P. Meltzer
November 2023

Acknowledgement

The Brookings Institution is a nonprofit organization based in Washington, D.C. Our mission is to conduct in-depth, nonpartisan research to improve policy and governance at local, national, and global levels. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views or policies of the Institution, its management, its other scholars, or the funders acknowledged below. The author extends sincere appreciation to Cameron Kerry and Andy Wycoff for their insightful feedback on the paper. Brookings gratefully acknowledges the financial support of IBM and google.org for its Global Economy and Development program. Meta, PwC, McKinsey & Company, and Goldman Sachs, all mentioned in the publication, are also donors to the Institution. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Executive summary

Foundational AI presents new opportunities for social and economic flourishing, but also risks of harm

The development of artificial intelligence (AI) presents significant opportunities for
economic and social flourishing. The release of foundational models such as the large language model (LLM) ChatGPT4 in early 2023 captured the world's attention, heralding a transformation in our approach to work, communication, scientific research, and diplomacy. According to Goldman Sachs, LLMs could raise global GDP by 7 percent and lift productivity growth by 1.5 percent over 10 years. McKinsey found that generative AI such as ChatGPT4 could add $2.6-$4.4 trillion each year across more than 60 use cases, spanning customer operations, marketing and sales, software engineering, and R&D.1 AI is also impacting international trade in various ways, and LLMs bolster this trend.

The upsides of AI are significant, and achieving them will require developing responsible and trustworthy AI. At the same time, it is critical to address the potential risk of harm not only from conventional AI but also from foundational AI models, which in many cases can either magnify existing AI risks or introduce new ones. For example, LLMs are trained on data that encodes existing social norms, with all their biases and discrimination. LLMs create risks of information hazards by providing information that is true and can be used to create harm to others, such as how to build a bomb or commit fraud.2 A related challenge is preventing LLMs from revealing personal information about an individual that is a risk to privacy. In other cases, LLMs will increase existing risks of harm, such as from misinformation, which is already a problem with online platforms, or increase the incidence and effectiveness of crime. LLMs may also introduce new risks, such as risks of exclusion where LLMs are unavailable in some languages.

1 McKinsey, The economic potential of generative AI: The next productivity frontier. https:/
2 N. Bostrom et al., Information Hazards: A Typology of Potential Harms from Knowledge, Review of Contemporary Philosophy, 2011

International cooperation on AI is already happening in trade agreements and international economic forums

Many governments are either regulating AI or planning to do so, and the pace of regulation has increased since the release of ChatGPT4. However, regulating AI to maximize the upsides and minimize the risks of harm without stifling innovation will be challenging, particularly for a rapidly evolving technology that is still in its relative infancy. Making AI work for economies and societies will require getting AI governance right. Deeper and more extensive forms of international cooperation can support domestic AI governance efforts in a number of ways. This includes facilitating the exchange of AI governance experiences, which can inform approaches to domestic AI governance; addressing externalities and extraterritorial impacts of domestic AI governance, which can otherwise stifle innovation and reduce opportunities for uptake and use of AI; and finding ways to broaden access globally to the computing power and data needed to develop and train AI models.

Free trade agreements (FTAs) and, more recently, digital economy agreements (DEAs) already include commitments that increase access to AI and bolster its governance. These include commitments to cross-border data flows, avoiding data localization requirements, and not requiring access to source code as a condition of market access, all subject to exception
provisions that give governments the policy space to also pursue other legitimate regulatory goals such as consumer protection and guarding privacy. Some FTAs and DEAs, such as the New Zealand-U.K. FTA and the Digital Economy Partnership Agreement, include AI-specific commitments focused on developing cooperation and alignment, including in areas such as AI standards and mutual recognition agreements.

With AI being a focus of discussions, international economic forums such as the G7, the U.S.-EU Trade and Technology Council (TTC), and the Organization for Economic Cooperation and Development (OECD), as well as the Forum for Cooperation on Artificial Intelligence (FCAI), jointly led by Brookings and the Centre for European Policy Studies as a track-1.5 dialogue among government, industry, and civil society, are important for developing international cooperation on AI. Initiatives to establish international AI standards in global standards development organizations (SDOs) such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) are also pivotal in developing international cooperation on AI.

But more is needed: where new trade commitments can support AI governance

These developments in FTAs, DEAs, and international economic forums, while an important foundation, need to be developed further in order to fully address the opportunities and risks from foundational AI models such as LLMs. International economic policy for foundational AI models can use commitments in FTAs and DEAs and outcomes from international economic forums such as the G7 and TTC as mutually reinforcing opportunities for developing international cooperation on AI governance. This can happen as FTAs and DEAs elevate the output from AI-focused forums and standard-setting bodies into
trade commitments and develop new commitments as well. FCAI is another forum to explore cutting-edge AI issues.

The following table outlines key opportunities and risks from foundational AI models and how an ambitious trade policy can further develop new commitments that would help expand the opportunities of foundational AI models globally and support efforts to address AI risks, including by building on developments in forums such as the G7 and in global SDOs.

Table 1. New commitments in FTAs, DEAs, and for discussion in international economic forums

Enable AI opportunity
- Increase access to AI compute and data: Reduce barriers to hardware, data, and access to cloud computing.
- Increase access to AI products and services: Reduce barriers to AI services and AI-enabled goods.
- Support opportunities to develop and use AI globally: Commit to a dialogue and work program that identifies opportunities to cooperate on expanding AI access and use in other countries.

Manage AI risks
- Discrimination, exclusion, and toxicity: Agree to implement appropriate privacy regulations. Commit to internationally recognized AI ethical principles. Develop government procurement commitments to drive responsible and trustworthy AI. Agree to develop mutual recognition agreements related to conformity assessment and AI audits. Include the G7 Code of Conduct for AI in trade agreements. Commit to cooperate in developing international AI standards. Include a TBT-style commitment to base domestic regulation on international AI standards. Agree to share best practices around data governance.
- Security and privacy: Develop government procurement commitments to drive responsible and trustworthy AI. Include the G7 Code of Conduct for AI in trade agreements. Agree to implement appropriate privacy regulations. Commit to cooperate in developing international AI standards. Develop a TBT-style commitment to base domestic regulation on international AI standards. Include as a trade commitment the OECD principles on government access to personal data. Agree to share best practices around AI governance.
- Misinformation: Identify opportunities to expand cooperation on misinformation/disinformation. Include the G7 Code of Conduct for AI in trade agreements.
- Explainable and interpretable results: Commit to cooperate on the development of international AI standards. Develop a TBT-style commitment to base domestic regulation on international AI standards. Agree to develop mutual recognition agreements related to conformity assessment and AI audits. Cooperate on the development of technical solutions. Agree to share best practices around AI governance.
- Measuring AI risk and accountability: Develop an SPS-style commitment to base AI regulation on a risk assessment. Commit to cooperate in the development of international AI standards. Develop a TBT-style commitment to base domestic regulation on international AI standards. Include the NIST AI RMF as a trade commitment. Agree to share experience on AI governance. Include the G7 Code of Conduct for AI in trade agreements.
- Copyright infringement: Agree to share developments in domestic laws and evolving approaches to foundational AI and copyright.

Introduction

The development of artificial intelligence (AI) presents significant opportunities for economic and social flourishing. The release of ChatGPT4 in early 2023 captured the world's attention, promising to change how we work, communicate, do science, and conduct diplomacy. ChatGPT4 is a large language model (LLM), which itself is a foundational AI system, one that is increasingly generalizable in that it can work across contexts and learn as it scales. Other LLMs include Google's PaLM and Meta's LLaMA, to name a few. Foundational AI demonstrates the new opportunities as well as the risks from AI, underscoring the need for international cooperation.

This paper takes the view that the upsides of AI are significant and that achieving them will require developing responsible and trustworthy AI. Many governments are either regulating AI or planning to do so with these goals in mind, and the pace of AI policy development and regulation has increased since the release of ChatGPT4.3 Yet regulating AI to maximize the upsides and minimize the risks of harm without stifling innovation will be challenging, particularly for a technology that remains in its relative infancy and is fast-moving. At the same time, making AI work for economies and societies will require getting AI governance right. Deeper and more extensive forms of international cooperation can help by sharing the various and different experiences with regulating AI; developing ways to address the spillovers and extraterritorial impacts of domestic AI governance; and finding ways to expand access to the data and the AI compute (the computational resources required for AI, such as GPUs/TPUs and memory) needed to build and run foundational AI models consistent with the goal of responsible and trustworthy AI.

Trade agreements and, more recently, digital economy agreements (DEAs) already include commitments that increase access to AI and support AI governance. At the same time, AI is a focus of discussions in international economic forums such as the G7 and the U.S.-EU Trade and Technology Council (TTC). This paper focuses on how trade agreements, DEAs, and key international economic forums such as the G7 and the TTC can build effective forms of international cooperation on AI governance. Part 1 explains what a foundational AI model is, with a focus on ChatGPT4. This part also provides an overview of the impact of AI on economic opportunity and international trade, as well as its geostrategic implications, and outlines where foundational AI introduces new risks or heightens existing AI risks. Part 2 makes the case for why international cooperation on AI is needed to realize the opportunities of AI and build effective AI governance. This part describes how trade agreements, DEAs, and steps taken in international economic forums are already working to build international cooperation in AI. Part 3 explores how trade policy needs to be further developed to respond to the opportunities and risks from foundational AI models. Part 4 concludes.

3 OECD AI Policy Observatory https:/oecd.ai/en/wonk/national-policies-2

Part 1: The opportunities and risks from foundational AI models

What are foundational AI models?

This paper focuses on foundational AI models that include large language models (LLMs) such as ChatGPT4. An LLM processes and understands natural language data such as written text, spoken words, or other forms of language input. ChatGPT4 can process visual inputs, using machine learning techniques such as deep neural networks to analyze and generate human-like language based on the patterns and structures it has learned from this data.4 LLMs are often referred to as generative AI, as these models generate new content based on prompts.5

Foundational models such as LLMs have several key features. First is the capacity for transfer learning, where knowledge gained from training on one task, such as object recognition, can be applied to another task.6 This means that foundational models are increasingly generalizable in that they can be used across a wide range of applications.7 The second key element is that scaling the AI compute and training data results in significant performance improvements.8 To put this in perspective, the computation used to train AI has scaled by a factor of 10 every year for the last 10 years. This means that each next generation of LLM will be even more powerful and impactful. Third, ever-larger datasets, exponential increases in AI compute, and growth in the number of parameters of foundational AI models have led to new capabilities emerging as the system scales.9 In other words, foundational AI models can develop new capabilities to perform tasks for which the AI system was not originally programmed. For example, ChatGPT4 seems to have developed in-context learning, enabling the LLM to adapt to downstream tasks given a description of that task.10 Indeed, the capacity of ChatGPT4 is still being understood. Some argue that theory-of-mind (TOM), the
ability to impute unobservable mental states such as desires and beliefs to others, emerged in ChatGPT3 as a byproduct of being trained to achieve other goals where TOM would be a benefit.11 When it comes to ChatGPT4, some claim that elements of artificial general intelligence (AGI) may also have emerged.12

4 Definition generated by ChatGPT.
5 ChatGPT4 Technical Report, 27 March 2023
6 R. Bommasani, D. A. Hudson, E. Adeli, et al., "On the Opportunities and Risks of Foundation Models", https:/arxiv.org/pdf/2108.07258.pdf
7 R. Bommasani, D. A. Hudson, E. Adeli, et al., "On the Opportunities and Risks of Foundation Models", https:/arxiv.org/pdf/2108.07258.pdf
8 Jacob Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL 2019
9 Jason Wei et al., "Emergent Abilities of Large Language Models", Transactions on Machine Learning Research, 26 October 2022
10 R. Bommasani, D. A. Hudson, E. Adeli, et al., "On the Opportunities and Risks of Foundation Models", https:/arxiv.org/pdf/2108.07258.pdf

The social, scientific, and economic opportunities from foundational AI

Foundational AI models expand on many of the economic and social opportunities of AI. The impact of LLMs is potentially transformative given the central role of language in human culture and as the basis on which we understand the world. As Yuval Noah Harari put it recently with respect to GPT4, "In the beginning was the word. Language is the operating system of human culture. AI's new mastery of language means it can now hack and manipulate the operating system of civilization."13 For instance, foundational AI models can write and compose music and generate images. According to an op-ed by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, LLMs like ChatGPT4 "will redefine human knowledge, accelerate change in the fabric of our reality, and
reorganize politics and society."14 These observations underscore the potentially wide-ranging social and economic implications of LLMs.

On the economic front, LLMs could lead to rapid increases in productivity and economic growth. According to PwC's Global Artificial Intelligence Study, with accelerated development and uptake of AI, global GDP could be 14 percent, or almost $16 trillion, higher by 2030. According to Goldman Sachs, LLMs could raise global GDP by 7 percent and lift productivity growth by 1.5 percent over 10 years.15 McKinsey found that generative AI such as ChatGPT4 could add $2.6-$4.4 trillion annually across the 63 use cases it analyzed, with 75 percent of that value derived from customer operations, marketing and sales, software engineering, and R&D.16 Currently, large companies, and large tech companies specifically, have the resources (the computational capacity, data, and talent) to
build and train foundational AI models. However, access to foundational AI is often available via application programming interfaces (APIs), which allow further training and fine-tuning of the model for specific use cases.

11 Michael Kosinski, "Theory of Mind May Have Spontaneously Emerged in Large Language Models", Stanford University.
12 Sebastien Bubeck et al., "Sparks of Artificial General Intelligence: Early Experiments with GPT-4", arXiv:2303.12712v5, 13 April 2023.
13 Yuval Harari, Tristan Harris, and Aza Raskin, "You can have the blue pill or the red pill, and we're out of blue pills", New York Times Guest Essay, March 23, 2023
14 Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, "ChatGPT Heralds an Intellectual Revolution", WSJ Opinion, Feb 24, 2023
15 Generative AI Could Raise Global GDP by 7%. https:/
16 McKinsey, The economic potential of generative AI: The next productivity frontier https:/

LLMs and foundational AI more broadly are likely to transform companies, change business models, and deeply impact jobs and the future of work.17 This will include expanded use of robotics, product R&D, and opportunities for sales. Foundational AI should lead to more efficient manufacturing and supply chains, as well as productivity gains across services, as foundational AI systems assist in information retrieval and support services delivery across education, health care, and professional services.

Foundational AI models can also drive important advances in human well-being and flourishing. For instance, AlphaFold, developed by DeepMind, has predicted the structure of about 350,000 proteins (about half of all known human proteins) and is now using AI to predict how these proteins work together.18 This was previously an experimental process that took years and hundreds of thousands of dollars per protein. Understanding the 3D structure of proteins will lead to new targeted drugs.19 Or to take another example, it recently took researchers at IBM and the University of Oxford a matter of weeks to train a generative AI with general information about proteins to identify potential antivirals for COVID-19, synthesize and manufacture them, and test them against the virus.20 More broadly, foundational AI stands to rewrite how science is conducted, which includes using LLMs to help predict discoveries in physics or biology, formulating better hypotheses for testing, and conducting faster, cheaper, and larger experiments.21

The AI opportunity for international trade

AI is also impacting international trade in various ways, and LLMs bolster this trend.22 Where AI improves worker and firm productivity, this should lead to more trade as firms become more competitive.23 Indeed, it is already the case that firms most adept at using AI are more productive than non-AI-adopting firms.24 AI can help firms analyze data to better forecast demand in other countries.

17 Webb, M. (2020). The impact of artificial intelligence on the labor market. Working Paper, Stanford University. Accessed 18 September 2023. Available from URL: https:/www.michaelwebb.co/webb_ai.pdf
18 John Jumper, et al., "Highly accurate protein structure prediction with AlphaFold", Nature, 15 July 2021
19 Science's 2021 Breakthrough of the Year: AI brings protein structures to all | Science | AAAS
20 Kenna Hughes-Castleberry, "AI can suggest Covid-19 antivirals from protein sequence alone." https:/
21 Eric Schmidt, "This is How AI will transform the way science gets done," MIT Technology Review, July 5, 2023
22 Joshua P. Meltzer, The Impact of AI on International Trade. https:/www.brookings.edu/articles/the-impact-of-artificial-intelligence-on-international-trade/.
23 Marc J. Melitz and Stephen J. Redding, "Heterogeneous Firms and Trade", 2014, Handbook of International Economics, 4th Ed. 1-54 (Elsevier); Martin N. Baily, Erik Brynjolfsson, Anton Korinek, "Machines of mind: The case for an AI-powered productivity boom", Brookings, May 10, 2023 https:/www.brookings.edu/articles/machines-of-mind-the-case-for-an-ai-powered-productivity-boom/
24 Dirk Czarnitzki, Gaston P. Fernandez and Christian Rammer, Artificial Intelligence and firm-level productivity, J. of Econ. Behavior & Org. Vol 211, July 2023, 188-205

AI can also help optimize production and logistics, inform decisions about pricing, inventory levels, and
market trends. AI can also aid in identifying new markets for products and services and in developing new products and services tailored to the needs of specific markets. These capabilities will allow businesses to expand their reach and grow their sales. AI will also be used to optimize the efficiency of global value chains. For example, AI provides the opportunity to increase automation and improve inventory management. Meanwhile, better analysis of overseas demand should allow for more efficient supply chains.

Foundational AI can also reduce trade costs that are a barrier to services trade. For instance, AI-enabled translation services can reduce the costs of trade in services in different languages. As a result of eBay's machine translation service, eBay-based exports to Spanish-speaking Latin America increased by 17.5 percent (value increased by 13.1 percent).25 PaLM 2, Google's LLM, has multilingual proficiency and translation capabilities in over 100 languages.26

AI will also create opportunities to use e-commerce platforms for international trade. For small businesses in particular, digital platforms have provided unprecedented opportunities to go global. In the U.S., for instance, 97 percent of small businesses on eBay export, compared to just 4 percent of offline peers. AI will expand the utility that platforms provide for small businesses to engage in international trade. This will include better analysis of customer data, including browsing history, purchase behavior, and preferences, that can drive personalized product recommendations.27 AI can also enable more efficient and targeted trade finance.28 AI can analyze vast amounts of data, including financial records, market trends, and customer behavior, to assess creditworthiness, detect fraud, and manage trade-related risks more effectively. Trade facilitation is another area where AI is expected to have a positive impact, complementing efforts to digitize trade documents.29 AI-powered systems can analyze trade documents, verify product compliance with regulations, detect fraudulent activities, and improve risk-based targeting of commercial shipments.30

25 Brynjolfsson, E., X. Hui, and Meng Liu (2018), "Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform"
26 Brynjolfsson, E., X. Hui, and Meng Liu (2018), "Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform"
27 eBay 2015.
"Empowering People and Creating Opportunity in the Digital Single Market", An eBay report on Europe's potential, October 2015.
28 Dharmarajan Sankara Subrahmanian, Artificial Intelligence Platforms Will Drive the Next Phase of Trade Finance Growth, Forbes, Dec 20, 2022, https:/
29 White Paper on the use of Artificial Intelligence in Trade Facilitation, UNECE, February 2023 https:/unece.org/sites/default/files/2023-02/WhitePaper_AI-TF_Feb2023_0.pdf
30 WTO/WCO Study Report on Disruptive Technologies, June 2022

This will help to reduce administrative burdens, enhance security, and lead to better compliance with international trade rules.

Yet geopolitics will lead to reduced trade in AI with China

Developments in AI and foundational AI models are already part of U.S.-China geopolitical competition, as both countries race to ensure they lead AI innovation and shape the governance of AI.31 China is intent on being a global leader in AI, and its 2017 New Generation AI Development Plan lays out steps to 2030, when China aims to be the world's primary AI innovation center.32 China is also very capable in AI, by all accounts second only to the U.S.33 There was also significant foreign investment between 2015 and 2021 in Chinese AI startups, which constitute the second largest AI market behind the U.S.34

This competition over AI has already spilled over into U.S.-China trade and investment flows, driving so-called de-risking of the U.S. (and allied) economies from China in areas of critical technology, including AI. On October 7, 2022 and October 17, 2023, the Biden administration imposed comprehensive restrictions on exports to China of the advanced semiconductors needed for AI applications, and the software and equipment needed to make semiconductors.35 The U.S. has also prohibited engineers and scientists from assisting China in developing advanced semiconductors. In addition, the U.S. has tightened investment screening of Chinese investors into critical technology in the U.S., including AI, and most recently issued an executive order, to come into effect in 2024, that would prevent U.S. outbound investment into key technology sectors in China, including AI.36 The net result is that geopolitical competition with China is reducing international trade and investment between the U.S. and China in the technology and AI compute needed for developing foundational AI models.

31 Ian Bremmer and Mustafa Suleyman, The AI Power Paradox, Foreign Affairs, Sept/Oct 2023; Sisson, M., 2023. Artificial Intelligence, Geopolitics, and the US-China Relationship, Konrad-Adenauer-Stiftung, Germany. Retrieved from https:/ 06 Nov 2023. CID: 20.500.12592/3pd5z7.
32 "Notice of the State Council on Issuing the New Generation Artificial Intelligence Development Plan" 國務院關于印發新一代人工智能發展規劃的通知, PRC State Council, 2017, https:/perma.cc/B9ZR-5LQL
33 Kerry, Meltzer, and Sheehan, Can Democracies Cooperate with China on AI Research, Brookings Working Paper, Jan 9, 2023. https:/www.brookings.edu/research/can-democracies-cooperate-with-china-on-ai-research/
34 Emily S. Weinstein and Ngor Luong, "U.S. Outbound Investment into Chinese AI Companies", CSET, February 2023
35 https:/www.bis.doc.gov/index.php/documents/about-bis/newsroom/press-releases/3158-2022-10-07-bis-press-release-advanced-computing-and-semiconductor-manufacturing-controls-final/file; https:/www.bis.doc.gov/index.php/documents/about-bis/newsroom/press-releases/3355-2023-10-17-bis-press-release-acs-and-sme-rules-final-js/file
36 https:/www.whitehouse.gov/briefing-room/presidential-actions/2023/08/09/executive-order-on-addressing-united-states-investments-in-certain-national-security-technologies-and-products-in-countries-of-concern/

The risks from LLMs

As outlined, foundational AI models such as LLMs present a range of significant economic and trade opportunities.37 However, realizing these upsides will also require addressing the risk of harm from AI. In other words, ensuring that LLMs are responsible and trustworthy is paramount. This notion of responsible and trustworthy AI picks up on goals expressed in the 2023 Bletchley Declaration on AI Safety, agreed by 28 countries and the EU, including the U.S., China, Germany, France, Japan, Indonesia, Brazil, and others, which calls for AI that is "trustworthy and responsible."38 Effective AI governance that produces responsible and trustworthy AI will be needed to underpin broad-based uptake of AI by governments, businesses, and households.

Many of the risks of LLMs, such as disinformation and risks to privacy, are not new or specific to AI, but may be made more acute. For instance, AI could lead to more misinformation, but this is already a problem with online platforms. LLMs may also introduce new risks, such as risks of exclusion where LLMs are unavailable in some languages. Some of these risks may also end up being mitigated by AI as LLMs are further refined and models become more powerful and accurate. For example, ChatGPT4 is 70 percent more accurate than ChatGPT3.5.39 That said, ChatGPT4 retains various limitations and associated risks of harm, including bias and misinformation.40 Moreover, while refining LLMs can reduce some risks of harm, other risks may become more acute as a result. For example, more accurate LLMs can increase the risk of over-reliance by people on the results of LLMs, underscoring that addressing AI risks will involve trade-offs. A larger point is that developing trustworthy and responsible AI should be in everyone's interest.
It is needed as a key building block for optimizing the upsides of AI. However, achieving trustworthy and responsible AI will also require navigating various trade-offs, where optimizing for some value may require sacrifices elsewhere. How this is done and where these trade-offs are struck will require broad-based and inclusive discussions at domestic and international levels. Developing new trade commitments and progress in international economic forums will be an important part of these international efforts. The following outlines the key risks of LLMs, to be clear about the challenges, before getting into how trade policy and cooperation in international economic forums can realize the upsides and address the risks.

37 Markus Anderljung and Julian Hazell, "Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?"
38 Bletchley Declaration, https:/www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
39 ChatGPT4 Technical Report, 27 March 2023
40 ChatGPT4 Technical Report, 27 March 2023

Discrimination, exclusion, and toxicity

LLMs are trained on data that encodes existing social norms, with all their biases and discrimination. LLMs will encode unfair discrimination when the data on which they are trained reflects historical patterns of discrimination. For example, earlier versions of ChatGPT4 associated homemaker or nurse with the female pronoun she.41 When ChatGPT3 was asked to complete a sentence about Muslims, 66 percent of the time it featured Muslims committing violence.42 Moreover, as LLMs have the capacity for emergent behavior as they scale and learn in the wild, this can lead to different forms of harm over time, and addressing these risks will likely require ongoing assessments of the LLM. However, even here, the extent to which ChatGPT4 exhibits emergent capacity is uncertain.43

LLMs can also risk further marginalization and exclusion of people or groups of people. This can happen when the accuracy of LLMs declines for disadvantaged and marginalized groups that may be using slang or dialects that the LLM does not recognize. As LLMs are more widely used, failing to respond accurately to language prompts can affect access to a wide range of services.

The use of toxic language is a widespread problem with online platforms that may be exacerbated by LLMs. This is also one area, however, where AI can help reduce toxicity, both by identifying and removing it and by using technical responses such as reinforcement learning from human feedback. That said, what is toxic language for some is not for others, and context matters, underscoring the challenge. This difficulty of getting toxicity to zero also points to a need to understand what is an acceptable level of risk. Is it zero, is it better than the status quo, or something else? Determining the risk that a country or society is willing to accept is a core expression of sovereignty. However, an explicit discussion about the level of risk tolerance seems necessary.

41 Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V. & Kalai, A. T., in Advances in Neural Information Processing Systems Vol. 29, 4349-4357 (NeurIPS, 2016)
42 A. Abid, M. Farooqi, J. Zou, "Large language models associate Muslims with violence", Anti-Muslim Bias in GPT-3, August 2020
43 Rylan Schaeffer et al., "Are Emergent Abilities of Large Language Models a Mirage?", 28 April 2023, arXiv:2304.15004v1

Security and privacy

Information hazards arise when LLMs disseminate information that is true and can be used to create harm to others. Examples of information hazards
include information on how to build a bomb or commit fraud.44 A related challenge is preventing LLMs from revealing personal information about an individual that risks harming privacy. Another risk from the misuse of LLMs is an increase in the incidence and effectiveness of crime. For instance, criminals can use LLMs to fine-tune spam emails to impersonate an individual, allowing for more targeted manipulation and more successful phishing.45 This underscores a broader point about the types of risk mitigation techniques that will need to be developed for LLMs, which include strengthening the human capacity to review and challenge the information provided by LLMs.

Misinformation

LLMs can also be expected to make false statements and reasoning errors, referred to as hallucinations.46 This remains true for ChatGPT4, though as discussed, with significant improvements over ChatGPT3.5.47 Given the way that LLMs work, assigning a probability to what should be the next best word based on the previous word, sentence, and overall text, nothing about this process presumes the truth of the resulting sentence. In addition, training data drawn from the web contains many false statements. Even training LLMs on only factual data would not necessarily overcome this problem, as context matters. For instance, a factual statement such as “John owns a car” may be true in one context and not another. LLMs so far do not reliably distinguish between such contexts.48

LLMs also increase the risk of greater and more effective misinformation and disinformation campaigns. For instance, LLMs can be used to generate very believable false statements, images, and videos that expand the disinformation space and the harm already caused by online misinformation and disinformation.49

44 N. Bostrom et al., “Information Hazards: A Typology of Potential Harms from Knowledge”, Review of Contemporary Philosophy, 2011
45 Markus Anderljung and Julian Hazell, “Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?”
46 G. Branwen, GPT-3 Creative Fiction, https://
47 ChatGPT4 Technical Report, 27 March 2023
48 L. Weidinger et al., “Ethical and social risks of harm from Language Models”, DeepMind, https://arxiv.org/abs/2112.04359
49 Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F. and Choi, Y., 2019. Defending against neural fake news. Advances in Neural Information Processing Systems, 32.

Relatedly, LLMs can
also be used by authoritarian governments to improve domestic surveillance and as a propaganda tool.50

Overconfidence in the results

There is a related problem with overconfidence in results generated by LLMs. This happens when people anthropomorphize LLMs, overestimate their competencies, and place unwarranted trust in the AI. This is likely to occur as interaction with LLMs appears human-like, passing the Turing test and leading people to assign impressions of warmth and competence (and even consciousness) to AI systems.51 Overconfidence in the output of such human-like LLMs can lead to even greater reliance on LLMs, including on false information, which can perpetuate and expand the scope for harm. Such harm can also be material, such as where it leads people to misdiagnose using LLMs or to base action on information provided by LLMs that is incorrect.52

Explainable and interpretable results

LLMs make achieving explainability and interpretability a particular challenge due to the inherently unknowable process by which LLMs produce results and the difficulty of measuring the capabilities of these AI models.53 Explainability requires describing how AI systems function, and interpretability is about describing why the LLM made a particular output.54 For this reason, it has been noted that foundational LLMs can “increase human knowledge but not human understanding.” The difficulty of explaining LLM outcomes can exacerbate other potential LLM harms.55 For instance, interpretability helps users assess whether an LLM is fair, robust, and trustworthy.56 Being unable to interpret how or why an LLM produced toxic language or discriminatory outcomes can make detecting such failures harder, thereby increasing the scope for harm.

50 Markus Anderljung and Julian Hazell, “Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?”
51 McKee, Kevin R., Xuechunzi Bai, and Susan Fiske. 2021. “Humans Perceive Warmth and Competence in Artificial Intelligence.” PsyArXiv. February 26. doi:10.31234/osf.io/5ursp
52 Bickmore TW, Trinh H, Olafsson S, O'Leary TK, Asadi R, Rickles NM, Cruz R, Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant. J Med Internet Res 2018;20(9):e11510
53 F. Doshi-Velez and B. Kim, Towards a Rigorous Science of Interpretable Machine Learning, arXiv:1702.08608 [stat.ML]
54 NIST AI RMF (AI RMF 1.0), pp. 16-17
55 Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, “ChatGPT Heralds an Intellectual Revolution”, WSJ Opinion, Feb 24, 2023
56 F. Doshi-Velez and B. Kim, Towards a Rigorous Science of Interpretable Machine Learning, arXiv:1702.08608 [stat.ML]

Measuring the risk and accountability of
LLMs

LLMs also introduce new challenges when it comes to measuring AI risk. Foundational AI models such as LLMs bifurcate the AI model developer and the entity that then takes the model and develops it for specific applications. This raises new challenges when it comes to ensuring accountability for the LLM across the value chain, including how to assess the risk of an LLM when its ultimate use may be unforeseen by the original LLM developer.57 As the AI value chain lengthens, this raises the issue of how downstream users can assess risk and where to allocate liability for harm. Relatedly, this will also require addressing when access to the foundational model and its underlying data by third parties may be needed.

Copyright infringement

LLMs raise a host of copyright and patent issues.58 LLMs are trained on the internet, which raises the risk of using a lot of copyrighted material. The outputs from ChatGPT4 may also be similar enough to existing copyrighted work that this output may infringe copyright. Where LLMs produce new creative output or inventions, there is the question as to whether this can receive copyright or patent protection. For instance, is ChatGPT4 a creator, or is the creator the human prompting the chatbot? Finally, LLMs and other forms of foundational AI systems can copy artists, whatever the medium. For example, you can now listen to Drake covering Colbie Caillat or Michael Jackson covering The Weeknd, yet these are all generated by AI systems. The question as to whether this output infringes copyright remains unanswered.

The above analysis of risks from foundational AI does not cover all AI risks, including concerns about AI alignment (how to align the goals of AI with those of humans, particularly when it comes to superhuman AI or artificial general intelligence (AGI)), as well as the use of AI for national security purposes and related cybersecurity challenges.59 These issues are being discussed in other specific forums, and trade agreements, the G7, and the TTC may not be well suited to engage with these types of AI risks.

57 Alex C. Engler and Andrea Renda, “Reconciling the AI Value Chain with the EU's Artificial Intelligence Act”, CEPS, September 2022-03
58 WIPO Conversation on Intellectual Property and Artificial Intelligence, Revised Issues Paper on Intellectual Property and Artificial Intelligence, WIP/IP/AI/GE/2-/1/Rev., May 21, 2020
59 National Security Commission on Artificial Intelligence, https://www.nscai.gov/wp-content/uploads/2021/03/Final_Report_Executive_Summary.pdf

Part 2: International cooperation and a role for trade policy

Why international cooperation on AI is needed

As outlined in the work of the Forum on Cooperation in AI (FCAI), there are a range
of reasons that international cooperation on AI is needed.60 The development of LLMs underscores and makes even more urgent the need for international cooperation on AI. AI will be governed in the first instance domestically, with governments taking different approaches. International AI cooperation has a role in guiding domestic AI governance, improving the outcomes, and building cooperation and interoperability globally among different approaches to AI governance. The following outlines where international cooperation on AI is needed and how foundational AI makes such cooperation even more important.

International cooperation is needed to update and develop commonly agreed principles for what is responsible and trustworthy AI in the age of foundational AI models.

International cooperation is needed to address the externalities and extraterritorial impacts of domestic AI regulation that can lead to higher costs for AI innovation and use in other countries, as well as greater AI risk. Foundational AI heightens the need for international cooperation as it accelerates the pace of AI regulation.

International cooperation is needed to facilitate learning from experience with AI governance.61 The rapid uptake and use of LLMs and experience with different approaches to regulating AI is generating learning that should be systematically and globally shared.

International cooperation is needed to expand opportunities for AI R&D and to access the resources needed to use foundational AI systems. Developing AI models, particularly LLMs such as ChatGPT4, is costly and compute-intensive. The result is that only so many companies and governments can run the most advanced LLMs, with implications for concentration in capacity. Greater access to foundational AI models consistent with developing responsible and trustworthy AI is needed to ensure that the economic and social benefits are widely shared.

60 C. Kerry, J. P. Meltzer, A. Renda, A. C. Engler & R. Fanni, “Strengthening International Cooperation on AI”, Brookings Report, October 2021, https://www.brookings.edu/wp-content/uploads/2021/10/Strengthening-International-Cooperation-AI_Oct21.pdf
61 Gillian K. Hadfield and Jack Clark, “Regulatory Markets: The Future of AI Governance”, April 2023

The role of trade policy in supporting international cooperation on AI

International trade agreements and the discussions underway in international economic forums such
as the TTC, G7, and the OECD, as well as in FCAI, are important for developing international cooperation on AI. Over the past decade, digital issues broadly have become increasingly central to FTAs and DEAs, and figure prominently in international economic discussions.62 As this section will outline, FTAs and DEAs support domestic AI regulation as well as international cooperation in AI governance. This includes commitments to cross-border data flows and avoiding data localization, agreement not to require access to source code as a condition of market access, agreement to have privacy regulation, and developing interoperability mechanisms. In addition, some trade agreements such as the New Zealand-U.K. FTA, digital economy agreements such as the Digital Economy Partnership Agreement (DEPA), and the Australia-Singapore DEA include specific AI commitments. A range of AI issues have also been taken up in various international economic forums, the main ones being the G7, the TTC, and the OECD. Efforts to develop international AI standards in global standards development organizations such as the ISO/IEC are also important areas for developing international cooperation on AI.

This distributed landscape for international cooperation in AI is potentially a feature rather than a bug, as it allows for flexible combinations of countries and other stakeholders, and the ability for agenda priorities to adapt quickly in response to developments in AI. Indeed, the explosion of foundational AI models, and LLMs in particular, has underscored the need for international cooperation to be nimble and adaptive. That said, the current landscape for international cooperation on AI has some downsides. These include the exclusion of some governments and key stakeholders, missed opportunities where progress made in one set of international discussions is not carried over or reflected in others, and duplication of effort.

Currently, the G7 seems the most likely place for effective discussions on AI, though it is not without its limitations. The G7 has a track record on AI, having had AI issues on its agenda since 2016. While the G7 as a seven-country grouping is not globally inclusive, it does include many countries where getting AI governance right will matter most, given the preponderance in these countries of AI compute, tech companies, and AI talent. In addition, each year the country hosting the G7 invites a number of other
countries to participate, and the European Commission and the OECD also participate in G7 meetings, which further expands the buy-in of G7 outcomes.

62 Joshua P. Meltzer, Supporting the Internet as a Platform for International Trade, Brookings Working Paper 69, February 2014, https://www.brookings.edu/wp-content/uploads/2016/06/02-international-trade-version-2_REVISED.pdf

The U.S.-EU TTC is another forum where cooperation on AI may be even more rapid and granular than in the G7, given the TTC's bilateral nature and technology-focused agenda. Finding ways for the U.S. and EU to cooperate on AI issues will be a key building block for any effective approach to international cooperation on AI governance. Yet, the TTC's bilateral nature will limit its global impact, and the government-to-government format of the group will likely limit its relevance.

While AI has been discussed in the G20, geopolitical tensions with China and Russia in particular make it unlikely that the G20 can play an effective role in building international cooperation in AI governance in the foreseeable future, and for this reason, it is not discussed further here. China does not participate in the G7 or the TTC. While China is a so-called key partner in the OECD, it does not engage in OECD work on AI. China is, however, a party to WTO negotiations on e-commerce. The question of how to involve China in AI governance is beyond the scope of this paper but is clearly important.

As a final point, there is an emerging debate about whether these
developments in AI governance are enough, with proposals variously calling for new forms of international cooperation and new international organizations.63 This paper focuses on the narrower question of how to use existing international economic forums and trade agreements to build international cooperation in AI. One reason for this focus on what is actually happening is that many of the AI governance needs identified by some authors are already being developed or could be developed (more or less) using existing international economic forums and through a more robust turn to trade policy. For instance, some proposals call for developing AI standards, yet as outlined here, there is already important work underway in developing international AI standards, such as for risk management frameworks, standards for mutual recognition, and auditing of AI systems. There are also calls by the U.S., the EU, Japan, and others in the G7 and TTC to expand this standards work and to increase the uptake and use of AI standards in domestic regulation. This is also an area where trade policy could contribute to supporting the development and use of international AI standards.64

63 European Commission President von der Leyen, State of the European Union speech 2023, https://ec.europa.eu/commission/presscorner/detail/en/speech_23_4426; Lewis Ho et al., “International Institution for Advanced AI”, arXiv:2307.04699v2, 11 July 2023; Ian Bremmer and Mustafa Suleyman, “The AI Power Paradox”, Foreign Affairs, Vol. 102, No. 5, Sept/Oct 2023

The following outlines key areas in trade agreements, DEAs, and in other international economic forums where there are existing commitments on AI, drawing on my recent article in the Asian Economic Policy Review.65 Section three will discuss what more could be done in terms of new trade commitments to support international AI outcomes that maximize the opportunities and help develop AI governance that also addresses the risks from AI.

The WTO

The WTO rules were agreed well before AI was relevant for international trade and even before the impact of the internet and cross-border data flows became an international trade issue. Yet, the WTO remains relevant for AI. The WTO could become even more relevant upon a successful conclusion of the Joint Statement Initiative (JSI) e-commerce negotiations, which could result in commitments on cross-border data flows, data localization, and access to source code, which would likely support easier access to better data for AI projects and reduce developer risk when exporting AI models. The following outlines key WTO rules for AI.

Specifically, under the GATS, where WTO Members have made a mode 1 services commitment, there is also a commitment to allow for the data flows needed to deliver that service.66 The WTO Agreement on Technical Barriers to Trade (TBT) requires WTO members to use international standards as a basis for their domestic regulation and to justify departures from international standards.67 These commitments could apply to AI standards for products and be a basis for building interoperability across AI regulation. The TBT Agreement also includes commitments to cooperation on mutual recognition and conformity assessment agreements, which can help reduce the costs of trade where exporters can avoid the costs of multiple conformity assessment
processes for AI.68 These TBT commitments are, however, limited in that they only apply to goods and not services. Yet, AI systems and LLMs will be deployed in many instances as services in the market via APIs and the cloud. Under the WTO plurilateral Information Technology Agreements (ITA) I and II, WTO members have also agreed to reduce tariffs on a range of technology products, including some used to support AI development, such as goods used to expand internet connectivity and use. The WTO TRIPS agreement includes an agreement on international intellectual property (IP) standards developed in various IP treaties. Yet, the key issues raised by foundational AI models such as LLMs are not specifically addressed in these international copyright commitments.

65 Joshua P. Meltzer, The Impact of Foundational AI on International Trade, Services and Supply Chains in Asia, Asian Economic Policy Review, November 2023, https://
66 Appellate Body Report, United States – Measures Affecting the Cross-Border Supply of Gambling and Betting Services, para. 202, WT/DS285/R (adopted Apr. 25, 2013); Appellate Body Report, China – Measures Affecting Trading Rights and Distribution Services for certain Publications and Audiovisual Entertainment Products, para. 151, WTO Doc. WT/DS363/AB/ (adopted Dec. 21, 2009).
67 TBT Agreement Article 2.4
68 TBT Agreement Article 5

While WTO rules remain relevant for AI, the WTO is unlikely to build the international cooperation on AI that is needed. This reality reflects the larger institutional challenges the WTO faces in addressing new trade issues and, similar to what hobbles the G20, geopolitical competition over AI will prevent a multilateral forum such as the WTO from making significant progress. For these reasons, FTAs, DEAs, and other international economic forums such as the G7, the OECD, and the TTC will need to be the focus of efforts.

Free trade agreements (FTAs) and digital economy agreements (DEAs)

Access to data

There have been significant developments in FTAs and DEAs that are relevant to AI. A recent development in FTAs is the emergence of Digital Trade Chapters. These chapters now include a range of commitments relevant to AI, such as commitments to cross-border data flows and avoiding data localization measures. These commitments matter for AI as they affect access to data for AI using the cloud and APIs. These commitments come with an exceptions provision. The extent of this exception strikes a balance between the commitment to, for instance, the free flow of data and the degree to which governments can impose restrictions on cross-border data flows to meet other regulatory objectives. For example, the CPTPP, USMCA, U.K.-Japan Comprehensive Economic Partnership Agreement, and the New Zealand-U.K. FTA include commitments to cross-border data flows and to no data localization measures, along with an exception provision modeled on the GATS general exception provision in Article XIV. In contrast, the exception provision in the Regional Comprehensive Economic Partnership (RCEP) to the commitment to cross-border data flows is based on the GATS Article XIV bis national security exception, allowing for much broader government discretion to restrict data.

Access to source code

Modern trade agreements and DEAs also include a commitment not to require access to source code as a condition of market access. Control over source code is a key source of value and can determine control of the AI model. The CPTPP, USMCA, and the Australia-Singapore DEA include commitments not to require access to source code as a condition of import. These commitments are also balanced against the need for access by the government
for regulatory purposes. For example, USMCA preserves the rights of regulatory or judicial bodies to require access to source code for a specific investigation, inspection, enforcement action, or judicial proceeding, subject to safeguards against unauthorized disclosures.69

Interoperability

Another focus of trade policy, and of international economic cooperation more broadly, is on developing interoperability mechanisms. Interoperability is focused on enabling cross-border data flows given different approaches to data regulation. This matters for AI development given the importance of data for AI, and for LLMs in particular. For example, the CPTPP states that the parties will “encourage the development of mechanisms to promote compatibility between these different regimes. These mechanisms may include the recognition of regulatory outcomes, whether accorded autonomously or by mutual arrangement, or broader international frameworks.”70 USMCA states that each party should encourage the development of mechanisms to promote compatibility between these different regimes. The parties to USMCA “recognize that the APEC Cross-Border Privacy Rules system is a valid mechanism to facilitate cross-border information transfers while protecting personal information”, another way of saying that this is an interoperability mechanism.

Open government data

Related to the importance of access to data for AI, trade agreements increasingly include a commitment to open government data. Governments possess considerable amounts of data, whether in the form of tax returns, medical records, or meteorological data. All of this data has potential use cases in training AI systems. The move to make government data more accessible therefore matters for AI. For example, the USMCA digital trade chapter includes provisions on the availability of government data.71

69 USMCA Article 19.16
70 CPTPP Article 14.8
71 USMCA Article 19.18; CPTPP

AI-specific commitments

The New Zealand-U.K. FTA has notably gone further than other FTAs with respect to making specific AI commitments in the digital trade chapter. This includes an agreement to take account of the principles and guidelines of relevant international bodies when developing AI governance frameworks, and to take a risk-based approach to AI regulation that acknowledges industry-led standards development and risk management best practices. Other areas of AI cooperation include enforcement, cross-border research and development, and algorithmic transparency. The Australia-Singapore Digital Economy Agreement and the Digital Economy Partnership Agreement (DEPA) are also starting to directly address AI in the context of ethical use, standards development, talent, and more. The parties to DEPA have agreed
to endeavor to promote ethical governance frameworks that support the trusted, safe, and responsible use of AI technologies and to take into consideration internationally recognized principles, including explainability, transparency, fairness, and human-centered values.72 In the Australia-Singapore DEA, the parties have agreed to share research and industry practice around AI technologies and their governance, to promote the responsible use of AI technologies, and to collaborate in the development and adoption of AI governance frameworks that support the trusted, safe, and responsible use of AI technologies, taking into account international principles or guidelines on AI governance.73

International AI standards

Some FTAs and DEAs have also included a limited commitment to AI standards development and use.74 The New Zealand-U.K. FTA and Australia-Singapore DEA, for instance, include commitments to participate in the development of AI standards in regional and international bodies, share experience developing standards, exchange views on potential future areas to develop and adopt standards, and build cooperation with industry on research projects that can increase understanding of the AI standards needed.75

72 DEPA Article 8.2
73 Australia-Singapore DEA Article 31
74 Australia-Singapore DEA Article 31
75 Australia-Singapore DEA Article 30

Other international economic forums

As already touched on, there are a range of other international economic forums where cooperation on AI is being developed. The key ones are the G7, the U.S.-EU Trade and Technology Council, and the OECD. This is not a complete overview of the forums for international discussion on AI, which also include work by the U.N. to develop a Global Digital Compact and in the Global Partnership on AI (GPAI). The Indo-Pacific Economic Framework, GPAI, and the Quad are also involved in different ways with developing cooperation on AI but are not addressed further here, as they have yet to lead on AI governance in the way that has been seen with the G7, TTC, and OECD. The G20 is another international economic forum where AI and AI-related issues
have been discussed. However, as noted earlier, the role of the G20 is not addressed here, as geopolitical competition with China over AI and the inclusion of Russia make G20 progress on AI issues unlikely.

The G7

The G7 is emerging as a key venue for leadership on a range of digital policy issues, including AI. Most recently, in 2023 G7 leaders established the Hiroshima AI Process with a focus on generative AI.76 On October 30, 2023, the G7 released International Guiding Principles for Organizations Developing Advanced AI Systems and an International Code of Conduct for Organizations Developing Advanced AI Systems, both documents covering foundational AI models.77 The Guiding Principles update the OECD AI Principles to consider new risks posed by foundational AI models. The Code of Conduct builds on the Voluntary AI Commitments large tech companies made at the White House in July and is a set of steps companies agree to take to “seize the benefits and address the risks and challenges brought about by these technologies.”

76 G7 Hiroshima Leaders' Communiqué
77 https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system

The G7 has also been central in developing the G20 notion of data free flow with trust (DFFT). This includes the G7's 2023 agreement to establish an Institutional Arrangement for Partnership (IAP) to progress DFFT. Relatedly, the G7 has also developed the importance of interoperability as a means of enabling DFFT. The 2023 G7 Digital and Tech Ministers' Statement also reaffirmed the importance of developing interoperability mechanisms, specifically with respect to AI governance frameworks. Under the G7 2023 Digital and Tech Track, the G7 will:78

o Raise awareness of international AI technical standards.
o Build capacity among stakeholders to participate in the development of international AI technical standards.
o Encourage the adoption of international AI standards as a tool for advancing trustworthy AI.

The G7 has also been leading the development of principles on digital trade that can also support AI. In 2021 the G7 released the G7 Digital Trade Principles, which include the principle that “data should be able to flow freely across borders with trust” and elaborate on how to balance opportunities from data flows with the need for domestic regulation that might restrict cross-border data flows.79 This includes an
205、 agreement to“address unjustified obstacles to cross-border data flows,while continuing to address privacy,data protection,the protection of intellectual property rights,and security.”80 The 2021 G7 Digital Trade Principles also recognize the need to“cooperate to explore commonalities in our regulat
206、ory approaches and promote interoperability between G7 members.”81 The US-EU Trade and Technology Council(TTC)In the TTC,the U.S.and EU have identified trustworthy and innovative AI as a key priority.Since then,the TTC has made some progress on AI cooperation.The main area is the development of a jo
207、int road map with a focus on three areas of cooperation:o Interoperable definitions of key terms such as trustworthy,risk,harm,risk threshold,and socio-technical characteristics such as bias,robustness,safety,interpretability,and security.A shared and consistent understanding of these concepts and t
208、erminology is key 78 https:/g7digital-tech-2023.go.jp/topics/pdf/pdf_20230430/ministerial_declaration_dtmm.pdf 79 G7 Digital Trade Principles 80 G7 Digital Trade Principles 81“G7 Trade Ministers Digital Trade Principles,”GOV.UK,October 22,2021,https:/www.gov.uk/government/news/g7-trade-ministers-dig
209、ital-trade-principles.28 for operationalizing AI and risk management in an interoperable fashion.o Support for multi-stakeholder development of AI standards,including cooperation on AI standards development,convening stakeholders to promote representation in SDOs,promoting the development and use of
international AI Standards, and developing technical tools to map, measure, manage, and govern AI risks. The U.S. and EU also agreed to adhere to the WTO TBT principles, i.e., to use international standards as appropriate as the basis for technical regulations, conformity assessment, and regional standards.82

o Monitor and measure existing and emerging AI risks, including developing a tracker of risks and risk categories that can provide common ground for the U.S. and EU to better define risks and their impact.

The May 2023 TTC Ministerial produced the EU-U.S. Terminology and Taxonomy for Artificial Intelligence, a list of 65 AI terms.83 This includes technical terms such as synthetic data and reinforcement learning as well as more socio-technical terms such as what is meant by accuracy, human-centric AI, and resilience. These terms are a “first edition” and open to feedback and further revision. Alignment
in AI terms is a necessary building block toward more robust cooperation on international standards for trustworthy AI. Developing a shared understanding of these terms is needed to develop a common approach to AI standards, regulations, and policies. Getting broader agreement on key terms can help align domestic AI regulation and underpin international cooperation on auditing to support the development of international AI standards.

The U.S.-EU TTC is also engaging on open government data. This includes identifying and promoting best practices for open government data, facilitating collaboration between government agencies, businesses, and civil society organizations on open government data, supporting research and development on open government data, and promoting international standards for open government data.

82 TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, December 1, 2022, https://www.nist.gov/system/files/documents/2022/12/04/Joint_TTC_Roadmap_Dec2022_Final.pdf
83 https://ec.europa.eu/commission/presscorner/detail/en/statement_23_2992

Standards development organizations

There is already significant activity underway
in various domestic, regional, and global standards development organizations (SDOs) on AI technical and socio-technical standards. AI standards are being developed in SDOs such as the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), and the International Telecommunication Union (ITU). This includes AI standards around concepts and terminology (ISO/IEC 22989) and AI risk management systems (ISO/IEC 42001 and 23894). There is also a range of AI standards under development on data quality management and governance, AI system testing, and oversight of AI systems. For instance, the IEEE has a draft standard for Algorithmic Bias Considerations, a draft standard addressing the record-keeping requirements in the EU AI Act, and a Standard Model Process for Addressing Ethical Concerns During System Design.84 A defining feature of global SDOs such as
the IEC, ISO, and IEEE is that they are multi-stakeholder and industry-led. Governments and civil society participate alongside the private sector. International standards developed by global SDOs are typically based on consensus and are voluntary, in that it remains up to governments and businesses whether to use them. Yet, despite their voluntary nature, international AI standards developed by global SDOs will likely have significant effects on AI. AI developers are likely to use AI standards as benchmarks in contracts and as a basis for industry self-regulation. Governments are also likely to reference AI standards in domestic laws or regulations, making them de facto binding. Indeed, the EU AI Act will rely extensively on AI standards in areas such as risk management systems, governance and quality of data sets, record keeping, human oversight, and post-market monitoring. Under the EU AI Act, conformity
with AI standards will create a presumption of conformity with the Act. The NIST AI RMF also references multiple AI standards from global SDOs.

The importance of international cooperation on standards, as well as the role of international standards in minimizing unnecessary regulatory diversity that can segment markets and raise the costs of compliance, has long been a feature of trade policy. As outlined, the WTO TBT Agreement, which is also reflected in FTAs, includes commitments to base domestic regulation on international standards. When it comes to AI, the development of international AI standards in global SDOs provides an opportunity to use trade policy to reinforce the importance of cooperation on AI standards and of using international AI standards as a basis for domestic regulation.

84 IEEE P7003, IEEE P7001, IEEE 7000

Part 3: Next steps for trade policy and discussions in international economic forums

As discussed, foundational AI models heighten the need for international cooperation on AI, particularly in light of the speed at which LLMs like ChatGPT4 are being developed and adopted. Indeed, the call for a moratorium on further versions of ChatGPT4 applications and research speaks to growing anxiety.85 As outlined, there are already important commitments on digital trade that matter for AI, and AI is a focus of discussion in a range of international economic forums. The rapid pace of AI development, the learning needed to understand the opportunities and risks of AI, as well as the need
to develop best practices when it comes to AI regulation require a strategic, two-tiered, mutually reinforcing role for trade agreements and discussion in international economic forums. Trade agreements should elevate the output from AI-focused forums and standards bodies into trade commitments and develop new commitments. International economic forums such as the G7, the TTC, and the OECD also provide opportunities for sharing regulatory experience and testing new forms of cooperation on AI that could later be ripe for inclusion in trade agreements. FCAI, as a track 1.5 dialogue, is another forum to explore cutting-edge AI issues. The following outlines where additional commitments in trade agreements and DEAs are needed and where to build on the AI-focused discussions in the various international economic forums.

Access to AI compute

AI compute covers the hardware and software that supports
AI workloads and applications.86 Access to AI compute is critical if countries are to develop foundational AI and LLMs. Yet the AI compute needed to run foundational AI models keeps growing rapidly. By some estimates, the computational capacity required to train AI models has grown by hundreds of thousands of times since 2012.87 For instance, training ChatGPT4 has required access to supercomputers using state-of-the-art hardware (CPUs and GPUs) and high-bandwidth networks that access top cloud infrastructure.88 AI platforms or software, such as TensorFlow or PyTorch, are also needed to build or implement AI capabilities, as well as the applications to deliver AI capabilities.

85 Pause Giant AI Experiments: An Open Letter, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/
86 OECD.AI Expert Group on AI Compute and Climate
87 Sevilla, J. et al. (2022), “Compute Trends Across Three Eras of Machine Learning”, https://arxiv.org/abs/2202.05924
88 “Microsoft announces new supercomputer, lays out vision for future AI work,” Microsoft, May 19, 2020

According to one survey of 450 industry professionals in the U.S. and Europe, access to AI compute is now the key challenge facing AI development, surpassing access to data.89 In the U.S., the NAIRR Task Force highlighted the extent to which the AI R&D ecosystem in the U.S. is becoming inaccessible for many businesses and researchers. These developments, in particular the growing cost of the AI compute needed to train LLMs, point to the need for expanding AI compute capacity.90

Trade policy can support access to AI compute and data by reducing barriers to AI infrastructure, data, and cloud computing, as well as to AI services. In some cases, this may be about reducing trade barriers to the hardware needed for AI compute. In other cases, it is about reducing barriers to trade in the services needed to access AI compute and in AI services themselves. For instance, Turkey's prohibition on the use of cloud computing services by public institutions and Korea's cloud security requirements create barriers to trade in cloud services that can negatively affect the development and uptake of AI.91 Commitments in trade agreements on avoiding data localization measures could address some of these barriers and highlight their relevance for AI.

Risk-based AI regulation

One area where trade policy could be
developed further is by giving added content to the existing international agreement that AI regulation will be risk-based. As noted, the New Zealand-U.K. FTA includes a commitment to a risk-based approach to AI. The 2023 TTC ministerial affirmed the importance of a risk-based approach.92 In addition to the EU and the U.S., various other governments are developing risk-based approaches in their AI regulation, including Japan, the U.K., Canada, and Brazil. More is needed on what it will mean for regulation to be risk-based, including what risk assessment and risk management tools governments develop and organizations adopt. The NIST AI RMF is one example of how organizations can conduct a risk assessment for AI that could be used globally.93 The AI RMF also references international AI standards, making it a strong candidate for building interoperability among AI regulations calling for a risk-based approach to AI. Trade agreements could incorporate or reference the AI RMF as an agreed tool. The TTC and G7 could also reference the AI RMF as an example of a risk-based approach to AI regulation.94

89 Run:AI's 2023 State of AI Infrastructure survey reveals that infrastructure and compute have surpassed data scarcity as the top barrier to AI development.
90 A Blueprint for Building National Compute Capacity for Artificial Intelligence, OECD Digital Economy Papers, February 2023, No. 350
91 2023 NTE Report, USTR, https://ustr.gov/sites/default/files/2023-03/2023%20NTE%20Report.pdf
92 U.S.-EU Joint Statement of the Trade and Technology Council, The White House
93 https://www.nist.gov/itl/ai-risk-management-framework

There are other ways that FTAs and DEAs can develop commitments to risk-based AI regulation. The WTO Sanitary and Phytosanitary (SPS) Agreement provides some guidance here. A key commitment in the SPS Agreement is that governments undertake risk assessments and base their SPS measures on those assessments.95 Other relevant SPS commitments are that regulations not be more trade restrictive than necessary to achieve the appropriate level of SPS protection.96 Under the SPS Agreement, governments also remain free to set their own level of risk tolerance. Using the SPS Agreement as a guide, FTAs and DEAs could include commitments to base AI regulation on a risk assessment, to specify the risks against which potential harm is to
be assessed, and to provide explanations for risk management practices and approaches that in effect regulate AI in ways that are more restrictive than necessary to achieve each government's chosen level of risk tolerance.

Government procurement and responsible and trustworthy AI

Trade agreements could include commitments on government procurement that support the development of responsible and trustworthy AI. In many countries, government procurement will be an important way to influence how AI is developed. For instance, U.S. government agencies are required to develop regulatory plans for AI, and a number have done so.97 The U.S. Executive Order on AI directs federal government agencies to develop standards, guidelines, and reports to address risks from AI, as well as to encourage the uptake and use of AI by the federal government.98 EU agencies will also need to develop AI policies under the
EU AI Act as they assume responsibility for regulating AI incorporated into regulated products. Governments can also seek to drive responsible and trustworthy AI by setting standards through government procurement. Trade agreements can help here by including commitments that government procurement contracts be based on international AI standards and be nondiscriminatory.99 Commitments such as these would support the uptake and globalization of international AI standards and promote regulatory compatibility among countries. The G7 Code of Conduct for AI could also be expanded via its uptake in government procurement contracts. Commitments to nondiscrimination would also support international trade in AI products and services.

94 U.S.-EU Joint Statement of the Trade and Technology Council, The White House, https://www.whitehouse.gov/briefing-room/statements-releases/2022/12/05/u-s-eu-joint-statement-of-the-trade-and-technology-council/
95 SPS Agreement Article 5.1
96 SPS Agreement Article 5.5
97 Maintaining American Leadership in Artificial Intelligence, February 2019, EO 13859 and OMB guidance M-21-06
98 U.S. Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
99 See for example the NZ-UK FTA, Articles 16.4 & 16.9

Conformity assessment and auditing of LLMs

Assessing compliance of AI systems with
AI regulations and standards will require ex-ante conformity assessment mechanisms and ex-post monitoring, as well as auditing of AI systems in goods and services. When AI is exported in products such as medical devices or motor vehicles, mutual recognition agreements (MRAs) on conformity assessment between countries can allow AI products to be tested against the importing country's AI regulation in the country of export, reducing the uncertainty and costs of trade. A complementary step is recognition by the importing country of conformity assessment bodies in the exporting country able to undertake the
conformity assessment.

There are various efforts underway to develop conformity assessment and auditing systems for AI. The EU AI Act requires ex-ante conformity assessments for high-risk AI systems by third parties, referred to in the AI Act as a “notified body.” Avoiding such conformity assessment requirements becoming a trade barrier will require the development of mutual recognition agreements with other countries. The AI Act does seem to foresee MRAs with third countries. Entities responsible for high-risk AI systems must also meet auditing documentation requirements. In the U.S., regulatory authorities such as the Federal Trade Commission are focusing on ex-post oversight of industry self-assessment of compliance with their AI policies.

Various DEAs have made initial progress on building cooperation on MRAs that could apply to AI. For example, the Australia-Singapore DEA includes a recognition of the importance of conformity assessment to support digital trade and an agreement to “endeavor to exchange information to facilitate conformity assessment to support digital trade.”100 Building on this could include new commitments to developing the necessary MRAs and recognition of conformity assessment bodies with respect to AI.

100 Australia-Singapore DEA Article 30.5
101 AI Act Annex VII, Clause 5.3

Another related area where trade policy could do more is with respect to auditing foundational AI models. The AI Act requires that third-party conformity assessment bodies carry out periodic audits to verify that the AI provider has an internal quality management system in place and to provide an audit report.101 The proposed amendments by the EU Parliament to the AI Act note the need to develop auditing capacity and call for internal auditing of foundational AI models to be broadly
applicable, i.e., a common approach to assessing risk across AI systems.102 It is likely that the auditing of AI systems will become a feature of how governments regulate AI. Auditing may also be needed where AI regulation relies on self-assessment by companies of compliance with laws and regulations and with their own internal standards and processes for delivering trustworthy AI.

Effectively auditing foundational AI models may require a tiered and multilayered approach. This could include governance audits that assess the organization developing the AI model, its organizational procedures, accountability structures, and quality management systems. Process audits of the AI model and its datasets, as well as ex-post downstream application audits, may also be necessary.103 Enabling effective audits and avoiding audit requirements becoming trade barriers will require common auditing standards and recognition of audit reports carried out in third countries. This can be facilitated by MRAs with third countries that establish who can qualify as an auditor and what constitutes an audit report for domestic AI regulation. Trade agreements could include commitments to MRAs for auditing.

In addition, trade agreements could be used to support domestic uptake of conformity assessment and auditing processes based on international standards. For instance, the ISO/IEC is working on a standard on how to carry out a conformity assessment for AI management systems and on the needed competencies for AI auditors. Basing conformity assessment and auditing systems on international standards could enable interpretability of auditing reports across countries, facilitating compliance with domestic AI regulation and building trust in AI systems.

Cooperation on international AI standards

As outlined, there are already some
international principles that can guide AI developers, and considerable work is underway in developing AI standards in global SDOs. There are two areas where trade policy can support the development and use of international AI standards. The first is by developing new TBT-like commitments that apply to international standards for services, which would cover AI. The second is by supporting the development of international AI standards in global SDOs.

Working to align regional approaches to AI standards with international AI standards

Trade policy in FTAs and DEAs can build on WTO TBT commitments and include
a commitment to base domestic AI regulation on international AI standards while providing flexibility to adapt international AI standards where necessary to respond to local needs and conditions. This would require going beyond the TBT Agreement and extending the commitment to AI as a service. AI regulation based on international AI standards should also benefit from a presumption of consistency with the trade agreement.

102 AI Act draft compromise amendments, p. 29, clause (60h)
103 J. Mokander et al., “Auditing Large Language Models: A Three-Layered Approach”, 16 Feb 2023

There are, however, limitations with the TBT-style approach in the AI context, in particular the scope of flexibility the TBT Agreement provides to ignore international standards in favor of domestic or regional standards where the government decides that the international AI standard is not fit for purpose. This is due to the socio-technical nature of many AI standards, which seek to address technical AI issues as well as many of the broader societal and rights-based impacts of AI. This means that many of the AI standards being developed under the AI Act, for example, will need to address the risks of AI to EU fundamental rights. For instance, the EU AI Act requires standards to establish a risk management system for high-risk AI systems. There are already global standards dealing with risk management: ISO 31000 contains general guidelines on risk management, and the AI-specific ISO/IEC 23894 addresses how organizations manage AI risk. On the one hand, there is an opportunity here to align the EU approach to risk management under the AI Act with global AI standards. Yet the ISO/IEC standards, which address whether AI systems operate consistently with an organization's standards, may not meet the EU's need for a risk management system covering the impact of AI systems on European fundamental rights.104 This raises the prospect that EU standards bodies conclude that international AI standards are not fit for purpose and instead require a regional approach.

To further strengthen a requirement to base domestic regulation on international AI standards, trade agreements should also include commitments that governments will ensure a domestic standards process that is transparent and open to broad participation, with opportunities for all stakeholders to submit comments, and with obligations on regulators to provide reasons
for their decision. Such an outcome would give confidence that departures from international AI standards were driven by legitimate domestic needs rather than protectionism.

104 Soler Garrido, J., Fano Yela, D., Panigutti, C., Junklewitz, H., Hamon, R., Evas, T., André, A. and Scalzo, S., Analysis of the preliminary AI
standardisation work plan in support of the AI Act, Publications Office of the European Union, Luxembourg, 2023, doi:10.2760/5847, JRC132833.

Ensure development of AI standards by SDOs that are fit for purpose

Commitments to base domestic AI regulation on international AI standards also require agreement on which standards bodies can produce the relevant international standards. The TBT Agreement provides guidance here in Annex 1, which defines standards as being based on consensus and as being developed by a body whose membership is open to the relevant bodies of at least all WTO members. The WTO TBT Committee Principles for the Development of International Standards adds further detail and lists the principles and procedures that should be observed when developing international standards.105 This includes, for instance, transparency and openness to all WTO Members. These TBT principles seem relevant today for identifying the standards bodies developing international AI standards. As outlined, discussions in international economic forums on international standards now also include discussion of the operation of the SDOs themselves, such as expanding participation in SDOs by developing-country governments, industry, and civil society.106 The TTC is also bringing together the U.S. and EU standards bodies and related organizations to work on metrics and methodologies for measuring AI trustworthiness, including risk management methods. However, this is one area where the limited membership of the TTC could restrict the impact of its work, which should aim for global uptake. This suggests at least seeding similar efforts in the G7.

105 Decision of the TBT Committee on Principles for the Development of International Standards, Guides and Recommendations with Relation to Articles 2, 5 and Annex 3 of the Agreement, WTO, https://www.wto.org/english/tratop_e/tbt_e/principles_standards_tbt_e.htm
106 Joshua P. Meltzer, “A Critical Technology Standards Metric: assessing the development of critical technology standards in the Asia-Pacific”, Brookings Report, September 2022, https://www.brookings.edu/wp-content/uploads/2022/09/CTSM-Report-Sep-2022_Final.pdf

Data governance for AI

There are currently only limited commitments in trade agreements on the data governance issues specific to AI, namely how better data governance can help minimize the risks that the data used to train AI models cause discrimination, lead to unfair outcomes, spread misinformation, or violate privacy. As outlined, DEPA and the Australia-Singapore DEA include commitments to sharing information and cooperation on AI governance frameworks and to the G7 work on Data Free Flow with Trust (DFFT), and the Global Cross-Border Privacy Rules (CBPR) Forum seeks to facilitate trusted access to personal data. Trade agreements and DEAs also increasingly include a commitment to protecting the privacy of personal data.107 This is a
good beginning, but more focused cooperation on data for AI and foundational AI is needed. For example, the heightened risk from LLMs of discrimination and toxicity underscores the importance of best practices when it comes to data curation and data governance. This is a complex area, but initial steps could aim to share best practices on how developers of LLMs document their data governance practices, how to incentivize appropriate data governance, and methods and experience with opening data and algorithms to scrutiny. Looking ahead, a better understanding of the data needs for foundational AI, including the opportunities for synthetic data, would benefit from international cooperation. Data governance for AI is also being taken up in international standards bodies; it is part of the NIST AI RMF, and there are data governance requirements in the EU AI Act. More robust commitments in
trade agreements and DEAs on how to use and reflect international AI standards as they apply to data governance are another way to level up a more consistent and robust approach to data governance for LLMs.

Another area where trade policy could add weight is government access to personal data held by private entities for law enforcement and national security purposes. The question of U.S. government access to such data was at the heart of the Schrems II case, which led the Court of Justice of the European Union to invalidate Privacy Shield.108 In December 2022, the OECD adopted a set of principles governing government access to personal data.109 This declaration marks an important development in enabling data free flow with trust. The declaration's principles balance government access to personal data for the legitimate needs of law enforcement and national security with the need to also protect privacy consistent with broader democratic norms. One focus for the recently agreed G7 IAP will be to increase awareness of this OECD declaration. The declaration could also be specifically referenced in trade agreements as the basis for a shared understanding of the terms on which governments can access data for national security and law enforcement purposes.

Transparency of reporting on foundational AI use

Building trust in how risks from foundational AI models are being addressed will require that the companies responsible for developing and testing foundational AI be transparent about the steps they take to test and mitigate these risks. In the U.S., large technology companies at the forefront of developing

107 CPTPP Article 14.8, USMCA Article 19.8, DEPA Article 4.2
108 CJEU, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems, C-311/18, 16 July
2020
109 OECD Declaration on Government Access to Personal Data Held by Private Sector Entities, 14/12/2022, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0487

foundational models made Voluntary AI Commitments at the White House in July, where they agreed to publicly disclose red-teaming and safety procedures in transparency reports and to share information among companies and with governments on advances in frontier capabilities and emerging risks and threats.110 These Voluntary AI Commitments were subsequently used by the G7 as the basis for the International Code of Conduct for Organizations. Modern trade agreements include comprehensive commitments by governments to regulatory transparency and due process when developing regulation affecting international trade, including opportunities for comment and commitments to provide written responses to comments. Trade agreements and DEAs could build on this approach and reference the G7 Code of Conduct as steps that all developers of foundational AI models would agree to take, whether government or private sector actors. The G7 Code of Conduct also targets actions that organizations should take to enhance information sharing and disclosure of AI governance and risk management policies. These could be turned from voluntary commitments into binding commitments by way of trade agreements, further strengthening trust in foundational AI models. Moreover, for reporting and disclosure to be meaningful across countries
will require some agreement on what information should be reported and disclosed and in what form. This is another area where FTAs and DEAs could elaborate.

110 Voluntary AI Commitments, https://www.whitehouse.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf

Part 4: Conclusion

AI will have significant implications for how economies grow, what jobs are done, how societies work, and how governments function. The release by OpenAI of ChatGPT4 has highlighted the rapid progress being made in foundational AI, with potentially significant new opportunities for economic growth and
human flourishing, but also with new risks. Governments are looking to regulate AI, and this is where much of what matters for AI governance will play out. International cooperation is needed to ensure that the AI governance that emerges is effective, enhances economic and social flourishing, and addresses the spillover and extra-territorial impacts of domestic AI regulation.

This paper outlines a role for building international cooperation through trade agreements as well as through the various international discussions on AI happening in the G7, the U.S.-EU TTC, and the OECD. In fact, as this paper outlined, there is a lot happening and progress is being made. Yet there are several areas where international cooperation needs to be deepened and expanded in light of foundational AI. This includes how to align approaches to risk assessment for AI, cooperation on conformity assessment and auditing of AI systems, developing international AI standards, and more.

What seems clear is that the key challenge will be to maximize opportunities to use AI globally while ensuring that AI is responsible and trustworthy. This will require regulating AI to minimize the risks and build trust in the technology, without stifling AI innovation and access. This is a big governance challenge that governments, industry, and civil society are only beginning to understand how to navigate. Undoubtedly, innovative approaches to domestic regulation and international cooperation will be required. Developing new commitments in trade agreements and DEAs, while also expanding and deepening discussions in international economic forums, presents key opportunities for developing the flexible and new approaches to international cooperation on AI governance that will be required.

1775 Massachusetts Ave NW, Washington, DC 20036
(202) 797-6000
www.brookings.edu