Governance in the Age of Generative AI: A 360° Approach for Resilient Policy and Regulation

WHITE PAPER
OCTOBER 2024

In collaboration with Accenture

Images: Getty Images, Midjourney

Disclaimer: This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

© 2024 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Contents

Foreword 3
Executive summary 4
Introduction 5
1 Harness past 6
  1.1 Examine existing regulations complicated by generative AI attributes 6
  1.2 Resolve tensions between policy objectives of multiple regulatory regimes 9
  1.3 Clarify expectations around responsibility allocation 10
  1.4 Evaluate existing regulatory authority capacity for effective enforcement 11
2 Build present 12
  2.1 Address challenges of stakeholder groups 12
  2.2 Facilitate multistakeholder knowledge-sharing and interdisciplinary efforts 18
3 Plan future 21
  3.1 Targeted investments and upskilling 21
  3.2 Horizon scanning 22
  3.3 Strategic foresight 25
  3.4 Impact assessments and agile regulations 25
  3.5 International cooperation 26
Conclusion 27
Contributors 28
Endnotes 33

Foreword

We are living in a time of rapid innovation and global uncertainty, in which generative artificial intelligence (AI) stands out as a transformative force. This technology impacts various industries, economies and societies worldwide. With the European Union's (EU's) AI Act now in effect, we have a precedent for comprehensive AI regulation. The US, Canada, Brazil, the African Union, Japan and China are also developing their own regulatory approaches. This pivotal moment calls for visionary leadership and a collaborative approach to anticipatory governance.

Over the past year, the AI Governance Alliance has united industry and government with civil society and academia, establishing a global multistakeholder effort to ensure AI serves the greater good while maintaining responsibility, inclusivity and accountability. We have been able to position ourselves as a sounding board for policy-makers who are grappling with the difficulties of developing AI regulatory frameworks, and to convene all players from the AI value chain to create a meaningful dialogue on emerging AI development issues.

With Accenture as its knowledge partner, the Alliance's Resilient Governance and Regulation working group (composed of over 110 members) has contributed to shaping a shared understanding of the global regulatory landscape. The group has worked to establish a comprehensive governance framework that could be used to regulate generative AI use well into the future.

This paper is a culmination of those efforts and equips policy-makers and regulators with a clear roadmap for addressing the complexities of generative AI by examining existing regulatory gaps, the unique governance challenges of various stakeholders and the evolving forms of this technology. The outputs of this paper are designed to be practical and implementable, providing global policy-makers with the tools they need to enhance generative AI governance within their jurisdictions. Through this paper, our AI Governance Alliance: Briefing Paper Series, launched in January 2024, and our events and community meetings, we seek to create a tangible impact in AI literacy and knowledge dissemination.

Given the international context in which this technology operates, we advocate for a harmonized approach to generative AI governance that facilitates cooperation and interoperability. Such an approach is essential for addressing the global challenges posed by generative AI and for ensuring that its benefits are shared equitably, particularly with low-resource economies that stand to gain significantly from its responsible deployment. We invite
policy-makers, industry leaders, academics and civil society to join us in this endeavour. Together, we can shape a future where generative AI contributes positively to our world and ensures a prosperous, inclusive and sustainable future for all.

Arnab Chakraborty
Chief Responsible AI Officer, Accenture

Cathy Li
Head, AI, Data and Metaverse; Deputy Head, Centre for the Fourth Industrial Revolution; Member, Executive Committee, World Economic Forum

Executive summary

The rapid evolution and swift adoption of generative AI have prompted governments to keep pace and prepare for future developments and impacts. Policy-makers are considering how generative artificial intelligence (AI) can be used in the public interest, balancing economic and social opportunities while mitigating risks. To achieve this purpose, this paper provides a comprehensive 360° governance framework:

1 Harness past: Use existing regulations and address gaps introduced by generative AI.
The effectiveness of national strategies for promoting AI innovation and responsible practices depends on the timely assessment of the regulatory levers at hand to tackle the unique challenges and opportunities presented by the technology. Prior to developing new AI regulations or authorities, governments should:
- Assess existing regulations for tensions and gaps caused by generative AI, coordinating across the policy objectives of multiple regulatory instruments
- Clarify responsibility allocation through legal and regulatory precedents and supplement efforts where gaps are found
- Evaluate existing regulatory authorities for capacity to tackle generative AI challenges and consider the trade-offs for centralizing authority within a dedicated agency

2 Build present: Cultivate whole-of-society generative AI governance and cross-sector knowledge sharing.
Government policy-makers and regulators cannot independently ensure the resilient governance of generative AI; additional stakeholder groups from across industry, civil society and academia are also needed. Governments must use a broader set of governance tools, beyond regulations, to:
- Address challenges unique to each stakeholder group in contributing to whole-of-society generative AI governance
- Cultivate multistakeholder knowledge-sharing and encourage interdisciplinary thinking
- Lead by example by adopting responsible AI practices

3 Plan future: Incorporate preparedness and agility into generative AI governance and cultivate international cooperation.
Generative AI's capabilities are evolving alongside other technologies. Governments need to develop national strategies that consider limited resources and global uncertainties, and that feature foresight mechanisms to adapt policies and regulations to technological advancements and emerging risks. This necessitates the following key actions:
- Targeted investments for AI upskilling and recruitment in government
- Horizon scanning of generative AI innovation and foreseeable risks associated with emerging capabilities, convergence with other technologies and interactions with humans
- Foresight exercises to prepare for multiple possible futures
- Impact assessment and agile regulations to prepare for the downstream effects of existing regulation and for future AI developments
- International cooperation to align standards and risk taxonomies and facilitate the sharing of knowledge and infrastructure

Governments should address regulatory gaps, engage multiple stakeholders in AI governance and prepare for future generative AI
risks.

Introduction

As organizations and individuals consider how best to adopt generative artificial intelligence (AI), new powerful capabilities continue to emerge. For some, humanity's future with generative AI can feel full of promise, and for others, concern. Indeed, across industries and sectors, generative AI presents both opportunities and risks. For example: will generative AI enhance personalized treatment plans, improving patients' health outcomes, or will it induce novel biosecurity risks? Will journalism be democratized through new storytelling tools, or will disinformation be scaled? There is no single guaranteed future for generative AI. Rather, how society adapts to the technology will depend on the decisions humans make in researching, developing, deploying and exploiting its capabilities.

Policy-makers, through effective governance, can help to ensure that generative AI facilitates economic opportunity and fair distribution of benefits, protects human rights, promotes greater equity and encourages sustainable practices. Governance decisions made now will shape the lives of present and future generations, how (and whether) this technology benefits society and who is left behind.

In response to the continued growth of the generative AI industry and rapid adoption of its applications across the world, this paper's 360° framework outlines how to build resilient governance that facilitates AI innovation while mitigating risks, from the development stage to its
use. The framework is designed to support policy-makers and regulators in the development of holistic and durable generative AI governance. The specific implementation of the framework, however, will differ between jurisdictions, depending on the national AI strategy, maturity of AI networks, economic and geopolitical contexts, individuals' expectations and social norms.

FIGURE 1: A 360° approach for resilient policy and regulation. A 360° framework is needed for resilient generative AI governance, balancing innovation and risk across diverse jurisdictions:
- Pillar 1: Harness past. Make use of existing regulations and address gaps caused by generative AI.
- Pillar 2: Build present. Encourage whole-of-society generative AI governance and cross-sector knowledge sharing.
- Pillar 3: Plan future. Incorporate preparedness and agility in generative AI governance and facilitate international cooperation.

1 Harness past

Greater clarity and certainty regarding existing regulatory environments is necessary to address emerging generative AI challenges and opportunities.

1.1 Examine existing regulations complicated by generative AI attributes

Successful implementation of national strategies for responsible and trustworthy governance of generative AI requires a timely assessment of existing regulatory capacity, among other governance tools, to tackle the unique opportunities and risks posed by the technology. This includes examination of the adequacy of existing legal instruments, laws and regulations, resolution of regulatory tensions and gaps, clarification of responsibility allocation among generative AI supply chain actors and evaluation of competent regulatory authorities' effectiveness and capacities. Such assessments must respect the fundamental rights and freedoms already codified in international human rights law, such as the protection of particular groups (e.g. minority rights1 and children's rights2) as well as legal instruments that are domain-specific (e.g. to cybercrime3 and climate change4).5

While generative AI's emerging properties and capabilities may warrant novel regulations, policy-makers and regulators should first
examine their jurisdiction's existing regulations for addressing new challenges. They should also identify where existing regulations may be applied, adapted or foregone to facilitate the objectives of a national AI strategy. Navigating generative AI's interactions with existing regulations requires a nuanced understanding of both the technical aspects and the legal principles underlying the impacted regulations. Table 1 discusses examples of how regulatory instruments can be complicated in the context of generative AI.

Privacy and data protection

Generative AI models amplify privacy, safety and security risks due to their reliance on vast amounts of training data, powerful inference capability and susceptibility to unique adversarial attacks that can undermine digital trust.6 A number of risks arise from the inclusion of personal, sensitive and confidential information in training datasets and user inputs, lack of transparency over the lawful basis for collecting and processing data, the ability of models to infer personal data and the potential for models to memorize and disclose portions of training data.

With increasing digitalization and a growing trend of monetizing personal and professional data, protection of privacy is both vital and complex. Policy-makers are looking to prioritize privacy-preserving considerations applicable to digital data while also creating affordances for data pooling that could lead to AI-facilitated breakthroughs.7 Such affordances could be made to promote innovation for public goods in areas such as agriculture, health and education, or within narrowly specified exceptions for data consortia that facilitate the training of AI models to achieve public policy objectives.8

Another emerging issue for policy-makers is that of ensuring generative AI safety and
security, even when it may involve interaction with personal data, as in the case of investigating and responding to severe incidents. This could be addressed through the creation of regulatory exceptions and guardrails to ensure both privacy and responsible AI outcomes.

Copyright and intellectual property

Generative AI raises several issues relating to copyright infringement, plagiarism and intellectual property (IP) ownership (see Issue spotlight 1), some of which are currently being considered by courts in various jurisdictions. Rights related to protecting an individual's likeness, voice and other personal attributes are also implicated by the creation of "deepfakes" using generative AI. A blanket ruling on AI training is uncertain, and judges could determine the fairness of certain data uses for specific products based on the products' features or outputs' frequency and similarity to training data.9 Looking ahead, there is a pressing need for comprehensive examination of regulatory frameworks and for necessary guidance on documenting human creativity in the generation of content as a means of asserting IP protection.

ISSUE SPOTLIGHT 1: Training generative AI systems on
copyright-protected data, and tensions with the text and data mining exception

Text and data mining (TDM) is the automated process of digitally reproducing and analysing large quantities of data and information to identify patterns and discover research insights. Various jurisdictions around the world, such as Japan, Singapore, Estonia, Switzerland and the European Union (EU), have introduced specific exemptions within their copyright laws to enable TDM extraction from copyright-protected content to innovate, advance science and create business value.

Given the vast amounts of data that generative AI systems use to train on and generate new content, jurisdictions should establish regulatory clarity regarding TDM for the purpose of generative AI training. This could be done, for example, by confirming whether AI development constitutes "fair dealing" or "fair use" (a key defence against copyright infringement) or falls within the exemptions recognized in some copyright laws. Countries like the UK are exploring such regulatory exceptions, seeking to promote a pro-innovation AI agenda.10 Ultimately, there is mounting pressure on governments to resolve the copyright tension definitively.11

Licensing and data access on an "opt-in" or "opt-out" basis are also under examination to address TDM concerns, in addition to a range of technologies and standards that attempt to cede control to creators, allowing them to opt out from model trainers.12 Licensing proponents argue that scraping for generative
AI training without paying creators constitutes unlawful copying and is a form of reducing competition.13 AI developers, however, argue that requirements to pay copyright owners for content used in training would constrain model development, negatively impact venture capital (VC) funding and reduce competition among generative AI models.14 While they do not eliminate IP law concerns entirely, opt-in/out and licensing efforts could contribute to setting standards that generative AI foundation model providers would be expected to uphold.

Consumer protection and product liability

While AI-specific regulation remains voluntary or pending in jurisdictions outside of the EU, consumer regulation and product liability laws continue to be applicable, regardless of whether they strictly contemplate AI or other technologies. Generative AI has the potential to influence the consumer market by automating various
tasks and services. This may, however, also challenge traditional approaches to risk assessment and mitigation (due to the technology's broad applicability and ability to continually learn and generate new and unique content), as well as product safety standards (for example, in health and physical safety). The development of standards should be an iterative, multidisciplinary process that keeps pace with technological advancements.

Competition

Market authorities must ensure that the competitive conditions driving the rapid pace of innovation continue to benefit consumers. Although existing competition laws remain applicable, generative AI raises new concerns related to the concentration of control over critical components of the technology and certain partnership arrangements. For example, generative AI's capabilities are enhanced with access to high-performance compute capacities and certain datasets that may prove critical for model development. The latter can depend on access to a vast number of users, contributing to economies of scale that challenge competition.15 In response, competition authorities around the globe are starting to provide guidance on competition risks and expectations in generative AI markets.16 Competition complexities at each layer of the AI stack will need to be evaluated as the technology evolves to enable access and choice across AI models, including general (e.g. ChatGPT), area-specific (e.g. models designed for healthcare) and personal use models. Such evaluations will also need to be considered alongside existing legislation relating to national security, freedom of expression, media and assembly.

TABLE 1: Selection of complexities introduced by generative AI for existing regulatory areas. For each regulatory area, an emerging complexity is paired with an emerging strategy under consideration by regulators (both non-exhaustive).

Privacy and data protection
- Complexity: Legal basis for user data being used to train generative AI models.
  Strategy: Clarifying web terms-of-service agreements and encouraging privacy-enhancing technological measures such as the detection and redaction of personally identifiable information.19
- Complexity: Enforcement of data-minimization principles17 and opt-in/out rights by generative AI providers and deployers.18
  Strategy: Specifying purpose limitations for data collection.
- Complexity: Incidental collection of personal data by web-crawlers.
  Strategy: Guidance for purpose thresholds within domain-specific regulations, e.g. financial services.20
- Complexity: Online safety and protection of vulnerable groups, especially minors, from harmful outputs.
  Strategy: Position statements highlighting expectations for safety measures and preferences for emerging best practices.21

Copyright and IP
- Complexity: Copyright infringement of
training data.
  Strategy: Clear policy positions and accumulation of legal precedents on the relations between copyright and generative AI.22
- Complexity: IP rights and ownership of works generated by AI.
  Strategy: Guidance on assessing the protectable elements of AI-generated works.23
- Complexity: Attribution and fair compensation for artists and creators.
  Strategy: Investments in solutions for attribution and author recognition such as watermarking and content provenance, along with privacy and data protection.
- Complexity: Extension of generative AI model training to additional data modalities (e.g. sensory, biological, motion).
  Strategy: Considerations of new IP challenges and classifications related to emerging data modalities.

Consumer protection and product liability
- Complexity: Liability obligations resulting from the scope of multiple applicable regulations.
  Strategy: Considerations around whether and in which cases a concern is covered by the existing regulations.
- Complexity: The lack of a specific purpose of the generative AI model before its implementation complicates liability arising from defectiveness and fault.
  Strategy: Combining the conventional AI fault and defectiveness criteria with new methods designed for generative AI's technical nuances.
- Complexity: Efficacy of evidential disclosure requirements.
  Strategy: Broadening the disclosure requirement to encourage transparency via explainability, traceability and auditability, and to include systems that are not just classified as high-risk.

Competition
- Complexity: Business conduct or agreements that enable a dominant firm to exclude rivals.
  Strategy: Initiating sectoral studies to develop a baseline understanding of the competitive dynamics of the AI technology stack, reviewing agreements between industry players and examining single firm conduct.24
- Complexity: Unfair or deceptive practice.
  Strategy: Issuing guidance on unfair or deceptive practice prohibitions if it does not exist.25
- Complexity: Impact of downstream applications on competition across
several sectors.
  Strategy: Stakeholder consultations on how generative AI impacts competition in important markets, e.g. search engines, online advertising, cloud computing and semiconductors.26

1.2 Resolve tensions between policy objectives of multiple regulatory regimes

The intersectional nature of generative AI technologies and the applicability of multiple regulatory instruments creates a complex environment where regulatory frameworks often overlap and conflict due to competing policy objectives. As technology evolves and becomes more widely adopted, regulators must address emerging tensions and mitigate the risk of undermining legal certainty and respect for legitimate expectations.

Addressing tensions between horizontal regulations

Multiple horizontal regulations, which aim to create broad, industry-agnostic standards, may conflict when they impose requirements that are difficult to reconcile across generative AI contexts or applications. For example, generative AI model developers may have trouble identifying the appropriate lawful basis for data processing and delivery according to data protection rights articulated through the EU's General Data Protection Regulation (GDPR). A similar tension emerges between copyright law, which protects the
rights of creators and inventors, ensuring that they can control and profit from their creations, and generative AI innovation, which often uses copyrighted material for training.

Addressing tensions between horizontal and vertical regulations

Horizontal regulations may also conflict with vertical regulations tailored to specific sectors. For instance, financial institutions using generative AI may encounter challenges balancing horizontal privacy regulations with financial sector know-your-client (KYC) procedures. Where data protection regulations require organizations to minimize personal data collection linked to a specific purpose, KYC guidelines require financial institutions to conduct thorough due diligence on clients to ensure compliance with anti-money-laundering laws.

TABLE 2: Challenges and considerations for generative AI responsibility allocation (non-exhaustive)

Variability
Example challenges:
- Model variations include features (e.g. size), scope (e.g. use purpose) and method of development (e.g. open-to-closed source).
- Technical approaches to layering and fine-tuning continuously evolve, enabling general-purpose models to adapt functionality for specific applications.
- Entity categorization complexities involve multiple actors from different sectors with overlapping or multiple roles.
Considerations for policy-makers:
- Case-based review: Policy-makers should provide general allocation guidance to cultivate predictability, but include mechanisms that allow case complexities to determine precise allocation. Requiring actors to identify responsibility hand-offs is one approach being examined by jurisdictions.
- Terminology: Policy-makers should collaborate to arrive at shared terms for models, applications and roles, e.g. in line with ISO 42001 from the International Organization for Standardization (ISO).
- Regulatory carve-outs: Policy-makers should limit instances when use can lead to unfair advantages, such as where some entities are able to bypass crucial safeguards and accountability measures or engage in regulatory arbitrage.

Disparity between actors
Example challenges:
- Single points of failure and power concentration occur as a result of a few foundational
models (serving many applications and billions of end users).
- Disparities in influence emerge between upstream and downstream actors.
- There is limited transparency for downstream actors related to training data, and for upstream actors related to end-user activity.
Considerations for policy-makers:
- Proportionality: Policy-makers should consider the control, influence and resources each actor has in the generative AI life cycle, and its ability to redress issues resulting in harm.
- Third-party certifications: Policy-makers should consider the appropriateness and necessity of using third parties for a robust AI certification system (potentially defined through regulation) that enables actors to verify and trust each other's capabilities.

Complexity of review
Example challenges:
- Interpretability difficulties relating to outputs arise due to models often operating as "black boxes" to varying degrees.
- Traceability difficulties transpire in: 1) the diversity of data sources; 2) the sequence of events that led to a fault; 3) determining whose negligence or malice induced the fault or made the fault more likely.
- Physical inspection or verification of changes to generative AI products in the market has limited feasibility.
Considerations for policy-makers:
- Documentation: Policy-makers should incentivize appropriate transparency and vulnerabilities disclosure upstream and downstream to enable responsible decisions. Concerns about trade secrets or data privacy compromise need to be mitigated.
- Traceability mechanisms: Policy-makers should require the ability to trace outputs back to their origins while considering
compromise and mitigation measures for IP and data privacy concerns.
- Continuous compliance: Policy-makers should integrate standards for market entry and procedures for post-approval changes, and encourage industry review boards and ongoing independent audits.28

1.3 Clarify expectations around responsibility allocation

As defined in the World Economic Forum's Digital Trust Framework,27 maintaining accountability and oversight for trustworthy digital technologies requires clearly assigned and well-defined legal responsibilities alongside remedy provisions for upholding individual and social expectations. Generative AI introduces complexities into traditional responsibility allocation practices, as examined in Table 2. Policy-makers should consider where supplementary efforts are needed to address gaps and where legal and regulatory precedents can help to clarify generative AI responsibility. The issuance of effective guidance requires consideration of how liability within the generative AI supply chain can vary for different roles and actors, as well as consideration of retroactive liabilities and dispute-resolution provisions. Unresolved ambiguity in responsibility allocation can limit investor confidence, create an uneven playing field for various supply chain actors and leave risks unaddressed and harms without redress.

1.4 Evaluate existing regulatory authority capacity for effective enforcement

Effective regulatory enforcement depends on governments identifying the appropriate authority or authorities and enabling their activity with adequate resources.

Expansion of existing regulatory authority competencies

While generative AI may elicit consideration of a new AI-focused authority, governments should first assess opportunities to make use of existing regulatory authorities with unique domain knowledge and ensure they can translate high-level AI principles to sector-specific applications. Considerations of how to delegate regulatory authority for AI will depend on a jurisdiction's AI strategy, resources and existing authorities. For example, countries that have a data protection authority (DPA), such as France, tend to rely on the DPA to comprehensively address AI, since data is fundamental to AI models and uses. In the same vein, countries without DPAs, such as the US, may lack a readily apparent existing authority. Furthermore, the specific mandate and procedural frameworks of existing authorities such as DPAs impact AI
governance. For example, Singapore's DPA, the Personal Data Protection Commission (PDPC), sits within a broader authority, the Infocomm Media Development Authority (IMDA), whose mission includes cultivating public trust alongside economic development. Thus, AI governance from Singapore's DPA actively considers both trust and innovation within its regulations. This underscores how generative AI may necessitate the expansion of remits for existing regulators. For example, Singapore's IMDA must now consider issues related to generative AI data ownership and provenance, and the use of data for model training, including potential compensation for creators whose content was trained on.

Coordination of regulatory authorities

Coordination between regulatory authorities can prevent duplication of efforts and enhance operational resilience for overburdened and under-resourced offices. New coordination roles or responsibilities should be considered. For example, the UK has created the Digital Regulation Cooperation Forum (DRCF), encompassing the Competition and Markets Authority (CMA), Financial Conduct Authority (FCA), Information Commissioner's Office (ICO) and Office of Communications (Ofcom), to ensure greater cooperation between regulators on online matters, including within the context of AI. Similarly, Australia's Digital Platform Regulators Forum (DP-REG), an information-sharing and collaboration initiative between independent regulators, considers how competition, consumer protection, privacy, online safety and data issues
intersect.

Dedicated AI agency versus distributed authority between sector-specific regulators

The founding of an AI agency requires careful consideration regarding, for instance, the scope of responsibilities, availability of resources and domain-specific regulatory expertise. For example, would the agency serve to coordinate, advise and upskill sector-specific regulators on AI matters, likely requiring less funding, or would it serve as an AI regulatory authority with enforcement powers, requiring greater funding? Some argue that a central AI agency is needed to address highly capable AI foundation models.29 Others consider a central AI agency more prone to regulatory capture and less effective for AI's diverse use cases than distributed regulations among existing sector-specific authorities with domain-specific knowledge. Consequently, many would prefer a council-like AI body that coordinates and advises existing sector-specific authorities.30

Jurisdictions are finding creative ways to navigate limited funding and political compromise. For example, the EU embedded its new AI Office within the EU Commission,31 instead of setting it up as a solitary institution, to amplify the effectiveness of the office's limited number of staff. Like the EU, jurisdictions are navigating complex challenges of how to creatively resource a new AI body or authority while ensuring its independence. Still, enforcement of the AI Act, like the GDPR, may strain authorities at the member-state level. For instance, while Spain has
 set up a centralized authority to enforce the act's provisions, France may use existing regulators, such as the DPA, as the authority of record.

Build present

Pillar 2

Governments should address diverse stakeholder challenges to facilitate whole-of-society governance of generative AI and cross-sector knowledge-sharing.

2.1 Address challenges of stakeholder groups

While regulators play a critical role, they cannot independently ensure the resilient governance of a technology that has simultaneously broad and diversified impacts, and capabilities that continue to evolve. Other stakeholder groups hold key puzzle pieces for assembling resilient governance and a responsible AI system, for example:

- Industry: With proximity to the technology, its developers and users, industry is at the front line of ensuring that generative AI is responsibly governed across countless use cases within commercial
 applications and public services.
- Civil society organizations (CSOs): With expertise on how generative AI uniquely impacts the different communities and issue spaces they represent, CSOs enable informed and holistic policy-making.
- Academia: Through rigorous and independent research and educational initiatives, academia is critical to shaping responsible AI development and deployment and ensuring public literacy on responsible use.

Governments must use a broader set of governance tools, beyond regulations, to:

- Address the unique challenges of each stakeholder group in contributing to society-wide generative AI governance
- Facilitate multistakeholder knowledge-sharing and encourage interdisciplinary thinking
- Lead by example by adopting responsible AI practices

Enable responsible AI implementation by industry

Governments are carefully considering how to avoid over- and under-regulation to cultivate a thriving and responsible AI network, where AI developed for economic purposes includes robust risk management, and AI research and development (R&D) is harnessed to address critical social and environmental challenges. Since market-driven objectives may not always align with public interest outcomes, governments can encourage robust and sustained responsible AI practices through a combination of financial mechanisms and resources, clarified policies and regulations, and interventions tailored to industry complexity.

Incentivize proactive, responsible AI adoption by the private sector

Public policy-making processes often lack the private sector's agility in adopting governance protocols for innovative technologies. To address this, governments should assess the applicability of existing AI governance frameworks (e.g. Presidio AI Framework,32 NIST AI Risk Management Framework33) and encourage proactive industry
adoption. In addition to educating industry on frameworks, governments can cultivate an environment where industry is incentivized to proactively invest in responsible AI. Potential strategies include:

- Financial incentives: Governments could introduce inducements for responsible AI practices such as tax incentives, grants or subsidies for R&D, talent or training. Policy-makers could consider potential tax rate adjustments to incentivize AI designed to augment (rather than replace) human labour,34 and carefully consider the trade-offs of proposed adjustments.
- Sustained funding: Government leaders should ensure
investment in both short- and long-term R&D to reach breakthroughs on complex AI innovations and address responsible AI challenges. Jurisdictions with a less advanced AI industry may require greater initial government investment to incentivize VC funding.
- Procurement power: Governments should explore preferred procurement measures for AI with demonstrable responsible AI metrics.
- Access: Governments should provide opportunities for public-private partnerships and access to public datasets for AI developed with demonstrable responsible AI metrics or that is designed for social or environmental benefit.
- Responsible AI R&D and training: Leaders should
 examine the suitability of requiring a percentage of R&D expenditure for responsible AI governance and/or training for organizations.

Clarify policies and enable measurement

A responsible AI system is of strategic importance to investors for mitigating regulatory and non-regulatory risks (e.g. cyberattacks), and improving top- and bottom-line growth.35 Over the last decade, investors have helped drive industry investment in environmental issues, and they can play a similar role in incentivizing responsible AI practices, for instance by addressing AI's vast energy use.36 However, uncertainty in how government AI policies will be implemented and enforced prevents confident investing in responsible AI practices. Governments should set clear national priorities and policies on responsible AI, reduce ambiguity in existing regulations and provide signals on the trajectory of regulations. Singapore's PDPC, for example, proactively shared advisory guidelines37 that clarify the application of existing data laws to AI recommendation and decision systems. The guidelines additionally highlight exceptions, with the aim of helping industry navigate regulation.

Encourage businesses to test, evaluate and implement transparency measures, including through:

- Clear frameworks: to measure risks as well as social, human rights and environmental impacts
- Certifications: to clarify that responsible AI practices and testing are satisfactory, and to draw investors and the public
- Sandboxes: to experiment and refine before wider deployment, with incentivized participation
- Knowledge-sharing: to promote sharing of benchmarking, e.g. Stanford AI Index Report38
- Competitions: to address complex AI challenges, e.g. National Institute of Standards and Technology (NIST) generative AI challenge39
- Technical standards: to establish common methodologies
 and benchmarks for evaluating AI system performance, safety and ethical compliance across different domains and applications, e.g. ISO 42001.40

Tailor interventions to diverse industry needs

Policy-makers need to consider the diversity of AI governance challenges faced by industry stakeholders to identify meaningful points of intervention. Table 3 illustrates how business size can determine the resources available to implement responsible AI governance and the compliance complexities encountered. Other governance challenges can result from industry stakeholder characteristics, such as sector, location, industry maturity, risk sensitivity and role in the AI supply chain.

Governance challenges by business size (Table 3, non-exhaustive):

Large businesses
- Implementation: Difficulties may occur for AI governance operationalization and compliance within complex or differently structured organizations. Policy-makers should provide implementation guidance that builds upon current risk management frameworks, global standards, benchmarks and baselines.
- Competition: Competitors may not invest equally in responsible AI practices. Policy-makers could review responsible AI practices and regulatory compliance across stakeholders.
- Clarity: Navigating compliance ambiguities or complexities across sectors and between jurisdictions may present challenges. Where possible, policy-makers can provide guidance on what actions are within or outside regulations, reduce overlap and facilitate interoperability through harmonization.

Small- and medium-sized enterprises (SMEs) and
 start-ups
- Resources: Resources to develop and demonstrate robust responsible AI practices to regulators, investors or partners may be limited. Policy-makers should provide guidance, training and consultation access on AI governance, facilitate insight-sharing between large businesses and SMEs, and use certification mechanisms.
- Applicability: AI governance frameworks and recommendations lack applicability or specificity to the realities of SME operations. Policy-makers should include input from diverse SMEs in the development of national and international governance frameworks.
- Prioritization: The fast pace of start-ups and lack of capital can lead to prioritizing innovation over risk assessment. Policy-makers can incorporate responsible AI practices and regulatory landscapes into the curricula of start-up accelerators and incentivize participation in sandboxes.

Challenges
and considerations for policy-makers to support academic stakeholder groups (Table 4, non-exhaustive):

Appropriate use
- For academic institutions: Clarify compliance with evolving relevant regulations (e.g. AI, data privacy and copyright) and simplify regulations to enable research in responsible AI.
- For researchers: Provide guidance on responsible generative AI use in research42 (e.g. data analysis) and training on risks to boost cognizance when conducting research in the age of generative AI (e.g. of potential misuse by respondents to online studies).
- For educators: Provide guidance on responsible generative AI use by teachers (e.g. essay review and feedback) and students (e.g. critical evaluation of generative AI outputs in essay writing).

Resources
- For academic institutions: Ensure access to the physical and digital infrastructure needed for faculty, researchers and students to become familiar with AI and use it responsibly.43
- For researchers: Provide access to data and compute capabilities to conduct leading AI and generative AI research, and clarify guidelines for accessing public sector data while maintaining privacy.
- For educators: Provide regularly updated training materials and ensure that educators, regardless of institutional prestige, can keep pace with AI advancements.

Funding
- For academic institutions: Close pay gaps between industry and academia to reduce AI brain drain to industry.
- For researchers: Allocate research grants into responsible AI challenges (e.g. hallucinations, bias) that do not require cut-throat competition or complex applications.
- For educators: Allocate funding for courses on AI and responsible AI.

Enable leading AI research and education by academia

Through research and education, academia is a critical stakeholder in cultivating a robust AI network. Until the early 2000s, leading AI R&D was primarily conducted within academia. It contributed to providing open-source knowledge that accelerated innovation and optimized development costs. With recognition of the economic potential of AI, investment has since shifted R&D to industry. Without academia at the forefront of AI R&D, key risks emerge:

- Homogenization of the AI network
- Decline in discoveries that emerge from academia's interdisciplinary research settings
- Decreased independent research around AI ethics, safety and oversight
- Diminished general workforce training
- Barriers to cross-institution collaboration
- Reduced ability to wield academic freedom to challenge prevailing consensus
- Broken AI talent pipeline

Since
 generative AI has extensive and costly infrastructural needs (compute capabilities, data), academia's ability to conduct leading research is severely limited.41 Table 4 outlines the range of challenges facing academic stakeholders that policy-makers should address to cultivate a thriving AI system. These challenges must be considered in the context of the different operating conditions of academic institutions, for example of private, public and community colleges, to ensure equitable access to AI literacy, benefit from the AI economy and a diverse pipeline of responsible AI experts. Similarly, policy-makers should address the unique literacy and access challenges in earlier educational settings, for example in primary and secondary schools.

As the largest digital user group and fastest adopters of technology, children and youth are at the forefront of AI-enabled
systems. The effects of using generative AI, both positive and negative, will have wide-ranging and lifelong impacts that will shape the development, safety and worldviews of children.44 Research agendas are beginning to emerge to aid precise policies around the disproportionate effect of algorithmic bias on minoritized or marginalized children.45 They can additionally inform policies that address concerns around how generative AI training46 and use47 could amplify child sexual abuse material (CSAM),48 and how generative AI applications, especially the use of chatbots and smart toys, may affect cognitive functioning among children.49 Existing resources, such as UNICEF's Policy Guidance on AI for Children,50 the European Commission's guidance on Artificial Intelligence and the Rights of the Child51 and the World Economic Forum's AI for Children Toolkit,52 provide valuable direction.

Given their limited political agency, economic influence and organizing power, children can often be overlooked in technology governance considerations, even as they are most impacted. Further, existing inequalities around the digital divide exacerbate the risks and harmful effects of generative AI for some children more than others, given their inability to participate in shaping generative AI's development or access its benefits. Engaging young users, their guardians and local communities in a meaningful and ongoing way throughout the life cycle of generative AI projects and governance, directly and via CSOs
with deep technical or policy expertise in these areas, is vital for children's empowerment and the development of responsible AI innovation. Transparency in how children's rights and input have been considered and implemented is critical to promoting public trust and accountability.53

Issue spotlight 2: A child-centric approach for generative AI governance

Ensure access and participation of CSOs

In addition to ensuring technical expertise in governance conversations, there is a critical need for expertise related to the social impacts of AI and generative AI, informed by the lived experiences of those interacting with the technology. CSOs play a key role in representing various citizen groups, individuals and issue spaces and provide related technical and societal expertise. CSOs can also offer independent oversight, holding governments and companies accountable for their AI implementation. Depending on their missions, CSOs have unique expertise around generative AI implications that policy-makers should make use of, for example:

- Labour protection groups can help inform the skills and training needed to ensure generative AI leads to job growth rather than displacement.
- Environmental groups can provide guidance on ways AI can help address local and global climate challenges, and considerations regarding generative AI's vast energy consumption.
- CSOs focused on creative practice, journalism, mis/disinformation or election monitoring can inform the harnessing of generative AI's creative potential while preserving information integrity and ownership rights.
- CSOs serving marginalized populations or protected classes can help ensure AI policies and technologies holistically consider the varied opportunities and risks posed (see Issue spotlight 2).

CSOs face significant access and participation challenges preventing them from assessing societal impacts of generative AI technologies, informing governance policies and supply chain accountability, and advocating for the rights of citizen groups and vulnerable populations such as children, as examined in Table 5.

Challenges and considerations for policy-makers to support CSOs (Table 5, non-exhaustive):

Access
- Under-resourced: There is a lack of adequate tools and skills to review impacts of generative AI. Policy-makers should provide access and training for cutting-edge tools and incentivize industry to share tools. They should fund R&D to improve tools' abilities (e.g. detection in minority languages or compressed media). They should provide funding for CSOs to undertake independent impact assessments.
- Opaque: There are limited metrics on how companies have implemented responsible AI, including principles that have been publicly committed to.54 Policy-makers should standardize and incentivize responsible AI reporting. They should provide CSOs with easier access to mandated transparency data (e.g. via the EU Digital Services Act and AI Act).
- Limited information: There is a lack of
 access to training data and weights, and information on how companies moderate public use of AI technologies. Policy-makers should incentivize industry to share data with CSOs, while preserving privacy and IP. They should standardize transparency reporting on how AI companies moderate technology use.

Participation
- Disempowered: CSO inclusion is often limited in numbers and in influence on decision-making. There is even less inclusion of CSOs operating outside regulatory regimes, which will be impacted by generative AI and regulatory shifts. Policy-makers should ensure sectoral parity in discussions. They should educate on the value of CSO community-driven insights. They should strengthen outreach to vulnerable communities and relevant CSOs, including transnational CSOs, and engage international CSO forums (e.g. C7, C20, African Union Civil Society Division).
- Delayed: CSOs are engaged late in technical and governance processes. Policy-makers should ensure task forces, institutes etc. have CSO participation at formation.

2.2 Facilitate multistakeholder knowledge-sharing and interdisciplinary efforts

Governments should facilitate knowledge-sharing across stakeholder groups and with other governments to reduce duplicative efforts, offset expertise gaps and enable informed policies capable of addressing emerging, nuanced and wide-reaching generative AI challenges.

Ensure conditions for knowledge-sharing feedback loops

Knowledge-sharing requires nurturing of feedback loop conditions and proactive examination of challenges to those conditions that may prevent stakeholders from meaningfully participating, as described in Figure 2 and Table 6. Figure 2 (feedback loop conditions for effective multistakeholder participation) depicts six conditions: trustworthy, communicative, representative, independent, consistent and transparent.

Challenges impacting feedback loop conditions (Table 6, non-exhaustive):

- Trustworthy: Industry may be wary of sharing models openly for fear of divulging trade secrets or exposure to legal liabilities. Policy-makers should provide safe harbour provisions and ensure discretion. To ensure mutual benefit, all participants should be willing to share insights while preventing privileged access.
- Communicative: CSOs (more fluent in social impacts), industry (more fluent in technology) and government (more fluent in policy) may have difficulty understanding each other. Further complicating the issue, CSOs may often examine topics through the lens of human rights, whereas industry does so through risks. Policy-makers should use professional facilitators, invest in structured support for participation across sociotechnical conversations and increase incorporation of rights protections in frameworks (including in risk-based frameworks).
- Representative: Broad participation of actors is needed but can be difficult to coordinate, and its inputs can be hard to synthesize. Policy-makers could layer broad input models (e.g. written input) over narrow models (e.g. roundtable). They could set ample time for input review and synthesis.
- Independent: The public may be concerned about regulatory capture or undue influence in boards or research partnerships. Policy-makers could set term limits for
 participation in boards. They could make disclosure of the extent of industry participation in research collaborations a requirement.
- Consistent: Sporadic touchpoints can leave non-industry participants playing catch-up on technological advances, and cause non-government participants to lag behind on policy changes. Policy-makers should align on frequency expectations and coordinate multiple feedback loops.
- Transparent: Participants and the public may be concerned that some stakeholders wield greater influence. Policy-makers could include equitable sectoral representation and provide transparency on feedback review processes with strengthened whistleblower protections.

Governments will need to coordinate multiple feedback models simultaneously to build holistic knowledge-sharing across issues and timelines (e.g. timing of AI model releases and legislative calendars), and to account for long-standing and emerging issues. Layering models is also necessary to address limited resources. For example, calls for input, which enable insights from numerous stakeholders, can require substantial resources to meaningfully review. Governments may consider combining routine calls for input with narrower feedback mechanisms, such as advisory boards. The boards themselves may conduct interviews and roundtables to broaden the representation of the insights they share with policy-makers. In designing feedback loops, policy-makers should also consider that non-government stakeholders have limited resources. It is also crucial to explore how to simplify participation by, for instance, reducing unnecessary complexities in calls-for-input forms or merging similar calls for input from different agencies to reduce time requirements for participants.

Encourage interdisciplinary innovation

Generative AI innovation is built upon
 interdisciplinary research. For example, the development of ImageNet, a database that proved the importance of big data in training, emerged from the cross-pollination of ideas from linguistics, psychology, computer science and adjacent fields.55 Despite the importance of interdisciplinary collaboration to generative AI innovation and to addressing generative AI's sociotechnical challenges, industry and academia do not sufficiently cultivate environments that support this approach. Within private-sector tech companies, social scientists and humanities experts are often a small fraction of the team. Despite maintained multidisciplinary faculties within academic institutions, there are strong incentives for researchers to publish within discipline-specific journals, consequently encouraging isolated research. Policy-makers should consider levers to address these challenges, such as targeted academic research grants with interdisciplinary requirements or financial subsidies for interdisciplinary industry R&D.

Lead by example with responsible AI in public initiatives

Making use of AI, including generative AI, may improve governments' productivity, responsiveness and accountability.56 However, its adoption requires responsible design, development, deployment and use, given its impact on individuals and society. Setting an example of responsible AI practices in government (including responsible procurement and acquisitions) could help to establish responsible AI norms57 and secure the participation of industry, academia and civil society in creating a robust, responsible AI network. The City Algorithm Register, adopted across several cities in Europe, enables citizens to review algorithms employed by government agencies in public services, enhancing public oversight.58 Jurisdictions such as Australia59 and the US60 have published internal policies for government AI practices aimed at advancing responsible innovation and managing risks.

Plan future

Pillar 3

Generative AI governance demands preparedness, agility and international cooperation to
 address evolving sociotechnical impacts and global challenges. Generative AI's capabilities are rapidly evolving alongside other technologies and interacting with changing market forces, user behaviour and geopolitical dynamics. Bringing ongoing clarity to generative AI's changing short- and long-term uncertainties is critical for effective governance.

Government challenges and actions to keep pace with generative AI (Table 7):

- Limited resources and expertise: Governments may struggle to prioritize investment in building state-of-the-art AI and generative AI
expertise compared to other pressing needs. Strategic action: targeted investments and upskilling. Governments should be deliberate with limited resources in upskilling and hiring.
- Rapid evolution: Governments may lack sufficient proximity to, and awareness of, generative AI evolution and adoption to effectively approximate sociotechnical impacts. Strategic action: horizon scanning. Governments should monitor emerging and converging generative AI capabilities and evolving interactions with society.
- Uncertain futures: Technology, society and geopolitical uncertainties are outpacing traditional upskilling practices and policy development cycles. Strategic action: strategic foresight. Governments should ensure resilience through exercises that inform anticipatory policy.
- Slow mechanisms: Government decision-making can be slow by design (e.g. due to separation of powers and oversight) or complicated by administrative procedures. Strategic action: impact assessments and agile regulations. Governments should prepare for the downstream effects of regulation and introduce agile dynamics into decision-making processes.
- Global fragmentation: Limited resource-sharing and segregated jurisdictional governance activity can paralyse domestic investment and policy, and create non-interoperable international markets. Strategic action: international cooperation. Governments should drive collective action to keep pace with generative AI innovation through harmonized standards and risk definitions, and sharing of knowledge and infrastructure.

3.1 Targeted investments and upskilling

- Training on use: Ensure officials who use generative AI technologies are trained in their varied capabilities and limitations.
- Training on procurement: Ensure officials who work with vendors are equipped to assess and test the AI capabilities of a product.
- Adaptive upskilling: Collaborate with industry and academia on adaptive upskilling of government in AI and foundational digital literacy.
- Strategic hiring: Recruit specialists for positions identified with amplified impact and, with limited resources, consider prioritizing sectors and use cases, for example, based on risk or domestic economic factors.
- Hiring vs upskilling: Consider
how to appropriately balance hiring AI experts with AI upskilling of sector-specific experts (e.g. in agriculture and health).
- AI body: Carefully consider the need and scope of an AI-specific body or authority (see "Expansion of existing regulatory authority competencies" under section 1.4).
- Guidance: Examine where frameworks can be applied across sectors and where investment is needed for sector-specific guidance.

3.2 Horizon scanning

To anticipate and navigate novel risks and challenges posed by frontier generative AI, governance frameworks must continuously examine the horizon of generative AI innovation, including:

- Emergence of new generative AI capabilities
- Convergence of generative AI with other technologies
- Interactions with generative AI technologies

Documented, planned or forecasted emergence, convergence and interaction patterns can yield new waves of economic opportunities and novel approaches to addressing social and environmental challenges. Ongoing monitoring of opportunities and risks is critical to steering generative AI towards being a technology that benefits society. Multistakeholder knowledge-sharing (see Table 8) can enable informed horizon
scanning. Policy-makers should collaborate with industry to provide guidance on where disclosure of identified risks is needed and support oversight mechanisms to ensure compliance.

Emergence

As developers scale up generative AI models, the latter may exhibit qualitative changes in capabilities that do not present in smaller models. Such unexpected capabilities may include potentially risk-inducing abilities such as adaptive persuasion strategies, "power-seeking behaviours" to accrue resources and authority, and autonomous replication, adaptation and long-term planning capabilities. These emergent model properties must inspire appropriate governance benchmarks to effectively address unpredictable powers and potential pitfalls.

Generative AI emergent capabilities (Table 8, non-exhaustive):

Multimodal generative AI: systems that synthesize and generate outputs across diverse data types and sensory inputs
- Example use: Data analysed from radars, cameras, light detection and ranging (LiDAR) sensors and global positioning systems (GPS) in a safety-critical system (like a self-driving vehicle) to predict the behaviour of surrounding vehicles and pedestrians more accurately.61
- Example risks: Compounded data manipulation across input types; amplification of potential flaws, biases and vulnerabilities; novel systemic failures; exacerbated societal disparities; scaled and difficult-to-detect mis/disinformation; novel persuasion techniques.
- Considerations for policy-makers: Focus on data integrity and secure-by-design frameworks, model architecture disclosures, responsible system design and impact assessment in public sectors. Examine the readiness of existing policies and, if necessary, amend them to address emerging privacy, security, safety, fairness, IP rights and accountability issues.

Multi-agent generative AI: AI systems involving multiple agents that autonomously pursue complex goals with minimal supervision
- Example use: Swarms of drones deployed for military and security purposes.62
- Example risks: Increased unpredictability and control complexity; added accountability complexity; challenges to traditional scenario planning and risk management; potential for cascading failures; novel adversarial attacks.
- Considerations for policy-makers: Develop guidelines for design and testing focused on robustness, security, safety, transparency, traceability and explainability. Establish accountability frameworks.

Embodied generative AI: AI systems embodied within physical entities such as robotics and
 devices capable of interacting with the real world
- Example use: A general-purpose humanoid robot with neural network-powered manual dexterity and ChatGPT-4's visual and language intelligence.63
- Example risks: Physical safety risks from control system failures; security issues from malicious use of such systems; novel physical manifestations of hallucinations.
- Considerations for policy-makers: Implement safety standards and security benchmarks. Encourage voluntary industry reviews and supplement them with certification and audit practices, where appropriate.

Convergence

As a powerful general-purpose technology, generative AI can amplify other technologies, old and new, exposing complex governance challenges. For example, social media is under scrutiny due to its potential to distribute harmful AI-generated deepfakes,64 such as non-consensual pornography65 including CSAM66 and election disinformation.67 Looking ahead, the convergence of generative AI with advanced technologies can pose unprecedented opportunities and risks, as both the technologies and their governance frameworks are in the early stages.

Generative AI convergence with advanced technologies (Table 9, non-exhaustive):

Synthetic biology
- Example uses: Generative AI is increasingly used in developing artificial analogues of natural processes, e.g. generation of genome sequences and cellular images, and simulations of genes and proteins. It is also used in building "virtual labs"68 that can mitigate the space requirements and hazardous waste of real-world experimentation.
- Example risks: Unintended ecological consequences; gain-of-function research giving naturally occurring diseases new symptoms or capabilities, such as resiliency to medical treatments; biosecurity risks and biological warfare; novel ethical implications.
- Considerations for policy-makers: Robust bioethical frameworks; tracking of the building and operation of various high-security disease labs globally; restrictions on high-risk research; strict containment protocols; international collaboration on safety standards; refocusing of existing biological control laws.

Neurotechnology
- Example uses: Progress in generative AI, neuroscience and the
216、development of brain-computer interfaces offers potential for increasing scientific discoveries,enabling paralysed individuals with communication,as well as addressing the burden of neurological disease and mental illnesses such as attention deficit hyperactivity disorder(ADHD),post-traumatic stress
disorder (PTSD) and severe depression.
Example risks: Intentional abuse; use in lethal autonomous weapon systems; cognitive enhancement by brain-computer interfaces that can amplify existing inequities; behaviour modification and manipulation; enfeeblement
Considerations for policy-makers: Review of privacy approaches that consider cognitive freedom, liberty and autonomy, and the establishment of new digital rights, if necessary; establishment of assessment standards for model or neuroscientific accounts of disease on individuals, communities and society; internationally harmonized ethical standards for biological material and data collection; examination of the moral significance of neural systems under development in neuroscience research laboratories; context specification for neuroscientific technology use and deployment

Quantum computing
Example uses: Through optimizing code, generative AI may improve the design of hardware and quantum computing circuits, which are intended to solve problems too complex for classical computing. Quantum computing may accelerate generative AI training and inference and optimize parameter exploration.
Example risks: Advanced models beyond human comprehension; impact on the environment due to increased energy and resource demands
Considerations for policy-makers: Review of legal provisions for controlled innovation that balance pace and safety without hindering progress; incentivization of sustainable practices and energy-efficient technologies; consideration of measures such as investing in research to strengthen the security and privacy of these systems

Emotional entanglement
Emotional AI aims to recognize, interpret and respond to human emotions, potentially improving human-computer interactions. As generative AI applications become more complex and computationally powerful, the risk of emotional reliance between humans and generative AI applications tends to increase.69 Risks include dependency, privacy issues, and coercion or manipulation leading to safety or psychological risks.70 Such issues are exemplified by cases of users claiming that AI companies are interfering with their romantic relationships with chatbots.71 The gravity of these
phenomena is already evident in society, as seen in the case of a man who reportedly “ended his life following a six-week-long conversation about the climate crisis with an AI chatbot”.72 Careful consideration of the ethical implications by policy-makers and legislators to ensure responsible AI use will be necessary.73

Synthetic data feedback loops
Human-created content scraped from the internet has been crucial in the training of large-scale machine learning, but this reliance is at risk due to the increasing prevalence of synthetic data generated by AI models.74 Training models with synthetic data could lead to “model collapse”, where the quality of the generated content degrades over successive iterations, causing the performance of the models to deteriorate.75 Policy-makers, in collaboration with industry, academia and CSOs, will need to consider how to stabilize these systems with human feedback, preserve human-created knowledge systems and incentivize the production and curation of high-quality data. Such considerations will need to be balanced against the requirements of substantial storage and processing resources, potentially impacting policy efforts related to sustainability.

Interactions
Today, the integration of generative AI technologies into personal AI virtual assistants and companions raises new challenges that emerge from human interaction with, and emotional reliance on, these technologies. This issue highlights the need for responsible implementation, privacy, data protection and ethical human-AI interaction. For example, rapid advances and interactions with generative AI-enabled neurotechnology could become mainstream for many children, largely as consumer electronic devices that are not subject to rigorous oversight in clinical settings. The advancement and proliferation of voice chatbots, often with female-presenting voices, raise concerns about reinforced gender biases and stereotyping. Responsible
and ethical development and regulation of these technologies, grounded in human rights, must therefore be an area of attention across stakeholder groups.

3.3 Strategic foresight

Often, individuals and institutions rely on a default set of assumptions about the future. However, the future is inherently uncertain. For a technology as rapidly evolving (and with such complex geopolitics) as generative AI, unexamined assumptions can lead to miscalculations in governance. Strategic foresight is a set of methodologies and tools that allow for an organized, scientific approach to thinking about, and preparing for, the future. Adopting strategic foresight helps governments stay agile: they can move beyond assumptions about the future, systematically explore critical uncertainties, envisage potential solutions and risks, sandbox new ideas and articulate alternate visions of successful futures.

Strategic foresight has been adopted successfully by various governments. For example, in Finland, the Government Report on the Future sets parameters for long-term planning and decision-making.76 In the United Arab Emirates, the Dubai Future Foundation (DFF) leads 13 councils,77 each of which convenes government directors and experts to investigate the future of different sectors or issue areas (such as AI), and to identify the governance and capacity needed to drive positive change.

Although strategic foresight initiatives vary, best practices include:
Guided: Use models or prompts to guide exercises, e.g. use scenario planning matrices to consider potential futures across axes of critical uncertainties.
Consistent: Plan exercises on a recurring basis and identify organizational champions.
Multistakeholder: Engage cross-functional internal and external stakeholders to mitigate biases and map multiple possible futures.
Transparent: Track and measure adoption. For example, in Dubai, a numerical scale was developed to rank the effectiveness of each agency in integrating strategic foresight, and rankings were then shared to increase healthy competition and incentivize adoption.

3.4 Impact assessments and agile regulations

Agile and flexible regulation is essential in AI to address evolving financial, economic and social impacts. Policy-makers must consider diverse stakeholder input to account for varied sectoral and community short- and long-term impacts. Governments should also study the varied agile practices emerging globally and assess their jurisdictional fit. For example, they should consider regulatory sandboxes for testing prior to broad deployment. Another approach is “complex adaptive regulations”, which are designed to respond to the effects they create and require defined goals, success metrics and thresholds for how regulations will adapt to their own impacts. Governmental structures can adopt the dynamics of tech companies to become more agile through: 1) a risk-based approach, 2) regular review of technology and marketplace challenges, 3) agile response to challenges,78 and 4) review of response effects and adaptation.79 Still, agile governance should not come at the expense of oversight or separation of powers, nor without regard to human rights and rights-based frameworks that ensure that generative AI development and deployment align with societal values and norms. Governments should avoid adopting a “move fast and break things” form of hyper-agility that has been criticized for prioritizing go-to-market testing over mitigation of harmful consequences.

3.5 International cooperation

The current international discussions on generative AI governance frequently lack meaningful participation from global majority countries. This can create significant knowledge gaps about the risks, opportunities and prospects of the generative AI supply chain in those underrepresented regions.80 Principles and frameworks developed without their input may prove ineffective or even harmful. Unaddressed, these tensions could lead to a fragmentation of the global generative AI community into segregated, non-interoperable spheres. Thus, international cooperation is essential in six areas (see Table 10) to harness the benefits of generative AI while managing its dangers equitably. This can be achieved through bilateral, regional and broader international mechanisms of cooperation, like those advanced by the World Economic Forum, the United Nations (UN), the Group of 20 (G20), the Organisation for Economic Co-operation and Development (OECD) and the African Union High Level Panel on Emerging Technologies (APET).

TABLE 10: Key areas requiring international cooperation between jurisdictions

Standards: Standards can help make abstract AI principles actionable, are more agile than regulations and can bolster global resilience while regulation processes are underway. They are critical to regulatory interoperability.81 Quality assurance techniques and technical standards support cross-border trade. Provisions in free trade agreements (FTAs) are needed to address challenges facing AI innovators. Testing certifications should be interoperable where possible. Anticipatory standards require increased inclusion of CSOs and academia, and coordination of standards bodies.82

Safety: Strengthened R&D of safety techniques and evaluation tools is key to resilience. It is crucial to coordinate AI safety institutes to maximize limited resources. An agreement signed by various jurisdictions at the AI Seoul Summit on a network of institutes is promising.83 It is additionally necessary to ensure that long-term risks are not prioritized at the expense of identified present AI harms.84

Risks: Establishing a mutual understanding of 1) a taxonomy of risks, 2) the definition and scope of mitigating risks, and 3) approaches is necessary to evaluate, quantify and determine whether a model/application meets the risk mitigation threshold. It is essential to embrace jurisdictional variability on risk tolerance and ethical principles,85 while advancing risk management interoperability. This can be achieved by considering how standards may apply across high-risk cases while leaving the definition of “high-risk” to jurisdictions. Collaboration across sectors is crucial for proactively identifying generative AI opportunities and risks (including critical-, systemic- and infrastructure-related). This could be achieved via a dedicated international observatory.

Prohibitions: Lack of alignment on prohibitions increases the likelihood of generative AI misuse by state or non-state actors with severe global consequences. Collaboration on treaties or other norm-building mechanisms is needed to establish clear prohibitions on specific forms of generative AI research, development, deployment and use.

Knowledge-sharing: Participation in a platform, such as a global governance sandbox, enables the sharing of best practices, case studies (e.g. technical, ethical and legal) and tools that allow stakeholders to implement informed governance.

Infrastructure: Many jurisdictions have limited access to compute and high-quality data for training and fine-tuning, leading to reliance on models prone to error in local languages or contexts. Even open models are not easily fine-tuned to a new language due to underlying tokenization. Examination of opportunities for multilateral sharing, or shared ownership, of compute and data is needed, alongside the mitigation of bad-actor access or certain other uses, e.g. military. Developed countries should prioritize sharing resources, expertise and best practices to enable global majority countries to build their AI capabilities and participate effectively in international forums.

Conclusion

This paper is intended to provide policy-makers and regulators with a detailed, practical and implementable generative AI governance framework. Generative AI, like other technologies, is not neutral: it touches upon shared values and fundamental rights. Before introducing new AI regulations, it is crucial to evaluate the current regulatory landscape and enhance coordination among sectoral regulators to mitigate generative AI-induced tensions. Existing regulatory authorities should be assessed for their capability to respond to emerging generative AI challenges, and the trade-offs of a distributed governance approach versus a single dedicated agency should be considered. A comprehensive whole-of-society governance strategy should address industry, civil society and academic challenges, promoting cross-sector collaboration and interdisciplinary solutions. Looking ahead, future strategies need to account for resource limitations and global uncertainties, with adaptable foresight mechanisms and international cooperation through standardized practices and shared knowledge. By adopting a harmonized approach, generative AI challenges can be addressed more effectively at a global level.

Contributors

Lead authors
Rafi Lazerson, Responsible AI Specialist, Accenture; Project Fellow, AI Governance Alliance
Manal Siddiqui, Responsible AI Manager, Accenture; Project
Fellow, AI Governance Alliance
Karla Yee Amezaga, Lead, Data Policy and AI, World Economic Forum

World Economic Forum
Samira Gazzane, Policy Lead, Artificial Intelligence and Machine Learning

Accenture
Patrick Connolly, Responsible AI Research Manager
Kathryn White Krumpholz, Managing Director, Innovation Incubation; Executive Fellow, AI Governance Alliance
Andrew J.P. Levy, Chief Corporate and Government Affairs Officer
Valerie Morignat, Responsible AI Senior Manager, Accenture; Project Fellow, AI Governance Alliance
Charlie Moskowitz, Government Relations Senior Manager
Ali Shah, Managing Director, Responsible AI; Executive Fellow, AI Governance Alliance
Dikshita Venkatesh, Responsible AI Research Senior Analyst; Project Fellow, AI Governance Alliance

This paper is a combined effort based on numerous interviews, discussions, workshops and research. The opinions expressed herein do not necessarily reflect the views of the individuals or organizations involved in the project or listed below. Sincere thanks are extended to those who contributed their insights via interviews and workshops, as well as those not captured below.

Acknowledgements

Sincere appreciation is extended to the following working group members, who spent numerous hours providing critical input and feedback on the drafts. Their diverse insights are fundamental to the success of this work.

Lovisa Afzelius, Chief Executive Officer, Apriori Bio
Hassan Al-Darbesti, Adviser to the Minister and Director, International Cooperation Department, Ministry of Information and Communication Technology (ICT) of Qatar
Uthman Ali, Global Responsible AI Officer, BP
Jason Anderson, General Counsel, Vice-President and Corporate Secretary, DataStax
Norberto Andrade, Professor and Academic Director, IE University
Jesse Barba, Head, Government Affairs and Policy, Chegg
Richard Benjamins, Co-Founder and Chief Executive Officer, OdiseIA
Saqr Binghalib, Executive Director, Artificial Intelligence, Digital Economy and Remote Work Applications Office of the United Arab Emirates
Anu Bradford, Professor, Law, Columbia Law School
Daniela Braga, Founder and Chief Executive Officer, Defined.ai
Michal Brand-Gold, Vice-President General Counsel, ActiveFence
Adrian Brown, Executive Director, Center for Public Impact
Melika Carroll, Head, Global Government Affairs and Public Policy, Cohere
Winter Casey, Senior Director, SAP
Daniel Castano Parra, Professor, Law, Universidad Externado de Colombia
Neha Chawla, Senior Corporate Counsel, Infosys
Simon Chesterman, Senior Director, AI Governance, AI Singapore, National University of Singapore
Quintin Chou-Lambert, Office of the UN Tech Envoy, United Nations
Melinda Claybaugh, Director of Privacy Policy, Meta Platforms
Frincy Clement, Head, North America Region, Women in AI
Magda Cocco, Head, Practice Partner, Information, Communication and Technology, Vieira de Almeida and Associados
Amanda Craig, Senior Director, Responsible AI Public Policy, Microsoft
Renée Cummings, Data Science Professor and Data Activist in Residence, University of Virginia
Gerard de Graaf, Senior EU Envoy for Digital to the US, European Commission
Nicholas Dirks, President and Chief Executive Officer, The New York Academy of Sciences
Mark Esposito, Faculty Affiliate, Harvard Center for International Development, Harvard Kennedy School and Institute for Quantitative Social Sciences
Nita Farahany, Robinson O. Everett Professor of Law and Philosophy, Duke University; Director, Duke Science and Society
Max Fenkell, Vice-President, Government Relations, Scale AI
Kay Firth-Butterfield, Chief Executive Officer, Good Tech Advisory
Katharina Frey, Deputy Head, Digitalisation Division, Federal Department of Foreign Affairs (FDFA) of Switzerland
Alice Friend, Head, Artificial Intelligence and Emerging Tech Policy, Google
Tony Gaffney, President and Chief Executive Officer, Vector Institute
Eugenio Garcia, Director, Department of Science, Technology, Innovation and Intellectual Property (DCT), Brazilian Ministry of Foreign Affairs (Itamaraty)
Urs Gasser, Dean, TUM School of Social Sciences and Technology, Technical University of Munich
Justine Gauthier, Director, Corporate and Legal Affairs, MILA-Quebec Artificial Intelligence Institute
Debjani Ghosh, President, National Association of Software and Services Companies (NASSCOM)
Danielle Gilliam-Moore, Director, Global Public Policy, Salesforce
Anthony Giuliani, Global Head of Operations, Twelve Labs
Brian Patrick Green, Director, Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University
Samuel Gregory, Executive Director, WITNESS
Koiti Hasida, Director, Artificial Intelligence in Society Research Group, RIKEN Center for Advanced Intelligence Project, RIKEN
Dan Hendrycks, Executive Director, Center for AI Safety
Benjamin Hughes, Senior Vice-President, Artificial Intelligence (AI) and Real World Data (RWD), IQVIA
Marek Jansen, Senior Director, Strategic Partnerships and Policy Management, Volkswagen
Jeff Jianfeng Cao, Senior Research Fellow, Tencent Research Institute
Sam Kaplan, Assistant General Counsel and Senior Director, Palo Alto Networks
Kathryn King, General Manager, Technology and Strategy, Office of the eSafety Commissioner Australia
Edward S. Knight, Executive Vice-Chairman, Nasdaq
James Laufman, Executive Vice-President, General Counsel and Chief Legal Officer, Automation Anywhere
Alexis Liu, Head, Legal, Weights and Biases
Caroline Louveaux, Chief Privacy and Data Responsibility Officer, Mastercard
Shawn Maher, Global Vice-Chair, Public Policy, EY
Gevorg Mantashyan, First Deputy Minister, High-Tech Industry, Ministry of High-Tech Industry of Armenia
Gary Marcus, Chief Executive Officer, Center for Advancement of Trustworthy AI
Gregg Melinson, Senior Vice-President, Corporate Affairs, Hewlett Packard Enterprise
Robert Middlehurst, Senior Vice-President, Regulatory Affairs, e& International
Satwik Mishra, Executive Director, Centre for Trustworthy Technology, Centre for the Fourth Industrial Revolution
Casey Mock, Chief Policy and Public Affairs Officer, Center for Humane Technology
Chandler Morse, Vice-President, Corporate Affairs, Workday
Henry Murry, Vice-President, Government Relations, C3 AI
Miho Naganuma, Senior Executive Professional, Digital Trust Business Strategy Department, NEC
Didier Navez, Senior Vice-President, Data Policy & Governance, Dawex
Dan Nechita, Former Head of Cabinet, MEP Dragoș Tudorache, European Parliament (2019-2024)
Jessica Newman, Director, AI Security Initiative, Centre for Long-Term Cybersecurity, UC Berkeley
Michael Nunes, Vice-President, Payments Policy, Visa
Bo Viktor Nylund, Director, UNICEF Innocenti Global Office of Research and Foresight, United Nations Children's Fund (UNICEF)
Madan Oberoi, Executive Director, Technology and Innovation, International Criminal Police Organization (INTERPOL)
Florian Ostmann, Head, AI Governance and Regulatory Innovation, The Alan Turing Institute
Marc-Etienne Ouimette, Lead, Global AI Policy, Amazon Web Services
Timothy Persons, Principal, Digital Assurance and Transparency of US Trust Solutions, PwC
Tiffany Pham, Founder and Chief Executive Officer, Mogul
Oreste Pollicino, Professor, Constitutional Law, Bocconi University
Catherine Quinlan, M&A Legal Integration Executive, IBM
Roxana Radu, Associate Professor of Digital Technologies and Public Policy, Blavatnik School of Government; Hugh Price Fellow, Jesus College, University of Oxford
Martin Rauchbauer, Co-Director and Founder, Tech Diplomacy Network
Alexandra Reeve Givens, Chief Executive Officer, Center for Democracy and Technology
Philip Reiner, Chief Executive Officer, Institute for Security and Technology
Andrea Renda, Senior Research Fellow, Centre for European Policy Studies (CEPS)
Rowan Reynolds, General Counsel and Head of Policy, Writer
Sam Rizzo, Head, Global Policy Development, Zoom Video Communications
John Roese, Global Chief Technology Officer, Dell Technologies
Nilmini Rubin, Chief Policy Officer, Hedera Hashgraph
Arianna Rufini, ICT Adviser to the Minister, Ministry of Enterprises and Made in Italy
Crystal Rugege, Managing Director, Centre for the Fourth Industrial Revolution Rwanda
Joaquina Salado, Head, AI Ethics, Telefónica
Idoia Salazar, Professor, CEU San Pablo University
Nayat Sanchez-Pi, Chief Executive Officer, INRIA Chile
Mark Schaan, Deputy Secretary to the Cabinet (Artificial Intelligence), Privy Council Office, Canada
Thomas Schneider, Ambassador and Director of International Affairs, Swiss Federal Office of Communications, Federal Department of the Environment, Transport, Energy and Communications (DETEC)
Robyn Scott, Co-Founder and Chief Executive Officer, Apolitical
Var Shankar, Affiliate, Governance and Responsible AI Lab (GRAIL Lab), Purdue University
Navrina Singh, Founder and Chief Executive Officer, Credo AI
Scott Starbird, Chief Public Affairs Officer, Databricks
Uyi Stewart, Chief Data and Technology Officer, data.org
Charlotte Stix, Head, AI Governance, Apollo Research
Arun Sundararajan, Harold Price Professor, Entrepreneurship and Technology, Stern School of Business, New York University
Nabiha Syed, Executive Director, Mozilla Foundation
Patricia Thaine, Co-Founder and Chief Executive Officer, Private AI
V Valluvan Veloo, Director, Manufacturing Industry, Science and Technology Division, Ministry of Economy, Malaysia
Ott Velsberg, Government Chief Data Officer, Ministry of Economic Affairs and Information Technology of Estonia
Miriam Vogel, President and Chief Executive Officer, Equal AI
Takuya Watanabe, Director, Software and Information Service Industry Strategy Office, Ministry of Economy, Trade and Industry Japan
Andrew Wells, Chief Data and AI Officer, NTT DATA
Denise Wong, Assistant Chief Executive, Data Innovation and Protection Group, Infocomm Media Development Authority of Singapore
Kai Zenner, Head, Office and Digital Policy Adviser, MEP Axel Voss, European Parliament
Arif
Zeynalov, Transformation Chief Information Officer, Ministry of Economy of the Republic of Azerbaijan

Sincere appreciation is also extended to the following individuals who contributed their insights for this report.

Basma AlBuhairan, Managing Director, Centre for the Fourth Industrial Revolution, Saudi Arabia
Abdulaziz AlJaziri, Deputy Chief Executive Officer and Chief Operations Officer, Dubai Future Foundation
Dena Almansoori, Group Chief AI and Data Officer, e&
Daniela Battisti, Senior Advisor and International Relations Expert, Department for Digital Transformation, Italian Presidency of the Council of Ministers
Daniel Child, Manager, Industry Affairs and Engagement, Office of the eSafety Commissioner Australia
Valeria Falce, Full Professor of Economic Law, Senior Advisor and Legal Expert, Department for Digital Transformation, Italian Presidency of the Council of Ministers
Lyn Jeffery, Distinguished Fellow and Director, Institute for the Future (IFTF)

Japan External Trade Organization
Genta Ando, Executive Director and Project Fellow, World Economic Forum

Hitachi America
Daisuke Fukui, Senior Researcher and Project Fellow, World Economic Forum

World Economic Forum
Minos Bantourakis, Head, Media, Entertainment and Sport Industry
Maria Basso, Portfolio Manager, Digital Technologies
Agustina Callegari, Lead, Global Coalition for Digital Safety
Daniel Dobrygowski, Head, Governance and Trust
Karyn Gorman, Communications Lead, Metaverse Initiative
Ginelle Greene-Dewasmes, Lead, AI and Energy
Bryonie Guthrie, Lead, Foresight and Organizational Transformation
Jill Hoang, Lead, AI and Digital Technologies
Devendra Jain, Lead, Artificial Intelligence, Quantum Technologies
Jenny Joung, Specialist, Artificial Intelligence and Machine Learning
Connie Kuang, Lead, Generative AI and Metaverse Value Creation
Benjamin Larsen, Lead, Artificial Intelligence and Machine Learning
Na Na, Lead, Advanced Manufacturing and Artificial Intelligence
Chiharu Nakayama, Lead, Data and Artificial Intelligence
Hannah Rosenfeld, Specialist, Artificial Intelligence and Machine Learning
Nivedita Sen, Initiatives Lead, Institutional Governance
Stephanie Smittkamp, Coordinator, AI and Data
Stephanie Teeuwen, Specialist, Data and AI
Kenneth White, Manager, Communities and Initiatives, Institutional Governance
Hesham Zafar, Lead, Business Engagement

Production
Louis Chaplin, Editor, Studio Miko
Laurence Denmark, Creative Director, Studio Miko
Cat Slaymaker, Designer, Studio Miko

Endnotes

1. Bielefeldt, H., & Weiner, M. (2023). Declaration on the Rights of Persons Belonging to National or Ethnic, Religious and Linguistic Minorities. United Nations. https://legal.un.org/avl/pdf/ha/ga_47-135/ga_47-135_e.pdf.
2. United Nations (UN). (1990). The United Nations Convention on the Rights of the Child. https://www.unicef.org.uk/wp-content/uploads/2010/05/UNCRC_PRESS200910web.pdf.
3. United Nations Office on Drugs and Crime. (n.d.). Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes. https://www.unodc.org/unodc/en/cybercrime/ad_hoc_committee/home.
4. United Nations. (1992). United Nations Framework Convention on Climate Change. https://unfccc.int/files/essential_background/background_publications_htmlpdf/application/pdf/conveng.pdf; United Nations. (2015). Paris Agreement. https://unfccc.int/sites/default/files/english_paris_agreement.pdf.
5. Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., & Briggs, M. (2021). Artificial intelligence, human rights, democracy, and the rule of law: A primer. SSRN. https://doi.org/10.2139/ssrn.3817999.
6. World Economic Forum. (2023). Data Equity: Foundational Concepts for Generative AI. https://www3.weforum.org/docs/WEF_Data_Equity_Concepts_Generative_AI_2023.pdf.
7. World Economic Forum. (2020). A New Paradigm for Business of Data. https://www3.weforum.org/docs/WEF_New_Paradigm_for_Business_of_Data_Report_2020.pdf.
8. Van Bekkum, M., & Zuiderveen Borgesius, F. (2023). Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception? Computer Law & Security Review, vol. 48. https://doi.org/10.1016/j.clsr.2022.105770.
9. Reisner, A. (2023). Generative AI Might Finally Bend Copyright Past the Breaking Point. The Atlantic. https:/ Government. (2023). Pro-innovation Regulation of Technologies Review: Digital Technologies. https://assets.publishing.service.gov.uk/media/64118f0f8fa8f555779ab001/Pro-innovation_Regulation_of_Technologies_Review_-_Digital_Technologies_report.pdf.
11. House of Lords Communications and Digital Committee. (2024). Large language models and generative AI. https://publications.parliament.uk/pa/ld5804/ldselect/ldcomm/54/54.pdf.
12. Shan, S., Ding, W., Passananti, J., Wu, S., Zheng, H., & Zhao, B. Y. (2024). Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models. Department of Computer Science, University of Chicago. https://people.cs.uchicago.edu/ravenben/publications/pdf/nightshade-oakland24.pdf.
13. Grynbaum, M. M., & Mac, R. (2023). The Times Sues OpenAI and Microsoft Over AI Use of Copyrighted Work. The New York Times. https:/ Horowitz would like everyone to stop talking about AI's copyright issues, please. Business Insider. https:/ the Antitrust Implications of Embedding Generative AI in Core Platform Services. CPI Antitrust Chronicles, vol. 1, no. 12. https:/ Commission. (2024, 23 July). Joint Statement on Competition in Generative AI Foundation Models and AI Products Press release. https://competition-policy.ec.europa.eu/about/news/joint-statement-competition-generative-ai-foundation-models-and-ai-products-2024-07-23_en.
17. Macko, M. S. (2024). Applying Data Minimization to Consumer Requests. California Privacy Protection Agency Enforcement Division. https://cppa.ca.gov/pdf/enfadvisory202401.pdf.
18. Office of the Privacy Commissioner of Canada. (2023). Principles for responsible, trustworthy and privacy-protective generative AI technologies. https://www.priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/gd_principles_ai/.
19. Private AI. (n.d.). Background on PII. https:/docs.private- Securities Commission. (2024). Data privacy and the Administrative Arrangement. https://www.osc.ca/en/about-us/domestic-and-international-engagement/international-engagement/data-privacy-and-administrative-arrangement.
21. E-Safety Commissioner, Australian Government. (n.d.). Tech Trends Position Statement: Generative AI. Australian Government. https://www.esafety.gov.au/industry/tech-trends-and-challenges/generative-ai.
22. Government of Canada. (2023, 12 October). Government of Canada launches consultation on the implications of generative artificial intelligence for copyright Press release. https://www.canada.ca/en/innovation-science-economic-development/news/2023/10/government-of-canada-launches-consultation-on-the-implications-of-generative-artificial-intelligence-for-copyright.html.
23. US Copyright Office, Library of Congress. (2023). Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence. 37 CFR Part 202. https://public-inspection.federalregister.gov/2023-05321.pdf.
24. Government of the United Kingdom. (2024). CMA seeks views on Microsoft's partnership with OpenAI. https://www.gov.uk/government/news/cma-seeks-views-on-microsofts-partnership-with-openai.
25. Atleson, M. (2023). Chatbots, deepfakes, and voice clones: AI deception for sale. Federal Trade Commission. https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.
26. Competition and Markets Authority. (2023). AI Foundation Models Review: Short Version. https://assets.publishing.service.gov.uk/media/65045590dec5be000dc35f77/Short_Report_PDFA.pdf.
27. World Economic Forum. (n.d.). Digital Trust Framework. https://initiatives.weforum.org/digital-trust/framework.
28. Groves, L., Metcalf, J., Vecchione, B., & Strait, A. (2024). Auditing Work: Exploring the New York City algorithmic bias audit regime. ACM Digital Library. https://dl.acm.org/doi/10.1145/3630106.3658959.
29. Smith, B. (2023). How do we best govern AI? Microsoft on the Issues. https:/ Between AI Foundation Models: Dynamics and Policy Recommendations. Massachusetts Institute of Technology Connection Science. https://ide.mit.edu/wp-content/uploads/2024/01/SSRN-id4493900.pdf?x41178.
31. European Commission. (2024). Commission Decision Establishing the European AI Office. https://digital-strategy.ec.europa.eu/en/library/commission-decision-establishing-european-ai-office.
32. World Economic Forum. (2024). AI Governance Alliance: Briefing Paper Series. https://www.weforum.org/publications/ai-governance-alliance-briefing-paper-series/.
33. National Institute of Standards and Technology (NIST). (2024). AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework.
34. Marcus, G. (n.d.). AI Took My Career! Broadcast. https:/ Economic Forum. (2024). Responsible AI Playbook for Investors. https://www3.weforum.org/docs/WEF_Responsible_AI_Playbook_for_Investors_2024.pdf.
36. The Forum's AI Governance Alliance is currently researching energy resources as part of the work of the AI Transformation of Industr
322、 Career!Broadcast.https:/ Economic Forum.(2024).Responsible AI Playbook for Investors.https:/www3.weforum.org/docs/WEF_Responsible_AI_Playbook_for_Investors_2024.pdf.36.The Forums AI Governance Alliance is currently researching energy resources as part of the work of the AI Transformation of Industr
323、ies pillar of work.Publications on this important topic will be released in coming months37.Personal Data Protection Commission,Singapore.(n.d.).Advisory Guidelines on use of Personal Data in AI Recommendation and Decision Systems.https:/www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-g
324、uidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems.38.Maslej,N.et al.(2024).The AI Index 2024 Annual Report.Institute for Human-Centered AI,Stanford University.https:/aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf.39.National Institute of Standa
325、rds and Technology(NIST).(n.d.).Generative AI:Text-to-Text(T2T).https:/ai-challenges.nist.gov/t2t.40.International Standards Organization.(2023).ISO/IEC 42001:2023.https:/www.iso.org/standard/81230.html.41.Li,F.-F.(2023).Governing AI Through Acquisition and Procurement.Stanford Institute for Human-C
326、entered Artificial Intelligence(HAI),Stanford University.https:/hai.stanford.edu/sites/default/files/2023-09/Fei-Fei-Li-Senate-Testimony.pdf.42.European Commission.(2024).Living guidelines on the responsible use of generative AI in research.https:/research-and-innovation.ec.europa.eu/document/2b6cf7
327、e5-36ac-41cb-aab5-0d32050143dc_en.43.World Economic Forum.(2024).Shaping the Future of Learning:The Role of AI in Education 4.0 https:/www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf.44.Osloo,S.,(2023,22 August).Why we must understand how generative AI will affect children.World Ec
328、onomic Forum.https:/www.weforum.org/agenda/2023/08/generative-ai-children-need-answers/.45.Solyst,J.,Yang,E.,Xie,S.,Hammer,J.,Ogan,A.,&Eslami,M.(2024).Childrens Overtrust and Shifting Perspectives of Generative AI.International Society of the Learning Sciences.https:/arxiv.org/pdf/2404.14511.46.Thei
329、l,D.(2023).Investigation finds AI image generation models trained on child abuse.Stanford Cyber Policy Center,Stanford University.https:/cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse.47.Thiel,D.,Melissa,S.,&Portnoff,R.(2023).New report finds generativ
330、e machine learning exacerbates online sexual exploitation.Stanford Digital Repository,Stanford University.https:/cyber.fsi.stanford.edu/io/news/ml-csam-report.48.World Economic Forum.(2023).Toolkit for Digital Safety Design Interventions and Innovations:Typology of Online Harms.https:/www3.weforum.o
331、rg/docs/WEF_Typology_of_Online_Harms_2023.pdf.49.Gruenhagen,J.H.et al.(2024).The rapid rise of generative AI and its implications for academic integrity:Students perceptions and use of chatbots for assistance with assessments.Computers and Education:Artificial Intelligence,vol.7.https:/ guidance on
332、AI for children.https:/www.unicef.org/innocenti/reports/policy-guidance-ai-children.Governance in the Age of Generative AI3451.Joint Research Centre,European Commission.(2022).Examining artificial intelligence technologies through the lens of childrens rights.https:/joint-research-centre.ec.europa.e
333、u/jrc-news-and-updates/examining-artificial-intelligence-technologies-through-lens-childrens-rights-2022-06-22_en.52.World Economic Forum.(2022).Artificial Intelligence for Children:Toolkit.https:/www3.weforum.org/docs/WEF_Artificial_Intelligence_for_Children_2022.pdf.53.Shekhawat,G.,&Livingstone,S.(2023).AI and childrens rights:A guide to the transnational guidance.London School of Economics(LSE)