《WTTC&微軟:2024負責任的人工智能報告-人工智能風險、安全和治理概述(英文版)(25頁).pdf》由會員分享,可在線閱讀,更多相關《WTTC&微軟:2024負責任的人工智能報告-人工智能風險、安全和治理概述(英文版)(25頁).pdf(25頁珍藏版)》請在三個皮匠報告上搜索。
1、RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)OVERVIEW OF AI RISKS,SAFETY&GOVERNANCEApril 2024RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council2FOREWORDWho has a say in the design of intelligent machines?How do we make sure data is used responsibly?Can algorithms exhibit unintended bias?T
2、his is the third publication in our four-part initial series on artificial intelligence(AI)from the World Travel&Tourism Council(WTTC),in partnership with global technology leader Microsoft.In this report,we explore the world of AI risks and governance,competing definitions of responsibility when us
3、ing AI,and the various attempts that are on-going to establish a global standard for the ethical use of AI.As this report series has shown,AI has come a long way in recent years.Today we can use AI to find personalised holiday ideas at the touch of a button.We can make restaurant queues shorter and
4、reduce hotel food waste.As we cross borders,algorithms can optimise everything from airport traffic,to passenger flow rates.Just imagine how transformative this technology could be at scale,with the ability to spot patterns,make predictions,or fine-tune operations for improved safety and efficiency
5、to levels that were previously unthinkable.But this progress is not without its dangers.Companies now hold huge amounts of information.Were more aware than ever of cyber threats,breaches of privacy,data bias and an alarming gap in digital skills around the world.The unfortunate truth is that AI legi
6、slation and digital education has simply failed to keep pace with the rapid development of AI.At the World Travel&Tourism Council,we are incredibly optimistic about the possibilities of AI in the decades to come.But we also believe that any technology must be used safely,fairly and responsibly.At pr
7、esent,different systems of governance have emerged in different places,with no global standard yet for the safe and responsible use of AI.That is why we are making sure the voice of Travel&Tourism is heard-along with other sectors,policymakers,and civil society-as we figure out the answers to these
8、era-defining questions.I hope you find this report valuable and insightful.As the AI and technology field evolves,we will be paying close attention and updating you along the way.But ultimately,AI is simply a tool.People-not machines-are responsible for our future.Travel&Tourism is by no means the o
9、nly voice in this conversation.But it should be a vocal one.Julia SimpsonPresident&CEOWorld Travel&Tourism CouncilRESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council3CONTENTSFOREWORD 2AI GOVERNANCE 4AI RISKS 4RESPONSIBLE AI 7AI STRATEGIES®ULATION 14GLOBAL PARTNERSHIP ON ARTIFICIAL
10、 INTELLIGENCE(GPAI)17AI INDUSTRY VOLUNTARY GOVERNANCE MEASURES 19ACKNOWLEDGEMENTS 22REFERENCES 23RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council4AI GOVERNANCEArtificial Intelligence(AI)is an exciting technology that opens up many possibilities for society,businesses and the Trave
11、l&Tourism sector,but as AI systems become more advanced,it is important they remain under human control and are aligned with our ethical values.There are risks that AI could be misused,or that it may behave in unintended ways,with unintended consequences,if not properly designed and monitored.It is
12、therefore crucial that researchers,companies and governments consider both the upsides and downsides of AI when developing and using AI,so that the world can successfully harness the huge potential of AI,while addressing valid concerns about its risks.AI RisksThere could be many potential risks of A
13、I,often unique to each situation and use case,but below are five strategic level AI risks that would be useful for all business leaders to be aware of and understand.RiskDescriptionPotential Mitigations1BiasAI systems are trained on data,and if the data is biased,the AI system could lead to discrimi
14、nation.Bias could include being in favour(or against)a particular idea,person,or thing.This could occur,for example,if an AI system was only trained on either left-leaning,or right-leaning media articles.Datasets for training AI systems should endeavour to include a broad and fully representative se
15、t of data relevant to their use case and the approval of AI systems could include testing for bias.2Job Replacement AI systems could automate many jobs currently performed by humans,potentially leading to significant unemployment and the loss of human skills in certain areas.Workers should be traine
16、d to be able to work alongside AI and job transition plans considered for the most affected employment areas.3DisinformationGenerative AI systems could be used to maliciously create false or misleading content(such as fake text and images)that is deliberately shared to deceive,or cause harm.Watermar
17、ks,or other similar features,could be included with AI generated content to show that it was created by AI.4Safety&SecurityAI systems could cause safety and security risks,with the most severe risks impacting national security or critical infrastructure,such as electrical grids,or transportation and
18、 traffic systemsAI used in safety critical systems(such as driverless cars)and where there may be security risks should be robustly tested,approved for use by an appropriate authority and have regular oversight5Existential RiskAI systems could become so intelligent that they surpass human control.AI
19、 systems should be developed that align with our human values and appropriate guardrails should be internationally agreed and implemented to control the development and use of AI.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council5Recently there have been several high profile media s
20、tories about the risks of AI,including a study from Goldman Sachs investment bank into the impact of AI on the global economy.They estimated that AI and automation could replace up to 300 million jobs over the next 10 year,but also drive a 7%(or almost$7 trillion USD)increase in global GDP 1.For con
21、text WTTC data shows that pre-pandemic the global Travel&Tourism sector accounted for nearly 300 million jobs,so this is equivalent to the loss of every single Travel&Tourism job over the next decade.Some workers unions have therefore expressed deep worry that employment law is not keeping pace with
22、 the AI revolution and called for regulation on the use of AI for hiring,firing,performance reviews and setting working conditions.Goldman Sachs go on explain that in history,jobs displaced by automation have historically been offset by the creation of new jobs,and the emergence of new occupations.T
23、hey cite that 60%of todays workers are employed in occupations that didnt exist in 1940,following many technological innovations since the Second World War.Goldman Sachs therefore propose that AI could dramatically change the working landscape,rather than lead to mass unemployment.However they also
24、note that unlike the previous automation revolutions which predominantly affected manual(so called blue-collar)workers,such as factory workers being replaced by machines,the AI revolution would predominantly affect skilled(or white collar)workers,with managers and professionals some of the most like
25、ly to be impacted.The below diagram from the Goldman Sachs report shows that in Europe they estimate that 29%of managerial jobs and 34%of professional jobs(across all industries)could be replaced by AI and automation over the next 10 years.Goldman Sachs Global Economic Analysis:Potentially Large Eff
26、ects of AI on Economic Growth 2In early 2023 an open letter was published by the Future of Life Institute 3 calling for a pause on AI development for at least 6 months.The letter argued that the risks of AI are so great,the world needs to take more time to understand and mitigate them.The letter rec
27、eived considerable media attention as it was signed by over 30,000 interested parties,including Elon Musk(Owner of X,Tesla and SpaceX)and Steve Wozniak(Co-founder of Apple).One of the main concerns raised in the letter was the risk of AI becoming too smart and taking control of our lives.While the r
28、isks of AI are noted by many,the open letters recommendation was not taken forward as a global pause on all AI research and development was widely considered impractical and impossible to enforce.A few months later,the Center for AI Safety(CAIS)also raised concerns about the existential risk of AI,w
29、ith a succinct public statement that“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”4.This too was co-singed by several notable figures including Bill Gates(Founder of Microsoft)and academics Geoffrey Hinto
30、n&Yoshua Bengio(who have been nicknamed the Godfathers of AI due to their pioneering research in the field).While many of these public statements emphasised the negative risks of AI,an open letter from the UK Chartered Institute for IT,also published in 2023 and signed by over 1300 academics,was iss
31、ued to counter the AI doom narrative and called for governments to recognise AI as a“transformational force for good,not an existential threat to humanity”5.The letter argued that AI will enhance every area of our lives,as long as the world gets critical decisions about its development and use right
32、 and called for professional and technical standards for AI,supported by a robust code of conduct,with international collaboration and fully resourced regulation.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council6UN Secretary General,Antonio Guterres(post on X,18th July 2023)In July
33、 2023,the United Nations Security Council(UNSC),also met for the first time to discuss the issue of AI,illustrating the seriousness of the technology.The session was chaired by the UK who described AI as a“momentous opportunity on a scale that we can barely imagine.We must seize these opportunities
34、and grasp the challenges of AI decisively,optimistically,and from a position of global unity on essential principles”.The chair went on to announce that in late 2023“the UK will host the first major global summit on AI Safety with world and industry leaders,where our shared goal will be to consider
35、the risks of AI and decide how they can be reduced through co-ordinated action“6.This AI Safety Summit was held in the UK on 1-2 November 2023 Other international diplomats and officials at the UN Security Council also urged the world to take the emergence of this new technology seriously.UN Secreta
36、ry General Antonio Guterres announced that he“welcomes calls from some Member States for the creation of a new United Nations entity to support collective efforts to govern this extraordinary technology”and“as a first step,I am convening a multistakeholder High Level Advisory Board for Artificial In
37、telligence,that will report back on the options for global AI governance”.This board submitted its interim report on Governing AI for Humanity to the UN Secretary General in October 2023,with the final report to be published in September 2024To help address the risks from AI,the US standards setting
38、 organisation,NIST,has released an AI Risk Management Framework(AI RMF)7 to assist all organisations that are designing,developing,deploying,or using AI systems.The Framework is intended to be voluntary,rights-preserving,non-sector specific,and use-case agnostic to provide flexibility to organisatio
39、ns of all sizes,in all sectors and throughout society.Its aim is to help all AI stakeholders manage the many risks of AI,while promoting the trustworthy and responsible development and use of AI systems.Though voluntary at this time,some have called for this to form the basis of a global regulatory
40、framework for managing AI risks.The NIST AI Risk Management Framework has four core functions(Govern,Map,Measure,Manage)which contain 19 categories and 72 sub-categoriesRESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council7Responsible AIResponsible AI is about ensuring AI systems opera
41、te appropriately,and is sometimes also called Safe AI,Trustworthy AI or Ethical AI.Although these terms can each mean slightly different things,they are often used interchangeably to recognise the need for AI systems to align with human values.WTTC industry members Microsoft,IBM and Google have each
42、 developed a set of responsible AI principles that guide their AI research and use,as illustrated below:1.Inclusiveness-AI systems should empower everyone and engage people1.Explainability-An AI system should be transparent,particularly about what went into its algorithms recommendations,as relevant
43、 to a variety of stakeholders with a variety of objectives1.Be socially beneficial-As we consider potential development and uses of AI technologies,we will take into account a broad range of social and economic factors,and will proceed where we believe that the overall likely benefits substantially
44、exceed the foreseeable risks and downsides.2.Fairness-AI systems should treat all people fairly2.Fairness-This refers to the equitable treatment of individuals,or groups of individuals,by an AI system.When properly calibrated,AI can assist humans in making fairer choices,countering human biases,and
45、promoting inclusivity2.Avoid creating or reinforcing unfair bias-We will seek to avoid unjust impacts on people,particularly those related to sensitive characteristics such as race,ethnicity,gender,nationality,income,sexual orientation,ability,and political or religious belief3.Reliability&Safety-AI
46、 systems should perform reliably and safely3.Robustness AI powered systems must be actively defended from adversarial attacks,minimising security risks and enabling confidence in system outcomes3.Be built and tested for safety-We will design our AI systems to be appropriately cautious,and seek to de
47、velop them in accordance with best practices in AI safety research4.Transparency-AI systems should be understandable4.Transparency-To reinforce trust,users must be able to see how the service works,evaluate its functionality,and comprehend its strengths and limitations4.Be accountable to people-We w
48、ill design AI systems that provide appropriate opportunities for feedback,relevant explanations,and appeal5.Privacy&Security-AI systems should be secure and respect privacy5.Privacy-AI systems must prioritize and safeguard consumers privacy and data rights and provide explicit assurances to users ab
49、out how their personal data will be used and protected5.Incorporate privacy design principles-We will give opportunity for notice and consent,encourage architectures with privacy safeguards,and provide appropriate transparency and control over the use of data6.Accountability-People should be account
50、able for AI systems6.Uphold high standards of scientific excellence-We will work with a range of stakeholders to promote thoughtful leadership in this area and we will responsibly share AI knowledge by publishing educational materials,best practices,and research that enable more people to develop us
51、eful AI applications7.Be made available for uses that accord with these principles-Many technologies have multiple uses.We will work to limit potentially harmful or abusive applicationsRESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council8The NIST AI Risk Management Framework(mentioned
52、 earlier)has also developed 7 characteristics for trustworthy and responsible AI that aim to reduce negative AI risks.The NIST AI Risk Management Framework defines 7 characteristics for Trustworthy AIIf a Travel&Tourism business does not yet have an approach to the responsible and ethical use of AI,
53、the above frameworks from NIST 6,Microsoft 8,Google 9 and IBM 10 can offer a good starting point to ensure the values most important to the company are maintained when using AI.As AI is expected to swiftly become more critical to companies and an everyday business tool within organisations,adopting
54、responsible AI principles should be considered as a high priority.This is to:1)Manage Risk&Reputation:no organisation wants to be in the news for the wrong reasons.Incorrect or biased actions based on the inappropriate use of AI could result in lawsuits and customer,stakeholder or employee mistrust.
55、This could lead to damaging the organisations reputation.2)Maintain Corporate Values:ethical decisions are important to all organisations and requires monitoring of the AI system over time to ensure it continues to meet organisational values as corporate strategy,or behavioural patterns change.This
56、may require retraining,or rebuilding,of the AI system over time.3)Protect&Scale against Government Regulations:AI regulations could change at a rapid pace as AI technology advances.Non-compliance could lead to costly audits or fines,but by adopting responsible AI principles,it is more likely that or
57、ganisation will be able to adapt and comply with any new or changing government regulations,as they too are likely to be built on similar(if not the same)responsible approaches to AI.There is no single guide or definition for Responsible AI,but it could be considered as a principles based approach t
58、o the research,design,development,deployment,use,maintenance and governance of AI systems,across all sectors,that considers the effects(both positive and negative)that AI may have on organisations,individuals,communities and society at large.A survey in May 2023 of 439 business executives from acros
59、s multiple industries participated in a Responsible AI Index 11 which found that:82%of businesses believed they were applying best practice approaches to responsible AI,but on closer inspection,only 24%were taking deliberate action to ensure their AI systems were developed and operated responsibly.O
60、rganisations where the CEO was responsible for driving the company AI strategy had a higher responsible AI index score,but only 34%of organisations had an AI strategy where the CEO was personally involved.The survey authors concluded there was a worrying action gap between what organisations thought
61、 they were doing and what they were actually doing to ensure their use of AI was safe and that the results suggest businesses may be struggling to know how to practically implement responsible AI principles.To support the implementation of responsible AI practices,several universities now offer comp
62、rehensive training for business leaders such as the University of Sydney one day course on Ethical AI:From Principles to Practice 12,or the MIT three day course on Ethics of AI 13.However the University of Helsinki offers two free courses on the basics of AI and AI Ethics,which can be a useful start
63、ing point for all business leaders.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council9University of Helsinki:Introduction to AI(https:/)University of Helsinki:Ethics of AI(https:/ethics-of-ai.mooc.fi)As interest in AI is exploding around the world,many intergovernmental organisation
64、s and countries are also now developing(or have developed),ethical AI principles and frameworks,but there is no universally agreed global standard for the responsible use of AI at this time,as each of these approaches applies a slightly different emphases on issues such as fairness,the moral use of
65、AI,data ethics or citizen privacy.A 2023 study by UNIDIR(United Nations Institute for Disarmament Research)to explore the military use of AI,mapped the responsible AI principles adopted by 26 UN Member States and 11 intergovernmental organisations in an effort to define a common taxonomy for respons
66、ible AI.They found 26 different principles across the group they examined,with the 5 most common responsible AI principles to be Impartibility,Explainability,Inclusiveness,Safety and Human Oversight,Judgement or Control,but with some regional differences.UNIDIR study into Responsible AI principles 1
67、4The Organisation for Economic Cooperation&Development(OECD)was the first intergovernmental organisation to issue recommendations for countries to promote the innovative and trustworthy use of AI 15.Their non-binding recommendations were adopted by the 38 OECD Member States in May 2019 and in June 2
68、019 the leaders of the G20 countries welcomed the OECD recommendation at the G20 Summit in Japan 16,stating that“the responsible development and use of Artificial Intelligence(AI)can be a driving force to help advance the UN SDGs and to realise a sustainable and inclusive society.To foster public tr
69、ust and confidence in AI technologies and fully realise their potential,we commit to a human-centered approach to AI,and welcome the non-binding G20 AI Principles,drawn from the Organization for Economic Cooperation and Development(OECD)Recommendation on AI.”The OECD recommendation identified five p
70、rinciples for the responsible stewardship of trustworthy AI,along with five policy recommendationsAI Principles1)Inclusive growth,sustainable development and well-being2)Human-centred values and fairness3)Transparency and explainability4)Robustness,security and safety5)AccountabilityAI Policy Recomm
71、endations 1)Investing in AI research and development2)Fostering a digital ecosystem for AI3)Shaping an enabling policy environment for AI4)Building human capacity and preparing for labour market transformation5)International co-operation for trustworthy AIRESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World
72、 Travel&Tourism Council10Governments that have embraced the OECD and G20 AI PrinciplesThe UN Educational,Scientific and Cultural Organisation (UNESCO)has also championed the protection of human rights and dignity with AI.In 2021 their 193 Member States adopted the UNSECO recommendations on the ethic
73、al use of AI 17,with the UNESCO Assistant Director General for Social&Human Sciences,Gabriela Ramos stating“AI technology brings major benefits in many areas,but without ethical guardrails,it risks reproducing real world biases and discrimination,fuelling divisions and threatening fundamental human
74、rights and freedoms”.The UNESCO ethical recommendations include four core values and ten principles:While values and principles are crucial to establishing any ethical AI framework,recent AI developments have emphasised the need to move beyond high-level beliefs and towards practical strategies.UNES
75、CO therefore also include 11 Policy Action Areas to translate the ethical recommendations into tangible action and UNESCO committed to support 50 countries in 2023 to design their national ethical AI policies based on the UNESCO recommendations.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tou
76、rism Council11UNESCO has also formed a Business Council for AI Ethics,which is co-chaired by Microsoft and Telefonica for Ibero-America18.This will promote and support the implementation of the recommendations on the ethics of AI with the private sector.Specific activities include the exchange of ex
77、periences and perspectives between a multi-stakeholder community,the use of ethical impact assessment tools,the generation and sharing of knowledge on the responsible use of AI,and the dissemination of awareness campaigns focused on the UNESCO Ethical Recommendations.Natasha Crampton,Microsoft Chief
78、 Responsible AI Officer said“AI has the potential to transform societies around the world,and its essential that we guide this technology proactively toward outcomes that are beneficial,equitable,and inclusive.To do this,we need to bring together experiences and perspectives across societies to bett
79、er inform decisions around the responsible development and use of AI.We are looking forward to partnering with UNESCO on this vital global effort”UNESCO AI Policy Action AreasUNESCO will also support countries by using an AI Readiness Assessment Methodology(RAM)which will assess a countrys legal,soc
80、ial,cultural,scientific,educational,technical and infrastructural AI capacities and alignment with the UNESCO AI ethical recommendations.UNESCO has been publishing the details of these country assessments in an AI Ethics and Governance Observatory from 202419 which is an online transparency portal f
81、or the latest data and analysis on the ethical development and use of AI around the world,and a platform for sharing examples and best practices.A fellow UN agency,the UNDP(UN Development Programme)also observed that many governments around the world are engaged in a continual,never ending game of c
82、atch up with technological developments.The UNDP is therefore supporting developing countries with their digital transformation,including for AI.They are conducting this in partnership with another UN agency,the ITU(International Telecommunications Union),to combine the UNDPs extensive country prese
83、nce,with ITUs technical expertise.In 2023,UNDP launched an AI Readiness Assessment 20 as a set of tools that enable governments to get an overview of their AI readiness across various sectors.The framework is focused on the dual roles of governments as 1)facilitators of technological advancement and
84、 2)users of AI in the public sector.The assessment also prioritises AI ethical considerations from the UNESCO recommendations.The UNDP aims to support countries so they may implement AI powered technologies at population scale,which will enable them to meet national priorities and contribute to achi
85、eving the UN Sustainable Development Goals(SDGs).RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council12Another framework,the 2022 Government AI Readiness Index 21,from Oxford Insights,assessed 181 countries readiness to implement AI in the delivery of public services and found that th
86、e top five countries were the U.S.,Singapore,UK,Finland and Canada,with all of those countries already embracing AI for government public services.But it is not only the AI technology industry,individual countries and intergovernmental organisations that are establishing responsible AI principles an
87、d approaches so are religious leaders.In January 2023,religious leaders from the Christian,Jewish and Islamic faiths met at the Vatican in Rome to sign a Charter on AI Ethics,known as the Rome Call 22,alongside technology companies including Microsoft and IBM.Signing of the Rome Call for AI Ethics 2
88、3Archbishop Vincenzo Paglia,President of the Pontifical Academy for Life said,“we have gathered with our Jewish and Muslim brothers in an event of great importance to call upon the world to think and act in the name of brotherhood and peace even in the field of technology”.RESPONSIBLE ARTIFICIAL INT
89、ELLIGENCE(AI)World Travel&Tourism Council13Pope Francis renewed his interest in the ethical development of artificial intelligence,stating:“I am glad to know that you also want to involve the other great world religions and men and women of goodwill so that“algor-ethics”,that is ethical reflection o
90、n the use of algorithms,be increasingly present not only in the public debate,but also in the development of technical solutions.Every person,in fact,must be able to enjoy human and supportive development,without anyone being excluded”.On 1st January 2024,Pope Francis dedicated his World Day of Peac
91、e message to AI,where he called for“the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms”and released a short video on AI and Peace24.The Rome Call contains six ethical pr
92、inciples and is also currently being considered by Eastern religious leaders.Rome Call for AI Ethics 20As has been shown in this chapter there is considerable on going activity in the safe and responsible development of AI,with strategic interest ranging from the Pope,to the UN Secretary General,wit
93、h a large(and increasing)array of voluntary ethical,responsible and trustworthy AI principles developed in both the public and private sectors,but no universal agreement on a common set of principles at this time.WTTC therefore recommends Travel&Tourism organisations considering AI in their business
94、 should use any of the above frameworks best suited to their organisation,whilst WTTC advocates in parallel for international agreement to a common set of responsible AI principles.But whilst this may align the world on the correct use of AI,there is also a challenge to define“who is responsible”for
95、 safe AI(also covered in the accompanying WTTC report on AI in Action)and the issue of liability for any misuse,or negative consequences that could result from using an AI system.Should liability for damage,or harm,be with the AI software programmer,the AI system manufacturer,the business running th
96、e AI system,an individual user,or another party?This question,sometimes called the responsibility gap,is unresolved at this time and being considered by AI and legal experts around the world,but is especially important in safety critical scenarios(such as AI powered driverless cars).For example,in s
97、ome countries where there are laws for self-driving vehicles,the AIs action(or inaction)is in most cases the legal responsibility of the vehicle owner/driver,but should a driver be responsible for the autonomous decisions of a car?Or for more general AI systems that learn from their environment and
98、adapt over time,the programmer,manufacturer,or operator of an AI system may be unable to fully predict an AIs future behaviour as it will be based on many unknown variables.Can they therefore be legally,or morally,responsible for its actions?Complex legal questions such as these are still being cons
99、idered by governments around the world as they develop their regulatory regimes for AI.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council14AI Strategies&RegulationAI has captured the attention of the world with its potential to radically transform our economy,our society and humanit
100、y.But there are legitimate concerns about the power of this technology and its potential to be used to cause harm,rather than good.Governments around the world are therefore looking at how existing laws and regulations can be applied to AI and considering if new legal frameworks are required.At the
101、international level,as previously mentioned in this report,some countries are calling for a new UN agency to oversee AI.A UN appointed High Level Advisory Board on AI was therefore formed in 2023 and has issued an interim report on options for the global governance of AI to the UN Secretary General.
102、At the national and regional level,governments are also considering how they can encourage and embrace the economic and social benefits of AI,whilst managing the potential risks through both voluntary guidance and regulation.There are two main approaches to AI governance being considered by governme
103、nts at this time.The first is principles led(following the responsible AI approaches discussed in the previous section),while the second is rules led with prescriptive actions that must be taken,with an additional two approaches to AI regulatory oversight also under consideration.These are either ce
104、ntralised through a dedicated government agency for AI,or decentralised with AI oversight responsibilities split between existing government departments.In the 2023 AI Index 25 report from the Stanford University Human Centered AI(HAI)faculty,they analysed the legislative records of 127 countries an
105、d found that the number of bills that have been put into law around the world containing the words“artificial intelligence”has grown from just one a year in 2016,to 37 a year in 2022.In six years(2016-2022)countries around the world passed 123 AI related laws.A complementary analysis of the parliame
106、ntary records from 81 countries also found that mentions of AI in legislative proceedings had increased by nearly 6.5x since 2016.In 2022,legislative bodies in 127 countries passed 37 laws that included the words“artificial intelligence”.Since 2016,countries have passed 123 AI related billsThese reg
107、ulations cover a range of issues,from data privacy and security,to algorithmic transparency and accountability.However despite the recent increase in AI regulatory activity,most government oversight around the world continues to be through voluntary guidance at this time.RESPONSIBLE ARTIFICIAL INTEL
108、LIGENCE(AI)World Travel&Tourism Council15The following table and short descriptions summarise the status of AI oversight in 2023 for a few countries that have leaned into AI regulation and guidance as they seek to balance their economic,social,and public priorities with AI innovation.WTTC has also p
109、roduced an accompanying document to this report called“Global AI Strategies,Policies&Regulations”which provides much more detail and will be periodically updated by WTTC to include more countries and up to date information as the global regulatory environment for AI changes and evolves.Status of AI
110、Regulations(in 2023)Regulatory Oversight(in 2023)AI specific legislationAI regulated with existing lawsUsing existing regulatory bodiesNew office for AI oversightEUUKUSA(State&City level only)CanadaChinaJapanSingaporeAustralia(State level only)European Union(EU):Artificial Intelligence Act(AIA)As pa
111、rt of the EUs Digital Strategy 26,the European Union intends to regulate artificial intelligence(AI)and is advancing an Artificial Intelligence Act(AIA),which is expected to be adopted in early 2024,with a transitional implementation period that could see it fully enforced 24 months later,with some
112、parts applicable sooner.The European Aviation Safety Agency(EASA)has also published a roadmap which outlines their vision for the safety and ethical areas that must be considered for the use of AI in European aviation.The EU-US Trade&Technology Council is also developing a voluntary code of conduct
113、to guide the responsible development and use of AI whilst official laws are still being developed on both sides of the Atlantic.United Kingdom(UK)The UK believes its existing laws,regulators and courts already address some of the emerging risks posed by AI(such as discrimination,product safety and c
114、onsumer rights)and is therefore taking a different approach to the EU.The UK plans to empower its existing regulators(such as the UK Health&Safety Executive and the UK Competition&Markets Authority)to come up with tailored,context specific governance approaches that best suit the way that AI can be
115、used within their sectors.Following a very successful first international AI Safety Summit,hosted by the UK in November 2023,the UK established an AI Safety Institute(AISI)27.The Institute RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council16aims to advance the worlds knowledge of AI
116、 systems by carefully examining,evaluating and testing new types of AI to understand what they are capable of.It makes this work widely available to the world,enabling an effective global response to both the opportunities and risks presented by advanced AI systemsUSAThe United States does not curre
117、ntly have a national level framework for regulating AI and is following a similar path to the UK,with individual US government departments providing recommendations and guidance.However the White House has published an Executive Order on Safe,Secure&Trustworthy AI which commits US Federal Agencies t
118、o a broad range of measures designed to stimulate innovation in AI,as well as a Blueprint for an AI Bill of Rights which offers guidelines for the responsible design and use of AI.In addition,US States(including Colorado&Illinois)and City governments(including New York)are also pursuing their own AI
119、 regulations and task forces,with AI oversight therefore targeting specific use cases and locations,rather than seeking to regulate AI technology nationally,or across all industries.Similarly to the UK,the USA has also established a national U.S.AI Safety Institute(USAISI)28 hosted at the US Nationa
120、l Institute of Standards and Technology(NIST),along with a U.S.AI Safety Institute Consortium,which brings together more than 200 organisations to develop science based guidelines and standards for AI measurement and policy.CanadaCanada was the first country in the world to issue a national AI strat
121、egy in 2017,which was updated in 2022 29.Also in 2022,the Canadian government introduced Bill C-27,which included the Artificial Intelligence and Data Act(AIDA).This is still under negotiation in the Canadian Parliament in early 2024 and aims to standardise the rules regarding the design,development
122、 and use of AI across Canada.ChinaIn 2017 China issued an AI Development Plan 30 which set a goal for the Chinese AI industry to be generating more than 1 trillion(RMB)annually by 2030,and in mid-2023 China announced it was preparing a national Artificial Intelligence Law.Up to this point China has
123、focussed on AI regulations relating to specific AI applications,including AI regulations for recommendation algorithms and synthetically generated material(such as deepfakes)and in 2023 China released draft measure for managing generative AI services 31.At the provincial and city level,Shanghai and
124、Shenzhen have also enacted their own local AI regulations.JapanIn 2016,Japan introduced Society 5.0 which envisions a sustainable and economically successful future for Japan,powered by advanced technologies such as AI.To achieve this Japan has published seven social principles for human centric AI
125、and issued its first National AI Strategy in 2019,with updates in 2021 and 2022,to focus on AIs ability to tackle pandemics,natural disasters,and climate change.In 2023,Japan also chaired the G7 group of nations,where the G7 Digital&Technology Ministers endorsed a G7 AI Action Plan to enhance global
126、 interoperability of Trustworthy AI and the G7 agreed to convene further discussions on the international governance of generative AI systems.SingaporeIn 2014 Singapore launched its Smart Nation 32 initiative so that“Singapore will be a nation where people can live meaningful and fulfilled lives,ena
127、bled seamlessly by technology”.Since then,Singapore has made significant investments in AI research and development,surpassing many other countries in AI spending as a percentage of GDP.A National AI Office oversees the delivery of the National AI Strategy,which focuses on the deployment of AI in se
128、ven specific sectors.Singapore has also launched an AI Verify Framework&Toolkit which enables companies to measure and demonstrate their responsible AI practices,ensuring transparency,fairness,and accountability in their AI systems.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council1
129、7AustraliaThe Australian Government is backing critical and emerging technologies to strengthen Australias future,including a focus on AI.The Australian AI Action Plan published in 2021(prior to the national election)was archived by the new incoming government and a new consultation on Safe&Responsi
130、ble AI was issued.In early 2024 the government published its interim response to the consultation which proposed a risk based approach to regulating AI and an advisory group to support the development of options for mandatory safeguards in high risk scenarios.Australia has also issued a voluntary AI
131、 Ethics Framework and in 2023 established a Responsible AI Network.At the state level,New South Wales(NSW),which contains the cities of Syndey and Melbourne,has also issued an AI strategy and implemented an AI assurance framework for NSW government agencies.Global Partnership on Artificial Intellige
132、nce(GPAI)A Global Partnership on Artificial Intelligence(GPAI)33 was first announced by Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron at the 2018 G7 Summit in Canada as an international and multi-stakeholder initiative to support cutting edge research and applied activi
133、ties for AI priorities,which includes the responsible development and use of AI.It was officially launched in 2020,with 15 founding member countries and brings together experts from governments,industry,academia and civil society.In 2023,this had grown to 28 countries,plus the EU and more than 100 e
134、xperts.ArgentinaAustraliaBelgiumBrazilCanadaCzech RepublicDenmarkFranceGermanyIndiaIrelandIsraelItalyJapanRepublic of KoreaMexicoNether-landsNew ZealandPolandSenegalSerbiaSingaporeSloveniaSpainSwedenTurkiyeUKUSAEUThe GPAI is hosted with a permanent secretariat at the OECD in Paris,who oversees a GPA
135、I Council and GPAI Steering Committee.The GPAI Council are Ministers from all of the member countries and provides strategic direction to the GPAI,whilst the GPAI Steering Committee is an elected body comprised of five government and six non-government representatives,who develop the work plans and
136、establish the working groups and is supported by a Multistakeholder Expert Group(MEG).The GPAI is also supported by two Centres of Excellence,in Montreal and Paris,which facilitate the working groups with their research and practical projects.The two Centres of Excellence are CEIMIA(International Ce
137、ntre of Expertise in Montreal for the Advancement of AI)in Canada and INRIA(French National Institute for Research in Digital Science&Technology)in France.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council18The working groups are initially focussed on four themes,which are:Responsib
138、le AI 34(Montreal)Future of Work 35(Paris)Data Governance 36(Montreal)Innovation&Commercialisation 37(Paris)Global Partnership on AI StructureThe 2022 GPAI annual report 38 states that“during this time of escalating geopolitical tensions and economic instability,countries cannot afford to withdraw a
139、nd make their own rules.We need a united front,and we need to speak with one voice on AI”.The annual report also made 12 recommendations for GPAI members in 2023,under four pillars which are to:1.Initiate practical actions that can responsibly leverage the potential of AI to advance the UN Sustainab
140、le Development Goals(SDGs)2.Nurture and adopt participatory governance tools that support the inclusion of communities impacted by AI systems(from AI design to deployment)3.Steer emerging technical frontiers towards the public interest and the protection of rights4.Support broader access to the econ
141、omic benefits of AI and data technologiesTrust in Generative AI Global ChallengeIn July 2023 a Global Challenge to Build Trust in the Age of Generative AI 39 was jointly launched by GPAI,OECD,UNESCO,IEEE Standards Association,AI Commons(a partnership of AI stakeholders focussed on bringing the benef
142、its of AI to everyone)and VDE(a technology company focussed on science,standards and testing).Over a two year period,the challenge aims to bring together technologists,policy makers,researchers,experts and AI practitioners to propose and test innovative ideas that promote trust in generative AI syst
143、ems and counter the potential spread of disinformation which could be enhanced by generative AI tools.The challenge hopes to provide tangible evidence about what works and promote approaches that could be implemented and scaled around the world.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tou
144、rism Council19AI Industry Voluntary Governance MeasuresWhile governments around the world develop their laws and regulations for AI,the private sector has also developed a series of voluntary guardrails,organisations and approaches.The World Ethical Data Foundation has over 25,000 individuals as mem
145、bers,from a variety of technology companies and has produced a voluntary framework for AI developers to use when making AI products and services 40.The framework contains a checklist of 84 questions to guide the development of safe and responsible AI solutions.For example,the framework helps develop
146、ers to consider the data protection laws of various countries and whether it is clear to the user of an AI system that they are interacting with AI.Some example questions from the framework include:Is there any protected or copyrighted material in the training data such as Personally Identifiable In
147、formation(PII),Payment Card Industry(PCI)data,and Protected Health Information(PHI)?Is the team of people who are working on selecting the training data from a diverse set of backgrounds and experiences to help reduce the bias in the data selection?What are the possible dangers of the model?Is there
148、 a plan for the worst case scenarios?Other groups are emerging too.MLCommons41 is a collaborative organisation of engineers and scientists working together to make AI machine learning systems better for everyone and the Partnership for AI(PAI)42 is a collection of industry,academic,civil society and
149、 media organisations working together to share insights and develop actionable guidance material which can be used to inform government policy and advance public understanding of AI.The Partnership for AI group comprises of more than 100 organisations from 17 countries and is organised into five wor
150、kstreams.In 2023,Microsoft joined PAIs collective action promoting responsible practices in the development,creation and sharing of media generated by AI often called synthetic media43.This first of a kind effort was prompted by a belief among many industry experts that the evolving landscape of AI
151、generated media represents a new and exciting frontier for creativity and expression,but also holds the troubling potential for misinformation and manipulation if left unchecked.Eric Horvitz,Microsoft Chief Scientific Officer said“We applaud and support PAIs initiative to build a strong,collaborativ
152、e community dedicated to protecting the public from malicious actors who aim to manipulate,sow discord,and to erode trust in the digital information we consume.”44The Safety Critical AI workstream is developing an AI Incident Database(AID),to track when AI systems fail and aims to be a useful global
153、 repository of problems experienced in the real world as a result of using AI.This can help to better anticipate and manage future risks.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council20Partnership for AI:AI Incident Database(AID)The Partnership for AI is also developing Shared P
154、rotocols for the Responsible Deployment of Large Language Models(LLMs)which is the underlying technology powering sophisticated AI chatbots(such as Microsoft Copilot and Google Gemini).These protocols will be an outcome from their Global Task Force for Inclusive AI 45 which was supported at the 2023
155、 Summit for Democracy by the Director of the US White House Office for Science&Technology(OSTP)46 who said this will be a first of its kind coalition,with partners from the private sector,academia,and civil society,focused on developing research and design methods for AI that will root out algorithm
156、ic discrimination,safeguard rights,and promote equitable innovation.This global task force responds to the call of the AI Bill of Rights,and were happy to see this important work getting underway.The White House and US President have also held meetings with the AI Industry and in July 2023,the White
157、 House and seven leading AI companies(including WTTC industry members,Microsoft and Google)agreed to voluntary measures that would help manage the risks of AI and move towards the safe,secure and transparent development of AI systems 47.In September 2023,eight other companies agreed to join the volu
158、ntary measures,including WTTC industry member IBM.As part of this agreement the companies committed to(among other measures):Security test their AI systems(by internal and external experts)before their releaseEnsure that people can spot AI by implementing watermarksPublicly report AI capabilities an
159、d limitations on a regular basisResearch the risks such as bias,discrimination and the invasion of privacyThe White House noted that one of the aims of the voluntary agreement was to make it easy for people to tell when content is created by AI,with watermarking of AI generated content also an impor
160、tant topic for the EU,with EU Commissioner Thierry Breton tweeting after the announcement that he was looking forward to“pursuing discussion notably on watermarking”.In late 2023 the White House also issued an Executive Order on AI and is pursuing legislative options to help America lead the way in
161、responsible AI innovation.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council21A week after the White House and AI industry voluntary agreement was announced,four AI companies(Microsoft,OpenAI,Google and Anthropic)launched the Frontier Model Forum 48,49 as the founding members of a n
162、ew industry body that will focus on the safe and responsible development of frontier AI models,which they defined as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models,and can perform a wide variety of tasks.The Frontier Model Foru
163、ms four core objectives are to:1.Advance AI safety research2.Identify best practices(for the development&implementation of frontier AI models)3.Collaborate with policy makers,academics,businesses&civil society(to share knowledge about trust and safety risks)4.Support AI applications that can address
164、 the worlds greatest challenges(such as climate change,early cancer detection and combating cyber threats)The Frontier Model Forum50,51 has appointed Chris Meserole as its first Executive Director who comes to the role with deep expertise in technology policy,governance and safety having most recent
165、ly served as Director of AI&Emerging Technologies at the Brookings Institution.In October 2023,the Frontier Model Forum also announced a new AI Safety Fund,of more than$10 million(USD)to promote research in the field of AI safety.The AI Safety Fund will support independent researchers from around th
166、e world affiliated with academic institutions,research institutions,and startups.The initial funding comes from Microsoft,OpenAI,Google and Anthropic,with additional philanthropic donations from the Patrick J.McGovern Foundation,the David&Lucile Packard Foundation,Eric Schmid,and Jaan Tallinn.Togeth
167、er this amounts to over$10 million(USD)in initial funding.The voluntary AI commitments made at the White House,included a pledge to facilitate third-party discovery and reporting of vulnerabilities in AI systems.The AI Safety Fund is therefore an important part of fulfilling that commitment by provi
168、ding the external community with funding to better evaluate and understand frontier systems.The primary focus of the fund will be to support the red teaming of AI models which will help develop and test evaluation techniques for the potentially dangerous capabilities of frontier systems.Red Teaming
169、is an ethical attempt to play the enemy and simulate the tactics and techniques of those who may attempt to use AI for wrong and dangerous reasons.The members of the Frontier Model Forum believe that funding in this area is critical to raising safety and security standards for AI systems.It will als
170、o provide valuable insights into the mitigation and control measures that industry,governments,and civil society may need to adopt for very advanced AI systems.The Fronter Model Forum also committed to continue working with other collaborative organisations such as the Partnership for AI and MLCommo
171、ns,and plans to feed its work into other government and multilateral activities such as UN AI initiatives,OECD work on AI risks and the G7 recommendations on AI,which includes a voluntary Code of Conduct 52 for AI developers and International Guiding Principles on AI 53 which were adopted by the G7
172、countries in late 2023.RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council22ACKNOWLEDGEMENTS AUTHORJames McDonaldDirector,Travel TransformationWorld Travel&Tourism Council(WTTC)CONTRIBUTORSJulie ShainockManaging Director,Travel,Transport&Logistics(TTL)MicrosoftShane OFlahertyGlobal D
173、irector,Travel,Transportation and HospitalityMicrosoftSmith CodioDirector,Customer Experience of Automotive,Mobility&Transportation IndustryMicrosoftDESIGNName Zoe RobinsonDISCLAIMERArtificial Intelligence(AI)is a fast evolving area and the information in this document is correct up to the date of p
174、ublication of this report.A separate document from WTTC entitled“Artificial Intelligence:Global Strategies,Policies&Regulations”accompanies this report.This provides useful additional detail on international government approaches to AI and will be periodically updated by WTTC as AI develops and expa
175、nds around the world.The Voice of Travel&Tourism.WTTC promotes sustainable growth for the Travel&Tourism sector,working with governments and international institutions.Council Members are the Chairs,Presidents and Chief Executives of the worlds leading private sector Travel&Tourism businesses.For mo
176、re information,visit:www.WTTC.orgMicrosofts advancements in AI are grounded in our companys mission to empower every person and organisation on the planet to achieve more from helping people be more productive to solving societys most pressing challenges.For more information,visit: ARTIFICIAL INTELL
177、IGENCE(AI)World Travel&Tourism Council23REFERENCES1 Goldman Sachs AI Blog(https:/ Goldman Sachs Potentially Large Economic Effects of AI on Economic Growth(https:/www.gspub- Future of Life Institute Open Letter on AI Risks(https:/futureoflife.org/open-letter/pause-gi-ant-ai-experiments)4 Center for
178、AI Safety Statement on AI Risks(https:/www.safe.ai/statement-on-ai-risk)5 BCS Charter Institute for IT Open Letter on AI(https:/www.bcs.org/articles-opinion-and-research/bcs-open-letter-calls-for-ai-to-be-recognised-as-force-for-good-not-threat-to-humanity)6 UK Government Announcement of Global AI S
179、afety Summit(https:/www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence)7 NIST AI Risk Management Framework(https:/www.nist.gov/itl/ai-risk-management-framework)8 Microsoft Responsible AI Principles(https:/ Google Responsible AI Principles(https:/ai.google/responsibi
180、lity/principles)10 IBM Responsible AI Principles(https:/ Survey Responsible AI Index(https:/.au/2022-responsible-ai-index)12 Sydney University Ethical AI Course(https:/www.uts.edu.au/data-science-institute/our-research/ethical-artificial-intelligence)13 MIT Ethics of AI Course(https:/prolearn.mit.ed
181、u/ethics-ai-safeguarding-humanity)14 UNIDIR Towards Responsible AI in Defence(https:/unidir.org/wp-content/uploads/2023/05/Brief-ResponsibleAI-Final.pdf)15 OECD Recommendations on AI(https:/legalinstruments.oecd.org/en/instruments/OECD-LE-GAL-0449)16 2019 G20 Summit Declaration,Japan(https:/www.mofa
182、.go.jp/policy/economy/g20_summit/osaka19/en/documents/final_g20_osaka_leaders_declaration.html)17 UNESCO Recommendations on the ethical use of AI(https:/www.unesco.org/en/artificial-intelli-gence/recommendation-ethics)18 Microsoft&UNESCO partner to promote Ethical AI(https:/www.unesco.org/en/article
183、s/unes-co-and-microsoft-commit-promoting-unescos-recommendation-ethics-ai)19 UNESCO Global AI Ethics&Governance Observatory(https:/www.unesco.org/ethics-ai/en)20 UNDP AI Readiness Assessment(https:/www.undp.org/blog/are-countries-ready-ai-how-they-can-ensure-ethical-and-responsible-adoption)21 2022
184、Oxford Insights Government AI Readiness Index(https:/ Rome Call AI Ethics(https:/www.romecall.org)23 UN FAO Abrahamic Commitment to the Rome Call(https:/www.fao.org/e-agriculture/news/ai-eth-ics-abrahamic-commitment-rome-call)24 Pope Francis 2024 World Day of Peace message(https:/www.vatican.va/cont
185、ent/francesco/en/mes-sages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html)25 Standford University HAI AI Index Report(https:/aiindex.stanford.edu/report)26 EU Digital Strategy(https:/commission.europa.eu/strategy-and-policy/priorities-2019-2024/eu-rope-fit-digital-age_en)27 UK A
186、I Safety Institute https:/assets.publishing.service.gov.uk/media/65438d159e05fd0014be7bd9/introducing-ai-safety-institute-web-accessible.pdf28 US AI Safety Institute(https:/www.nist.gov/artificial-intelligence/artificial-intelligence-safety-insti-tute)29 Canada AI Strategy(https:/ised-isde.canada.ca
187、/site/ai-strategy/en)30 China AI Development Plan 2017(https:/ China Generative AI Draft Proposals(http:/ Singapore Smart Nation(https:/www.smartnation.gov.sg)33 Global Partnership for AI GPIA(https:/gpai.ai)RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)World Travel&Tourism Council2434 GPIA Responsible AI
188、Working Group Papers(https:/gpai.ai/projects/responsible-ai)35 GPIA Future of Work Working Group Papers(https:/gpai.ai/projects/future-of-work)36 GPIA Data Governance Working Group Papers(https:/gpai.ai/projects/data-governance)37 GPIA Innovation&Commercialisation Working Group Papers(https:/gpai.ai
189、/projects/innova-tion-and-commercialization)38 GPIA 2022 Annual Report(https:/gpai.ai/projects/gpai-multistakeholder-expert-group-report-no-vember-2022.pdf)39 Global Challenge to Build Trust in Generative AI(https:/oecd.ai/en/wonk/global-challenge-partners)40 World Ethical Data Foundation Voluntary
190、Framework(https:/openletter.worldethicaldata.org/en/openletter)41 MLCommons(https:/mlcommons.org/en)42 Partnership for AI(https:/partnershiponai.org)43 PAI Responsible Practices for Synthetic Media Framework for Collective Action(https:/synthetic-media.partnershiponai.org)44 PAI Announces that Micro
191、soft and Meta join the Framework for Collective Action on Synthetic Media(https:/partnershiponai.org/pai-announces-meta-and-microsoft-to-join-framework-for-collective-action-on-synthetic-media)45 PAI Global Task Force for Inclusive AI(https:/partnershiponai.org/global-task-force-for-inclusive-ai)46
192、US White House Statement at Summit for Democracy(https:/www.whitehouse.gov/ostp/news-up-dates/2023/03/30/remarks-of-ostp-director-arati-prabhakar-at-the-summit-for-democracy)47 US White House voluntary agreement with AI companies(https:/www.whitehouse.gov/brief-ing-room/statements-releases/2023/07/2
193、1/fact-sheet-biden-harris-administration-secures-voluntary-commit-ments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai)48 Google Press Release Frontier Model Forum(https:/blog.google/outreach-initiatives/public-policy/google-microsoft-openai-anthropic-frontier-model-f
194、orum)49 Microsoft Press Release Frontier Model Forum(https:/ Frontier Forum(https:/www.frontiermodelforum.org)51 Frontier Forum announces Executive Director and new AI Safety Fund(https:/www.frontiermodelfo-rum.org/updates/announcing-chris-meserole)52 G7 Hiroshima Process Code of Conduct for Advance
195、d AI Systems(https:/digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems)53 G7 Hiroshima Process International Guiding Principles for Advanced AI Systems(https:/digital-strate-gy.ec.europa.eu/en/library/hiroshima-process-international-guiding-prin
196、ciples-advanced-ai-system)World Travel&Tourism Council:Nature Positive Travel&Tourism In Action 2024.All rights reserved.The copyright laws of the United Kingdom allow certain uses of this content without our(i.e.the copyright owners)permission.You are permitted to use limited extracts of this conte
197、nt,provided such use is fair and when such use is for non-commercial research,private study,review or news reporting.The following acknowledgment must also be used,whenever our content is used relying on this“fair dealing”exception:“Source:World Travel and Tourism Council:Responsible Artificial Inte
198、lligence(AI):Overiew of AI Risks,Safety&Governance 2024.All rights reserved.”If your use of the content would not fall under the“fair dealing”exception described above,you are permitted to use this content in whole or in part for non-commercial or commercial use provided you comply with the Attribut
199、ion,Non-Commercial 4.0 International Creative Commons Licence.In particular,the content is not amended and the following acknowledgment is used,whenever our content is used:“Source:World Travel and Tourism Council:Responsible Artificial Intelligence(AI):Overiew of AI Risks,Safety&Governance 2024.All
200、 rights reserved.Licensed under the Attribution,Non-Commercial 4.0 International Creative Commons Licence.”You may not apply legal terms or technological measures that legally restrict others from doing anything this license permits.W T TC S T R AT EG I C PA RT N E R S RESPONSIBLE ARTIFICIAL INTELLIGENCE(AI)