In collaboration with the Global Cyber Security Capacity Centre, University of Oxford

Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards

WHITE PAPER
JANUARY 2025

Transformation of Industries in the Age of AI
AI Governance Alliance

Images: Getty Images

Disclaimer
This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

© 2024 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Contents
Reading guide
Foreword
Executive summary
Introduction: The scope
1 The context of AI adoption: from experimentation to full business integration
2 Emerging cybersecurity practice for AI
2.1 Shift left
2.2 Shift left and expand right
2.3 Shift left, expand right and repeat
2.4 Taking an enterprise view
3 Actions for senior leadership
4 Steps towards effective management of AI cyber risk
4.1 Understanding how the organization's context influences the AI cyber risk
4.2 Understanding the rewards
4.3 Identifying the potential risks and vulnerabilities
4.4 Assessing potential negative impacts to the business
4.5 Identifying options for risk mitigation
4.6 Balancing residual risk against the potential rewards
4.7 Repeat throughout the AI life cycle
Conclusion
Contributors
Endnotes

Reading guide

The World Economic Forum's AI Transformation of Industries initiative seeks to catalyse responsible industry transformation by exploring the strategic implications, opportunities and challenges of promoting artificial intelligence (AI)-driven innovation across business and operating models.

This white paper series explores the transformative role of AI across industries. It provides insights through both broad analyses and in-depth explorations of industry-specific and regional deep dives. The series includes:

- Cross-industry (impact on industrial ecosystems): AI in Action: Beyond Experimentation to Transform Industry; Leveraging Generative AI for Job Augmentation and Workforce Productivity; Artificial Intelligence's Energy Paradox: Balancing Challenges and Opportunities; Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards
- Industry- or function-specific (impact on industries, sectors and functions): Advanced manufacturing and supply chains: Frontier Technologies in Industrial Operations: The Rise of Artificial Intelligence Agents; Financial services: Artificial Intelligence in Financial Services; Media, entertainment and sport: Artificial Intelligence in Media, Entertainment and Sport; Healthcare: The Future of AI-Enabled Health: Leading the Way; Transport: Intelligent Transport, Greener Future: AI as a Catalyst to Decarbonize Global Logistics; Telecommunications: upcoming industry report; Consumer goods: upcoming industry report
- Regional-specific (impact on regions): Blueprint to Action: China's Path to AI-Powered Industry Transformation
- Additional reports to be announced.

As AI continues to evolve at an unprecedented pace, each paper in this series captures a unique perspective on AI, including a detailed snapshot of the landscape at the time of writing. Recognizing that ongoing shifts and advancements are already in motion, the aim is to continuously deepen and update the understanding of AI's implications and applications through collaboration with the community of World Economic Forum partners and stakeholders engaged in AI strategy and implementation across organizations. Together, these papers offer a comprehensive view of AI's current development and adoption, as well as a view of its future potential impact. Each paper can be read stand-alone or alongside the others, with common themes emerging across industries.

Foreword

Adoption of artificial intelligence (AI) is accelerating across the economy as organizations seek to harness its potential rewards. To support this, the AI Governance Alliance, launched by the World Economic Forum in June 2023, was established to provide guidance on the responsible design, development and deployment of AI systems.

Historically, insufficient attention has been given to the potential cybersecurity risks of AI adoption and use. This report highlights the steps that need to be taken to ensure that cybersecurity is fully embedded within the AI adoption life cycle. Amid a business landscape that is increasingly focused on responsible innovation, this report offers a clear executive perspective on managing AI-related cyber risks. It empowers leaders to invest and innovate in AI with confidence, and exploit emerging opportunities for growth. To unlock full potential, it is essential to develop a comprehensive understanding of these cyber risks and related mitigation measures.

Throughout the report, we explore a central question: How can organizations reap the benefits of AI adoption while mitigating the associated cybersecurity risks? This report provides a set
of actions and guiding questions for business leaders, helping them to ensure that AI initiatives align with overall business goals and stay within the scope of organizations' risk tolerance. It additionally offers a step-by-step approach to guide senior risk owners across businesses on the effective management of AI cyber risks. This approach includes: assessing the potential vulnerabilities and risks that AI adoption might create for an organization, evaluating the potential negative impacts to the business, identifying the controls required and balancing the residual risk against anticipated benefits. Though focused on AI, the approach can be adapted for secure adoption of other emerging technologies.

This report draws on insights from a World Economic Forum initiative, developed in collaboration with the Global Cyber Security Capacity Centre (GCSCC) at the University of Oxford. Through collaborative workshops and interviews with cybersecurity and AI leaders from business, government, academia and civil society, participants explored key drivers of AI-related cyber risks and identified specific capability gaps that need to be addressed to secure AI adoption effectively.

Sadie Creese
Professor of Cybersecurity; Director and Technical Board Chair, Global Cyber Security Capacity Centre, University of Oxford

Jeremy Jurgens
Managing Director, World Economic Forum

Executive summary

AI technologies offer significant opportunities, and their application is becoming increasingly prevalent across the economy. As AI system compromise can have serious business impacts, organizations should adjust their approach to AI if they are to securely benefit from its adoption.

Several foundational features capture best practices for securing and ensuring the resilience of AI systems:

1. Organizations need to apply a risk-based approach to AI adoption.
2. A wide range of stakeholders need to be involved in managing the risks end-to-end within the organization. A cross-disciplinary AI risk function is required, involving teams such as legal, cyber, compliance, technology, risk, human resources (HR), ethics and relevant front-line business units according to specific needs and contexts.
3. An inventory of AI applications can help organizations to assess how and where AI is being used within the organization, including whether it is part of the mission-critical supply chain, helping reduce "shadow AI" and risks related to the supply chain.
4. Organizations need to ensure adequate discipline in the transition from experimentation to operational use, especially in mission-critical applications.
5. Organizations should ensure that there is adequate investment in the essential cybersecurity controls needed to protect AI systems and ensure that they are prepared to respond to and recover from disruptions.
6. It is necessary to combine both pre-deployment security (i.e. the "security by design" principle, also called "shift left") and post-deployment measures to monitor and ensure resilience and recovery of the systems in use (referred to in this report as "expand right"). As the technology evolves, this approach needs to be repeated throughout the life cycle. This overall approach is described
in the report as "shift left, expand right and repeat".
7. Technical controls around the AI systems themselves need to be complemented by people- and process-based controls on the interface between the technology and business operations.
8. Care needs to be paid to information governance: specifically, what data will be exposed to the AI and what controls are needed to ensure that organizational data policies are met.

It is crucial for top leaders to define key parameters for decision-making on AI adoption and associated cybersecurity concerns. This set of questions can guide them in assessing their strategies:

1. Has the appropriate risk tolerance for AI been established, and is it understood by all risk owners?
2. Are risks weighed against rewards when new AI projects are considered?
3. Is there an effective process in place to govern and keep track of the deployment of AI projects?
4. Is there a clear understanding of organization-specific vulnerabilities and cyber risks related to the use or adoption of AI technologies?
5. Is there clarity on which stakeholders need to be involved in assessing and mitigating the cyber risks of AI adoption?
6. Are there assurance processes in place to ensure that AI deployments are consistent with the organization's broader policies and legal and regulatory obligations?

By prioritizing cybersecurity and mitigating risks, organizations can safeguard their investments in AI and support responsible innovation. A secure approach to AI adoption not only strengthens resilience but also reinforces the value and reliability of these powerful technologies, allowing organizations to innovate confidently.

Introduction: The scope

This report is part of a series exploring the transformative role of artificial intelligence (AI) across industrial ecosystems, along with cross-industry, industry-specific and regional perspectives. It is specifically focused on how organizations can reap the benefits of AI adoption while mitigating the associated cybersecurity risks.

The business benefits of adopting AI can be considerable, but the cyber risks of embedding these technologies into an organization are not always considered from the outset. By adopting AI, businesses may find themselves vulnerable to new threats that they do not yet know how to defend themselves against.

The impact of AI on cybersecurity can be considered to fall into three broad categories:

- The use of AI by threat actors: Threat actors are using AI to enhance their capabilities and make their tactics, techniques and procedures more potent, and attacks more effective.
- The use of AI by defenders: In parallel, cyber defenders are harnessing AI to enhance cybersecurity capabilities, facilitating wider prevention, more accurate threat detection, autonomous remediation and more rapid and effective incident response.
- Cybersecurity for AI: The use of AI is creating an expanded attack surface that might be exploited by threat actors. Existing methods need to be extended to address new vulnerabilities that are inherent in AI, but that may not be as relevant for "classical" IT systems.

This report focuses on the third of these: namely, the need to adopt AI systems with due consideration for the emergent cyber risks. It
contains guidance for business leaders and senior risk owners on managing the cyber risks associated with the implementation of AI technologies while innovating in their use of AI. Cyber risks related to AI adoption have to be considered by business leaders and senior risk owners alike.

FIGURE 1: The triangle of AI impacts on cybersecurity. The figure maps three impacts of AI on cybersecurity: more potent cyberattacks (a next-generation cyber arms race driven by threat interest and potential for collateral damage; the toxicity of cyberspace increases and the targeting of victims becomes more effective); enhanced cyber defence tools (better prevention and attack detection, and more effective incident response; cybersecurity capabilities need to innovate to protect the business, and the consequences of attacks are tightly linked to business processes); and a wider attack surface and cyber harms to the enterprise (a new attack surface offers new targets and attack vectors, which will need to be defended, with risks propagating across businesses).

BOX 1: The use of AI by threat actors

Cybercriminals can harness AI capabilities to amplify the scale, sophistication and speed of their malicious activities, presenting unprecedented challenges in cybersecurity defence.

Impersonation, social engineering and spear phishing: The criminal use of AI has not only bolstered the scope and efficiency of cybercrime (including identity theft, fraud, data privacy violations and intellectual property breaches), but
has also lowered the barriers to entry for criminal networks that previously lacked the technical skills.1 A research study found that large language model (LLM)-automated phishing can lead to an over 95% reduction in costs, while maintaining or even exceeding previous success rates.2

Reconnaissance: AI has enhanced reconnaissance efforts for cybercriminals by automating and refining the information-gathering process. Attackers can efficiently analyse vast amounts of data from various sources, such as by scraping social media, public records and network traffic, to identify potential targets and vulnerabilities. Though not a novel use case, AI tools can process and correlate this data with greater speed and accuracy, making target selection and external surface scanning more efficient and effective.3 For example, AI can detect and map out organizational structures, pinpoint weaknesses in security configurations and predict likely security behaviours and responses.

Discovering and exploiting zero-days: AI allows cybercriminals to accelerate the process of discovering unpatched vulnerabilities, such as zero-days (unknown vulnerabilities that do not have any patch or fix available), more efficiently and at scale. AI-enabled reconnaissance tools not only streamline the identification of zero-day vulnerabilities but also make it easier to create custom malware capable of exploiting these weaknesses before patches can be deployed. Researchers have also found that multiple GPT-4 models working in tandem are capable of autonomously exploiting zero-day vulnerabilities.4

Compromising AI systems: This involves cybercriminals exploiting weaknesses in AI training datasets via data poisoning attacks,5 model architectures and operational frameworks. Data poisoning can degrade a model's performance and reliability, leading to erroneous outputs6 with far-reaching, sector-specific consequences. In the financial sector, for example, a successful data poisoning attack could manipulate algorithms used for credit scoring or fraud detection. Such outcomes not only undermine the integrity of systems, but also expose institutions to significant financial losses and reputational damage.

"In the next decade, companies will be defined by their AI strategy: innovators will succeed, while resistors will vanish. Today's chief information security officers (CISOs) play a critical role in this journey, and must move from blocking
the use of AI, to enabling it. But with the technology still in its infancy, the lack of understanding around AI has the potential to shift the balance of power to threat actors. The only viable defence is fighting AI with AI: developing personalized, adaptive security approaches that can protect an organization at speed and at scale."
Matthew Prince, CEO and Co-Founder, Cloudflare

1 The context of AI adoption: from experimentation to full business integration

Understanding business context is essential for identifying the security needs of AI.

Cybersecurity requirements for AI technologies should be considered in tandem with business requirements. How a business is using AI should determine security needs: what to protect and when. There are numerous influencing factors that drive cybersecurity requirements, including:

- the criticality of the business processes and control systems using AI, and the degree of dependency these processes have on the AI system outputs;
- the sensitivity of the data and devices that AI is involved in processing and controlling; and
- the risk culture of the organization and its approach to digital innovation.

Businesses are innovating with AI in a range of ways, and are at various stages in the adoption cycle:

- Experimentation and piloting: Much of current AI deployment by businesses is explorative or experimental. According to research from the AI Governance Alliance, organizations are commonly using "smaller, use-case-based approaches that emphasize ideation and experimentation".7 There is, however, a risk of experiments becoming embedded within live business operations without the rigorous risk assessment, system testing and user training required.

- Unconscious use of AI through product features (off-the-shelf software): For some organizations, the adoption process involves a more gradual and at times passive approach. Under this approach, AI is introduced in enterprise processes through new features or the enhancement of tools and platforms already available in an organization's ecosystem, e.g. enterprise resource planning (ERP), HR and IT management platforms. This process presents the risk of introducing shadow AI. A lack of formal roll-out programmes may decrease transparency, which can in turn weaken management processes and leadership oversight. Businesses require visibility and close coordination with vendors to assess AI feature capabilities and effectively evaluate potential risks. Furthermore, lax software management in organizations can amplify this type of risk due to the introduction of AI through unsanctioned or unmonitored tools (e.g. open source tools used by developers, browsers or software plugins).

- Roll-out and integration into live operations: Some organizations have already identified the business opportunities presented by AI and are moving to full deployment. However, they may not be conducting proper cyber risk assessments or implementing appropriate controls. Organizations need to ensure that there's adequate discipline around the transition from experimentation to operational use, especially in mission-critical applications. The cybersecurity market's ability to support specialized tools for protecting the confidentiality, integrity and availability of related systems and services may also not be mature enough to enable these organizations to implement AI systems securely.

- Disparate projects across the organization: In most large businesses, there are multiple projects exploring the use of AI across different functions and channels. These are not necessarily following a coordinated process, so assessment of risk to the business may not be sufficiently aligned. This applies to both full roll-out and gradual creep scenarios.

- Hosted by third party versus on-premises: Often, businesses are using third-party AI services hosted in the cloud. Such operations do not absolve the business from managing cybersecurity of the AI assets, but they do change the mitigation controls available and create a need to negotiate appropriate protections from the suppliers.

- Internal AI tools development: Many organizations have started offering AI features in their public digital services. Some of these are based on existing commercial or open-source tools. Others are developed internally. In either case, security requirements need to be properly established at the development stage.

Organizations may also be entering the decision-making process on risk at different stages:

- AI technologies may already have been embedded into the business processes or core assets. In this case, risk owners need to map what has been implemented and assess how to manage security retroactively.
- In other cases, the process might start with a risk-reward-based decision about whether to embed AI into operations or products. Under this approach, the AI system is only moved into the live environment when the rewards are determined to outweigh or justify the risks. This risk-reward-based decision necessitates a proactive approach to security, which can be integrated during the design phase.

"AI holds enormous potential to advance the way people live and work, but we must ensure that we apply these powerful tools ethically and sustainably. Rapid advances in AI create opportunities but also introduce significant cybersecurity and governance challenges. As AI systems become more integrated into our lives, we must build secure AI platforms that protect against adversarial attacks and safeguard data integrity by following secure-by-design principles. Additionally, we need to introduce the appropriate level of governance in both development and usage to ensure trustworthy AI."
Antonio Neri, President and Chief Executive Officer, Hewlett Packard Enterprise

2 Emerging cybersecurity practice for AI

Securing AI systems demands early mitigation, ongoing operational security, enterprise-level risk management,
73、orthy AI.Antonio Neri,President and Chief Executive Officer,Hewlett Packard EnterpriseArtificial Intelligence andCybersecurity:Balancing Risks andRewards92Emerging cybersecurity practice for AISecuring AI systems demands early mitigation,ongoing operational security,enterprise-level risk management,
74、and frequent reassessment of vulnerabilities.While the understanding of attackers and defenders use of AI is well established,the recognition of the AI system as an asset to be protected is relatively new.Literature is emerging on the cybersecurity risks associated with AI systems.A range of initiat
75、ives are seeking to outline and categorize the cybersecurity threats and risks emerging from the use of AI,including from MITRE8 and the UK National Cyber Security Centre(NCSC).9 Emerging guidance and policies are highlighting requirements needed to address these risks,including(but not limited to):
76、The Dubai AI Security Policy10 The Cyber Security Agency(CSA)of Singapores Guidelines and Companion Guideon Securing AI Systems11 The UK Department for Science,Innovation and Technologys(DSITs)developing AI Cyber Security Code of Practice12 The National Institute of Standards and Technologys(NISTs)t
77、axonomy of attacks andmitigations13 The Open Worldwide Application Security Projects(OWASP)AI Exchange14Simultaneously,evidence of real-world AI cybersecurity vulnerabilities,threats and incidents is being collected,and numerous repositories and databases are being created.15Artificial Intelligence
78、andCybersecurity:Balancing Risks andRewards10AI systems do not exist in isolation.Organizations need to consider how the business processes and data flows built around AI systems can be designed in a way that reduces the business impact that a cybersecurity failure might cause.Where assurance on the
79、 security of underlying AI or on the effectiveness of defences is limited,its crucial to consider how any compromise might be overcome.This could include implementing additional controls outside the system itself,or reviewing what data should or should not be exposed to the AI.To enable such an end-
80、to-end view,risks and controls need to be integrated into wider governance structures and enterprise risk management processes.Alongside shifting left and expanding right,any approach for mitigating the cybersecurity risks associated with AI adoption needs to consider how the technology will evolve
81、and how business use will change over time.This should be facilitated via repeated re-evaluation of risks and controls,alongside frequent rehearsal and regular testing of the organizations preparedness(e.g.war gaming,tabletop exercises,disaster recovery drills).This presents another opportunity to f
82、urther integrate cyber risk assessment and intelligence capabilities into the resilience cycle and adjust testing strategies based on evolving AI risk profiles and threat actor developments observed across the industry.This means that leaders need to expand right,i.e.embed cyber resilience,and repea
83、t.2.4 Taking an enterprise view 2.3 Shift left,expand right and repeat The question of how to secure AI is closely related to a wider body of work related to AI safety.This work is a significant aspect of the AI Governance Alliances(AIGAs)agenda.This approach promotes the need to“shift left”,i.e.imp
84、lement safety guardrails early in the AI system life cycle(namely,at the building and pre-deployment stages)to mitigate related risks.16 As an example of safe and secure-by-design AI technologies,it mandates the use of processes that address inherent vulnerabilities in the AI systems and services be
85、ing used and procured by organizations.Not all risks can be mitigated at the building and pre-deployment stages.It is not possible to eliminate all system vulnerabilities,and there will always be threat actors who will succeed in circumventing the mitigating measures in place.Tocomplement the securi
86、ty-by-design practices that help organizations develop AI technologies securely and ethically,businesses need to implement cybersecurity practices that will protect AI systems once they are in use.This requires:An understanding of the wider risks faced by businesses using and depending on AI An unde
87、rstanding of the risks associated with the criticality of the data being processed Effective operational cybersecurity capabilities to protect against these risks and detect attacks Effective response and recovery processes to deal with incidents when they occur In short,organizations will need to b
88、oth“shift left and expand right”.2.1 Shift left2.2 Shift left and expand right Artificial Intelligence andCybersecurity:Balancing Risks andRewards113Actions for seniorleadershipLeaders decision-making on AI adoption should be guided by security considerationsLeaders are responsible for ensuring that
adoption of AI technologies aligns with their organization's goals and objectives, and that the risks that arise fall within the scope of their organization's risk tolerance.

Cutting through the hype to understand risk and reward
Before making any decision to deploy AI into core operations, businesses need to ensure that the benefit is commensurate with costs and risks. To be sure of this, businesses need to take the potential risks of AI system failures (either accidental or due to malicious attacks) into account. Because of the speed of AI evolution, the risk-reward balancing decision may need to be reviewed on a frequent basis.

Promoting AI security-by-design and by-default
Because AI is rapidly evolving and security standards are relatively immature, business leaders should be aware that some products are likely to be less secure than others, and should therefore be approached with more caution. Leaders should demand robust third-party risk management and use the organization's purchasing power to promote AI security-by-design and by-default.

Embedding AI cyber risks into cross-organizational risk management
Managing AI-related cyber risks effectively requires a multidisciplinary approach. Technology and security teams alone cannot prevent incidents from occurring. Front-line business teams need to assess the potential business impacts, and specialists, e.g. in HR and/or legal teams, need to assess the potential liabilities that might arise. They have a significant role to play in establishing contingent mitigation. Such multidisciplinary arrangements may already be embedded within the organization's enterprise risk management. If not, they will need to be created bespoke to AI challenges.

Managing the decision-making process in a large organization can be complex. Some organizations may have a central AI policy, with divisional or local leadership responsible for decision-making within that policy. Smaller organizations may be able to operate a flatter governance structure, with decisions being made by the boardroom. In both cases, it is important to be very clear about where accountability for cyber risks sits.

Ensuring adequate investment in essential cybersecurity operations
Leaders need to ensure adequate investment in the cybersecurity controls and tools that are needed to protect AI systems, and ensure that the business is prepared to respond to and recover from disruptions. Chief information security officers need to be empowered to challenge both technology teams and business teams seeking to embed the technology within their operations. Security teams should be equipped with the necessary resources to adapt their capabilities and address new threats arising from AI use within the organization. Innovation investments for AI should be coupled with security investments to ensure that security is embedded throughout the AI system life cycle. This approach will help organizations define a reusable approach for mitigating complex technology risks, leaving them better prepared for future disruptions.

Engaging with national and sector-specific strategies and standards
Business leaders should be aware of the rapidly changing regulatory environment (particularly that relating to the markets they operate in). It will be necessary to consider how the specific local and regional AI contexts, including strategies and standards, impact business operations and risks. Additionally, relevant controls will need to be put in place to ensure businesses are meeting their obligations. For many, this will mean not only a watching brief on legal and regulatory compliance matters, but also on emerging threats and technological risks.

Questions for business leaders to consider

It is crucial for business leaders to define and communicate key parameters within which decision-making on AI adoption and its associated cybersecurity can be conducted. This set of questions is designed to guide them in assessing their current strategies, identifying potential vulnerabilities and cultivating a culture of security within their organizations.

1. Has the right risk tolerance for AI technologies been established, and is it understood by all risk owners?
The organization might choose to be an early mover, recognizing the potential risks, or might take a more conservative approach. In both cases, there is a need to oversee the management of cybersecurity risks before, during and after the deployment of AI systems. The oversight and leadership scrutiny
should generate evidence that AI risks are well understood, that stretch scenarios have been considered and that choices are in line with the wider risk tolerance of the business.

2. Is there a proper balancing of the risks against the rewards when new AI projects are considered?
It is crucial to assess how the potential upsides of AI projects align with the strategic direction of the business when balanced against the novel risks these technologies might introduce. The potential rewards should be well qualified, and consideration should be given to the potential risks in any decision to use AI in operations.

3. Is there an effective process in place to govern and keep track of the deployment of AI projects within the organization?
This is particularly challenging in complex organizations in which experimentation and deployment may be occurring in multiple departments and subsidiaries. A clear process should be defined for making decisions on AI projects (including when to move them from experimentation to operational use). It is also important to monitor live AI systems to make sure users are not inadvertently exposing the organization to additional risk.

4. Is there a clear understanding of the organization-specific vulnerabilities and cyber risks related to the use or adoption of AI technologies?
There are novel vulnerabilities associated with AI technologies, such as data poisoning, inference engine sabotage and prompt jailbreaking. These could lead to operational disruption and data loss, or could exacerbate issues such as a lack of explainability and reliability, or the potential for bias. A comprehensive risk assessment is required to identify the vulnerabilities of the AI systems and the potential impact of compromise on the business. Timely access to relevant threat intelligence and advice will support greater situational awareness of the organization's risk exposure.

5. Is there clarity on which stakeholders within the organization need to be involved in assessing and mitigating the cyber risks from AI adoption?
There must be involvement from relevant front-line business teams; from legal, risk, audit and compliance; and from communications and technology. The various ways in which AI is embedded into the operational and decision-making processes of the business need to account for the possibility of security failure, and mitigating controls put in place around deployment and operation need to
limit the potential impact of adverse cyber events. The relevant accountable stakeholders should be identified. Clear responsibilities need to be set for AI-related cyber risks, and associated duties need to be clarified should a cyber incident occur.

6. Are there assurance processes in place to ensure that AI deployments are consistent with the organization's broader policies and legal and regulatory obligations (for example, relating to data protection or health and safety)?
Proposals for new AI deployments need to be tested to ensure compliance with wider organizational policies. Formal sign-off by relevant accountable stakeholders within the organization may be required. This review process will need to be revisited as the technology and its business use evolve.

4 Steps towards effective management of AI cyber risk

Evaluating the cyber risks resulting from AI adoption is essential for all organizations intending to innovate. This chapter presents a set of steps for implementing oversight and control of cyber risks related to AI adoption and use. It is designed to be used by senior risk owners within an organization. The steps aim to guide the assessment of cybersecurity risks resulting from the adoption of AI technologies, and the implementation of the necessary mitigations. The decision-making process will, in many cases, be iterative. Senior risk owners should revisit risk-reward evaluations after analysing the potential impact scenarios. The process starts with an assessment of the AI risk context of the organization, and ends with the deployment of leading practices throughout the AI life cycle.

Step 1: Understanding how the organization's context influences the AI cyber risk

There are several contextual factors that may influence the risk exposure of organizations adopting AI:

Figure 2: Characteristics influencing the cyber risks faced by organizations adopting AI. The figure groups these characteristics under five headings: position in the AI supply chain and appetite for innovation (creator of its own AI models, provider to others, consumer of AI services; early adopters versus more conservative users; level of local innovation/service provision); nature of business (sector, size/resources including for cybersecurity, safety-critical functionalities, downstream dependencies on business processes, risk tolerance, adversarial context); geographical context (national/regional cybersecurity capacity, stable cybersecurity and related regulations/legislation, operational collaboration bodies/networks such as threat-intelligence sharing, local market for cybersecurity products and services, compliance with various competing standards, infrastructure sovereignty versus outsourced capability); level of AI autonomy (from AI outputs informing decision-making by humans to AI outputs autonomously driving critical business processes, the level of oversight by humans, and the explainability of influences on critical processes); and threat context (threat actors' capability/resources, intent, frequency and credibility, from cybercriminals and activists to politically motivated sabotage).

Position in the supply chain and appetite for innovation: Organizations leading in AI innovation (either as sellers or as consumers with market-leading capabilities) are likely to face risks from using newer technologies that may contain undiscovered vulnerabilities. More conservative users that procure more mature AI technologies may face fewer risks, as more will be known about vulnerabilities and effective control practices.

Nature of business: The sectors in which the business operates can affect its risk exposure. For example, critical national infrastructure organizations may be more likely to face high threat levels from attackers motivated by high harm potential or value, and to be subject to cybersecurity regulation. The size of the business could influence its resources for implementing AI risk mitigation, while the level of dependence from other businesses downstream affects the extent to which the impacts of compromise might propagate.

Geographical context: Where the organization conducts business will have a strong influence on its cybersecurity posture and residual cyber risk. The cybersecurity capacity of the country may influence the level of cybersecurity regulation the organization is subject to. It might also affect the organization's access to a skilled professional workforce (though this may be less of an issue for large multinational organizations) and the availability of trusted sovereign cybersecurity infrastructures or threat-intelligence sharing channels.

Level of AI autonomy: Where autonomous AI drives business processes without human oversight, this may create greater risk. Lower risk is faced when there is little autonomy, or strong human oversight limits risk propagation.

Threat context: The type of threat actor faced by an organization determines the level of risk. More capable, better-resourced and motivated threat actors will create greater risk for potential victims.

It is necessary for organizations to consider how these risk contexts apply to them. This then informs later steps, during which the potential risks and impacts will be identified.
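These contextual factors can be turned into a simple screening aid for senior risk owners. The sketch below is illustrative only: the factor names, the reduction of each factor to a boolean, and the scoring bands are assumptions, not part of this white paper.

```python
from dataclasses import dataclass

# Illustrative only: fields mirror the five contextual factors of Figure 2,
# crudely reduced to booleans; the scoring bands are assumptions.
@dataclass
class AIRiskContext:
    early_adopter: bool                # position in supply chain / appetite for innovation
    critical_infrastructure: bool      # nature of business
    low_national_cyber_capacity: bool  # geographical context
    autonomous_ai: bool                # AI drives processes without human oversight
    capable_threat_actors: bool        # threat context

    def exposure_score(self) -> int:
        """Count of contextual factors that raise AI cyber-risk exposure."""
        return sum([self.early_adopter, self.critical_infrastructure,
                    self.low_national_cyber_capacity, self.autonomous_ai,
                    self.capable_threat_actors])

    def exposure_band(self) -> str:
        """Map the raw count to a coarse qualitative band."""
        score = self.exposure_score()
        return "high" if score >= 4 else "medium" if score >= 2 else "low"
```

For example, an early-adopting critical infrastructure operator facing capable threat actors, but with strong human oversight and good national cyber capacity, would land in the "medium" band under this toy scoring.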
Step 2: Understanding the rewards

There may be a lack of clarity around the true benefits of AI technologies, as use cases are still in development, making accurate risk-reward analysis challenging. However, understanding the business drivers for the implementation of AI technologies will help to promote understanding of the expected rewards that are being sought. Research by the AI Governance Alliance has informed a categorization of the opportunities that generative AI is perceived to be creating for businesses:17
- Enhancing enterprise productivity
- Creating new products or services
- Redefining industries and societies (e.g. making sectors such as healthcare more efficient and
responsive to market changes, e.g. accelerating drug discovery).

Step 3: Identifying the potential risks and vulnerabilities

It is essential to build an understanding of the proposed integration of AI in the business. This should incorporate which systems, processes, information and data are involved, as well as which stakeholders, and why. Key questions can help organizations to develop an understanding of the new risk exposure that the use of AI might bring:
1. What parts of the business might be dependent on AI, and could be impacted should the AI systems be compromised?
2. What key business value (e.g. revenue, reputation, process efficiency) needs to be protected?
3. Might
the deployment of AI put crown jewels (assets of greatest value to the organization) and broader critical assets and processes at risk?
4. What new assets and processes related to the AI system itself need to be protected?

New technology brings the potential for new vulnerabilities. These typically fall into the following categories:
- Inherent software vulnerabilities
- Vulnerabilities introduced by humans' configuration and use of the technologies, particularly since this may require new and untrained practice
- Vulnerabilities in interfaces with other digital systems, e.g. weak links between software, hardware and the operating system

Organizations need to develop an understanding of what vulnerabilities might be introduced as they adopt AI technologies, and of which security properties might be weakened should threat actors successfully exploit them. Consider Figure 3, which details the potential areas of vulnerability of the AI system:
- The core AI infrastructure and the supporting infrastructure that need to be taken into consideration
- How this could expand the attack surface, and how this infrastructure might be compromised
- The security properties that must therefore be considered at risk

Box 2: New tech, same need for security
The traditional CIA triad remains critical: the compromise of AI systems and supporting infrastructure has the potential to impact on
the Confidentiality, Integrity and Availability of data and assets. Other important security properties include:
- Explainability: the concept that human users can comprehend the outputs generated by the AI model.
- Traceability: a property of the AI that signifies whether it allows users to track its processes, including understanding the data used and how it was processed by the models.
A lack of explainability or traceability may affect the organization's ability to investigate and mitigate the impacts of an AI-system compromise.

Figure 3: AI system attack surface and security properties.

Core AI infrastructure:
- Input: data sources feeding into AI models (customer data, customer requests, internal requests, sensors, internal applications, e.g. calendars). Examples of compromise: prompt injection, model evasion (input data altering model behaviour), jailbreaking. Related security properties: data integrity (lineage, completeness, bias management, timeliness), availability of input data, confidentiality of input data.
- Model: the AI model deployed in a live environment. Examples of compromise: exploitation of vulnerabilities, alteration of model code. Related security properties: integrity of the model, reliability of the model (can it produce accurate and consistent information), model explainability and traceability, confidentiality of the model, availability of model functionality.
- Training: the process of training the AI model on datasets, which may continue during deployment. Examples of compromise: training data poisoning, compromise of the training environment. Related security properties: data integrity, availability of training data, confidentiality of training data.
- Model development and update. Examples of compromise: malign insertion of vulnerabilities (backdoors), developer errors, compromise of the development environment.
- Output: data outputted by the AI model. Examples of compromise: manipulation of data post-output (e.g. through API compromise), leakage of data post-output, otherwise preventing output data from reaching business applications. Related security properties: data integrity, data reliability, availability of output data, confidentiality of output data, explainability of output data.

Directly supporting infrastructure:
- Data storage. Examples of compromise: leakage of data, manipulation or insertion of data (leading to model poisoning).
- Underlying hardware/software stack and operating system. Examples of compromise: exploitation of vulnerabilities leading to compromise of underlying infrastructure.
- APIs and interfaces. Examples of compromise: exploitation of vulnerabilities leading to data compromise at APIs, manipulated input or output data.
- Monitoring and logging: tools for monitoring the performance and security of AI systems. Examples of compromise: manipulation of monitoring tools' integrity, data leakage from monitoring tools, compromise of monitoring tools' access, lateral movement (e.g. to access AI model code). Related security properties: integrity of monitoring information, confidentiality of monitoring and model data.

Business applications (what the output data is used for; non-exhaustive list): driving business processes, presenting information to end users/clients (recommendation engines, chatbots).
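The attack-surface view of Figure 3 can also be expressed as a small data structure that traces which security properties a given compromise puts at risk. The component names and property lists below are a condensed, assumed subset of the figure, for illustration only.

```python
# Illustrative, condensed mapping of AI attack-surface components (after
# Figure 3) to example compromises and the security properties they threaten.
ATTACK_SURFACE = {
    "input": {
        "compromises": ["prompt injection", "model evasion", "jailbreaking"],
        "properties": ["data integrity", "availability", "confidentiality"],
    },
    "model": {
        "compromises": ["vulnerability exploitation", "model code alteration"],
        "properties": ["integrity", "reliability", "explainability",
                       "confidentiality", "availability"],
    },
    "training": {
        "compromises": ["training data poisoning", "training environment compromise"],
        "properties": ["data integrity", "availability", "confidentiality"],
    },
    "output": {
        "compromises": ["post-output manipulation", "post-output data leakage"],
        "properties": ["data integrity", "reliability", "availability",
                       "confidentiality", "explainability"],
    },
}

def properties_at_risk(compromised: list[str]) -> set[str]:
    """Union of security properties threatened by the compromised components."""
    return {p for c in compromised for p in ATTACK_SURFACE[c]["properties"]}
```

A risk assessor could then query, say, `properties_at_risk(["input", "training"])` to see which properties a combined input- and training-stage compromise would place in scope.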
Step 4: Assessing potential negative impacts to the business

The negative impacts caused by the compromise of AI technologies may go beyond those associated with traditional cyber risks.

Key novel risks of AI-enabled business:
1. Limited fairness due to inherent bias in products
2. Limited explainability of the AI model, leading to reduced potential for human scrutiny
3. Unreliable outputs that decrease confidence and impede the ability to check system reliability
4. New exploitable attack surface with limited controls
5. Privacy risks relating to personal data exposure via pattern-of-life generation
6. Exposure of confidential data through (possibly accidental) inclusion in AI training datasets

These risks can lead to negative impacts on the business, including reputational damage, loss of market position, loss of revenue, and legal and regulatory violations.

Figure 4: Technical impacts of AI compromise can lead to business impacts. The figure traces how technical impacts (compromise of the integrity or availability of data fed from AI models into business applications; breach of confidentiality of the data, business-process-related IP or AI models; abuse of an organization's AI models by an adversary, e.g. using them to disseminate harmful content) affect business applications (e.g. customer-relationship management systems, accounting software, cyber-physical systems), then business processes, then propagate to dependent internal business processes and to external parties (individual users, client organizations, societal functions). The level of impact depends on the extent of human oversight versus full automation, on how interdependent internal business processes are, and on the extent to which internal processes affect external ones. A lack of explainability or traceability may affect the ability to mitigate impacts and reduce harms.

Harm-propagation trees
Attacks on AI systems can propagate further harms to businesses. They can also affect the wider ecosystem, for example through impacts on downstream clients' processes or on societal processes that affect citizens. Analysing how an initial impact event might lead to further harms can strengthen resilience planning, as a more intricate set of events can be forecasted and planned for. Harm-propagation trees are a tool for achieving this. They are a map of the negative consequences resulting from each event.18 The process of creating a harm tree starts with identifying an initial impact event, and recording any impacts that could potentially result from it. Any further impacts that might result from these new impacts are then recorded in an iterative process.
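The iterative harm-tree process can be sketched in code: each node is an impact event, its children are the further harms it may trigger, and a simple walk enumerates everything a resilience plan should cover. The structure below loosely follows the business-interruption example; the node names are illustrative, not a complete rendering of the figure.

```python
# A harm-propagation tree as nested dicts: each key is a harm event and its
# value maps to the further harms it can trigger (empty dict = leaf).
# Node names are illustrative, after the business-interruption example.
harm_tree = {
    "business interruption": {
        "incident response": {
            "forensics": {},
            "lawyer services": {},
            "PR services": {},
        },
        "digital restoration": {
            "restoration from backups": {},
            "cost for new infrastructure": {},
        },
        "notification of regulator": {
            "regulatory fines": {},
        },
    },
}

def enumerate_harms(tree, depth=0):
    """Depth-first walk yielding (depth, harm), so planners can list every
    downstream harm reachable from the initial impact event."""
    for harm, children in tree.items():
        yield depth, harm
        yield from enumerate_harms(children, depth + 1)

all_harms = [harm for _, harm in enumerate_harms(harm_tree)]
```

Building the tree iteratively, asking "what could this harm cause next?" at each new leaf, mirrors the recording process described above.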
Figure 5 shows an example of the harms a business might experience from an initial business interruption. The full scale of potential harms is broad, including the costs of incident response services such as legal and public relations (PR) services, forensics and breach counselling. It also includes other technical costs, such as for restoration and hosting during the period of compromise.

Figure 5: Harm tree example. Initial impact: business interruption. Branches include: incident response (forensics, software consultants, lawyer services, privacy/breach counselling, credit monitoring, notification, indemnity, PR services); digital restoration (e.g. recoding of website), restoration from backups, cost for new infrastructure, cloud hosting for the period of compromise and updating defences; and notification of the regulator, with potential regulatory fines.
Source: Axon, L. et al. (2019). 2019 International Conference on Cyber Situational Awareness, Data Analytics And Assessment.

Step 5: Identifying options for risk mitigation

Many existing cybersecurity control frameworks that are not AI-specific remain relevant for addressing cyber risks associated with AI adoption. What may differ is the way in which these controls need to be applied to protect the AI system, as well as any potential gaps they leave for specific risks.

Basic cyber hygiene is the foundation
It is critical to have a secure foundation of existing cybersecurity controls in place, i.e. basic cyber
hygiene, to manage the cyber risks related to AI adoption. Some key practices include:

Avoiding vulnerabilities in the AI systems
Robust threat and vulnerability management practices help remediate critical exposures detected across systems, including AI technologies. This must be complemented by secure configurations of the underlying hardware and software.

Limiting the blast radius
Implementing controls for protecting the perimeters of systems, such as segmentation of networks and databases and data-loss prevention, helps limit the impact of an initial compromise of AI systems.

Access control
Ensuring
that the AI systems and the infrastructure hosting AI algorithms and data are protected by access controls, such as multi-factor authentication and strong privileged access management (PAM). These should be embedded as foundational security measures.

Third-party risk management
Strong procurement processes for assessing the security of AI models and training data are also critical to avoiding integrity issues and reducing cyber risk exposures.

Information sharing
Organizations should collaborate with peers across businesses and governments to ensure that threat- and incident-sharing mechanisms take AI-related cyber risks into account.

Education and awareness
Leaders need to develop an understanding of both the opportunities and risks associated with AI, and invest in training programmes to enhance AI awareness, create an organization-wide culture of responsible AI adoption and help employees recognize potential risks. Training should be tailored to employees' roles.

Mind the gaps: basic cyber hygiene is not enough
Some existing critical control capabilities will need to be tailored and updated to mitigate the cyber risks related to AI adoption, while other critical control capabilities will need to be developed from scratch. Examples of the former are set out in Table 1 and examples of the latter in Table 2.

Table 1: Examples of existing control capabilities that need to be tailored
- Inventory of enterprise devices and software: ensuring that all new assets (devices and software) relating to AI infrastructure, as well as the models, are inventoried.
- Business-critical asset mapping: mapping the infrastructure supporting the new AI system, including databases and application programming interfaces (APIs), to ensure that its criticality is understood and that it is protected accordingly.
- Information governance: ensuring that the application of AI to personal and other sensitive data does not undermine organizational information governance policies and data protection regulations.
- Pre-deployment integrity processes: tailoring security-by-design processes (such as hardening and secure coding) specifically for AI data, inference models and technologies.
- Business incident response strategy: refreshing incident response procedures and business continuity plans to account for the impacts of AI-related cyber risks.
- Incident recovery tools and management: updating tools and playbooks for recovering AI systems that have been compromised (e.g. "roll-back" procedures for AI models), and defining the criteria under which AI should be switched off, if possible.
- Exercising: adapting exercises with AI-related cybersecurity incidents to cover all major scenarios.

Table 2: Examples of new control capabilities that need to be developed
- Training data security: data inputs need to be protected and managed to avoid deliberate poisoning of, and accidental damage to, the AI system.
- Prompt curation: prompts need to be curated to mitigate the risks of prompt injection and jailbreaking.
- Output verification: the integrity and reliability of AI outputs need to be verified. Currently, this is mostly driven by humans.
- Monitoring and detection: the behaviours of AI systems need to be monitored to detect manipulation in a timely manner.
- Red teaming and adversarial testing: guidelines and tools for red-teaming AI models, systems and processes using AI outputs are required. This is particularly critical for regulated sectors that already mandate such testing.
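Two of the capabilities in Table 2, prompt curation and output verification, can be illustrated with a toy sketch. A static deny-list is far weaker than real prompt-injection defences and is shown only to indicate where such checks sit in the pipeline; the phrases and thresholds are assumptions, not recommendations from this paper.

```python
# Toy illustration of where prompt curation and output verification sit in an
# AI pipeline. A keyword deny-list is NOT an adequate injection defence; it
# only shows the shape of a pre-model and post-model checkpoint.
DENYLIST = ("ignore previous instructions", "reveal your system prompt")

def curate_prompt(prompt: str) -> bool:
    """Pre-model check: reject prompts containing known injection phrases."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in DENYLIST)

def verify_output(output: str, max_len: int = 2000) -> bool:
    """Post-model check: basic sanity test before output reaches business
    applications (non-empty, within an expected size)."""
    return bool(output.strip()) and len(output) <= max_len
```

In practice, such checkpoints would combine classifier-based filtering, policy enforcement and human review; the value of the sketch is only to show that curation happens before the model is invoked and verification after.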
AI systems could be harnessed to red-team AI models with greater efficacy.

Figure 6: Application of risk controls to the attack surface. The figure maps controls onto the attack surface shown in Figure 3. For the core AI infrastructure: an inventory of core AI assets (ensuring that all new assets, devices and software relating to AI infrastructure are mapped); application software security (ring-fencing AI systems until their security is validated, before they are put into production and integrated with critical business processes); monitoring and detection (tools for monitoring AI-system behaviours to detect manipulation, and approaches and tools for verifying the integrity and reliability of AI outputs, including the role of human oversight); and penetration testing (guidelines and tools for red-teaming AI models). For the directly supporting infrastructure: an inventory of supporting infrastructure (mapping the infrastructure supporting the new AI systems, such as databases and APIs, to ensure that its criticality is understood and that it is protected accordingly); data protection (ensuring that new data requiring protection is identified, and its criticality, e.g. impact on business processes via the AI models, is mapped); and incident-response management (incident-response procedures and refreshed business-continuity plans that account for the impacts of AI-related cyber risks, and approaches and tools for recovering compromised AI systems, such as "roll-back" procedures for AI models).

Step 6: Balancing residual risk against the potential rewards

While implementing controls will reduce the cyber risk exposure of the organization, some residual risks are likely to remain.
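This residual-risk decision can be sketched as a simple calculation. The 0-1 scales, the multiplicative formula and the thresholds below are illustrative assumptions rather than guidance from this paper.

```python
# Illustrative residual-risk sketch: all scales (0-1), the formula and the
# thresholds are assumptions for demonstration, not from the white paper.
def residual_risk(inherent_risk: float, control_effectiveness: float) -> float:
    """Residual risk = inherent risk reduced by control effectiveness."""
    return inherent_risk * (1.0 - control_effectiveness)

def adoption_decision(inherent_risk: float, control_effectiveness: float,
                      reward: float, tolerance: float) -> str:
    """Accept the residual risk only if it sits within tolerance and is
    outweighed by the qualified reward; otherwise add controls or refuse."""
    rr = residual_risk(inherent_risk, control_effectiveness)
    if rr <= tolerance and reward > rr:
        return "accept"
    return "add controls or refuse"
```

The point of the sketch is the decision structure, not the numbers: leadership either accepts the residual risk explicitly or sends the project back for further mitigation.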
Decision-making on the adoption of AI should be informed by consideration of these risks in light of the potential rewards. Clarity on the qualified opportunity facilitates decision-making on residual risk exposure. Leadership needs to acknowledge the residual risk and make a decision on whether to accept or refuse it. In the case of refusal, additional controls will need to be put in place.

Step 7: Repeat throughout the AI life cycle

Threats are constantly evolving, so organizations will need to regularly review the steps outlined above to ensure they remain properly positioned. These steps are meant to be an iterative process, not a one-time activity.

Conclusion

To fully benefit from the opportunities that AI technologies can bring, organizations need to ensure that the associated risks are proactively understood and managed. This is not a task that technology and security teams can perform in isolation. The process has to involve multiple stakeholder groups within the business, including top leadership and senior risk owners. Decision-making and
investment choices need to be informed by proper evaluation of risks and rewards. The questions for business leaders and the steps for senior risk owners outlined in this report highlight key considerations, and are designed to aid decision-making processes. They can be applied to help organizations ensure that the value from these technologies is realized and sustained.

AI and its associated risks are in constant evolution. As such, it is crucial that business leaders continuously update their understanding of the technology to keep up to date. Successful businesses will be well positioned to harness cybersecurity as a competitive advantage. In the context of AI adoption, this will enable organizations to innovate confidently and build trust in their services and brands.

Security leaders have an important role to play in aiding the secure adoption of AI technology across the wider economy. The community should collaborate on a global scale to develop and align AI security tools and standards that accommodate the diverse functionalities of different AI models. The community should also work together to exchange good practices in the secure deployment of AI systems, and in the protection of these systems (and their business interfaces) when in use. There is a need to enhance collaboration between the AI and cybersecurity communities, regulators and policy-makers through dialogues and joint initiatives. It will also be crucial to establish clear accountability mechanisms for securing the AI supply chain and to provide effective incentives for security-by-design in AI products.

Lastly, it should be recognized that new tools and techniques are required to manage the novel security vulnerabilities driven by AI. While the market is maturing, remaining capability gaps should be addressed with some urgency.

Contributors

Acknowledgements

Maria Basso, Digital Technologies Portfolio Manager, Centre for the Fourth Industrial Revolution, Digital Technologies, World Economic Forum
Tal Goldstein, Head, Strategy and Policy, Centre for Cybersecurity, World Economic Forum
Jill Hoang, Initiatives Lead, AI and Digital Technologies, Centre for the Fourth Industrial Revolution, Digital Technologies, World Economic Forum
Cathy Li, Head, AI, Data and Metaverse; Member, Executive Committee, World Economic Forum
Giulia Moschetta, Initiative Lead, Centre for Cybersecurity, World
Economic Forum

We extend our thanks to all experts and leaders who contributed to the research:

Paige Adams, Group Chief Information Security Officer, Zurich Insurance Group
Bushra AlBlooshi, Senior Consultant, Research and Innovation, Dubai Electronic Security Center (DESC)
Hussain Aldawood, Cybersecurity Innovation & Partnerships Director, NEOM
Lampis Alevizos, Head, Cyber Defense Innovation, Volvo Group
Hessah Almajhad, Chief Cybersecurity Officer, Saudi Information Technology Company (SITE)
Doron Bar Shalom, Director, Strategic Product Innovation, Security, Microsoft
Alejandro Becerra, Digital Security Director, Telefonica HispAm
Mauricio Benavides, Chief Executive Officer, Metabase Q
Sarith Bhavan, Head, Cybersecurity and Technology Platform, Mubadala Investment Company
Janus Friis Bindslev, Chief Digital Risk Officer, PensionDanmark
Francesca Bosco, Chief of Staff, CyberPeace Institute
Jalal Bouhdada, Global Cybersecurity Director, DNV
Grant Bourzikas, Chief Security Officer, Cloudflare
Marijus Briedis, Chief Technology Officer, Nord Security
Niall Browne, Global Chief Security Officer, Palo Alto Networks
Ian Buffey, Chief Information Security Officer, AtkinsRéalis Group
Nicholas Butts, Director, Global Cybersecurity and AI/Emerging Tech Policy, Microsoft
Claudio Calvino, Senior Managing Director and Global Head, Data Science, FTI Consulting

Lead authors
Louise Axon, Research Fellow, Global Cyber Security Capacity Centre, University of Oxford
Joanna Bouckaert, Community Lead, Centre for Cybersecurity, World Economic Forum
Sadie Creese, Professor, Cybersecurity, University of Oxford
Akshay Joshi, Head, Centre for Cybersecurity, World Economic Forum
Jamie Saunders, Oxford Martin Fellow, University of Oxford

David Caswell, Managing Director, Cyber-AI Leader for U.S. Government & Public Sector, Deloitte
Ronald Charron, Senior Cybersecurity Technology Advisor, Canadian Centre for Cyber Security
Piotr Ciepiela, Partner, Global/EMEIA Cyber Technologies Leader, EY
Claudionor Coelho, Chief AI Officer, Zscaler
Michael Daniel, President and Chief Executive Officer, Cyber Threat Alliance
Debashis Das, Principal, Office of the Chief Information Security Officer, Amazon Web Services
Maria del Rosario Romero, Head, IT Security, Pan American Energy
Tyler Derr, Chief Technology Officer, Broadridge Financial Solutions
Stefan Deutscher, Partner and Director, Cybersecurity and IT Infrastructure, Boston Consulting Group
Hazel Diez Castaño, Chief Information Security Officer, Banco Santander
Glenda Dsouza, Strategy Lead, Group Technology Office, Mahindra
Stephane Duguin, Chief Executive Officer, CyberPeace Institute
Gregory Eskins, Head, Global Cyber Insurance Center, Marsh McLennan
Sabrina Feng, Chief Risk Officer, Technology, Cyber and Resilience, London Stock Exchange Group
Sergio Fidalgo, Group Chief Security Officer and Group Chief Information Security Officer, BBVA
Bobby Ford, Senior Vice-President and Chief Security Officer, Hewlett Packard Enterprise
Shannan Fort, International Cyber Product Leader, Marsh McLennan
Simon Ganiere, Global Head, Cyber Intelligence Center, UBS
Javier Garcia Quintela, Chief Information Security Officer, Repsol
Akash Kumar Garg, Artificial Intelligence Technical Advice Production Lead, Australian Cyber Security Centre
Matan Getz, Chief Executive Officer, Aim Security
Jonathan Gill, Chief Executive Officer, Panaseer
Daniel Gisler, Chief Information Security Officer, Oerlikon Group
Pankaj Goyal, Chief Operating Officer, Safe Securities
Richard Hale, Senior Vice-President, Global Cyber Security Strategy, Sony
Randy Herold, Chief Information Security Officer, ManpowerGroup
Mark Hughes, Global Managing Partner, Cyber Security Services, IBM
Lars Idland, Vice-President, Security; Chief Information Security Officer, Equinor
Ann Irvine, Chief Data and Analytics Officer, Resilience
Amit Jain, Executive Vice-President and Head, Cybersecurity, HCLTech
Stefan Jäschke, Senior Vice-President and Head, Enterprise IT Security, Volvo Group
Ali El Kaafarani, Chief Executive Officer, PQShield
Mohit Kapoor, Group Chief Technology Officer, Mahindra
Steven Kelly, Chief Trust Officer, Institute for Security and Technology
Daniel Kendzior, Global Data and Artificial Intelligence (AI) Security Practice Lead, Accenture
Shaun Khalfan, Senior Vice-President and Chief Information Security Officer, PayPal
Hoda Al Khzaimi, Director, Centre for Cybersecurity, New York University Abu Dhabi
Sigmund Kristiansen, Chief Cyber Security Officer, Aker BP
Georgios Kryparos, Chief Information Security Officer, Einride
Ayelet Kutner, Chief Technology Officer, At-Bay
Christine Lai, AI Security Lead, Cybersecurity and Infrastructure Security Agency
Aamir Lakhani, Global Strategist and Architect, Fortinet
Philomena Lavery, Global Chief Information Security Officer, AVEVA
Jason Lee, Chief Information Security Officer, Splunk, a Cisco Company
Simon Leech, Director, Cybersecurity Center of Excellence, Hewlett Packard Enterprise
Chris Lyth, Chief Information Security Officer, Arup Group
David Mabry, Vice-President and Chief Information Security Officer, Gulfstream Aerospace
Derek Manky, Chief Security Strategist and Global Vice-President, Threat Intelligence, Fortinet
Clemens Meiser, Division AI and Security, German Federal Office for Information Security
Eduardo Melendez, Chief Information Security Officer, Grupo Salinas
Michael Meli, Group Chief Information Security Officer and Managing Director, Julius Baer
Eiichiro Mitani, Executive Officer, Chief Information Officer, Mitsubishi Electric
Paulo Moniz, Head, CyberSecurity a
217、 Federal Office for Information SecurityEduardo MelendezChief Information Security Officer,Grupo SalinasMichael MeliGroup Chief Information Security Officer and Managing Director,Julius Baer Eiichiro MitaniExecutive Officer,Chief Information Officer,Mitsubishi ElectricPaulo MonizHead,CyberSecurity a
218、nd Information Technology Risk,Energias de Portugal(EDP)Sean MortonSenior Vice President,Strategy&Services,TrellixBarbara ONeillGlobal Chief Information Security Officer,EYMark OrsiChief Executive Officer,Global Resilience FederationChristine PalmerAI Expert,US Department of Homeland SecurityPerikli
219、s PapadopoulosAI Security Strategist,AccentureTom ParkerChief Executive Officer,Hubble TechnologyPankaj PaulDirector,Strategy and Innovation,Burjeel Holdings Sriram RamachandranChief Risk Officer,Mahindra Amanda ReathDirector,Cyber Programme Management,Canadian Centre for Cyber SecurityPhilip Reiner
220、Chief Executive Officer,Institute for Security andTechnologyCyril ReolGroup Chief Information Officer,Mercuria Craig RiceChief Executive Officer,Cyber Defence AllianceHarold RivasChief Information Security Officer,TrellixJason RugerChief Information Security Officer,LenovoSreekumar SGlobal Head,Prac
221、tise for Cybersecurity,HCLTechAmir Abdul SamadHead,Cybersecurity,PETRONASMiguel Sanchez San VenancioGlobal Chief Security and Intelligence Officer,Telefnica Stephen ScharfManaging Director and Chief Information SecurityOfficer,BlackrockRalf SchneiderSenior Fellow and Head,Cybersecurity and NextGenIT
222、 Think Tank,Allianz Tomer SchwartzCo-Founder and Chief Technology Officer,DazzLeo SimonovichVice-President;Global Head,Industrial Cyber andDigital Security,Siemens Energy Charley SnyderHead,Security Policy,Google Colin SoutarManaging Director,Cyber Risk,Deloitte Artificial Intelligence andCybersecur
223、ity:Balancing Risks andRewards25Emanuele SpagnoliChief Information Security Officer,Mundys Mark StamfordFounder and Chief Executive Officer,OccamSecMark SwiftChief Information Security Officer,Trafigura Group Neha TanejaChief Information Security Officer,Hero GroupJennifer TangAssociate,Cybersecurit
224、y and Emerging Technologies,Institute for Security and Technology(IST)Omar Al-ThukairVice President and Chief Digital Officer,SaudiAramcoIan TienChief Executive Officer,Mattermost Phil TonkinField Chief Technology Officer,DragosAbdullah Al TuraifiHead,AI Cybersecurity,Saudi AramcoSwantje WestpfahlDi
225、rector,Institute for Security and Safety Fabian WilliHead,Cyber Key Accounts,Swiss ReRainer ZahnerHead,Cybersecurity Governance&Cyber Risk Management,Siemens Jelena Zelenovic MatoneChief Information Security Officer,European Investment Bank Raphael ZimmerHead,Division,AI and Security,German Federal
226、Office for Information SecurityCyber Security Agency of SingaporeIsrael National Cyber DirectorateWe additionally thank the World Economic Forums Partnership against Cybercrime(PAC)community members for their insights on AI andcybercrime.ProductionPhoebe BarkerDesigner,Studio MikoLouis ChaplinEditor
227、,Studio MikoLaurence DenmarkCreative Director,Studio MikoArtificial Intelligence andCybersecurity:Balancing Risks andRewards26Endnotes1.Federal Bureau of Investigation(FBI)San Francisco.(2024).FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence.https:/www.fbi.gov/cont
228、act-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence;United Nations Office on Drugs and Crime.(2024).Transnational Organized Crime and the Convergence of Cyber-Enabled Fraud,Underground Banking and Technological Innovation in Sout
229、heast Asia:A Shifting Threat Landscape.https:/www.unodc.org/roseap/uploads/documents/Publications/2024/TOC_Convergence_Report_2024.pdf.2.Heiding,F.,Schneier,B.&Vishwanath,A.(2024).AI Will Increase the Quantity and Quality of Phishing Scams.Harvard Business Review.https:/hbr.org/2024/05/ai-will-incre
230、ase-the-quantity-and-quality-of-phishing-scams.3.Cantos,M.,Riddell,S.&Revelli,A.(2023).Threat Actors are Interested in Generative AI,but Use Remains Limited.GoogleCloud.https:/ al.(2024).Teams of LLM Agents can Exploit Zero-Day Vulnerabilities.Arxiv.https:/arxiv.org/abs/2406.01637.5.Oprea,A.,Fordyce
231、,A.&Andersen,H.(2024).Adversarial Machine Learning:A Taxonomy and Terminology of Attacks and Mitigations.National Institute of Standards and Technology.https:/www.nist.gov/publications/adversarial-machine-learning-taxonomy-and-terminology-attacks-and-mitigations.6.Cantos,M.(2019).Breaking the Bank:W
232、eakness in Financial AI Applications.Google Cloud Blog.https:/ Economic Forum.(2024).Unlocking Value from Generative AI:Guidance for Responsible Transformation.https:/www3.weforum.org/docs/WEF_Unlocking_Value_from_Generative_AI_2024.pdf.8.MITRE Adversarial Threat Landscape for Artificial-Intelligenc
233、e Systems(ATLAS).(n.d.).Home.https:/atlas.mitre.org/.9.UK National Cyber Security Centre.(n.d.).The near-term impact of AI on the cyber threat.https:/www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat.10.Government of Dubai.(n.d.).Dubai Electronic Security Center launches the Dubai AI Security Poli
234、cy.https:/www.desc.gov.ae/dubai-electronic-security-center-launches-the-dubai-ai-security-policy/.11.Cyber Security Agency of Singapore.(2024).Guidelines and Companion Guide on Securing AI Systems.https:/www.csa.gov.sg/Tips-Resource/publications/2024/guidelines-on-securing-ai.12.UK Department for Sc
235、ience,Innovation&Technology.(2024).Call for views on the Cyber Security of AI.https:/www.gov.uk/government/calls-for-evidence/call-for-views-on-the-cyber-security-of-ai/call-for-views-on-the-cyber-security-of-ai.13.Oprea,A.,Fordyce,A.&Andersen,H.(2024).Adversarial Machine Learning:A Taxonomy and Ter
236、minology of Attacks and Mitigations.National Institute of Standards and Technology.https:/www.nist.gov/publications/adversarial-machine-learning-taxonomy-and-terminology-attacks-and-mitigations.14.Open Worldwide Application Security Process.(2024).AI Exchange.https:/owaspai.org/.15.AI Risk and Vulne
237、rability Alliance.(2024).AI Vulnerability Database.https:/avidml.org/;Organisation for Economic Co-operation and Development(OECD)Policy Observatory.(2024).OECD AI Incidents Monitor.https:/oecd.ai/en/incidents-methodology;Open AI.(2024).Disrupting malicious uses of AI by state-affiliated threat acto
238、rs.https:/ Economic Forum.(2024).Presidio AI Framework:Towards Safe Generative AI Models.https:/www3.weforum.org/docs/WEF_Presidio_AI%20Framework_2024.pdf.17.World Economic Forum.(2024).Unlocking Value from Generative AI:Guidance for Responsible Transformation.https:/www3.weforum.org/docs/WEF_Unlock
239、ing_Value_from_Generative_AI_2024.pdf.18.Axon,L.et al.(2019).Analysing cyber-insurance claims to design harm-propagation trees.https:/ora.ox.ac.uk/objects/uuid:496b5fb7-9da3-4305-a0b1-e4cbf0c41bfb/files/m75e3108c23c67618e62a642ac8c3f8f8.Artificial Intelligence andCybersecurity:Balancing Risks andRew
240、ards27World Economic Forum9193 route de la CapiteCH-1223 Cologny/GenevaSwitzerland Tel.:+41(0)22 869 1212Fax:+41(0)22 786 2744contactweforum.orgwww.weforum.orgThe World Economic Forum,committed to improving the state of the world,is the International Organization for Public-Private Cooperation.The Forum engages the foremost political,business and other leaders of society to shape global,regional and industry agendas.