IBM Institute for Business Value | Research Insights

Securing generative AI: What matters now

How IBM can help
IBM Security works with you to help protect your business with an advanced and integrated portfolio of enterprise cybersecurity solutions and services infused with AI. Our modern approach to security strategy uses zero-trust principles to help you thrive in the face of uncertainty and cyberthreats. For more information, please visit: https:/

How AWS can help
For over 15 years, Amazon Web Services has been the world's most comprehensive and broadly adopted cloud offering. Today, we serve millions of customers, from the fastest-growing startups to the largest enterprises, across a myriad of industries in practically every corner of the globe. We've had the opportunity to help these customers grow their businesses through digital transformation efforts enabled by the cloud. In doing so, we have worked closely with the C-suite, providing a unique vantage point to see the diverse ways executives approach digital transformation: the distinct thought processes across C-suite roles, their attitudes and priorities, obstacles to progress, and best practices that have resulted in the most success. For more information, please visit: https:/

Key takeaways
- Only 24% of current generative AI projects are being secured. While a majority of executives are concerned about unpredictable risks impacting gen AI initiatives, they are not prioritizing security.
- A changing threat landscape demands a new approach to securing AI. Built on a foundation of governance, risk, and compliance, securing AI infrastructure means securing applications, data, models, and model usage.
- Organizations are turning to third-party products and partners for over 90% of their gen AI security requirements. Just as with the transition to cloud, partners can help assess needs and manage security outcomes.
- Generative AI solutions can be as vulnerable as they are valuable if security is an afterthought.

Introduction
Innovation versus security: It's not a choice, it's a test

As organizations rush to create value from generative AI, many are speeding past a critical element:
security. In a recent study of C-suite executives, the IBM Institute for Business Value (IBM IBV) found that only 24% of current gen AI projects have a component to secure the initiatives, even though 82% of respondents say secure and trustworthy AI is essential to the success of their business. In fact, nearly 70% say innovation takes precedence over security.

This perceived trade-off contrasts with executives' views of the wide-ranging risks of gen AI. Security vulnerabilities are among their biggest areas of concern (see Figure 1). These worries are well-founded. Cybercriminals are already benefitting from both generative and traditional AI (see Perspective, “Understanding the generative AI threat landscape”). More realistic email phishing tactics and deepfake audios are making headlines, as are data leaks from employees' careless use of public tools such as ChatGPT.1 Looking ahead, potential threats to critical AI systems are even more troubling. As AI-powered solutions become more capable and more ubiquitous, integrated within critical infrastructure such as healthcare, utilities, telecommunications, and transportation, they could be as vulnerable as they are valuable, especially if security is an afterthought.

FIGURE 1
Executives expressed a broad spectrum of concerns regarding their adoption of gen AI.
Q. What are you most concerned about in adopting generative AI?
- Increased potential for business disruption: 56%
- Loss of creative thinking and problem-solving: 52%
- Unpredictable risks and new security vulnerabilities arising as a result of generative AI: 51%
- Difficulty in attracting, retaining, or developing talent with appropriate skills: 48%
- New attacks targeting existing AI models, data, and services: 47%
- Uncertainty about where and how much to invest: 47%

While a consolidated AI threat surface is only starting to form, IBM X-Force researchers anticipate that once the industry landscape matures around common technologies and enablement models, threat actors will begin to target these AI systems more broadly.2 Indeed, that convergence is well underway as the market is maturing rapidly, and leading providers are already emerging across hardware, software, and services.3 The gap between executives' angst and action underscores the need for cybersecurity and business leaders to commit to securing AI now. With new IBM IBV research showing many
organizations are still in the evaluation/pilot stages for most generative AI use cases, such as information security (43%) and risk and compliance (46%), this is the time to get ahead of potential threats by prioritizing security from the start.4

To address the need for more specific guidance on where to begin, the IBM IBV and IBM Security have teamed with Amazon Web Services (AWS) experts to share leading practices and recommendations based on recent research insights. Part one of this report provides a framework for understanding the gen AI threat landscape. In part two, we discuss the three primary ways organizations are consuming gen AI and the related security considerations. Part three explores resource challenges and the role of partners. Part four offers an action guide of practical steps leaders can take to secure AI across their organizations. With many organizations still evaluating and piloting generative AI solutions, now is the time to get ahead of new security threats.

Perspective
Understanding the generative AI threat landscape5

Generative AI introduces new potential threat vectors and new ways to mitigate them. While the technology lowers the bar even further for low-skill threat actors, helping them develop more sophisticated exploits, it also enhances defenders' capacity to move faster with greater efficiency and confidence.

Red team
- Social engineering and fraud: Allows more targeted, convincing phishing messages on a mass scale
- Data theft: Enables autonomous theft of sensitive data and intellectual property, and evasion of antivirus software through AI-enhanced malware
- Identity theft and impersonation: Makes it easier to pass through online filters and enable illegal activities such as fraudulent account creation
- AI jailbreaks: Removes the guardrails on gen AI chatbots, so they trick victims into giving away personal data or login credentials
- Password cracking: Uses publicly available data to generate possible passwords
- Vulnerability exploits: Targets application, data, model, or infrastructure vulnerabilities, such as misconfigurations, accidental disclosures, and policy/controls oversights

Blue team
- Continuous regulatory compliance: Provides a real-time view into security and compliance posture and automates compliance tasks
- Case management: Generates summaries of security cases and incidents, and identifies similar cases for improved forensic analysis
- Accelerated threat hunting: Detects threats based on natural language descriptions of cyber incident behaviors and patterns
- Incident simulation and pen testing: Accelerates analysis of event inputs/outputs and generation of test scenarios
- Data interpretation: Collates telemetry data across sources, and speeds analysts' understanding of security log data
- API security: Transforms automation using API discovery, testing, and protection
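The blue-team “Data interpretation” capability, collating telemetry across sources to speed analysts' understanding of security log data, can be illustrated with a minimal sketch. The sources, formats, field names, and events below are hypothetical stand-ins; a real pipeline would normalize vendor-specific schemas (syslog, JSON audit events, and so on) before any analysis, AI-assisted or otherwise.

```python
from datetime import datetime, timezone

# Hypothetical raw events from two different tools: a firewall that logs
# "epoch|host|action" lines and an auth service that emits dict records.
firewall_lines = [
    "1700000040|web-01|deny",
    "1700000010|web-01|allow",
]
auth_events = [
    {"ts": "2023-11-14T22:13:50+00:00", "host": "db-02", "action": "login_failed"},
]

def from_firewall(line):
    # Parse the pipe-delimited firewall format into a common event shape.
    epoch, host, action = line.split("|")
    return {
        "time": datetime.fromtimestamp(int(epoch), tz=timezone.utc),
        "host": host,
        "action": action,
        "source": "firewall",
    }

def from_auth(evt):
    # Parse the auth service's ISO-8601 timestamps into the same shape.
    return {
        "time": datetime.fromisoformat(evt["ts"]),
        "host": evt["host"],
        "action": evt["action"],
        "source": "auth",
    }

# Collate everything into a single time-ordered view, so an analyst sees
# one timeline instead of per-tool silos.
timeline = sorted(
    [from_firewall(l) for l in firewall_lines] + [from_auth(e) for e in auth_events],
    key=lambda e: e["time"],
)

for e in timeline:
    print(e["time"].isoformat(), e["source"], e["host"], e["action"])
```

A unified timeline like this is what downstream summarization or anomaly detection would consume; the collation step, not the model, is usually where most of the engineering effort goes.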
Part one
Seeing threats in a new light

For generative AI to deliver value, it must be secure in the traditional sense: in terms of the confidentiality, integrity, and availability of data.6 But for gen AI to transform how organizations work, and how they enable and deliver value, model inputs and outputs must be reliable and trustworthy. While hallucinations, ethics, and bias often come to mind first when thinking of trusted AI, the AI pipeline faces a threat landscape that puts trust itself at risk. Each aspect of the pipeline (its applications, data, models, and usage) can be a target of threats, some familiar and some new (see Figure 2).7

FIGURE 2
Some emerging threats look familiar while others are entirely new.
Source: IBM Security.
1. Conventional threats are business-as-usual, such as social engineering.
2. Conventional threats take on new meaning, such as more professional-looking phishing tactics.
3. A new threat landscape is specific to AI/gen AI, such as model extraction or inversion exploits.

Conventional threats, such as malware and social engineering, persist and
require the same due diligence as always. For organizations that may have neglected their security fundamentals or whose security culture is still in the formative stages, these kinds of threats will continue to be a challenge. Given the increasing adoption of AI and automation solutions by threat actors, organizations without a strong security foundation will also be ill-prepared to address the new twists on conventional threats introduced by gen AI.

Take phishing emails as an example. With gen AI, cybercriminals can create far more effective, targeted scams at scale.8 IBM Security teams have found gen AI capabilities facilitate upwards of a 99.5% reduction in the time needed to craft an effective phishing email.9 This new breed of email threats should moderately impact companies with mature approaches to identity management, such as standard practices for least privilege and multifactor authentication, as well as zero-trust architectures that restrict lateral movement. But those who lag in these areas run the risk of incidents with potentially devastating reach.10

The reality is that security deficiencies are indeed impacting a significant number of organizations, as results from an IBM IBV survey of more than 2,300 executives suggest. Most respondents reported their organizations' capabilities in zero trust (34%), security by design (42%), and DevSecOps (43%) are in the pilot stage.11 These organizations will need to continue investing in core security capabilities as they are critical for protecting generative AI.

Lastly, a set of fundamentally new threats to organizations' gen AI initiatives is also emerging, a fact recognized by nearly half (47%) of respondents in our survey (see Figure 3). Prompt injection, for instance, refers to manipulating AI models to take unintended actions; inversion exploits cull information about the data used to train a model. These techniques are not yet widespread but will proliferate as adversaries become more familiar with the hardware, software, and services supporting gen AI.12 As organizations move forward with gen AI solutions, they need to update their risk and governance models and incident response procedures to reflect these emerging threats.

In a recent AWS Executive Insights podcast, security subject-matter experts emphasized that threat actors will go after low-hanging fruit first: threats with the greatest impact for the least amount of effort.13 When choosing security investments, leaders should prioritize those use cases, such as supply chain exploits and data exfiltration.

FIGURE 3
Emergent threats to AI operations require updates to organizations' risk and governance models. (The figure plots each threat by exploit difficulty and potential impact.)
Source: IBM Security.
- Prompt injection: Manipulate AI models into performing unintended actions by dropping guardrails and limitations put in place by the developers
- Model evasion: Circumvent the intended behavior of an AI model by crafting inputs that trick it
- Model extraction: Steal a model's behavior by observing the relationships between inputs and outputs
- Inversion exploits: Reveal information on the data used to train a model, despite only having access to the model itself
- Data poisoning: Change the behavior of AI models by altering the data used to train them
- Backdoor exploits: Alter a model subtly during training to cause unintended behaviors under certain triggers
- Supply chain exploits: Generate harmful models that hide malicious behavior, or target vulnerabilities in systems connected to the AI models
- Data exfiltration: Access and steal sensitive data used in training and tuning models through vulnerabilities, phishing, or misused privilege credentials

Part two
Three AI enablement models, three risk profiles

A simple framework outlines an effective approach to securing the AI pipeline, starting with updating governance, risk, and compliance (GRC) strategies (see Figure 4). Getting these principles right from the beginning, as core design considerations, can accelerate innovation. A governance and design-oriented approach to generative AI is particularly important in light of emerging AI regulatory guidance such as the EU AI Act (see Perspective, “A glimpse into new and proposed AI regulations around the world”).14 Those who integrate and embed GRC capabilities in their AI initiatives can differentiate themselves while also clearing their path to value, capitalizing on investments knowing they are building on a solid foundation.

FIGURE 4
Securing the AI value stream starts with updating risk and governance models.
Source: IBM Security.
Governance, risk, and compliance underpin each step:
- Secure the data collection and handling
- Secure the model development and training
- Secure the model inference and live use
- Secure the applications using and enabling AI
- Secure the infrastructure

Perspective
A glimpse into new and proposed AI regulations around the world15

AI regulations are evolving as quickly as gen AI models and are being established at virtually all levels of government. Organizations can look to automated AI governance tools to help manage compliance with changing policy requirements. A sampling of regulations includes:
- Europe: EU AI Act
- US: Maintaining American Leadership in AI Executive Order; Promoting the Use of Trustworthy AI in the Federal Government Executive Order; AI Training Act; National AI Initiative Act
- Canada: AI and Data Act; Directive on Automated Decision-Making
- Brazil: AI Bill
- China: Algorithmic Recommendations Management Provisions; Ethical Norms for New Generation AI; Opinions on Strengthening the Ethical Governance of Science and Technology; Draft Provisions on Deep Synthesis Management; Measures for the Management of Generative AI Services
- Japan: Guidelines for Implementing AI Principles; AI Governance in Japan Ver. 1.1
- India: Digital India Act
- Australia: Uses existing regulatory structures for AI oversight

Next
, leaders can shift their attention to securing infrastructure and the processes comprising the AI value stream: data collection, model development, and model use. Each presents a distinct threat surface that reflects how the organization is enabling AI: using third-party applications with embedded gen AI capabilities; building gen AI solutions via a platform of pre-trained or bespoke foundation models; or building gen AI models and solutions from scratch.16 Each adoption route encompasses varying levels of investment, commitment, and responsibility. Working through the risks and security for each helps build resilience across the AI pipeline.

While some organizations have already anchored on an adoption strategy, some are applying multiple approaches, and some may still be finding their way and formalizing their strategy. From a security perspective, what varies with each option is who is responsible for what, and how that responsibility may be shared (see Figure 5).17

FIGURE 5
The principles of shared responsibility extend to securing generative AI models and applications.
Source: AWS Security, IBM Security.
- Generative AI as an application: Using “public” services or an application or SaaS product with embedded generative AI features
- Generative AI as a platform: Building an application using a pre-trained model, or a model fine-tuned on organization-specific data
- Build your own: Training a model from scratch on an organization's own data
For each enablement model, responsibility for access controls to data and models, training data and data management, prompt controls, model development, model inference, model monitoring, and infrastructure is divided between the service user and the service provider.

Using third-party applications embedded with generative AI

Organizations that are just getting started may be using consumer-focused services such as OpenAI's ChatGPT, Anthropic's Claude, or Google Gemini
, or they are using an off-the-shelf SaaS product with gen AI features built in, such as Microsoft 365 or Salesforce.18 These solutions allow organizations that have fewer investment resources to gain efficiencies from basic gen AI capabilities. The companies providing these gen AI-enabled tools are responsible for securing the training data, the models, and the infrastructure housing the models. But users of the products are not free of security responsibility. In fact, inadvertent employee actions can induce headaches for security teams.

Similar to how shadow IT emerged with the first SaaS products and created cloud security risks, the incidence of shadow AI is growing. With employees looking to make their work lives easier with gen AI, they are complicating the organization's security posture, making security and governance more challenging.19 First, well-meaning staff can share private organizational data into third-party products without knowing whether the AI tools meet their security needs. This can expose sensitive or privileged data, leak proprietary data that may be incorporated into third-party models, or expose data artifacts that could be vulnerable should the vendor experience a cyber
incident or data breach.20 Second, because the security team is unaware of the usage, they can't assess and mitigate the risks.21 Third-party software, whether or not sanctioned by the IT/IS team, can introduce vulnerabilities because the underlying gen AI models can host malicious functionality such as trojans and backdoors.22 One study found that 41% of employees acquired, modified, or created technology without their IT/IS team's knowledge, and predicts this percentage will climb to 75% over the next three years, exacerbating the problem.23

Key security considerations include:
- Have you established and communicated policies that address use of certain organizational data (confidential, proprietary, or PII) within public models and third-party applications?
- Do you understand how third parties will use data from prompts (inputs/outputs) and whether they will claim ownership of that data?
- Have you assessed the risks of third-party services and applications, and do you know which risks they are responsible for managing?
- Do you have controls in place to secure the application interface and monitor user activity, such as the content and context of prompt inputs/outputs?

Using a platform to build generative AI solutions

Training foundation models and LLMs for generative AI applications demands tremendous infrastructure and computing resources, often beyond what most organizations can budget. Hyperscalers are stepping in with platforms that allow users to tap into a choice of pre-trained foundation models for building
gen AI applications more specific to their needs. These models are trained on a large, general-purpose data set, capturing the knowledge and capabilities learned from a broad range of tasks to improve performance on a specific task or set of tasks. Pre-trained models can also be fine-tuned for a more specific task using a smaller amount of an organization's data, resulting in a new specialized model optimized around distinct use cases, such as industry-specific requirements.24 The open-source community is also democratizing gen AI with an extensive library of pre-trained LLMs. The most popular of these, such as Meta's Llama and Mistral AI's models, are also available via general-purpose gen AI platforms (see Perspective, “Risk or reward? Adopting open-source models”).

Platforms offer the advantage of having some security and governance capabilities baked in. For example, infrastructure security is shared with the
vendor, similar to any cloud infrastructure agreement. Perhaps the organization's data already resides with a specific cloud provider, in which case fine-tuning the model may be as simple as updating configurations and API calls. Additionally, a catalogue of enhanced security products and services is available to complement or replace the organization's own (see case study, “EVERSANA and AWS advance artificial intelligence apps for the life sciences industry”).

However, when organizations build gen AI applications integrated with pre-trained or fine-tuned models, their security responsibilities grow considerably compared to using a third-party SaaS product. Now they must tackle the unique threats to foundation models and LLMs referenced in part one of this report. Risks to training data as well as the model development and inference fall squarely on their radar. Applying the principles of ModelOps and
MLSecOps (machine learning security operations) can help organizations secure their gen AI applications.25

Key security considerations include:
- Have you conducted threat modeling to understand and manage the emerging threat vectors?
- Have you identified open-source and widely used models that have been thoroughly scanned for vulnerabilities, tested, and vetted?
- Are you managing training data workflows, such as using encryption in transit and at rest, and tracking data lineage?
- How do you protect training data from poisoning exploits that could introduce inaccuracies or bias and compromise or change the model's behavior?
- How do you harden security for API and plug-in integrations to third-party models?
- How do you monitor models for unexpected behaviors, malicious outputs, and security vulnerabilities that may appear over time?
- Are you managing access to training data and models using robust identity and access management practices, such as role-based access control, identity federation, and multifactor authentication?
- Are you managing compliance with laws and regulations for data privacy, security, and responsible AI use?

Case study
EVERSANA and AWS advance artificial intelligence apps for the life sciences industry26

Given regulatory requirements, life sciences companies need generative AI solutions that combine security, compliance, and scalability. EVERSANA, a leading provider of commercial services to the global life sciences industry, is turning to AWS to accelerate gen AI use cases across the life sciences industry. The objective is to harness the power of gen AI to help pharmaceutical and life science manufacturers drive efficiencies and create business value while improving patient outcomes. EVERSANA will apply its digital and AI innovation capabilities coupled with Amazon Bedrock managed gen AI services to leverage best-of-breed foundation models. EVERSANA maintains full control over the data it uses to tailor foundation models and can customize guardrails based on its application requirements and responsible AI policies.

In its first application, in partnership with AWS and TensorIoT, the team sought to automate processes associated with medical, legal, and regulatory (MLR) content approvals. EVERSANA's strategy to leverage gen AI to solve complex challenges for life sciences companies is part of what EVERSANA calls “pharmatizing AI.” Jim Lang, chief executive officer at EVERSANA, explained, “Pharmatizing AI in the life sciences industry is about leveraging technology to optimize and accelerate common processes that are desperate for innovation and transformation.” This approach has led to streamlining critical processes from months to weeks. EVERSANA anticipates that once it automates its MLR capabilities, it can further improve time-to-approval from weeks to mere days.

Perspective
Risk or reward? Adopting open-source models27

In contrast to proprietary LLMs that can only be used by customers
who purchase a license from the provider, open-source LLMs are free and available for anyone to access. They can be used, modified, and distributed with far greater flexibility than proprietary models. Designed to offer transparency and interoperability, open-source LLMs allow organizations with minimal machine learning skills to adapt gen AI models for their own needs, and on their own cloud or on-premises infrastructure. They also help offset concerns about the risk of becoming overly reliant on a small number of proprietary LLMs. Risks with using open-source models are similar to proprietary models,
including hallucinations, bias, and accountability issues with the training data. But the trait that makes open source popular, the community approach to development, can also be its greatest vulnerability, as hackers can more easily manipulate core functionality for malicious purposes. These risks can be mitigated by adopting security hygiene practices as well as software supply chain and data governance controls.

Building your own generative AI solutions

A few large organizations with deep pockets are building and training LLMs, and smaller, more tailored language models (SLMs),28 from scratch based solely on their data. Hyperscaler tools are helping accelerate the training process, while the organization owns every aspect of the model. This can afford them performance advantages as well as more precise results.29 In this scenario, on top of the governance and risk management outlined for applications based on pre-trained and fine-tuned models, the organization's own data security posture takes on greater importance. As the organization's data is now incorporated into the AI model itself, responsible AI becomes essential to reducing risk exposure.

Being the primary source for AI training data, organizations are responsible for making sure that data, and the outcomes based on it, can be trusted. That means protecting the source data following strict data security practices (see Perspective, “Why responsible AI starts with security ABCs”). And it means protecting the models from being compromised or exploited by malicious actors. Access controls, encryption, and threat detection systems are critical pieces in preventing data from being manipulated.

The trustworthiness of an AI solution may
be measured by its ability to offer unbiased, accurate, and ethical responses. If organizations do not practice responsible AI, they risk damage to their brands from faulty, even dangerous, output from their gen AI models. Despite these risks, fewer than 20% of executives say they are concerned about a potential liability for erroneous outputs from gen AI. In other IBM IBV research, only 30% of respondents said they are validating the integrity of gen AI outputs.30

If secure and trustworthy data is the basis for value generation, and much of our research indicates it is, leaders should focus on the security implications of (ir)responsible AI.31 Doing so can highlight the various ways AI models may be manipulated. In the absence of bias or explainability controls, such manipulation can be hard to recognize. This is why organizations need a strong foundation in governance, risk, and compliance. As an extension of
the organization's data security posture, software supply chain security also becomes more consequential when creating LLMs. These models are built on top of complex software stacks that include multiple layers of software dependencies, libraries, and frameworks. Each of these components can introduce vulnerabilities that can be exploited by attackers to compromise the integrity of the AI model or the underlying data. Unfortunately, adoption of software supply chain security best practices is still nascent at many organizations, according to recent IBM IBV research. For example, only 29% of executives indicated they have adopted DevSecOps principles and practices to secure their software supply chain, and only 32% have implemented continuous monitoring capabilities for their software suppliers.32 Both practices are vital to helping prevent cyber incidents throughout the software supply chain.

Key security considerations include:
- Do you need to bolster data security practices to help prevent theft and manipulation and support responsible AI?
- How can you shore up third-party software security awareness and practices; for example, ensuring that zero-trust principles are in place?
- Do you require procurement teams to check supplier contracts for security vulnerability controls and risk-related performance measures?

Perspective
Why responsible AI starts with security ABCs

As AI moves from experimentation into production, the ABCs of security (awareness, behavior, and culture) become even more important for
helping ensure responsible AI. For AI to be designed, developed, and deployed with good intent for the benefit of society, trust is an imperative.33 Consistent with many emerging technologies, well-informed employees and partners can be an asset, especially in light of new multimodal and rich-media-based phishing tactics enabled by gen AI. Enhancing employee awareness of the new risks leads to proactive behaviors and, over time, a more robust security culture. As AI solutions become more integral to operations, a standard practice should be to communicate new functionality and associated security controls
to employees, while reiterating the policies in place to protect proprietary and personal data. Established controls should be updated to address new threats, with the core principles of zero trust and least privilege limiting lateral movement. Emphasizing a sense of ownership about security outcomes can reinforce security as a common, shared endeavor connecting virtually all stakeholders and partners. Responsible AI is about more than policies; it's a commitment to safeguard the trust that's critical to the organization's continuing success.

Part three
The leadership dilemma: generative AI requires what organizations have least

Developing and securing generative AI solutions requires capacity, resources, and skills, the very things organizations don't have enough of.34 In fragmented IT environments, security takes on higher levels of complexity that require
even more capacity, resources, and skills. Leaders quickly find themselves in a dilemma.

AI-enhanced tools

AI-powered security products can bridge the skills gap by freeing overworked staff from time-consuming tasks. This allows them to focus on more complex security issues that require expertise and judgment. By optimizing time and resources, AI effectively adds capacity and skills. With improved insights, productivity, and economies of scale, organizations can adopt a more preventive and proactive security posture. Indeed, leading security AI adopters cut the time to detect incidents by one-third and the
costs of data breaches by at least 18%.35 New capabilities are also emerging that automate management of compliance within a rapidly changing regulatory environment.

The shift to AI security tools is consistent with how cybersecurity demand is changing. While the market for AI security products is expected to grow at a CAGR of nearly 22% over the next five years, providers are focusing on developing consolidated security software solutions. To facilitate better efficiency and governance, solution providers are rationalizing their toolsets and streamlining data analysis.36 This more holistic approach to security enhances visibility across the operations lifecycle, something 53% of executives are expecting to gain from gen AI.

AI-experienced partners
Business partners can also help close security skills gaps. Just as with the transition to cloud, partners can assist with assessing needs and managing security outcomes. Amid the ongoing security talent shortage that’s exacerbated by a lack of AI skills, organizations are seeking partners that can facilitate training, knowledge sharing, and knowledge transfer (76%). They are also looking for gen AI partners to provide extensive support, maintenance, and customer service (82%). Finally, they are choosing partners that can guide them across the evolving legal and regulatory compliance landscape (75%).

Part three
The leadership dilemma: generative AI requires what organizations have least

Executives are also in search of partners to help with strategy and investment decisions (see Figure 6). With around half (47%) saying they are uncertain about where and how much to invest, it’s no surprise that three-quarters (76%) want a partner to help build a compelling cost case with solid ROI. More than half also seek guidance on an overall strategy and roadmap.

FIGURE 6
Executives are turning to partners to help deliver and support generative AI security solutions.
Q. How important are these when choosing a partner for your generative AI security needs?

Strategy
76% Helps us build a compelling cost case with solid ROI
70% Spurs innovation and future readiness
58% Offers guidance on our overall strategy and roadmap

Operational
82% Offers extensive support, maintenance, and customer service

Expertise
76% Offers training, knowledge sharing, and knowledge transfer

Risk & regulatory posture
75% Focuses on emerging guidelines around legal and regulatory compliance
73% Enhances data privacy and security around generative AI solutions

Our results indicate that most organizations are turning to partners to enable generative AI for security. While many respondents are purchasing security products or solutions with gen AI capabilities, nearly two-thirds of their security generative AI capabilities are coming through some type of partner: managed services, ecosystem/supplier, or hyperscaler (see Figure 7). Similar to cloud adoption, leaders are looking to partners for comprehensive security support, whether that’s informing and advising about generative AI or augmenting their delivery and support capabilities.

FIGURE 7
More than 90% of security gen AI capabilities are coming from third-party products or partners.
Q. How are you enabling generative AI for security capabilities?

31% Through a security product or solution (such as a security solution with embedded capabilities)
21% Through a managed services provider (outsourcing)
20% Through an ecosystem partner or other supplier
20% Through an infrastructure partner (such as AWS, Azure, and Google)
9% Through an internally developed solution

Note: Percentages do not add to 100% due to rounding.

Part four
Action guide

Whether just starting to experiment with generative AI, building models on your own, or somewhere in between, the following guidance can help organizations secure their AI pipeline. These recommendations are intended to be cross-functional, facilitating engagement across security, technology, and business domains.

01 Assess
- Define an AI security strategy that aligns with the organization’s overall AI strategy. Ask how your organization is using AI today: for which use cases, in what applications, through which service providers, and serving which user cohorts. Once you answer these questions, then quantify the associated sources of risk.
- Evaluate the maturity of your core security capabilities, including infrastructure security, data security, identity and access management practices, threat detection and incident response, regulatory compliance, and software supply chain management. Identify where you must be better to support the demands of AI.
- Decide where partners can supplement and complement your security capabilities and define how responsibilities will be shared.
- Uncover security gaps in AI environments using risk assessment and threat modeling. Determine how policies and controls need to be updated to address emergent threat vectors driven by generative AI.

02 Implement
- Establish AI governance working with business units, risk, data, and security teams.
- Prioritize a secure-by-design approach across the ML and data pipeline to drive safe software development and implementation.
- Manage risk, controls, and trustworthiness of AI model providers and data sources.
- Secure AI training data in line with current data privacy and regulatory guidelines, and adopt new guidelines when published.
- Secure workforce, machine, and customer access to AI apps and subsystems from anywhere.

03 Monitor
- Evaluate model vulnerabilities, prompt injection risks, and resiliency with adversarial testing.
- Perform regular security audits, penetration testing, and red-teaming exercises to identify and address potential vulnerabilities in the AI environment and connected apps.

04 Educate
- Review cyber hygiene practices and security ABCs (awareness, behaviors, and culture) across your organization.
- Conduct persona-based cybersecurity awareness activities and education, particularly as they relate to AI as a new threat surface. Target all stakeholders involved in the development, deployment, and use of AI models, including employees using AI-powered tools.

Clarke Rodgers, Director, AWS Enterprise S
Saha, Senior Security Partner Solutions Architect, AWS
Ahluwalia, Vice President and Global Managing Partner, IBM Consulting Cybersecurity S
Kevin Skapinetz, Vice President, Strategy and Product Management, IBM Security
Gerald Parham, Global Research Leader, Security and CIO, IBM Institute for Business Value

Contributors
Heather Deguzman, Senior Executive Marketing Manager, Content, Amazon Web Services
Dougherty, Program Director, Product Management, Emerging Security Technology, IBM Security
Gray, Brand and Content Strategy, Security, IBM
Sam Hector, Product Manager, Emerging Security Technology, IBM Security
Massimi, Global Principal, Cloud Security Services for AWS Consulting, IBM
C Mina, Program Director, Product Management, Emerging Security Technology and Ventures, IBM Security
Nagarajan, Global Cyber Trust Partner, Portfolio Leader, IBM Consulting Cybersecurity S
Prassinos, Security Communications, IBM
M Testerman, Manager, AI and Platform Product Management, IBM
C Tummalapenta, Distinguished Engineer and CTO, Master Inventor, IBM Consulting Cybersecurity S

IBM Institute for Business Value editorial and design team
Sara Aboulsohn, Visual Designer
Kris Biron, Visual Designer
Joanna Wilkins, Editorial Lead

Related reports

The CEO’s guide to generative AI: Cybersecurity
The CEO’s guide to generative AI: Cybersecurity. IBM Institute for Business Value. October 2023. https://ibm.co/ceo-generative-ai-cybersecurity

Data security as business accelerator?
Data security as business accelerator? The unsung hero driving competitive advantage. IBM Institute for Business Value and Amazon Web Services. June 2023. https://ibm.co/data-security

AI and automation for cybersecurity
AI and automation for cybersecurity: How leaders succeed by uniting technology and talent. IBM Institute for Business Value. June 2022. https://ibm.co/ai-cybersecurity

Study methodology and approach
In Q3 2023, the IBM Institute for Business Value partnered with Oxford Economics to survey 200 executives about their generative AI security strategy and enablement. Respondents are based in the US and responsible for operations at either US-based organizations or multinational organizations with a significant US presence. Respondents include CEOs, CISOs, CIOs, and Chief Data Officers.

Respondents were screened for several inclusion criteria. They indicated whether they are either moderately familiar or very familiar with generative AI. Respondent organizations are either in the piloting or implementation phases of generative AI. Respondents described their familiarity with their organization’s security spending and investments as either “aware and consistently involved” or “working on projects and influencing investments.” Respondents
represent the following industries: consumer banking, consumer products, energy and utilities, financial markets, government (federal), government (state/provincial), healthcare providers, industrial manufacturing (industrial products), insurance, IT services, life sciences/pharmaceuticals, manufacturing (non-industrial), oil and gas, retail, telecommunications, transportation, and travel.

About the AWS-IBM Security partnership
IBM is an AWS Premier Tier Consulting Partner, including three security competencies and a total of 16 AWS competencies across IBM Technology and IBM Consulting. Together, IBM and AWS bring fast, security-rich, open software capabilities to the cloud platform for more than one million customers every day. The power of cloud-native AWS capabilities, combined with 50+ IBM solutions available on AWS Marketplace, enables clients to access AI-powered IBM Software with turnkey delivery and integration. For more information, visit https:/

IBM Institute for Business Value
For two decades, the IBM Institute for Business Value has served as the thought leadership think tank for IBM. What inspires us is producing research-backed, technology-informed strategic insights that help leaders make smarter business
decisions. From our unique position at the intersection of business, technology, and society, we survey, interview, and engage with thousands of executives, consumers, and experts each year, synthesizing their perspectives into credible, inspiring, and actionable insights. To stay connected and informed, sign up to receive IBV’s email newsletter at . You can also find us on LinkedIn at https://ibm.co/ibv-linkedin.

The right partner for a changing world
At IBM, we collaborate with our clients, bringing together business insight, advanced research, and technology to give them a distinct advantage in today’s rapidly changing environment.

About Research Insights
Research Insights are fact-based strategic insights for business executives on critical public- and private-sector issues. They are based on findings from analysis of our own primary research studies. For more information, contact the IBM Institute for Business Value
at .

Notes and sources
1 Britton, Mike. “Uncovering AI-Generated Email Attacks: Real-World Examples from 2023.” Abnormal Blog. December 19, 2023. https:/ and Kathleen Magramo. “Finance worker pays out $25 million after video call with deepfake chief financial officer.” CNN. February 4, 2024. https:/ bans use of
A.I. like ChatGPT for employees after misuse of the chatbot.” CNBC. May 2, 2023. https:/ X-Force Threat Intelligence Index 2024. IBM Security. February 2024. https:/ Sabin, Sam. “Generative AI puts GPU security in the spotlight.” Axios. March 22, 2024. https:/ players set to shape the AI landscape in 2024.” Digitalis. March 2024. https:/ IBM Institute for Business Value survey of 2,500 global, cross-industry executives on AI adoption. 2024. Unpublished data.
5 Isola, Laurie. “How cybercriminals are using gen AI to scale their scams.” Okta Blog. January 4, 2024. https:/ are creating a darker side to AI.” Cyber Magazine. October 24, 2024. https:/ the World of AI Jailbreaks.” SlashNext Blog. September 12, 2023. https:/ Prompts You Don’t Want Employees Putting in Microsoft Copilot.” BleepingComputer. April 3, 2024. https:/ of servers hacked in ongoing attack targeting Ray AI framework.” Ars Technica. March 27, 2024. https:/ 2 of 10: Cybersecurity Architecture Fundamentals.” IBM Technology YouTube video. July 2023. https:/ an AI pipeline?” Squark Blog. Accessed April 11, 2024. https:/ Attack Trends: How phishing attacks are becoming more sophisticated and harder to identify.” Darktrace Blog. March 20, 2024. https:/ X-Force Threat Intelligence Index 2024. IBM Security. February 2024. https:/ Attack Trends: How phishing attacks are becoming more sophisticated and harder to identify.” Darktrace Blog. March 20, 2024. https:/ McCurdy, Chris, Sholmi Kramer, Gerald Parham, and Jacob Dencik, PhD. Prosper in the cyber economy: Rethinking cyber risk for business transformation. November 2022. Unpublished data.
12 Hector, Sam. “Mapping attacks on generative AI to business impact.” Security Intelligence. January 30, 2024. https:/ Value (and Threat) of Generative AI for Security Teams.” AWS podcast. Accessed April 22, 2024. https:/ The EU Artificial Intelligence Act website. Accessed April 11, 2024. https://artificialintelligenceact.eu/
15 Ponomarov, Kostiantyn. “Global AI Regulations Tracker: Europe, Americas & Asia-Pacific Overview.” Legal Nodes. March 20, 2024. https:/ Saner, Matt and Mike Lapidakis. “Securing generative AI: An introduction to the Generative AI Security Scoping Matrix.” AWS Security Blog. October 19, 2023. https:/ Manral, Vishwas. “Generative AI: Proposed Shared Responsibility Model.” Cloud Security Alliance Blog. July 28, 2023. https://cloudsecurityalliance.org/blog/2023/07/28/generative-ai-proposed-shared-responsibility-model
18 Ibid.
19 Salvin, Steve. “What managers should know about the secret threat of employees using shadow AI.” Fast Company. October 26, 2023. https:/ Ibid.
21 Ibid.
22 Hector, Sam. “Mapping attacks on generative AI to business impact.” Security Intelligence. January 30, 2024. https:/ Unveils Top Eight Cybersecurity Predictions for 2023-2024.” Gartner Newsroom. March 28, 2023. https:/ Saner, Matt and Mike Lapidakis. “Securing generative AI: An introduction to the Generative AI Security Scoping Matrix.” AWS Security Blog. October 19, 2023. https:/ Kerner, Sean Michael. “Exclusive: What will it take to secure gen AI? IBM has a few ideas.” VentureBeat. January 25, 2024. https:/ MLSecOps: Industry calls for new measures to secure AI.” TechTarget News. September 13, 2023. https:/ Web Services to Pharmatize Artificial Intelligence across the Life Sciences Industry.” EVERSANA news release. July 24, 2023. https:/ Bedrock website. Accessed April 11, 2024. https:/ Collaborates with AWS and TensorIoT to Automate the Regulatory Review Process.” AWS case study. Accessed April 12, 2024. https:/ source large language models: Benefits, risks and types.” IBM Think Blog. September 27, 2023. https:/ Javaheripi, Mojan and Sébastien Bubeck. “Phi-2: The surprising power of small language models.” Microsoft Research Blog. December 12, 2023. https:/ AWS Trainium website. Accessed April 11, 2024. https:/ IBM Institute for Business Value survey of 2,000 global executives responsible for supplier management, supplier sourcing, and ecosystem partner relationships. 2023. Unpublished data.
31 IBV C-suite Series. Turning data into value: How top Chief Data Officers deliver outsize results while spending less. IBM Institute for Business Value. March 2023. https://ibm.co/c-suite-study-cdo
32 IBM Institute for Business Value survey of 2,000 global executives responsible for supplier management, supplier sourcing, and ecosystem partner relationships. 2023. Unpublished data.
33 “What is responsible AI?” IBM website. Accessed April 11, 2024. https:/ Suggests Growth in Enterprise Adoption of AI is Due to Widespread Deployment by Early Adopters, But Barriers Keep 40% in the Exploration and Experimentation Phases.” IBM Newsroom. January 10, 2024. https:/ CEOs Need to Know About the Costs of Adopting GenAI.” Harvard Business Review. November 15, 2023. https://hbr.org/2023/11/what-ceos-need-to-know-about-the-costs-of-adopting-genai
35 Fisher, Lisa and Gerald Parham. AI and automation for cybersecurity: How leaders succeed by uniting technology and talent. IBM Institute for Business Value. May 2022. https://ibm.co/ai-cybersecurity
36 “Artificial Intelligence in Cybersecurity Market by Offering (Hardware, Solution, and Service), Security Type, Technology (ML, NLP, Context-Aware and Computer Vision), Application (IAM, DLP, and UTM), Vertical and Region Global Forecast to 2028.” Markets and Markets. December 2023. https:/ is a Cybersecurity Platform?” Trend Micro. Accessed April 11, 2024. https:/

Copyright IBM Corporation 2024
IBM Corporation
New Orchard Road
Armonk, NY 10504
Produced in the United States of America | May 2024

IBM, the IBM logo, and IBM X-Force are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at:

This document is current as of the
initial date of publication and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates.

THE INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS” WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. IBM products are warranted according to the terms and conditions of the agreements under which they are provided.

This report is intended for general guidance only. It is not intended to be a substitute for detailed research or the exercise of professional judgment. IBM shall not be responsible for any loss whatsoever sustained by any organization or person who relies on this publication. The data used in this report may be derived from third-party sources and IBM does not independently verify, validate or audit such data. The results from the use of such data are provided on an “as is” basis and IBM makes no representations or warranties, express or implied.

2L73BYB4-USEN-01