Ritu Jyoti
Group Vice President/General Manager, Worldwide Artificial Intelligence, Automation, Data and Analytics Research Practice, IDC

Dave Schubmehl
Research Vice President, Conversational Artificial Intelligence and Intelligent Knowledge Discovery, IDC

FROM RISK TO REWARD: The Business Case for Responsible AI
White Paper, sponsored by Microsoft | December 2024 | IDC #US52727124

Table of Contents
Executive Summary
Introduction
Key Findings from the Survey
Responsible AI Tooling
Additional Insights from the Study
  AI Adoption
  Important Use Cases
Advice and Recommendations
Conclusion
Definitions
  Generative AI
  Responsible AI
  Responsible AI Attributes
Appendix 1: Supplemental Data
About the IDC Analysts
Message from the Sponsor
Executive Summary

Every organization needs to be responsible at the core in the AI era, as doing so helps the organization accelerate realization of the benefits of AI. A responsible-at-the-core organization has the following foundational elements:

Core values and governance: It defines and articulates a responsible AI (RAI) mission and principles, supported by the C-suite, while establishing a clear governance structure across the organization that builds confidence and trust in AI technologies.

Risk management and compliance: It strengthens compliance with stated principles and current laws and regulations while monitoring future ones, and it develops policies to mitigate risk and operationalizes those policies through a risk management framework with regular reporting and monitoring.

Technologies: It uses tools and techniques to support principles such as fairness, explainability, robustness, accountability, and privacy and builds these into AI systems and platforms.

Workforce: It empowers leadership to elevate RAI as a critical business imperative and provides all employees with training to give them a clear understanding of responsible AI principles and how to translate these into actions. Training the broader workforce is paramount for ensuring RAI adoption.

The purpose of this paper is to provide information and evidence that a responsible AI approach fosters innovation by aligning AI deployment with organizational standards and societal expectations, resulting in sustainable value for organizations and their customers.
Introduction

According to IDC's February 2024 Worldwide Semiannual Artificial Intelligence Systems Spending Guide, Version 1, which tracks AI software, hardware, and services across industries and use cases, enterprises worldwide are expected to invest $232 billion in AI solutions in 2024. AI solutions are transforming a diverse range of industries, from finance and manufacturing to agriculture and healthcare, by enhancing operations and reshaping the nature of work. Enterprises' application of generative AI (GenAI), which is rapidly unfolding, can revolutionize customer experiences, boost employee productivity, enhance creativity and content creation, and accelerate process optimization.

However, AI also creates real risks and unintended consequences. AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. Because AI systems are often trained on large amounts of data collected from various sources, AI program outputs may also run into copyright infringement concerns. AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, lack of context, or biases in the data used to train the model. A lack of grounding can cause the model to generate outputs that, while seemingly plausible, are factually incorrect, irrelevant, or nonsensical, further depleting trust.
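To make the grounding point concrete, the minimal Python sketch below flags generated sentences that share little vocabulary with the source passages they were supposed to be grounded in. It is purely illustrative and is not part of the IDC study or any specific vendor's tooling; production groundedness detection relies on far more robust techniques (entailment models, citation checking), and the overlap heuristic and threshold here are assumptions.

```python
# Illustrative only: a naive lexical "groundedness" check that flags generated
# sentences with little overlap against the retrieved source passages.

def groundedness_score(sentence: str, sources: list[str]) -> float:
    """Fraction of the sentence's content words that appear anywhere in the sources."""
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    if not words:
        return 1.0
    source_text = " ".join(sources).lower()
    grounded = {w for w in words if w in source_text}
    return len(grounded) / len(words)

def flag_ungrounded(answer: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    """Return sentences whose overlap with the sources falls below the threshold."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if groundedness_score(s, sources) < threshold]
```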
As AI technologies become increasingly sophisticated, the security risks associated with their use, and the potential for misuse, also increase. For example, hackers and other bad actors can control GenAI foundation model output by poisoning the grounding data. Or they could use prompt injection attacks that disguise malicious instructions as user inputs, tricking the large language model (LLM) into overriding developer instructions with the goal of manipulating the model to produce a desired response. Jailbreaking, a technique that attempts to bypass or subvert the safety filters and restrictions built into LLMs, is also popular with bad actors.
According to IDC's March 2024 Microsoft Responsible AI Survey (n = 2,309), sponsored by Microsoft, which gathered insights on organizational attitudes and the state of responsible AI, 91% of respondents are currently using AI technology at their organization and expect more than 24% improvement in customer experience, business resilience, sustainability, and operational efficiency because of AI in 2024. Respondents who use responsible AI solutions say that it has helped with data privacy, customer experience, confident business decisions, brand reputation, and trust.

AI brings not only unprecedented opportunities to businesses but also an incredible responsibility. To ensure trust and fairness with their customers and stakeholders, as well as adhere to emerging governmental regulations (e.g., the EU AI Act), organizations need to be focused on responsible AI. The EU AI Act, which aims to govern the way companies develop, use, and apply AI, was approved in May 2024 and went into effect in August 2024. The legislation applies a risk-based approach to regulating AI, which means that different applications of the technology are regulated differently depending on the level of risk they pose to society. For AI applications deemed to be "high risk," for example, strict obligations have been introduced. Such obligations include adequate risk assessment and mitigation systems, high-quality training data sets to minimize the risk of bias, routine logging of activity, and mandatory sharing of detailed documentation on models with authorities to assess compliance.

The EU AI Act has implications that go far beyond the EU. It applies to any organization with any operation or impact in the EU, which means the AI Act will likely apply to you no matter where you're located. Oversight of all AI models that fall under the scope of the Act, including general-purpose AI systems, will fall under the European AI Office, a regulatory body established by the Commission in February 2024. Essentially, organizations need to be responsible at the core and proactively operationalize AI governance across the project life cycle, support collaborative risk management, and adhere to evolving AI regulations and their own policies and values.
As consumers become more aware of AI's impact, they demand greater transparency and responsible use of AI. Many organizations are integrating responsible AI into their corporate social responsibility (CSR) strategies, recognizing that responsible AI practices can enhance their reputation and contribute to societal well-being.

Businesses are adopting responsible AI to mitigate risks associated with AI, such as biases, security vulnerabilities, and unintended consequences. This proactive approach helps in safeguarding their operations and reputation. Companies that prioritize responsible AI are often seen as leaders in innovation. By addressing social and moral concerns, they can differentiate themselves in the market and attract more customers and partners.

There is a growing trend of collaboration between technologists, legal experts, and other stakeholders to develop comprehensive responsible AI frameworks. This interdisciplinary approach ensures that diverse perspectives are considered in AI development. With increasing regulations like the EU's AI Act and the U.S. AI Bill of Rights, companies are prioritizing responsible AI practices to ensure compliance and avoid legal repercussions. These trends highlight the industry's recognition of the critical role responsible AI plays in ensuring sustainable technological advancement.

IDC defines RAI as the practice of designing, developing, and deploying AI in a way that ensures fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. To create trust in AI, organizations must move beyond defining RAI principles and put those principles into practice. AI governance is essentially the set of processes, policies, and tools that bring together diverse stakeholders across data science, engineering, IT, compliance, legal, and business teams to ensure that AI systems are built, deployed, used, and managed to maximize benefits and prevent harm. AI governance allows organizations to align their AI systems with business and legal requirements throughout every stage of the machine learning (ML)/generative AI life cycle.
Key Findings from the Survey

According to IDC's Microsoft Responsible AI Survey, over 30% of respondents note that a lack of governance and risk management solutions is the top barrier to adopting and scaling AI (see Figure 1). Equally important, more than 75% of respondents who use responsible AI solutions say that doing so has helped with data privacy, customer experience, confident business decisions, brand reputation, and trust (see Figure 2). By being proactive and using RAI tools and technologies to identify, mitigate, and monitor risks throughout the AI life cycle, organizations can mitigate unintended negative consequences.

As organizations buy, develop, and deploy AI in a wide variety of solutions, they are also grappling with the need to develop responsible AI policies, procedures, and practices. According to the survey, organizations worldwide are still in the early days of developing and following a comprehensive responsible AI practice. While AI is not new and organizations have been using AI-powered solutions for a while, only the more AI-mature organizations have been proactive about embracing it responsibly. GenAI has been a catalyst for broader AI adoption but has also brought many more issues around data security, IP leakage, hallucinations, copyright infringement, and threats from bad actors.

On a regional basis, EMEA, Latin America, and Asia/Pacific lag behind North America in terms of governance structures and the technology used to enforce governance. Lack of human capital, data availability, funding, trust concerns, and regulations have been the key inhibitors to AI maturity (see Figure 3). North America has been at the forefront, with early adopters and more AI-mature organizations.

A systematic approach requires proven tools, frameworks, and methodologies, enabling organizations to move from principles to practice with confidence. Establishing a responsible AI approach that is robust, fair, and maintained on an ongoing basis can also enable organizations to communicate and collaborate with confidence.
FIGURE 1
Top Barriers to AI Adoption
What have been your top barriers to adopting AI? (Percentage of respondents)

Lack of AI governance and risk management solutions: 31%
Lack of skilled personnel (data scientists, data engineers, or AI modelers): 30%
Lack of fairness, explainability, transparency, and data lineage tools: 25%
Lack of adequate volumes of quality training data: 25%
Cost: 23%
Unclear business cases or involvement/support from LOB: 21%
Adversarial robustness (security and safety of algorithms): 21%
Decision criteria of the solution: 19%
Lack of content safety (abuse monitoring): 17%
Lack of model monitoring (data and concept drift) tools: 17%
Machine learning operations: 17%
Lack of digital watermarking: 13%
Hallucinations: 7%

n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Multiple responses were allowed. Use caution when interpreting small sample sizes.
FIGURE 2
Level of Impact of Organizations' Responsible Use of AI Solutions
How impactful do you consider your organization's responsible use of AI solutions in preserving each of the following? (Percentage of respondents)

Data privacy: 77%
Customer experience (satisfaction, loyalty): 76%
Confidence in (business) decisions: 76%
ESG ratings and investors' actions: 76%
Brand reputation: 76%
Avoiding hidden costs: 74%
Public trust: 73%
Limited regulatory backlash/legal compliance: 71%
Revenue: 69%
Avoiding criminal investigation: 69%
Employee experience: 68%

n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Scores are based on a scale of 1-5 (1 = not impactful, 5 = very impactful). Use caution when interpreting small sample sizes.
FIGURE 3
Governance Frameworks in Place: Worldwide and Regional Split
Which of the following are currently in place at your organization? (Percentage of respondents, worldwide)

Clear framework (principles, policies, technologies, and processes): 84%
Mechanisms to enforce/apply the framework: 73%
Governance structure to oversee implementation: 69%
Technologies to enforce responsible AI rules, policies, and processes: 57%

n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Multiple responses were allowed. Use caution when interpreting small sample sizes.
For the regional split (NA, EMEA, APAC, LATAM), see Figure 3 Supplemental Data in the Appendix.
Responsible AI Tooling

IDC is seeing that organizations are using a variety of tools to ensure responsible AI, ranging from software-based monitoring tools to human oversight (also known as human in the loop). The tools for monitoring and checking output from AI range from content filtering and abuse monitoring to bias checking, and from visual explainability to groundedness detection. This area of software is rapidly evolving, and IDC expects to see a larger set of vendors offering solutions in this area over the next 12-18 months.

Figure 4 shows how organizations are thinking about the use of technology combined with human oversight as RAI tools and technologies rapidly evolve. Figure 5 shows how organizations will be allocating their budget to include responsible AI software. Considering their lack of both AI skills and tools to support their RAI requirements, about one-third of respondents plan to leverage professional services support along with RAI software.
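As one deliberately simplified illustration of what software-based post-deployment monitoring can look like, the Python sketch below compares a production feature distribution against its training baseline and raises a data-drift alert. It is an assumption-laden toy example (the threshold and the example numbers are invented), not a description of the drift monitoring products referenced in Figures 1 and 7.

```python
# Illustrative data-drift check (not a specific product): compare the mean of a
# model input feature observed in production against its training-time baseline.
from statistics import mean, pstdev

def drift_alert(baseline: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
    """Return True if the live feature mean has shifted beyond the chosen threshold."""
    base_mean, base_std = mean(baseline), pstdev(baseline) or 1e-9
    z = abs(mean(live) - base_mean) / base_std
    return z > z_threshold

# Example: route an alert into the RAI dashboard when drift is detected.
if drift_alert(baseline=[0.42, 0.38, 0.45, 0.40], live=[0.71, 0.69, 0.74, 0.70]):
    print("Feature drift detected: trigger model review")
```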
FIGURE 4
Asset Mix for Monitoring After an AI System Has Gone Live: Worldwide and Regional Split
To ensure responsible use of AI by your organization over the next 12-18 months, please indicate the most likely mix of assets to be used for monitoring after an AI system has gone live. (Percentage of respondents, worldwide)

Monitoring will be done mostly by responsible AI governance software but with some oversight by people: 50%
Monitoring will be done mostly by people but using some responsible AI governance software: 23%
Monitoring will be done by responsible AI governance platforms only (i.e., no people involved): 17%
Monitoring will be done by people only (e.g., ethics boards determine ethical use): 7%
No monitoring will be necessary: 3%

n = 2,562 (worldwide), n = 611 (NA), n = 819 (EMEA), n = 832 (APAC), n = 300 (LATAM); Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Totals may not sum to 100% due to rounding. Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.
For the regional split, see Figure 4 Supplemental Data in the Appendix.
FIGURE 5
AI Organizations' Budget Allocation, 2024
What percentage of your AI organization's spend in 2024 will be for each of the following? (Percentage of respondents, worldwide)

AI/ML governance tools: 35%
Professional services for responsible AI: 32%
AI/ML development platforms/Machine Learning Ops (MLOps) tools: 18%
Others: 15%

Base = respondents that indicated their organizations plan to spend more than $1 on their AI projects in 2024.
n = 2,555 (worldwide); n = 611 (NA), n = 819 (EMEA), n = 830 (APAC), n = 300 (LATAM); Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Totals may not sum to 100% due to rounding. Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.
For the regional split, see Figure 5 Supplemental Data in the Appendix.

The AI regulatory landscape is dynamic, and currently the EU AI Act and the American Data Privacy and Protection Act are critical regulations for organizations to adhere to (see Figure 6). It is important to note that while regulations will increase, organizations will continue to spend on AI solutions but will do so responsibly, using professional services and governance tools and technologies (see Figure 7).
FIGURE 6
AI Regulations Critical for Organizations' AI Implementations
Which of the following emerging AI regulations are critical for your organization's AI implementations? (Percentage of respondents)

Singapore's Model Governance Framework: 39%
D.C. Stop Discrimination by Algorithms Act of 2021: 33%
EU AI Act: 27%
SR 11-7 Supervisory Guidance on Model Risk Management: 27%
UK White Paper on AI: 26%
American Data Privacy Protection Act (ADPPA), specifically Section 207*: 26%
NYC Algorithmic Hiring Law No. 144: 26%
NIST Risk Management Framework and Industry Playbook: 24%
Canada's C-27 Bill: Digital Charter Implementation Act, 2022: 21%
CFPB Circular 2022-03/Regulation B/Equal Credit Opportunity Act: 18%
Colorado SB 169: Concerning Protecting Consumers from Unfair Discrimination in Insurance Practices: 6%

* Which requires an algorithm design evaluation and algorithmic impact assessment.
n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Multiple responses were allowed. Use caution when interpreting small sample sizes.
FIGURE 7
Influence of a Worldwide Increase in AI Regulations on an Organization's Responsible AI Spend Plans in the Next Two Years: Worldwide and Regional Split
For each of the following areas, how would a worldwide increase in AI regulations influence your organization's responsible AI spend plans in the next two years? (Mean percentage of increase, worldwide)

AI-powered solutions: 6.4%
IT professional services for responsible AI: 5.5%
Adversarial robustness: data security and privacy software: 5.4%
Drift monitoring and risk management software: 5.3%
Business professional services for responsible AI: 5.2%
Fairness, explainability, data lineage, and transparency tools and software: 5.1%
Digital watermarking and content safety software: 4.8%

n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.
For the regional split, see Figure 7 Supplemental Data in the Appendix.
Over two-thirds of respondents are planning to use AI/ML platforms with built-in RAI support (see Figure 8), and 39% report that the platform should provide dashboards to assess, monitor, and drive timely actions as well as multi-persona collaboration (see Figure 9).

FIGURE 8
Type of Responsible AI Software Used/Planned to Be Used
What type of responsible AI software is your organization using/planning to use? (Percentage of respondents)

AI/ML platform with built-in responsible AI support: 68%
Standalone responsible AI platform: 31%
None of the above: 1%

n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.
FIGURE 9
Critical Capabilities of a Responsible AI Platform
What do you think are the critical capabilities of a responsible AI platform? (Percentage of respondents)

Dashboards to assess, monitor, and drive timely actions/risk management: 39%
Multi-persona collaboration (data science, legal/compliance, LOB, etc.): 38%
Integrations with data governance systems: 35%
Integrations with governance, risk, and compliance and enterprise resource management systems: 35%
Integration with Machine Learning Ops (MLOps)/Large Language Model Ops and monitoring tools: 34%
Reports for compliance: 34%
Multiple deployment options on premises/public cloud/hybrid: 30%
Low-code/no-code support for business analyst persona: 30%
Support for industry and geo-specific regulations/policies: 26%
Abuse monitoring: 25%
Support governance for third-party models and applications

n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Multiple responses were allowed. Use caution when interpreting small sample sizes.
Additional Insights from the Study

AI Adoption

IDC estimates that the use of AI is growing rapidly, at a rate in excess of 40%, and is projected to maintain its remarkable momentum, driven by the increasing adoption of AI across various industries (see Worldwide Artificial Intelligence Platforms Software Forecast, 2024-2028: AI Integration Accelerates, Fueling Technological Breakthroughs and Business Transformations, IDC #US52386424, July 2024). IDC research estimates the worldwide economic impact of generative AI by the end of 2033 to be close to $10 trillion.

Some key facts to note from IDC's March 2024 Microsoft Responsible AI Survey are:

• Over 77% of organizations across the world are either exploring potential use cases or investing significantly in generative AI technologies.
• 91% are currently using AI technology.
• 63% of organizations have an AI strategy tied to their business objectives, which includes a measurement strategy to evaluate success.
• Improving operational efficiency, increasing innovation, and reducing cost are the top business objectives for AI initiatives (see Figure 10).
FIGURE 10
Top 3 Business Objectives for Investing in AI
Please rank your organization's top 3 business objectives for investing in AI. (Percentage of respondents)

Improve operational efficiency: 32%
Increase innovation: 25%
Reduce cost and optimize spend: 23%
Increase business agility: 23%
Improve employee productivity: 22%
Reduce business risk (i.e., regulatory compliance, security downtime): 21%
Improve customer experience/customer satisfaction: 20%
Increase revenue from new markets, products, and/or customers: 19%
Accelerate time to market for new products and services: 18%
Increase profit: 18%
Improve customer acquisition and retention: 18%
Increase business resilience: 17%
Improve sustainability: 16%

n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Multiple responses were allowed. Use caution when interpreting small sample sizes.
Important Use Cases

Organizations are using AI for a wide range of use cases, including:

• Software development
• Automating IT tasks
• Fraud detection and cybersecurity
• Product and service innovation
• Automating business processes
• Call center conversation summarization and categorization
• Conversational analysis and intelligence on call center transcripts

IDC expects rapid expansion of AI use cases to help businesses innovate and stay competitive and relevant. It is interesting to note that organizations are prioritizing AI investments in IT operations, IT service management, and machine learning operations (see Figure 11). This is aligned with the need to drive foundational efficiencies so that they can scale AI adoption for line-of-business use cases that transform customer and employee experiences. Over the next three years, IT operations and IT service management will be the areas in which organizations invest in AI the most (see Figure 11).
FIGURE 11
Business/IT Processes for Which an Organization Will Be Investing in AI
For which of these business/IT processes will your organization be investing in AI? (Percentage of respondents; respondents indicated investments made in the past three years and planned for the next three years)

n = 2,309 (respondents currently using AI technology); Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Multiple responses were allowed. Use caution when interpreting small sample sizes.
For the data in this figure, see Figure 11 Supplemental Data in the Appendix.
Advice and Recommendations

Organizations do business with organizations that they can trust. There is an incredibly urgent need for organizations to operationalize AI governance across the project life cycle, support collaborative risk management, and adhere to regulations and their own policies and values. Organizations need to be responsible at the core, leveraging the framework in Figure 12.

FIGURE 12
Framework for Organizations to Be Responsible at the Core

AI Principles: transparency/explainability; fairness and equality; robustness, safety, and security; privacy and data protection; human in the loop; accountability
Operationalize AI Governance: regulations and laws; standards; company policies and values; best practices
Governance Layer (risk, insights, controls): infrastructure; models (open source LLMs, proprietary LLMs); application layer (UI, filters, monitoring); end users
AI Governance Committee: diverse and inclusive; policies governing internal AI use; policies governing external AI use; education, training, and awareness

Source: IDC, 2024
As such, every organization should do the following:

• Establish its AI principles: This entails a commitment to developing technology responsibly and working to establish the specific application areas the organization will not pursue. For example, many prohibit the use of facial recognition technology for building AI solutions.
• Avoid creating or reinforcing unfair bias: AI algorithms and data sets can reflect, reinforce, or reduce unfair biases. Although this is not simple, and considering that such biases differ across cultures and societies, every organization should seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
• Build and test for safety: Every organization should develop and apply strong safety and security practices to avoid unintended results that create risks of harm. It should test AI technologies in constrained environments and monitor their operation after deployment. The organization should design or adopt AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal, and it should incorporate privacy design principles.
• Establish an AI Governance Committee: An AI Governance Committee can help reduce the abuse and misuse of artificial intelligence. For the organization to adhere to its AI principles, it is critical that the committee has diverse representation (across different functions, from legal and compliance to security to the data team, and from HR to marketing and finance) and inclusive representation (different genders, cultures, abilities, and racial backgrounds).
• Define the organization's policies for governing internal and external AI use: These policies are crafted to align with legal requirements and organizational values, ensuring that AI technologies are used responsibly.
• Promote transparency and explainability: Encourage the development of AI systems that are transparent about their decision-making processes and can be easily explained to nontechnical stakeholders.
• Implement diverse testing criteria: Ensure AI models are tested against diverse data sets to minimize bias and verify their reliability across various scenarios and populations (see the fairness-check sketch after this list).
• Conduct regular AI audits: Schedule periodic audits of AI systems to assess compliance with internal policies and external regulations, iterating on the systems as necessary to address discovered issues.
• Prioritize privacy and data protection: Reinforce privacy and data protection measures in AI operations to safeguard against unauthorized data access and ensure user trust.
• Invest in AI training: Allocate resources for regular training and workshops on responsible AI practices, concepts, and use that aligns with corporate policies, for the entire workforce including executive leadership.
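To illustrate the "diverse testing criteria" recommendation, the sketch below compares selection rates across groups and flags a demographic-parity gap. It is a minimal, hypothetical example: the metric choice, the group labels, and the 0.2 tolerance are assumptions for illustration, not IDC or Microsoft guidance, and real bias testing covers many metrics and populations.

```python
# Illustrative fairness check: compare model selection rates across groups.
from collections import defaultdict

def selection_rates(predictions, groups):
    """predictions: list of 0/1 model decisions; groups: parallel list of group labels."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        counts[grp] += 1
        positives[grp] += pred
    return {g: positives[g] / counts[g] for g in counts}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups (0 = parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: flag the model for review if the gap exceeds an agreed tolerance.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
if demographic_parity_gap(preds, groups) > 0.2:   # tolerance is a policy choice
    print("Fairness review required before deployment")
```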
As we all know, it is not enough to just define principles and policies; it is critical to leverage an iterative process to operationalize AI governance.

Organizations need to keep abreast of global AI regulations. The EU AI Act, a landmark rule that aims to govern the way companies develop, use, and apply AI, was approved in May 2024 and went into effect in August 2024. The legislation applies a risk-based approach to regulating AI, which means that different applications of the technology are regulated differently depending on the level of risk they pose to society. For AI applications deemed to be "high risk," for example, strict obligations have been introduced. Such obligations include adequate risk assessment and mitigation systems, high-quality training data sets to minimize the risk of bias, routine logging of activity, and mandatory sharing of detailed documentation on models with authorities to assess compliance.

The EU AI Act has implications that go far beyond the EU. It applies to any organization with any operation or impact in the EU, which means the AI Act will likely apply to you no matter where you're located. This will bring much more scrutiny to tech giants when it comes to their operations in the EU market and their use of EU citizen data. Companies that breach the EU AI Act could be fined from 7.5 million euros or 1.5% of global annual revenue up to 35 million euros ($41 million) or 7% of their global annual revenue, whichever amount is higher; the size of the penalty will depend on the infringement and the size of the company fined. That is higher than the fines possible under the GDPR, Europe's strict digital privacy law, under which companies face fines of up to 20 million euros or 4% of annual global turnover for breaches. Oversight of all AI models that fall under the scope of the Act, including general-purpose AI systems, will fall under the European AI Office, a regulatory body established by the Commission in February 2024.
Establish an end-to-end governance layer framework that includes the following:

Infrastructure governance: Running AI systems on infrastructure that has appropriate security and privacy controls built into it is the only surefire way to mitigate one of the most critical risks that generative AI systems pose to organizations: leakage of sensitive data or IP.

Model governance: Policies and processes that control the design, development, and deployment of AI models are nothing new. Many organizations have been doing some form of model risk management for years. In the era of generative AI, however, enterprise model governance looks very different because most enterprises aren't building their own foundation models. Instead, they are relying on third-party foundation model providers, for example, OpenAI and Anthropic. These third-party providers are increasingly investing in tools and processes to manage and mitigate privacy, safety, and security risks at the model level; these investments include foundation model evaluations to better quantify model behavior and "alignment" approaches like reinforcement learning from human feedback and constitutional AI, which reduce the likelihood of common failure modes and improve model steerability. These safeguards, however, are not tailored to any particular use of foundation models, nor are they grounded in a specific industry's or organization's risk tolerance and compliance needs. Enterprises that have a low risk tolerance or specific concerns related to a particular application of foundation models are finding that the model governance of third-party providers is not sufficient for their needs. In these scenarios, you could explore the use of open source models to enhance your control or implement stronger layers of governance on top of and underneath the model in the other layers of the GenAI stack.
Application layer governance: The application layer provides the user interface for generative AI APIs, so there is a tremendous opportunity to insert governance controls into this layer to prevent a foundation model from being used in dangerous or noncompliant ways. By their nature, GenAI systems are more flexible and difficult to predict than traditional software, which presents new challenges for application builders. For example, GenAI applications are vulnerable to prompt injections and misuse by malicious users. It is also easy for GenAI applications to return outputs that are harmful or in violation of governance policies and requirements. These issues can generally be dealt with by input/output governance, where safeguards (e.g., automatic content moderation) are added around foundation model API calls to reduce risks. Adding these kinds of governance controls to the application layer of the generative AI stack is a very effective way to reduce the risk of these systems; however, if an organization isn't building its own generative AI applications, it still doesn't have control over this layer.
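A minimal sketch of the input/output governance pattern described above follows. Both `call_foundation_model` and `moderate` are hypothetical stand-ins (the real model endpoint and content-safety service would be supplied by your provider); the blocked-phrase list is a placeholder, not a recommended rule set.

```python
# Illustrative input/output governance wrapper around a foundation model call.
BLOCKED_MESSAGE = "Request blocked by responsible AI policy."

def moderate(text: str) -> bool:
    """Hypothetical content-safety check; return True if the text is allowed."""
    banned_markers = ["ignore previous instructions", "reveal the system prompt"]
    return not any(marker in text.lower() for marker in banned_markers)

def call_foundation_model(prompt: str) -> str:
    """Placeholder for the real model call (e.g., an HTTP request to your provider)."""
    return f"Model output for: {prompt}"

def governed_completion(user_prompt: str) -> str:
    # Input governance: reject prompts that look like injection or policy violations.
    if not moderate(user_prompt):
        return BLOCKED_MESSAGE
    response = call_foundation_model(user_prompt)
    # Output governance: never return content that fails the safety check.
    return response if moderate(response) else BLOCKED_MESSAGE
```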
End-user governance: Without direct control over models or the application layer, what capacity do you have to govern these systems and mitigate their most egregious risks? For most enterprises, the first line of defense against generative AI risk is end-user governance: governing the ways that end users are allowed to interact with generative AI systems. Many enterprises responded to the generative AI revolution by implementing the bluntest instrument when it comes to end-user governance: turning off end-user access. Of course, turning off access to GenAI chatbots is an effective way to make sure that your employees aren't exposing your organization to risk via usage; however, it also blocks your organization from realizing the many benefits of, and obtaining value from, these tools. Examples of end-user governance that allows for safe and responsible exploration of generative AI include:

• Adopting a code of conduct that defines how users are and are not allowed to interact with generative AI tools
• Logging end-user interactions and monitoring for risky or edge case inputs and outputs (see the sketch after this list)
• Implementing human-in-the-loop reviews that prevent generative AI outputs from being used without human feedback or input
• Enabling users to share effective prompts with one another so they can become better at successfully using generative AI tools
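The sketch below illustrates the logging and monitoring item in the list above: each interaction is appended to an audit log and flagged for human review when it matches a watch list. The watch list, log format, and file destination are all assumptions chosen for illustration; an enterprise deployment would integrate with its existing SIEM or governance platform.

```python
# Illustrative end-user interaction logging with simple risk flagging.
import json, time

RISKY_TERMS = ["password", "social security", "client list"]  # hypothetical watch list

def log_interaction(user_id: str, prompt: str, response: str, path: str = "genai_audit.log") -> None:
    record = {
        "ts": time.time(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
        # Flag interactions that a human-in-the-loop reviewer should examine.
        "needs_review": any(term in prompt.lower() for term in RISKY_TERMS),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```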
Conclusion

AI is being regarded as a critical enabler of businesses' strategic priorities. Scaling AI can deliver high performance for customers, shareholders, and employees, but organizations must overcome common hurdles to apply AI responsibly and sustainably. AI adoption can bring with it new and dynamic organizational and social issues. Failure to manage these issues can have a significant impact at a human and societal level, leaving organizations exposed to financial, legal, and reputational repercussions. Simply put, embracing AI responsibly is a must, not an option.

While many organizations have taken the first step and defined AI principles, translating these into practice is far from easy, especially with few standards or regulations to guide them. Successful organizations understand the importance of taking a systematic approach from the start, addressing these challenges in parallel, while others underestimate the scale and complexity of the change required. A systematic approach requires proven tools, frameworks, and methodologies, enabling organizations to move from principles to practice with confidence and supporting the professionalization of AI. Establishing an RAI approach that is robust, fair, and maintained on an ongoing basis can also enable organizations to communicate and collaborate with confidence.

Being responsible can become more beneficial, especially as governments, regulatory bodies, and international standard-setting bodies consider new rules of the road and standards for the development and deployment of AI. The biggest barrier lies in the complexity of scaling AI responsibly, an undertaking that involves multiple stakeholders and cuts across the entire enterprise and ecosystem. IDC's Microsoft Responsible AI Survey revealed that over 50% of respondents do not have a fully operationalized and integrated RAI governance structure or the tools and technologies to enforce responsible AI adoption. As new requirements emerge, they must be baked into product development processes and connected to other regulatory areas, such as privacy, data security, and content. By shifting from a reactive AI compliance strategy to the proactive development of mature responsible AI capabilities, organizations will have the foundations in place to adapt as new regulations and guidance emerge. This way, businesses can focus more on performance and competitive advantage and deliver business value with social and moral responsibility.
Definitions

Generative AI

Generative AI is a branch of computer science that involves unsupervised and semi-supervised algorithms that enable computers to create new content using previously created text, audio, video, images, and code in response to short prompts. Generative AI powers foundation models, which are a class of machine learning models that are trained on diverse data and can be adapted or fine-tuned for a wide range of downstream tasks. The era of the large-scale model was sparked by the emergence of the transformer model architecture in 2017, most notably in the form of the large language model. Generative AI requires significant amounts of data to build and operate models, and it requires access to significant data technologies to build or train models. While GenAI technologies are relatively new, predictive and prescriptive AI based on various types of machine learning has been providing solutions to problems for over a decade. The combination of predictive, prescriptive, and generative AI promises unprecedented productivity improvements and business transformation opportunities for organizations across the world.
Responsible AI

As noted previously, responsible AI is the practice of designing, developing, and deploying AI in a way that prioritizes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Responsible AI focuses on developing and using AI solutions in a manner consistent with societal laws, government regulations, organizational values, and user expectations. Planning, oversight, and governance are key aspects of responsible AI. Responsible AI aims to ensure that AI use in the organization is human centered, trustworthy, fair, explainable, privacy preserving, secure, documented, and governed.

Responsible AI Attributes

The key attributes and pillars of a responsible AI policy framework are explained in the sections that follow.

Accountability

Can the AI system and the people who designed and implemented the system be held accountable for the decisions made? With more power comes more responsibility. As AI capabilities are being leveraged for making critical decisions such as medical treatments, it is important that we include humans in the loop around the AI system to ensure the best results. The chief data officer and chief trust officer (or equivalent roles) must collaborate to assess their business-specific regulatory charter, review the problem at hand, and define the solution on a case-by-case basis, subject to the business risk and potential business impact. These are the roles in the organization that have the authority and responsibility to be accountable for ensuring responsible AI use and operation.

Explainability and Transparency

Is the AI system transparent, and can the output of the AI system be explained? AI systems need to be transparent: they should be able to safely report key attributes of the AI models, including the data and algorithms used to train the model, the bias mitigations performed, and the model and its assets. Explainability refers to the ability to understand how the decisions, conclusions, or outputs of the AI system are made. Key personas involved with transparency and explainability include data scientists, auditors, and decision-makers. Arriving at meaningful explanations of AI models reduces uncertainty and helps quantify their accuracy. It is important to establish the right balance between explainability and improved trust in AI models.
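As one concrete illustration of an explainability technique, the sketch below computes a crude permutation importance: how much a model's accuracy drops when one input feature is shuffled. The `model` object (with a `predict` method) is a hypothetical stand-in for any trained classifier; this is a simplified teaching example, not the explainability tooling discussed in the survey.

```python
# Illustrative permutation-importance sketch: a large accuracy drop after shuffling
# a feature suggests the model relies heavily on that feature.
import random

def permutation_importance(model, X: list[list[float]], y: list[int], feature_idx: int) -> float:
    def accuracy(rows):
        return sum(model.predict(r) == label for r, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    shuffled_column = [row[feature_idx] for row in X]
    random.shuffle(shuffled_column)
    X_shuffled = [row[:feature_idx] + [val] + row[feature_idx + 1:]
                  for row, val in zip(X, shuffled_column)]
    return baseline - accuracy(X_shuffled)
```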
Fairness

Is the AI system fair? AI systems should be fair and unbiased to avoid any unintentional unfair treatment of certain groups, and they should use training data and models that are free of bias. Apart from unwanted bias introduced during training by the training data, bias can also creep in because of an incorrect model build, selection, or deployment. The AI system needs to have the correct checks and balances to ensure that the system doesn't discriminate based on gender, race, color, orientation, faith, or anything else. Again, this is part of the chief trust officer's responsibilities within the organization, and this person is charged with making sure that any AI output is fair and unbiased.

Inclusiveness

Is the AI system inclusive of all genders, races, appearances, languages, abilities, and experiences? AI systems should be developed using inclusive and accessible practices to be inclusive of all human beings, without excluding any groups of people intentionally or unintentionally.

Privacy and Security

Can the AI system protect the privacy and security of the data and its users? AI systems should follow established security and privacy practices to protect AI models from adversarial attacks, secure user data, ensure user privacy, and mitigate risks. This is part of the chief security officer's job and, in many ways, is the same as what the security organization is or should be doing for the rest of the company. In this particular case, the same principles, rules, guidelines, and approaches can be applied to AI systems in the same manner as to any other applications.

Robustness and Security

Is the AI system robust and safe? AI systems should be safe and secure and not vulnerable to tampering or to compromise of the data they are trained on. They also need to be robust, without performance degradation over time, and they need appropriate monitoring and human-in-the-loop processes to ensure operational safety. Again, this is part of the chief security officer's job and, in many ways, is the same as what the security organization is or should be doing for the rest of the company. In this particular case, the same principles, rules, guidelines, and approaches can be applied to AI systems in the same manner as to any other applications.
Appendix 1: Supplemental Data

This appendix provides an accessible version of the data for the complex figures in this document.

FIGURE 3 SUPPLEMENTAL DATA
Governance Frameworks in Place: Worldwide and Regional Split
Clear framework (principles, policies, technologies, and processes): Worldwide 84%, NA 88%, EMEA 83%, APAC 81%, LATAM 89%
Mechanisms to enforce/apply the framework: Worldwide 73%, NA 81%, EMEA 67%, APAC 70%, LATAM 70%
Governance structure to oversee implementation: Worldwide 69%, NA 75%, EMEA 62%, APAC 66%, LATAM 67%
Technologies to enforce responsible AI rules, policies, and processes: Worldwide 57%, NA 66%, EMEA 46%, APAC 54%, LATAM 60%
n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.

FIGURE 4 SUPPLEMENTAL DATA
Asset Mix for Monitoring After an AI System Has Gone Live: Worldwide and Regional Split
Monitoring will be done mostly by responsible AI governance software but with some oversight by people: Worldwide 50%, NA 49%, EMEA 51%, APAC 50%, LATAM 49%
Monitoring will be done mostly by people but using some responsible AI governance software: Worldwide 23%, NA 22%, EMEA 24%, APAC 23%, LATAM 23%
Monitoring will be done by responsible AI governance platforms only (i.e., no people involved): Worldwide 17%, NA 19%, EMEA 13%, APAC 20%, LATAM 19%
Monitoring will be done by people only (e.g., ethics boards determine ethical use): Worldwide 7%, NA 7%, EMEA 8%, APAC 5%, LATAM 9%
No monitoring will be necessary: Worldwide 3%, NA 2%, EMEA 3%, APAC 2%, LATAM 1%
n = 2,562 (worldwide), n = 611 (NA), n = 819 (EMEA), n = 832 (APAC), n = 300 (LATAM); Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Totals may not sum to 100% due to rounding. Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.

FIGURE 5 SUPPLEMENTAL DATA
AI Organizations' Budget Allocation, 2024
AI/ML governance tools: Worldwide 35%, NA 36%, EMEA 35%, APAC 35%, LATAM 34%
Professional services for responsible AI: Worldwide 32%, NA 31%, EMEA 33%, APAC 32%, LATAM 34%
AI/ML development platforms/Machine Learning Ops (MLOps) tools: Worldwide 18%, NA 19%, EMEA 17%, APAC 18%, LATAM 17%
Others: Worldwide 15%, NA 15%, EMEA 16%, APAC 15%, LATAM 15%
Base = respondents that indicated their organizations plan to spend more than $1 on their AI projects in 2024.
n = 2,555 (worldwide); n = 611 (NA), n = 819 (EMEA), n = 830 (APAC), n = 300 (LATAM); Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Totals may not sum to 100% due to rounding. Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.

FIGURE 7 SUPPLEMENTAL DATA
Influence of a Worldwide Increase in AI Regulations on an Organization's Responsible AI Spend Plans in the Next Two Years: Worldwide and Regional Split
AI-powered solutions: Worldwide 6.4%, NA 6.4%, EMEA 5.4%, APAC 7.3%, LATAM 6.6%
IT professional services for responsible AI: Worldwide 5.5%, NA 5.2%, EMEA 5.2%, APAC 6.4%, LATAM 5.0%
Adversarial robustness: data security and privacy software: Worldwide 5.4%, NA 5.7%, EMEA 4.3%, APAC 6.2%, LATAM 4.3%
Drift monitoring and risk management software: Worldwide 5.3%, NA 5.3%, EMEA 4.8%, APAC 5.8%, LATAM 4.9%
Business professional services for responsible AI: Worldwide 5.2%, NA 5.5%, EMEA 4.8%, APAC 5.3%, LATAM 3.5%
Fairness, explainability, data lineage, and transparency tools and software: Worldwide 5.1%, NA 4.8%, EMEA 4.5%, APAC 6.0%, LATAM 5.4%
Digital watermarking and content safety software: Worldwide 4.8%, NA 4.8%, EMEA 4.6%, APAC 5.1%, LATAM 4.4%
n = 2,562; Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Use caution when interpreting small sample sizes.

FIGURE 11 SUPPLEMENTAL DATA
Business/IT Processes for Which an Organization Will Be Investing in AI
IT operations: invested in past three years 29%, will invest in next three years 23%
IT service management: invested 25%, will invest 21%
Machine learning/deep learning life cycle: invested 24%, will invest 18%
Analytics: invested 22%, will invest 18%
Traditional software development life cycle: invested 19%, will invest 14%
Customer service: invested 17%, will invest 18%
Marketing content creation and promotion: invested 16%, will invest 16%
Sales: invested 16%, will invest 13%
Content management: invested 13%, will invest 14%
Finance: invested 12%, will invest 14%
Idea to product: invested 10%, will invest 13%
Recruit to retire: invested 10%, will invest 10%
Lead to cash: invested 10%, will invest 11%
Source to pay: invested 8%, will invest 9%
n = 2,309 (respondents currently using AI technology); Source: IDC's Microsoft Responsible AI Survey, March 2024
Notes: Data is managed by IDC's Global Primary Research Group. Data is weighted by IT spending by country. Multiple responses were allowed. Use caution when interpreting small sample sizes.
About the IDC Analysts

Ritu Jyoti
Group Vice President/General Manager, Worldwide Artificial Intelligence, Automation, Data and Analytics Research Practice, IDC

Ritu Jyoti is group vice president/general manager of the Worldwide Artificial Intelligence, Automation, Data and Analytics Research Practice with IDC's Software Market Research and Advisory Practice. Ms. Jyoti is responsible for leading the development of IDC's thought leadership for AI research and for managing the worldwide AI, automation, data, and analytics software research team. Her research focuses on the state of enterprise AI efforts and global market trends for the rapidly evolving AI and ML landscape, including GenAI innovations and ecosystems. Ms. Jyoti also leads insightful research that addresses the needs of AI technology vendors and provides actionable guidance to them on how to crisply articulate their value proposition, differentiate, and thrive in the AI era.

Dave Schubmehl
Research Vice President, Conversational Artificial Intelligence and Intelligent Knowledge Discovery, IDC

Dave Schubmehl is research vice president for IDC's Conversational Artificial Intelligence and Intelligent Knowledge Discovery research. His research covers information access and artificial intelligence (AI) technologies around conversational AI, including speech AI and text AI, machine translation, embedded knowledge graph creation, intelligent knowledge discovery, information retrieval, unstructured information representation, knowledge representation, deep learning, machine learning, unified access to structured and unstructured information, chatbots and digital assistants, and rich media search in SaaS, cloud, and installed software environments. This research analyzes the trends and dynamics of the text and audio AI software markets and the costs, benefits, and workflow impact of solutions that use these technologies.
Message from the Sponsor

Microsoft is dedicated to enabling every person and organization to use and build AI that is Trustworthy, which means AI that is private, safe, and secure. We use our own best practices from decades of research and learnings from building AI products at scale to provide industry-leading commitments and capabilities. Trustworthy AI is only possible when you combine our policy commitments with our product capabilities so you can achieve your AI transformation with confidence. Trust Microsoft for commitments and capabilities that put your AI privacy, safety, and security first.

The study was commissioned and sponsored by Microsoft. This document is provided solely for information and should not be construed as legal advice.

IDC Research, Inc., 140 Kendrick Street, Building B, Needham, MA 02494, USA. T +1 508 872 8200

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,300 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC's analysis and insight helps IT professionals, business executives, and the investment community make fact-based technology decisions and achieve their key business objectives.

© 2024 IDC. Reproduction is forbidden unless authorized. All rights reserved.

IDC Custom Solutions produced this publication. The opinion, analysis, and research results presented herein are drawn from more detailed research and analysis that IDC independently conducted and published, unless specific vendor sponsorship is noted. IDC Custom Solutions makes IDC content available in a wide range of formats for distribution by various companies. This IDC material is licensed for external use, and in no way does the use or publication of IDC research indicate IDC's endorsement of the sponsor's or licensees' products or strategies.