An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework

September 2023

CO-AUTHORS

Laxmi Anish, PepsiCo, Inc.
Kathy Baxter, Salesforce Inc.
Dawn Bloxwich, Google DeepMind
Catherine Goetz, LivePerson
Tina Huang, EqualAI
Gabrielle Kohlmeier, Verizon
David Lincoln, Verizon
David Marcos, Microsoft Corporation
Amanda Muller, Northrop Grumman Corporation
Mike Tang, Verizon
Reggie Townsend, SAS Institute
Miriam Vogel, EqualAI
Diya Wynn, Amazon Web Services

TABLE OF CONTENTS

I. Executive Summary
II. Introduction
III. Background on the EqualAI Responsible AI Summit
IV. Responsible AI Governance Framework
   A. First: Ground the Process in Your Reality
   B. Second: Identify Values
      1. Trust
      2. Culture
      3. Accountability
      4. Multistakeholder Engagement
   C. Third: Determine Responsible AI Principles
   D. Fourth: Establish Accountability and Clear Lines of Responsibility
   E. Fifth: Use Tools to Support AI Governance
      1. Documentation
      2. Defined Process
      3. Multistakeholder Review
      4. Metrics, Monitoring, and Reevaluation
V. Further Discussion and Open Questions
VI. Conclusion

ACKNOWLEDGMENTS

We are grateful for all those who played a crucial role in the production of this white paper and our EqualAI Badge Program. We would like to recognize the esteemed faculty for our Badge Program, including Kathy Baxter, Meredith Broussard, Andrew Burt, Natasha Crampton, Jen Gennai, Cathy O’Neil, and Reva Schwartz, and the distinguished speakers at the EqualAI Responsible AI Summit, including Rep. Don Beyer (VA-8)
, Dr. Seth Center, Rep. Ted Lieu (CA-36), and Rep. Jay Obernolte (CA-23), for contributing to informative discussions and insights that shaped the findings and perspectives of our conversations and ultimately our paper, as well as inspiring confidence in our policymakers to navigate these complex issues. We would also like to extend sincere thanks to our invaluable EqualAI board members and advisers, including Jonathan Dotan, Victoria Espinel, and Reggie Townsend, for their involvement at the Summit. The productive conversations that took place were made possible in large part by the elegant, welcoming venue and the staff at the House at 1229, including Kathleen Buhle, Johanna Harris, and Karima Ouazzani; we are grateful for their unparalleled professionalism while hosting the Summit. Special thanks are also due to the NP Agency team (Alexandria Dissell, Tom McMahon, and Cara Morris Stern) for editing, designing, and
promoting the white paper, as well as our indispensable EqualAI community, including James Beasley, Kristi Boyd, Hannah Dudley, Becca Kahn, and Meghna Sinha, for their feedback and contributions. We would like to express our gratitude to our copy editor, Nancy King, and EqualAI staff (James Bond and Jim Wiley) for their work in bringing the white paper across the finish line. Lastly, we would like to recognize Microsoft Corporation for their significant contributions to the white paper, including their instruction and participation in the Badge Program, and support for the Responsible AI Summit.

EXECUTIVE SUMMARY

As businesses and individuals increase their dependence on and investment in artificial intelligence (AI)¹, there is a critical need to establish and align on best practices that promote responsible AI to earn and maintain trust in these systems. Since 2017, AI adoption has doubled, and organizations that have embraced AI are experiencing the highest financial returns relative to competitors.² Additionally, numerous studies report that consumers overwhelmingly expect businesses to be responsible and ethical when adopting and developing AI technology.³ Now more than ever, organizations⁴ must understand and implement practices to ensure their AI is responsible, by which we mean safe, inclusive, and effective for all possible end users. An organization must not only consider its consumers but also the impact these new technologies will have on its employees and the company’s culture, as well as any unintended use cases.

Currently, there is a lack of national, let alone global, consensus on standards for responsible AI governance. While there is some indication of progress⁵ to come on this front, it is not imminent, and organizations cannot wait for the regulatory and litigation landscape to settle before adopting best practices for AI governance. The potential harm and liability associated with the complex AI systems currently being built, acquired, and integrated is too significant to delay adoption of safety standards. To help companies prepare for this reality, EqualAI convenes leaders across industry, government, and civil society to align on risks, liabilities, and best practices in establishing and operationalizing responsible AI governance.

1 EqualAI relies on the NIST AI Risk Management Framework’s definition of an AI system: “An AI system is an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.” (Adapted from: OECD Recommendation on AI: 2019; ISO/IEC 22989:2022.)
2 McKinsey & Company, “The State of AI in 2022 and a Half Decade in Review,” December 6, 2022.
3 Business Wire, “WE Communications Brands in Motion 2018 Global Study: Escalating Consumer Expectations Push Brands to Deliver on Innovation, Ethical Responsibility and Functionality,” September 12, 2018; “[…] Percent of Organizations Think They Need to Do More to Assure Customers about How Their Data Is Used in AI, New Cisco Research Finds,” January 24, 2023.
4 Note that we use “organization” and “company” interchangeably in this paper, given that our intended audience is both types of entities and the findings presented here are equally applicable.
5 “The EU Artificial Intelligence Act,” June 14, 2023, https://www.artificial-intelligence-; National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework,” January 2023, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

The primary goal of this white paper is to serve as a resource for organizations of any size, sector, or maturity that are adopting, developing, using, and implementing AI systems with an internal and public commitment to do so responsibly. We hope this will be helpful to additional audiences, such as policy leaders who want to understand how industry leaders who are committed to this effort are taking the initiative to ensure responsible AI governance.

[Figure: Responsible AI Governance Framework, depicting its six pillars, including Responsible AI Values and Principles; Accountability and Clear Lines of Responsibility; Documentation; Multistakeholder Reviews; and Metrics, Monitoring, and Reevaluation]

This white paper builds on the discussion in the culminating seventh session of our EqualAI Badge Program, where senior executives gather to address best practices in responsible AI governance. The final framework they aligned on consists of six main pillars. Each of these six pillars plays a critical role in establishing the groundwork for an organization’s responsible AI governance framework. Together, they empower executives and employees to leverage resources, tools, and guidance to proactively ensure AI is adopted, developed, used, and implemented consistently, safely, and inclusively. Implementation of this responsible AI governance framework will also signal to consumers and the general public that an organization is committed to earning and maintaining its trust.

At the initial stages of building an AI framework, our participants agreed on key AI values and principles that serve as the foundation to develop a cohesive process and inform end goals.

RESPONSIBLE AI VALUES

Each of these values is commonly accepted among most organizations and serves as the foundation to build a sustainable responsible AI governance framework:

- Trust
- Multistakeholder Perspectives
- Culture
- Accountability

RESPONSIBLE AI PRINCIPLES

Each of these principles is important to the effective operation of an organization but should be considered as it applies to AI systems more specifically; the principles are enumerated in the “Determine Responsible AI Principles” section below.

With a grounding in these values and principles, an organization can begin to set the
foundation of its responsible AI governance framework. This white paper explores the six pillars, explains their importance, and offers methods for organizations to operationalize an AI framework consistent with their unique purposes, goals, values, and principles.

INTRODUCTION

EqualAI works with organizations that are eager to seize the benefits of AI while mitigating potential harms and biases. In recent years, companies, governments, and nongovernment entities have progressively developed AI principles and governance frameworks.⁶ This is an important development, as organizations are increasingly becoming “AI companies,” meaning those that use AI in pivotal areas ranging from hiring and other HR functions to financial or benefits-related determinations.

In May 2023, EqualAI gathered industry leaders to align on responsible AI principles to create a governance framework applicable to any organization regardless of industry, sector, size, or maturity. We are often asked to share our lessons and discussions on best practices with the broader community. We very much agree that sharing this information is in itself a best practice, both to heighten awareness of and alignment on best practices, as well as to
crowdsource these evolving standards and provide an opportunity to gather broader input. This white paper shares both themes and learnings from our discussion at the Responsible AI Summit as well as lessons discussed and operationalized throughout the Badge Program.

Intent and audience: We are excited to work with our partners (noted herein) to create what is arguably a first-of-its-kind document, which incorporates the views of multiple companies with multidisciplinary perspectives on best practices in responsible AI governance, and we are grateful to our participants and partners for helping us to create this tool. The intended audience includes organizations that are building or deploying responsible AI governance and policymakers looking to understand the challenges encountered and practices adhered to by those companies leading the field of responsible AI governance as they consider the creation of appropriate guardrails.

A road map for this document: This white paper begins with background on the EqualAI Responsible AI Summit, including information about the EqualAI Badge Program, the Summit’s goals, and participating companies. The paper then offers a responsible AI governance framework,
which includes commonly accepted organizational values and explains how they relate to responsible AI, key AI principles for organizations to consider adopting, and tools to help implement and monitor responsible AI governance efforts. Finally, the white paper dives into highlights of the participants’ deliberations, noting questions for which they did not reach consensus or have time to consider.

The end goal: This white paper intends to guide organizations with top-line best practices used by leading practitioners and thought leaders in responsible AI governance, as they adopt, develop, use, and implement AI responsibly.

6 Mina Narayanan and Christian Schoeberl, “A Matrix for Selecting Responsible AI Frameworks,” Center for Security and Emerging Technology, June 2023, https://cset.georgetown.edu/publication/a-matrix-for-selecting-responsible-ai-frameworks/.

BACKGROUND ON THE EQUALAI RESPONSIBLE AI SUMMIT

The EqualAI Responsible AI Summit is the culmination of the EqualAI Badge Program, where past and present Badge Program participants gather to align on responsible AI principles and best practices with the goal of developing consensus on a responsible AI governance framework. Our participants recognized that the needs of an organization will vary depending on its size, sector, maturity, resources, level of AI use and development, and other critical factors. As such, they focused on identifying principles and creating a framework that includes transferable characteristics regardless of these variables and offering different options for organizations to customize responsible AI efforts. You will find the high-level themes and findings from our discussions in this paper.

OVERVIEW OF EQUALAI

EqualAI is a nonprofit organization leading the movement to reduce unconscious bias and other harms in the development and deployment of artificial intelligence. We work with leaders and experts across business, technology, and government to develop ethical standards, technical tools, and legislative solutions for responsible AI governance. Our flagship programs include the Badge Program for corporate leaders, as described above, as well as our CA-accredited Continuing Legal Education (CLE) course, designed to help lawyers understand their role in reducing bias in AI and, in turn, better serve their clients. In addition, EqualAI co-hosts a podcast, In AI We Trust?, featuring AI thought leaders dedicated to defining, developing, and deploying responsible, trustworthy AI.

Context: What is the EqualAI Badge Program?

The EqualAI Badge Program prepares senior executives at companies developing or using AI systems to govern AI in ways that will help reduce potential harms and liability, empower employees to identify AI risks or
risks associated with developing and deploying AI technologies, and broaden the potential use cases and consumer base. Participants stem from a broad cross section of industry (e.g., cloud-based software, service providers, sales, etc.) and roles (chief data officers, general counsel, chief privacy officers, chief responsible AI officers, etc.).

In this program, responsible AI governance is divided into six vantage points. Each session brings AI experts to meet with the Badge community and align on best practices for responsible AI governance. Interactive monthly panel discussions help inform participants on how to operationalize AI principles with the goal of creating more inclusive AI systems and reducing liability. Senior executives learn best practices and become part of a community of leading experts and executives in the responsible AI field. Upon completion, Badge alumni are invited to quarterly convenings to continue their education and engagement with the EqualAI community.

Badge Program topics and speakers on monthly panel sessions generally include (with some variation):

- How Bias Translates and Embeds in Our AI, with Meredith Broussard (New York University)
- Ethical Matrix for Creating AI, with Cathy O’Neil (ORCAA)
- Is Your AI Safe to Launch, with Kathy Baxter (Salesforce)
- Tools and Strategies to Operationalize Responsible AI Governance, with Jen Gennai (Google) and Natasha Crampton (Microsoft)
- The Role of Lawyers in Addressing Bias in AI, with Alexandra Reeve Givens (Center for Democracy and Technology), Andrew Burt (bnh.ai), and Tom Lue (DeepMind)
- The AI Policy Landscape, with staff from Capitol Hill, the U.S. Equal Employment Opportunity Commission (EEOC), and the National Institute of Standards and Technology (NIST)

2023 Spring Summit participants include senior executives from the following companies:

- Verizon
- Hewlett Packard Enterprise (HPE)
- Microsoft
- PepsiCo
- SAS Institute
- Amazon Web Services (AWS)
- LivePerson
- Northrop Grumman
- Salesforce
- DeepMind

RESPONSIBLE AI GOVERNANCE FRAMEWORK

Establishing a responsible AI governance framework is crucial for any organization planning to design or integrate AI into its internal functions or external products and services. Such a framework helps to operationalize an organization’s AI principles and values and offers a set of guidelines and best practices to ensure the intentional, transparent, and consistent development, deployment, and use of AI technologies. In addition, the
framework will empower employees and allow an organization to engender trust in its AI use more specifically and its operations and intentions more generally.

A. GROUND THE PROCESS IN YOUR REALITY

As an organization’s leadership begins its responsible AI governance journey, it is helpful to understand how the company is currently using AI and what plans teams have to integrate AI for pivotal functions. After surveying their particular AI landscape and horizon, it is time to develop AI principles that align with the organization’s values and establish an infrastructure and process, as detailed below, to support these values and ensure they are not impeded by AI use. From there, leadership should ground the processes they develop with tangible use cases to pressure-test the new AI principles and customize a responsible AI governance framework suited to their specific needs. The process should include opportunities to review and iterate, as the AI systems will continue to evolve.

B. IDENTIFY VALUES

Responsible AI governance begins with an organization’s stated or established values. Values are standards or qualities that set behavioral norms and serve as an organization’s cultural cornerstone. They underlie the business functions, from operations to strategy, that an organization implements to meet its performance goals, especially during dynamic periods of growth or change.⁷ In light of rapidly advancing AI systems that have transformed and will continue to transform nearly every industry, ensuring
an organization’s values align with AI development and use is essential.

7 Brent Gleeson, “Why Core Values Matter (and How to Get Your Team Excited about Them),” Forbes, March 30, 2021; Coleman, “It’s Time to Take a Fresh Look at Your Company’s Values,” Harvard Business Review, March 28, 2022, https://hbr.org/2022/03/its-time-to-take-a-fresh-look-at-your-companys-values.

Our Badge participants found that the following values are commonly adopted by most organizations leading in this space. The discussion below (not in any particular order) details why each value is important and how each can establish a foundation for responsible AI principles and governance.⁸

Trust

An organization’s success heavily depends on trust, both internally, among employees, and externally, with consumers and stakeholders. Our Badge participants noted that without establishing trust in the organization’s commitment to responsible AI governance, the process would be extremely challenging if not impossible to achieve. When employees do not trust their organization to operate fairly and with integrity, there are numerous, significant consequences: performance suffers, retention decreases, and, as a result, customers may lose trust and are less inclined to purchase products or retain services.⁹ A recent Harvard Business Review survey¹⁰ found four core components that help establish trust with customers: data protection and cybersecurity, treating employees well, ethical business practices, and admitting mistakes quickly and honestly. The same survey revealed that business leaders highly value how an organization manages its value chains, deploys responsible AI, and reports on environmental, social, and governance (ESG) actions. These elements are key to developing an ecosystem of trust, which establishes a foundation for responsible AI governance.

Organizations should also make efforts to cultivate trust in AI systems. More trustworthy AI systems will, in turn, strengthen trust with employees and consumers. To this end, the National Institute of Standards and Technology (NIST) has identified the following characteristics that make
an AI system trustworthy¹¹: accuracy, explainability, interpretability, privacy, reliability, robustness, safety, and security resilience. To promote the development of trustworthy AI, employees should feel empowered to document these characteristics and discuss shortcomings or improvements without negative recourse and, optimally, they should expect employers to show interest in suggested improvements.

8 National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework.”
9 PwC, “Trust: The New Currency for Business,” June 2022; Smith, “Lack of Trust Can Make Workplaces Sick and Dysfunctional,” Forbes, October 24, 2019; Ladika, “Trust Has Never Been More Important,” SHRM, July 31, 2021, https://www.shrm.org/hr-today/news/all-things-work/pages/trust-has-never-been-more-important.aspx.
10 Tim Ryan, “How Business Can Build and Maintain Trust,” Harvard Business Review, February 7, 2022, https://hbr.org/2022/02/how-business-can-build-and-maintain-trust?registration=success.
11 Reva Schwartz et al., “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” National Institute of Standards and Technology, Special Publication 1270, March 2022, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf.

Culture

Culture has been defined as the ways individuals behave, including the attitudes and beliefs that inform these behaviors, which can be explicitly stated or implicitly known as the norms governing how people should work or interact.¹² Building a culture around AI that prioritizes responsible governance is critical to ensuring employees feel encouraged, safe, and even rewarded for openly discussing challenges they encounter in their day-to-day work.¹³ Further, fostering an inclusive culture will help organizations navigate the reality that not all AI risks are the same for all users and will help inform whether a given AI system is the appropriate or the best solution for a challenge.¹⁴

Salesforce’s Ethical AI Maturity Model, which is discussed in Session 3 of the Badge Program, provides that organizations establish AI strategies that
ultimately build a responsible AI culture. The model includes four stages with a bottom-up approach:

1. Stage one involves individual advocates generating small-scale strategies and earning buy-in.
2. Stage two focuses on formal teams and resources to align efforts toward an executable strategic vision.
3. Stage three establishes new teams to develop measures and a long-term mentality for sustainable practices.
4. Stage four emphasizes an optimized and innovative approach to integrate ethics throughout the entire organization.¹⁵

A common theme discussed and promoted in the Badge Program is that every employee should feel as though they are on the front lines of detecting potential harms, and should be encouraged and rewarded for promoting AI safety and, thus, building trust.

12 Denise Lee Yohn, “Company Culture Is Everyone’s Responsibility,” Harvard Business Review, February 8, 2021, https://hbr.org/2021/02/company-culture-is-everyones-responsibility.
13 Schwartz et al., “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”
14 National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework,” January 2023, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
15 Salesforce, “Salesforce Debuts AI Ethics Model: How Ethical Practices Further Responsible Artificial Intelligence,” September 2, 2021.

Badge participants noted that Key Performance Indicators (KPIs) based only on team or employee “success,” as opposed to broader metrics, may skim over the real
causes of system failures. Instead, companies should empower, encourage, and incentivize employees to contextualize mistakes with supervisors or through determined appropriate channels. Further, participants emphasized the importance of risk mitigation instead of risk avoidance.¹⁶ This is because attempting to reach a “zero risk” policy may in practice be counterproductive, and virtually unattainable, since not all incidents and potential failures can be eliminated.¹⁷

Establishing a culture of responsible AI governance requires an investment of time and resources. This is a priority that must be underscored when establishing budgets and employment evaluations, including the hiring and support of those doing this work within an organization. This work also requires the inclusion of time in the AI integration and/or AI product development process to allow for assessment and mitigation, when necessary. Further, setting up performance recognition, pay, and promotion incentives that foster trust based on the elements discussed in this section helps to ensure support for and adoption of AI risk mitigation efforts and to strengthen a responsible AI culture.¹⁸

Accountability

Accountability in an organization means that individuals understand what is expected of them, can exercise agency or authority when appropriate, and take responsibility for delivering results.¹⁹ Holding people or groups accountable for responsible AI governance is critical to ensuring proactive bias and harm mitigation. To build meaningful accountability for responsible AI governance, an organization should start by defining clear lines of authority and designating individuals tasked with enforcing responsible AI governance (“responsible AI leaders”). Traditional KPIs may need to be adjusted when they are based only on team or employee “success,” as opposed to broader metrics that take into account the reality of why an employee or team was unable to achieve their goals, such as identification of the project’s potential harms or damaging impacts.²⁰

16 Schwartz et al., “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”
17 National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework.”
18 Schwartz et al., “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”
19 Michael Bazigos, Diana Ellsworth, and Drew Goldstein, “Where Accountability Really Matters,” McKinsey & Company, April 2016.
20 Ron Carucci, “It’s Time to Overhaul Our Understanding of Accountability,” Forbes, June 4, 2022.

Responsible AI leaders must clearly communicate expectations while empowering and equipping employees with the appropriate resources to meet these expectations. If expectations are unmet, organizations can take the opportunity to promote transparent conversations to learn from mistakes and understand what various individuals and/or divisions could have done differently at any point during the AI lifecycle.²¹ This is especially important due to the evolving, and oftentimes unpredictable, nature of AI systems. Accountability should also be embedded within and across teams, such as sales, marketing, legal, DEI, and others, that are directly or indirectly involved with training, developing, deploying, using, or monitoring AI systems.²² Finally, organizations should identify rewards to reinforce behaviors that align with upholding responsible AI values and principles.

Multistakeholder Engagement

As organizations begin to integrate AI into internal processes or client-facing functions, collaborating with or undergoing review with relevant stakeholders is critical to
ensuring comprehensive decision-making on AI. In determining the appropriate representatives to participate in multistakeholder reviews, organizations should take into account a variety of potential use cases and the impact on users, especially those in underrepresented and/or marginalized communities.²³ In particular, organizations should establish processes to solicit insights from underrepresented and marginalized communities that could be impacted by the AI system initially or downstream in order to identify conceivable use cases and potential risks that may not be readily apparent otherwise. Ensuring a diversity of perspectives and efficient collaboration with both internal and external stakeholders in AI development and deployment enables employees to drive AI innovation, problem-solve, and adapt its development to quickly changing environments.²⁴

21 Ibid.
22 Schwartz et al., “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”
23 Michael Li, “To Build Less-Biased AI, Hire a More-Diverse Team,” Harvard Business Review, October 26, 2020, https://hbr.org/2020/10/to-build-less-biased-ai-hire-a-more-diverse-team; Arun Shastri, “Diverse Teams Build Better AI. Here’s Why,” Forbes, July 1, 2020; Ajao, “Diversity Within Your AI Team Can Reduce Bias,” TechTarget, December 9, 2022; Chou, “Diverse AI Teams Are Key to Reducing Bias,” VentureBeat, July 22, 2021.
24 McKinsey & Company, “Leading Off,” April 11, 2022.

In addition, identifying a perspective or role that is missing in the AI planning and design discussion is an important point that should be considered at each stage of the AI lifecycle. Given the changing use cases and continuous iterations of AI, it is equally important to always ask who else needs to be at the proverbial “table” or, as Cathy O’Neil, EqualAI senior adviser, acclaimed author, and instructor in our Badge Program, puts it: “for whom could this technology fail?” This work is crucial to ensuring responsible AI development and use.²⁵

C. DETERMINE RESPONSIBLE AI PRINCIPLES

Aligning on AI principles offers a pathway for organizations to operationalize their values by setting rules and standards to guide decision-making related to AI development and use. These principles are public statements of how an organization intends to operate in a landscape of transformative technologies. To this end, establishing AI principles is a key step to set the groundwork
for a responsible AI governance framework. Though applicability and prioritization may vary among different industries, our participants identified the following principles they consider to be critical regardless of the industry or organization’s size.

Below are key AI principles, not listed in a particular order, that our Badge community identified. Each principle is labeled with the responsible AI value(s) with which it aligns. Depending on applicability and feasibility, organizations should aim to adopt as many as possible, if not all, of the AI principles.

25 Roger Burkhardt, Nicolas Hohn, and Chris Wigley, “Leading Your Organization to Responsible AI,” QuantumBlack AI by McKinsey, May 2, 2019; Li, “To Build Less-Biased AI, Hire a More-Diverse Team.”

Operating Thesis: Bias enters at each of the human touch points of the product life cycle, but each touch point is also an opportunity to identify and eliminate harmful biases.

1. Preservation of Privacy: Assure privacy and protection of data and its subjects (Trust, Accountability)

If not already in place, organizations should establish generally applicable policies that assure the privacy and protection of data and its owners and assure they address data
issues relating to AI. With regard to privacy, given its legal foundation, a key question is whether to adhere to existing requirements, such as the European Union General Data Protection Regulation (GDPR), or to provide additional protections. It is also important for leaders to recognize that not all data owners are subjects of that data. For example, a third-party data vendor may track a consumer’s shopping habits and own the data but is not the subject of the data itself. Consequently, organizations should thoroughly understand the privacy policies of entities they are acquiring from and evaluate whether those policies align with their own organization’s responsible AI governance principles and framework. Further, these policies should outline a clear process for addressing breaches of privacy.

2. Transparency: Communicate values, principles, framework, policies, and decision-making processes on AI, both internally and externally (Trust, Multistakeholder Engagement, Culture)

An organization’s responsible AI values, principles, and governance framework should be made available and accessible to internal and external stakeholders. While these documents may look different for each audience, they should be digestible for a broad variety of stakeholders, with key terms clearly defined. Each document should also include a point of contact for individuals to reach out to with questions or concerns. Where and when possible, organizations should explain their decision-making processes around responsible AI issues to stakeholders, employees, and consumers. These explanations should be easily understood by technical and non-technical teams, using language designed to communicate with foreseeable audiences, such as consumers or end users who operate in different languages or at different reading levels. Such transparency will help build trust internally and externally, while also mitigating misconceptions or confusion around an organization’s approaches, priorities, and positions on AI.

3. Human-Centered Focus: Commit to building and using AI that benefits human life and amplifies and augments, rather than displaces, human abilities (Trust, Multistakeholder Engagement)

In some use cases, and particularly with elevated-risk use cases, AI should be used to augment, not replace, human capabilities.²⁶ At every point in the AI lifecycle, decision-makers or developers must consider how the technology will directly or indirectly impact humans. Decisions should then be made with consideration of how the technology will have a net-positive impact on humanity. In addition, employees should be encouraged to refrain from using AI to fulfill their roles or certain tasks without approval or oversight. They must also maintain due diligence to ensure AI-enabled decisions are fair and effective for various audiences, particularly underrepresented and minority groups.

26 National Institute of Standards and Technology, “Trustworthy & Responsible AI Resource Center,” https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook/Govern#Govern%201.3.

4. Respect for Individual Rights and Societal Good: Demonstrate respect for human and civil rights and build systems that promote social good (Trust, Culture)

At every point in the AI lifecycle, employees should consider the implications the model may have on human and civil rights. Organizations should also be transparent with clients and stakeholders about how AI could potentially impact human and civil rights, both positively and negatively, and how the organization plans to either leverage or mitigate those implications, respectively.

5. Open Innovation: Commit to innovation that drives openness
113、s and collective sharing(Multistakeholder Engagement,Culture)Organizations should establish norms that promote openness within the organization and collective sharing of new ideas,practices,setbacks,and breakthroughs.Collective wins should be celebrated,and failures should be viewed as learning oppo
114、rtunities for all.By building a safe and open culture,organizations can best position themselves to detect potential harms and biases early and to proactively address them before they escalate into larger issues.6.Rewarding Robustness:Build a reward system that prioritizes technical robustness(Cultu
115、re,Trust)Organizations should ensure employees understand that working toward“technical robustness,”whereby AI systems reliably perform as they are intended to,is a top priority.To this end,AI development timelines should be considered early in the development lifecycle account to ensure time is bui
116、lt in for technical robustness.Employees should also know they will not suffer negative consequences if honing technical robustness may delay the production timeline.Further,organizations should encourage employees to highlight areas of brittleness,in which an AI system fails to perform as intended,
and create a process to address these concerns. Overall, employees should be rewarded for flagging system errors, failures, or brittleness, as well as for proposing helpful resolutions or delaying a production timeline to improve robustness.

7. Continuous Iteration and Review: Establish continuous learning loops to integrate feedback from all stakeholders, users, and those impacted downstream (Multistakeholder Engagement, Culture)

Internal and external stakeholders should have an opportunity to provide feedback on an organization's principles and responsible AI governance framework. There should be a clear process through which they can engage with the organization to deliver feedback, with the option to remain anonymous. Such feedback loops allow the organization to continuously learn and iterate on its principles and framework to meet the evolving needs and realities of responsible AI governance.

8. Enlist Employees' Involvement: Enlist employees as your front line to promote responsible AI and to ensure they feel safe and empowered to flag potential concerns (Culture, Multistakeholder Engagement)

Employees play a critical role in ensuring responsible AI governance, as they are often at the front lines of developing, deploying, using, and monitoring AI systems. Organizations should establish mechanisms for employees to alert leadership to potential concerns they have either witnessed or experienced, with the option to remain anonymous. Further, leadership should initiate regular conversations with employees about responsible AI and why it is important for the organization, and empower employees to proactively act on responsible AI principles.

9. Prioritize Fairness Through Accountability: Promote fair and unbiased AI systems through evaluations of data, algorithms, and humans (Trust, Accountability)

Holding employees and leaders accountable for fulfilling their duties for responsible AI governance is key to ensuring sustainability. Organizations should define and set metrics that align with their values and goals for mitigating bias and promoting fairness. This includes addressing biases that may emerge at any point in the AI lifecycle, and ensuring the technology does not discriminate against individuals or groups based on any protected, or non-protected, characteristics. Leaders should also establish an accountability process to identify the cause of potential risks, foster problem-solving, and learn from mistakes without undue blame or scapegoating.

10. Human-in-the-Loop: Incorporate human input and oversight into all stages of AI decision-making (Accountability, Trust)

When using an AI tool, there should be a human in the loop, such as a team member or senior executive, who takes ultimate responsibility for decision-making informed by AI. This responsibility can shift as an AI system evolves in its lifecycle, with different individuals taking responsibility for their specific duties. The individuals in the loop should also have the authority to intervene at any point when stakeholder feedback
 or test results indicate that an AI system will result in undesirable outcomes or behaviors. Holding humans accountable for AI decision-making will establish certainty with internal and external stakeholders that humans bear ultimate responsibility for an AI system's output, and in turn, will build trust.

11. Professional Development: Invest in education and training to upskill and/or reskill employees (Trust, Culture)

Organizations should support their employees through professional development opportunities that upskill or reskill employees, with the purpose of working alongside, rather than resisting, fearing, or resenting, AI technologies. Leadership should demonstrate that professional development is a top priority and encourage and incentivize employees to take advantage of such programs. Further, organizations should implement responsible AI and bias training to increase awareness of their own biases that may emerge when building, testing, using, or monitoring AI systems. These trainings should be specific and applicable to different teams, and organizations should offer rewards for their successful completion.

D. ESTABLISH ACCOUNTABILITY AND CLEAR LINES OF RESPONSIBILITY

Once an organization establishes its responsible AI values and principles, the next step is to design an AI governance framework to operationalize efforts across the enterprise. EqualAI supports the NIST Artificial Intelligence Risk Management Framework (NIST AI RMF) to inform and guide an organization in thinking through and laying the foundation for AI governance.27 Building on the NIST AI RMF, Badge participants pinpointed the need to identify and task key individuals and groups (noted as "tiers" below) with the duties and authority to execute a responsible AI governance framework. The structure below is applicable to any organization regardless of size or industry, and can be customized to an organization's specific needs.

Tier 1: Designate an Authority Who Owns AI Governance

An organization should start by designating one senior executive who is ultimately responsible for AI governance. This individual could reside in the C-suite (e.g., CIO, CAIO, CLO, COO, or CTO); if not, they must occupy a position of significant power and influence within the organization. This individual should have the authority to assign or move resources to support a responsible AI mandate. Efforts to initiate a responsible AI governance framework could also emerge at any level within an organization. For these efforts to be sustained and successful, it is critical to have the support and leadership of the C-suite and board of directors, including this tier 1 authority figure.28 Importantly, the tier 1 authority figure must embody the leadership skills needed to make difficult decisions that may be unpopular at the time, but necessary to achieve the organization's responsible AI mandate.

27 National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework."
28 Multiple speakers and participants in the Badge Program touched
 on the necessity of leadership buy-in from either the C-suite, the board of directors, or both for sustained and effective responsible AI governance efforts.

Tier 2: Institute a Steering or Oversight Committee

To support tier 1 leadership, an organization should appoint and launch a steering committee. This group's mandate is to support the implementation of the responsible AI governance framework and principles by developing an enterprise-wide AI strategy, establishing accountability structures, and managing AI concerns identified by internal and external stakeholders. The steering committee should have a chairperson who directly reports to and supports the tier 1 authority figure. Members of the steering committee should meet routinely, drive key decisions on AI, and be drawn from representatives across the enterprise, including, but not limited to (and in no particular order): human resources; customer representatives; diversity, equity, and inclusion (DEI) leadership; AI development or other technical teams; legal/general counsel; compliance; supply chain; and others who touch on AI production, acquisition, or use, oversee its safety and compliance, or manage employees or functions where AI is operating to serve or support critical functions. It is also important to note that steering committee members should not only include individuals with technical skills but, to do this work effectively and bring in the necessary perspectives and expertise, should also include those with social science skills and backgrounds, such as human rights or philosophy.

Members of the steering committee should be tasked with the following duties:

Inform the chairperson and tier 1 authority figure on sectoral and enterprise-wide AI concerns.

Develop an enterprise-wide strategy to institute the responsible AI governance framework that includes, but is not limited to, initiating regular programming or opportunities to spread awareness (introduction to the framework, workshops to unpack specific elements, etc.).

Collaborate with the relevant teams to ensure the responsible AI governance framework does not conflict with other internal frameworks (e.g., cybersecurity, privacy), but rather incorporates and/or complements these efforts.

Document the committee's practical application of the organization's AI principles and framework to help establish precedent with which to understand past decisions and to serve as a guide and a baseline for comparison with future decisions and assessments.

Tier 3: Appoint Responsible AI Ambassadors

Responsible AI ambassadors are representatives across the enterprise who, in addition to steering committee members, can understand, explain, and promote the framework. Ambassadors should include representatives from the same teams as steering committee members (see above). This group will be the largest of the three tiers. (To clarify: all steering committee members are ambassadors, but not all ambassadors are steering committee members.) Ambassadors play a key role in building a culture that prioritizes the responsible development and use of AI. They serve as a resource for employees to better understand how to operationalize the framework and execute initiatives stemming from the steering committee, which will help scale efforts for responsible AI governance across the enterprise. To this end, ambassadors are tasked with the following duties:

Support steering committee members in promoting the responsible AI governance framework and principles within their teams by initiating regular programming or opportunities to share awareness (introduction to the framework, workshops to unpack specific elements, etc.).

Provide responsible AI training opportunities and professional development programs for their team members.

Set up a "flag system," a formal mechanism for employees to notify ambassadors about issues they would like additional training on or about concerns they want to flag to the steering committee.

Serve as a resource for colleagues who have questions, concerns, or feedback on the framework or principles.

Determine agreed-upon ethical boundaries of what will be built or deployed.

After determining the principles, and upon each significant development in the responsible AI governance framework and implementation of the leadership tiers outlined above, an organization's leaders should consider hosting a town hall or similar forum to communicate the structure and solicit feedback from employees. Such a forum is important to reassure employees that they are seen and heard on these issues, which will in turn build trust and support positive employee morale.

E. USE TOOLS TO SUPPORT AI GOVERNANCE

Once values, principles, and a governance framework have been established, organizations should use the following tools, metrics, and monitoring practices to execute and evaluate the success and sustainability of these efforts. The same practices could also be applied to the design and development of AI systems. Throughout the EqualAI Badge Program and Summit, our participants discussed and identified the following methods to best position organizations to effectively develop AI systems and launch responsible AI governance efforts.

Documentation

An organization should standardize how responsible AI management processes are implemented and mandate documentation practices at each stage to ensure consistency and accountability.29 Documentation encourages knowledge sharing, which empowers employees to understand how processes work and what a final project or product should look like.30 The documentation process is especially important with AI systems, given that they will touch multiple individuals, entities, and enterprises once deployed. For powerful AI models, it is crucial that these documentation practices are digestible to both technical and non-technical audiences. Clear and consistent documentation will help inform future users, support effective decision-making, and prepare organizations in heavily regulated industries for audits. One mechanism to facilitate this effort is through
 the EqualAI Algorithmic Impact Assessment (AIA) tool, which helps organizations check if their AI systems align with the NIST AI RMF.31

Defined Process

An effective corporate process is one where there is a set of defined activities and tasks that, once completed, will accomplish an organizational goal.32
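The example process steps described later in this section (identifying the type of AI system, determining intended use cases, pinpointing applicable principles, reviewing flagged concerns, tracking decisions, and analyzing outcomes) can be sketched as a simple ordered checklist whose completion marks the organizational goal as met. This is an illustrative sketch only, not an EqualAI or NIST artifact; the class and step names are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a defined process as an ordered set of tasks whose
# completion accomplishes the organizational goal. Step names are drawn from
# the example review process described in this section.

@dataclass
class ProcessStep:
    name: str
    completed: bool = False
    notes: str = ""  # decisions made and why, for the documentation record

@dataclass
class AIReviewProcess:
    system_name: str
    steps: list = field(default_factory=lambda: [
        ProcessStep("Identify the type of AI system in use or expected to be in use"),
        ProcessStep("Determine intended use case(s)"),
        ProcessStep("Pinpoint the applicable AI principles"),
        ProcessStep("Review ethical dilemmas or concerns employees have flagged"),
        ProcessStep("Track decisions made about the system and why"),
        ProcessStep("Analyze whether outcomes align with responsible AI goals"),
    ])

    def complete(self, step_name: str, notes: str = "") -> None:
        # Mark a named step done and record the rationale alongside it.
        for step in self.steps:
            if step.name == step_name:
                step.completed = True
                step.notes = notes
                return
        raise ValueError(f"Unknown step: {step_name}")

    def goal_accomplished(self) -> bool:
        # The process accomplishes its goal only when every defined task is done.
        return all(step.completed for step in self.steps)

process = AIReviewProcess("resume-screening model")
process.complete("Determine intended use case(s)",
                 "Initial screening only; a human makes the final call")
print(process.goal_accomplished())  # False until every step is completed
```

A real process would live in workflow or ticketing tooling rather than code; the point is simply that a defined process enumerates its tasks up front, records why each decision was made, and makes completion checkable.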
Establishing clear processes allows organizations to bolster employee esteem and provide a sense of stability and control.33 This is particularly important in the AI space, where rapidly advancing AI technologies are heavily disrupting various industries. Organizational processes are only as useful as they are clear to and accepted by key stakeholders. Research indicates that employees will support a manager's decision, even if they disagree, if they believe the process the manager used to make the decision was fair. The ever-progressing nature of AI technologies will inevitably yield competing opinions among employees about an organization's decisions on AI development and use. As such, establishing processes that employees believe are fair and inclusive will benefit the cohesiveness of the community and reinforce a supportive culture. Further, in many regulated industries, organizations may be subject to audits and need to demonstrate structure, certainty, and fairness as part of their decision-making processes. While there is currently no official industry-wide auditing process for AI systems, organizations should take proactive measures in creating internal processes to best position themselves for future scrutiny. All processes should be documented and digestible for all potential audiences, including technical and non-technical readers.

29 Schwartz et al., "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence."
30 Atlassian, "The Importance of Documentation (Because It's Way More Than a Formality)," https:/
31 EqualAI, "EqualAI Algorithmic Impact Assessment (AIA)," https://www.equalai.org/aia/.
32 Mary K. Pratt et al., "Business Process," TechTarget, https:/
33 Jacob Adler, "Success and Why the Process Matters: Joel Brockner," Wharton Work/Life Integration Project, April 22, 2016, http://worklife.wharton.upenn.edu/2016/04/success-and-why-the-process-matters-joel-brockner/.

If an organization is in the business of developing or using AI systems, it must create a process to ensure consistent understanding and application of expectations for the creation and/or adoption of AI systems. This includes prioritizing policies and resources based on the assessed risk level and potential impact of an AI system. Such a process could include the following: identifying the
 type of AI system in use or expected to be in use, determining intended use case(s), pinpointing the applicable AI principles, reviewing the ethical dilemmas or concerns employees have flagged, tracking decisions made about the system and why, and analyzing whether the AI system produces outcomes that align with the organization's responsible AI goals. These are just a few examples of steps organizations could string together to create a consistent process for a product's lifecycle. Ultimately, organizations should design processes to fit their needs and goals, and prioritize identifying clearly defined authorities and responsibilities that will help them reach their responsible AI goals.

Organizations should also put forth clear processes and documentation practices that assess the applicability and success of the responsible AI governance framework and principles. Leadership should provide clear expectations for each step of these processes, and identify exactly who is responsible for each step, as well as for each overall process. Further, organizations should establish clear processes for escalation in the event issues arise that require additional oversight or scrutiny.34 Employees should have the option to remain anonymous if they decide to escalate a concern, and there should be policies in place to prevent retaliation from other employees if an escalation results in a hostile environment.

34 Aaron De Smet, Gerald Lackey, and Leigh M. Weiss, "Untangling Your Organization's Decision Making," McKinsey & Company, June 21, 2017, https:/

Multistakeholder Review

In designing and building new AI systems, it is important to include multidisciplinary reviews, meaning the broadest variety of perspectives that can help add insight on different potential use cases, biases, and questions that could arise based on views that were
 not previously explored or integrated. Ideally, these diverse perspectives should be included at every point of the AI development lifecycle to prevent and mitigate potential biases and harms. An effective process includes internal and external stakeholders who stem from a variety of backgrounds, vantage points, and expertise.

Additionally, as an organization develops its responsible AI governance framework, it should proactively seek out input from diverse internal and external stakeholders. These perspectives will help shape a comprehensive and robust framework that best suits the needs of an organization and ensures the broadest variety of perspectives is taken into account. This process is helpful for numerous goals, including best positioning the framework for employee buy-in, as well as accounting for new use cases and end users who were not previously considered. Additionally, after a framework is introduced, there should be routine reviews of how the framework is performing, as detailed further below.

Metrics, Monitoring, and Reevaluation

There is not currently a consensus on standard metrics for evaluating or monitoring AI systems for risk and harm. One challenge in identifying such metrics is that a one-size-fits-all approach may be oversimplified, lack critical nuance, and fail to take into account the differences in affected groups and contexts.35 One possible tool organizations can use to help measure and monitor an AI system's risk is the EqualAI AIA tool, which includes a bias and cost-benefit evaluation section that prompts developers to think about potential risks the technology could pose to different populations.36

Additionally, there should be a stated cadence for retesting and documenting AI systems, as determined by the product ownership team, which may vary based on the level of risk, use, and other factors. While there is no universally accepted risk-based standard, the NIST AI RMF offers an effective definition of risk: the composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event. The impacts, or consequences, of AI systems can be positive, negative, or both, and can result in opportunities or threats. When considering the negative impact of a potential event, risk can be a function of 1) the negative impact, or magnitude of harm, that would arise if the circumstance or event occurs; and 2) the likelihood of occurrence.37 Using this definition, organizations can determine the frequency and rigor of monitoring AI systems as appropriate.

35 National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework."
36 EqualAI, "EqualAI Algorithmic Impact Assessment (AIA)."
37
 National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework."

Responsible AI Governance Framework: Metrics for Success

Identifying metrics to evaluate the success of a governance framework will allow an organization to understand which principles are resonating most with employees and to identify areas that need improvement. Such metrics could include:

The number of ambassadors across the enterprise and the diversity of participation (e.g., percentage of business units involved, demographic composition of the total governance team, business-unit and demographic diversity of ambassadors, etc.).

The number of enrollments in and completions of the responsible AI training course established by ambassadors.

The number of potential risks identified and mitigated.

The number of alterations to the principles or framework made due to questions posed by ambassadors.

The number of contacts from other stakeholders (internal/external) to ambassadors, and the number of AI-related concerns escalated.

Once metrics are established, an organization should institute routine and thorough monitoring practices for its governance framework. This will ensure sustained success and up-to-date principles and governance practices. Further, the effectiveness of the framework should be evaluated at every steering committee meeting. Organizations can use the metrics above to review progress and determine how frequently the committee should meet to ensure their organization's responsible AI goals are met.

FURTHER DISCUSSION AND OPEN QUESTIONS

In addressing organizational values and AI principles and developing a responsible AI governance framework, there were a few areas of deliberation, delineated below, that our participants identified and discussed without reaching a final resolution.

Deliberative Discussions

Centralized vs. Decentralized Steering Committee

While there was universal consensus that an individual should bear ultimate accountability for responsible AI governance (the tier 1 authority figure), there was no consensus on whether the steering committee should have a chairperson who is solely responsible for leading the committee
 or one that distributes responsibilities among all steering committee members.

In a centralized steering committee structure, the chairperson leads the steering committee with representatives from across the organization and engages in periodic communication with key external stakeholders. The chairperson is responsible for, and has the authority to oversee, the responsible AI governance framework and principles. Other steering committee members perform their duties in a part-time capacity. While these individuals are held accountable for their duties, the chairperson bears the ultimate responsibility for fulfilling the mandate of the committee. On the one hand, there is a need for a central steering committee figure to ensure that progress is consistent, efforts are not sidelined due to competing work requirements, and there is a clear line of authority when members do not agree or encounter conflict. On the other hand, if full responsibility falls on a single individual, other members may not be as diligent about fulfilling their duties, which could eventually overwhelm the chairperson.

Alternatively, in a decentralized structure, the steering committee still has a chairperson who works with representatives across the organization and key external stakeholders. However, all members and the chairperson are responsible for fulfilling the steering committee mandate and are held equally accountable for implementing the responsible AI governance framework. Distributing responsibility could foster more sustainable progress. However, there is a risk of deadlock if conflict arises in the absence of an authority figure for resolution, or that members will inevitably face competing work priorities that cause them to delay or fail to fulfill their responsibilities entirely. Ultimately, the steering committee structure will depend on a variety of factors, such as the organization's size, industry, and culture.

Steering Committee Membership

There was debate on the trade-off between involving as many teams and stakeholders as possible and the ability of a steering committee to align and move quickly on decisions. A large and diverse steering committee could ensure the greatest chance of encapsulating diverse perspectives and reduce the risk of missing crucial viewpoints. This would help mitigate unconscious biases and help members learn from each other about the different factors to consider on any given AI issue. However, more steering committee members will inevitably slow the committee's ability to align on decisions and may prove too cumbersome to act as quickly as needed to keep pace with AI development. Conversely, a smaller steering committee brings agility, as fewer members are needed to align on decision-making. The risk is losing potentially valuable perspectives of stakeholders who are not present that could better inform the committee's decision-making for responsible AI. Other options for achieving a diverse and fresh set of views could be to have rotating seats on the committee (e.g., 18-month terms) for representatives from certain functions (e.g., legal or engineering) and to not let tenure/seniority be a barrier to inclusion. The size of the committee may ultimately depend on the culture of decision-making at an organization. For example, organizations that value debate and collaboration may opt for a larger steering committee to capture more perspectives, while organizations with a top-down culture may thrive with a smaller committee where a few key individuals drive decisions. Additionally, participants deliberated which teams should make up the core of a steering committee
 but did not reach consensus on what this list should look like. Some teams that were discussed include AI development, legal/general counsel, DEI, human resources, and compliance.

Incentives for Employees

The responsible AI governance framework relies heavily on the involvement of committed and passionate employees. The participants discussed the need to develop incentives that reward employees for successfully fulfilling their roles as ambassadors or steering committee members, but were unable to finalize exactly what these incentives should look like. Some ideas that were discussed include an additional annual
bonus or adding performance-review metrics used to evaluate employees for promotions.

Open Questions

In addition to the discussions above, there are many issues that warrant further scrutiny.

1. Ownership and Advocacy

The selection process to identify specific steering committee members or ambassadors deserves significant deliberation. Considerations include: Should these individuals be selected based on nominations from their colleagues? Should there be an application process where candidates are interviewed and must provide references? Should they be appointed by the tier 1 authority figure? Additionally, the number of steering committee members and ambassadors remains an open question. Should there be more ambassadors than steering committee members, or vice versa? What are the pros and cons of each option? Further, what should be the minimum time commitment for a steering committee member or ambassador?

Another important issue is how to prevent symbolic or performative participation (i.e., someone is part of the steering committee but fails to fulfill their duties) without raising the barrier to entry or deterring employees from getting involved. There is a fine line between a simple pathway to involvement and ensuring that these individuals are committed to doing the work needed to meet the organization's responsible AI goals.

2. Tools, Processes, Metrics, and Monitoring

Low- and high-risk AI use cases warrant varying levels of scrutiny, but that raises the all-important question of what constitutes a low- or high-risk use case. For example, obvious low-risk and high-risk use cases were brought up, like a TV-show-recommending algorithm versus an AI-powered diagnostic healthcare tool, but many use cases fall in a gray area and need to be assigned a risk level for review. In addition, an organization's risk tolerance must be established. NIST defines this as an organization's readiness to bear risk in order to achieve its objectives.38 Risk tolerance would vary based on an organization's industry, sector, and use case, but further discussion on the precise factors that could shape an organization's risk tolerance was not touched upon. Further, an organization should determine an amendment process for the responsible AI governance framework. These issues are just some of the many pressing matters the EqualAI community plans to scrutinize and unpack at upcoming convenings.

38 National Institute of Standards and
 Technology, "Artificial Intelligence Risk Management Framework."

CONCLUSION

This white paper attempts to capture a unique and unprecedented event, where a broad cross section of industry leaders congregated at the EqualAI Responsible AI Summit to align on best practices for responsible AI governance. Such convenings, set against a backdrop of uncertainty about national and international AI standards, are of the utmost importance, as organizations cannot afford to wait for certainty to emerge from the legal landscape. Savvy companies, like those listed as co-signatories and participants in this discussion, are taking the lead in setting standards and norms to govern AI responsibly.

We hope the findings in this white paper are of use to a variety of audiences. For starters, this paper can serve as a beacon for industry leaders who are curious about how other organizations are navigating AI governance intentionally and responsibly. Policymakers can benefit from understanding how responsible industry players are thinking about the risks stemming from AI and how to manage them appropriately. The general public can get a glimpse into how the companies providing their AI-enabled products and services are, or should be, developing and deploying responsible AI.

We, at EqualAI, feel honored and privileged to be working closely with industry leaders and policymakers who are dedicated to building and using AI responsibly. We look forward to expanding the EqualAI community and building upon the foundational work in this white paper.

EqualAI is a nonprofit focused on reducing bias and other harms in the development and use of artificial intelligence (AI) through promotion of responsible AI practices.

EqualAI | ai_equal