Amdocs (2025): Five Pillars of Ethical AI. A playbook for compliant and responsible use of AI in financial services organizations (English, 13 pages, PDF).


Five Pillars of Ethical AI
A playbook for compliant and responsible use of AI in financial services organizations.

Table of Contents
- Governing Ethical Use of GenAI (page 3)
- The AI Center of Excellence (page 4)
- Five Pillars of Ethical AI (page 5)
  1. Compliance and Regulatory Adherence (page 5)
  2. Policies and Standards (page 6)
  3. Risk Management (page 9)
  4. Roles and Responsibilities (page 10)
  5. Transparency and Explainability (page 11)
- How Amdocs Can Help (page 12)
- Be a Trailblazer in Ethical AI (page 12)

Governing Ethical Use of GenAI

As generative artificial intelligence (GenAI) increasingly pervades the technology and business landscape, governing ethical use across the enterprise is a critical consideration. Dedicated regulatory measures are only just beginning to emerge, so industry is currently largely responsible for defining responsible, compliant, and safe use of the technology. Wherever your business is on that journey, this playbook provides a framework to ensure you are aware of, and able to deal with, potential pitfalls.

It centers on five pillars of ethical AI: compliance and regulatory adherence, policies and standards, risk management, roles and responsibilities, and transparency and explainability.

Why Prioritize Ethical AI?
- Reduce exposure to risk: When AI models go wrong, legal, operational, and reputational damage can result. Minimize this risk with robust policies and procedures.
- Demonstrate industry responsibility: If industry fails to set a precedent for ethical use of AI, authorities are more likely to take a heavy-handed approach with future regulations.
- Get ahead of regulations: AI policies and standards lay foundations for future compliance, saving time, money, and inconvenience when legal requirements emerge.
- Nurture responsible use from the ground up: Build enterprise-wide awareness and understanding, helping employees embrace ethical AI now to avoid problems later.
- Be at the forefront of ethical AI: Earn competitive advantage as a leader in responsible use of AI, fostering trust with customers and other stakeholders.

Why AI Needs Tailored Governance

Many financial services enterprises consider AI the next step in data analytics maturity. Consequently, there's a common assumption that existing data governance can extend to AI. Yet this approach doesn't offer the necessary scale and precision. AI brings new risks surrounding potential misuse of data, as well as issues linked to the indeterministic nature of the large language models (LLMs) on which GenAI is based. For instance, LLMs may generate different responses to the same input due to their probabilistic sampling methods and the inherent randomness of the decoding process.

That's why AI governance requires a tailored approach and a high level of scrutiny, particularly in the financial services sector. AI has the power to derive enormous business value from data, but without careful governance it can cause great harm.
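The nondeterminism described above comes from how decoders sample from a probability distribution over candidate tokens. A minimal sketch of standard temperature-scaled softmax sampling (the logits are toy values; this is not any specific vendor's API):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from raw logits via temperature-scaled softmax.

    Higher temperature flattens the distribution (more varied outputs);
    temperature near zero approaches greedy argmax decoding.
    """
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                              # source of run-to-run variation
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

Because the draw depends on the random number `r`, repeated calls with the same logits can return different tokens. Lowering the temperature makes outputs more repeatable, though in production LLM serving it does not eliminate every source of variation.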

The AI Center of Excellence

Raise the bar on AI governance with an AI CoE that cultivates best practices across the enterprise.

When adopting rapidly evolving technologies there can be a fine line between opportunity and risk, between confidence and loss of trust, and between success and failure. Amdocs has a long history of helping organizations navigate new technology pathways and transformations. To maximize success in AI adoption, we advocate an AI Center of Excellence (CoE). The AI CoE establishes and oversees implementation of standards, tooling, best practices, policies, procedures, monitoring, education, and more. Together these measures foster ethical use of the technology.

In this playbook we detail five pillars of ethical AI which provide a framework for the CoE. We illustrate potential pitfalls and outline measures the CoE can take to mitigate risks. Our goal is to help you master responsible use of AI at scale.

Figure 1: The AI Center of Excellence is founded on the Five Pillars of Ethical AI: 1. Compliance and Regulatory Adherence; 2. Policies and Standards; 3. Risk Management; 4. Roles and Responsibilities; 5. Transparency and Explainability.

1. Compliance and Regulatory Adherence

Emerging AI Regulations

The AI Act (Regulation (EU) 2024/1689)[1], the world's first dedicated legal framework for AI, came into force in the EU on 1 August 2024. Known as the AI Act, it lays down rules to "provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI". The Act identifies four levels of risk for AI systems: unacceptable, high, limited, and minimal (see Figure 2).

Potential Pitfall: Chatbots Breaching Data Rules
Regulatory obligations should be carefully considered throughout the development of GenAI-powered tools. For instance, chatbots' use of personal data must comply with rules surrounding consent and disclosure. The CoE can reduce the risk of non-compliance by identifying existing regulations that relate to GenAI use cases, as well as monitoring emerging regulatory trends and requirements.

Understand Existing Regulations, Monitor Emerging Requirements

The regulatory landscape for AI is less mature than that for areas such as the use and storage of personal data. Nevertheless, dedicated requirements are beginning to emerge. Monitoring regulatory discussions and developments at a global and cross-industry level is vital so the CoE can keep one step ahead of future requirements. Organizations must also adhere to existing standards and guidelines that are relevant to the implementation of AI systems. Data protection regulations are a case in point. Regulations from bodies such as the Financial Industry Regulatory Authority (FINRA) are applicable to some aspects of AI use, as is the EU's General Data Protection Regulation (GDPR).

[1] The EU's AI Act

Figure 2: The EU's AI Act sets out four levels of risk for AI systems and practices (unacceptable, high, limited, and minimal). Companies need to determine the risk level of AI-enabled applications and comply with relevant obligations. Credit scoring is specifically called out as a high-risk application under the AI Act; other financial services applications (e.g. fraud detection and insurance underwriting) also represent a high level of risk. AI practices that pose unacceptable risks will be prohibited. For certain limited-risk applications, such as AI-enabled customer interactions, companies will face transparency obligations, e.g. making users aware that they are interacting with a machine.
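The tier-to-obligation mapping in Figure 2 can be captured as a simple lookup that a CoE tooling team might maintain. This is a hypothetical sketch: the use-case names and obligation summaries are illustrative paraphrases, and real classification requires legal review of the AI Act itself, not a dictionary.

```python
# Illustrative only: tiers follow Figure 2's examples; obligation text is a paraphrase.
AI_ACT_RISK_TIERS = {
    "credit_scoring": "high",            # explicitly named high-risk in the AI Act
    "fraud_detection": "high",
    "insurance_underwriting": "high",
    "customer_chatbot": "limited",       # AI-enabled customer interaction
    "spam_filtering": "minimal",
}

OBLIGATIONS = {
    "unacceptable": "Prohibited practice. Do not deploy.",
    "high": "Conformity assessment, risk management, human oversight, logging.",
    "limited": "Transparency obligations, e.g. disclose users are interacting with a machine.",
    "minimal": "No mandatory obligations; voluntary codes of conduct.",
}

def obligations_for(use_case: str) -> str:
    """Look up the illustrative obligations for a known use case."""
    tier = AI_ACT_RISK_TIERS.get(use_case)
    return OBLIGATIONS.get(tier, "Escalate to the CoE for classification.")
```

Unknown use cases fall through to an explicit escalation path, which mirrors the playbook's point that the CoE, not individual teams, should own risk classification.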

2. Policies and Standards

Potential Pitfall: Unfair and Inconsistent Credit Decisioning
GenAI can significantly reduce loan processing times, but the risk of unfair or inconsistent approvals and rejections must be carefully managed. Policies and standards implemented by the CoE play a vital role in ensuring models are trained to deliver consistent results within set parameters.

Identify Policy Gaps and Conflicts, Then Build a Bespoke Framework

A core task of the CoE is to build a comprehensive framework of policies and standards which meets the organization's specific requirements. This is especially important for enterprises using AI in high-risk applications such as loan processing. Policies can encompass everything from broad principles of ethical AI to very specific matters such as the avoidance of bias. They represent a high-level indication of intent. Standards are more practical, setting out rules or defining what should be done to satisfy the policies. The following tables summarize a range of policies and standards that might be considered by the AI CoE. They can be configured or combined as necessary.

It's a good idea for the CoE to revisit policies and standards on a regular basis. As GenAI becomes more sophisticated, the way it's used in the business will evolve. Dedicated regulatory requirements and industry standards will also continue to emerge. In such a dynamic space, the framework must be rigid enough to facilitate consistent good practice but adaptive enough to encompass change. While the AI Act will only apply in the EU, it may be indicative of what's to come in other parts of the world.

Industry-level Guidance and Regulations

Some authorities are signalling their intent to address specific risks of AI or encouraging organizations to take a proactive stance on risk management. In May 2024 the Australian Prudential Regulation Authority (APRA) indicated it will not add to its rule book at present but will use its "strong supervision approach to stay close to entities as they innovate and consider management of AI risks."[2] Regulatory authorities for other industries and markets will likely take a similar stance. If industry does not take steps to leverage AI in an ethical way, authorities may take a harder line on regulations. Some proposed legislative frameworks suggest that companies should only be permitted to work on high-risk applications of AI if they have a government licence to do so.[3]

[2] Therese McCarthy Hockey's remarks to AFIA Risk Summit 2024
[3] Senators Want ChatGPT-Level AI to Require a Government License

Policies to mandate ethical AI

- AI ethics. Purpose: guide the ethical use of AI technologies. Key elements: principles of fairness, accountability, transparency, and non-discrimination; guidelines for ethical decision-making and handling ethical dilemmas; commitment to human-centric AI and human oversight.
- AI information security. Purpose: protect AI systems and data from security threats. Key elements: security measures for data and AI models; incident response and recovery plans; regular security assessments and updates.
- AI bias and fairness. Purpose: identify, mitigate, and monitor biases in AI systems. Key elements: bias detection methods and tools; strategies for mitigating identified biases; regular audits and fairness assessments.
- Data privacy and protection. Purpose: ensure that data used in AI systems is handled responsibly and complies with privacy laws. Key elements: data collection, storage, and processing practices; user consent and data subject rights; data anonymization and encryption methods; compliance with relevant data regulations.
- AI risk management. Purpose: identify, assess, and mitigate risks associated with AI systems. Key elements: risk assessment frameworks; risk mitigation strategies and controls; continuous monitoring and review of AI risks.
- AI compliance. Purpose: ensure that AI systems comply with relevant laws and regulations. Key elements: overview of applicable regulations and standards; compliance monitoring and reporting mechanisms; training and awareness programs for staff.

Table 1: Policies for ethical AI can cover general use and specific matters.

Standards to support ethical AI

- AI model development and deployment. Purpose: guide the ethical use of AI technologies. Key elements: principles of fairness, accountability, transparency, and non-discrimination; guidelines for ethical decision-making and handling ethical dilemmas; commitment to human-centric AI and human oversight.
- AI transparency and explainability. Purpose: ensure that AI decisions and processes are transparent and understandable. Key elements: documentation and communication of AI models' functionality; methods for making AI decisions explainable to non-technical stakeholders; transparency standards for AI outputs and impacts.
- AI governance and accountability. Purpose: protect AI systems and data from security threats. Key elements: security measures for data and AI models; incident response and recovery plans; regular security assessments and updates.
- AI decommissioning standards. Purpose: guide the ethical use of AI technologies. Key elements: principles of fairness, accountability, transparency, and non-discrimination; guidelines for ethical decision-making and handling ethical dilemmas; commitment to human-centric AI and human oversight.

Table 2: Standards outline what needs to happen to ensure AI policies are upheld.

3. Risk Management

Define and Classify Risks, Then Mitigate Accordingly

AI presents direct and indirect risk scenarios which may change over time. So risk management needs to be continually monitored and validated by the AI CoE to minimize harm to customers, wider stakeholders, and the organization itself. Clearly defining operational risk, legal risk, and risk of reputational damage is a good place to start. Each category can then be assigned robust strategies for risk assessment, monitoring, and mitigation. For instance, risk mitigation might involve the implementation of controls to support unbiased and reliable outcomes. This could be further enhanced with fallback mechanisms or manual override options if the AI system fails or behaves unexpectedly. Risk reduction should be designed into all applications as standard, with employees educated about its importance. These measures will give the enterprise a head start when dedicated regulatory measures emerge.

Potential Pitfall: Data Biases Skew Insurance Estimations
AI-driven models for insurance underwriting make it quicker to assess risk levels and determine premiums. However, the historical claims data that the models are based on may contain embedded biases which lead to unfair or inaccurate estimations. The CoE is responsible for identifying prejudice and taking steps to avoid it.
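One simple, widely used check for the kind of embedded bias described in this pitfall is the demographic parity gap: compare favorable-outcome rates across groups. A minimal sketch with made-up decision data; a real fairness audit needs multiple metrics, statistical care, and domain review, not a single number.

```python
def approval_rate(outcomes):
    """Fraction of favorable (True) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in favorable-outcome rate between any two groups.

    0.0 means identical rates across groups; larger values flag
    potential bias for the CoE to investigate.
    """
    rates = [approval_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model decisions (True = premium quoted at the standard rate).
decisions = {
    "group_a": [True, True, True, False],    # 75% favorable
    "group_b": [True, False, False, False],  # 25% favorable
}
gap = demographic_parity_gap(decisions)      # 0.5: a large gap worth investigating
```

A gap this size does not prove discrimination by itself, but it is exactly the kind of signal regular audits and fairness assessments (Table 1, "AI bias and fairness") are meant to surface.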

4. Roles and Responsibilities

"There must be a 'human in the loop'. This doesn't necessarily mean human involvement in AI decisions; for example, stopping a potentially fraudulent transaction requires fast action. Instead, it is about someone being accountable for the algorithm, its sound operation, and the outcomes it delivers."
Therese McCarthy Hockey, APRA

Potential Pitfall: False Positives in Fraud Detection
GenAI is increasingly used in complex fraud detection scenarios such as anti-money laundering. However, LLMs' propensity for hallucination brings the risk of false positives, e.g. suggesting innocent parties are involved in fraudulent activity. This underlines the need for the CoE to make sure there's always a human in the loop.

Ensure Human Accountability is Clearly Assigned

Responsibility for enterprise AI system outcomes begins and ends with people. Accountability is everything, so upholding effective AI governance requires full clarity on the roles and responsibilities of teams and individuals. This applies within the CoE and across the wider organization. A dedicated AI governance committee focused on accountability is essential. It should be a cross-disciplinary group holding the necessary expertise and authority to monitor and manage ethical use. This allows business leaders and technology experts to work collaboratively, with specific areas of responsibility clearly assigned to avoid any ambiguity. Together, these individuals take care of the selection, implementation, and monitoring of policies across everything from risk management to education.
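The accountability principle above can be supported in code by routing uncertain or high-impact model outputs to a named reviewer rather than auto-actioning them, which is one common way to keep a human in the loop for fraud-style flags. A hypothetical sketch; the threshold value and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    case_id: str
    label: str         # e.g. "suspicious" from a fraud-detection model
    confidence: float  # model-reported probability for the label

@dataclass
class HumanReviewQueue:
    """Flags below the auto-action threshold wait for an accountable reviewer."""
    threshold: float = 0.95
    pending: list = field(default_factory=list)

    def route(self, d: Decision) -> str:
        if d.label == "suspicious" and d.confidence < self.threshold:
            self.pending.append(d)  # a person signs off before any customer impact
            return "human_review"
        return "auto"
```

Note this is consistent with the APRA quote: high-confidence cases can still be actioned fast, while a named owner remains accountable for the queue, the threshold, and the outcomes.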

5. Transparency and Explainability

Make Transparency and Explainability Central to Everything

GenAI models can be prone to issues such as hallucination, where they produce information that appears plausible but is in fact fabricated. This phenomenon can have serious consequences, so all outputs should be both transparent and explainable. Transparency and explainability make it possible to pinpoint the source of truth and audit AI models for fairness and avoidance of bias. A vital function of the AI CoE is to ensure all steps of AI system predictions, decisions, or recommendations can be retraced. This means documenting how the system was developed, the data it was trained on, processing steps, and the model architectures that have been used. Ideally, this will be presented in a way that is accessible and understandable for people in non-technical roles. It may be beneficial to develop an explanation interface so AI models and ML activities are fully understood and trusted. Technical teams should also validate AI models before they are put in the hands of users.

Potential Pitfall: Misleading Chatbot Guidance on Loans
GenAI-powered chatbots can facilitate instant answers to customer or broker queries about loan rates or likelihood of approval. However, loan guidelines are frequently updated, so outdated or misleading information could be provided in error. This is where transparency and explainability come to the fore. It must always be possible to pinpoint the source of chatbot responses, and any guidance should include a disclaimer.
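Retracing a prediction, as this pillar requires, starts with recording provenance at inference time: which model and version produced the output, and which source documents informed it. A minimal illustrative sketch; the record fields and example identifiers are assumptions, not a standard schema.

```python
import json
import time

def log_prediction(model_name, model_version, inputs, output, sources, sink):
    """Append one auditable provenance record per prediction to a log sink."""
    record = {
        "timestamp": time.time(),
        "model": model_name,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "sources": sources,  # e.g. IDs of the guideline documents the answer drew on
    }
    sink.append(json.dumps(record, sort_keys=True))  # append-only, machine-readable
    return record
```

With records like this, a misleading chatbot answer about loan rates can be traced back to the specific (possibly stale) guideline document it was grounded on, which is exactly what the pitfall above calls for.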

How Amdocs Can Help

Amdocs has a long history of helping enterprises in heavily regulated industries make effective and compliant use of technology. Our technical experts and industry consultants work with customers' in-house teams to devise accurate roadmaps that address the most complex challenges. We bring deep knowledge of cloud, data, AI systems, and the wider AI landscape to aid ethical practice. Our offering encompasses strategic implementation of policies, mapping human accountability through the allocation of roles and responsibilities, and the technical aspects of system transparency and explainability. Amdocs has a wealth of experience in setting up and managing enterprise-wide CoEs in large organizations to support various new technologies with broad impacts, such as cloud. Our experience and tested methodologies for CoE success can help ensure your AI CoE ramps up rapidly and meets its goals. Contact us today to set up an initial call, consultation, or workshop. Together we can navigate an ethical AI journey that gives your business the edge on corporate responsibility and commercial success.

Be a Trailblazer in Ethical AI

Setting up an AI CoE to implement the five pillars summarized in this playbook will enable you to take the higher ground on ethical AI. Earning a reputation for leadership in ethical AI boosts trust amongst customers, employees, and other stakeholders. But the benefits go further than that. Having a robust AI governance framework will also put the business in a stronger position when AI regulations come into force. Critical factors like human accountability, transparency, and explainability will already be intrinsic to the development and use of AI systems. What's more, enterprises at the forefront of ethical AI could be invited to help shape industry standards, guidelines, and best practices. Prioritizing responsible use of AI is the right thing to do, both ethically and commercially.

Harish Kumar
Practice Lead, Cloud Data Platforms
Amdocs

Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate financial institutions' migration to the cloud, enable them to differentiate in the digital era, and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit Amdocs.

© 2025 Amdocs. All rights reserved.
