Beware the gap: Governance arrangements in the face of AI innovation
REPORT 798 | OCTOBER 2024

About this report

ASIC reviewed how 23 AFS licensees and credit licensees are using and planning to use artificial intelligence, how they are identifying and mitigating associated consumer risks, and their governance arrangements. This report outlines the key findings from that review.

About ASIC regulatory documents

In administering legislation ASIC issues the following types of regulatory documents: consultation papers, regulatory guides, information sheets and reports.

Disclaimer

This report does not constitute legal advice. We encourage you to seek your own professional advice to find out how the Corporations Act 2001, National Consumer Credit Protection Act 2009 and other applicable laws apply to you, as it is your responsibility to determine your obligations. Examples in this report are purely for illustration; they are not exhaustive and are not intended to impose or imply particular rules or requirements. For privacy reasons, the names of case-study subjects have been changed.

CONTENTS

Foreword 3
Executive summary 4
AI governance: questions for licensees 8
Why look at AI? 9
Findings: use of AI 10
Findings: risk management and governance 18
Where to from here for licensees? 33
Appendix 1: Review methodology and definitions 38
Appendix 2: Accessible data points 40
Appendix 3: Key terms 41
FOREWORD

Artificial intelligence (AI) is transforming many aspects of our lives, including how we engage with financial products and services. The potential benefits to business and individuals are enormous: digital innovations including AI are estimated to contribute $315 billion to Australia's GDP by 2030.[1] To fully realise those benefits, we must balance innovation and protection. The integrity of our financial system and the safety of the consumers who interact with it rely on us finding the right balance.

For some time, ASIC has been reminding licensees that existing obligations apply to their use of AI. ASIC has also been building an understanding of how AI is actually being used in the sectors we regulate. This report is ASIC's first examination of the ways Australian financial services (AFS) and credit licensees are implementing AI where it impacts consumers.

Concerningly, it finds that there is the potential for a governance gap. Simply put, some licensees are adopting AI more rapidly than their risk and governance arrangements are being updated to reflect the risks and challenges of AI. There is a real risk that such gaps widen as AI use accelerates, and this magnifies the potential for consumer harm.

While licensees' approach to using AI where it impacts consumers has mostly been cautious, it is worrying that competitive pressures and business needs may incentivise industry to adopt more complex and consumer-facing AI faster than they update their frameworks to identify, mitigate and monitor the new risks and challenges this brings. As the race to maximise the benefits of AI intensifies, it is critical that safeguards match the sophistication of the technology and how it is deployed. All entities who use AI have a responsibility to do so safely and ethically.

Our review comes at a pivotal time in the development of AI regulation in Australia. We support the Australian Government's Voluntary AI Safety Standard and its intention to introduce mandatory guardrails ensuring testing, transparency and accountability for AI in high-risk settings. However, licensees and those who govern them should not take a wait-and-see approach to legislative and regulatory reform. Current licensee obligations, consumer protection laws and director duties are technology neutral, and licensees need to ensure that their use of AI does not breach any of these provisions.

ASIC's work to engage with and monitor licensees' AI use will continue, particularly as we consider how they embed the requirements of any future AI-specific regulatory obligations. I call on industry to consider the findings of this review and reflect on the questions posed, to ensure that innovation is balanced with the responsible, safe and ethical use of this technology.

Joseph Longo
ASIC Chair

[1] Department of Industry, Science and Resources, List of Critical Technologies in the National Interest: AI Technologies
EXECUTIVE SUMMARY

Artificial intelligence has the potential to transform the provision of financial services and credit in Australia. It provides opportunities for more efficient, accessible and tailored products and services. However, AI can also amplify existing risks to consumers and introduce new ones. Potential harms include bias and discrimination, provision of false information, exploitation of consumer vulnerabilities and behavioural biases, and the erosion of consumer trust. To help shape our understanding of risk to consumers and to inform our regulatory response, we reviewed the use of AI by 23 AFS and credit licensees.

Our review

We analysed 624 AI use cases that 23 licensees in the banking, credit, insurance and financial advice sectors were using, or developing, as at December 2023. These were use cases that directly or indirectly impacted consumers and included generative AI and advanced data analytics (ADA) models. As part of our review, we also asked licensees about their risk management and governance arrangements for AI, and how they planned to use AI in the future. We met with 12 of the licensees in June 2024 to discuss their use cases and governance arrangements.

What we found

We observed a rapid acceleration in the volume of AI use cases. We also observed a shift towards more complex and opaque types of AI, such as generative AI. But on the whole, the way licensees used AI was quite cautious in terms of decision making and interactions with consumers: AI generally augmented rather than replaced human decision making, and there was only limited direct interaction between AI and consumers.

The majority of licensees told us they are planning to increase their use of AI. Given the fast-moving nature of AI and competitive pressures in industry, there is potential for the way AI is used, and the associated risk to consumers, to shift quickly.

We are concerned that not all licensees are well positioned to manage the challenges of their expanding AI use. Some licensees were updating their governance arrangements at the same time as increasing their use of AI. And in the case of two licensees, AI governance arrangements lagged AI use. Governance and risk management arrangements are, by their nature, slow to change. It is therefore likely that any gap between the use of AI and governance arrangements will widen as AI adoption increases. This could leave licensees unprepared if they want to respond quickly but safely to innovations from competitors.

KEY STATISTICS
- 57% of all use cases were less than two years old or in development.
- 61% of licensees in the review planned to increase AI use in the next 12 months.
- 92% of generative AI use cases reported were less than a year old, or still to be deployed.
- Generative AI made up 22% of all use cases in development.
- Only 12 licensees had policies in place for AI that referenced fairness or related concepts such as inclusivity and accessibility.
- Only 10 licensees had policies that referenced disclosure of AI use to affected consumers.
OUR FINDINGS

Use of AI

FINDING 1: The extent to which licensees used AI varied significantly. Some licensees had been using forms of AI for several years and others were early in their journey. Overall, adoption of AI is accelerating rapidly (see page 11).

FINDING 2: While most current use cases used long-established, well-understood techniques, there is a shift towards more complex and opaque techniques. The adoption of generative AI, in particular, is increasing exponentially. This can present new challenges for risk management (see page 13).

FINDING 3: Existing AI deployment strategies were mostly cautious, including for generative AI. AI augmented human decisions or increased efficiency; generally, AI did not make autonomous decisions. Most use cases did not directly interact with consumers (see page 15).

Risk management and governance

FINDING 4: Not all licensees had adequate arrangements in place for managing AI risks (see page 19).

FINDING 5: Some licensees assessed risks through the lens of the business rather than the consumer. We found some gaps in how licensees assessed risks, particularly risks to consumers that are specific to the use of AI, such as algorithmic bias (see page 20).

FINDING 6: AI governance arrangements varied widely. We saw weaknesses that create the potential for gaps as AI use accelerates (see page 24).

FINDING 7: The maturity of governance and risk management did not always align with the nature and scale of licensees' AI use. In some cases, governance and risk management lagged the adoption of AI, creating the greatest risk of consumer harm (see page 29).

FINDING 8: Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage the associated risks (see page 31).

We observed a rapid acceleration in the volume of AI use cases, and a shift towards more complex and opaque types of AI such as generative AI. But on the whole, the way licensees used AI was quite cautious. We found some gaps in how licensees assessed risks to consumers from AI, and for some licensees, governance arrangements lagged their AI use. This creates risk of consumer harm.
Where to from here for licensees?

ASIC supports innovation in the financial system that is balanced with appropriate consumer protections and market integrity safeguards. While licensees' deployment strategies were somewhat cautious, there is fertile ground for consumer harm where use of AI leaps ahead of governance arrangements and controls. We expect licensees to carefully consider their readiness to deploy AI safely and responsibly. Decisions that licensees make now about how they will govern their AI use will determine whether they establish solid foundations on which to deliver the expected benefits and manage risks to themselves and their customers.

Many licensees told us that they were updating their governance arrangements in relation to AI. This is welcome, but there is more to do. AI presents novel challenges, and licensees' governance arrangements should lead their AI use as it increases and evolves. Licensees should consider the findings of this report, and the questions on pages 35–36, to help them consider their readiness to deploy AI safely, responsibly and in compliance with existing obligations.

Licensees' obligations and resources for licensees

The regulatory framework for financial services and credit is technology neutral. Licensees need to consider their existing regulatory obligations before deploying AI. In particular, licensees need to consider the general licensee obligations, directors' duties, and consumer protection provisions, including prohibitions against unconscionable conduct and false or misleading representations (see page 34).

There are a number of resources that licensees can draw on as they deploy AI, such as the recently issued Voluntary AI Safety Standard. This standard gives practical guidance to all Australian organisations on how to safely use and innovate with AI. Licensees who invest the time now will also be in a better position to comply with any future AI-specific regulatory obligations.

The future regulatory landscape

The landscape of AI regulation in Australia is evolving. The Australian Government recently consulted on how it proposes to define high-risk AI, and on the introduction of mandatory guardrails to promote the safe design, development and deployment of high-risk AI. The proposed guardrails include requirements related to testing, transparency and accountability of AI. ASIC supports the introduction of regulatory measures to mandate guardrails for the use of AI in high-risk settings. The findings of this review have informed our contribution to the Government's proposals.

ASIC's focus

We remain focused on advancing digital and data resilience and safety, targeting technology-enabled misconduct and the poor use of AI. Understanding and responding to the use of AI across the entities we regulate is a key priority for ASIC. We will:
- continue to monitor how our regulated population uses AI, and the adequacy of their risk management and governance processes
- contribute to the Australian Government's development of AI-specific regulation
- engage and collaborate with domestic and international regulator counterparts, and
- where necessary and appropriate, take enforcement action if licensees' use of AI results in breaches of their obligations.

AI presents novel challenges, and licensees' governance arrangements should lead their AI use as it increases and evolves. Licensees should review their arrangements in line with our findings.
45、nce and risk management arrangements.The licensee had no overarching AI strategy setting out how and why the licensee had decided to use AI in its operations.The licensee produced no policies setting out standards to guide the design,deployment and oversight of AI,and had not articulated the key ris
46、ks associated with AI and ADA in their risk management framework(e.g.a lack of explainability for complex models).None of the licensees use cases were risk rated.The licensee used an AI model to predict consumer credit default risk by producing a risk score.The score produced by the model was one in
47、put into credit decisions.It had the potential to result in consumers being refused credit or offered less credit than they otherwise would have been.An internal report to a senior committee dated approximately 10 months after deployment of the model stated that it was developed with limited underst
48、anding of the third-party platform used,there was incomplete model documentation with missing critical elements,and poor governance and a lack of a monitoring process.The report further described the model as a black box with no ability to explain the variables in the scorecard or the impact they ar
49、e having on an applicants score.Although the licensees report stated that the model has been stable,it noted that its ability to monitor the model was limited.The report proposed to revise the model,to ensure it is explainable,documented,and has a robust governance process in place.The licensee cont
50、inued to use the model for several months before replacing it with a simpler model,to ensure scoring outcomes and the model were explainable.Despite the issues identified with the above AI model,the licensee reported having plans to expand their use of AI.They also noted that if they did not engage
51、with these capabilities,they would be left behind by competitors.The licensee referred to ongoing work to update their governance and risk management frameworks.However,this example exemplifies the risk in proceeding to adopt AI without adequate foundations in place,and the risk that gaps between us
52、e cases and governance will remain or widen in the face of competitive pressures.Executive summaryA S I C R E P 7 9 88AI governance:Questions for licensees1TAKING STOCKWhere is AI currently being used in your business?2 STRATEGY What is your strategy for AI,now and in the future?3FAIRNESSHow will yo
AI GOVERNANCE: QUESTIONS FOR LICENSEES

1. TAKING STOCK: Where is AI currently being used in your business?
2. STRATEGY: What is your strategy for AI, now and in the future?
3. FAIRNESS: How will you provide services efficiently, honestly and fairly when using AI?
4. ACCOUNTABILITY: Who is accountable for AI use and outcomes in your business?
5. RISKS: How will you identify and manage risks to consumers and regulatory risks from AI?
6. ALIGNMENT: Are your governance arrangements leading or lagging your use of AI?
7. POLICIES: Have you translated your AI strategy into clear expectations for staff?
8. RESOURCES: Do you have the technological and human resources to manage AI?
9. OVERSIGHT: What human oversight does your AI use require, and how will you monitor it?
10. THIRD PARTIES: How do you manage the challenges of using models developed by third parties?
11. REGULATORY REFORM: Are you engaging with the regulatory reform proposals on AI?

For more details, see pages 35–36.
WHY LOOK AT AI?

The use of AI in financial services and credit creates the potential for significant benefits to consumers, such as more efficient, accessible and tailored products and services. But AI can amplify existing risks and create new risks to consumers.

The potential risks to consumers

- Unfair or unintended discrimination due to biased training data or algorithm design: Biased AI outputs could have disproportionate, negative impacts on vulnerable individuals or groups, including financial exclusion (for example, being denied access to credit or insurance, or paying a higher price).
- Incorrect information provided to consumers about products or services: AI models can provide information or advice that appears correct, but contains factual errors or fallacies. This exposes consumers to the risk of harm from relying upon such misleading or false information.
- Manipulation of consumer sentiment or exploitation of behavioural biases: AI can allow for faster iteration of marketing and advertising material, and bespoke micro-targeting. AI can play on customers' feelings and restrict or manipulate their choices.
- Breaches of data privacy and security: AI models may contain or reproduce confidential or sensitive information without the prior and informed consent of impacted individuals. AI models can also be vulnerable to cyber attacks and data leaks.
- An erosion of consumer trust and confidence due to a lack of:
  - explainability: AI models may use techniques that are too complex to be understood and explained by humans, and be trained on data that is too vast and complex for humans to process, resulting in a 'black box' where decisions may not be traceable
  - transparency: consumers may not be informed when AI has been used to make decisions that impact them, or when they are interacting with AI and AI-generated information, and
  - contestability: consumers may not be provided with a process and the necessary information to contest the outcome of a decision facilitated by AI. Contestability is further undermined if consumers are unaware of the use of AI.

MANAGING RISKS FROM AI

Risks are very specific to each AI use case. For example, they can arise from the data input, from the technique or model used, as well as from the purpose, context and level of automation of the models. Risks can also arise throughout the AI lifecycle and can change over time. Because AI operates at scale, using vast amounts of data, risks can be amplified and have the potential to cause harm at scale. This means that AI creates new challenges for licensees in managing risks to consumers from AI. While this review did not test the outcomes from individual AI use cases, we have made observations on whether licensees are prepared to manage the risk of harms from the use of AI.

FINDINGS: USE OF AI
FINDING 1: The extent of AI use varied significantly, but overall adoption is accelerating

What we did

We collected data from 23 licensees on the number of AI use cases in use or in development (as at December 2023) where AI interacted with or impacted consumers.

What we found

- All but two licensees reported at least one AI use case that directly or indirectly impacted consumers.
- The number of use cases each licensee reported varied significantly (see Figure 1).

Figure 1: Use cases reported by licensees. A bar chart showing the number of licensees by the number of use cases they reported: fewer than 6; 6 to 25; 26 to 100; and more than 100. Note: See Table 2 for the data shown in this figure (accessible version).
FINDING 1 (continued)

What we did

We reviewed a total of 624 use cases (see Appendix 1) and mapped them to the year they were deployed.

What we found

- AI adoption is increasing rapidly: 57% of all use cases reported were less than two years old or in development. Of the 624 use cases reported to us, 20% were still in development and had not yet been deployed.
- The adoption of generative AI is, unsurprisingly, a very recent development: 92% of generative AI use cases were deployed in 2022 or 2023, or were in development as at December 2023.
- We can expect the pace of change to continue: 61% of licensees in the review told us they planned to increase their use of AI in the next 12 months. The remainder were planning to maintain their current AI use.

Figure 2: Number of AI use cases by deployment year, 2000 to 2023 plus 'Dev', split into non-generative AI and generative AI, with the release of ChatGPT (30 Nov 2022) marked. Note 1: See Table 3 for the data shown in this figure (accessible version). Note 2: Dev = advised to be in development by the licensee as at Dec 2023; see Appendix 1 for more information. The development dates of 12 use cases were not provided or did not have a clear date and are not reflected in this graph. This graph includes use cases reported as in production or in development as at Dec 2023. It does not include use cases built and decommissioned before the data collection, or use cases where the model technique was not specified.
FINDING 2: Most current use cases applied long-established, well-understood techniques, but there was a shift towards more complex and opaque techniques, including generative AI

What we did

We assessed the complexity of model techniques used in each of the 624 use cases. Complex and opaque techniques can pose additional challenges for oversight. Challenges include understanding and explaining how AI obtains its results, determining whether results are reliable and accurate, and knowing whether outputs are unfairly biased or discriminatory.

What we found

- The majority of current use cases relied on well-known and established machine-learning techniques that produced explainable and interpretable results.
- We observed an increase in the use of more complex and opaque techniques (such as neural networks used in deep learning and generative AI), which are used for the processing and analysis of large volumes of images, audio and text data (see Figure 3). Together these represent 32% of the use cases we saw under development.
- The use of generative AI is set to increase exponentially. While generative AI made up only 5% of use cases that were in use, it made up 22% of those in development.

Figure 3: Model techniques by status, comparing current use cases (n=488) and those in development (n=124) across supervised learning (classification), supervised learning (regression), deep learning, unsupervised learning, generative AI, miscellaneous and not specified. Note: See Table 4 for the data shown in this figure (accessible version).
FINDING 2 (continued)

How different model techniques were used

- Supervised learning (classification): Classification models were mostly used to predict whether a consumer was likely to take out a financial product, using explainable models such as logistic regression (see the sketch after this list).
- Supervised learning (regression): Regression models were primarily used to derive prices or rates, or to forecast future time series.
- Deep learning: These models were mostly used for natural language processing and optical character recognition, primarily when scanning analogue form data to speed up loan, insurance or other form-heavy business processes.
- Unsupervised learning: These models were mostly used for detecting strange or anomalous patterns in areas such as internal audit and fraud detection.
- Generative AI: These models were used to generate first drafts of materials, or responses to customers in carefully constrained circumstances (see page 15 for more information).
- Miscellaneous: These were mostly non-predictive models, such as search engine optimisation or pattern matching.
- Not specified: These were models where licensees did not disclose the model technique. In some cases, these were models built by third parties, and licensees did not have this information.
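To illustrate why the report treats these classification techniques as explainable, the following is a minimal Python sketch of a logistic regression propensity model of the kind described above. It is not drawn from any licensee's system: the feature names and data are hypothetical, and the point is that the fitted coefficients let a human read off how each input weighs on the prediction.

# Minimal sketch: an explainable product-propensity classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical features: monthly income, account tenure, recent logins.
X = rng.normal(size=(500, 3))
# Synthetic labels for illustration only.
y = (X @ np.array([0.8, 0.3, -0.2]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Explainability: each coefficient shows the direction and relative weight
# a feature contributes to the predicted propensity.
feature_names = ["monthly_income", "account_tenure_years", "recent_logins"]
coefs = model.named_steps["logisticregression"].coef_[0]
for name, coef in zip(feature_names, coefs):
    print(f"{name}: {coef:+.3f}")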
WHAT IS GENERATIVE AI?

Generative AI is a type of AI that focuses on creating or generating novel content such as images, text, music, video, designs or recommendations. Unlike traditional AI techniques that produce output that is programmed or copied from existing data, generative AI techniques are designed to generate output based on patterns, structures and examples learned from large data sets during the training process.

Generative AI models have certain characteristics that make them particularly prone to risks of harm. For example, they:
- tend to use large amounts of data for the training of the model; the presence of incomplete data in training sets means that models have the potential to provide biased or inappropriate results
- can generate outputs that are false or inaccurate
- can use complex techniques that are not easily interpretable or explainable, and
- can be subject to novel cyber attacks.
FINDING 3: The way AI was used was mostly cautious

What we did

We looked at what licensees were using AI for, the role AI played in decision making, and the types of data used by the 624 use cases. This allowed us to make some observations about the risks posed by the use of AI. We also compared what we saw against use cases observed overseas or in literature.

What we found

- AI use was mostly cautious. Generally, AI was used to assist or augment human decision making or increase efficiency, rather than make autonomous decisions.
- Most AI use was internal facing. Where AI did directly interact with consumers, it generally operated within set parameters or alongside specific rules.

Decision making

Generally, models were not providing ungoverned outputs or replacing human judgement. Decision making generally involved either:

- Non-automated decisions: decisions where the model produced the output with a final check or verification performed by a human. For example:
  - income/expense verifications for credit applications, and
  - suggested responses for customer service staff.
- Automated decisions: decisions made without human intervention, but operating within specific criteria, thresholds and rules set by humans (a minimal sketch of this pattern follows this list). For example:
  - credit score calculations that had to meet thresholds set by humans, and that operated alongside other set rules or checks (e.g. serviceability), and
  - models that predict the likelihood of a transaction being fraudulent, which were referred to a human for review if they exceeded a defined threshold.
Sources of data

Most data used by these models tended to be from internal sources. For example:
- customer financial information, such as transaction history or asset holdings, or
- details provided by customers when they applied for loans, lodged claims or requested quotes for financial products.

HOW GENERATIVE AI WAS BEING USED

Most current uses of generative AI, or those in development, were internal facing; they involved supporting staff and creating operational efficiencies. In the limited instances where generative AI was used to interact with consumers, it was used within prescribed parameters (i.e. pre-vetted chatbot responses; chatbots deployed in limited circumstances); a minimal sketch of this pattern follows this box. Uses included:
- generating first drafts of documents, such as correspondence or marketing material
- call analysis; summarisation of call transcripts and consumer correspondence (e.g. for hardship identification)
- chatbots for internal use, and for customer engagement, and
- internal assistance: retrieving internal policies.
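The 'prescribed parameters' pattern can be sketched as follows. The intents and pre-vetted answers here are hypothetical; the point is only that unmatched queries fall through to a human rather than to free-form model output.

# Minimal sketch: consumer-facing bot constrained to pre-vetted responses.
APPROVED_RESPONSES = {
    "opening_hours": "Our branches are open 9am to 5pm, Monday to Friday.",
    "reset_password": "You can reset your password from the login page.",
}

def respond(detected_intent: str) -> str:
    # Only pre-vetted content is ever shown to the consumer; anything the
    # model cannot map to an approved intent is handed to a person.
    if detected_intent in APPROVED_RESPONSES:
        return APPROVED_RESPONSES[detected_intent]
    return "I'll connect you with a staff member who can help."

print(respond("opening_hours"))
print(respond("complex_hardship_query"))  # falls through to a human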
94、ature or overseas,such as the use of unconventional third-party data sources(e.g.social media activity)to inform credit or insurance decisions,or the use of generative AI models to produce targeted marketing messages to consumers to maximise sales,based on consumers perceived emotional responses.Thi
95、s is point-in-time information(December 2023)and could change quickly,given the pace of innovation.Table 1:Key uses among licenseesArea of useMost common usesEmerging uses(less commonly observed and/or in development)Credit decisioning and managementPredicting credit default risk to support a decisi
96、on,either by producing a score or rating where a minimum threshold must be met to proceed,or with other rules in automated decisioning.Monitoring existing credit holders to inform contact and collection strategies.Accuracy improvements for decisioning,including to predict probability of recovery for
97、 defaults or arrears,and to prioritise customer contact.MarketingAnalysing a consumers spending patterns to segment them into specific groups so that they receive relevant marketing messages or offers.Optimising marketing communications and engagement by predicting best forms and times for contact.G
98、enerative AI generating draft marketing copy for review.Customer engagement and customer value propositionChatbots to answer simple customer questions based on pre-scripted responses.Cash flow forecasting and budgeting tools to assist customers with personal finances and to engage with their finance
99、s and with AI tools.Predicting credit card or product rewards offers likely to be of interest for customers.Use of generative AI by customer-contact staff to summarise key information from customer complaints so they can respond to complaints in a more efficient and timely manner.The optimisation of
100、 consumer-facing apps and website layouts for ease of customer use based on browsing history and most-used features of the app.A S I C R E P 7 9 817Area of useMost common usesEmerging uses(less commonly observed and/or in development)Fraud detectionFraud detection activities,including transaction mo
101、nitoring,and identification of fraudulent documents,and applications or claims.Use of biometric information for identity verification.Identifying possible mule accounts and instances of account takeover.Identifying customers who may be susceptible to scams,to proactively prevent them.Business effici
102、encies and compliance Internal process efficiency,such as business analytics,quality assurance,and assistance for staff.Document indexing or data enrichment to improve information extraction from documents and support efficiencies in decision making.Triaging incoming complaints to enable more effici
103、ent complaints handling.Call transcription analytics that assist in quality assurance reviews of customer contact staff to ensure that treatment of customer issues and queries is within established quality and compliance standards.Anomaly detection to identify internal errors or non-compliance and t
104、o efficiently target internal audit activities.Automated data cleaning,verification and integrity checks to correct for any potential errors such as spelling mistakes or incorrect labels in consumer form application data.Identification of financial hardship or vulnerability indicators in conversatio
105、ns missed by staff.Pricing optimisation Predicting the likelihood that a customer will switch to a competitor to drive targeted retention offers.To assist in determining discretionary discount offers on products upon a customers request for a review.Insurance Actuarial models for risk,cost and deman
106、d modelling.Supporting the claims process:Claims triaging,decision engines to support claims staff,document indexation,identifying claims for cost recovery.Identifying lapse propensity and prompts to contact consumers.Automating a component of the claims decisioning process,but humans remain respons
107、ible for overall claims decision.Use of machine learning to increase efficiencies in the underwriting process,focused on automating the extraction of information and summarising key information about a customers application.The use of generative Al and natural language processing techniques to extra
108、ct and summarise key information from claims,emails and other key documents.FINDING 3(continued)A S I C R E P 7 9 818FINDINGS:Risk management and governanceA S I C R E P 7 9 819FINDING 4There were gaps in arrangements for managing some AI risksWhat we didWe asked licensees about how they identify an
FINDING 4: There were gaps in arrangements for managing some AI risks

What we did

We asked licensees about how they identify and manage AI risks, including risks to consumers. We also reviewed any frameworks, policies and procedures that supported this.

What we found

- Approximately half of licensees had specifically updated their risk management policies or procedures to address AI risks. Other licensees relied on their existing policies or procedures without making changes.
- Licensees generally had documented policies or procedures for managing risks that are relevant to, but not specific to, AI, such as those associated with privacy and security, and data quality.
- There were gaps in arrangements for managing some of the more unique AI risks, and for managing challenges such as transparency and contestability.

How licensees approached risk management

Licensees took different approaches to managing risk from AI. Approximately half of the 23 licensees had made specific changes to their risk management arrangements to reflect the characteristics of AI. They had updated existing policies with AI-specific content, or had created bespoke AI-related policies, standards or guidance. However, it was not clear in all cases that these documents considered all AI risks, or that they were operationalised consistently; some were limited to generative AI and some provided only guiding principles, without establishing clear standards.

Most of the remaining licensees indicated that they relied on existing risk management frameworks and documents, such as codes of conduct or IT policies. Some of these licensees told us they had considered the adequacy of their existing documentation in light of their AI use, but in other cases it was not clear that the reliance on existing materials was the result of a deliberate decision.

What policies included (or didn't include)

Nearly all licensees produced policies that broadly referenced risks that are relevant, but not specific, to AI, such as privacy and security and data quality.

Only 12 of the licensees in the review had AI policy documents, guidance or checklists that referenced fairness, or related concepts such as risks of discrimination or bias against individuals, communities or groups. In some cases, references were principles-based, and it was not clear how consideration of these principles was embedded into operations.

Only 10 of the licensees had documented requirements or principles in place about disclosure to consumers when they were interacting with or affected by AI. Of these, some only prompted consideration of whether disclosure is appropriate and did not prescribe an approach to disclosing.

No licensees appeared to have implemented specific contestability arrangements for AI, though some noted this concept in principle. Some licensees referred to the availability of internal dispute resolution, though take-up of this in relation to AI will be affected by the fact that consumers would not necessarily be aware that AI was being used.
FINDING 5: There were gaps in licensees' assessments of AI risks

What we did

We asked licensees to set out the risks they had identified for each AI use case, how they mitigated these, and the frequency and type of monitoring they did.

What we found

We found some gaps in licensees' assessment of risks:
- Some licensees considered the risks of AI through a business lens rather than focusing on potential harm to consumers, and they did not consistently identify AI-specific risks such as algorithmic bias, or fully consider the impact of AI use on their regulatory obligations.
- We observed some weaknesses in how licensees provided meaningful human oversight, and in how they monitored for and responded to unexpected model outputs.
- We observed that licensees' consideration of transparency and contestability was relatively immature.

Business vs consumer risk

Many AI use cases were driving business efficiencies and/or providing outputs to accountable human decision makers. These characteristics reduced the potential risk of consumer harm, which likely accounted for some licensees' more limited identification of consumer harm. However, this was not the case for all use cases, and we identified gaps in how some licensees considered risks to consumers.

For example, some licensees identified the risk of an incorrect model output, but noted the consumer could contest it, or staff could override it. However, they did not consider the potential harm if the model output caused a consumer to abandon their transaction altogether, potentially without knowing they could contest it (or indeed that AI was used).

In some instances, licensees were focused on business risk, and did not fully consider and manage the effects of their models on consumers. In those instances, mitigation and monitoring activity was also skewed towards business risk rather than consumer risk. For example, we observed instances where licensees used overseas-developed models for identity verification. They identified the business risk of failing to identify fraud and escalated cases that failed verification for manual review. However, their responses did not identify the potential for some groups to be disproportionately impacted if overseas-developed models had not been adequately trained on a data set that was representative of the licensees' Australian customers.
FINDING 5 (continued)

Impact on regulatory obligations

We observed instances where AI use cases could have potential implications for licensees' compliance with existing conduct and consumer protection obligations, but this was not identified as a possible risk. For example, customer segmentation by AI models in marketing could potentially identify customers who are not in a product's target market and lead to breaches of the design and distribution obligations. This risk was generally not identified, though when prompted, licensees referred to existing controls to ensure compliance with design and distribution obligations, or to human oversight (i.e. a human in the loop). Failure to consider the impact on regulatory obligations is particularly a risk where decisions about AI models or use cases are made without input or oversight by risk and compliance functions.
Few licensees considered algorithmic bias

Very few licensees proactively identified risks of algorithmic bias in their responses about particular use cases, or indicated they actively tested for bias. Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one category over another in ways different from the intended function of the algorithm. In some cases, this was likely due to algorithmic bias being less relevant, given the nature of their use cases. More licensees demonstrated that they had considered and mitigated this risk when we specifically queried this, but one licensee indicated that they did not test for bias. Some licensees indicated they were aware that possible algorithmic bias in their data sets could influence outcomes, but they did not appear to test for a disparity in outcomes on an ongoing basis.
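Ongoing disparity testing of the kind the report found missing need not be elaborate. A minimal sketch follows, assuming hypothetical group labels and a commonly used (but not prescribed) four-fifths tolerance: compare approval rates across cohorts each review period and flag ratios that fall below the tolerance for root cause analysis.

# Minimal sketch: periodic outcome-disparity check across cohorts.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_alert(rates, tolerance=0.8):
    """Flag groups whose approval rate is below tolerance x the highest rate."""
    highest = max(rates.values())
    return {g: r / highest for g, r in rates.items() if r / highest < tolerance}

# Synthetic decisions for illustration only.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)
rates = approval_rates(sample)
print(rates)                   # {'group_a': 0.8, 'group_b': 0.55}
print(disparity_alert(rates))  # {'group_b': 0.6875} -> investigate root cause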
CASE STUDY: No evidence of consideration of impact on regulatory obligations

A licensee recognised that a model was under-predicting risk for a particular customer cohort. In response, the licensee adjusted the settings for the model, which, among other things, had the effect of increasing pricing offered to that cohort. The licensee used the outputs from the updated model even where it acknowledged that some consumers within the cohort could potentially be eligible for a lower price, based on outputs from other assessments that were subsequently introduced. We did not see evidence that the licensee had considered the flow-on impacts of this approach for consumers, in the context of the general obligation to provide financial or credit services efficiently, honestly and fairly.
FINDING 5 (continued)

Transparency and contestability for consumers is a complex area, and consideration of this was relatively immature

The review highlighted that the question of whether the use of AI should be disclosed to consumers is a challenging one, as are the questions of what information should be provided and when. Transparency is important, as it generally allows for greater engagement and informed decision making, but there are limits to the effectiveness of disclosure in protecting consumers.

A small number of licensees had considered whether their use of AI should be disclosed to consumers, and many of these appeared to consider the appropriateness of disclosure on a case-by-case basis. Few licensees identified that a lack of transparency and contestability about the use of AI could erode consumer trust. This stance on disclosure likely reflects the nature of their use cases, with few licensees using AI to make automated decisions or interact directly with consumers. However, it potentially also reflects a lack of maturity in considering these issues.

Discussions with licensees highlighted transparency and contestability as a challenging area, with licensees questioning:
- how much AI had to be involved in an interaction or decision before it should be disclosed
- whether consumers would find disclosure useful, and
- whether it was necessary to introduce transparency now, given some models had been in use for a long time.

CASE STUDY: Not disclosing AI use to a consumer making a claim

One licensee used a third-party AI model to assist with indexing documents submitted for insurance claims, which included sensitive personal information, to improve efficiencies for claims staff. The licensee identified that consumers may be concerned about their documents being sent to a third party to be read by AI, but decided not to specifically disclose this to consumers. The licensee's documents explained that its privacy policy stated that consumers' data would be shared with third parties, and the data was at all times kept in Australia. But consumers were not specifically informed that some sharing of information involved AI, or about whether they could opt out. While the AI use in this case only involved the provision of administrative support functions to human claims assessors, rather than any AI-driven decisions, it illustrates the complexity of the issue and the potential for loss of consumer trust.
FINDING 5 (continued)

The importance of meaningful human oversight

We asked licensees to provide information about whether their models operated with a human in the loop. Most licensees told us that this was the case for most of their models. In practice, however, this ranged from using the model's output to inform a human decision maker, to referring exceptions to a human for review, to having human involvement in training, to periodically testing the model's operation. What constitutes meaningful human oversight depends on the nature of the use case.

Some licensees had purposefully decided that a human would be involved in, and accountable for, each decision where AI was involved, and had documents affirming the accountability of humans for decisions. Other licensees conducted periodic checks of models in line with established controls, to identify issues such as model drift. But in some cases, licensees' arrangements did not appear to provide sufficient human oversight, particularly where licensees did not fully understand models, as seen in the case study on page 7.
Monitoring and responding to issues was not consistent

Most licensees were monitoring their models, but practices varied widely, and we identified some gaps:
- A small number of licensees who were in the early stages of their AI journeys reported only testing at the pre-deployment and deployment stages and relying on trigger-based reviews for post-deployment monitoring. Better practice was licensees conducting periodic reviews of the model data and model output to ensure continual oversight (a sketch of one such periodic check follows this list).
- In many cases, monitoring practices focused on testing outputs against business metrics, rather than a more comprehensive analysis that considered possible consumer harm.
- Where licensees identified shifts or unexpected outputs, they differed in their response. Better practice was to conduct root cause analysis, including testing for consumer impact. Poorer practice was a licensee simply amending model thresholds to bring outputs within their business risk appetite, without root cause analysis or assessment of customer impact.
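A minimal sketch of a periodic distribution check of the kind described as better practice, using the population stability index (PSI) on hypothetical score samples. The 0.2 alert level is a common rule of thumb, assumed here rather than taken from the report.

# Minimal sketch: weekly score-drift check using PSI.
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population stability index between two score samples in [0, 1]."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
baseline = rng.beta(2.0, 5.0, size=5000)   # score distribution at deployment
this_week = rng.beta(2.6, 4.0, size=5000)  # drifted distribution (synthetic)

score = psi(baseline, this_week)
print(f"PSI = {score:.3f}")
if score > 0.2:  # common rule-of-thumb alert level
    print("Significant drift: conduct root cause analysis and test consumer impact")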
CASE STUDY: Different approaches when issues arose with AI models

Licensee A and Licensee B each deployed models to predict consumer credit default risk by producing a risk score.

Poorer practice: When scores were recalibrated by the external vendor on whose platform the model was built, Licensee A noted that time did not allow for thorough testing, and no documentation was created to ascertain the impact of this change.

Better practice: When Licensee B's model produced unexpected scores, Licensee B:
- detected this as part of routine weekly monitoring
- conducted a root cause analysis to address the underlying issue, and
- investigated to identify any consumer impact (and found none).
FINDING 6: AI governance arrangements varied widely. We identified weaknesses that create the potential for gaps as AI use accelerates

What we did

The effectiveness of governance and risk management frameworks in relation to AI is a key factor in determining what risks a licensee's AI use poses. We therefore reviewed each licensee's approach to governance and the maturity of their governance arrangements.

What we found

- The maturity of AI governance and oversight varied significantly. Licensees sat somewhere on a spectrum of maturity of AI governance.
- We also identified some weak points in governance arrangements that will impact how licensees are able to manage risks from AI use, particularly if they accelerate adoption.

Maturity of governance arrangements

We identified three broad categories of approaches to governance that formed a spectrum from least to most mature:
- The least mature took a latent approach that had not considered AI-specific governance and risk.
- The most mature took a strategic, centralised approach.
- Licensees falling in between generally adopted decentralised approaches that leveraged existing frameworks.

Licensees weren't always entirely within one of the three categories, but sat somewhere along the spectrum.
FINDING 6 (continued)

Least mature: Latent

At the time of our review, the least mature licensees had not considered AI risks and governance, with no or few formal arrangements. Where these licensees used AI, they relied entirely on their existing frameworks. Any weaknesses in those translated into weaknesses in AI governance.

Leveraged and decentralised

Some licensees leveraged their existing governance and risk management arrangements to govern AI. While these licensees had considered the risks and opportunities of AI, their approaches tended to be decentralised, and determined by various parts of the business based on their requirements. These licensees generally did some or all of the following:
- considered that AI risks were covered by existing risk classes and did not include AI explicitly in their risk appetite statement
- relied on individual business lines to propose one-off AI use cases to address business needs
- demonstrated ownership and accountability for AI at a model or business unit level, but did not always have a senior executive accountable overall
- had pre-existing governance arrangements, policies and procedures for well-established forms of AI, and
- had documented AI and/or data ethics principles. Licensees varied in how well they incorporated these into relevant existing policies and operationalised them in practice.

The efficacy of the leveraged, decentralised approaches depended on:
- whether the licensee had considered its AI strategy and risk appetite
- the robustness of existing governance and risk management arrangements, and
- the nature and extent of the licensee's AI use.

Most mature: Strategic and centralised

The more mature licensees developed strategic, centralised AI governance approaches. These licensees generally:
- had a clearly articulated AI strategy
- included AI explicitly in their risk appetite statement
- demonstrated clear ownership and accountability for AI at an organisational level, including an AI-specific committee or council
- reported to the board about AI strategy, risk and use
- had AI-specific policies and procedures that reflected a risk-based approach, and these spanned the whole AI lifecycle
- incorporated consideration of AI ethics principles in the above, and
- told us they were investing in resources, skills and capability.
FINDING 6 (continued)

AI GOVERNANCE ARRANGEMENTS: BETTER AND POORER PRACTICES OBSERVED

AI strategy: Better AI strategies set out clear objectives and principles for AI use, and considered the skills, capabilities and technological infrastructure required to deliver on the strategy. Poorer AI strategies did not align AI use with desired outcomes and objectives, or did not inform organisational risk appetite.

Board reporting: Better practices included periodic reporting to the board or relevant board committee on holistic AI risk. Poorer practices included ad-hoc reporting on a subset of AI-related risks, or none at all.

Oversight: Seven licensees had set up, or were in the process of setting up, a committee or council to oversee AI. Better practices were cross-functional, executive-level committees with clear responsibility and decision-making authority over AI use and governance. Poorer practices included committees that met infrequently and had a poorly defined mandate.

AI ethics principles: Twelve licensees had incorporated some of the eight Australian AI Ethics Principles in their AI policies and procedures. However, in some cases the references were high level and it was unclear how principles were to be applied in practice across the AI lifecycle. Licensees did not necessarily refer to all eight ethics principles; they were weaker in considering the disclosure of AI outputs and contestability. Poorer practices included licensees relying on their organisational codes of conduct or other general policies instead of any explicit AI ethics principles.
Weaknesses in governance and risk management

We identified the following weak points in governance, which can indicate that licensees' arrangements have not been fully operationalised or are starting to lag their AI use. These are particularly relevant for licensees taking a leveraged and decentralised approach.

Licensees and their boards may not have clear visibility of their AI use

Some licensees required extra time to collate use cases to respond to our notices. We suspect that in some cases a lack of an AI inventory, or the recording of models in several dispersed model registers, contributed to this.

CASE STUDY: Incomplete model register

One licensee required all models, as defined in its Model Risk Policy, to be entered into a model register, and had developed a Model Risk Management System to maintain its register and manage model lifecycle workflow activities. However, in responding to ASIC's request, the licensee identified models missing from the register and failures to comply with the Model Risk Management System, suggesting that the licensee's centralised oversight remained incomplete.
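A centralised register of the kind this case study contemplates can be as simple as one inventory with one entry per model. A minimal sketch follows; the fields are illustrative choices, not drawn from any licensee's Model Risk Policy.

# Minimal sketch: a single model inventory with per-model oversight fields.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRegisterEntry:
    model_id: str
    owner: str                  # accountable person or business unit
    technique: str              # e.g. "logistic regression", "generative AI"
    third_party: bool           # developed or hosted by a vendor?
    deployed: date | None       # None while still in development
    risk_rating: str            # outcome of a documented risk assessment
    last_review: date | None = None
    notes: list[str] = field(default_factory=list)

register: dict[str, ModelRegisterEntry] = {}

entry = ModelRegisterEntry(
    model_id="credit-default-score-v2",
    owner="Retail Credit Risk",
    technique="gradient boosted trees",
    third_party=True,
    deployed=date(2023, 6, 1),
    risk_rating="high",
)
register[entry.model_id] = entry
print(register["credit-default-score-v2"].risk_rating)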
FINDING 6 (continued)

Evolving arrangements lead to complexity and fragmentation

Some licensees' AI governance frameworks and policies were spread across several documents, which had developed iteratively in response to particular issues and AI implementations, creating a risk of gaps and inconsistencies. These licensees may have difficulty overseeing their AI use and compliance with complex and fragmented frameworks, especially as AI use increases.

Evolving expectations are not applied to existing models

In some cases, licensees' expectations evolved, for example around the application of ethical considerations to consumer-facing models. However, updated policies and procedures were not necessarily applied to the existing suite of models, nor was there an expectation that they be. Applying evolving policies to all existing models is important to ensure that they are implemented consistently.

CASE STUDY: Failure to apply evolving policies

One licensee had introduced a requirement that disclosure to consumers be considered in the context of the ethical principle around transparency. When we queried how they had considered disclosure for a particular consumer-facing model with a direct impact on consumers making an insurance claim, they said that while they had considered the costs and benefits of disclosure to the consumers at the time of the model's inception several years ago, there had been no formal process for this. They indicated that they had not applied their current policy to models already in use. They told us that they would 'certainly consider the question about explainability and transparency for new deployments ... It wasn't in our process then; it is now.'
FINDING 7: The maturity of governance and risk management did not always align with the nature and scale of licensees' AI use

What we did

We compared the maturity of licensees' governance arrangements to the nature and scale of their AI use, to identify potential risks and gaps.

What we found

- We expected to see a clear correlation between those licensees with the most mature frameworks and the greatest AI use. Instead, the picture was more nuanced.
- For some licensees, their governance arrangements led their AI use. For most, AI governance and use were broadly aligned, but where they were updating their governance arrangements in parallel with increased AI use, this created a risk. For a small number of licensees, governance arrangements lagged their use of AI.
- As AI use accelerates, there is a risk that the gap between AI deployment and appropriate governance arrangements will widen.

Figure 4: Licensees' AI governance maturity relative to AI use. A quadrant chart plotting AI use against AI governance maturity:
- Governance lagged AI use (significant AI use, low AI governance maturity): greatest risk, immediate action required.
- AI use and governance broadly aligned (significant AI use and AI governance maturity): managed risk, caution required.
- AI use and governance broadly aligned (low AI use and AI governance maturity): low risk, caution required.
- Governance led AI use (low AI use, high AI governance maturity): least risk, vigilance required.

Note: For an accessible version of this figure, see page 30.
Significant AI use with low AI governance maturity: AI governance arrangements lagged AI use
Two licensees had started deploying consumer-impacting AI use cases without considering AI challenges in a systematic way or making changes to their existing governance and risk management arrangements. Weaknesses in existing frameworks meant their arrangements were not adequate to manage AI risks. This cohort represents the greatest source of risk. Both licensees in this cohort were relatively small. Neither was using or developing generative AI.

Significant AI use with high AI governance maturity: AI use and governance broadly aligned
For many licensees, governance arrangements and models were broadly aligned, but potentially coming under pressure, especially as AI use accelerates. Eight licensees within this cohort had significant consumer-impacting use cases and mature governance arrangements relative to others. This cohort included licensees of various sizes. Most of the generative AI use cases in use (18 of the 22) and in development (23 of 26) belonged to licensees in this cohort. The challenge for these licensees will be to maintain the adequacy of their arrangements and ensure they are fully operationalised as their AI use grows in scale and complexity, particularly if their approach to AI governance is already fragmented.

Low AI use with low governance maturity: AI use and governance broadly aligned
Nine licensees had limited AI use and had not put specific AI governance arrangements in place. Most licensees in this cohort had few use cases and limited plans to expand, but some were considering uplifts to their governance frameworks to prepare for future AI use. There was only one very limited use of generative AI among this cohort. Licensees in this cohort do not currently present significant risk, but risks could emerge if their posture towards AI changes without appropriate governance arrangements being established first.

Low AI use with high AI governance maturity: Governance arrangements led AI use
Four licensees had relatively mature frameworks and yet did not have significant consumer-impacting models. This suggests that their decision to progress carefully is a deliberate one, informed by a well-considered AI strategy and a thorough assessment of risk. This cohort had particularly well-advanced governance frameworks relative to their use cases. It was starting to explore generative AI but was cautious in deployment and had appropriate frameworks in place.
FINDING 7 (continued)
The maturity of governance and risk management did not always align with the nature and scale of licensees' AI use

VIGILANCE REQUIRED
Of the 23 licensees reviewed, 14 were planning to increase their use of AI. Of these, 13 were also planning, or had commenced, an uplift in AI governance. Only one appeared to have uplifted their governance before their anticipated uptick in AI use. These figures underline the need for licensees to regularly review whether their governance arrangements are aligned to the scale and complexity of their AI use, and to consider the potential for gaps if AI uptake outpaces governance uplifts. Licensees should be regularly reviewing and updating their governance and risk management arrangements to ensure the arrangements do not fall behind their evolving AI use.

FINDING 8
Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage risk
What we did
We asked licensees to identify which models were developed by third parties, and how they managed these relationships. Using third parties to develop or deploy models can bring significant benefits, such as overcoming limitations of resourcing and technical skills, especially for smaller licensees. However, improperly managed third-party models can introduce risks, such as a lack of transparency and control, and security and privacy concerns. There are additional challenges in risk management and oversight where licensees do not have insight into the operation and training of models.

What we found
30% of all use cases in our review had models that were developed by third parties. Some licensees relied heavily on third parties for their models:
– For four licensees, 100% of the models in their use cases were developed by a third party.
– For 13 licensees, 50% or more of the models were developed by a third party.
Some licensees did not have robust third-party management procedures. Better practices saw licensees setting the same expectations for models developed by third parties as for internally developed models.
CASE STUDY
Poorer practice: oversight of third-party models
Most models reported by one particular licensee were developed by third parties. This licensee was not able to identify the AI technique used for all of its models and acknowledged the challenges: 'In our experience vendors are hesitant to provide details beyond standard marketing literature due to intellectual property concerns.' The licensee described processes for understanding the accuracy and fitness for purpose of third-party models, but did not produce a third-party supplier policy or a documented process for validating, monitoring and reviewing third-party models.

CASE STUDY
Better practice: oversight of third-party models
One licensee had supplier risk frameworks in place that complemented its model risk requirements for third-party developed models, and set clear expectations, including to:
– obtain proof of independent validation from the supplier and validate the model internally before use
– establish service-level agreements to ensure models are implemented appropriately, including back-ups and disaster recovery plans, and
– establish a process to be notified of model changes, to obtain performance monitoring results and to consider fourth-party risks.
The licensee reported: 'All third-party models are subject to the same governance principles as internally developed models.'
Where to from here for licensees?

Licensees must consider their existing regulatory obligations
What licensees need to do to comply with their existing regulatory obligations when using AI depends on the nature, scale and complexity of their business. It also depends on the strength of their existing risk management and governance practices. This means there is no one-size-fits-all approach to the responsible use of AI. The regulatory framework for financial services and credit is technology neutral. Several existing regulatory obligations are relevant to licensees' safe and responsible use of AI, in particular the general licensee obligations, consumer protection provisions and directors' duties. For example:
– Licensees must do all things necessary to ensure that financial services or credit services are provided in a way that meets all of the elements of 'efficiently, honestly and fairly'. Licensees should consider how their AI use may impact their ability to do so; for example, if AI models bring risks of unfairly biased or discriminatory treatment of consumers, or if the licensees are not able to explain AI outcomes or decisions.
– Licensees must not engage in unconscionable conduct. Licensees must ensure that their AI use does not result in acting unconscionably towards consumers, and that AI is not used to unfairly exploit consumer vulnerabilities or behavioural biases. It is also critical that licensees mitigate and manage the risks of unfair bias against, and discrimination of, vulnerable consumers arising from AI use.
– Licensees must not make false or misleading representations. Licensees must ensure that the representations they make about their AI use, model performance and outputs are consistent with how they operate. If licensees choose to rely on AI-generated representations when supplying or promoting financial services, they must ensure that those representations are not false or misleading.
– Licensees should have measures for complying with their obligations, including their general obligations, and these should be documented, implemented, monitored and regularly reviewed. If the use of AI poses new risks or challenges to complying with obligations, licensees should identify and update the relevant compliance measures.
– Licensees must have adequate technological and human resources. Licensees should consider whether they have staff with the skills and experience to understand the AI used, and who can review AI-generated outputs. Licensees should have sufficient technological resources to maintain data integrity, protect confidential information, meet current and anticipated future operational needs (including in relation to system capacity), and comply with all legal obligations.
– Licensees must have adequate risk management systems. Licensees should consider how the use of AI changes their risk profile, whether this requires changes to their risk management frameworks, and whether they are still meeting their risk management obligations in light of their use of AI.
– Licensees remain responsible for outsourced functions, and they should have measures in place to choose suitable service providers, to monitor their performance, and to deal appropriately with any actions by such providers. Licensees should consider how these expectations apply if they use third-party providers at any stage in the AI lifecycle.
– Company directors and officers must discharge their duties with a reasonable degree of care and diligence. These duties extend to the adoption, deployment and use of AI. Directors and officers should be aware of the use of AI within their companies, the extent to which they rely on AI-generated information to discharge their duties, and the reasonably foreseeable associated risks.
AI governance: Questions for licensees
1. Taking stock: Where on your AI journey are you? Do you know where AI is used in your organisation? Do you have an AI inventory, and are you confident that it is being adequately maintained? (For one illustration of what an inventory record might capture, see the sketch after this list.)
2. AI strategy: Are you clear where you are going, and why? Do you have a clear and documented strategy for what you want to achieve with AI, now and in the future? How does this align with your business objectives and risk appetite?
3. Ethics and fairness: What ethical challenges does your use of AI raise? How do you meet your obligations to provide financial services and credit efficiently, honestly and fairly when using AI?
4. Accountability: Who is accountable for AI use and outcomes, at model level and overall? Do they get the reporting they need to do their job? How are you measuring consumer outcomes from AI? Are you delivering benefits and avoiding harms? For accountable entities under the Financial Accountability Regime (FAR), have you considered the use of AI in key functions when assigning accountable persons and establishing clear lines of accountability?
5. Risk: Are you clear on conduct and regulatory compliance risk from AI, particularly risk to consumers? What is your risk tolerance? How are you identifying, mitigating and monitoring risk throughout the AI lifecycle? How will you document this, and monitor adherence to it? Do you have staff from multiple disciplines involved in assessing risk, and not just technical experts? Are your assessments of risk, your controls and your monitoring still adequate if your risk profile changes with your use of AI?
6. Alignment: Do your governance arrangements lead your AI use, now and for your future AI plans? How do the risks and ethical challenges change with a move towards more complex and opaque AI, such as generative AI? What changes do you need to make as a result to ensure your governance leads your use?
7. Policies and procedures: Have you translated your AI strategy and assessment of risk into policies for your staff, setting clear expectations through the AI lifecycle? Is your approach risk based? Do policies lead your AI use? Are your AI policies and procedures fit for purpose, now and for anticipated future use? How do you ensure your staff adhere to your AI policies and procedures?
8. Resourcing: Do you have the right technological and human resources at all levels? How do you ensure your resources remain fit for purpose as AI use accelerates and evolves? How do you ensure your staff at all levels, including compliance and internal audit staff, have the skills and voice to engage with AI decisions and monitoring in their roles?
9. Oversight and monitoring: Are you clear on what human oversight you expect? Do you have procedures for when things go wrong? Do you have an action plan if a model is found to be producing unexpected outputs? Have you considered the adequacy of your business continuity, backup and disaster recovery plans for AI systems?
10. Third parties: How do you manage the challenges of relying on third parties? How will you validate, monitor and review third-party AI models?
11. Regulatory reform: Are you engaging with the regulatory reform proposals on AI?
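Question 1 refers to an AI inventory. As a purely hypothetical illustration, the Python sketch below shows the kind of fields one record in such an inventory might capture, drawing on themes from this report (model technique, consumer impact, third-party development, accountability, validation and monitoring). The field names, 'Vendor X' and all values are invented for illustration, not a prescribed format.

from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    use_case_id: str
    description: str              # what the model does and for whom
    model_technique: str          # e.g. "supervised learning: classification"
    consumer_impact: str          # direct / indirect / none
    developed_by: str             # internal team or third-party supplier
    accountable_owner: str        # a named individual, not just a team
    deployment_date: str          # original deployment date, not last update
    last_validation_date: str
    risks_identified: list[str] = field(default_factory=list)
    monitoring_in_place: bool = False

# Example record (all values hypothetical).
record = AIUseCaseRecord(
    use_case_id="UC-042",
    description="Predicts likelihood of default for personal-loan applicants",
    model_technique="supervised learning: classification",
    consumer_impact="direct",
    developed_by="third party (Vendor X)",
    accountable_owner="Head of Credit Risk",
    deployment_date="2021-03",
    last_validation_date="2024-06",
    risks_identified=["algorithmic bias", "limited explainability"],
    monitoring_in_place=True,
)
print(record.use_case_id, record.consumer_impact)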
Appendices

APPENDIX 1
Review methodology and definitions

Definition of AI
We defined AI broadly, to include both:
– advanced data analytics: the autonomous or semi-autonomous examination of data or content using sophisticated techniques and tools, beyond those of traditional business intelligence (BI), to discover deeper insights, make predictions and generate recommendations, and
– generative AI: a category of AI that focuses on creating or generating novel content in forms such as image, text, music, video, designs and recommendations. Generative AI systems are designed to produce output that is not explicitly programmed or copied from existing data, but rather is generated based on patterns, structures and examples learned from large datasets during the training process.
We adopted this broad definition because risks to consumers are not limited to newer, more complex techniques that are the subject of widespread debate, such as generative AI. If governance is inadequate, and risks are not well identified, mitigated and monitored, consumer harm can arise even from techniques or models that have been used for many years.
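One such long-established technique is logistic regression, which the key terms in Appendix 3 use as their example of a use case: a model applied to predict a customer's likelihood of default. The following minimal Python sketch, using scikit-learn on entirely synthetic data, illustrates what such a model looks like in practice; the features, data and numbers are invented for illustration and are not drawn from any licensee in the review.

# Illustrative sketch only: a single supervised-learning classification model
# applied to a specific context, i.e. a "use case" in this report's sense.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicant features (e.g. income, existing debt, repayment history),
# and hypothetical labels: 1 = customer later defaulted, 0 = did not.
X_train = rng.normal(size=(1000, 3))
y_train = (rng.random(1000) < 0.1).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score a new applicant: the model outputs a probability of default,
# which a lending workflow might feed into a credit decision.
applicant = np.array([[0.2, -1.1, 0.7]])
p_default = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of default: {p_default:.2%}")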
Review scope
We reviewed the current and planned uses of AI, as at December 2023, by a sample of 23 licensees. The licensees were drawn from the banking, credit, general and life insurance, and financial advice sectors. We looked at use cases where AI interacted with or impacted consumers. The sample was not representative of AI use generally, or of the sectors in the review. We selected licensees that we identified as most likely to be using AI, based on their business model and ASIC's intelligence. Some were found in our review to be early on the AI journey. We limited the scope of our review to AI use cases that directly or indirectly impacted consumers. The scope did not include:
– back-office functions
– investing, markets and trading activities, or
– models used for compliance with laws administered by other regulators.
The review was intended to provide ASIC with an understanding of how licensees are using and planning to use AI, and how they are considering and mitigating associated risks. We did not test for consumer outcomes from individual AI models.

Review methodology
We reviewed information for 624 use cases provided by the 23 licensees:
– For licensees with a relatively small number of use cases, we reviewed detailed information for all of their use cases.
– For licensees with a larger number of AI use cases, we reviewed detailed information for a subset of their use cases, selected by ASIC.
We also asked the 23 licensees to respond to questions and provide relevant documents to enable ASIC to understand their AI strategies, policies, processes and practices. We reviewed licensees' responses and supporting material; these covered governance and oversight, risk management, consumer benefits, harms and outcomes, monitoring, reporting, and future plans. We held meetings with 12 of the licensees in the review during June 2024, to ask for further context and clarification about their use of AI and their governance arrangements.
The nature of the use cases and their models varied significantly. In preparing this report, we have considered the context and operation of the models and provided generalised views. In some circumstances, such as where a use case's model contributed to decisions affecting consumers, we have characterised explainable and interpretable attributes positively. However, we acknowledge that there is an inherent trade-off between the complexity and explainability of a model, and that a more complex model is not inherently riskier than a simpler model. The risks of AI are heavily context dependent.

Data provided by licensees
Model techniques
As highlighted by the 'not specified' category in Figure 3, some licensees did not provide detail about the model technique used in a use case, due to commercial sensitivities or a lack of transparency from third-party providers. We have used the category 'not specified' for these use cases or, where possible, assigned the use case models into categories based on model characteristics inferred from the information provided to us. As such, there may be some small variances from the actual model types and categories.

Number of use cases
Licensees had varying approaches to responding to our request for use case information. Certain licensees responded by providing one use case per line item, while some larger licensees provided multiple use cases in a single line item. Unless the number was specifically confirmed by the licensee, we have based the number of a licensee's use cases on the number of line items provided. As such, some licensees may have a greater number of use cases in scope than set out in Figure 1.

Model development year
Licensees had varying approaches to responding to our request for information about the date of deployment for use cases. Some licensees preferred to provide us with the date they updated a model rather than the original date of deployment. For consistency, we have chosen to take the earlier date when reporting this data. There may also be a selection of use cases that were decommissioned before we requested the information. Since they were not currently in use or in deployment, these models were omitted from the sample and are not reflected in Figure 2. Use cases and their corresponding models were classified as 'current' if they were operational and/or working on live data at the time of the review. Use cases and their corresponding models were classified as 'in development' if the model had not yet been deployed by the licensee for use on live, real-time data streams, was part of a pilot study, was still being built, or was scheduled for deployment.
APPENDIX 2
Accessible data points

Table 2: Number of use cases reported by licensees
Number of use cases reported | Number of licensees
Fewer than 6 | 11
6–25 | 6
26–100 | 3
More than 100 | 3
Note: This is the data shown in Figure 1.

Table 3: Number of AI use cases by deployment year
Deployment year | Use cases: non-generative AI | Use cases: generative AI
2000 | 2 | 0
2001 | 0 | 0
2002 | 0 | 0
2003 | 0 | 0
2004 | 0 | 0
2005 | 0 | 0
2006 | 0 | 0
2007 | 5 | 0
2008 | 1 | 0
2009 | 3 | 0
2010 | 3 | 0
2011 | 7 | 0
2012 | 14 | 0
2013 | 4 | 0
2014 | 11 | 0
2015 | 6 | 0
2016 | 17 | 0
2017 | 11 | 0
2018 | 23 | 0
2019 | 17 | 0
2020 | 44 | 1
2021 | 41 | 1
2022 | 84 | 2
2023 | 118 | 18
In development | 96 | 26
Note: This is the data shown in Figure 2.

Table 4: Model techniques by status
Model techniques | Current (n=488) | In development (n=124)
Supervised learning: Classification | 42% | 39%
Supervised learning: Regression | 18% | 17%
Deep learning | 13% | 10%
Unsupervised learning | 7% | 3%
Generative AI | 5% | 22%
Miscellaneous | 2% | 7%
Not specified | 13% | 2%
Note: This is the data shown in Figure 3.
APPENDIX 3
Key terms

ADA – Advanced data analytics: the autonomous or semi-autonomous examination of data or content using sophisticated techniques and tools, beyond those of traditional business intelligence, to discover deeper insights, make predictions and generate recommendations
AI – Artificial intelligence: a collection of interrelated technologies that can be used to solve problems autonomously and perform tasks to achieve defined objectives. In some cases, this is done without explicit guidance from a human being. For the purposes of this report, AI includes advanced data analytics and generative AI
AI lifecycle – Includes the following stages: design, data and modelling phase; verification and validation phase; deployment phase; and operating and monitoring phase. These phases may take place in an iterative manner and are not necessarily sequential
algorithm – A set of instructions that guides a computer in performing specific tasks or solving problems. Algorithms can range from simple tasks, like sending reminders, to the complex problem solving that is crucial in AI and machine learning
algorithmic bias – Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one category over another in ways different from the intended function of the algorithm
contestability – The ability for the outputs or use of an AI system to be challenged by the people impacted by that AI system
deep learning – A machine-learning technique that uses interconnected layers of neurons to learn and understand patterns in data, especially in tasks like image recognition and speech synthesis. 'Deep' refers to the fact that the circuits are typically organised into many layers, which means that computation paths from inputs to outputs have many steps. Deep learning is currently the most widely used approach for applications such as visual object recognition, machine translation, speech recognition, speech synthesis and image synthesis
explainability – The ability of an AI system to be comprehended and trusted by humans. Explainable AI allows an understanding of how an AI system has produced a specific output
generative AI – A category of AI that focuses on creating or generating novel content in forms such as images, text, music, video, designs and recommendations. Generative AI systems are designed to produce output that is not explicitly programmed or copied from existing data, but rather is generated based on patterns, structures and examples learned from large datasets during the training process
licensee – A person who holds an Australian financial services licence under section 913B of the Corporations Act 2001 and/or an Australian credit licence under section 35 of the National Consumer Credit Protection Act 2009
machine learning – A branch of AI and computer science that focuses on the development of systems that are able to learn and adapt without following explicit instructions, imitating the way that humans learn and gradually improving their accuracy, by using algorithms and statistical models to analyse and draw inferences from patterns in data
model – A machine-learning or AI algorithm that has been trained to do a particular task
model technique – A simplified way of referring to a model's particular algorithm for performing a certain task, alongside the underlying structure or design of a machine-learning model. Also referred to as a model's 'architecture' in technical terms
natural language processing – A branch of AI with techniques to help computers understand, interpret and manipulate human language
neural networks – Computer models inspired by the human brain's structure. These interconnected artificial neurons, organised in layers, learn from data to make predictions in machine learning, underpinning deep learning
optical character recognition – A process that converts an image of text into a machine-readable text format
supervised learning – A sub-category of machine learning where algorithms learn from labelled data to make predictions or classifications, often with high accuracy
training data – The data used in the first instance to develop a machine-learning model, from which the model creates and refines its rules
transparency – The disclosure provided to people about when they are engaging with an AI system or when AI has been used to make decisions that impact them
unsupervised learning – A sub-category of machine learning where algorithms group data objects based on similarities, without prior category specifications
use case – A model or several models that are applied to a specific context; for example, a logistic regression model applied to predict a customer's likelihood of default

asic.gov.au