Practical steps to comply with the EU AI Act's mandatory requirements: The Governance Program

WELCOME AND INTRODUCTIONS
Oliver Patel, AIGP, CIPP/E
- Enterprise AI Governance Lead, AstraZeneca
- AI Governance Faculty and Research Advisory Board Member, IAPP
- Member, OECD Expert Group on AI Risk and Accountability

SESSION OUTLINE
I. Requirements for high-risk AI systems
II. Transparency obligations for providers and deployers of certain AI systems
III. General-purpose AI models
IV. AI Governance at the EU level
V. Practical steps to prepare for the EU AI Act
VI. Questions and Answers
I. Requirements for high-risk AI systems

Providers
- Technical documentation
- Record keeping
- Data & data governance
- Human oversight
- Quality management system
- Risk management system
- Accuracy, robustness & cybersecurity
- Transparency & Instructions for Use
- Conformity assessment
- Registration of high-risk AI
- Regulatory cooperation
- Authorized representative

Deployers
- Assign human oversight
- Relevant & representative input data
- Monitoring & incident reporting
- Employers must inform workers about high-risk AI use
- Follow Instructions for Use
- Maintain AI system logs & records (see the logging sketch below)
- Notice & disclosure for AI decision-making
- Regulatory cooperation
- Fundamental rights impact assessment
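The monitoring and log-keeping duties above are left to deployers to operationalize: the Act requires deployers to keep the logs under their control (for at least six months), but does not prescribe a log format. The sketch below is one illustrative way a deployer might record each consequential output of a high-risk system; the field names, schema and file layout are assumptions, not anything mandated by the Act.

```python
import json
import time
import uuid

def log_ai_decision(system_id: str, input_summary: str, output_summary: str,
                    human_reviewer: str, log_path: str = "ai_system_logs.jsonl") -> None:
    """Append one structured record for a high-risk AI system output.

    Illustrative only: the EU AI Act does not mandate a specific log schema;
    these fields are assumptions chosen to support later incident reporting
    and regulatory cooperation.
    """
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,            # entry in the organisation's AI inventory
        "input_summary": input_summary,    # what went in (no raw personal data)
        "output_summary": output_summary,  # what the system produced
        "human_reviewer": human_reviewer,  # who exercised human oversight
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a recruiter reviews a CV-screening score before acting on it.
log_ai_decision("hr-cv-screening-v2", "candidate profile 4812",
                "shortlist score 0.81", "j.doe")
```

A plain append-only file is only a stand-in here; in practice the same record structure could feed whatever log store the organisation already uses.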
Fundamental rights impact assessment
- Applies only to high-risk AI systems listed in Annex III (aside from critical infrastructure).
- Must be completed by deployers that are:
  - bodies governed by public law
  - private entities providing public services
  - deployers using AI systems to evaluate creditworthiness and credit scores
  - deployers using AI systems for risk assessment and pricing for health and life insurance

AI value chain
- Any distributor, importer, deployer or other third party shall be considered a provider of a high-risk AI system if:
  - they put their name or trademark on a high-risk AI system already on the market; or
  - they make a substantial modification to a high-risk AI system; or
  - they modify the intended purpose of an AI system which is not high-risk.
- Distributors and importers of high-risk AI systems must verify that a high-risk AI system is in conformity with the AI Act's requirements (i.e., check CE marking).
II. Transparency obligations for providers and deployers of certain AI systems

Scope and applicability
- AI systems which interact with individuals or generate content. Sometimes referred to as limited-risk AI systems.
- These requirements can apply to high-risk AI systems, in addition to all other requirements for high-risk AI systems.
- These separate requirements can also apply to AI systems which are not classified as high-risk.

AI systems which interact directly with people
- Providers must develop the AI system in a way which informs end users that they are interacting with an AI system.
- Exceptions:
  - Unless it is obvious to a reasonably well-informed, observant person.
  - AI systems used by law enforcement to detect, prevent, investigate or prosecute criminal offences.

Using emotion recognition or biometric categorization AI systems
- Deployers must inform individuals about the use of the AI system.
- Exceptions: AI systems used to detect, prevent or investigate criminal offences, subject to appropriate safeguards.

AI systems which generate or manipulate content
- Deployers must disclose that deep fake content (i.e., image, audio, video) is AI generated. Limited exceptions apply.
- Deployers must disclose that AI-generated text, on matters of public interest, is AI generated, unless there is human review.
- Providers must ensure that AI outputs (e.g., audio, image, video or text) are detectable as AI generated.

III. General-purpose AI models

Key concepts
- General-purpose AI models: have the capability to competently perform a wide range of distinct tasks.
- General-purpose AI models with systemic risk: models with high-impact capabilities (i.e., amount of compute used for training, measured in FLOPs; see the worked example below).
- Essential component of an AI system, but not an AI system itself.
- Distinct from the concept of a high-risk AI system.
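For orientation: under the Act, a general-purpose AI model is presumed to have high-impact capabilities, and therefore systemic risk, when the cumulative compute used for its training exceeds 10^25 floating-point operations. The sketch below applies the commonly used "FLOPs ≈ 6 × parameters × training tokens" rule of thumb for dense transformer training (an estimation convention, not a formula from the Act) to a hypothetical model.

```python
# Presumption threshold for systemic-risk GPAI models (Article 51).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: FLOPs ~= 6 * parameters * tokens (approximation)."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")                  # ~8.40e+23
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```

The point of the arithmetic is simply that today's largest frontier training runs sit near or above the threshold, while most other models fall well below it.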
Key requirements for general-purpose AI models
- Providers must maintain extensive technical documentation.
- Put in place a policy to comply with EU copyright law.
- Publish a detailed summary about content used for training data.
- Appoint an authorized representative in the EU (if applicable).

Key requirements for general-purpose AI models with systemic risk (in addition to the above)
- Providers must perform state-of-the-art model evaluation.
- Notify the Commission and assess/mitigate potential systemic risks.
- Track and report on any serious incidents.
- Ensure adequate cybersecurity protection.

N.B. The requirements and categories are not mutually exclusive
- High-risk AI systems
- Limited-risk AI systems*
- General-purpose AI models
An AI system can be both high-risk and subject to the separate transparency requirements, or not high-risk and only subject to the transparency requirements. Similarly, a general-purpose AI model can be part of a high-risk AI system, standalone, or part of an AI system which is not high-risk.
*Transparency-requiring AI systems, which interact directly with individuals or generate/manipulate content.
How the AI Act's requirements will impact AI governance
- Public understanding & awareness
- Ongoing & continuous compliance
- Transparency & disclosure
- Risk admission
- AI liability & litigation

IV. AI governance at the EU level

EU Bodies
- European AI Board
- Advisory Forum
- Scientific Panel of Independent Experts
- National authorities
- AI Office

EU Bodies at the member state level
- Notifying Authority
- Market Surveillance Authority
"Each Member State shall establish or designate as national competent authorities at least one notifying authority and at least one market surveillance authority for the purposes of this Regulation."
V. Practical steps to prepare for the EU AI Act
- 10 key pillars for Enterprise AI Governance
- 5 questions to risk assess AI projects
- 5 questions to ask AI providers
- Top tips for AI governance professionals

10 Key Pillars for Enterprise AI Governance
1. AI Discovery: Inventory and Catalogue (see the sketch below)
2. AI Risk Management Framework
3. AI Governance Policies and Standards
4. Third-Party Risk Management
5. AI Governance Board and Oversight
6. Education, Training and Awareness
7. Metrics, Assurance and Audit
8. Regulatory Monitoring and Preparedness
9. Privacy, Cyber Security and Data Governance
10. Technical Guardrails and Tooling
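The first pillar, an AI inventory and catalogue, underpins all the others: systems cannot be classified, risk assessed or monitored until they have been discovered and recorded. As a minimal sketch of what one inventory entry might capture, the fields below are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One row in an enterprise AI inventory (illustrative fields only)."""
    system_id: str
    name: str
    business_owner: str
    role_under_ai_act: str        # "provider", "deployer", or both
    risk_category: str            # e.g. "high-risk", "transparency-only", "minimal"
    uses_gpai_model: bool         # built on a general-purpose AI model?
    intended_purpose: str
    third_party_vendor: str | None = None
    obligations: list[str] = field(default_factory=list)

# Hypothetical entry for an employment-related system (Annex III use case).
entry = AIInventoryEntry(
    system_id="hr-cv-screening-v2",
    name="CV screening assistant",
    business_owner="HR Operations",
    role_under_ai_act="deployer",
    risk_category="high-risk",
    uses_gpai_model=True,
    intended_purpose="Rank applications for recruiter review",
    third_party_vendor="ExampleVendor Ltd",
    obligations=["human oversight", "log retention", "worker notification"],
)
print(entry.risk_category)
```

Keeping the role (provider vs. deployer) and the risk category on the same record makes it straightforward to derive which obligations from the earlier sections attach to each system.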
5 questions to risk assess AI projects (a minimal intake sketch follows the list)
- What problem are we solving with AI? What is the use case?
- What data has been used for AI training and development?
- What outputs does the AI system generate?
- How are those outputs going to be used?
- What are the implications if the AI outputs are flawed, inaccurate or biased?
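One lightweight way to operationalize these five questions is to treat them as a mandatory intake form and hold projects back from risk review until every answer is on record. The gating rule and form structure below are assumptions for illustration only.

```python
# The five intake questions from the slide, used as a simple gating checklist.
RISK_INTAKE_QUESTIONS = [
    "What problem are we solving with AI? What is the use case?",
    "What data has been used for AI training and development?",
    "What outputs does the AI system generate?",
    "How are those outputs going to be used?",
    "What are the implications if the AI outputs are flawed, inaccurate or biased?",
]

def ready_for_risk_review(answers: dict[str, str]) -> bool:
    """Return True only if every intake question has a non-empty answer."""
    missing = [q for q in RISK_INTAKE_QUESTIONS if not answers.get(q, "").strip()]
    for q in missing:
        print("Unanswered:", q)
    return not missing

# Example: a project that has not yet described its training data is held back.
answers = {q: "draft answer" for q in RISK_INTAKE_QUESTIONS}
answers[RISK_INTAKE_QUESTIONS[1]] = ""
print(ready_for_risk_review(answers))  # False
```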
5 questions to ask AI providers
- How have you mitigated key risks (e.g., bias, accuracy, explainability)?
- Can you provide Instructions for Use and wider technical documentation?
- Will you use our data to train, retrain or improve your models and services?
- Will you provide any indemnification?
- What is your approach to AI governance and EU AI Act compliance?

Top tips for AI governance professionals
- Learn to speak the language of AI and data science
- Be pragmatic, proportionate and risk-based
- Be persistent, patient and politically savvy
Further resources
- Follow Oliver Patel, CIPP/E on LinkedIn for more AI governance and policy content
- NIST AI Risk Management Framework and Playbook resources
- OECD resources: Catalogue of tools and metrics for trustworthy AI; Framework for AI Risk Classification
- IAPP AI Governance Center

QUESTIONS & ANSWERS
Oliver Patel, AIGP, CIPP/E
- Enterprise AI Governance Lead, AstraZeneca
- AI Governance Faculty and Research Advisory Board Member, IAPP
- Member, OECD Expert Group on AI Risk and Accountability

How Did Things Go? (We Really Want To Know)
Did you enjoy this session? Is there any way we could make it better? Let us know by filling out a speaker evaluation.
1. Open the Cvent Events app.
2. Enter IAPP AIGG24 (case and space sensitive) in the search bar.
3. Tap Schedule on the bottom navigation bar.
4. Find this session. Click Rate this Session within the description.
5. Once you've answered all three questions, tap Done.