27 November 2024, 9:30-10:30
Session 4: The AI Act Explained
26-27 November, The Egg, Brussels

Agenda
- 09:30-09:40 Introduction with Slido. Moderator: Karen Oldhoven
- 09:40-10:05 Presentation on the AI Act. Speaker: Martin Ulbrich, Senior Expert, CNECT.A.2 - Artificial Intelligence Regulation and Compliance, AI Office
- 10:05-10:25 Open Q&A. Moderator: Karen Oldhoven and speaker: Martin Ulbrich
- 10:25-10:30 Wrap-up and closing. Moderator: Karen Oldhoven

The EU AI Act: a risk-based approach for rules on AI systems
- UNACCEPTABLE RISK: prohibited
- HIGH RISK: permitted, subject to requirements to ensure the AI is safe and trustworthy
- LIMITED RISK: permitted, subject to transparency obligations (e.g. watermarking and labelling)
- MINIMAL RISK: permitted, with no restrictions
The first regulation on AI worldwide. Risk-based approach. Entered into force on 1 August 2024. Complementary to and coherent with the Union rulebook.

Unacceptable risk: a limited set of particularly harmful AI practices are banned
- Social scoring for public and private purposes leading to detrimental or unfavourable treatment
- Biometric categorisation to deduce or infer race, political opinions, religious or philosophical beliefs or sexual orientation, with exceptions for labelling in the area of law enforcement
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions and with prior authorisation by a judicial or independent administrative authority
- Individual predictive policing: assessing or predicting the risk of a natural person committing a criminal offence based solely on profiling, without objective facts
- Emotion recognition in the workplace and education institutions, unless for medical or safety reasons
- Untargeted scraping of the internet or CCTV footage for facial images to build up or expand biometric databases
- Subliminal or manipulative techniques, or the exploitation of vulnerabilities, to manipulate people in harmful ways

High-risk AI systems will have to comply with certain rules
1. High-risk systems embedded in products covered by Annex I
2. High-risk (stand-alone) use cases listed in Annex III:
- Biometrics: remote biometric identification, categorisation, emotion recognition
- Critical infrastructure: e.g. safety components of digital infrastructure, road traffic
- Education: e.g. to evaluate learning outcomes, assign students to educational institutions
- Employment: e.g. to analyse job applications or evaluate candidates, promote or fire workers
- Essential private and public services: determining eligibility for essential public benefits and services; credit scoring and creditworthiness assessment; risk assessment and pricing in health and life insurance
- Law enforcement
- Border management
- Administration of justice and democratic processes

What are the high-risk requirements and obligations?
Providers:
- Requirements for the AI system (e.g. data governance, human oversight, accuracy and robustness), operationalised through harmonised standards
- Conformity assessment before placing the system on the market, and post-market monitoring
- Quality and risk management to minimise the risk for deployers and affected persons
- Registration in the EU database
Deployers:
- Correct deployment, training of employees, use of representative data and keeping of logs
- Possible information obligations vis-a-vis affected persons
- Possible fundamental rights impact assessment (applies only to some deployers, incl. the public sector)
- The public sector also has to register the deployment of high-risk AI in the EU database

Trust through disclosure: addressing transparency risks
When interacting with an AI:
- Humans have to be informed that they are interacting with an AI where this is not obvious
- Deployers have to inform humans when decisions made about them involve the use of an AI system that is high-risk according to Annex III, e.g. in recruitment
AI-generated content:
- AI systems that generate output need to include machine-readable marks
- Labelling of audio and video content that constitutes a deep fake
- Labelling of text that is intended to inform the public on matters of public interest

The EU AI Act: transparency and risk management for powerful AI models
General-purpose AI models = highly capable AI models used as the basis of AI systems such as ChatGPT.
- Transparency obligations for all general-purpose AI models
- Risk management for those with systemic risk
- Codes of practice, developed together with stakeholders, will detail the rules

Rules for general-purpose AI models are enforced at EU level by the AI Office within the Commission; rules for AI systems by national authorities following the market surveillance system.
- AI Board with Member States to coordinate at EU level
- Scientific Panel supports with technical advice
- Advisory Forum supports with input from stakeholders

AI Office structure
- A1 Excellence in AI and Robotics
- A2 AI Regulation and Compliance
- A3 AI Safety
- A4 AI Innovation and Policy Coordination
- A5 AI for Societal Good
- Lead Scientific Advisor; Advisor for International Affairs; Head of AI Office / Director of Directorate A
Kicked off in June 2024. Directorate A of DG CNECT. Established by Commission Decision 2024/390.

AI Act next steps (entry into force: 1 August 2024)
- 2 Feb. 2025: prohibitions
- 2 Aug. 2025: governance and GPAI
- 2 Aug. 2026: all other rules
- 2 Aug. 2027: embedded high-risk AI systems (Annex I)

Commission priorities to support an effective implementation
- Coordinating the drawing-up of the Code of Practice on GPAI: COM coordinates the drawing-up by GPAI developers and other stakeholders; a call for participants and a multi-stakeholder consultation were launched in July 2024
- Contributing to the preparation of standards for the high-risk requirements: COM has mandated CEN/CENELEC to develop standards and actively follows the process; the next step is an amendment of the mandate
- Preparing guidelines, implementing and delegated acts: to guide and detail how the AI Act should be implemented and applied, starting with guidelines on the AI system definition and on the prohibitions
- AI Pact: CNECT launched the AI Pact to support the implementation and foster anticipated application of the AI Act
- Setting up the governance structure: growing the AI Office, setting up the advisory groups and supporting EU Member States in the build-up of national governance systems

Advisory groups foreseen by the AI Act to steer the implementation process
- AI Board: high-level representatives and experts from Member States; coordinates coherent AI Act application across the EU; advises and steers on all matters of AI policy. First meeting took place on 10 September.
- Scientific Panel: independent experts with scientific or technical expertise; advises and supports the enforcement of the AI Act; can issue qualified alerts about possible systemic risks. The set-up process will be launched in Q4/2024.
- Advisory Forum: advises the AI Office and provides stakeholder input; diverse composition, balancing commercial and non-commercial interests. A call for expression of interest will be launched in Q4/2024 or Q1/2025.
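The staged application timeline can be sketched as a small lookup. This is an illustrative sketch only, not an official compliance tool; the dates and labels are taken from the slides, and the function name is my own:

```python
from datetime import date

# Illustrative only: the AI Act's staged application dates from the slides.
# The Act entered into force on 1 August 2024; rules apply in stages.
APPLICATION_DATES = [
    (date(2025, 2, 2), "prohibitions"),
    (date(2025, 8, 2), "governance & GPAI"),
    (date(2026, 8, 2), "all other rules"),
    (date(2027, 8, 2), "embedded high-risk AI systems (Annex I)"),
]

def rules_applying_on(day: date) -> list[str]:
    """Return the rule sets already applicable on a given day."""
    return [label for start, label in APPLICATION_DATES if day >= start]

# Example: in early 2026, the prohibitions and the governance/GPAI rules apply.
print(rules_applying_on(date(2026, 1, 1)))
# prints ['prohibitions', 'governance & GPAI']
```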
Organisation of the AI Board
- Sub-groups, rolled out in three phases (through 2026 and beyond): innovation ecosystem; AI regulatory sandboxes; interplay with the MDR and IVDR; prohibitions; standards; Steering Group on GPAI; AI Act interplay with other Union legislation; Annex III high-risk; law enforcement and security; financial services; market surveillance authorities (AdCo); notifying authorities
- Possibility for ad hoc meetings on specific topics at the suggestion of Member States

Reserve slides

Current AI Act priorities: 3. Preparing Commission guidelines
- Practical guidance on the prohibitions
- Practical guidance on the AI system definition
- To be adopted before the rules start to apply on 2 February 2025
Issues to be covered by the guidelines:
- Rationale of the prohibitions
- Interplay with high-risk AI systems and other Union law
- Definitions
- Enforcement
Guidelines on each individual prohibition:
- Rationale and objectives of the prohibition
- Main components and concepts of the prohibition
- Diverse examples of different individual use cases

Current AI Act priorities: 1. Launch of the General-Purpose AI Code of Practice
- Process to detail out the AI Act rules in a Code of Practice by 2 May 2025
- Open call for participants, with 1000 applicants, and multi-stakeholder consultation, with 430 responses
- Finalised selection of Chairs and Vice-Chairs for the Working Groups from among independent experts
- Plenary and Working Groups kicked off on 30 September; additional workshops with providers and MS authorities

Questions?