AI Governance Global 2024
Training 6-7 June | Conference 4-5 June

EU AI ACT - RISK CLASSIFICATION
Brenda Leong, Partner, Luminos.Law

AGENDA OUTLINE
- Welcome and Introductions
- Background and Definitions
- Risk Categories
- Compliance Obligations
- Examples and Use Cases
- Q&A

BACKGROUND AND DEFINITIONS

CONTEXT
General Principles:
- Be safe, secure and trustworthy
- Support workers
- Advance equity and civil rights
- Protect consumers
- Manage risks
- Be fair, transparent, and accountable
- Promote innovation
Life Cycle Governance:
- Use case selection
- Design and feature selection
- Data and development
- Deployment and monitoring
- Decommissioning

EU AI ACT - BACKGROUND
- Primary objective: ensure AI systems respect EU values and fundamental rights and comply with existing legal standards.
- Focus on human-centric and trustworthy AI, emphasizing the need for AI systems to be secure, transparent and accountable, thus safeguarding citizens' rights and freedoms.
- EU commitment to leading the ethical approach to AI globally.
- Ensuring that AI technologies are not detrimental to public interests.

DEFINITIONS
AI System: The EU AI Act aligns with internationally recognized criteria, following OECD guidelines, and defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." (Art. 3(1))
An AI model is the base program, which must be incorporated with a user interface or other components to become an AI system. Rules apply to models when integrated into an AI system, once on the market.
DEFINITIONS (Draft text)
Foundation models: "developed from algorithms designed to optimize for generality and versatility of output... often trained on a broad range of data sources and large amounts of data to accomplish a wide range of downstream tasks, including some for which they were not specifically developed and trained."
Generative AI: "capable of generating text, images, and other content... development and training require vast amounts of... data." An example of a GPAI (next slide). (Recital 105)

DEFINITIONS
General Purpose AI (GPAI) Models: an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.
General Purpose AI (GPAI) Systems: an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.

DEFINITIONS
General Purpose AI (GPAI) Models:
- Presumed to have the highest impact on society, fundamental rights and values
- Determined based on computing power, expressed in floating point operations (FLOP), with the threshold set at 10^25 (see the rough estimate below)
- The Commission can overrule this standard (designate high risk at a lower compute threshold)
- Does not cover pre-release models (R&D, prototyping)
- Detailed technical documentation for downstream providers; copyright compliance; description of training data
- Exempts open-source models except if they are GPAI with systemic risk
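For context, the 10^25 FLOP presumption can be compared against a back-of-the-envelope estimate of training compute. A minimal sketch in Python, assuming the widely used heuristic that training a dense transformer costs roughly 6 x parameters x training tokens FLOPs; the heuristic and the example model sizes are illustrative assumptions, not part of the Act or any provider's disclosure.

# Rough training-compute estimate against the Act's 10^25 FLOP presumption.
# Assumes the common "6 * N * D" heuristic (N = parameters, D = training tokens);
# the example figures below are hypothetical.

GPAI_SR_THRESHOLD_FLOP = 1e25  # compute presumption stated in the Act

def estimate_training_flop(params: float, tokens: float) -> float:
    """Heuristic estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

examples = {
    "hypothetical 7B-parameter model, 2T tokens": estimate_training_flop(7e9, 2e12),
    "hypothetical 70B-parameter model, 15T tokens": estimate_training_flop(70e9, 15e12),
}

for name, flop in examples.items():
    flag = "above" if flop >= GPAI_SR_THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.2e} FLOP ({flag} the 1e25 presumption)")

Both hypothetical runs land below 1e25, which illustrates that the presumption is aimed at only the most compute-intensive training runs.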
DEFINITIONS
General Purpose AI Models with Systemic Risk (GPAI_SR): a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.
- GPAI requirements plus assessments specific to systemic risks; adequate security
- Providers identify such systems and notify the EU Commission
- The EU Commission can also identify and designate models
- The Commission maintains and publishes an up-to-date list
- Applies to otherwise exempt open-source models
- Technical documentation and evaluation specifics, including system architecture

DEFINITIONS
Model "Providers": a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
A "Provider" has specific obligations: detailed summaries of training data; copyright policies; output identification; submission for evaluation.

RISK CATEGORIES

CATEGORIES
- Unacceptable Risk (URAI) - Prohibited: social scoring; live surveillance; exploitation or manipulation; predictive policing
- High Risk (HRAI) - Conformity assessment: employment, education, access to services, vehicle safety, law enforcement; biometrics; profiling
- Limited Risk - Transparency obligations: impersonation, chatbots, deepfakes
- Minimal Risk - No obligations: all remaining uses (spam filters, inventory management, video games)
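Because every downstream obligation hangs off these four tiers, the categories above can be turned into a simple intake check. A minimal sketch of that idea, assuming a deliberately simplified set of yes/no flags; the enum and function are hypothetical, and real classification requires legal review of the prohibited-practice list and Annexes I/III.

# Simplified triage of an AI use case into the Act's four risk tiers.
# The flags mirror the categories on this slide; real classification
# requires legal analysis, not a three-question checklist.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

def triage(prohibited_practice: bool,
           annex_i_or_iii_use: bool,
           interacts_or_generates: bool) -> RiskTier:
    """Return the most severe applicable tier, checking from worst to best."""
    if prohibited_practice:        # e.g. social scoring, predictive policing
        return RiskTier.UNACCEPTABLE
    if annex_i_or_iii_use:         # e.g. employment, education, biometrics
        return RiskTier.HIGH
    if interacts_or_generates:     # e.g. chatbots, deepfakes
        return RiskTier.LIMITED
    return RiskTier.MINIMAL        # e.g. spam filters, inventory management

print(triage(prohibited_practice=False, annex_i_or_iii_use=True, interacts_or_generates=True))  # RiskTier.HIGH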
UNACCEPTABLE RISK
Prohibited, with some exceptions:
- Subliminal techniques; purposefully manipulating or distorting behavior; impairing the ability to make an informed decision
- Exploiting vulnerabilities of a person or group (age, physical or mental abilities)
- Social credit scoring
- Predictive policing
- Creating facial recognition systems based on scraping facial images
- Emotion recognition systems in work or educational settings
- Biometric categorization systems according to sensitive or protected attributes
- Real-time remote biometric identification systems in publicly accessible spaces (with exceptions)

HIGH RISK
Explicitly identified in the Act:
Annex I - Used as a component in, or is itself, a listed product (extensive list): machinery, toy safety, medical devices
Annex III - Performs functions listed under the use cases:
- Biometric purposes not already prohibited
- Critical infrastructure
- Education or vocational training
- Employment
- Accessing public services
- Law enforcement
- Immigration management
- Administration of justice; democratic processes

HIGH RISK
Adding to Annex III - complex analysis, with exceptions:
A system shall not be considered high risk if it does not pose a "significant risk of harm, to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making" (unless it includes profiling of natural persons), for example where it:
- Performs narrow procedural tasks
- Improves a previous human activity
- Is used to analyze, not replace, human review
- Performs a preparatory task
Does the Provider CONSIDER that the Annex III-covered AI system poses significant risk? If not, register the system and have documentation available upon request.

HIGH RISK
Adding to Annex III based on complex analysis - criteria:
- Intended purpose
- Extent of current or expected use
- Nature and amount of data processed and used
- Special categories of personal data
- Extent to which it acts autonomously; possibility for a human to override a (potentially harmful) decision or recommendation
- Extent to which it has already caused harm (health, safety, rights)

HIGH RISK
Criteria (cont.):
- Potential extent of harm or adverse impact: intensity, affected groups, disproportionate impact
- Extent of personal dependence on the output (ability to opt out)
- Imbalance of power; vulnerability of impacted persons
- Ease of reversibility, including technical solutions
- Magnitude and likelihood of benefits
- Existing measures for prevention or redress

COMPLIANCE OBLIGATIONS

HIGH RISK - COMPLIANCE
Outlined in the Act, then assigned to Providers, Importers, Distributors, etc.:
- Registering and reporting; conformity assessment, common specifications, certificates
- Risk management measures in place (detailed)
- "Serious incident" reporting
- Testing and post-market monitoring: lifecycle risk management
- Foreseeable impacts for intended use and foreseeable misuse
- Impact on children
- Accuracy, robustness, and cybersecurity
- Transparency: machine readable, detectable as generated or manipulated
- Data governance (extensive)
- Technical documentation and record keeping
- Human oversight

HIGH RISK - COMPLIANCE
Subject to strict obligations before being put on the market:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.

GPAI MODELS - COMPLIANCE
Primarily focused on Providers of GPAI models:
- Technical documentation: training, testing
- Information and documentation to downstream providers
- Clear guidance as to copyright implications
- Transparency; embedded safeguards; copyright of training data
- Codes of practice until harmonized standards are available
- Exception for open-source models

GPAI_SR MODELS - COMPLIANCE
Obligations for Providers (in addition to GPAI obligations):
- Perform and document model evaluations before market: adversarial testing, accuracy, mitigations
- Monitor systemic risks and ensure accountability, adequate governance and post-market oversight
- Serious incident reporting; corrective measures (EU and national authorities)
- Cybersecurity

TRANSPARENCY OBLIGATIONS
Limited Risk: AI systems intended to interact directly with natural persons, but obligations also apply to GPAI and generative AI:
- Clear notice to the natural person that they are interacting with AI
- Generated synthetic outputs clearly marked in a machine-readable format and detectable as generated or manipulated (see the sketch after this list)
- Specifically required for deepfakes as well
- Emotion recognition systems and biometric categorization: inform persons and protect/process data per applicable regulations
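One practical reading of the machine-readable marking obligation above is provenance metadata attached to every generated artifact. A minimal sketch, assuming a simple JSON manifest; the field names are hypothetical, the Act does not prescribe a format, and embedded watermarks or content-credential schemes are generally more robust than a detachable manifest.

# Illustrative provenance record marking a generated artifact as AI-generated.
# Field names are hypothetical; the Act does not mandate this structure.

import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(content: bytes, system_name: str) -> str:
    """Return a JSON manifest declaring `content` to be AI-generated."""
    return json.dumps({
        "ai_generated": True,                           # explicit machine-readable flag
        "generator": system_name,                       # which AI system produced the output
        "sha256": hashlib.sha256(content).hexdigest(),  # ties the manifest to the exact artifact
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

print(provenance_manifest(b"example synthetic image bytes", "hypothetical-image-model"))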
EXAMPLES AND USE CASES

HIGH RISK AI PROCESS
[EU Commission image: overview of the high-risk AI process]

AI IN EMPLOYMENT
POSSIBLE HIGH RISK:
- Used in the recruitment or selection process; examples in this context include targeted job ads, analyzing and filtering job applications, and evaluating (ranking/scoring) candidates
- Used to make decisions affecting the terms of work-related relationships, promotion and termination, contractual issues, allocation of tasks based on individual behavior or personal traits or characteristics, and to monitor and evaluate the performance and behavior of persons in such relationships

AI IN EMPLOYMENT
IF HIGH RISK, EMPLOYERS MUST:
- Inform workers' representatives and affected workers that they will be subject to the AI system;
- Implement human oversight by individuals who have adequate competence, training, and authority, as well as the necessary support;
- Monitor use of the system, and if an issue like discrimination arises, immediately suspend use of the system and notify the provider, the importer or distributor, and the "relevant market surveillance authority";
- Maintain the logs automatically generated by the system for an appropriate period, which must be at least six months (see the retention sketch after this list); and
- Comply with any applicable data privacy laws.
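The six-month minimum for automatically generated logs is concrete enough to encode directly in a retention job. A minimal sketch, assuming the deployer stores system logs as dated files in one directory and prunes only files older than the retention window; the directory layout and names are hypothetical, and many teams keep logs well beyond the legal floor.

# Prune automatically generated system logs while preserving at least the
# six-month minimum noted above for high-risk deployments. Layout is hypothetical.

from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION_DAYS = 183  # roughly six months; the legal floor, not a recommended maximum

def prune_logs(log_dir: Path, now: datetime | None = None) -> list[Path]:
    """Delete *.log files older than the retention window and return what was removed."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    removed = []
    for path in log_dir.glob("*.log"):
        modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if modified < cutoff:
            path.unlink()
            removed.append(path)
    return removed

# Example: removed = prune_logs(Path("/var/log/ai-system"))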
AI IN EDUCATION
POSSIBLE HIGH RISK:
- Used to determine access or admission, or to assign people to education or training institutions.
- Intended to be used to evaluate learning outcomes, or for the purpose of assessing the appropriate level of education that individuals will receive or will be able to access.
- Used for monitoring and detecting students who are cheating on tests.

BIOMETRICS
Defined: biometric categorization system; remote biometric identification system; real-time remote biometric ID system; post remote biometric ID system
Prohibited:
- Categorizing people based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
- Untargeted scraping of facial images from the internet or CCTV footage.
High Risk:
- Any system (not prohibited) that does more than identity verification/authentication
- Used for identification or categorisation, or emotion recognition in other contexts
Exceptions:
- Law enforcement
- Solely to enable cybersecurity and personal data protection measures

EXCEPTIONS (Recap)
- Used exclusively for military, defense or national security purposes, regardless of the type of entity carrying out those functions
- AI systems and models, including their output, specifically developed and put into service for the sole purpose of scientific research and development
- Any research, testing and development activity regarding AI systems or models prior to being placed on the market or put into service
- Deployers who are individuals and use the AI systems in a purely personal, non-professional activity
HOWEVER: any AI system will automatically be considered a high-risk AI system if it performs profiling of individuals.

QUESTIONS?

RESOURCE LIST
- EU AI Act: "Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts"
- Draft with all three versions of the EU AI Act in parallel columns, Jun 20, 2023
- European Parliament version, Jun 14, 2023
- Artificial Intelligence Act, Council's General Approach, Nov 25, 2022
- Artificial Intelligence Act, Commission proposal, Apr 21, 2022
- AI - Questions and Answers, European Commission, Dec 12, 2023
- Press release: European Commission welcomes political agreement on AI Act, Dec 9, 2023
- Statement by Commission President von der Leyen on the AI Act, Dec 9, 2023
- Council of the European Union press release, Dec 9, 2023
- European Parliament press release, Dec 9, 2023
- "The EU AI Act Is (Almost) Here. What It Means for Your Business," Goodwin, Feb 14, 2024
- "The EU's Artificial Intelligence Act, explained," Coin Telegraph, Feb 8, 2024