This study has been prepared as part of data.europa.eu, an initiative of the European Commission. The Publications Office of the European Union is responsible for the management of data.europa.eu contracts. For more information about this paper, please contact the following.

The role of artificial intelligence in processing and generating new data
An exploration of legal and policy challenges in open data ecosystems

European Commission
Directorate-General for Communications Networks, Content and Technology
Unit G.1 Data Policy and Innovation
Email: CNECT-G1@ec.europa.eu

data.europa.eu
Email: info@data.europa.eu

Authors: Hans Graux, Pieter Gryffroy, Magdalena Gad-Nowak, Liesa Boghaert
Last update: July 2024
https://data.europa.eu/

DISCLAIMER
The information and views set out in this publication are those of the author(s) and do not necessarily reflect the official opinion of the Commission. The Commission does not guarantee the accuracy of the data included in this study. Neither the Commission nor any person acting on the Commission's behalf may be held responsible for the use that may be made of the information contained herein.

Luxembourg: Publications Office of the European Union, 2024
© European Union, 2024

The re-use policy of European Commission documents is implemented by Commission Decision 2011/833/EU of 12 December 2011 on the re-use of Commission documents (OJ L 330, 14.12.2011, p. 39, ELI: http://data.europa.eu/eli/dec/2011/833/oj). Unless otherwise noted, the re-use of this document is authorised under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence (https://creativecommons.org/licences/by/4.0/). This means that re-use is allowed provided appropriate credit is given and any changes are indicated.

ISBN: 978-92-78-44246-0
doi: 10.2830/412108
Catalogue number: OA-02-24-797-EN-N
Table of Contents

An introduction to the potential impact of artificial intelligence systems on open data ecosystems
1. Artificial intelligence and open data ecosystems
2. Problem statement and structure of this research paper
3. Working definitions in this paper

Artificial intelligence and fundamental rights
1. Introduction
2. Fundamental rights framework
3. Privacy and data protection as a fundamental right in Europe
   Council of Europe legal framework
   European Union legal framework
   General data protection regulation
   Data subjects' rights
   Other accountability mechanisms
4. Example case: navigating the impact of artificial intelligence in healthcare
   Introduction
   Artificial intelligence in healthcare: a short landscape overview
   Risks associated with the use of artificial intelligence in healthcare
5. Risk mitigating measures: general strategies and approaches
6. Conclusion

Artificial intelligence and intellectual property: the (lack of) creativity of artificial intelligence, and its dependence on pre-existing inputs
1. Artificial intelligence and training data: addressing copyright challenges
2. Can an artificial intelligence be a creator? Dealing with (non-)creative outputs
3. Artificial intelligence outputs and copyright infringement
4. The future of generative artificial intelligence and copyright
5. Conclusions

A legislative attempt to reduce problems: the ambitions of the EU's Artificial Intelligence Act
1. Overview of the origins and principles of the Artificial Intelligence Act
   The Artificial Intelligence Act and its ambitions: context and background
   When will the Artificial Intelligence Act commence (material scope)?
   The Artificial Intelligence Act: a risk-based approach to artificial intelligence
   Regulated roles under the Artificial Intelligence Act (personal scope)
   The Artificial Intelligence Act: territorial scope
   Provider obligations under the Artificial Intelligence Act
   Deep dive into provider obligations for high-risk artificial intelligence systems: data management and data governance
   Deep dive into provider obligations for high-risk artificial intelligence systems: the risk management system
   Deployer obligations under the Artificial Intelligence Act
   Enforcement and fines
2. What does the Artificial Intelligence Act mean in practice for open data ecosystems?
   Understand your project and your role
   Using open data in artificial intelligence applications
   Risk assessment and risk management of open data artificial intelligence use cases
   Timeline of the Artificial Intelligence Act and expectations for the future
3. Conclusion

Overall conclusion on legal challenges in the intersection between artificial intelligence and open data
Bibliography

An introduction to the potential impact of artificial intelligence systems on open data ecosystems

1. Artificial intelligence and open data ecosystems

The general impact of artificial intelligence (AI) systems on businesses, governments and the global economy is currently a hot topic. This isn't surprising, considering that AI is believed to have
the potential to bring about radical, unprecedented changes in the way people live and work. The transformative potential of AI originates to a large extent from its ability to analyse data at scale, and to notice and internalise patterns and correlations in that data that humans (or fully deterministic algorithms) would struggle to identify. In simpler terms: modern AIs flourish especially if they can be trained on large volumes of data, and when they are used in relation to large volumes of data.

A highly visible example of this process is the current popularity of generative AI systems (AISs), which are capable of generating seemingly new texts, images, videos or other data at the user's request. They do so by analysing patterns in large volumes of input data (pre-existing texts, images and videos), from which they then deduce common patterns. Thereafter, based on prompts from the users, they can generate new outputs that reproduce the characteristics of the input data. Generative AI chat systems have been broadly taken up by the market and allow fast text responses to be generated that can easily be mistaken for qualified human answers. Comparable systems exist for image and video outputs.

Because of these characteristics, there is an inherent close connection between AI and open data. Compared to other computing techniques, AIs have a remarkable ability to extract insights from large datasets and to produce useful new outputs; but to make them work effectively, substantial sets of accessible data, to be used as training material, are essential. The accessibility and free use of large volumes of data are two of the main characteristics of open data. In other words: open data ecosystems can become, and may already be, the source material that high-performance AIs need.

For AI systems to function properly, the following three critical factors, known as the three Vs, are necessary.

- Data volume: AI requires significant amounts of data to be trained on.
- Data variety: diverse data sources enhance AI capabilities and reduce the risk of biases.
- Data veracity: bad training data will result in bad performance, so data truthfulness is crucial. Reliable sources play a role in determining data quality.

Open data can help to satisfy these preconditions. While none of the three Vs are inherently present in every single open dataset, the breadth of data will help to satisfy the volume and variety requirements. Moreover, in the European open data community, the reliability of data sources will help to satisfy the veracity requirement. In summary, open data ecosystems have the potential to help construct reliable AIs by providing a repository of usable training data; and inversely, the open data community can benefit from AIs by using them as a tool to trawl through large datasets and obtain insights that would otherwise not be readily apparent. In this way, the combination of AI and open data has the potential to revolutionise data ecosystems, enabling innovation and facilitating informed decision-making.

2. Problem statement and structure of this research paper

Despite these clear potential benefits, AIs can also be a source of new challenges from a legal and policy perspective. Problems can present themselves on both the input side (how AIs are created
and trained) and the output side (how they are brought to market and how their impacts can be managed and controlled).

On the input side, there are many legal concerns in relation to how AIs obtain access to training materials and whether their use of that training material is lawful. When the training materials consist of human-made creative works, they are likely to be subject to intellectual property rights, including particularly copyright protection. In this case, the question might reasonably be raised as to whether and to what extent the use of copyright-protected material is lawful in the absence of any consent or licence from the copyright holder. Will an AI respect open data licences? Would it need to? A comparable problem presents itself with respect to fundamental rights in general and the right to data protection in particular: when an AI is trained on data that contains personal data (i.e. information that can be linked to a specific natural person), is this lawful under European data protection legislation? What would be the legal basis, and how can the principles of data protection law be observed when training the AI and when allowing it to be used? Similarly, there are questions of product liability and product quality: who is ultimately responsible for ensuring that an AI is trustworthy, and what does trustworthiness actually imply in general purpose AIs that have no explicitly defined usage limitations? How can risks of a particular AIS be identified and managed?

From the output side, the same topics can be examined from a different angle. Is an AI capable of producing original works that are subject to intellectual property rights protections, given that those new works are not created by a human being and that they are generated by introducing prompts to the AI, which will then try to recall and combine patterns from pre-existing works? Equally importantly, how can the outputs of AIs be used in a manner that is fully respectful of the EU's fundamental rights framework, given that AIs can also be used in very sensitive contexts, such as healthcare (e.g. the identification of tumours) or public administration (e.g. the detection of fraud in relation to public resources)? Who is ultimately responsible in the event of failures, when it may be complex to determine whether the problem lay with training data, the AI algorithm itself, the context in which it was used or a lack of diligence in the individual user? And what are the legal requirements for bringing an AI product to market, or for using it in a particular company or public administration?

There is thus a plethora of legal and policy questions for which there is not always a clearly defined answer yet. Part of the solution, as will be extensively discussed in this paper, may come from the EU's proposed AI Act, which was approved by the European Parliament on 13 March 2024. The act is still undergoing final checks and is expected to be adopted and published before the end of the current EU legislature.

The objective of this paper is to provide an overview of some of the main legal questions and currently available answers, building on a webinar series organised by the official portal for European data (data.europa.eu). The webinars focused on three topics in particular, which will also be examined in detail in this paper:

- data ownership, data use and legal insights in relation to intellectual property rights;
- fundamental rights, ethics and data protection;
- the regulatory approach of the EU in the emerging AI Act (AIA).

It goes without saying that neither the webinars nor this paper were exhaustive and other legal topics
could still be examined in greater detail. The objective is, however, not comprehensiveness, but rather to obtain an accurate and representative overview of some of the main legal and policy challenges today.

This legal research paper is intended as a resource for data policymakers, AI companies and the general public. Policymakers can get a better understanding of the risks and opportunities in AI usage, and which legal risks and constraints to take into consideration. AI companies can get insights into the legal concerns and constraints surrounding AI, including their use of training data and requirements for bringing their products to market. The general public can learn how AI already affects them, and what their protection mechanisms are, under current and future law (such as the AIA).

3. Working definitions in this paper

This paper relies on a few important concepts that don't always have a clear or universally accepted meaning. To minimise misinterpretation, the following working definitions are used, which were based on the most current version of the proposed AIA.

Artificial intelligence system: A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

Training data: Data used for training an AIS through fitting its learnable parameters.

General purpose AI model: An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market.

General purpose AI system: An AIS based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration into other AISs.

Artificial intelligence and fundamental rights

1. Introduction

AISs have undeniably ushered in a new era, reshaping the fabric of our society with their transformative capabilities. From enhancing economic efficiencies through streamlined processes and reduced costs to enabling breakthroughs in research, facilitating autonomous transportation and powering smart
home appliances, the breadth of opportunities presented by AI-based technologies is boundless. Indeed, these innovations stand as a beacon of hope, offering invaluable assistance in addressing some of the most pressing challenges of our time. However, amidst their promise lies a crucial caveat: the potential for significant, and sometimes catastrophic, impacts on both individual rights and societal well-being, if deployed without due consideration for fundamental human rights.

With their ability to amass vast troves of personal data, AISs may have a significant impact on individual rights. These impacts encompass various areas of concern, including personal autonomy, freedom of expression and the prevention of discrimination. Among the myriad impacts of AI, privacy and data protection emerge as the twin pillars most prone to being affected by AI's technological advancements. As we delve deeper into the intricacies of personal data processing by AISs, it becomes increasingly imperative to establish a comprehensive understanding of the broader legal framework governing data protection within the European Union. This foundation is crucial for understanding the detailed complexities and potential risks involved when AI intersects with fundamental rights.

2. Fundamental rights framework

Fundamental rights represent a set of inherent and legally protected human entitlements essential for upholding dignity, equality and freedom. Within the European context, fundamental rights encompass a broad spectrum of civil, political, economic and social dimensions. These rights guarantee various aspects of human existence, including the right to life and integrity, liberty and security, privacy, freedom of expression and religion, education, non-discrimination and equality before the law. They serve as the bedrock of democratic societies, ensuring that individuals can live with autonomy and respect for their human dignity.

The fundamental rights framework in Europe is underpinned by several key elements. At its core lies the Charter of Fundamental Rights of the European Union (the charter), which codifies the extensive array of rights and freedoms guaranteed to all individuals within the European Union. The charter, along with the European Convention on Human Rights, holds significant legal weight and serves as the primary source of fundamental rights law and policy within the EU. This framework additionally draws strength from international human rights instruments, such as the Universal Declaration of Human Rights (1948) and major UN human rights conventions, which provide further guidance and standards.

While AI can impinge upon various fundamental rights (such as individual personal autonomy or the right to be free from discrimination), the salience of its threats to privacy and personal data emerges notably due to AI's heavy reliance on data. In Section 3, we provide a general background on the legal framework for data protection in the European Union. A basic understanding of this framework is crucial for understanding the interplay between the application of AI and the fundamental right to privacy and the protection of personal data.

3. Privacy and data protection as a fundamental right in Europe

Throughout history, various civilisations have recognised the importance of personal privacy and data protection. Over centuries, societies have developed increasingly sophisticated understandings of privacy and data protection, reflecting evolving cultural norms and technological advancements. Ancient civilisations such
as the Roman Empire had laws protecting the confidentiality of correspondence, emphasising the value of private communication. Similarly, the Magna Carta, signed in 1215, established principles of individual rights and liberties, laying the groundwork for modern concepts of privacy and data protection. During the Enlightenment period, thinkers such as John Locke and Jean-Jacques Rousseau emphasised the importance of individual autonomy and the right to privacy in their philosophical writings. These ideas influenced the drafting of modern legal frameworks, including the United States Constitution's Fourth Amendment, which protects against unreasonable searches and seizures. In the 20th century, the horrors of totalitarian regimes underscored the critical need for safeguards against government intrusion into personal lives, leading to the inclusion of privacy protections in international human rights instruments such as the Universal Declaration of Human Rights. These historical precedents demonstrate the enduring significance of privacy as a fundamental human right across different cultures and epochs. In the digital age, with the proliferation of data-driven technologies, concerns about privacy and data protection have become more pronounced, prompting legislative efforts worldwide to safeguard individuals' rights in an increasingly interconnected and data-centric world.

Throughout European history, personal data and privacy have been regarded as inherent rights, deeply ingrained in the fabric of society. These principles find expression in two complementary systems of fundamental rights protection: the Council of Europe's European Convention on Human Rights, and the Charter of Fundamental Rights of the European Union and EU treaties.

Council of Europe legal framework

Although the right to privacy is not explicitly delineated as a standalone right within the European Convention on Human Rights (ECHR), its protection is enshrined in Article 8(1). This provision safeguards everyone's entitlement to respect for their private and family life, their home and their correspondence. Any governmental interference with these rights must be justified and proportionate. Given the expansive scope of personal data processing nowadays, it often intersects with an individual's right to privacy as articulated in Article 8(1) of the ECHR.

Additionally, the Council of Europe took a landmark step in 1981 by ratifying the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, known as Convention 108. This seminal agreement, updated in 2018, serves as a cornerstone of data protection in Europe. Convention 108 aims to uphold individuals' rights and fundamental freedoms, with a particular emphasis on the right to privacy, in the context of automated processing of personal data. By addressing the challenges posed by technological advancements, Convention 108 reinforces the Council of Europe's commitment to safeguarding privacy rights in an increasingly digitised world.

European Union legal framework

Within the European Union's legal framework, an extensive array of primary and secondary norms plays a pivotal role in safeguarding personal information. Among the primary norms, those enshrined in the charter hold particular importance. This foundational document allocates considerable attention to the subject matter, with two dedicated articles. Article 7 of the charter underscores the importance of respecting private and family life, along with the sanctity of communications and the home environment. Complementing this, Article 8 serves as a robust safeguard, offering explicit protection for personal data, a notable distinction from the ECHR, which lacks a dedicated article on data protection. It is noteworthy that Article 52(3) of the charter aims to establish coherence between the ECHR and the charter itself, specifying that when rights in the charter align with those protected by the ECHR, their interpretation and extent mirror those of the latter.

Expanding upon primary legislation, the European Union bolsters the protection of personal data through secondary legislation. The journey in EU data law began in 1995 with the adoption of Directive 95/46/EC, known as the data protection directive. This directive laid the groundwork for subsequent legislation, including Directive 2002/58/EC, commonly known as the e-privacy directive, which addresses personal data processing and privacy protection in the electronic communications sector. Notably, in 2016, the EU adopted the well-known general data protection regulation (GDPR), a landmark development in data protection law, which entered into full application as of May 2018. National laws of EU Member States (MSs) further complement this framework, ensuring that fundamental rights are upheld and respected at both European and domestic levels. Together, these elements form a comprehensive framework designed to protect the rights and dignity of individuals within the European Union and beyond.

General data protection regulation

The GDPR represents a significant milestone in data protection regulation, setting a new standard for privacy rights and accountability in the digital age. It stands as the most comprehensive and detailed framework to date, governing the collection, storage and processing of personal data. At its core, the GDPR establishes stringent obligations for entities that determine the purposes and means of data processing (data controllers) and for entities that process personal data on their behalf (processors), while simultaneously bestowing specific rights upon individuals, known as data subjects. By establishing clear rules and robust safeguards, it aims to foster trust and confidence in the handling of personal data, ultimately enhancing privacy and data protection for individuals within the European Union.

Prior to the GDPR, data protection laws within the European Union were fragmented and varied across MSs, resulting in inconsistencies and gaps in protection. The GDPR sought to harmonise these laws and enhance privacy rights for individuals throughout the EU. Its overarching objective was to empower individuals to have greater control over their personal data and to ensure that organisations handling such data did so responsibly and transparently.

One of the defining characteristics of the GDPR is its extraterritorial scope, which means that it applies not only to organisations operating within the EU but also to those outside the EU that process data of EU residents. This extended reach ensures that the protection of personal data is not confined by geographical boundaries, reflecting the global nature of data flows in the digital age.

Given the heavy reliance of AI on data, much of which may be personal in nature, it becomes imperative for developers and deployers of AISs to adhere strictly to the regulations and obligations stipulated by the GDPR. Personal data are useful at various stages of AIS development, including the training, testing and validation of AI models. During deployment, personal data can serve as input for predictions concerning individuals. Furthermore, outcomes produced by AISs may themselves qualify as personal data, as seen in scenarios like the derivation of an individual's risk score for developing a particular disease based on medical history, lifestyle patterns and genetic predispositions. Moreover, certain AI models may inherently consist of personal data, rendering such data indispensable for their effective functioning (e.g. in facial recognition systems, the AI model is built upon vast datasets containing images of individuals' faces; without access to such personal data, the AI model lacks the necessary foundation to perform its intended function effectively). Therefore, ensuring compliance with the GDPR becomes paramount not only in the handling of personal data used as input or generated as output, but also in the fundamental design and structure of AISs where personal data form an intrinsic part thereof.

The GDPR applies uniformly to all methods of processing personal data. However, the intricate operations inherent to AISs introduce unique complexities. While the GDPR provides a comprehensive framework for safeguarding personal data, the dynamic and evolving nature of AI development presents distinct challenges in upholding its principles. Therefore, a thorough examination of these principles is essential to understand the complexities and hurdles faced by AI developers in ensuring compliance within this rapidly evolving landscape. This section delves deeper into the GDPR's fundamental principles and explains the difficulties of adhering to them within the dynamic and challenging realm of AI.

GDPR principles

The principles relating to the processing of personal data are enumerated in Article 5 of the GDPR, and are explained below.

Lawfulness, fairness and transparency

The principle in general

Enshrined in Article 5(1), point (a), of the GDPR, this principle stipulates that data processing must be lawful, fair and transparent to the data subject. Firstly, organisations must have a legitimate basis for processing personal data. Processing is lawful only when carried out under one or more of the legitimate grounds enumerated in Article 6(1) of the GDPR:

- the data subject's consent;
- the necessity to enter into or perform a contract;
- the need to comply with a legal obligation;
- the protection of vital interests of the data subject;
- the performance of a task carried out in the public interest or the exercise of official authority;
- the legitimate interest of the controller or a third party.

Although all of the six items listed provide a valid legal ground for data processing, the two most frequently relied on are the first and the last ones (i.e. consent and legitimate interest).

The second fundamental aspect of the principle under consideration pertains
95、to the obligation of controllers to transparently communicate to individuals the manner in which their data are used,commonly referred to as data processing.Controllers are mandated to uphold transparency and integrity in their dealings with data subjects,refraining from any form of misinformation o
96、r deception.They are entrusted with the responsibility to furnish data subjects with comprehensive details in accordance with Articles 12 and 13 of the GDPR.This entails disclosing the purpose of data processing,the duration of storage,the rights afforded to the data subjects,the categories of perso
97、nal data involved,the origins of collected data if derived from external sources,the presence of automated decision-making processes,including profiling,alongside providing substantive insights into the underlying rationale,significance and anticipated implications for the individuals concerned.Such
98、 transparency not only fosters trust between controllers and data subjects but also ensures compliance with regulatory frameworks,thereby safeguarding individual privacy rights and promoting ethical data handling practices.The principle in the context of artificial intelligence In the context of AI
99、development,adhering to the principle of lawfulness,fairness and transparency presents significant challenges.Firstly,the complexity of AI algorithms and their reliance on vast datasets make it difficult to ensure the legality and fairness of data processing activities.AISs may inadvertently generat
100、e biased outcomes or make decisions based on incomplete or biased data,leading to unfair treatment of individuals.Additionally,the opacity of AI algorithms poses challenges to transparency,as understanding how AISs operate can be difficult(the black-box phenomenon).In many instances,individuals may
find themselves subjected to decisions made by AISs without a clear understanding of how or why those decisions were reached. This lack of transparency not only undermines accountability but also limits individuals' capacity to challenge or contest such decisions. The right to challenge decisions made by AISs is integral to safeguarding fundamental rights, yet it becomes increasingly elusive in the absence of transparent data processing practices. Moreover, the lack of transparency in data processing can also impede an AI developer's ability to rely on certain legal grounds for processing. For instance, it may become challenging to obtain informed consent from data subjects when the processing activities within a given AIS are complex and the underlying logic is difficult to explain. In such scenarios, AI developers may be compelled to resort to alternative, albeit less certain, legal grounds for processing, such as legitimate interest. This underscores the complexity surrounding data processing in AISs and the importance of transparency in enabling individuals to make informed decisions about their data. This lack of transparency not only undermines trust but also obstructs individuals' capacity to exercise their rights under the GDPR, including the right to access and rectify their personal data (discussed further below). Moreover, the swift advancement and widespread adoption of AI technologies often outstrip the development of regulatory frameworks, creating a formidable challenge for organisations seeking to maintain compliance with evolving legal standards and uphold the fundamental principle of lawfulness. Reconciling the imperatives driving AI innovation with the imperative to safeguard data protection principles demands continuous vigilance and proactive measures. Striking a delicate balance between technological progress and regulatory compliance necessitates concerted and sustained efforts to confront and resolve these inherent challenges.

Purpose limitation

The principle in general
One of the key tenets of the GDPR, the principle of purpose limitation, embodied in Article 5(1), point (b), of the regulation, stipulates that personal data should only be collected for specified, explicit and legitimate purposes, and prohibits further processing of personal data for purposes that are incompatible with those that led to the initial data collection. By the same token, controllers shall refrain from collecting any personal data that are unnecessary, inadequate or irrelevant for these specified purposes. While subsequent processing for different purposes is not inherently prohibited, repurposing collected data is only permissible if the further processing aligns with the original purpose for which the data were initially collected (in which case no legal basis separate from that which allowed the initial collection of the personal data is required). For instance, Article 5(1), point (b), of the GDPR allows further processing of personal data for archival, historical research or statistical purposes, presuming compatibility with the original purpose. In order to assess whether the purpose of further processing is compatible with the purpose for which the personal data were initially collected, the controller should carry out a formal compatibility assessment of the intended further processing activity. This compatibility test should take into account several factors, such as: any link between the original purpose and the purpose of the intended further processing; the context in which the personal data have been collected, in particular the reasonable expectations of data subjects
as to the further use of their data, based on their relationship with the controller; the nature of the personal data, in particular whether special categories of personal data are being processed; the consequences of the intended further processing for data subjects; and the existence of appropriate safeguards in both the original and intended further processing operations. Additionally, following a positive outcome of the compatibility assessment, the controller, prior to initiating the intended further processing, may be required to inform the data subject about the intended further processing activity, as the application of the principles set out in the GDPR (in particular the information of the data subject on those other purposes and on his or her rights, including the right to object) should be ensured. There are two exceptions to this general prohibition on further processing for non-compatible purposes, namely where further processing is based on the data subject's consent or where it is based on an EU or Member State law to which the data controller is subject. In these two cases, further processing is allowed under the GDPR irrespective of purpose compatibility (in other words, the controller is presumed to be allowed to further process the personal data irrespective of the compatibility of the purposes).

The principle in the context of artificial intelligence
In the realm of AI, adhering to the principle of purpose limitation poses significant challenges for controllers. Defining the potential uses of collected data upfront is often exceedingly difficult, as processing purposes can remain ambiguous during the initial stages of data collection. Consequently, it has become commonplace in AI development to repurpose data at later stages. AI models, initially trained for specific purposes, often uncover unforeseen correlations within datasets, leading to a complete shift in their intended use. Thus, requiring AI developers to predetermine data collection purposes before processing begins could simply stifle innovation.

Data minimisation

The principle in general
Embodied in Article 5(1), point (c), of the GDPR, this principle seeks to restrict the indiscriminate collection of personal data. It mandates that only the minimal amount of personal data necessary for the intended purpose should be processed. Controllers are obliged to abstain from gathering data that are not directly and strictly relevant to the specified purpose, or more than is necessary.

The principle in the context of artificial intelligence
Observing the principle of data minimisation can be challenging for AI developers for several reasons. Firstly, this principle clashes with the very nature of AI-based technologies, which rely on the accumulation and analysis of massive amounts of data to function effectively. The basic functioning of AI models is grounded in their ability to learn from data, to draw inferences and to uncover correlations between various datasets. By definition, AI models require large datasets to effectively learn and generalise patterns. After all, the more data the AIS ingests, the more accurate its calculations and predictions will be. Additionally, the complexity and interconnectedness of AI algorithms may make it difficult to identify which specific data points are truly essential for achieving the desired outcomes. Consequently, developers of AISs may feel tempted to collect excessive amounts of data (including personal data) to enhance the accuracy of their AISs. Moreover, the lack of clear guidelines or standards for determining data relevance and necessity in AI development further complicates adherence to the data minimisation principle.

Accuracy

The principle in general
Outlined in Article 5(1), point (d), of the GDPR, this principle requires that personal data be accurate and kept up to date at all times. Controllers are tasked with the responsibility of taking reasonable steps to ensure the accuracy of the data they process. They must regularly review personal data and promptly rectify or erase any inaccuracies, as processing inaccurate data may result in adverse consequences for the data subjects.

The principle in the context of artificial intelligence
Observing
the principle of accuracy of personal data in the context of AI poses notable challenges for developers for many reasons. Firstly, AI algorithms often rely on vast and diverse datasets to train and refine their models, making it difficult to ensure the accuracy of every data point. AISs feed on data from various sources; however, the more diverse the sources, the higher the likelihood of encountering inaccuracies. Additionally, AISs may encounter issues with data quality, including errors, biases and inconsistencies, which can compromise the accuracy of the resulting insights and predictions. While some level of inaccuracy in the data used as input or produced as output of AI models is accepted (as they aim to discover general tendencies or trends), such inaccuracies may harm individuals when they are used to create profiles or deliver inferences about those individuals. Moreover, AI algorithms may uncover unexpected correlations or patterns in data that challenge conventional notions of accuracy, requiring careful interpretation and validation by human experts. Furthermore, the dynamic nature of data in AI applications, with continuous updates and changes, presents ongoing challenges in
maintaining data accuracy over time. Last but not least, given the prevalence of cyber threats, there is a significant risk of malicious actors targeting the AIS and tampering with the data used to train the AI model, potentially leading to inaccurate outputs.

Storage limitation

The principle in general
This principle, outlined in Article 5(1), point (e), of the GDPR, emphasises that personal data should only be retained in a manner that allows for the identification of data subjects for as long as necessary to fulfil the purposes for which the data were collected. Put simply, controllers must ensure that the duration of data retention aligns proportionately with the original objectives of data collection and is limited in time. Extending data storage beyond this period may be permissible solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, provided that appropriate safeguards are in place.

The principle in the context of artificial intelligence
Observing the principle of storage limitation, as outlined in the GDPR, presents notable challenges in the development of AI-based technologies. Firstly, as already noted, AISs often require vast amounts of data to train and refine their models, which inevitably leads to concerns about the storage of personal data beyond what is strictly necessary for the intended purposes. For example, AI-powered applications in healthcare may accumulate extensive patient data for predictive analytics, leading to questions about the retention period for historical medical records. The dynamic and iterative nature of AI development further complicates adherence to storage limitations, as the ongoing refinement of algorithms may necessitate the retention of historical data for continuous improvement.
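As an illustration of how retention can be bound to the purpose recorded at collection time, the following minimal Python sketch flags training records held beyond a purpose-specific retention period. The purposes and durations used here are hypothetical assumptions for illustration only, not values prescribed by the GDPR.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention periods per processing purpose (illustrative
# assumptions; actual periods must follow the controller's own policy).
RETENTION = {
    "model_training": timedelta(days=365),
    "predictive_analytics": timedelta(days=730),
}

@dataclass
class Record:
    subject_id: str
    purpose: str
    collected_on: date

def records_due_for_erasure(records, today):
    """Return records held longer than the retention period for their purpose."""
    return [r for r in records
            if today - r.collected_on > RETENTION[r.purpose]]

records = [
    Record("p1", "model_training", date(2022, 1, 1)),
    Record("p2", "model_training", date(2024, 6, 1)),
]
due = records_due_for_erasure(records, today=date(2024, 7, 1))
# p1 exceeds the one-year training-data retention period; p2 does not.
```

In practice such a check would feed a deletion or anonymisation pipeline rather than merely list records; the point of the sketch is that a retention decision can be made mechanically only if the purpose and collection date are recorded alongside the data.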
Collaborative research and development efforts in AI often involve data sharing among multiple stakeholders, resulting in the accumulation of extensive datasets across various platforms and organisations. This raises questions about the appropriate storage duration and scope, particularly in cross-border collaborations where differing regulatory requirements may apply. Furthermore, the potential for unintended data retention in AISs, such as cached or redundant data stored in memory or temporary storage, poses challenges in ensuring compliance with storage limitation requirements.

Integrity and confidentiality

The principle in general
Enshrined in Article 5(1), point (f), of the GDPR, this principle mandates that personal data must undergo processing in a manner that guarantees the security of the information. This entails safeguarding against unauthorised or unlawful disclosure of or access to processed personal data (the confidentiality aspect) and protecting against accidental or unlawful alteration of or damage to personal data (the integrity aspect). Additionally, measures must be in place to prevent unintentional or unlawful loss of access to or destruction of personal data (the availability aspect). Among the most crucial techniques to ensure a high level of security are the encryption and pseudonymisation of personal data.

The principle in the context of artificial intelligence
Ensuring adherence to the principles of integrity and data confidentiality presents considerable hurdles during the
development of AI technologies. Firstly, AISs frequently operate using extensive datasets comprising sensitive personal data, heightening the vulnerability to unauthorised access, disclosure or alteration of information. For instance, AI applications used in the health sector or in the financial sector may handle confidential financial data, necessitating stringent measures to uphold the confidentiality and integrity of such data to prevent unauthorised access or tampering. Moreover, the interconnected nature of AISs, often reliant on shared data sources and collaborative training processes, further complicates the preservation of data integrity and confidentiality. Collaborative AI development initiatives, involving multiple stakeholders and data-sharing arrangements (such as cross-border research and innovation projects), may introduce weaknesses in data security and confidentiality, particularly when exchanging sensitive information across organisations and borders. Furthermore, the intricate nature of AI algorithms and their susceptibility to adversarial attacks amplify the challenges of safeguarding data integrity and confidentiality. These attacks, including techniques such as data poisoning and model inversion, exploit AIS vulnerabilities to compromise data integrity or expose confidential information. Addressing these challenges demands the robust implementation of data encryption, access controls and security protocols throughout the AI development life cycle to ensure the integrity and confidentiality of data.

Accountability

The principle in general
Outlined in Article 5(2) of the GDPR, this principle mandates that controllers are responsible for demonstrating compliance with the GDPR's principles and for implementing appropriate measures to ensure compliance. These measures include in particular data protection impact assessments and maintaining detailed records of processing activities, and will be addressed in more detail below.

The principle in the context of artificial intelligence
While Article 5(2) of the GDPR places the onus on controllers to demonstrate compliance with the GDPR's
principles and to implement appropriate measures to ensure adherence, the dynamic nature of AISs substantially complicates those accountability efforts. Firstly, the intricate algorithms and machine learning processes inherent in AISs often result in complex decision-making processes that are difficult to trace or explain. This opacity can hinder controllers' ability to fully understand and document the underlying mechanisms behind AI-driven decisions, thus impeding their ability to demonstrate compliance. Additionally, the sheer volume and variety of data processed by AISs pose challenges in conducting comprehensive data protection impact assessments. AI algorithms may ingest vast amounts of data from diverse sources, making it challenging for controllers to assess the potential risks to individuals' privacy and ensure compliance with GDPR requirements. Moreover, the evolving nature of AI technologies introduces uncertainty regarding the adequacy of existing accountability measures. As AISs evolve and adapt over time, controllers must continuously reassess and update their compliance strategies to effectively mitigate risks and ensure accountability. Furthermore, the collaborative nature of AI development, involving multiple stakeholders and data-sharing agreements, further complicates accountability efforts. Ensuring accountability among the various stakeholders and organisations in different countries involved in AI development requires robust governance structures and a clear delineation of responsibilities. As demonstrated above, upholding the fundamental principles of the GDPR may prove challenging in the realm of AI development. Addressing these challenges necessitates proactive efforts from AI providers throughout the AI development life cycle. Only through such concerted efforts can
organisations effectively navigate the complexities of AI development while upholding the principles of data processing outlined in the GDPR.

Data subjects' rights

In addition to the fundamental principles of data processing, the GDPR grants data subjects a range of rights to empower them in relation to their personal data. These rights, which can be invoked by data subjects whose personal data are processed in the context of AI development and deployment, are discussed below.

Rights in relation to automated decision-making and profiling
The GDPR includes safeguards aimed at mitigating the risks associated with automated decision-making and profiling. These are especially meaningful in the context of AI, given how many decisions and actions are nowadays executed without human intervention and facilitated by AISs. Article 22 of the GDPR explicitly grants individuals the right not to be subject to decisions made solely through automated processes if such decisions have legal implications or similarly significantly affect them. This provision acknowledges the potential consequences of algorithmic decision-making on individuals' rights and seeks to ensure accountability and transparency in automated processes. With the increasing reliance on AISs to make critical decisions in various domains, such as finance, healthcare and employment, the protection afforded by this right becomes increasingly important. It underscores the need for AISs to operate ethically and transparently,
with mechanisms in place that allow individuals to challenge automated decisions and understand the rationale behind them. Furthermore, the right not to be subject to automated decision-making underscores the importance of human oversight and accountability in the development and deployment of AI technologies. While AISs can offer efficiency and innovation, they must also respect individuals' rights and ensure fair and equitable treatment for all. Therefore, AI developers must implement robust mechanisms for oversight, accountability and transparency to uphold individuals' rights and prevent potential harms arising from automated decision-making and profiling.

Right to access
Individuals have the right to obtain confirmation as to whether or not their personal data are being processed and, if so, to access those data and information about how they are being processed. In the context of AI, this right takes
on added significance and complexity. Data subjects have the right to obtain confirmation from controllers as to whether their personal data are being processed and, if so, to access those data and relevant information about the processing. However, in the realm of AI, accessing personal data may not always be straightforward, due to the intricate nature of AI algorithms and the vast amounts of data processed. AISs often operate on extensive datasets, with the personal data used by them dispersed across multiple platforms, databases or organisations. Consolidating and accessing this fragmented data can be complex, especially when data interoperability issues or data silos exist. Furthermore, controllers who are AI developers may face resource constraints or technical limitations when responding to data access requests. Processing large volumes of data to respond to such requests may require significant time, resources and expertise. Therefore, ensuring effective access to personal data in the context of AI requires controllers to implement transparent and user-friendly mechanisms that enable data subjects to understand and exercise their rights effectively.

Right to rectification
Linked to the controller's obligation to maintain accurate and up-to-date data, the right to rectification empowers data subjects to request the correction (i.e. rectification or completion) of inaccurate or incomplete personal data held by controllers. This right remains relevant across all stages of the AIS life cycle. For instance, during the development phase, data subjects can seek the correction of their information contained in the training dataset. Similarly, during the deployment phase, they may contest the accuracy of the outputs generated by the AISs. The predictions and inferences generated by AISs often involve personal data as defined in Article 4(1) of the GDPR. This includes both direct identifiers, such as names and addresses, and indirect identifiers, or information that, when combined with other data, can identify an individual. However, rectifying the output of an AIS can be challenging, as it primarily comprises statistical predictions rather than factual statements (even though the outputs may often be presented or interpreted as factual statements). Prediction scores are not inherently inaccurate merely because the factual reality does not match the prediction (e.g. a 99.5 % chance of a cancer being present can be a reasonable and correct estimate, even if no cancer is detected afterwards); therefore, depending on the context and the presentation, the right to rectification may not apply if the personal data are not factually incorrect.

Right to erasure/right to be forgotten
The GDPR, in Article 17, grants data subjects the right to request the erasure of their personal data under certain circumstances. When a data subject exercises this right, the controller is obligated not only to delete the data that they have processed directly but also to notify all other known recipients with whom
they shared the data about the data subject's request. This right can only be exercised in certain limited instances, for example when the data are no longer necessary for the purposes for which they were collected or if the processing is unlawful. It can also be exercised by data subjects who object to the processing of their data and for whom the controller cannot demonstrate other overriding legitimate grounds for further processing. Exercising this right within the realm of AI might be a tough nut to crack. AISs often incorporate vast amounts of data from diverse sources located in various locations. Data are usually replicated across multiple systems for backups. All of this makes it difficult to track and identify specific instances of personal data for erasure. Moreover, the dynamic and evolving nature of AI algorithms complicates the erasure process, as data may be continuously processed and integrated into AI models over time. The source data can become increasingly difficult or even impossible to find or remove. In order to entirely erase one's personal data included in an AI model, it may be necessary to retrain the AI model on a dataset that no longer includes the erased data and is not influenced by the algorithmic shadow of that individual's data. This, however, might not be feasible due to the substantial computational and engineering expenses, along with time limitations, particularly concerning complex AI models. Additionally, the inherent opacity of AI decision-making processes may hinder data subjects' ability (or indeed any party's ability) to determine whether their personal data have been completely erased from AISs. The proliferation of AI-based applications across various sectors and industries also raises concerns about the widespread dissemination and potential replication of personal data, further complicating the erasure process. Exercising the right to erasure may also be problematic due to uncertainties regarding the scope of the request. Specifically, it may be unclear whether the request should only pertain to the data directly provided by the data
subject or also encompass the data derived or inferred from that initial dataset. This ambiguity raises questions about the extent to which AISs should erase not only the raw data but also any insights, predictions or conclusions drawn from them. The reference case on this right in the EU is Case C-131/12, commonly known as the Google Spain case. In this landmark ruling, the claimant requested the removal of certain search engine results generated by Google's algorithm. These results were based on inferences drawn from the claimant's personal data. The Court of Justice of the European Union (CJEU) ruled in favour of the claimant, affirming the individual's right to have such derived data erased from the search engine (but not from the original websites where the data were hosted). This case underscores the significance of ensuring that data erasure requests extend beyond just the raw data to encompass any derived or inferred information generated by AI algorithms. To address the above challenges, controllers should seek to design their AISs in a way that allows deletion requests to be carried out, in accordance with the principle of privacy by design. They should implement robust data governance practices and transparency mechanisms to ensure the effective erasure of personal data from AISs. Additionally, clear guidelines and standards should be established for the secure and permanent deletion of personal data within the context of AI development and deployment.

Right to restriction of processing
A substitute for the right to erasure, the right to restriction of processing grants individuals the authority to limit the processing of their personal data under specific circumstances, such as when the accuracy of the data is contested or when the processing is unlawful. As a result, controllers must limit the processing operations they carry out on the data and may only store them. The concerns with exercising the right to restriction of processing are similar to those related to the right to erasure. Because AISs operate on extensive datasets sourced from diverse channels, it may be especially intricate for individuals to pinpoint and control the processing of their specific personal data. Also, the dynamic nature of AI algorithms, continually learning and evolving from new data inputs, complicates efforts to enforce processing restrictions effectively. The aforementioned opacity inherent in AI decision-making processes exacerbates the challenge, as individuals may struggle to monitor and enforce limitations on the processing of their personal data by AISs. Additionally, the interconnectedness of AISs across various platforms and networks may lead to inadvertent processing of restricted personal data beyond the intended scope. To address these challenges, there is a pressing need for enhanced transparency and communication mechanisms to empower individuals in monitoring and enforcing restrictions on their personal data processed by AISs. Furthermore, controllers must establish robust controls and mechanisms within AISs to facilitate data subjects in exercising their right to restrict processing effectively, ensuring compliance with data protection regulations and upholding individuals' privacy rights.

Right to data portability
The right to data portability, enshrined in Article 20 of the GDPR, enables individuals to obtain their personal data in a structured, commonly used and machine-readable format and to transfer those data between different services or platforms. In the AI setting, exercising this right might present some significant hurdles. Firstly, personal data derived from further examination of provided information are exempt from the right to portability. This signifies that the outcomes generated by AI models, such as predictions and classifications regarding individuals, lie outside the purview of portability rights. In certain instances, some or all of the characteristics used to train the model may have originated from prior analysis of personal data. For example, a credit score obtained through statistical analysis of an individual's financial data might subsequently be employed as a feature in a machine learning model. In such cases, the credit score
is not encompassed within the scope of data portability rights, even if other attributes are. Secondly, extracting and transferring personal data in a usable format from complex and interconnected datasets may be particularly challenging. At the same time, the proprietary algorithms and formats used by AISs may not be readily compatible with other services or platforms, hindering seamless data portability. Moreover, the dynamic nature of AI algorithms, which continuously evolve based on new data inputs, adds an additional layer of complexity to the portability process. Individuals may struggle to ensure the accuracy and completeness of their transferred data, particularly when dealing with AI-driven insights and predictions that are constantly evolving. Overcoming these challenges requires the development of standardised data formats and interoperability protocols tailored to AISs, along with enhanced
transparency and accountability mechanisms to facilitate individuals in exercising their right to data portability effectively.

Right to object
A fundamental provision of the GDPR, the right to object empowers individuals to object to the processing of their personal data on grounds relating to their particular situation, when the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, or when the processing is necessary for the purposes of the legitimate interests of the controller or a third party. Moreover, individuals can object to the processing of their personal data for marketing purposes. Exercising this right gives rise to unique obstacles in the context of AI. Unlike traditional data processing methods, AISs often operate autonomously, with their internal decision-making processes being opaque or difficult to interpret, commonly referred to as black boxes. These systems rely on complex algorithms and extensive datasets to make decisions, which may leave individuals struggling to comprehend the logic behind AI-driven decisions and to identify instances where their data are being processed in ways they object to. AISs might produce conclusions and forecasts derived from intricate connections and patterns in the data, complicating individuals' ability to determine whether their objections are justified or relevant. When decisions are made autonomously by AISs, individuals may find it challenging to identify whom to address their objections to and how to effectively communicate their concerns. Moreover, the widespread adoption of AI across diverse applications and sectors further complicates exercising the right to object. Individuals may interact with multiple AISs operated by different entities, making it difficult to centrally manage objections and ensure consistent compliance with data protection preferences. Addressing these challenges requires enhanced transparency, accountability and accessibility measures to empower individuals to assert their rights effectively in the AI-driven digital landscape.

Other accountability mechanisms

The GDPR not only grants extensive rights to data subjects but also establishes a robust framework of control aimed at safeguarding these rights. Central to this framework are the accountability and oversight obligations imposed on controllers. These accountability obligations not only enhance transparency and trust but also reinforce the protection of individuals' rights. They serve as a cornerstone of the GDPR's regulatory approach, ensuring that controllers are held responsible for their data processing activities and that appropriate measures are in place to protect individuals' personal data. Below, we examine the other accountability obligations that the GDPR imposes on controllers, with a specific focus on those controllers who are AI developers.

Data protection impact assessment
As stipulated in Article 35 of the GDPR, controllers are obligated to conduct data protection impact assessments (DPIAs) for data processing activities that are likely to result in a high risk to individuals' rights and freedoms. DPIAs are systematic assessments aimed at identifying, assessing and mitigating the risks associated with data processing. They are
208、particularly important when implementing new technologies or processing sensitive personal data.The purpose of DPIAs is to ensure that controllers proactively address privacy risks and comply with data protection principles.They involve assessing the necessity and proportionality of data processing
209、activities,evaluating potential risks to data subjects and implementing measures to mitigate identified risks.Given the complexity and potential implications of AI technologies,DPIAs are particularly crucial in this setting.AISs depend on massive amounts of data and complex algorithms to make decisi
210、ons or predictions,which can pose significant risks to individuals privacy and rights.Controllers who are AI developers must therefore carefully assess the potential risks associated with AI-driven data processing activities,including the potential for bias,discrimination,or infringement of individu
211、als rights.DPIAs in the AI sector involve evaluating the transparency and fairness of AI algorithms,assessing the potential impacts on individuals rights and implementing measures to mitigate identified risks.By conducting DPIAs,AI controllers demonstrate their commitment to accountability and trans
212、parency in AI development and deployment,ensuring that individuals rights are adequately protected in the rapidly evolving landscape of AI technologies.The upcoming AIA introduces a new requirement under Article 29a,mandating deployers of high-risk AISs(HRAIS)to conduct a fundamental rights impact a
213、ssessment(FRIA)to evaluate the potential impact of AISs on fundamental rights.Unlike a DPIA under the GDPR,which focuses primarily on data protection risks,a FRIA considers broader societal implications,including ethical,social and fundamental rights considerations,ensuring a more comprehensive eval
214、uation of AI deployments,and should be conducted in conjunction with a DPIA.Record of processing activities Article 30 of the GDPR outlines the requirements for controllers to maintain comprehensive records of processing activities.These records serve as a vital repository of information,encompassin
g key details essential for ensuring compliance with data protection regulations. The records must include details such as the contact information of the controller, joint controller, representative and data protection officer, where applicable. Additionally, they should delineate the specific purposes behind the data processing activities and provide a thorough description of the categories of data subjects and personal data involved. Furthermore, the records must document the categories of recipients to whom the personal data have been or will be disclosed, including any transfers to non-EU countries or international organisations, along with the requisite safeguards. Anticipated timeframes for the erasure of different data categories should be outlined whenever feasible, alongside a general overview of the technical and organisational security measures implemented to safeguard the data. Maintaining a comprehensive record of processing activities poses significant challenges for AI developers, mainly due to the intricate and multifaceted nature of processing operations within AISs. The complex algorithms and iterative nature of AI development make it arduous to accurately document all data processing activities, particularly with the vast array of data sources and evolving models involved. Furthermore, the decentralised structure of AI development teams and the involvement of numerous stakeholders add layers of complexity to the task, making it even more challenging to maintain thorough records of processing activities.

Data Protection Officer

As stipulated in Article 37 of the GDPR, some controllers and processors are also obliged to designate a data protection officer (DPO). The DPO plays a crucial role in overseeing the organisation's data protection strategy and ensuring compliance with data protection laws. Responsibilities include advising on data protection obligations, monitoring compliance, guiding data protection impact assessments, and acting as a contact point for data subjects and supervisory authorities. Mandatory appointment of a DPO applies to public authorities, organisations engaged in systematic monitoring of data subjects on a large scale, and those processing special categories of personal data extensively. Given that AISs often process very large datasets and may involve systematic monitoring or processing of sensitive personal information, AI providers may be subject to the DPO requirement under the GDPR. However, AI developers may face challenges in appointing a DPO, as finding an individual with both expertise in data protection regulations and a deep understanding of the complex nature of AI can be particularly daunting.

4. Example case: navigating the impact of artificial intelligence in healthcare

Introduction

Building upon the foundational principles of the GDPR in the context of AI, this chapter delves into the implications of AI with a focus on the healthcare sec
tor. The choice to spotlight the healthcare industry stems from its unique position as both a pioneer and a significant beneficiary of AI technologies. Healthcare represents a domain where the integration of AI has rapidly evolved in recent years, transforming traditional practices and opening new avenues for improved patient care, diagnosis and treatment. Furthermore, healthcare data is inherently sensitive and highly regulated, making it a prime example to elucidate the complex interplay between AI advancements and privacy concerns. With the increasing digitisation of medical records, the proliferation of wearable health devices and the adoption of telemedicine platforms, the healthcare sector offers a rich landscape to examine AI's impact on privacy rights. This chapter explores the challenges and opportunities of AI integration in healthcare, providing insights into maintaining privacy safeguards amidst rapid technological advancements. It examines the reasons for the rapid adoption of AI in healthcare, offers real-world examples of successful AI applications, discusses privacy and security risks and proposes mitigating measures to protect individuals' privacy rights.

Artificial intelligence in healthcare: a short landscape overview

The adoption of AI in healthcare has surged in recent years, driven by several factors. AI tools have the capacity to enhance accuracy, minimise expenses and streamline processes in contrast to
conventional diagnostic methods. Furthermore, AI has the potential to mitigate the likelihood of human errors while delivering more precise outcomes within shorter timeframes. It offers unparalleled capabilities to process and analyse large volumes of healthcare data, including electronic health records (EHRs), medical images and genomic data, at speeds and scales beyond human capacity. This ability enables healthcare providers to extract valuable insights from complex datasets, leading to more accurate diagnoses, personalised treatment plans and improved patient outcomes. This rise in digital health technologies, coupled with the increasing demand for remote healthcare services, has accelerated the integration of AI-driven solutions into clinical practice. Telemedicine platforms, wearable devices and mobile health applications leverage AI algorithms to deliver virtual consultations, remote monitoring and predictive analytics, enhancing access to healthcare services and empowering patients to take control of their health. Numerous examples of EU-funded research and innovation projects can illustrate the application of AI in various healthcare settings. Oncorelief is a Horizon 2020 project which developed a user-centred AIS designed to function as an intuitive smart digital assistant, which aims to revolutionise post-treatment care by providing personalised support tailored to each cancer survivor's needs. This AI-driven assistant not only assists with post-treatment activities and tasks but also proactively suggests actions to improve the patient's overall health, wellbeing and active healthcare engagement. By facilitating a continuous wellness journey, the Oncorelief assistant ensures that cancer survivors remain actively involved in maintaining their health during the critical post-treatment period, promising to enhance long-term health outcomes and quality of life for survivors. Rebecca is a Horizon 2020 project which aims to leverage real-world data to enhance clinical research and improve current clinical practices. By integrating clinical data with information on patients' daily behaviours, like physical activity, diet, sleep and online interactions, collected through mobile and wearable devices, Rebecca generates new insights. It creates novel functional and emotional indicators for each patient to assess their well-being and quality of life, thereby optimising their care. The Rebecca 360 platform, comprising unobtrusive mobile applications, supports breast cancer survivors in their daily lives and facilitates their communication with healthcare professionals. It also contains information on future post-cancer treatment guidelines and practices. Oncoscreen, a Horizon Europe research and innovation project, is dedicated to developing AI-driven solutions for personalised colorectal cancer screening and early, non-invasive and cost-efficient detection. By integrating advanced AI algorithms with cutting-edge medical diagnostic and imaging technologies, Oncoscreen aims to revolutionise colorectal cancer diagnosis, enabling early intervention and improved patient outcomes. LUCIA is another EU-funded project which aims to understand and discover new risk factors that contribute to the development of lung cancer. AI models are used to identify environmental, biological, demographic, community and individual-level risk factors associated with the formation of lung cancer, by combining open data sources (e.g. environmental data) with retrospective clinical data from clinical partners, prospective data collected during clinical studies, and data collected via medical devices and through patient questionnaires. Additionally, the AI models help determine risk scores for lung cancer, which can be used to screen patients and detect lung cancer at an early stage.

Risks associated with the use of artificial intelligence in healthcare

Despite its transformative potential, the widespread adoption of AI in healthcare raises significant privacy and security concerns. In fact, security and patient privacy are the core concerns in the healthcare sector when it comes to AI, as access to patient medical data is central to the training
of AI models and the use of AI-based solutions in the delivery of healthcare. The growing prevalence of AI solutions and technology in healthcare, highlighted most recently by the COVID-19 pandemic, has demonstrated the potential for significant ramifications on the rights of patients and citizens. One of the primary risks is the risk of personal data being shared and used without the patient's explicit consent. As stipulated in Article 9 of the GDPR, processing of personal data concerning health is by default prohibited. Such processing shall only be allowed under certain conditions enumerated in Article 9(2) of the GDPR. The most commonly invoked condition among these is the case where the data subject has given explicit consent to the processing of those personal data for one or more specified purposes. In reality, however, AISs often analyse and process personal health information without individuals' informed consent. Another persistent concern is data repurposing, also known as function creep. This phenomenon involves the unauthorised or unintended use of data collected for one purpose being repurposed for other unrelated or unexpected ends. A striking example of function creep occurred in Singapore, where data collected through the government's COVID-19 tracing app, intended for public health monitoring and contact tracing, was repurposed for unrelated endeavours, such as criminal investigations. Similarly, in Germany, COVID tracking data was used by police to identify individuals who were present at a restaurant where a death occurred, demonstrating a concerning trend of expanding the use of collected data beyond its original purpose. Furthermore, the integration of AI-driven technologies and the reliance on them in healthcare introduces cybersecurity risks, encompassing cyberattacks targeting AISs, data breaches leading to identity theft or medical fraud, and the exploitation of AI algorithm vulnerabilities to manipulate medical decisions or endanger patient safety. An illustrative case is the September 2020 cyberattack on Düsseldorf University Hospital, which interfered with the hospital's data and rendered the system inoperable. As a result, a patient could not be admitted to the hospital and had to be redirected to another facility in a distant city, which ultimately resulted in her demise. Although it was later argued that it could not be proven that the death was directly caused by the cyberattack, because the patient was already suffering from a life-threatening condition, this case brought to the forefront the real physical harms that cyberattacks can cause in the healthcare sphere. Similarly, the Elekta case in April 2021 demonstrated how cyberattacks on AISs can directly impact patient rights, with a ransomware attack affecting 170 health systems in the United States (US) and delaying cancer treatment care nationwide. Additionally, AI-controlled personal medical devices, such as insulin pumps for diabetes patients, face hacking risks, potentially allowing remote manipulation and the administration of excessive insulin doses.

5. Risk mitigating measures: general strategies and approaches

To effectively mitigate the privacy and security risks associated with the deployment of AI in healthcare, a multifaceted approa
ch is essential. Firstly, robust data protection mechanisms, such as encryption, pseudonymisation and access controls, should be employed to safeguard patient data against unauthorised access and breaches. Organisations must ensure awareness and understanding of data privacy and security risks, emphasising that AI developers and deployers should comply with applicable laws, such as the GDPR. Custodians of data must give top priority to safeguarding data and discouraging alternative data usage in order to uphold the privacy and confidentiality of patients. Transparent and accountable AI governance frameworks should be established to ensure that AISs adhere to ethical principles, regulatory requirements and best practices for data privacy and security. Requiring organisations deploying AI to conduct FRIAs, as mandated for HRAISs by the pending AIA, while also conducting comprehensive data protection impact assessments (DPIAs) to identify and mitigate potential privacy risks associated with AI deployment, can further enhance privacy protection. Advocating for the use of synthetic data, artificially generated and disconnected from real individuals, could also enhance privacy and security by minimising the risks associated with real patient data. Ongoing research efforts to enhance AIS security and protect algorithms against cyberattacks are imperative. Continuous monitoring, auditing and evaluation of AISs' performance and compliance with privacy regulations are essential to proactively detect and mitigate any privacy breaches or security incidents. Moreover, continuous staff training and awareness programmes should be implemented to educate healthcare professionals about the importance of privacy protection and security measures when using AI technologies. Collaborative efforts between healthcare institutions, technology providers, regulators and policymakers are also crucial to establish standardised protocols, guidelines and regulations for the responsible development and deployment of AI applications in healthcare, while ensuring the protection of patient privacy and data security. Through these concerted efforts, the healthcare industry can navigate the complexities of AI deployment while prioritising patient privacy and data security.

6. Conclusion

In this exploration of fundamental rights and data protection in the context of AI, we have delved into the intricate dynamics shaping the intersection of technology and human rights. It is clear that, while AI holds tremendous promise, its widespread adoption must be accompanied by robust privacy protections and regulatory safeguards. By adhering to GDPR principles, implementing privacy-enhancing technologies and adopting transparent and accountable AI governance frameworks, stakeholders can harness the transformative potential of AI while safeguarding individuals' fundamental right to privacy and data protection. As the AI landscape continues to evolve, it is imperative to find new and more effective ways to strike a balance between innovation and privacy protection, and to ensure that AI-driven advancements benefit society while respecting individuals' privacy rights.

Artificial intelligence and intellectual property: the (lack of) creativity of artificial intelligence, and its dependence on pre-existing inputs

The uptake of generative AI is reshaping our perception of creativity. As generative AISs continue to evolve, they are increasingly capable of producing outputs that blur the lines between human and machine-generated content. From generating texts to creating art and music, generative AISs demonstrate remarkable potential in redefining traditional notions of creativity. However, this technological advancement also raises questions about copyright on both the input and output side. This chapter delves into the complex interplay between AI and copyright, and explores the potential impact of AI-generated output on the creative ecosystem. It aims to explain how copyright (and related rights) currently apply to generative AI, and to inspire future dialogue on the interaction between copyright and AI.

1. Artificial intelligence and training data: addressing copyright challenges

In the realm of AI, the importance of training data cannot be overstated. These vast datasets serve as the foundation upon which AISs learn, adapt
and make decisions. However, the use of such datasets for training purposes raises questions regarding copyright and ownership. This is particularly the case for generative AISs, which are designed to learn patterns and structures from large datasets and, on the basis thereof, generate new data or content that mimics or resembles human-created content. The foundation models of such generative AISs, including large language models (LLMs) and text-to-image models, are often trained on datasets that include publicly available materials, such as web pages, images, articles, blog posts and tweets. Many of these materials are, however, not owned by the generative AIS's trainer and are potentially protected by copyright. So what does this copyright protection entail? From a policy perspective, copyright is meant to encourage the creation of original works by providing the authors of such works with exclusive rights to control the exploitation of their work and protect its integrity. Encouraging the creation of original works contributes to the cultural, social and economic advancement of society and is therefore desirable. A work that is eligible for copyright protection can take various forms. Article 2 of the Berne Convention provides a list of literary and artistic works that are generally copyrightable. These include books, dramatic works, musical compositions, choreographies, sculptures and cinematographic works. It is a common misconception, however, that copyright is only a matter for writers, composers and other artists. Essentially, copyright protects any work that is both expressed in a concrete form and original. The first criterion entails that copyright protection may be granted to expressions, but not to mere ideas, procedures, methods of operation or mathematical concepts (even if they were original). Put differently, as has been affirmed by the European Court of Justice, in order for copyright protection to apply, a work must be expressed in a manner which makes it identifiable with sufficient precision and objectivity. Consequently, there will be no copyright infringement when you copy someone's idea or use it as an inspiration, as long as you give a different expression to this idea. Likewise, copyright protection is not granted to a style, genre, trend or technique. Making a work of anti-authoritarian street art, for example, does not by definition imply an infringement of Banksy's copyright. The seco
nd criterion of originality entails that a work (or parts of a work) can only be protected by copyright if it (or they) contain(s) elements which are the expression of the intellectual creation of the author of the work. This means that for copyright protection to exist, a work needs to be the author's own intellectual creation, reflecting their personality. As such, a work will only be copyright protected if the author has been able to express their creative abilities in the production of the work by making free and creative choices that stamp the work created with their personal touch. This means that for the purpose of copyright protection, it is entirely irrelevant whether a work is pretty or ugly, whether it has artistic value or not, or what the quality of the work is. If a work is expressed in an original form, this automatically triggers copyright protection. If you yourself, for example, take a photo of almost anything, and you have made original choices regarding the perspective, shadow play, composition, colours, etc. in such a way that the photo expresses your own intellectual creation, then this photo will be copyright protected. Needless to say, the originality standard in EU copyright law is considered to be rather low. Copyright protection in principle vests in the author or authors of the protected work, i.e. the person(s) whose intellectual creation the work(s) express(es). This means that copyright originally always vests in one or more physical persons and not in a legal person. However, this does not exclude that legal persons (such as companies) can hold copyright in works. Legal persons can become copyright holders through copyright transfer agreements (or exceptionally through specific legislative provisions). Copyright protection moreover persists for quite a long time. More precisely, a copyright holder can exercise its rights throughout the life of the author and for 70 years after the author's death. As a result, many works that are publicly available online, such as books, news articles, photos, videos, music and paintings, may (still) be protected by copyright. If such works are scraped from the internet (i.e. found and locally stored using a fully automated tool) and further used as training data for generative AISs, this has copyright implications. Indeed, around the globe, many artists have already expressed their strong dissatisfaction with generative AISs using their works for training purposes without authorisation, arguing that AIS providers are stealing their intellectual property and are potentially even undermining their jobs. That is why some artists have organised themselves and founded a European Guild for AI Regulation to bring to the public's attention how their data and intellectual properties are being exploited without their consent, on a scale never seen before. Moreover, several lawsuits for copyright infringement have been filed by copyright holders against AIS providers for allegedly scraping copyright-protected works from the internet and using them in the creation of AI prod
ucts. This raises the question as to whether authors can control the use of their copyrighted works for training generative AISs through their copyright. Under EU copyright law (specifically the information society directive), the exclusive rights of a copyright holder consist of economic rights on the one hand and moral rights on the other. The economic rights include:

1. the exclusive right to authorise or prohibit the direct or indirect reproduction of a work, by any means and in any form, in whole or in part (the reproduction right); and
2. the exclusive right to authorise or prohibit any communication to the public of a work (the right of communication to the public).

The moral rights are not fully harmonised at the EU level, but will in most MSs at a minimum include the right of the author(s):

1. to be identified as the author(s) of any work they create (right of paternity); and
2. to prevent others from subjecting their works to derogatory treatment, in each case for the duration of the work's copyright (right of integrity).

Whereas the economic rights are transferable from the author to a third party, moral rights are understood to be so closely connected to the person of the author that only the author can exercise them. As a result, moral rights are non-transferable. When training a generative AIS, various techniques are used, including text and data mining (TDM). TDM refers to the automated processing of large volumes of text and data to uncover new knowledge or insights. TDM usually requires the copyin
g of large quantities of material (that may be copyright protected), extracting the relevant data and recombining it to identify patterns. This is where the right to reproduction comes into play. Given that under EU copyright law the right to reproduction is interpreted in a very broad way, many of the copies that are made in the process of TDM are generally considered to be an act of reproduction that falls under the reproduction right. As a consequence, all right holders of copyright-protected works that are included in a training dataset in principle have to authorise the use of their works for the training of a generative AIS. In the absence thereof, copyright infringement would occur. However, the exclusive rights of the copyright holder are not absolute. As the requirement to receive authorisation from a right holder may in some cases be overly burdensome for the user of a work (or even violate fundamental rights), under EU copyright law a limited number of exceptions to the exclusive rights exist. These exceptions are meant to strike a balance between the exclusive rights of the right holder and safeguarding the public interest and users' fundamental rights. The exceptions, for example, permit the reproduction of works for private use, criticism or review, illustration for teaching or scientific research, and parody. In the context of generative AISs, originally, two exceptions were of particular relevance, namely the exception for temporary reproductions, as found in Article 5(1) of the information society directive, and the exception for teaching and scientific research, as found in Article 5(3), point (a), of the information society directive. The exception for temporary reproductions is a mandatory exception in all MSs. It essentially interprets the right to reproduction in a way that is compatible with our modern digital society. According to this exception, temporary acts of reproduction, which are transient or incidental and an integral and essential part of a technological process, and whose sole purpose is to enable a transmission in a network between third parties by an intermediary or the lawful use of a work or other subject matter to be made, and which have no independent economic significance, may be undertaken without infringing copyright. Much like the copies that you make in your memory when reading a book do not require the authorisation of the copyright holder of the book, the exception for temporary reproductions ensures that the temporary reproductions made in the memory and on the screen of a computer when browsing a website likewise do not require the authorisation of the copyright holders of any works that are included in that website. The exception for teaching and scientific research is an optional exception. It allows the use of copyrighted works for the sole purpose of illustration for teaching or scientific research without the authorisation of the right holder, as long as the source, including the author's name, is indicated (unless this turns out to be impossible) and to the extent justified by the non-commercial purpose to be achieved. Like all other exceptions and limitations, the application of these two exceptions is subject to the three-step test, which means that they can only be applied in certain special cases which do not conflict with a normal exploitation of the work or other subject matter and do not unreasonably prejudice the legitimate interests of the right holder. Given the many conditions to be fulfilled for these exceptions to apply, for some time it was unclear if they could be relied upon in the context of reproductions for
TDM purposes. Given this legal uncertainty concerning TDM, in 2019 the EU legislator adopted two new mandatory exceptions for TDM in Articles 3 and 4 of its 2019 directive on copyright and related rights in the digital single market (DSM). Article 4 of the DSM directive provides for a general exception which allows anyone to perform TDM on copyright-protected works that are lawfully accessible. Both public and private entities can benefit from the exception, and even TDM for commercial purposes is covered by it. Although the exception may thus seem to be very broad, it is subject to strict conditions that severely restrict its scope. First of all, reproductions made for the purposes of TDM may only be retained for as long as is necessary for those purposes. Secondly, the exception only applies on the condition that the use of works for TDM purposes has not been expressly reserved (read: prohibited) by the right holders in an appropriate manner, such as machine-readable means in the case of content made publicly available online. For works that are not publicly available online, a contractual agreement or unilateral declaration may also be an appropriate manner to reserve rights. This opt-out mechanism obviously implies a significant weakening of the general exception, as text and data mining will not be allowed if right holders have prohibited the use of their works for text and data mining purposes, for example via a metatag, the terms and conditions of a website or service, or a robots exclusion protocol file (robots.txt) if the works are publicly available online. Article 3 of the DSM directive provides for a specific TDM exception for scientific research. Contrary to the general exception of Article 4, only research organisations and cultural heritage institutions can benefit from the exception, which allows them to make reproductions of works to which they have lawful access in order to carry out text and data mining for the purposes of scientific research. The exception is mandatory, meaning that right holders cannot oppose text and data mining carried out by research organisations and cultural heritage institutions in the context of scientific research via an express reservation of rights. Nevertheless, right holders are allowed to apply measures to ensure the security and integrity of the networks and databases where their works are hosted, to the extent that these do not go beyond what is necessary to achieve that objective. Thus, although such measures should not interfere with the application of the exception, it is not unlikely that, in practice, measures to ensure the security and integrity of networks and databases may prevent research organisations from performing text and data mining activities on works to which they have lawful access. In any case, beneficiaries of the exception may only store the copies of works that are made through text and data mining with an appropriate level of security for the purposes of scientific research, including for the verification of research results. In this context, the DSM directive also requires MSs to encourage right holders, research organisations and cultural heritage institutions to define commonly agreed best practices concerning secure storage and measures for the security and integrity of networks and databases that right holders m
316、ay apply.With these two new exceptions,the EU legislators have thus attempted to resolve the tensions between tech companies and authors regarding the use of their copyright protected works for AI training purposes.Whether this has been a success remains to be seen.Not only is it conceivable that th
317、ese new exceptions will lead to difficulties in interpretation(for example,because the modalities of the opt-out have not(yet)been standardised,and many national variations to the exceptions may arise),but it is also yet to be seen if authors and tech companies will find a compromise on how these ex
318、ceptions are applied in practice.Moreover,the exceptions have been scrutinised for putting the European Unions AI sector at a competitive disadvantage given that,in other jurisdictions,commercial TDM of copyrighted works may be allowed,for example,under the fair use doctrine in the US,without the ob
319、stacle of an opt-out mechanism or other restrictive conditions.It is true that,as a result of the EUs TDM exceptions,EU-based AIS providers may in the future be confronted with authors that have opted-out of the TDM exception and will then be left with two options:incur additional licensing costs to
be able to train their systems on the copyright-protected works, or exclude the works from their training datasets altogether. It thus remains to be seen whether and how these differences between US and EU copyright law will impact future AI development.

2.2. Can an artificial intelligence be a creator? Dealing with (non)creative outputs

The section above examined the important questions that arise when AISs try to use copyright-protected works as training input. The intellectual property rights issue can also be examined
from the other side: can the outputs of generative AISs be protected by copyright or related rights? Many generative AISs today produce creative works that are hardly distinguishable from works created by humans. This prompts the question: are such creative machine-made productions also eligible for copyright protection?

As explained above, in order for a work to be copyright protected, it needs to constitute a concrete and original expression of an author's own intellectual creation. The author is the human who makes the free and creative choices for a work and expresses his personality in it. This inherently human-centric approach to EU copyright law entails that AISs currently cannot be authors of copyright-protected works. Indeed, AISs, as machines, are not able to make the creative choices that bring the output they create into the realm of copyright protection. As a result, works that are produced solely by AI (AI-generated output) are not protected by copyright.

This does not, however, mean that the output of generative AISs is never protected by copyright. Where works are produced with an AIS, the relevant question is whether the work is a (human) author's own intellectual creation. If there was some human intervention in the creation of the output, copyright protection is not excluded. In this respect, the creative process is decisive. A work that has been produced with the help of an AIS will only benefit from copyright protection if, during the creation process, the author has been able to express his creative abilities by making free and creative choices that stamp the work with his personal touch. If a work produced with an AIS contains sufficient traces of human creativity in the creative process, it will thus be protected by copyright. Such output is then called AI-assisted output.

What degree of human creative intervention is required is hard to determine. Creativity in machine-aided production may occur at three distinct (iterative) phases of the creative process, namely the conception, execution and redaction phases. In the conception phase, the human will often have the dominant role, as they will make most of the conceptual choices (the choice of the AIS, the selection and curation of input data, etc.). In the execution phase, the AIS takes over much of the human author's role. However, this does not mean that the user remains entirely passive during this phase. Often the user will monitor the output and
give feedback to the AIS to guide it towards the desired output. Finally, in the redaction phase, the human author can make many additional creative choices, including rewriting, editing, formatting, cropping and refining. This is why mere human intervention in the conception and redaction phases is in many cases considered to be sufficient for copyright protection to arise (1). If the intervention of the user of a generative AIS is, however, limited to pushing a button for the AIS to operate, the output would arguably not be eligible for copyright protection, due to the absence of creative choices made by a human author. Much will depend, of course, on the facts and circumstances of each case.

It is thus clear that defining the threshold of human intervention that gives rise to an original, copyright-protected AI output is difficult. This should not come as a surprise, however, as AI output should be looked at on a spectrum that ranges from AI-assisted output to AI-generated output. At one end of the spectrum sit AISs that merely execute instructions given by a human author. Much like a regular photo camera, due to a lack of creative capabilities, these AISs cannot claim to be authors of the works they produce, and any copyright on original works shall be owned by the author(s) of the works. Further a