ABA Task Force on Law and Artificial Intelligence
Addressing the Legal Challenges of AI
Year 1 Report on the Impact of AI on the Practice of Law
August 2024

LETTER FROM THE ABA PRESIDENT

The proliferation of artificial intelligence over the past few years has been nothing short of revolutionary. Yet, even a few years ago, in 2017, when 1,500 senior business leaders in the United States were asked about AI, only 17 percent said they were familiar with it. Even a couple of years ago, most lawyers probably did not expect to harness this technology in their legal practice. Following the release of ChatGPT in November 2022, legal scholars published papers reflecting on its potential benefits and risks. In early 2023, a new, more capable large language model, GPT-4, became the first AI to pass all three sections of the Uniform Bar Exam. By June 2023, a lawyer had been sanctioned for misusing generative AI in the practice of law. Today, AI is one of the most transformational technological advances of our generation.

Earlier this year, the European Parliament adopted the Artificial Intelligence Act, the world's first comprehensive legal framework for AI. Currently, the United States has no comprehensive federal legislation that regulates the development of AI or restricts its use.

In August 2023, I created the ABA Presidential Task Force on Law and Artificial Intelligence, which brings together lawyers and judges from across the ABA to address the impact of AI on the legal profession and the practice of law. The AI Task Force is concentrating its efforts on a broad array of critical AI issues, including AI's impact on the legal profession, the courts, legal education, access to justice, governance, risk management, and challenges with generative AI. Despite its promise, AI also presents novel challenges with complex legal and ethical questions.

The AI Task Force has been undertaking thoughtful research and analysis in a rapidly shifting regulatory environment. Its dedicated working groups have been engaging the views of lawyers, judges, ethicists, access-to-justice advocates, and academics working across all sectors of the U.S. economy. These teams have undertaken research and developed essential materials to help guide responsible and trustworthy adoption of AI tools and technology by the legal profession.

For instance, earlier this year, the AI Task Force released the results of its AI and Legal Education Survey, a compilation of insights gathered from law school administrators and faculty regarding the integration of AI into legal education. The survey found that 55% are increasingly incorporating AI into their curricula. An overwhelming majority (83%) reported extra-curricular opportunities, including clinics, where students can learn how to use AI tools effectively. The survey suggests that AI is already having a significant impact on legal education and is likely to result in additional changes in the years ahead.

Through the work of the AI Task Force, the ABA is taking a leadership role in this emerging area of law and practice. Thanks so much to the members of the AI Task Force, the Special Advisors, the Advisory Council, and the dedicated ABA staff for the depth and breadth of their work.

Given the constantly evolving and ever more sophisticated nature of AI, the AI Task Force will continue its work during the next bar year. Recognizing the enormous potential of AI for the legal profession and beyond, the AI Task Force will continue to provide valuable information and research for lawyers in all practice areas and to address some of the most pressing and challenging legal issues facing us today.

Mary Smith
ABA President (2023-2024)
August 2024

ABA President Mary Smith addressing the ABA House of Delegates at the 2023 Annual Meeting in Denver, Colorado.

ABA LEADERSHIP ON AI

AI and machine learning (ML) systems and capabilities will transform virtually every industry sector and reallocate the tasks performed by humans and machines. AI provides extr
aordinary opportunities for innovation, productivity, error reduction, improved workplace safety, enhanced efficiency, and lower costs. It enables computers and other automated systems to perform tasks that have historically required human cognition and, for certain tasks, at speeds that far outpace what humans can do. AI increasingly has been used over the past decade by physicians, biologists, astronomers, engineers, judges, lawyers, and individuals.

Generative artificial intelligence (AI) has captured headlines and captivated the attention of individuals in professions based on language and writing, including lawyers and law firms, with its unprecedented ability to create new content. With a few text prompts, generative AI can create new text, images, audio, video, 3D models, data, or other work product that previously could only be produced by humans. The release of ChatGPT in November 2022 prompted interest and concern about the ramifications of generative AI for the legal profession and its broader legal implications.

Recognizing the urgent need to address the transformative impact of AI, ABA President Mary Smith launched the ABA Task Force on Law and Artificial Intelligence at the August 2023 ABA Annual Meeting in Denver as one of her first actions in office. This AI Task Force was established to tackle proactively the pressing legal and practice issues arising from the rapid adoption of generative AI and other AI technologies, and to set the profession's benchmark for anticipating and expertly navigating these challenges. (ambar.org/aiLaw)

The AI Task Force has embarked on a comprehensive, year-long exploration of an AI transformation that has accelerated at a rapid pace, now affecting virtually every industry sector and having a profound impact on legal practice and legal education. AI has presented a multifaceted array of opportunities and challenges that the ABA is uniquely positioned to assess and to help ensure its integration is ethical and responsible and serves the public good.

The mission of the AI Task Force is to: (1) address the impact of AI on the legal profession and the practice of law, and related ethical implications; (2) provide insights on developing and using AI in a trustworthy and responsible manner; and (3) identify ways to address AI risks.

Initially, the AI Task Force evaluated the broad array of AI issues under discussion by experts, lawyers, and other professionals and identified the major critical issues confronting lawyers and judges in their practices. Throughout the year the AI Task Force has considered a broad array of legal issues related to AI, including: the profound impact of AI on legal practice, ethical dilemmas, the challenges of generative AI, access to justice, the integration of AI
in the courts, advancements in legal education, and strategies for risk management and governance. Addressing ethical concerns has been a priority for the AI Task Force as practitioners and judges remain focused on the need to protect client confidentiality.

A year later, thanks to the efforts of the ABA AI Task Force and many others, there is greater understanding of the potential risks and rewards of generative AI for legal practitioners and their clients. This Report addresses the critical AI issues that impact lawyers and judges in the practice of law, and provides insights and resources that will equip the legal community to effectively address and leverage these developments.

Given the rapid pace of change in the AI landscape (the National Institute of Standards and Technology (NIST) released new guidance documents as this Report was being finalized), and the need to give the AI developments the attention they deserve, the AI Task Force will continue its work in the new bar year (2024-25).

Highlights of the AI Task Force's year include:

“Moving With Change: AI and the Law Webinar Series” is a stellar collection of webinars on which leading experts delve into critical AI issues and provide valuable perspe
ctives on AI opportunities and risks. Program descriptions, along with links to view the webinars, are included in this Report.

A new AI book, Artificial Intelligence: Legal Issues, Policy, and Practical Strategies, was unveiled by the Science & Technology Law Section (SciTech), in collaboration with the AI Task Force, on August 1st at the ABA Annual Meeting. The book features contributions from over 40 preeminent authorities offering legal analysis and reflections on the influence that AI will have on both the legal profession and the law. It provides practical advice to attorneys, judges, and executives.

Legal Education Survey Report. The ABA gathered insights from law school faculty and administrators regarding the integration of AI into legal education. Over half of the law schools that responded to the survey reported that they offer classes dedicated to teaching students about AI, while many law schools are contemplating changes to their curricula in response to the increasing prevalence of AI tools.

The AI Task Force has been assisted in its work by the ABA sections, divisions, forums, and other entities, including tech-savvy young lawyers, who have provided their unique expertise in diverse practice areas. These entities have for years presented programs, published materials, and provided opportunities for ABA members to participate in important discussions on AI.

The ABA remains committed to leading the profession in understanding and addressing the legal and ethical complexities of AI and other emerging technologies.

TABLE OF CONTENTS

Acknowledgements
Meet the Special Advisors
AI and The Legal Profession
AI and Legal Ethics
AI Challenges: Generative AI
AI and the Courts
AI and Legal Education
AI and Access to Justice
AI Risk Management and Mitigation
AI Governance
ABA Entities: Collaboration Across the ABA
2023-24 AI Task Force Programs and Events

The views expressed herein represent the opinions of the authors. They have not been reviewed or approved by the House of Delegates or the Board of Governors of the American Bar Association and, accordingly, should not be construed as representing the position of the Association or any of its entities. This Report is a product of the work of the 2024 Task Force on Law and Artificial Intelligence. AI Task Force members have contributed to the work on which this Report is based, but not all Task Force members are authors.

ACKNOWLEDGEMENTS

The AI Task Force includes a diverse group of 50 leading AI and legal experts, including seven Special Advisors. Many of these individuals are computer scientists or engineers; they all have deep technology experience and have held leadership positions in law firms and corporations, government, academia, or public service.
The AI Task Force is grateful to Chair Lucy Thomson; Vice Chairs Laura Possessky, James Sandman, Cynthia Cwik, Ted Claypoole, and Roland Trope; and Entity Liaison Leader Ruth Hill Bro for their skillful leadership; and to ABA staff Joseph Gartner, Director and Counsel, Ben Woodson, and Lanita Thomas.

Special thanks go to the Working Group chairs who led the many activities and initiatives of the AI Task Force on the broad range of AI issues addressed this year: Ted Claypoole, Practice of Law; James Sandman, Access to Justice; Cynthia Cwik, Governance, Generative AI Challenges, and Legal Education; Roland Trope, Risk Management and Mitigation; co-chairs Maura Grossman, Judge Scott Schlegel, and Hon. Herbert Dixon (ret.), AI and the Courts; and Laura Possessky and Ruth Hill Bro, Strategic Communications.

We convey our appreciation to the Special Advisors for their time and expertise in addressing the critical AI issues faced by lawyers and judges in their day-to-day practices, and for providing their insights about AI developments and challenges of the past year.

Thank you to all those on the AI Task Force who came together to present remarkable programs, part of the “Moving With Change: AI and the Law Webinar Series,” and to publish informative articles and reports.

Task Force Members
Lucy L. Thomson, Ted Claypoole, Cynthia Cwik, Laura Possessky, James Sandman, Roland Trope, Ruth Hill Bro, Brian Beck, Deon Woods Bell, Hon. Margarita Solano Bernal (ret.), Hon. Susan G. Braden (ret.), John G. Buchanan, III, George Chen, Shay Cleary, John S. Cooke, Hon. Herbert B. Dixon, Jr. (ret.), Katherine B. Forrest, Mike Fricklas, William Garcia, Dazza Greenwood, Hon. Paul Grimm (ret.), Maura R. Grossman, Eric Hibbard, Farhana Y. Khera, Adriana Luedke, Ruth Okediji, Regina Sam Penti, Claudia Ray, Paul Rosenzweig

Advisory Council
Theresa Harris, R. Patrick Huston, Stacy Marz, Dr. Willie May, Bridget McCormack, Darrell Mottley, Andrew Perlman, Hon. Delissa Ridgway, Hon. Scott Schlegel, Reva Schwartz, Karen Silverman, Thomas Smedinghoff, Hon. Samuel Thumma, Stephen Wu

Special Advisors
Michael Chertoff, Ivan Fong, Daniel Ho, Michelle Lee, Trooper Sanders, Miriam Vogel, Seth P. Waxman

In addition, the AI Task Force worked with sections, divisions, forums, and other ABA entities to collaborate with lawyers across the ABA and to extend our efforts to those lawyers with special expertise in subject-matter areas. We appreciate these liaisons and the ABA section, division, and forum directors.

We also thank the additional individuals below who supported our work by speaking on webinars and contributing to our reports: Leighton B.R. Allen, Merri Baldwin, Karen Buzard, Lindsay Edelstein, Dr. Lance Eliot, Prof. Hany Farid, Katherine Fick, Magistrate Judge Gabriel A. Fuentes, Peter Geovanes, Dr. Margaret Hagen, Elizabeth Kelly, Lisa Lifshitz,
Prof. Daniel Linna, Katherine Lowry, Alexandria (Lexi) Lutz, Jennifer Mabey, Ian McDougall, Louise Nemschoff, Ekta Oza, Damien Riehl, Spencer Rubin, Madhu Srikumar, Josh Strickland, Prof. Gabriel Teninbaum, and Hon. E. Kenneth Wright Jr.

MEET THE SPECIAL ADVISORS

INSIGHTS FROM THE SPECIAL ADVISORS ON AI DEVELOPMENTS AND CHALLENGES

To shape the focus of its inquiry, the AI Task Force relied on the insights of seven prominent thought leaders on law and technology. The ABA Presidential Speaker Series program, AI: The New Frontier, brought these insights to the ABA membership. Special Advisors Daniel Ho, Michelle Lee, Trooper Sanders, Miriam Vogel, and Seth Waxman discussed how AI has the potential to transform the practice of law; major initiatives of the White House Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; and international AI developments.

Digital Legal Assistants

The rise of digital legal assistants promises to bring about an inevitable, perhaps even seismic, shift in the legal profession and the practice of law. Their ability to analyze and condense large volumes of information and produce creative and reliable responses to prompts, especially the drafting of legal documents, will reduce those kinds of work currently done by legal professionals.

Addressing bias, privacy, copyright, professional responsibility, and liability issues will facilitate the responsible use of generative AI. As digital legal assistants improve in quality and reliability, they will be able to give competent legal advice, at least in certain domains.

Courts and bar associations should accelerate efforts to develop guidance for the responsible and effective integration of AI into the practice of law, as well as standards and testing protocols to protect practitioners and the public from errors, hallucinations, and other ways in which immature digital legal assistants can cause harm.

Michael Chertoff, Chairman, Chertoff Group; former Secretary, U.S. Department of Homeland Security (DHS); and former Judge, U.S. Court of Appeals for the Third Circuit.

Privacy, Property Rights and Deepfakes

Privacy and property rights in one's physical image and voice are areas of the law that will be materially affected by AI. This was demonstrated by actress Scarlett Johansson's complaint that, after she declined to voice an AI assistant for ChatGPT, OpenAI adopted an artificial voice that sounded identical. This unapproved simulation of an individual's voice and image will raise questions about the legality of access to the samples that generated that artificial voice or image under privacy laws, copyright, and publicity rights. One key question is whether the artificial voice or image is truly identical or sufficiently distinguishable to avoid a claim of appropriation.

A related issue with the proliferation of deepfakes will be ensuring that the rules of evidence relating to authentication are adapted to verify the genuineness of recordings or photographs offered in court. Digital replication of persons will be a salient issue in upcoming election campaigns.

Ivan Fong, Executive Vice President, General Counsel and Secretary, Medtronic; former DHS General Counsel; and former Deputy Associate Attorney General, U.S. Department of Justice.

Performance Benchmarks for Legal AI Technology

The central issue for responsible adoption of AI in the legal profession lies in rigorous assessments of AI-based systems for specific tasks. Unlike the general AI field, legal AI technology has been remarkably opaque, lacking the kind of performance benchmarks that have been the measure of and catalyst for AI innovation. In one study, we documented that hallucinations with legal AI providers range from 17% to 33% in a benchmark dataset of realistic and challenging queries (e.g., bar exam and appellate litigation questions).

Law firms, bar associations, academics, and technology providers must develop transparency and benchmarking requirements to assess the appropriateness, trustworthiness, benefits, and risks of specific AI tools. If we fail, the problem of legal “hallucinations” highlighted by Chief Justice John Roberts, the propensity of AI models to make up cases, facts, holdings, statutes, and regulations, may materialize: legal AI and hallucinated misinformation will erode trust and “dehumanize the law.”

Daniel Ho, Stanford University Professor of Law and Political Science; Senior Fellow, Stanford Institute for Economic Policy Research; Director, Regulation, Evaluation, and Governance Lab (RegLab); and Associate Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI).

Intellectual Property Questions

For the first time, computers aided by generative AI are able to perform the quintessentially human task of creating text, sound, images, a combination of the three, and even innovations. This technology presents fascinating and novel intellectual property questions related to protection, ownership, and infringement of intellectual property rights. How our society and legal system answer these questions has a profound impact on the incentives to create, invent, and invest. It is critically important that we get this right. I commend the American Bar Association for its efforts to focus on these and other issues raised by artificial intelligence.

Michelle Lee, CEO, Obsidian Strategies, Inc.; former Under Secretary of Commerce for Intellectual Property and Director, U.S. Patent and Trademark Office; and former Vice President (AI), Amazon Web Services; Google.

Safe and Responsible Use of AI

AI has not only caught the public's imagination (instilling both excitement and fear), it has put pressure on leaders in business, government, and society, such as lawyers and judges. Bringing AI to heel and ensuring democratic accountability is challenging well-established areas of the law such as intellectual property and national security.

Organizations, from government agencies dispensing justice and human services to businesses driving commerce and community-serving institutions, must ensure they are ready to deploy AI in a safe and responsible manner. Lawyers have a critical role to play in an organization's AI readiness that calls for keeping folks on the right side of rules while also helping to foster a healthy organizational culture that advances good business practice and values. So much guiding AI's safe and responsible use is beyond the reach of law and policy and will be determined by the conventions and norms that guide everyday life.

Trooper Sanders, CEO, Benefits Data Trust; and Member, National Artificial Intelligence Advisory Committee (NAIAC).

Lawyers' Role in AI Governance

As AI continues to transform our lives and work, we have a role to play in understanding this new technology and its wide-ranging impacts, both positive and negative. Lawyers play a critical role in this AI-driven world by ensuring that the technology operates in compliance with our values, as codified in our laws. The release of large language models has expedited the need for legal oversight and guidance. Clients are using AI hiring tools, consumer-facing chat bots, and perhaps even a mortgage lending algorithm or health care diagnostic tool. It is our duty to ensure these technologies comply with emerging laws and creatively interpret their use within existing legal frameworks, such as consumer protection, civil rights, and financial laws and regulations. As with previous technological advances, lawyers will be on the front lines, developing guardrails and setting limits to ensure AI is safe and accessible for all users across society.

Miriam Vogel, CEO, EqualAI; and NAIAC Chair.

Competence with AI

Artificial intelligence, and generative AI in particular, has injected new promise, new peril, and widespread uncertainty with the technologies that touch our lives in myriad ways. Legal rights and responsibilities, and indeed the legal system more broadly, are no exception. The work of the AI Task Force, and the thoughtful, well-written new book, Artificial Intelligence: Legal Issues, Policy, and Practical Strategies, couldn't be more timely, as we all learn to understand and adjust both our conduct and our expectations to this brave new world.

Seth Waxman, Partner, WilmerHale; and former Solicitor General of the United States.

AI AND THE LEGAL PROFESSION

AI AS A TOOL OF LEGAL PRACTICE

The impact of AI on law practice will be far-reaching. AI has the potential to improve many aspects of legal practice. AI can make us better lawyers. It can open up new career trajectories and enable lawyers to perform sophisticated tasks while freeing them fr
om the more routine or less interesting work.

EXTRACTIVE AI

The most common AI tool used today in law firms and legal departments is based in extractive AI. The tool set known as extractive AI makes predictions, provides analysis, and creates AI products based only on the set of data fed into the AI model. It does not search the internet for information or speculate on topics outside of the chosen data set. Extractive AI is provided a limited set of documents in a law firm's document database, such as the depositions, affidavits, and testimony for a complex trial, or the hearing transcripts from a regulator's meetings on a topic of interest, and the AI is prompted to find answers and information within that set of documents. This functionality maximizes the value of AI for lawyers because it capitalizes on what AI does best: ingest large volumes of information and find, organize, and categorize items of information within the set. Some law firms and corporate legal departments have designed their own extractive AI, but most have worked with AI vendors.

Document and Data Management

The use cases for e
xtractive AI tools are strongest for legal practices trying to manage large data sets (those practices that decades ago filled war rooms with banker's boxes stuffed with paper), primarily litigation and mergers and acquisitions practice. These lawyers can use AI to make much more sophisticated analytical queries about the documents fed into the AI tool. In the past, they could run word searches of the data, but little more.

Legal Research and Analysis

Another growing utility for legal extractive AI is analysis of pre-existing law, rules, regulations, or public statements by regulators. Once the AI has ingested a full complement of this material, lawyers can ask the extractive AI to find patterns relevant to current client needs. For example, lawyers could feed into their AI tool many years of decisions from a single judge's docket and ask the extractive AI to predict how the judge would rule on a current case, citing support from the earlier decisions. The same can be accomplished with labor arbitration decisions from a single arbitrator or with antitrust arguments made by an opponent in an upcoming case. Analysis and production of documents based on a large but specific data set is some of the most interesting work being generated by law firms and legal departments using extractive AI tools.

Contracts

Transactional attorneys can use extractive AI as well, asking for examples of vendor-favoring indemnity terms found in software contracts within the legal department's procurement database. A company's counsel can ask its extractive AI to show every contract in the procurement database that deviates from the company's standard form with regard to limitation on liability, and to list the contracts in order of highest company risk to the lowest. The company could use the AI to search the metadata of these contra
cts to highlight which procurement officers were potentially putting the company at highest risk.

More Traditional Uses of AI Tools

The types of AI that operate physical objects in the natural world may be growing fast, but there are few, if any, direct applications for the legal profession. Still, there are many general workplace uses of AI that may be implemented by legal industry employers. An example of a workplace use for AI is biometric analysis, which may be used by law firms for security purposes, or by their clients for clocking hourly workers in and out.

Decision-making AI

Decision-making AI may be used by many businesses and governments in lending, fraud analysis, the rental industry, and human resources. The legal industry likely participates in hiring decisions that utilize AI (many hiring managers use AI to winnow the job applicant pool into a manageable list of likely prospects) but still relies on human brain power to make most hiring decisions. Human training and judgment are what lawyers are selling to their clients, so ceding advisory decisions to machine learning tools has not caught on among most attorneys. Future AI technology may shift this dynamic.

Optimizing AI

Optimizing AI is everywhere in our modern world, making processes more efficient and products more effective. The optimization AI that checks spelling and grammar is likely used by all lawyers, or should be. AI that suggests word choices is easy and built into present consumer and business computing. Lawyers are using this AI to optimize their processes and wi
ll use more of it, as this AI continues to be built into the baseline tools that all information businesses use.

AI as an Object of Practice

Many lawyers are exposed to AI not only as an internal practice tool, but as the actual subject matter of client work, where, for example, corporations, governments, non-profits, and school systems are using AI to conduct various aspects of business. Lawyers must learn the risks and capabilities of AI to protect their clients and assist them in mitigating their risks.

To this end, the AI Task Force is working with a group of tech-savvy business lawyers in the ABA Young Lawyers Division to publish model contract terms for entities incorporating AI into their business. Lawyers will need guidance and support to help clients sort out the applications and opportunities of AI while minimizing risks. The AI industry is both growing and consolidating, as many players invest in or acquire companies developing and using AI. Lawyers are a leading resource in managing these investments and acquisitions.

AI CHALLENGES: GENERATIVE AI

Generative AI, which produces new outputs based on prompts from users, has the potential to improve many aspects of the practice of law, including increasing the speed at which many tasks can be performed and reducing the amount of time spent on routine tasks. Law schools are increasingly integrating generative AI into their curricula. Generative AI also could reduce the access to justice gap by making legal resources more widely available.

It is important for the legal profession to have an understanding and awareness of the potential uses of this technology, as well as of risks associated with generative AI, including privacy and security risks, the generation of inaccurate content, and intellectual property issues (such as copyright infringement).

As more lawyers use generative AI tools, many law firms have conducted training on AI for their attorneys, contracted for client-safe versions, and promoted the active use of generative AI tools in their practice. Some are using generative AI tools to provide first drafts of documents and to produce correspondence.

Certain issues, however, have slowed the growth of generative AI for lawyers. Well-publicized cases demonstrating improper use of the technology, including imposition of discipline and sanctions against lawyers using generative AI, have led a number of law firms to limit preparation of work product using generative AI. Uncertainty about how the U.S. state rules of professional conduct will apply, and whether and how state supreme courts and their disciplinary agencies will discipline lawyers for use or misuse of generative AI, has slowed adoption of the technology.

AI AND LEGAL ETHICS

More than a decade ago, the AB
A amended the ABA Model Rules of Professional Conduct (“ABA Model Rules”) to reflect the impact of technology on 21st Century law practice.

With every technological transformation comes complex and challenging legal and ethical questions about the practice of law. Being aware of the risks and limitations of generative AI is the first step for legal practitioners in ensuring that the technology can be used safely and responsibly, and in accordance with their professional obligations.

Using the ABA Model Rules as a guide, the following discussion highlights a few of the rules implicated when a lawyer uses generative AI in the practice of law. Although the ABA Model Rules were not written to address specific technologies, they are comprehensive enough to permit the responsible and ethical use of generative AI tools in legal practice. By following the relevant professional conduct rules, an attorney can safely and effectively use generative AI tools to assist clients.

“Hallucinations”

It is important for lawyers to understand that language-driven generative AI is not a search engine with drafting features, but is instead a prediction engine that simply attempts to predict, word by word, an answer to a prompt. The generated answer may contain errors. This does not mean that this technology will not be useful to lawyers. It simply means that lawyers are responsible for confirming the existence, accuracy, and appropriateness of the citations they submit to a court, whether or not a court has special rules about AI.

Competence

Under longstanding professional rules, lawyers are responsible for providing competent representation to clients. When using generative AI in client representation, lawyers should have a reasonable understanding of the capabilities and limitations of the specific generative AI technology that the lawyer might use.

A misunderstanding of the technology can lead to problematic reliance on generative AI results, not only due to fabrications of sources and citations, but also because, as stated by the State Bar of Michigan, JL-155 (October 27, 2023), “an algorithm may weigh factors that the law or societ
104、ydeem inappropriate or do so with a weight that is inappropriate in the context presented.AIdoes not understand the world as humans do and,unless instructed otherwise,its results mayreflect an ignorance of norms or case law precedent.”15Diligence,Consultation,and Communications In addition to compet
105、ence,lawyers must provide diligent,timely work.AI might improve thespeed of delivery and quality of the work product when used correctly.Like the competencerequirement,Rule 1.2 could be triggered by uninformed or imprecise use of generative AI.Practicing lawyers must“reasonably consult with the clie
106、nt about the means by which the clientsobjectives are to be accomplished.”They have a responsibility to explain to clients whattechnology is being used for client matters.To do so,a lawyer needs to understand the risks andopportunities that come with generative AI if the lawyer is using that technol
107、ogy.Confidentiality Attorneys must be careful not to reveal information relating to the representation when usinggenerative AI on the clients behalf without the clients informed consent.Many generative AIplatforms do not provide confidentiality for the prompts input into the tool or the outputsprodu
108、ced by it.Unless they represent otherwise,generative AI companies are likely to use theseprompts for additional training of their AI models.The prompts and the responses they producecould be revealed to the general public either by accident or by specially-designed inquiries to thegenerative AI tool
. Some U.S. lawyers using generative AI for client work are now contracting for generative AI tools that do not use the lawyer's prompts as further training for the model. The vendors for these tools claim that they eliminate prompts and AI tool inputs and outputs after use, so that the confidentiality of all of this information is protected. Before undertaking client work using a generative AI tool, a lawyer must understand how information submitted in a prompt will be used and shared, and also where it will be stored. The data security of AI model companies and law offices using these products can put information relating to a client's representation at risk.

Client Billing

AI technology could affect billing practices. A brief written by prompting a generative AI program might take substantially less time to complete than one written directly by a lawyer, even after the lawyer has checked all the citations
. Profiting from the time savings may be a violation of the ABA Model Rules if the lawyer promised to bill clients by the hour but does not provide the billing discount that occurred due to generative AI-created efficiencies. Productive use of generative AI may lead to more flat-fee or retainer agreements and less pure hourly billing.

Deepfakes and Candor to the Court

Generative AI can be used to produce writings, audio, pictures, video, and other material that can lead to false impressions, such as creating a faked video of a person buying drugs or robbing a store, or (as in a recent Maryland case) a faked recording of a rival making socially unacceptable statements. Lawyers and clients could misuse generative AI to create misinformation, disinformation, deepfakes, and other made-up audio, video, and photography. AI can create false evidence that could lead to ethical violations if it is used at trial or in settlement discussions. It also is improper for a lawyer to actively benefit from the "liar's dividend" by claiming that real evidence has been faked. As generative AI-produced deepfakes become easier to create and harder to disprove, lawyers will need to take additional care in using evidence provided by clients and others. Where a client or a client's agent has used generative AI to create deepfakes or other misinformation, that client's lawyer has an obligation to question the authenticity of the evidence before it is offered to a court or in settlement negotiations with an opposing party.

Responsibility for
Lawyers' Agents

The ABA Model Rules confer an affirmative duty on lawyers to supervise the professional conduct of employees and agents. Therefore, lawyers are responsible for supervising any person using generative AI to create work product, to ensure the accuracy and reliability of all aspects of the content, just as a lawyer would for content drafted by an associate or a paralegal. But what if the lawyer's agents (associates, paralegals, or assistants) were using the technology on the lawyer's behalf? The lead lawyer still holds the responsibility to understand what is being done for the client and what the risks are to the lawyer and the clients.

16 ABA Model Rules of Professional Conduct, Formal Ethics Opinion 512, Generative Artificial Intelligence Tools, ABA Standing Committee on Ethics and Professional Responsibility, July 29, 2024.

STATE BAR ETHICS RULES AND GUIDANCE

AI ethics opinions outline how lawyers
can implement AI in their practices while continuing to meet their professional obligations.

CALIFORNIA: State Bar of California Standing Committee on Professional Responsibility and Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law

DISTRICT OF COLUMBIA: Ethics Opinion 388, Attorneys' Use of Generative Artificial Intelligence in Client Matters
"Most lawyers are not computer programmers or engineers and are not expected to have those specialized skills. As technology that can be used in legal practice evolves, however, lawyers who rely on the technology should have a reasonable and current understanding of how to use the technology with due regard for its potential dangers and limitations. So it is with generative AI technology."

FLORIDA: Florida Bar Ethics Opinion 24-1 (January 19, 2024)

KENTUCKY: Ethics Opinion KBA E-457 (March 15, 2024)

MICHIGAN: State Bar of Michigan Ethics Opinion JI-155 (October 27, 2023)
"Judicial officers, like lawyers, have an ethical obligation to maintain competence with and further educate themselves on advancing technology, including but not limited to artificial intelligence (AI)."

NEW JERSEY: Legal Practice: Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers

PENNSYLVANIA: Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and Philadelphia Bar Association Professional Guidance Committee, Joint Formal Opinion 2024-200, Ethical Issues Regarding the Use of Artificial Intelligence

WEST VIRGINIA: West Virginia Office of Disciplinary Counsel (ODC) released a legal ethics opinion regarding the use of AI (June 26, 2024)

Courts are at a critical crossroads for the use of AI technologies by the legal profession. Judges, for example, are increasingly using AI in court administration and
in the criminal justice system. While ethical and evidentiary concerns are making headlines, AI also promises to provide new solutions to improve access to justice and the courts. The AI Task Force has been active in addressing the significant impact of AI on the judicial system, providing a series of educational programs on AI technologies, generative AI tools, and deepfakes to equip judges, court staff, and legal professionals with the knowledge and tools necessary to address AI-related challenges effectively. Through discussions and educational offerings, a working group dedicated to issues specific to the courts has identified several critical insights and challenges.

Combatting Deepfakes

The issue of deepfakes (realistic but fake digital records) remains a significant concern. The ability of AI to conjure up realistic but completely fabricated text, sound, graphics, and video means it has become increasingly difficult to spot these fakes. There is a growing need for reliable tools and standards, such as C2PA, to discern fact from fiction in the digital realm and to authenticate the source and legitimacy of digital records. Lawyers, judges, and technology experts are working on several fronts to address this problem.

Judicial Responses to AI

Judges at all levels of the judicial system across the country have issued dozens of judicial AI standing orders, imposing widely varying requirements on lawyers' use of AI. These orders reflect judges' concerns about protecting confidential client information and ensuring that lawyers fulfill their ethical obligations. However, they are creating an array of inconsistent and often vague rules that may be confusing and difficult to comply with. This proliferation of judicial standing orders could have the unintended consequence of discouraging the use of generative AI tools by lawyers and self-represented litigants, and potentially hindering the development of innovative AI-based solutions to improve court administration and access to justice. The use of generative AI in judicial chambers has sparked diverse opinions, highlighting the need for ongoing debate and discussion.
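The record-authentication need flagged under "Combatting Deepfakes" above comes down to verifying that a digital record contains the same bytes that were originally preserved. The following is a minimal sketch of that idea using cryptographic hashing, one building block beneath provenance standards such as C2PA (it is not C2PA itself); the record contents and names below are hypothetical.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 digest of a record's raw bytes, as hex."""
    return hashlib.sha256(data).hexdigest()

def is_unaltered(record: bytes, digest_at_preservation: str) -> bool:
    """A record passes this check only if its digest matches the digest
    captured when the record was first preserved; any change to the
    bytes produces a different digest."""
    return sha256_digest(record) == digest_at_preservation

# Hypothetical example: a digest recorded at the time of preservation.
original = b"hypothetical exhibit: body-cam footage bytes"
digest_at_preservation = sha256_digest(original)

assert is_unaltered(original, digest_at_preservation)               # intact copy passes
assert not is_unaltered(b"altered bytes", digest_at_preservation)   # edited copy fails
```

Hashing alone only shows that bytes have changed, not who changed them or whether the original capture was genuine; that is why provenance standards such as C2PA layer signed metadata about the capture device and edit history on top of this primitive.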
Moreover, the potential of data analytics in the courtroom presents both exciting opportunities and complex ethical considerations.

Proposed Changes to the Rules of Evidence

Among several initiatives to address the use of AI in the courtroom, Judge Paul Grimm (ret.) and Professor Maura R. Grossman, both of whom are members of the AI Task Force, have proposed amendments to the Federal Rules of Evidence to address the issues created by AI technologies. Judge Grimm and Dr. Grossman have expressed their view that the current rules for authenticating and admitting AI-generated or potentially AI-generated evidence are sufficiently adaptable to manage issues arising from deepfakes without needing a higher standard of proof for admissibility. However, they recommend the adoption of some procedural safeguards and a stronger judicial gatekeeping role for situations where AI-generated or potentially AI-generated evidence is at issue.

AI AND THE COURTS

Upcoming AI and the Courts Webinars

To support ongoing discussions about AI challenges and opportunities, the AI Task Force plans to present educational programs to provide valuable insights to judicial officers, their staff, and other legal professionals. Anticipated topics include:

Judicial standing orders related to court filings prepared using generative AI. This debate-style webinar will introduce lawyers and judges to online compilations of existing standing orders and related scholarship, fostering a deeper understanding of the varying approaches to this issue.

Admissibility and authenticity of AI-generated evidence, particularly deepfakes. Evidence scholars and members of the Advisory Committee on the Federal Rules of Evidence will be invited to discuss the strengths and weaknesses of different approaches and the Committee's current "wait-and-see" decision. This session will also cover tools that are currently available to judges for addressing AI evidence and deepfakes.

Courts experimenting with generative AI tools in a court-sponsored sandbox. This technical session will address the pros and cons of different model types, retrieval-augmented generation (RAG), fine-tuning, and other relevant issues. It will feature technical experts and representatives from court systems that have been experimenting with these tools.

AI Resources for the Judiciary

Comprehensive education and resources on AI-related issues are necessary for the judiciary and court staff to understand the impact of AI on the judicial system. The AI Task Force, through its dedicated Working Group on AI and the Courts, has a strong commitment to equipping judges, court staff, and legal professionals with the tools they need to navigate the rapidly evolving landscape of AI in the legal field.

Summary of
the Grimm-Grossman Rule Proposals to Amend FRE 901 to Address AI-Generated Evidence and Deepfakes

Hon. Paul W. Grimm (ret.) and Professor Maura R. Grossman have proposed an amendment to Federal Rule of Evidence 901(b)(9) and a new Federal Rule of Evidence 901(c) with the following provisions:

Federal Rule of Evidence 901(b)(9) for AI Evidence
Purpose: Address AI as evidence when the parties are in agreement that the evidence is the product of an AI system. Current Federal Rule 901(b)(9) specifies that evidence about a process or system must demonstrate accuracy to be authenticated.
Proposed Changes:
1. Terminology Update: Replace the term "accurate" with "valid and reliable" to address the nuance that evidence can sometimes be accurate but not consistently reliable (e.g., a broken watch is accurate twice a day).
2. AI-Specific Requirements: For known AI-generated evidence, the proponent must describe the software or program used and show that the software produced valid and reliable results in the specific instance.

New Federal Rule of Evidence 901(c) for Potentially Deepfake Evidence
Purpose: Address the challenge of authenticating electronic evidence suspected to be fabricated or altered, particularly with the rise of AI-generated deepfakes.
Proposed New Provisions:
1. Burden of Proof: The party challenging the evidence must demonstrate that it is more likely than not fabricated or altered.
2. Proponent's Responsibility: If the challenge is successful, the proponent must then prove that the probative value of the evidence outweighs its likely prejudicial effect.
3. Application: The rule applies to all computer-generated or electronic evidence, not just those typically authenticated under Rule 901(b)(9).

Emphasis on Validity and Reliability
The proposal stresses the need for a demonstration by the proponent of AI evidence of both validity and reliability, paralleling concerns in Federal Rule of Evidence 702 regarding expert testimony, as juries cannot easily discern deepfake inauthenticity. Procedural safeguards at the admissibility stage are crucial due to AI's potential to produce unreliable event representations. Judge Grimm
and Dr. Grossman urge the application of the Daubert standard (or the requirements set forth in Federal Rule of Evidence 702) to evidence that is known to be the product of an AI system.
Suggestion for Additional Safeguards:
The proposal supplements these procedural requirements when the evidence is of disputed origin. It imposes a reverse preponderance-of-the-evidence showing by the party challenging the evidence as deepfake, and a balancing of probative value versus prejudice by the Court under Federal Rule of Evidence 403.

IMPACT OF AI ON THE LEGAL PROFESSION AND THE COURTS
U.S. SUPREME COURT CHIEF JUSTICE JOHN ROBERTS

Chief Justice John Roberts highlighted the expected impact of AI on the legal profession and the Courts in his 2023 Year-End Report on the Federal Judiciary. He opined that: "human judges will be around for a while. But with equal confidence I predict that judicial work, particularly at
the trial level, will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them."

"Machines cannot fully replace key actors in court. Judges, for example, measure the sincerity of a defendant's allocution at sentencing. Nuance matters: Much can turn on a shaking hand, a quivering voice, a change of inflection, a bead of sweat, a moment's hesitation, a fleeting break in eye contact. And most people still trust humans more than machines to perceive and draw the right inferences from these clues."

"Appellate judges, too, perform quintessentially human functions. Many appellate decisions turn on whether a lower court has abused its discretion, a standard that by its nature involves fact-specific gray areas. Others focus on open questions about how the law should develop in new areas. AI is based largely on existing information, which can inform but not make such decisions."

"Rule 1 of the Federal Rules of Civil Procedure directs the parties and the courts to seek the "just, speedy, and inexpensive" resolution of cases. Many AI applications indisputably assist the judicial system in advancing those goals. As AI evolves, courts will need to consider its proper uses in litigation. In the federal courts, several Judicial Conference Committees, including those dealing with court administration and case management, cybersecurity, and the rules of practice and procedure, to name just a few, will be involved in that effort."

WHAT ROLE CAN ARTIFICIAL INTELLIGENCE PLAY IN ADDRESSING THE JUSTICE GAP IN AMERICA?

AI, and particularly generative AI, can improve access to justice. The technology can be developed to provide reliable and accessible information for pro se litigants and much-needed support for legal services attorneys. With trustworthy and responsible generative AI tools, individuals without legal representation can have the ability to get basic legal information to inform them about options when legal issues arise. AI tools could also alleviate the repetitive, labor-intensive, and sometimes tedious tasks
that can often fill a legal advocate's day, particularly with high-volume caseloads in most non-profit legal services offices. The access-to-justice crisis in America is huge. Addressing it requires solutions at a scale proportionate to the magnitude of the problem. Generative AI, with its capability to generate comprehensive responses to plain-language prompts, has the potential to improve access to justice at a scale that prior interventions have been unable to achieve. It has the potential to democratize the law, making it accessible to, and usable by, the people the justice system is intended to serve. The challenge for the legal profession is realizing that potential while managing the risks inherent in current AI systems.

The National Center for State Courts estimates that both parties are represented by lawyers in only 24 percent of state court civil cases.

The Legal Services Corporation estimates that 92 percent of the substantial civil legal problems of low-income Americans receive no or inadequate help.

The World Justice Project's 2023 Rule of Law Index ranks the United States 115th out of 142 countries for the accessibility and affordability of civil justice. Among the 46 wealthiest countries
in the world, the United States ranks 46th.

Federal funding for the Legal Services Corporation, the nonprofit established by Congress in 1974 to fund civil legal aid and now the largest legal aid funder in the United States, amounts to less than what Americans spend every year on Halloween costumes for their pets.

The matters in which Americans lack access to legal representation often involve the most basic of human needs: shelter (protection against unlawful evictions and foreclosures); personal safety (protection orders against abusers); family stability (child custody, child support, guardianships, and adoptions); and financial subsistence (job security, wage theft, and access to benefits programs). These are high-volume, high-stakes matters. Each year, tens of millions of Americans have to navigate the legal system by themselves. They confront a system created by lawyers for lawyers, based on the assumption that everyone has a lawyer. The system was never designed for unrepresented individuals, who now appear in more than three-quarters of civil cases in state courts, as the intended users.

AI AND ACCESS TO JUSTICE

The magnitude of the access-to-justice crisis in the United States has been well documented. The
access-to-justice problem is a problem of scale. Solutions to it must be commensurate with the magnitude of the problem: the solutions must be at scale. Generative AI has the potential to improve access to justice in two ways: (1) by increasing the efficiency and productivity of legal services and pro bono lawyers, so that they can assist many more people with higher levels of service; and (2) by making accurate, usable, and understandable legal information and assistance easily available to individuals with civil legal problems. These technology-based tools can improve access by reallocating legal staff time to focus on more complex legal needs and by offering information that may prevent some legal needs from arising in the first place. To realize this potential, the legal profession will need to address four high-priority areas:

Training and educating the access-to-justice community in the use of AI tools. Currently, there is a wide range of familiarity and comfort levels with generative AI among legal services providers, pro bono lawyers, and other justice system stakeholders. Many seem to know only about the risks AI tools can pose, with little understanding of the benefits of AI and the ways to manage its
risks. The community would benefit from widely accessible training and education in the different AI tools available, their capabilities, their limitations, and the responsible use of AI for different purposes.

Publicizing actual use cases. Perhaps the most effective way to educate the access-to-justice community about the responsible use of generative AI is to publicize cases of actual use by trusted, competent, and innovative community members, and by scholars working with the community. Use cases make training concrete and practical. Training program faculty should include community members who are themselves actual users and who can engage with training participants to explore use cases in detail. Two recent publications provide excellent examples of helpful use cases.1

Developing quality standards. As Dr. Margaret Hagan of Stanford Law School has noted, the legal domain currently lacks well-defined quality metrics for assessing the performance of AI tools. Quality evaluation is particularly important for assessing tools that individuals might use for help with their own legal problems. Dr. Hagan put the challenge this way: "What are concrete criteria by which we might evaluate the quality of a technology provider's response when someone asks . . . for help for an eviction notice they've received, a debt lawsuit they're facing, or a divorce they want to file? How can we determine if there are benefits, problems, harms, or other quality concerns with the response the provider gives to the person?"2 Dr. Hagan has proposed an initial set of specific criteria by which to judge quality. Her work is important. The access-to-justice community needs standards of the kind Dr. Hagan is developing.

1 C. Chien & M. Kim, Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Justice Gap, https:/ (Apr. 11,
2024); R. Brescia & J. Sandman, "Artificial Intelligence and Access to Justice: A Potential Game Changer in Closing the Justice Gap," Artificial Intelligence: Legal Issues, Policy, and Practical Strategies (ABA 2024).
2 M. Hagan, "Good AI Legal Help, Bad AI Legal Help: Establishing Quality Standards for Responses to People's Legal Problem Stories," https:/ (Jan. 20, 2024).

Making reliable legal AI tools accessible to and affordable by legal services providers and public interest organizations. Subscription costs for the best and most reliable legal AI tools pose a risk of making those tools unaffordable to and inaccessible to the access-to-justice community. Those costs may have the unintended consequence of widening the justice gap by making powerful new legal tools available to clients of means and their lawyers that are not available to low- and moderate-income people and their lawyers. Mitigating disparities in the justice system means ensuring affordable access to AI legal tools. Affordable technology solutions should be raised with and addressed by legal AI developers and the legal community, particularly with law firms that have the financial means to provide assistance to legal services providers and public
interest organizations.

Initiatives addressing the use of AI to improve access to justice must be closely coordinated with the courts. The implications of AI in the administration of justice are complex. As courts grapple with these issues, it is important to ensure that they consider the needs of self-represented litigants. While there are certainly risks associated with using AI in the context of legal services, these tools offer tremendous possibilities for reducing the access-to-justice crisis in the United States in significant ways.

INTERNATIONAL DEVELOPMENTS

G7 Statement on AI

At a meeting of representatives of the Bar Associations and Law Societies of the G7 countries (G7 Bars) on October 30, 2023 in Paris, the G7 Bar leaders concluded that generative AI is a potentially disruptive technology that could profoundly change the legal profession and access to legal services. The G7 Bars stated that they are aware of the need to assess its implications for the practice of the legal profession, the operation of judicial systems more generally, ethical and professional rules that might be affected, and training to help lawyers understand the benefits and limitations of AI. The G7 Bars are committed to participating in relevant national and international bodies and initiatives in order to draw attention to the core values of the legal profession, proper administration of justice, and the right to a fair trial. As the United States is a statement signatory, the AI Task Force provided input on the draft of the G7 Bars' Statement on AI. On behalf of the United States, ABA President Mary Smith signed the G7 Bars' Statement on AI on March 21, 2024, along with representatives from all the G7 Bars. Read the G7 statement here.

AI technology presents both practical and pedagogical challenges for legal education. Law schools
must raise students' awareness of AI's capabilities and limitations and how those implicate ethical obligations. Considering the significant impact that AI could have on the legal profession, attorneys and legal professionals will need to understand how it works, how it is developed and used, what advantages it can bring, such as increasing efficiency and access to legal resources, the risks it can create, and the legal and ethical issues that may arise with its use. Law schools play an important role in ensuring that lawyers are educated about technology. They must not only instill traditional lawyering skills such as problem-solving and judgment, but they also need to acknowledge and support the reality that lawyers of the future will need to incorporate AI tools into legal service delivery. Law schools will need to integrate training on the effective use of technology tools into their curricula.

The AI Task Force, through its working group on legal education, surveyed law school administrators and faculty to gain insights on their preparedness and plans for integrating AI into curricula. The results, reflected in the AI Task Force's AI and Legal Education Report, show that law schools are adapting to these developments and are increasingly incorporating AI into their curricula. Specifically, over half (55%) of the law schools that responded to the survey reported that they offer classes dedicated to teaching students about AI. Moreover, an overwhelming majority (83%) reported the availability of curricular opportunities, including clinics, where students can learn how to use AI tools effectively. In addition, 85% of responding law schools contemplate changes to their curricula in response to the increasing prevalence of AI tools. As the legal landscape continues to evolve with technology, law schools should
continue to prepare students for the future of law practice, ensuring they are equipped with not only legal knowledge, but the ability to leverage technology to meet the changing demands of the profession and the public.

AI AND LEGAL EDUCATION

AI and Legal Education Survey Results 2024

The survey was completed by 29 law school deans or faculty members and found that law schools are increasingly incorporating AI into their curricula.

As with each technological development before it, AI is introducing new legal risks. These encompass risks to the development of AI, as well as risks resulting from, or caused by compromises in, the development, deployment, or use of AI. More specifically, two types of AI risks are important to assess:

AI design and development risks, which include cybersecurity, privacy, and bias, as well as the accuracy, reliability, and safety of AI applications, products, services, and capabilities.
Risks caused by the use of AI, including:
Intellectual property (IP), unfair trade practices, and fraud
Trustworthy and responsible AI, human oversight, accountability, and transparency
Role in creating and spreading disinformation

During the year, the AI Task Force has addressed these risks from multiple vantage points, including an in-depth look at the NIST AI Risk Management Framework to advance responsible and trustworthy AI, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and the newly established U.S. AI Safety Institute. Further, it has addressed responsible AI governance regimes, implications for intellectual property, cybersecurity and privacy, and deepfakes and their impact on the courts and on society more broadly. Entire programs were devoted to AI governance and risk management and the role of lawyers. AI Task Force members spoke about AI risks at numerous conferences, including the SciTech 5th Annual AI & Robotics National Institute and the RSA security conference. In recommending guardrails for the development and use of AI, the ABA House of Delegates took a significant step at the 2023 mid-year meeting with the adoption of Resolution 604, which
urged human oversight and control, accountability, and transparency for AI.

AI has the potential to help anticipate and even prevent many types of losses. Insurance providers are developing solutions for some of the more salient generative AI risks: cybersecurity, privacy, and fraud risks; intellectual property and copyright infringement; product liability and performance issues; regulatory and compliance risks; and vendor and supplier risks.

The AI Task Force's Risk Management working group is conducting a review and assessment of generative AI applications to identify risks and limitations disclosed in their terms of service (ToS), as well as in related documentation such as, for example, "system cards," "score cards," "transparency reports," and privacy impact assessments. The results of the research and analysis will provide essential information that prospective users of a new technology tool will need to understand
the potential risks of using the tool. This project is ongoing, and a white paper with the findings is anticipated in the upcoming bar year.

AI RISK MANAGEMENT

RISK MANAGEMENT AND MITIGATION

The AI Task Force has compiled and curated an extensive collection of authoritative and scholarly research, analyses, publications, and programs from private sector organizations, academia, government, and the courts that identify emergent AI risks. These are all available on the AI Task Force website, and the list is updated frequently.

PRIVACY

AI systems rely on staggering amounts of data, including personal data, to train algorithms and enhance performance. While data (including personal data) might well be a company's most important asset, driving business success, the resulting privacy issues also must be addressed. For example:

Data-powered AI is being used to make predictions and decisions about individuals, both as consumers and as employees. Such use could have privacy implications but also raise bias and discrimination issues where the AI was trained on data that reflects harmful biases (though AI conversely also might be a way to counteract bias).

Generative AI expands and accelerates AI capabilities, moving it beyond automation and pattern recognition, resulting in increased productivity, efficiency, and countless other benefits. Yet those same abilities also can be used to sort through hundreds of thousands of emails, texts, documents, and more to identify individuals such as whistleblowers, potential targets of law enforcement investigations, and other types of surveillance.

Scraping personal data from publicly available websites can be done on a massive scale, raising potential privacy risks, e.g., data repurposing (using data beyond its original purpose); data spillovers (collecting data on individuals who are beyond the identified target group and do not know data has been collected or are unable to request deletion, correction, etc.); creating digital dossiers that could be used to generate content geared to an individual's inclinations (from customized marketing to influencing the way the person votes); and more.

In some cases, no law exists to restrict, or guide, what is being done by companies and government. Where there is law, it is sometimes silent on how personal data should be collected and used in a way that preserves privacy. In other cases, the law lacks teeth, with the financial incentives to act otherwise outweighing the risk of technical violations.

CYBERSECURITY

AI can be both a sword and a shield when it comes to cybersecurity.

Generative AI creates new cybersecurity risks of hacking and fraud as it changes the nature of cybersecurity attacks. Some of the most serious types of attacks of the past decade will be amplified and exacerbated by AI as it mimics the kinds of attacks humans carry out (but with increasing sophistication and speed) and may eventually engage in attacks that humans can't yet envision.

At the same time, AI techniques are expected to enhance cybersecurity both by assisting human system managers
to monitor, analyze, and respond to adversarial threats to cyber systems, while automating certain routine tasks.

INTELLECTUAL PROPERTY (IP): FOCUS ON COPYRIGHT

A prevalent issue over the past year involves copyright disputes arising from generative AI. Generative AI pulls data from the internet and is being used to produce text, songs, pictures, videos, and other content based on pre-existing materials. Many generative AI programs pull this data without regard for the copyright status of the source material. Consequently, in many instances, creating a new work using generative AI presents a significant risk of copyright infringement.

Several court cases involving copyright infringement by AI are currently winding their way through the court system. Getty Images has sued multiple generative AI companies for infringing its copyrights in images used to train generative AI models, which it has sought to prove by demonstrating that the AI applications have simply spit out the same Getty Images pictures, watermark and all. The New York Times has sued OpenAI, Microsoft, and others both to stop the use of copyrighted material in generative AI model training and to stop allowing a generative AI model to summarize those copyrighted materials, which it argues redirects users from visiting the New York Times website. Artists have sued generative AI companies for feeding their art into the AI model to create generated images "in the style" of those artists. These cases and many more are testing the competence and creativity of lawyers
litigating how generative AI models work and what harm they might cause.

There is also the question of whether a new work created by generative AI is protected under copyright laws. Initially, there was some debate among copyright lawyers about whether content produced by generative AI was protectable. However, the U.S. Copyright Office has issued guidance stating that works generated by AI, and not by humans, are not considered copyrightable material. The Copyright Office continues to study the impact of AI on copyrights.

A related issue concerns the content license that users grant to the providers of AI tools through use of the tools. AI providers frequently include in their terms of use a license to use all user-input content for broad purposes. For example, the terms of use for OpenAI's ChatGPT service state that OpenAI "may use Content including all user inputs and outputs to provide, maintain, develop and
improve our Services." Consequently, by using the tool, users automatically grant OpenAI a broad copyright license in their Content. Additionally, since there is no confidentiality obligation associated with this disclosure, any trade secret protection or confidentiality of the user's Content may be compromised. Notably, under the current terms of use, ChatGPT users may opt out of allowing OpenAI to use their Content for model training purposes, but there is no opt-out provision for the other aspects of the user-granted license. Users should therefore pay attention to the applicable terms of use before using any AI tool or service. Read the report here.

28 AI can detect new cyber threats, combat bots (automated threats), predict the risk of breaches (by creating an IT asset inventory and assessing vulnerabilities), and provide better endpoint protection by flagging events that deviate from an established baseline and taking action.

INSURANCE

In an entirely different field, lawyers and insurance carriers are reviewing how to protect people from AI that operates equipment in the physical world. Standard methods of considering traffic accidents are changed when a vehicle is being driven by an AI model without a human involved.

Mitigating the Risks of Generative AI Through Insurance

The potential benefits of generative AI are manifold. But how could this new tool possibly go wrong? And when it does, can insurance mitigate the damage?

Potential insurance solutions for some of the more salient risks of generative AI are outlined below. Generative AI risks, like other types of losses and liabilities, can trigger multiple lines of insurance, so when thinking about placing insurance or making a claim, it is important to consider the company's insurance portfolio as a whole. The risks are of two types: injury to third parties (liability
risk) or loss suffered by the business itself (first-party risk). Some policies, like cyber-risk policies, combine both first- and third-party coverage.

Cybersecurity, Privacy and Fraud Risks
Bad actors can deploy generative AI to create highly convincing fake content, including images, videos, and text, thereby enabling new forms of fraud and cybercrime. Generative AI also increases the risk of data leaks and privacy violations.

Recommended Coverage:
- Cyber insurance with affirmative coverage for AI systems' wrongful collection or unintentional data leaks affecting third parties. Employment practices liability ("EPL") policies also may respond when the data involves actual or prospective employees.
- Crime/fraud insurance to cover losses from AI-enabled fraud schemes.

Intellectual Property and Copyright Infringement
Some generative AI models were trained on copyrighted or otherwise IP-protected data without proper licensing, resulting in lawsuits alleging infringement.

Recommended Coverage:
- Technology errors and omissions (Tech E&O) insurance to cover claims of IP/copyright infringement from generative AI outputs.
- Potential specialty coverage to cover costs of defending against patent infringement claims.

Product Liability and Performance Issues
Generative AI systems can produce flawed or biased outputs, leading to product failures, financial losses, or discrimination claims if used in critical applications. The question of whether generative AI systems are a product or a service is the subject of ongoing litigation.

Recommended Coverage:
- Product liability insurance for generative AI providers to cover losses from faulty AI systems.
- Performance guarantee insurance, like Munich Re's aiSure, to insure against AI model failures.

Vendor and Supplier Risks
Companies also should consider the risks presented by vendors and suppliers and the contractual mechanisms and insurance coverage available to transfer those risks.

Recommended Coverage: Companies should consider including indemnity and insurance procurement clauses in their vendor and customer contracts, to ensure that they transfer risks appropriately. Those clauses also can ensure that those risks are properly secured through counter-parties' insurance, sometimes referred to as "other people's insurance."

AI GOVERNANCE

Overview: AI Governance Recent Developments

There are a variety of legal AI governance tools, including existing and proposed laws and regulations (domestic and international) and best practices published by government and nongovernmental institutions. This summary provides a brief overview of some recent activity in this rapidly evolving area of AI governance.

At the federal level, in 2023 the White House issued Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO), which followed the White House Blueprint for an AI Bill of Rights. The blueprint had as its priorities the development of safe and effective systems; protection from discrimination; data privacy protections; transparency (i.e., notice of the use of an automated system); and the opportunity to opt out and interact with a person instead of an automated system. The EO set out as its policies and goals that AI should be safe and secure (which requires robust, reliable, repeatable, and standardized evaluations of AI systems); AI systems should be tested after they are deployed; developers and institutions should minimize security risks; the government should support responsible innovation and competition; and AI should support the creation of jobs, advance equity and civil rights, and operate transparently.

No federal agency is directly responsible for regulating AI. However,
the Department of Commerce is playing a significant role, with the Secretary of Commerce having most of the responsibility for implementing the EO. AI presents issues and risks in areas already administered by several agencies. The Federal Trade Commission (FTC), Department of Justice (DOJ), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC) released a joint statement clarifying that their "agencies' enforcement authorities apply to automated systems" and pledging to use their enforcement powers to combat discrimination and bias in automated systems.

Although it does not issue regulations, NIST has an important role in the government's research and promulgation of standards for AI. NIST published an AI Risk Management Framework. This voluntary framework describes what risk means, how to measure it, and how to evaluate where it will present as an issue. It defines trustworthy AI as AI systems that are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

1 This section is based on the chapter "Governing AI in a Changing World," by Cynthia H. Cwik, Karen E. Silverman, and Joseph Blass, in Cwik, C.H., Suarez, C.A., & Thomson, L.L. (Eds.), Artificial Intelligence: Legal Issues, Policy and Practical Strategies. American Bar Association (Aug. 2024).
2 Exec. Order No. 14,110, 88 Fed. Reg. 75,191 (Oct. 30, 2023).
3 White House Off. of Sci. & Tech. Pol'y, Blueprint for an AI Bill of Rights (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
4 The blueprint also described design principles that would support these goals: careful design from the outset of system development; public oversight; independent testing and reporting; and personalization to the individual (in terms of data collected, control of that data, and explanations given).
5 FTC, Joint Statement on Enforcement Efforts against Discrimination and Bias in Automated Systems (Apr. 25, 2023).
6 NIST, AI Risk Management Framework (Jan. 26, 2023), https://www.nist.gov/itl/ai-risk-management-framework.

Safety, security, resiliency, explainability and interpretability, privacy enhancement, and fairness are all considered subgoals of validity and reliability. NIST also houses the AI Safety Institute, which focuses on studying and addressing the risks of AI, initially focusing on the priorities identified in the
EO.

In 2021 Congress passed the National AI Initiative Act of 2020. This act directed agencies and government funders to allocate resources to AI research and focus on the impacts and potential uses of AI. It also established the National AI Advisory Committee (NAIAC), comprised of experts with a broad and interdisciplinary range of AI experience, to advise the President on AI issues. NAIAC issued its Year One report in May 2023, as well as reports and recommendations on related topics, including foundation models and generative AI.

Dozens of federal bills related to the use of AI have been introduced in the 118th Congress. The following issues have been addressed in these bills: having government catch up with technology by increasing data literacy education; protecting national security; increasing military readiness; modernizing the government's AI resources (including establishing commissions to study the problem);
increasing transparency and accountability in the use of AI systems; regulating the use of data to protect consumer privacy; regulating how children use social media and AI systems (and vice versa); and regulating the use of AI and deepfakes.

States have passed laws governing the use of AI. In May 2024 Colorado passed the Colorado AI Act, the first state law to establish broad requirements for developers and deployers of "high-risk artificial intelligence systems" (defined to include any AI system that, "when deployed, makes, or is a substantial factor in making, a consequential decision"). The law goes into effect in February 2026 and will require developers and deployers of high-risk AI systems to take reasonable care to protect consumers from algorithmic discrimination, and it establishes disclosure requirements for AI systems that are intended to interact with consumers.

A dozen states (Alabama, California, Colorado, Connecticut, Illinois, Louisiana, New Jersey, New York, North Dakota, Texas, Vermont, and Washington) have enacted laws requiring their governments to study the impacts of AI and improve their institutional knowledge of AI.

7 NIST, U.S. Artificial Intelligence Safety Institute, https://www.nist.gov/aisi (last visited May 23, 2024).
8 15 U.S.C. ch. 119, § 9401 et seq.
9 Brennan Ctr. for Just., Artificial Intelligence Legislation Tracker, https://www.brennancenter.org/our-work/research-reports/artificial-intelligence-legislation-tracker (last updated Jan. 5, 2024).
10 Some bills that have advanced include the AI Leadership Training Act, S. 1564, which would require the U.S. Office of Personnel Management (OPM) to establish an AI training program for federal employees; the AI Training Expansion Act, H.R. 4503, which would expand AI training in the executive branch; the TAG Act, S. 1865, which would require agencies to be transparent when using automated systems to make decisions or interact with the public; the AI Accountability Act, H.R. 3369, which would direct the Department of Commerce to study AI accountability; and the AI Lead Act, S. 2293, which would require agencies to establish chief AI officers. All these bills were introduced
in the 118th Congress (2023).
11 See SB 24-205 (Col. 2024).
12 See SB 24-205 (Col. 2024).

Utah also passed the Artificial Intelligence Policy Act, which imposes transparency obligations on certain entities' use of generative AI and limits the ability of entities to claim as a legal defense that generative AI was to blame for violations of consumer protection laws. SB 149 (Utah 2024).

Other AI laws have been enacted outside the United States, including the European Union's passage of the European AI Act. The AI Act sets rules governing the development and deployment of high-risk systems: developers must take steps to mitigate risks, ensure high-quality datasets, document their systems, and have meaningful human oversight. Developers must notify people when they are interacting with a chatbot or biometric or emotion recognition systems. AI-generated content and deepfakes must be labeled as such, and the ability to detect AI-generated media should be baked into the system that generates such media.

13 Several other states have AI bills pending, including California's proposed SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. In addition, some states have also taken action through executive orders, although these mostly direct state agencies to study the problem, often with the goal of determining how best to integrate AI into government practices. Wisc. Exec. Ord. 211; Okla. Exec. Ord. 2023-24; Cal. Exec. Ord. N-12-23; N.J. Exec. Ord. 346; Va. Exec. Ord. 5; Pa. Exec. Ord. 2023-19.
14 Press Release, Eur. Parl., Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI (Dec. 9, 2023), https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.

RESPONSIBLE AI (RAI): PROOFING AI SYSTEMS FOR FUTURE AI GOVERNANCE REGIMES

Common Responsible AI Principles

The themes emerging from AI governance efforts reflect common "responsible AI" principles. These principles will constitute best practices for developing and deploying AI systems, regardless of whether the law requires it. While there are many sets of RAI principles available, and they may vary in their articulation and how they prioritize attributes of responsible technology, they generally share certain fundamental tenets. These include:

Human-centeredness: AI aligns with essential human rights and needs, such as autonomy. Responsible AI protects
the rights of individuals impacted by the technology, including fairness and privacy, human agency and dignity, and, more generally, a commitment that the technology works for the benefit of humans, not the other way around.

Accountability: Humans are explicitly responsible for the impact of AI. Accountability is crucial in a legal context to uphold due process rights and address any harm that results.

Safety and Security: AI does not harm users or allow for security breaches or data leaks. Related concepts include robustness, resiliency, accuracy, and quality.

Transparency and Explainability: The supervisors and subjects of AI decision-making can understand how AI works. Some AI systems operate as a "black box," making it hard to trace outputs back to inputs or logic. Transparency can relate to technology or to the human processes surrounding it. Explainability involves effectively communicating how an AI model functions to humans.

Ethics and Fairness: AI models adhere to prevailing standards, beyond whatever the law requires in a given place and at a given time. Related concepts include privacy and freedom from bias, manipulation, and security risks.

RAI principles are aspirational, and their implementation requires
a blend of human processes and technical efforts to select, design, and monitor AI in order to align them with human, social, political, cultural, and legal values.

ABA ENTITIES: COLLABORATION ACROSS THE ABA

Long before the launch of ChatGPT in November 2022, AI was transforming what the ABA does, with sections, divisions, forums, and other ABA entities already exploring these issues through programs, publications, and participation (committees, working groups, etc.), in addition to contributing to policy (ABA House of Delegates Resolutions/Reports). These four Ps constitute the pillars of the ABA's approach to AI,
the transformational technology that has extraordinary potential for both promise and peril.

Programs. ABA entities have offered a wide range of innovative, timely, and high-quality AI programming for years, but this year saw an abundance of great programs offered across the ABA, both live and recorded, and many with CLE credit.

Conferences: AI & Robotics National Institute
The first ABA conference on AI began in 2020 when SciTech presented the Artificial Intelligence & Robotics National Institute. SciTech's 6th AI National Institute will be held on October 14-15 in collaboration with the Intellectual Property Law Section, offering new breakout tracks on AI's early legal flashpoints: (1) IP and (2) data protection. Click here to learn more.

AI and the Practice of Law Summit
The inaugural AI and the Practice of Law Summit was presented by the ABA Center for Innovation in 2024. The program provided practical tools that lawyers could apply to their practices, along with workshops that explored the intersection of law and AI from multiple perspectives.

Podcasts and Webinar Series:
The many ABA podcasts and webinars on AI include the following series:
- "National Security Law Today" podcast (Standing Committee on Law and National Security)
- "Mind the Gap: Dialogs on Artificial Intelligence" podcast (Business Law Section/Business Law Today)
- "AI and the Legal Profession: Navigating Opportunities and Challenges" webinar (Civil Rights and Social Justice Section)
- "Introduction to Artificial Intelligence and Environmental and Energy Law" webinar (Section of Environment, Energy, and Resources)
- Forthcoming: "Intersections of GenAI and Cybersecurity: Reckoning and Responding to the Risks" webinar (Cybersecurity Legal Task Force; free to current state/federal law clerks)

Publications. Insights on AI can be found in a wide range of ABA writings, whether
it's blog posts, magazines (from articles to entire issues), law journals, newsletters, white papers, or books. Five years ago, the ABA released The Law of Artificial Intelligence and Smart Machines: Understanding A.I. and the Legal Impact (2019) from the Business Law Section (AI Task Force Vice Chair Ted Claypoole was the editor of that book). Fast forward to the latest ABA book on AI: Artificial Intelligence: Legal Issues, Policy, and Practical Strategies (2024), created by the SciTech Section in collaboration with the AI Task Force. More are scheduled for release in 2025.

Participation. The first ABA AI committee was established 17 years ago (2007-08 bar year) by SciTech. Since then, many ABA entities have established groups or outreach initiatives that focus on, or address, AI issues, including the IP Section's Artificial Intelligence/Machine Learning Task Force; the Antitrust Section's Privacy and Information Security Committee and its Antitrust AI Task Force Discussion Series; and the Civil Rights and Social Justice Section's AI and Economic Justice Project (which conducted a survey regarding the impact of AI on low-income/marginalized individuals/communities). Many state bars that work with the ABA have also established AI task forces or other AI groups.

Policy. The ABA has adopted a number of AI-related resolutions:
- ABA Resolution 112: Urges courts and lawyers to address the emerging AI ethical and legal issues related to the usage of AI in the practice of law. 19A11 (adopted August 2019).
- ABA Resolution 700: Urges governments to refrain from using pretrial risk assessment tools unless data supporting the risk assessment is transparent, publicly disclosed, and validated to demonstrate the absence of bias. 22V700 (adopted February 2022).
- ABA Resolution 604: Urges organizations that design, develop, deploy, and use AI systems and capabilities to follow several guidelines to help ensure human oversight and control, accountability, and transparency in AI. 23M604 (adopted February 2023).

Even as ABA entities undertake these initiatives, they are finding ways to collaborate with each other and external organizations. The AI Task Force is facilitating this collaboration and entity-wide communication, with the Task Force offering opportunities for ABA entity liaisons and state bar AI groups to meet at regular intervals and inform the work of the AI Task Force. The wide range of participating ABA entities reflects the way AI is affecting every practitioner (solo/small firm/general practice, law students, young lawyers, etc.) and changing every substantive practice area.

The AI Task Force has provided an online forum for AI (ambar.org/aiLaw) designed to highlight the vast array of what ABA entities are doing and help individuals find relevant policy, programs, publications, and means of participating.

As the national voice of the legal profession, with entities that cover an extensive range of substantive practice areas, no one matches the ABA in terms of the breadth and depth in which it can approach the continually emerging legal issues of AI technology.

Human Rights
Challenges with Artificial Intelligence, by Lucy Thomson and Trooper Sanders, highlights the ABA's efforts to address the legal and ethical challenges of AI, focusing on privacy, discrimination, and human rights, and emphasizes initiatives like the AI Bill of Rights and Executive Order 14110 to ensure responsible AI use. Civil Rights and Social Justice, Human Rights: Technology and the Law (Vol. 49, No. 4, May 2024). You can read the full article here.

ABA Presidential Speaker Series

A.I. The New Frontier: Panel of Special Advisors for the ABA Task Force on Law and Artificial Intelligence - November 2023
Professor Daniel Ho, Michelle Lee, Trooper Sanders, Miriam Vogel, and Seth Waxman, interviewed by Lucy Thomson, AI Task Force Chair.
The Special Advisors discussed how AI has the potential to transform the practice of law, and discussed initiatives of the new White House AI Executive Order, the new U.S. Safety Institute, and international developments.

Law Practice

Primer on AI Technologies and Definitions - March 2024
Professor Maura Grossman, Theresa Harris, Stacy Marz, and Judge Scott Schlegel.
This webinar provides a foundational understanding of essential AI concepts and terms, highlighting their presence in everyday tools and their emerging application in legal technologies. The program is tailored for less tech-savvy individuals, helping them to grasp common AI-related vocabulary and gain a basic understanding of how AI algorithms work and the everyday applications of AI.

How Large Law Firms Are Incorporating AI into Practice - January 10, 2024
Katherine Lowry, BakerHostetler; William Garcia, Thompson Hine; and Peter Geovanes, McGuireWoods; interviewed by Ted Claypoole.
This program discussed the innovative integration of AI within large law firms. The speakers shared their experiences, strategies, and insights into how AI is transforming the legal landscape, improving efficiency, and enhancing client service. They also discussed the unique challenges and ethical dynamics that lawyers should consider when implementing AI.

A Roundtable on Generative AI: Practical Advice for Attorneys - March 14, 2024
Karen Silverman, Brian Beck, Daniel "Dazza" Greenwood, Maura Grossman, and Lisa Lifshitz.
Experts addressed these topics: 1) understanding generative AI, including what it is and how it works; 2) exploring use cases for law firms and in-house legal departments; 3) procurement considerations and examining and negotiating key contract terms when acquiring
generative AI products; 4) establishing policies for law firms regarding the use of AI; and 5) understanding cautionary issues, including bias, confidentiality, and IP.

2023-24 AI TASK FORCE PROGRAMS AND EVENTS

AI Crash Course for Bar Leaders and Lawyers: Uses, Misuses, and Ethics - February 3, 2024
ABA Mid-Year Meeting, Louisville, KY. AI Task Force collaboration with the National Conference of Bar Presidents (NCBP). Ted Claypoole; Ian McDougall, General Counsel, LexisNexis Global; Damien Riehl (Chair, Minnesota State Bar Workgroup on AI); Lisa Lifshitz (Director, Canadian Technology Law Association); Lucy Thomson; Marri Baldwin, former chair of the State Bar of California Committee on Professional Responsibility; and Trish Rich.
AI experts who understand the multi-faceted complexity of AI, from use cases that increase productivity or produce misinformation to ethical dilemmas, discussed how lawyers can shape how AI is understood and used in the bar and the legal world.

The AI Trap: The Missing Guardrails for Lawyers - ABA Annual Meeting 2023, Denver, CO; presented by the Cyber Legal Task Force; co-sponsored by the AI Task Force.
Moderated by journalist Dina Temple-Raston, the program included a wide-ranging discussion with speakers Dr. Lance Eliot, Dazza Greenwood, and AI Task Force Chair Lucy Thomson.

CLE in the City - AI Hot Topics Every Lawyer Needs to Know - August 1, 2024
ABA Annual Meeting 2024, Chicago. Professor Daniel W. Linna, Jr., Director of Law and Technology Initiatives, Northwestern University; Josh Strickland, Motorola Solutions; Honorable E. Kenneth Wright, Jr., Presiding Judge, First Municipal District, Circuit Court of Cook County; Magistrate Judge Gabriel A. Fuentes, U.S. District Court, Northern District of Illinois; Leighton B.R. Allen, Foley & Lardner LLP; Jayne R. Reardon, Ethics & Professional Responsibility Counsel; Lucy L. Thomson.
The latest developments with AI were addressed by speakers with broad perspectives: law firm and corporate counsel, academia, the judiciary, and ethics and professional responsibility counsel.

Governance

AI Governance: A Conversation with Reva Schwartz of the National Institute of Standards and Technology (NIST)
about NIST's new AI Risk Management Framework - September 28, 2023
Reva Schwartz, NIST; Cynthia Cwik.
This program provided an overview of the NIST AI Framework, its real-world applications, and how organizations can leverage it to advance trustworthy and responsible AI practices.

AI Governance: A Conversation with Miriam Vogel, President and CEO of EqualAI and NAIAC Chair
Miriam Vogel and Cynthia Cwik.
Miriam Vogel discussed key issues regarding AI governance, including NAIAC's important work, the future of AI governance, and best practices for the private and public sectors.

AI Governance: A Conversation with Elizabeth Kelly - June 13, 2024
Elizabeth Kelly, Director of the newly created U.S. AI Safety Institute, interviewed by Cynthia Cwik.

Governance and Risk Management

AI Governance and Risk Management: The Role of Lawyers - April 18, 2024
Trooper Sanders; Katherine Fick, IBM; Karen Buzard, Allen & Overy LLP; Madhu Srikumar.
Experts discussed the role of lawyers in a variety of settings in managing the governance and risk of AI, as well as how leading lawyers have integrated AI in their work and careers.

Risk Management

The Impact of Deepfakes on the Justice System - January 2024
Professor Hany Farid, Hon. Paul Grimm (ret.), Professor Maura Grossman.
The experts explained what deepfakes are and how they are made, the intricacies of identifying deepfakes, exploring evidentiary and Daubert issues, and discussing the C2PA standard to identify the provenance of digital material.

Unraveling AI's Impact on Intellectual Property: Expert Perspectives - April 25, 2024
Lindsay R. Edelstein, Mitchell Silberberg; Claudia Ray, Kirkland & Ellis; Ekta Oza, Linklaters; Louise Nemschoff, Los Angeles attorney.
As the role and impact of generative AI in copyright continues to evolve, the expert panel examined hot topics in AI and IP law.

Access to Justice

Artificial Intelligence, Law Schools, and Access to Justice
Jim Sandman, Margaret Hagan, Gabriel Teninbaum, and Daniel Linna.
The experts discussed how law schools have developed new programs to teach students how to use technology and innovation, including AI, to improve access to legal services.

AI and the Courts

Data Analytics and the Courts: Essential Information for an Emerging Generative AI Function - June 17, 2024
Judge Sam Thumma, Judge Scott Schlegel, Jennifer Mabey, and Hon. Ronald J. Hedges (ret.).
The speakers discussed the use of generative AI for transcription and language interpretation, Utah's pre-trial risk assessment tool, the use of AI for predicting case outcomes, and issues related to court data analytics.

Legal Education

The Implications of Generative AI on Legal Education: A Conversation with Dean Andrew Perlman - December 14, 2023
Suffolk Law Dean Andrew Perlman, interviewed by Cynthia Cwik.
This program explored the impact of AI on legal education and legal training.

ABA 5th Annual Artificial Intelligence (AI) & Robotics National Institute (presented by the ABA Science & Technology Law Section; co-sponsored by the AI Task Force) - October 9-10, 2023, Santa Clara University School of Law.
AI Task Force members were speakers: Steve Wu (Institute Chair), Ruth Hill Bro, Ted Claypoole, Cynthia Cwik, Eric Hibbard, Patrick Huston, and Lucy Thomson.

Partner Programs

RSA Conference (RSAC), San Francisco, CA (May 2024)
At the largest security conference in the world, ten AI Task Force members delivered a keynote fireside chat and panel presentations, and participated with government and private sector experts in discussions of the impact of AI.

Artificial Intelligence and the Courts: Scientific Evidence and the Courts, American Association for the Advancement of Science (AAAS) - September 22, 2023
Dr. David Doermann, Prof. Rashida Richardson, and the Hon. Paul W. Grimm (ret.); moderator Lucy Thomson, AI Task Force Chair.

AI & Emerging Technology Partnership program #4: Tools and Data, U.S. Patent and Trademark Office - September 27, 2023
AI Task Force Advisory Council Member Darrell Motley spoke about the AI Task Force on the "Legal Issues" panel.