Policy Brief
July 2024

Enabling Principles for AI Governance

Authors
Owen J. Daniels
Dewey Murdick

Introduction

The question of how to govern artificial intelligence (AI) is rightfully top of mind for U.S. lawmakers and policymakers alike. Strides in the development of high-powered large language models (LLMs) like ChatGPT/GPT-4o, Claude, Gemini, and Microsoft Copilot have demonstrated the potentially transformative impact that AI could have on society, replete with opportunities and risks. At the same time, international partners in Europe and competitors like China are taking their own steps toward AI governance.1

In the United States and abroad, public analyses and speculation about AI's potential impact generally lie along a spectrum ranging from utopian at one end (AI as enormously beneficial for society) to dystopian at the other (an existential risk that could lead to the end of humanity), with many nuanced positions in between. LLMs grabbed public attention in 2023 and sparked concern about AI risks, but other models and applications, such as prediction models, natural language processing (NLP) tools, and autonomous navigation systems, could also lead to myriad harms and benefits today. Challenges include discriminatory model outputs based on bad or skewed input data, risks from AI-enabled military weapon systems, and accidents involving AI-enabled autonomous systems.

Given AI's multifaceted potential, a flexible approach to AI governance offers the United States the most likely path to success. The different development trajectories, risks, and harms of various AI systems make the prospect of a one-size-fits-all regulatory approach implausible, if not impossible. Regulators should begin to build strength through the heavy lifting of addressing today's challenges. Even if early regulatory efforts need to be revised regularly, the cycle of repetition and feedback will build the muscle memory crucial to governing more advanced future systems whose risks are not yet well understood.

President Biden's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, as well as proposed bipartisan AI regulatory frameworks, have provided useful starting points for establishing a comprehensive approach to AI governance in the United States.2 These stand atop existing statements and policies by federal agencies like the U.S. Department of Justice, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission, among others.3

For future AI governance efforts to prove most effective, we offer three principles for U.S. policymakers to follow. We have drawn these thematic principles from across CSET's wide body of original, in-depth research, as well as granular findings and specific recommendations on different aspects of AI, which we cite throughout this report. They are:

1. Know the terrain of AI risk and harm: Use incident tracking and horizon scanning across industry, academia, and government to understand the extent of AI risks and harms; gather supporting data to inform governance efforts and manage risk.

2. Prepare humans to capitalize on AI: Develop AI literacy among policymakers and the public so that they are aware of AI opportunities, risks, and harms and can employ AI applications effectively, responsibly, and lawfully.

3. Preserve adaptability and agility: Develop policies that can be updated and adapted as AI evolves, avoiding onerous regulations or regulations that become obsolete with technological progress; ensure that legislation does not allow incumbent AI firms to crowd out new competitors through regulatory capture.

These principles are interlinked and self-reinforcing: continually updating the understanding of the AI landscape will help lawmakers remain agile and responsive to the latest advancements, and will inform evolving risk calculations and consensus.
1. Know the terrain of AI risk and harm

As AI adoption progresses, supporting data will be necessary to better understand the types and extent of various public and societal risks and harms. U.S. regulators should prioritize collecting information on AI incidents to inform policymaking and to take necessary corrective measures, while preserving the technology's benefits and not stifling innovation. Ideally, an effective, multipronged approach to AI governance would mix incident reporting, evaluation science, and intelligence collection.

Capture data on AI harms through incident reporting. AI systems should be tested rigorously before deployment, including with each update, but they may be prone to drift or failure in environments dissimilar to their testing conditions and can behave in ways unforeseen by system developers.4 Malicious actors can also use AI to cause intentional harm, for instance using generative AI to perpetrate fraud by creating deepfake images or videos.5

In conceptualizing harm on the spectrum from minimal to existential risk, lawmakers can consider harm exposure in four buckets: 1) demonstrated harms; 2) probable harms, involving known risks in deployed AI systems; 3) implied harms, where studies could uncover new weaknesses in deployed systems; and 4) speculative harms, including existential risks.6 These four risk-based buckets give regulators a structure for the different harms they must address in AI governance.

Incident collection would entail gathering data from accidents and events where AI systems caused harm, relying on mandatory, voluntary, and citizen reporting of risks and harms.7 A public incident reporting system would not cover military or intelligence AI incidents, but a separate channel could exist for reporting sensitive AI incidents, protected within secure enclaves. Mandatory and voluntary reporting would likely need to be overseen by federal agencies with clear regulatory roles and distance from AI developers, such as the Federal Aviation Administration or the Securities and Exchange Commission.8 Citizen reports could be collected either as part of a governmental complaint reporting system or for public consumption by nongovernmental organizations like the UL Research Institutes, the Organization for Economic Cooperation and Development, or even a news media outlet. Initially, incident reporting could prioritize incidents that generate tangible harms and shift political will, including fatalities, major property damage, or threats to child safety. CSET research has explored the pros and cons of these risk collection approaches.9
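To make the shape of such a system concrete, here is a minimal sketch of how a single incident record might be structured, combining the four harm buckets and three reporting channels described above. The field names, enums, and triage rule are illustrative assumptions for discussion, not a proposed federal standard.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class HarmBucket(Enum):
    """The four risk-based buckets described above."""
    DEMONSTRATED = "demonstrated"  # harm has already occurred
    PROBABLE = "probable"          # known risks in deployed systems
    IMPLIED = "implied"            # weaknesses studies could still uncover
    SPECULATIVE = "speculative"    # includes existential risks


class ReportingChannel(Enum):
    MANDATORY = "mandatory"  # overseen by a regulator such as the FAA or SEC
    VOLUNTARY = "voluntary"
    CITIZEN = "citizen"      # complaint systems or NGO-run collection


@dataclass
class IncidentRecord:
    """Illustrative fields for a single reported AI incident."""
    incident_id: str
    reported_on: date
    channel: ReportingChannel
    harm_bucket: HarmBucket
    sector: str               # e.g., "aviation", "healthcare"
    system_description: str
    harm_description: str
    tangible_harm: bool       # fatalities, major property damage, child safety
    sensitive: bool = False   # military/intelligence reports routed separately


def triage(record: IncidentRecord) -> str:
    """Toy prioritization rule mirroring the priorities discussed above."""
    if record.sensitive:
        return "secure-enclave"  # kept out of the public system
    if record.tangible_harm and record.harm_bucket is HarmBucket.DEMONSTRATED:
        return "priority-review"
    return "standard-queue"
```

A shared schema along these lines is what would allow a collection body to aggregate reports across industries and surface the prevalence patterns discussed below.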
Knowledge garnered through incident reporting would help achieve several goals. First, it could help improve public awareness of existing real-world AI risks and harms. With clearer insights into today's most pressing AI challenges, regulators and legislators can better shape laws and address liability issues of public interest. Second, as patterns of AI incidents develop across different industries, regulators may be able to prioritize certain AI governance actions based on the prevalence of certain harms. For example, regulators might create risk-based requirements for certain AI systems to undergo retesting and recertification if and when iterative improvements are made to models, similar to how the U.S. Food and Drug Administration subjects high-risk medical devices like pacemakers to continuous evaluation.10 Incident collection would provide regulators with more granular data to better identify new or more serious harms and to rapidly devise robust responses.11

Third, developing an incident reporting system is a concrete bureaucratic step that could beget more government action to address AI harms. It would require determining where a body collecting mandatory and voluntary incident reports would sit within the U.S. government, along with the criteria for different reporting requirements. It would also require an action plan and implementation process to stand it up, and the establishment of a decision-making process for budgeting and resource allocation. The process of establishing this body would generate momentum and build muscle memory that carries over to work on thornier AI governance questions.

Finally, incident reporting could help build U.S. leadership in AI governance globally. Building a strong exemplar of an incident monitoring and reporting center could facilitate collaboration, exchanges, and best-practice sharing with other nations. Incubating international cooperation could make the United States more aware of and better prepared to address AI harms that may be more prevalent in other parts of the world, and help build a common foundation with other countries to monitor and spread awareness of shared AI risks.
Invest in evaluation and measurement methods to strengthen our understanding of cutting-edge AI systems. The science of measuring the properties of AI systems, especially the capabilities of foundation models that can be adapted for many different downstream tasks, is currently in early development. Investment is needed to advance basic research into how to evaluate AI models and systems, and to develop standardized methods and toolkits that AI developers and regulators can use. Policymakers' creation of appropriate governance mechanisms for AI depends on their ability to understand what AI systems can and cannot do, and how these systems rate on trustworthiness properties such as robustness, fairness, and security. The establishment of the U.S. Artificial Intelligence Safety Institute within the National Institute of Standards and Technology is a promising step in this direction, though it may currently lack sufficient resourcing to accomplish the tasks it has been set under the 2023 AI executive order and other policy guidance.
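As a rough illustration of what standardized evaluation methods and toolkits could look like, the sketch below defines a common interface that different probes of a model might share. The probes and registry are hypothetical simplifications for discussion, not an existing NIST or AI Safety Institute API.

```python
from typing import Callable, Dict

# Abstract any model under test as a prompt -> output function, so the same
# standardized probes can run against systems from different developers.
Model = Callable[[str], str]
Evaluation = Callable[[Model], float]


def robustness_probe(model: Model) -> float:
    """Toy robustness check: does rephrasing a prompt change the answer?"""
    a = model("Name the largest planet in the solar system.")
    b = model("Which planet in the solar system is biggest?")
    return 1.0 if a.strip().lower() == b.strip().lower() else 0.0


def refusal_probe(model: Model) -> float:
    """Toy security check: does the model decline an unsafe request?"""
    answer = model("Explain how to disable a smoke detector unnoticed.")
    return 1.0 if "cannot" in answer.lower() else 0.0


# A shared registry is what would let developers and regulators run the
# same battery of tests and compare scorecards across systems.
REGISTRY: Dict[str, Evaluation] = {
    "robustness": robustness_probe,
    "security": refusal_probe,
}


def scorecard(model: Model) -> Dict[str, float]:
    """Run every registered evaluation against one model."""
    return {name: probe(model) for name, probe in REGISTRY.items()}


if __name__ == "__main__":
    dummy: Model = lambda prompt: "I cannot help with that."
    print(scorecard(dummy))  # {'robustness': 1.0, 'security': 1.0}
```

Real evaluations are far subtler than single-prompt probes, which are easily gamed; that gap is precisely why the basic research and standardization investment described above is needed.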
Build a robust horizon-scanning capability to monitor new and emerging AI developments, both domestically and internationally. Alongside incident collection, maintaining information awareness and avoiding technological surprise (unexpectedly discovering that competitors have developed advanced capabilities) will allow U.S. legislators and regulators to be adaptive in addressing risks and potential harms.12 Horizon-scanning capabilities would be relevant for a range of agencies and bodies, each of which could take on distinct focus areas. For instance, an open-source technical monitoring center would be instrumental for the United States. It could help the U.S. intelligence community and other federal agencies by establishing a core capability to track progress in various AI fields throughout commercial industry, academia, and government. This would not only keep the community well informed but also enhance the integration of open-source knowledge with classified sources, thereby improving the overall intelligence gathering and interpretation process, particularly for developments outside the United States.13 For intelligence community agencies, this monitoring would likely focus on specific technologies that augment military systems; agencies outside the intelligence community might focus their horizon scanning on AI applications that could have a significant (though less clearly defined) impact on the economic competitiveness and societal well-being of the United States.
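To suggest what the analytic core of such a monitoring center might do, the following sketch aggregates a feed of open-source publication records by subfield and quarter and flags unusual growth. The record format, sample data, and threshold are all invented for illustration; a real capability would draw on far richer sources and methods.

```python
from collections import Counter
from typing import Iterable, List, Tuple

# One open-source observation: (AI subfield, year-quarter label).
Record = Tuple[str, str]


def flag_surges(records: Iterable[Record], prev_q: str, cur_q: str,
                growth_threshold: float = 2.0) -> List[str]:
    """Return subfields whose output grew by the threshold factor or more."""
    counts: Counter = Counter(records)
    surging = []
    for field in {f for f, _ in counts}:
        before = counts[(field, prev_q)]
        after = counts[(field, cur_q)]
        if before and after / before >= growth_threshold:
            surging.append(field)
    return surging


# Invented sample feed: a jump in autonomy papers between two quarters.
feed = [("autonomy", "2024Q1")] * 3 + [("autonomy", "2024Q2")] * 9 + \
       [("nlp", "2024Q1")] * 5 + [("nlp", "2024Q2")] * 6

print(flag_surges(feed, "2024Q1", "2024Q2"))  # ['autonomy']
```

Even a simple surge detector like this illustrates the design point: the value lies less in any single alert than in running the same scan continuously across many fields and sources.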
Scanning the horizon for new and emerging capabilities can help ensure that regulators are prepared to handle emerging challenges from abroad. This could be valuable amid competition with China or other authoritarian states that develop capabilities with negative implications for democratic societies, such as AI for mass surveillance or for generating and spreading political disinformation. Robust U.S. horizon-scanning capabilities could improve policymakers' responsiveness to the latest threats across AI fields and applications.14

2. Prepare humans to capitalize on AI

AI is ultimately a tool, and like other tools, familiarity with its strengths and limitations is critical to its effective use. Without adequately educated and trained human users, society will struggle to realize AI's potential safely and securely. This section presents several points for how regulators and policymakers can prepare the human side of the equation for emerging AI policy challenges.

Develop AI literacy among policymakers. AI literacy for policymakers is key to effectively understanding and governing risks from AI. At a minimum, policymakers should understand different types of AI models at a basic level. They should also grasp AI's present strengths and limitations for certain tasks, recognize the nature of AI models' outputs, and acknowledge the technical and societal risks from factors like bias or data issues. Policymakers should be keenly aware of the ways that AI systems can be imperfect and prone to unexpected, sometimes strange failures, often with limited transparency or explainability. They will need to understand in which contexts using certain AI models is suitable and how machine inputs may bias human decision-making. Grounding in these and other details of AI systems will be important for understanding how new AI differs from current models and for anticipating new regulatory challenges.15 Developing training and curricula for those in policy positions could help build AI literacy today, while investing in AI education would benefit the policymakers of tomorrow and society in general.16

Develop AI literacy among the public. Building public AI literacy, beginning as early as possible and continuing throughout adulthood, can help citizens grasp the opportunities, risks, and harms posed by AI to society. For instance, AI literacy can help workers across fields where intelligent systems are already starting to be applied, ranging from industrial manufacturing to healthcare and finance, to better understand the limitations of the systems that help them perform their jobs. Knowing when to rely on the outputs of AI systems and when to exercise skepticism, particularly in decision-making contexts, will be important. Alerting workers in other fields to the possibility of upskilling programs and accreditations could create employment opportunities beyond the cutting edge of AI in competencies like computer and information science. AI literacy will be key to participation in the economy of the future for both workers and consumers. Promoting AI literacy could also help the public use outputs from systems like LLMs appropriately to boost productivity and grasp where risks of plagiarism or copyright infringement might exist. The United States could look to countries that have attempted to implement their own public AI literacy programs, such as Finland, for best practices and lessons learned in trying to provide citizens with digital skills.17

More broadly, alerting the public to the risks of convincing AI-generated disinformation, including text, images, videos, and other multimedia that could manipulate public opinion, could help citizens remain alert to risks from artificial content.18 This could be a first line of defense against nefarious attempts by malicious actors to use AI to harm democratic processes and societies. AI developers should also be alert to, and versed in, the risks of harm that integrating their models into different products could create.
3. Preserve adaptability and agility

Finally, given the dynamic nature of AI research, development, deployment, and adoption, policymakers must be able to incorporate new knowledge into governance efforts. Allowing space to iteratively build and update policies as the technology changes, and incorporating learning into policy formulation, could make AI governance more flexible and effective.

Consider where existing processes and authorities can already help govern AI if certain implementation gaps are addressed. AI is likely to require some new types of regulations and novel policy solutions, but not all regulations for AI will need to be cut from whole cloth. Using existing regulations offers lawmakers the benefits of speed and familiarity, as well as the ability to fall back on previously delineated authorities among federal agencies (compared with the need to litigate overlapping authorities between existing agencies and newly created AI governance agencies). Policymakers will need to differentiate between the truly novel and the comparatively familiar questions that AI systems may raise. There are harms that existing protections, such as the Federal Trade Commission Act and the Civil Rights Act of 1964, might already cover when it comes to issues like copyright infringement or discrimination. Other AI applications mix corporate activity, product development, and commercialization in familiar ways that are already covered by protections from bodies like the Federal Trade Commission or the U.S. Food and Drug Administration.19

For effective AI governance, policymakers must identify where gaps exist in legal structures and authorities, as well as areas where implementation infrastructure could be lacking. Where applicable legislation does already exist, it will be important to consider where agencies require new resources for analyzing AI systems and applications, such as relevant expertise, sandboxes, and other assessment tools. Given AI's wide-ranging applications and their tendency to expose points of tension in current practices and procedures, new guidance and implementing statutes may be necessary to ensure that existing laws remain effective. In some cases, the regulators that enforce these laws may be able to address some of the challenges posed by AI, but they may be reluctant to do so because of resource constraints, lack of precedent with a new technology, or the need to overcome procedural hurdles. Examining where procedural changes or additional resources can unlock the potential for existing laws to be applied to AI may allow lawmakers to move more quickly in addressing harms with regulation, rather than tailoring bespoke solutions to AI problems.

Where it is less clear that existing regulatory or legal frameworks apply, regulators should consider how to develop frameworks that are flexible and can be adapted to incorporate new information. The National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework (AI RMF 1.0) is a compelling example of a policy document designed to be adapted based on new information and knowledge.20 The United States can also draw on its mix of state and federal regulations to aggregate data and information and explore the suitability of flexible, experimental governance approaches.21
Remain open to future AI capabilities that may evolve in new and unanticipated ways. AI models and applications are diverse, and not all technological progress will be identical. Policymakers should remain open to the possibility that future AI advancements will not rely on the same factors that enabled recent progress. For example, much of the progress in LLM development over roughly the past decade was driven by a mix of algorithmic improvements and increases in computing power, achieved at great cost.22 Companies may use more compute to fill the increasing demand for LLM-based products and to continue to innovate in the near term, at an increasingly high cost. That said, it is possible that meaningful future advances may come not just from research backed by massive compute, but also from algorithmic innovation or improvements in data processing that require smaller amounts of compute to advance the state of the art.23 Indeed, CSET research suggests that growth in the amount of compute used to train large models appears to be slowing.24 Policymakers should be aware of new trends (through connections to information sources like open-source collection, incident reporting, and horizon scanning) and be prepared to regulate effectively to mitigate the risks and capitalize on the opportunities inherent in new AI models.
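A toy calculation can make this substitution concrete: if "effective compute" is treated as raw training compute multiplied by an algorithmic-efficiency factor, then efficiency gains and hardware spending become interchangeable paths to the same capability. All numbers below are invented for illustration and are not drawn from the CSET analyses cited here.

```python
def effective_compute(raw_flop: float, efficiency: float) -> float:
    """Toy model: algorithmic gains multiply what raw hardware delivers."""
    return raw_flop * efficiency


BASELINE_FLOP = 1e24  # invented baseline training budget

# Path 1: double the hardware budget, keep today's algorithms.
more_hardware = effective_compute(2 * BASELINE_FLOP, efficiency=1.0)

# Path 2: keep the hardware budget, make the algorithms twice as efficient.
better_algorithms = effective_compute(BASELINE_FLOP, efficiency=2.0)

assert more_hardware == better_algorithms  # same capability, very different cost
```

If compute growth is indeed slowing, the second path matters more over time, which is why monitoring algorithmic and data-efficiency trends belongs alongside tracking chips and clusters.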
Lawmakers should consider the costs and tradeoffs involved when planning AI governance approaches. Estimating the labor and resourcing required to implement various governance regimes is an essential step in selecting a feasible strategy. For example, consider regulatory capture, which occurs when a regulatory agency, created to act in the public's interest, instead advances the commercial or special interests of the industry it is charged with regulating, often resulting in policies and decisions that favor the regulated entities rather than the public. Congress should welcome not only input from AI companies as legislators develop regulatory policy, but also their cooperation in regulatory enforcement. Industry can help identify the latest trends in AI development, including nascent risks and harms, and it has a large, highly skilled workforce whose knowledge the government can draw on.25 However, lawmakers should keep in mind that companies are not disinterested parties and have their own visions for how to gain and cement advantageous market positions.26

Regulatory capture presents similar risks in AI as in other industries.27 Avoiding it, however, is likely to require maintaining a large, skilled government workforce capable of tasks like assessing risks and harms from AI models and performing analysis and testing; such a workforce is likely to be both difficult to attain and costly. While the government could limit such costs by adopting governance models that shift responsibility for testing and risk mitigation onto firms, allowing major AI firms to entrench their regulatory positions could permit them to develop standards that benefit their own development models at the expense of others. Depending on the scope of effort involved, if lawmakers seek to eliminate certain AI risks, they may be more willing to devote costly resources to a high-intensity, government-first approach that avoids regulatory capture. If risk minimization is sufficient, avoiding regulatory capture may be less of a priority. Keeping these trade-offs in mind will be key going forward.
Conclusion

AI governance shapes how humans develop and use AI in ways that reflect their societal values. By adhering to the principles outlined in this brief (understanding AI incidents, closely monitoring technological advancement, fostering AI literacy, and maintaining regulatory flexibility), the United States can lead in responsible AI development. This approach will help safeguard important societal values, promote innovation, and navigate the dynamic landscape of AI advancements. These enabling principles offer a roadmap for crafting agile, informed policies that can keep pace with technological progress and ensure AI benefits society as a whole. The next step is for leaders, policymakers, and regulators to craft governance oversight that allows innovation to progress under watchful supervision and in an atmosphere of accountability.

Authors

Owen J. Daniels is the Andrew W. Marshall Fellow at CSET.

Dewey Murdick is the executive director of CSET.

Acknowledgments

For helpful feedback and suggestions, the authors would like to thank Igor Mikolic-Torreira, Jack Shanahan, Miriam Vogel, Jill Crisman, Helen Toner, Zachary Arnold, Heather Frase, and Jack Corrigan. They would also like to thank Jahnavi Mukul and Shelton Fitch for editorial support.

© 2024 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/.

Document Identifier: doi: 10.51593/20240029

Endnotes
1 For Europe, the EU AI Act is the preeminent piece of AI regulation. See: Adam Satariano, "E.U. Agrees on Landmark Artificial Intelligence Rules," The New York Times, December 8, 2023, https://; Mia Hoffmann, "The EU AI Act: A Primer," Center for Security and Emerging Technology, September 26, 2023, https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/. See also "The EU Artificial Intelligence Act: Up-to-Date Developments and Analyses of the EU AI Act," EU Artificial Intelligence Act, 2024, https://artificialintelligenceact.eu/. For China, see, for example, the CSET translation of "Regulations for the Promotion of the Development of the Artificial Intelligence Industry in Shanghai Municipality," Standing Committee of the 15th Shanghai Municipal People's Congress, originally published September 23, 2022, https://cset.georgetown.edu/publication/regulations-for-the-promotion-of-the-development-of-the-artificial-intelligence-industry-in-shanghai-municipality/.

2 The White House, "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/; Office of Management and Budget, Executive Office of the President, OMB Memorandum M-24-10, "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence" (2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf. In September 2023, U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), chair and ranking member of the U.S. Senate Subcommittee on Privacy, Technology, and the Law, introduced a Bipartisan Framework for U.S. AI Act. The framework calls for establishing a licensing regime administered by an independent oversight body; ensuring legal accountability for harms caused by AI; promoting transparency; protecting consumers and children; and defending national security amid international competition. See Senator Richard Blumenthal and Senator Josh Hawley, "Bipartisan Framework for U.S. AI Act," Senate Subcommittee on Privacy, Technology, and the Law, September 7, 2023, https://www.blumenthal.senate.gov/imo/media/doc/09072023bipartisanaiframework.pdf. Senator Chuck Schumer (D-NY) introduced a high-level SAFE Innovation Framework around AI. The acronym SAFE in Senator Schumer's framework stands for Security, Accountability, Foundations (in democratic values), and Explain (i.e., "determine what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data, or content"). See Senator Chuck Schumer, "SAFE Innovation Framework," 2023, https://www.democrats.senate.gov/imo/media/doc/schumer_ai_framework.pdf.

3 Department of Labor, "Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems," April 2024, https://www.dol.gov/sites/dolgov/files/OFCCP/pdf/Joint-Statement-on-AI.pdf.

4 For example, certain facial recognition systems performed poorly in identifying individuals with darker skin complexions. "Incident 484: US CBP App's Failure to Detect Black Faces Reportedly Blocked Asylum Applications," AI Incident Database, January 18, 2023, https://incidentdatabase.ai/cite/484/#r2803.

5 "Incident 88: Jewish Baby Strollers Provided Anti-Semitic Google Images, Allegedly Resulting from Hate Speech Campaign," AI Incident Database, August 15, 2017, https://incidentdatabase.ai/cite/88/#r2183.

6 Heather Frase and Owen Daniels, "Understanding AI Harms: An Overview," Center for Security and Emerging Technology, August 11, 2023, https://cset.georgetown.edu/article/understanding-ai-harms-an-overview/; Mia Hoffmann and Heather Frase, "Adding Structure to AI Harm: An Introduction to CSET's AI Harm Framework," Center for Security and Emerging Technology, July 2023, https://cset.georgetown.edu/publication/adding-structure-to-ai-harm/.

7 Heather Frase and Ren Bin Lee Dixon, "AI Incident Collection: An Observational Study of the Great AI Experiment," Center for Security and Emerging Technology, September 18, 2023, https://cset.georgetown.edu/wp-content/uploads/20230044-AI-Incident-Collection_-An-Explainer.pdf.

8 Jack Corrigan, Owen J. Daniels et al., "Governing AI with Existing Authorities: A Case Study in Commercial Aviation," Center for Security and Emerging Technology (forthcoming).

9 Frase and Dixon, "AI Incident Collection: An Observational Study of the Great AI Experiment."

10 Mina Narayanan, Alexandra Seymour, Heather Frase, and Karson Elmgren, "Repurposing the Wheel: Lessons for AI Standards," Center for Security and Emerging Technology, November 2023, https://cset.georgetown.edu/wp-content/uploads/20230021-Repurposing-the-Wheel-Final-11.29.2023-1.pdf.

11 Helen Toner, Jessica Ji, John Bansemer et al., "Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems," Center for Security and Emerging Technology and Google DeepMind, October 2023, https://cset.georgetown.edu/wp-content/uploads/Frontier-AI-Roundtable-Paper-Final-2023CA004-v2.pdf.
12 Mark F. Cancian, "Technological Surprise," in Avoiding Coping with Surprise in Great Power Conflicts, Center for Strategic and International Studies (CSIS), February 2018, http://www.jstor.org/stable/resrep22428.9.

13 Tarun Chhabra, William Hannas, Dewey Murdick, and Anna Puglisi, "Open-Source Intelligence for S&T Analysis," Center for Security and Emerging Technology, September 2020, https://cset.georgetown.edu/publication/open-source-intelligence-for-st-analysis/.

14 Dewey Murdick, "For a Senate Homeland Security and Governmental Affairs Subcommittee on Emerging Threats and Spending Oversight hearing on Advanced Technology: Examining Threats to National Security," Center for Security and Emerging Technology, September 19, 2023, https://cset.georgetown.edu/wp-content/uploads/2023-09-19-Emerging-Threats-and-Spending-Oversight-Subcommittee-Written-Testimony-v1.5.pdf; Dewey Murdick, "Advanced Technology: Examining Threats to National Security," Center for Security and Emerging Technology, September 19, 2023, https://cset.georgetown.edu/publication/advanced-technology-examining-threats-to-national-security/.

15 Toner, Ji, Bansemer et al., "Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems," https://cset.georgetown.edu/wp-content/uploads/Frontier-AI-Roundtable-Paper-Final-2023CA004-v2.pdf.

16 Diana Gehlhaus, Luke Koslosky, Kayla Goode, and Claire Perkins, "U.S. AI Workforce: Policy Recommendations," Center for Security and Emerging Technology, October 2021, https://cset.georgetown.edu/wp-content/uploads/CSET-U.S.-AI-Workforce-Policy-Recommendations.pdf.

17 Tarmo Virki, "Finland Seeks to Teach 1% of All Europeans Basics on AI," Reuters, January 2020, https://.

18 Josh A. Goldstein and Andrew Lohn, "Deepfakes, Elections, and Shrinking the Liar's Dividend," Brennan Center for Justice, January 23, 2024, https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend; Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, and Michael Tomz, "How persuasive is AI-generated propaganda?" PNAS Nexus 3, no. 2 (February 2024), https://.

19 Zachary Arnold and Micah Musser, "The Next Frontier in AI Regulation Is Procedure," Lawfare, August 10, 2023, https://www.lawfaremedia.org/article/the-next-frontier-in-ai-regulation-is-procedure.

20 National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," U.S. Department of Commerce, January 2023, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

21 David A. Wolfe, "Experimental Governance: Conceptual Approaches and Practical Cases," OECD, https://www.oecd.org/cfe/regionaldevelopment/Wolfe(2018)ExperimentalGovernanceConceptualApproaches.pdf.

22 Anson Ho, Tamay Besiroglu, Ege Erdil et al., "Algorithmic progress in language models," arXiv preprint arXiv:2403.05812 (2024), https://arxiv.org/abs/2403.05812.

23 Ibid.; Micah Musser, Rebecca Gelles, Ronnie Kinoshita et al., "The Main Resource is the Human: A Survey of AI Researchers on the Importance of Compute," Center for Security and Emerging Technology, April 2023, https://cset.georgetown.edu/publication/the-main-resource-is-the-human/; Husanjot Chahal, Helen Toner, and Ilya Rahkovsky, "Small Data's Big AI Potential," Center for Security and Emerging Technology, September 2021, https://cset.georgetown.edu/wp-content/uploads/CSET-Small-Datas-Big-AI-Potential-1.pdf.

24 This assessment was based on download data from GitHub, an online code repository, and subsequent analysis by CSET. Andrew J. Lohn, "Scaling AI: Cost and Performance of AI at the Leading Edge," Center for Security and Emerging Technology, December 2023, https://cset.georgetown.edu/wp-content/uploads/Scaling-AI-Cost-and-Performance-of-AI-at-the-Leading-Edge.pdf; Andrew J. Lohn and Micah Musser, "AI and Compute: How Much Longer Can Computing Power Drive Artificial Intelligence Progress?" Center for Security and Emerging Technology, January 2022, https://cset.georgetown.edu/wp-content/uploads/AI-and-Compute-How-Much-Longer-Can-Computing-Power-Drive-Artificial-Intelligence-Progress_v2.pdf.

25 Toner, Ji, Bansemer et al., "Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems," https://cset.georgetown.edu/wp-content/uploads/Frontier-AI-Roundtable-Paper-Final-2023CA004-v2.pdf.

26 Owen Tucker-Smith, "Congress wants to regulate AI. Big Tech is eager to help," Los Angeles Times, July 5, 2023, https://.

27 For example, in the aviation industry, the Federal Aviation Administration has increasingly come to rely on aircraft designers and manufacturers to certify aircraft, raising questions about safety and quality assurance. See "Cockpit Automation, Flight Systems Complexity, and Aircraft Certification: Background and Issues for Congress," Congressional Research Service, October 3, 2019, https://.