The Role of Policy + Public-Private Collaboration in AI and Cybersecurity
Bethany Abbate
Manager, AI Policy
Software & Information Industry Association (SIIA)

About Me
Oregon Ties: Grew up in Beaverton, OR. BA in Civic Communication and Media, Willamette University. Worked in various State of Oregon government offices.
Washington, D.C. Career: MA in Strategy, Cybersecurity and Intelligence from the Johns Hopkins School of Advanced International Studies (SAIS). Have worked on a range of tech policy issues on behalf of a congressional committee, multinational companies, and investors. Currently focused on AI policy at SIIA.

About SIIA
Based in Washington, D.C., the Software & Information Industry Association (SIIA) is the principal trade association for the information industry. From digital platforms and global financial networks to education technology providers and B2B media companies, SIIA represents the businesses and organizations that make the world work. SIIA Policy is comprised of a team of nonpartisan experts specializing in data privacy, education policy, intellectual property, artificial intelligence, competition, and cybersecurity issues. Our diverse membership requires us to construct consensus among members across divides. Together, we work to maintain a healthy information ecosystem, one that encourages creation, dissemination, and productive use for the benefit of all.
To learn more about SIIA, visit the SIIA website.

Overview
This presentation is structured into three parts: it first explores AI's impact on cybersecurity, then outlines the key policy issues shaping its use, and finally demonstrates how policy that encourages public-private collaboration is essential to driving responsible innovation and strengthening security.
01 Impact of AI on Cyber: What are the opportunities and threats?
02 AI Policy Landscape 101: What AI policy mechanisms are being considered? Where is cybersecurity woven in?
03 Public-Private Collaboration: How can government and industry collaborate to advance AI safety/cybersecurity goals and maintain U.S. leadership in innovation? How can policy work as a lever to encourage meaningful collaboration?
The Impact of AI on Cyber Capabilities
AI adoption in cybersecurity is accelerating: Many organizations and government agencies are strategizing as to how they can adopt AI as part of their broader cybersecurity strategy.
Transforming cyber defense, proactive vs. reactive security: AI is transforming cyber defense by shifting from reactive security measures, which address known threats, to proactive strategies that use machine learning to predict and prevent emerging threats in real time.
Elevating the human component of cybersecurity: While AI enhances automation, it also increases the need for human oversight ("human in the loop") to manage complex risks, ensure ethical use, and address AI-driven attacks.
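To make the shift toward proactive, machine-learning-driven detection concrete, the minimal sketch below trains an unsupervised anomaly detector on a few simple network-flow features using scikit-learn's IsolationForest. The feature choices, synthetic baseline traffic, and contamination rate are illustrative assumptions, not a production detection pipeline.

```python
# Illustrative sketch: unsupervised anomaly detection over simple
# network-flow features (bytes sent, duration, distinct ports).
# Feature choices and the contamination rate are assumptions for
# demonstration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "baseline" traffic: [bytes_sent_kb, duration_s, distinct_ports]
baseline = rng.normal(loc=[50, 2.0, 3], scale=[10, 0.5, 1], size=(500, 3))

# Fit the detector on traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# Score new observations; -1 flags an anomaly (e.g., a possible scan or exfiltration).
new_flows = np.array([
    [52, 2.1, 3],      # looks like normal traffic
    [900, 0.3, 120],   # large transfer touching many ports
])
labels = detector.predict(new_flows)
for flow, label in zip(new_flows, labels):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"flow={flow.tolist()} -> {status}")
```

In practice a model like this would sit alongside signature-based tooling and route alerts to human analysts, which is where the "human in the loop" point above comes in.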
Opportunities to Leverage AI in Cybersecurity
AI can promote increased efficiency, improved accuracy, cost reduction, and improved scalability.
Automated Threat Detection: AI can continuously monitor network activity to detect abnormal patterns, leading to faster identification of potential cyberattacks and reduced response times.
Enhanced Malware Detection: Machine learning models can identify new, evolving malware by analyzing vast amounts of data, resulting in more accurate detection and fewer false positives compared to traditional methods.
Improved Phishing Prevention: AI systems can analyze email patterns and user behavior to identify and block phishing attempts before they reach users, decreasing the risk of successful attacks and data breaches.
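As a toy illustration of the phishing-prevention idea above, the sketch below trains a simple text classifier on a handful of made-up email subjects. The sample messages, features, and quarantine threshold are assumptions for demonstration, not the approach of any particular product.

```python
# Illustrative sketch: a tiny supervised phishing filter built on
# TF-IDF features and logistic regression. The example emails and
# labels are made up for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice for March is attached",
    "Team meeting moved to 3pm tomorrow",
    "Quarterly report draft for your review",
    "URGENT: verify your password now to avoid suspension",
    "You have won a gift card, click here to claim",
    "Confirm your bank details immediately or lose access",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = phishing

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = ["Please verify your account password immediately"]
prob_phish = model.predict_proba(incoming)[0][1]
print(f"phishing probability: {prob_phish:.2f}")
if prob_phish > 0.5:  # threshold chosen arbitrarily for the demo
    print("-> quarantine message for human review")
```

Real deployments combine many more signals (sender reputation, URL analysis, user behavior) and typically hold suspicious messages for review rather than acting fully automatically.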
Emerging and Ongoing Threats
An October 2024 BCG study notes that close to six out of ten (58%) cybersecurity leaders expressed concern over new adversarial techniques, including AI-enabled cyberattacks. Meanwhile, there remains an ongoing shortage of cybersecurity professionals and resources to address the evolving threat landscape.
AI-Powered Attacks, Automated Hacking Tools: Malicious actors can leverage AI to create more sophisticated and automated attacks, resulting in more frequent and harder-to-detect breaches across multiple systems. Attackers using AI-driven hacking tools can launch attacks at scale with minimal human intervention, increasing the number of systems compromised and overwhelming traditional defense mechanisms.
Deepfake and Phishing Manipulation: AI can be used to generate highly convincing deepfake content and more personalized phishing schemes, making it harder for individuals and systems to discern legitimate communications from fraudulent ones.
Ongoing Cyber Workforce Shortage: Organizations worldwide spend around $200 billion a year on cybersecurity products and services. Yet they struggle to fill cybersecurity jobs, and the 28% vacancy rate for those positions is impeding their ability to address escalating threats. The shortage not only widens the cyber resource disparities between large and smaller organizations, but the lack of diversity in the profession also remains a concern.
Snapshot of Key Issues in AI Policy
The AI landscape is vast and rapidly evolving, but significant and thoughtful efforts are being made within the policy community to address critical questions surrounding its responsible development, governance, and societal impact.
1. AI Safety: AI safety is essential for establishing a basis to ensure the ethical deployment of AI technologies, protect public interests, and mitigate potential risks associated with their use in various sectors.
2. Privacy and Security: AI applications often require access to personal data, raising concerns about privacy and data protection.
3. Synthetic Content/Deepfakes: Increased potential for misinformation, manipulation of public opinion, creation of nonconsensual intimate imagery (NCII), and undermining of trust in digital media, which can lead to significant societal and ethical challenges.
4. Accountability and Governance: Mechanisms for accountability and governance to ensure responsible use of AI and address potential harms.
5. Algorithmic Bias: AI systems can inherit biases from the data they are trained on, potentially leading to discrimination.
6. AI Literacy and Public Awareness: AI literacy and public awareness are crucial for empowering individuals and organizations to understand, navigate, and responsibly engage with AI technologies, ultimately fostering informed decision-making and trust in AI systems.
Where the Action is Happening
U.S. Congress: Senate AI Working Group; House AI Task Force; legislation
Executive Branch: Department of Commerce (NIST, U.S. AISI, BIS); White House (NTIA, OSTP, OMB, ONCD, NSC); other agencies (CISA, NSA)
State Level: Hundreds of AI bills across states; spotlight on Colorado and California
International Forums: OECD, G7, International Network of AISIs, UN, EU, Council of Europe
The Role of Standards and Frameworks
The state of cybersecurity standards is relatively mature, with established frameworks and guidelines such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework and ISO/IEC 27001 that help organizations manage risks and implement best practices for protecting information systems. In contrast, AI standards are still in the early stages of development, as stakeholders work to address the unique challenges posed by AI technologies. However, voluntary frameworks such as the NIST AI RMF and subsequent profiles are serving as a basis for AI guidance. In the absence of regulation, standards often emerge as essential frameworks to guide the ethical development and deployment of technologies, providing benchmarks for best practices and promoting accountability among industry stakeholders.
NSSCET Roadmap: The National Standards Strategy for Critical and Emerging Technology (USG NSSCET) roadmap is intended to reinforce the U.S. Government's commitment to standards development led by the private sector and enhanced by partnerships with public institutions, and it calls for robust engagement in the standardization of critical and emerging technologies (CETs) to protect U.S. national and economic security.
NIST AI RMF: In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The Role of Voluntary Commitments
Voluntary commitments from companies play a critical role in promoting responsible innovation and addressing ethical concerns in the absence of formal regulations. These commitments include pledges to develop safe, transparent, and accountable AI systems, mitigate bias and discrimination, prioritize cybersecurity, and safeguard privacy. By voluntarily adopting best practices and collaborating with governments, civil society, and international organizations, companies can help shape industry standards, build public trust in AI, and preemptively address potential risks, all while maintaining flexibility in innovation and fostering a proactive approach to AI governance.
Example: Frontier AI Safety Commitments, AI Seoul Summit 2024
The UK and Republic of Korea governments announced that 16 organizations have pledged to develop and deploy frontier AI models responsibly, adhering to voluntary commitments aimed at identifying, assessing, and mitigating severe risks posed by advanced AI systems. The commitments focus on conducting risk assessments, implementing safety thresholds, promoting transparency, and collaborating with governments and external actors to ensure AI is used safely and effectively. The signatories also commit to sharing information, enhancing cybersecurity, and contributing to addressing global challenges through frontier AI.
Additional Examples of AI Voluntary Commitments
U.S. Voluntary AI Commitments from 16 AI Companies
These commitments encompass several key areas:
1. Safety: Companies commit to internal and external red-teaming of AI models to evaluate misuse and societal risks, including threats to national security, cybersecurity, and biosecurity. This includes advancing research in AI safety and ensuring transparency around red-teaming procedures to build public trust and confidence.
2. Security: Safeguarding AI model weights is a priority, with companies investing in cybersecurity and insider threat protections. They are also encouraging third-party discovery of vulnerabilities by establishing bounty systems or including AI systems in existing bug bounty programs.
3. Trust: To promote transparency, companies will develop mechanisms, such as watermarking or provenance systems, to help users identify AI-generated content. They will also publicly report model capabilities and limitations to ensure users understand the societal risks associated with AI, such as bias and discrimination.
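Watermarking and provenance mechanisms can take many forms, from statistical watermarks embedded by the model provider to C2PA-style content credentials. As a deliberately simplified, standard-library-only sketch of the provenance idea, the example below attaches and verifies an HMAC-signed provenance record for a piece of generated content; the key handling, record format, and "generator" field are assumptions for illustration, not an actual industry scheme.

```python
# Illustrative sketch: tagging generated content with an HMAC-based
# provenance record and verifying it later. Real provenance systems
# (e.g., C2PA content credentials) are far more elaborate; the key,
# record format, and "generator" field here are illustrative assumptions.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-provenance-key"  # in practice, a managed signing key

def tag_content(content: bytes, generator: str) -> dict:
    """Create a provenance record binding the content hash to its origin."""
    digest = hashlib.sha256(content).hexdigest()
    record = {"sha256": digest, "generator": generator}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_tag(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the signature is valid."""
    expected = {"sha256": hashlib.sha256(content).hexdigest(),
                "generator": record.get("generator")}
    payload = json.dumps(expected, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, record.get("signature", ""))

artifact = b"example AI-generated image bytes"
provenance = tag_content(artifact, generator="demo-image-model")
print(verify_tag(artifact, provenance))          # True: untampered content
print(verify_tag(b"edited bytes", provenance))   # False: content has changed
```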
Image-Based Sexual Abuse (IBSA) Principles
The White House issued a Call to Action to Combat Image-Based Sexual Abuse for tech and civil society, in response to which civil society organizations and tech industry leaders collaborated in a multistakeholder working group focused on combatting IBSA. The working group convened a series of meetings to provide opportunities for experts and survivor advocates to share information about the definitions, scope, and impact of various forms of IBSA, including the nonconsensual distribution of intimate images (NCII) and AI-generated intimate deepfakes. Tech industry leaders also shared information about their existing and ongoing efforts to prevent and address IBSA, creating a path for civil society and industry to collaborate on the development of actionable best practices. Eight voluntary principles were derived from these discussions; they will inform the development of industry best practices and be refined as technology advances and industry standards evolve.
Examples of Ongoing and Emerging U.S. AI/Cyber Policy Initiatives
While cybersecurity undertones are implied in the ongoing AI policy conversations, there are still relatively few concrete initiatives addressing this intersection. However, momentum has been increasing in recent months to prioritize it. Existing initiatives underscore the critical importance of public-private collaboration, as they aim to harness the expertise and resources of both sectors to develop innovative AI and cybersecurity solutions that effectively address emerging threats and ensure the responsible use of technology.
AI Cyber Challenges: The AIxCC challenge encourages competitors across the U.S. to identify and fix software vulnerabilities using AI. Led by the Defense Advanced Research Projects Agency (DARPA), this competition includes collaboration with several top AI companies, who are lending their expertise and making their cutting-edge technology available for the challenge. It intends to drive the creation of new technologies to rapidly improve the security of computer code.
NIST Cybersecurity Framework 2.0 AI Profile: NIST's National Cybersecurity Center of Excellence is evaluating how to use existing frameworks, such as the Cybersecurity Framework (CSF), to assist organizations as they face new or expanded risks from AI, and it is launching a new community profile effort.
National Cyber Strategy: The U.S. National Cyber Strategy, released in 2023, emphasizes the role of AI in strengthening national cybersecurity efforts. It outlines objectives for integrating AI into cybersecurity practices and highlights the importance of collaboration among public and private sectors to enhance cybersecurity resilience.
Executive Order on Safe, Secure and Trustworthy AI: The executive order highlights cybersecurity as a critical focus area within AI governance. The EO calls attention to strengthening AI system security and resilience, emphasizing that AI systems must be robust and secure. It calls for standardized evaluations, testing, and performance monitoring of AI to mitigate risks, particularly in areas such as cybersecurity, biotechnology, critical infrastructure, and national security. The order underscores the importance of ensuring that AI systems are resilient against misuse.
CISA/NSA Initiatives: Cybersecurity Collaboration Center (NSA); Joint Cyber Defense Collaborative (CISA).
Development of good AI policy requires a new model of public-private collaboration
Effective AI policy requires close collaboration among government, the private sector, civil society, and academia. As policy has lagged innovation, responsible actors in the private sector have led on developing accountability measures, mitigating AI-associated risks, and pioneering state-of-the-art compliance measures. More can be done, and working together across silos is essential to address ongoing societal concerns, ensure continued innovation, and cultivate the expertise and resources necessary for responsible adoption of AI.
Policy Levers that Encourage Collaboration and Enhance U.S. Leadership
Invest in fundamental AI research and development: The United States cannot continue to be a leader in responsible AI without providing the necessary resources to support responsible innovation. Congress has the opportunity to increase funding for important initiatives, including NIST, the Department of Energy's Office of Science, and the National Science Foundation. This includes ensuring that NIST has adequate funds to continue to advance its work on AI within the AI Safety Institute, and full funding of the programs set out in the National AI Research Resource (NAIRR).
Continue International Alignment Efforts: Engagement across U.S. government agencies and international bodies can harmonize approaches to managing AI risks. This can help ensure that AI safety standards are coherent and interoperable, reducing the burden on companies operating in multiple jurisdictions and already complying with existing marginal-risk standards.
Need for Comprehensive Federal Law: As we have seen in the context of consumer privacy, where there is no comprehensive federal law, a patchwork of divergent state requirements has created challenges for industry, increased compliance costs, and increased uncertainty among consumers. AI is used in countless applications across the country, and a patchwork of legal and policy frameworks will undermine public trust, suppress innovation, and hurt U.S. leadership on AI governance.
More information in SIIA's Blueprint for Government Oversight and Regulation of AI.
Takeaways
Collaboration as a Catalyst: Effective partnerships between public and private sectors are essential for driving innovation in AI applications for cybersecurity, enabling rapid responses to evolving threats. Realizing AI's potential to promote equality rather than perpetuate inequality requires work, and that work will stem from active collaboration.
Importance of Human Oversight: While AI enhances threat detection and response, ongoing human oversight remains critical to managing complex risks, ensuring ethical use, and making informed decisions in cybersecurity practices. Bolstering the cyber workforce and improving AI literacy are crucial for equipping professionals with the skills and knowledge to effectively interpret AI-driven insights, respond to emerging threats, and navigate the ethical implications of AI technologies.
Role of Policy + R&D: Balanced policy solutions and robust funding are vital for fostering collaborative efforts that advance responsible AI governance, support research and development, and ensure that both public and private entities can effectively leverage AI technologies (and strengthen their cybersecurity posture while doing so).

Q&A
Thank you!
Email: