Issue Brief
November 2024

AI Safety and Automation Bias
The Downside of Human-in-the-Loop

Authors
Lauren Kahn
Emelia S. Probasco
Ronnie Kinoshita

Executive Summary

Automation bias is the tendency for an individual to over-rely on an automated system. It can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information.

Automation bias can endanger the successful use of artificial intelligence by eroding the user's ability to meaningfully control an AI system. As AI systems have proliferated, so too have incidents where these systems have failed or erred in various ways, and human users have failed to correct or recognize these behaviors.

This study provides a three-tiered framework to understand automation bias by examining the role of users, technical design, and organizations in influencing automation bias. It presents case studies on each of these factors, then offers lessons learned and corresponding recommendations.

User Bias: Tesla Case Study

Factors influencing bias:
- Users' personal knowledge, experience, and familiarity with a technology.
- Users' degree of trust and confidence in themselves and the system.

Lessons learned from case study:
- Disparities between user perceptions and system capabilities contribute to bias and may lead to harm.

Recommendation:
- Create and maintain qualification standards for user understanding. User misunderstanding of a system's capabilities or limitations is a significant contributor to incidents of harm. Since user understanding is critical to safe operation, system developers and vendors must invest in clear communications about their systems.

Technical Design Bias: Airbus and Boeing Design Philosophies Case Study

Factors influencing bias:
- The system's overall design, user interface, and how it provides user feedback.

Lessons learned from case study:
- Even with highly trained users such as pilots, systems' interfaces contribute to automation bias.
- Different design philosophies have different risks. No single approach is necessarily perfect, and all require clear, consistent communication and application.

Recommendation:
- Value and enforce consistent design and design philosophies that account for human factors, especially for systems likely to be upgraded. When necessary, justify and make clear any departures from a design philosophy to legacy users. Where possible, develop common design criteria, standards, and expectations, and consistently communicate them (either through organizational policy or industry standard) to reduce the risk of confusion and automation bias.

Organizational Policies and Procedure Bias: Army Patriot Missile System vs. Navy AEGIS Combat System Case Study

Factors influencing bias:
- Organizational training, processes, and policies.

Lessons learned from case study:
- Organizations can employ the same tools and technologies in very different ways based on protocols, operations, doctrine, training, and certification. Choices in each of these areas of governance can embed automation biases.
- Organizational efforts to mitigate automation bias can be successful, but mishaps are still possible, especially when human users are under stress.

Recommendation:
- Where autonomous systems are used by organizations, design and regularly review organizational policies appropriate for technical capabilities and organizational priorities. Update policies and processes as technologies change to best account for new capabilities and mitigate novel risks. If there is a mismatch between the goals of the organization and the policies governing how capabilities are used, automation bias and poor outcomes are more likely.

Across these three case studies, it is clear that “human-in-the-loop” cannot prevent all accidents or errors. Properly calibrating technical and human fail-safes for AI, however, poses the best chance for mitigating the risks of using AI systems.
Table of Contents

Executive Summary
Introduction
What Is Automation Bias?
A Framework for Understanding and Mitigating Automation Bias
Case Studies
  Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias
    Tesla's Road to Autonomy
    Behind the Wheel: Tesla's Autopilot and the Human Element
  Case Study 2: How Technical Design Factors Can Induce Automation Bias
    The Human-Machine Interface: Airbus and Boeing Design Philosophies
    Boeing Incidents
    Airbus Incidents
  Case Study 3: How Organizations Can Institutionalize Automation Bias
    Divergent Organizational Approaches to Automation: Army vs. Navy
    Patriot: A Bias Towards the System
    AEGIS: A Bias Towards the Human
Conclusion
Authors
Acknowledgments
Endnotes

Introduction

In contemporary discussions about artificial intelligence, a critical but often overlooked aspect is automation bias: the tendency of human users to overly rely on AI systems. Left unaddressed, automation bias can harm, and has harmed, both AI and autonomous system users and innocent bystanders, in examples that range from false legal accusations to death. Automation bias, therefore, presents a significant challenge in the real-world application of AI, particularly in high-stakes contexts such as national security and military operations.

Successful deployment of AI systems relies on a complex interdependence between AI systems and the humans responsible for operating them. Addressing automation bias is necessary to ensure successful, ethical, and safe AI deployment, especially when the consequences of overreliance or misuse are most severe. As societies incorporate AI into systems, decision-makers need to be prepared to mitigate the risks associated with automation bias.

Automation bias can manifest and be intercepted at the user, technical design, and organizational levels. We provide three case studies that explain how factors at each of these levels can make automation bias more or less likely, derive lessons learned, and highlight possible mitigation strategies to alleviate these complex issues.
What Is Automation Bias?

Automation bias is the tendency for a human user to overly rely on an automated system, reflecting a cognitive bias that emerges from the interaction between a human and an AI system. When affected by automation bias, users tend to decrease their vigilance in monitoring both the automated system and the task it is performing.1 Instead, they place excessive trust in the system's decision-making capabilities and inappropriately delegate more responsibility to the system than it is designed to handle. In severe instances, users might favor the system's recommendations even when presented with contradictory evidence.

Automation bias most often presents in two ways: as an error of omission, when a human fails to take action because the automation did not alert them (as discussed in the first case study on vehicles); or as an error of commission, when a human follows incorrect directions from the automation (as discussed in the case study on the Patriot Missile System).2 In this analysis, we also discuss an instance where a bias against the automation causes harm (i.e., the third case study on the AEGIS weapons system).

Automation bias does not always result in catastrophic events, but it increases the likelihood of such outcomes. Mitigating automation bias can help to improve human oversight, operation, and management of AI systems and thus mitigate some risks associated with AI.

The challenge of automation bias has only grown with the introduction of progressively more sophisticated AI-enabled systems and tools across different application areas including policing, immigration, social welfare benefits, consumer products, and militaries (see Box 1). Hundreds of incidents have occurred where AI, algorithms, and autonomous systems were deployed without adequate training for users, clear communication about their capabilities and limitations, or policies to guide their use.3

While automation bias is a challenging problem, it is a tractable issue that society can tackle throughout the AI development and deployment process. The avenues through which automation bias can manifest (namely at the user, technical, and organizational levels) also represent points of intervention to mitigate automation bias.

Box 1. Automation Bias and the UK Post Office Scandal

In a notable case of automation bias, a faulty accounting system employed by the UK Post Office led to the wrongful prosecution of 736 UK sub-postmasters for embezzlement. Although it did not involve an AI system, automation bias and the myth of “infallible systems” played a significant role: users willingly accepted system errors despite substantial evidence to the contrary, favoring the unlikely case that hundreds of postmasters were involved in theft and fraud.4 As one author of an ongoing study into the case highlighted, “This is not a scandal about technological failing; it is a scandal about the gross failure of management.”5
A Framework for Understanding and Mitigating Automation Bias

Technology must be fit for purposes, and users must understand those purposes to be able to appropriately control systems. Furthermore, knowing when to trust AI and when and how to closely monitor AI system outputs is critical to its successful deployment.6 A variety of factors calibrate trust and reliance in the minds of operators, and they generally fall into one of three categories (though each category can be shaped by the context within which the interaction may occur, such as situations of extreme stress or, conversely, fatigue):7

- factors intrinsic to the human user, such as biases, experience, and confidence in using the system;
- factors inherent to the AI system, such as its failure modes (the specific ways in which it might malfunction or underperform) and how it presents and communicates information; and,
- factors shaped by organizational or regulatory rules and norms, mandatory procedures, oversight requirements, and deployment policies.

Organizations implementing AI must avoid myopically focusing only on the technical “machine” side to ensure the successful deployment of AI. Management of the human aspect of these systems deserves equal consideration, and management strategies should be adjusted according to context. Recognizing these complexities and potential pitfalls, this paper presents case studies for three controllable factors affecting automation bias (user, technical, organizational) that correspond to the aforementioned factors that shape the dynamics of human-machine interaction (see Table 1).

Table 1. Factors Affecting Automation Bias

- User. Description: Users' personal knowledge, experience, and familiarity with a technology; users' degree of trust and confidence in themselves and the system. Case study: Tesla and driving automation.
- Technical Design. Description: The system's overall design, the structure of its user interface, and how it provides user feedback. Case study: Airbus and Boeing design philosophies.
- Organization. Description: Organizational processes shaping AI use and reliance. Case study: U.S. Army's management and operation of the Patriot Missile System vs. U.S. Navy's management and operation of the AEGIS Combat System.

An additional layer of task-specific factors, such as time constraints, task difficulty, workload, and stress, can exacerbate or alternatively reduce automation bias.8 These factors should be duly considered in the design of the system, as well as training and organizational policies, but are beyond the scope of this paper.

Case Studies

Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias
Individuals bring their personal experiences, and biases, to their interactions with AI systems.9 Research shows that greater familiarity and direct experience with self-driving cars and autonomous vehicle technologies make individuals more likely to support autonomous vehicle development and consider them safe to use. Conversely, behavioral science research demonstrates that a lack of technological knowledge can lead to fear and rejection, while having only a little familiarity with a particular technology can result in overconfidence in its capabilities.10

The case of increasingly “driverless” cars illustrates how the individual characteristics and experiences of users can shape their interactions and automation bias. Furthermore, as the case study on Tesla below illuminates, even system improvements designed to mitigate the risks of automation bias may have limited effectiveness in the face of a person's bias.

Tesla's Road to Autonomy

Cars have become increasingly automated over time. Manufacturers and engineers have introduced cruise control and a flurry of other advanced driver assistance systems (ADAS) aimed at improving driving safety and reducing the likelihood of human error, alongside other features such as lane drift systems and blind spot sensors. The U.S. National Highway Traffic Safety Administration suggests that full automation has the potential to “offer transformative safety opportunities at their maturity,” but caveats that these are a future technology.* As the agency makes clear on its website in bolded capital letters, cars that perform “all aspects of the driving task while you, as the driver, are available to take over driving if requested. ARE NOT AVAILABLE ON TODAY'S VEHICLES FOR CONSUMER PURCHASE IN THE UNITED STATES.”11 Even if these cars were available, it is important to consider the possibility that while autonomy might eliminate certain kinds of accidents or human errors (like distracted driving), it has the potential to create new ones (like over-trusting autopilot).12

* The Society of Automotive Engineers (SAE) (in collaboration with the International Organization for Standardization, or ISO) has established six levels of driving automation, from 0 to 5. Level 0, or no automation, represents cars without systems such as adaptive cruise control. On the other end of the spectrum, Levels 4 and 5 suggest cars that may not even require a steering wheel to be installed. Levels 1 and 2 include those systems with increasingly competent driver support features like those mentioned above. In all of these systems, however, the human is driving, “even if your feet are off the pedals and you are not steering.” It is at Level 3, where automation begins to take over, that the line between “self-driving” and “driverless” becomes fuzzier, with the vehicle relying less on the driver unless the vehicle requests their engagement. Levels 4 and 5 never require human intervention. See “SAE Levels of Driving Automation Refined for Clarity and International Audience,” SAE International Blog, May 3, 2021, https://www.sae.org/blog/sae-j3016-update.

Studies suggest that ADAS adoption by drivers is often opportunistic, and simply a byproduct of upgrading their vehicles. Drivers learn about the vehicle's capabilities in an ad-hoc manner, sometimes just receiving an over-the-air software update that comes with written notes. There are no exams or certifications required for these updates. Studies have also shown that where use of an ADAS system is solely experiential, such as when a driver adopts an autonomous vehicle without prior training, human misuse or misunderstanding of ADAS systems can happen after only a few encounters behind the wheel.13 Furthermore, at least one study found that drivers who are exposed to more capable automated systems first tended to establish a baseline of trust when interacting with other (potentially less capable) automated systems.14 This trust and confidence in ADAS vehicles can manifest as distracted driving, to the point of drivers ignoring warnings, taking longer to react to emergencies, or taking risks they would not take in the absence of automation.15

Behind the Wheel: Tesla's Autopilot and the Human Element
In the weeks leading up to the first fatal U.S. accident involving Tesla's Autopilot in 2016, the company's then-president, Jon McNeill, personally tested the system in a Model X. In an email following his test, McNeill praised the system's seemingly flawless performance, admitting, “I got so comfortable under Autopilot that I ended up blowing by exits because I was immersed in emails or calls (I know, I know, not a recommended use).”16

Despite marketing that suggests the Tesla Full Self-Driving Capability (FSD) might achieve full autonomy without human intervention, these features currently reside firmly within the suite of ADAS capabilities.17 Investigations into that first fatal accident found that the driver had been watching a movie and had ignored multiple alerts to maintain hands on the wheel when the Autopilot failed to distinguish a white trailer from a bright sky, leading to a collision that killed the driver.18

Since then, there have been a range of incidents involving Tesla's Autopilot suite of software, which includes what is called a “Full Self-Driving Capability.” These incidents led the National Highway Traffic Safety Administration (NHTSA) to examine nearly one thousand crashes and launch over 40 investigations into accidents in which Autopilot features were reported to have been in use.19 In its initial investigations, NHTSA found “at least 13 crashes involving one or more fatalities and many more involving serious injuries in which foreseeable driver misuse of the system played an apparent role.”20 Also, among NHTSA's conclusions was that “Autopilot's design was not sufficient to maintain drivers' engagement.”21

In response to NHTSA's investigation and increasing scrutiny, in December 2023 Tesla issued a safety recall of two million of its vehicles equipped with the Autosteer functionality.22 In its recall announcement, Tesla acknowledged that: “In certain circumstances when Autosteer is engaged, the prominence and scope of the feature's controls may not be sufficient to prevent driver misuse of the SAE Level 2 advanced driver-assistance feature.”23 As a part of this recall, Tesla sought to address the driver engagement problem with an over-the-air software update that added more controls and alerts to “encourage the driver to adhere to their continuous driving responsibility whenever Autosteer is engaged.” That encouragement manifested as: “increasing the prominence of visual alerts on the user interface, simplifying engagement and disengagement of Autosteer, additional checks upon engaging Autosteer and eventual suspension from Autosteer use if the driver repeatedly fails to demonstrate continuous and sustained driving responsibility while the feature is engaged.”24

Training or certification was not included with the software update; however, a text summary of the software update was provided for users to optionally review, and videos of users indicate that the instructions were easy to ignore. Users also had the option to ignore safety features in the update altogether. The efficacy of these specific changes (either individually or in total) is not yet clear. In April 2024, NHTSA launched a new investigation into Tesla's Autosteer and the software update it performed in December 2023, but, as explained earlier, experiential encounters alone can improperly calibrate the trust new drivers place in their autonomous vehicles.25

Case Study 1: Key Takeaways from User Level Case Study
- Wider gaps between perceived and actual technology capabilities can lead to, or otherwise exacerbate, automation bias.
- Automation bias will be impacted by the user's level of prior knowledge and experience, which should be of particular concern in safety-critical situations.

In the U.S., drivers are often considered the responsible party in car accidents, particularly when it comes to the role of the driver and the role of the system.26
As David Zipper, Senior Fellow at the MIT Mobility Initiative, explained:

“In the United States, the responsibility for road safety largely falls on the individual sitting behind the wheel, or riding a bike, or crossing the street. American transportation departments, law enforcement agencies, and news outlets frequently maintain that most crashes – indeed, 94 percent of them, according to the most widely circulated statistic – are solely due to human error. Blaming the bad decisions of road users implies that nobody else could have prevented them.”27

However, even the most experienced and knowledgeable human users are not free from the risk of overreliance in the face of poor interface and system design, and there is a peculiar dynamic at play with autonomous vehicles: When incidents occur, blame often falls on the software.28 While the software may not be blameless, the combination of the system and inappropriate human use must also be considered in identifying the causes of harm. Therefore, ways of intervening or monitoring to prevent inappropriate use by drivers should be sought out alongside ways of improving the system's technical features and design.

Case Study 2: How Technical Design Factors Can Induce Automation Bias
A review of crashes in the aviation industry demonstrates that even in cases where users are highly trained, actively monitored, possess a thorough understanding of the technology's capabilities and limitations, and can be assured not to misuse or abuse the technology, a poorly designed interface can make automation bias more likely.

Fields dedicated to optimizing these links between the user and the system, such as human factors engineering and UI/UX design, are devoted to integrating and applying knowledge about human capabilities, limitations, and psychology into the design and development of technological systems.29 Physical details, from the size and location of a button to the shape of a lever or selection menu to the color of a flashing light or image, seem small or insignificant. Yet these features can play a pivotal role in shaping human interactions with technology and ultimately determining a system's utility.

The importance of considering human interaction in the design and operation of these systems cannot be overstated: neglecting the human element in design can lead to inefficiencies at best, and unsafe and dangerous conditions at worst. Poorly designed interfaces, characterized by features as simple as drop-down menus with a lack of clear distinctions, were, for example, at the core of the accidental issuance of a widespread emergency alert in Hawaii that warned of an imminent, inbound ballistic missile attack.30

Design choices, intentionally or not, shape and establish specific behavioral pathways for how humans operate and rely on the systems themselves. In other words, these design choices can directly embed and/or exacerbate certain cognitive biases, including automation bias. These design choices are especially consequential when it comes to hazard alerts, such as visual, haptic, and auditory alarms. The commercial aviation industry illustrates how automation bias can be directly influenced by system designs:

The Human-Machine Interface: Airbus and Boeing Design Philosophies
Automation has been central to the evolution of the airplane since its inception: it took less than ten years from the first powered flight to the earliest iterations of autopilot.31 In the years since, aircraft flight management systems, including those that are AI-enabled, have become successively more capable. Today, a great deal of the routine work of flying a plane is handled by automated systems. This has not rendered pilots obsolete, however.32 On the contrary, pilots must now incorporate the aircraft system's interpretation of and reaction to external conditions before determining the most appropriate response, rather than directly engaging with their surroundings. While, overall, flying has become safer due to automation, automation bias represents an ever-present risk factor.33 As early as 2002, a joint FAA-industry study warned that the significant challenge for the industry would be to manufacture aircraft and design procedures that are less error-prone and more robust to errors involving incorrect human response after failure.34

While there are international standards as well as a general consensus among aircraft manufacturers that flight crews are ultimately responsible for safe aircraft operation, the two leading commercial aircraft providers in the United States, Airbus and Boeing, are known for their opposite design philosophies.35 The differences between them illustrate different approaches to the automation bias challenge.

In Airbus aircraft, the automated system is designed to insulate and protect pilots and flight crews from human error. The pilot's control is bounded by “hard” limits, designed to allow for manipulation of the flight controls but prohibitive of any changes in altitude or speed, for example, that would lead to structural damage or loss of control of the aircraft (in other words, actions to exceed the manufacturer's defined flight envelope).

In contrast, in Boeing aircraft, the pilot is the absolute and final authority and can use natural actions with the systems to essentially “insist” upon a course of action. These “soft” limits exist to warn and alert the pilot but can be overridden and disregarded, even if it means the aircraft will exceed the manufacturer's flight envelope. These design differences may help explain why some airlines only operate single-type fleets; pilots typically stick to one type of aircraft, and cross-training pilots is possible but costly and, therefore, uncommon.36

Table 2 shows an FAA summary of the different design philosophies:

Table 2: Airbus and Boeing Design Philosophies

Airbus:
- Automation must not reduce overall aircraft reliability; it should enhance aircraft and systems safety, efficiency, and economy.
- Automation must not lead the aircraft out of the safe flight envelope, and it should maintain the aircraft within the normal flight envelope.
- Automation should allow the user to use the safe flight envelope to its full extent, should this be necessary due to extraordinary circumstances.
- Within the normal flight envelope, the automation must not work against operator inputs, except when absolutely necessary for safety.

Boeing:
- The pilot is the final authority for the operation of the airplane.
- Both crew members are ultimately responsible for the safe conduct of the flight.
- Flight crew tasks, in order of priority, are safety, passenger comfort, and efficiency.
- Design for crew operations is based on pilots' past training and operational experience.
- Design systems are error tolerant.
- The hierarchy of design alternatives is simplicity, redundancy, and automation.
- Apply automation as a tool to aid, not replace, the pilot.
- Address fundamental human strengths, limitations, and individual differences, for both normal and nonnormal operations.
- Use new technologies and functional capabilities only when: 1) they result in clear and distinct operational or efficiency advantages, and 2) there is no adverse effect to the human-machine interface.

Source: Kathy Abbott, “Human Factors Engineering and Flight Deck Design,” in The Avionics Handbook, edited by Cary Spitzer, CRC Press LLC, 2001.

Despite the divergence in their design philosophies, both aircraft types maintain high levels of popularity and safety, with “virtually every large passenger plane that is flown in the Western world” being built by either Airbus or Boeing, proving the effectiveness of their respective approaches when consistently applied across design, training, and operations.37 Neither is immune, however, to accidents or failures, especially when these philosophies are violated, or changes are not adequately communicated to users.
Boeing Incidents

On October 29, 2018, Lion Air Flight 610 crashed. Less than six months later, on March 10, 2019, Ethiopian Airlines Flight 302 crashed. Both incidents, plus a third incident involving another Boeing 737 Max 8 aircraft that narrowly avoided a crash, resulted in a combined 346 fatalities. While the exact nature of these accidents varied, all three were ultimately attributed to complications arising from Boeing's introduction of new software: the Maneuvering Characteristics Augmentation System, or MCAS.

The MCAS system was engineered to assist in maintaining the 737 Max's stability during flight and prevent conditions that could lead to a stall. While the system was designed to assist the pilot and could be overridden, the update was not well communicated to the pilots and thus may have violated one of Boeing's principles to “design for crew operations based on pilots' past training and operational experience.”38 While Boeing's failure to communicate the change adequately is reminiscent of issues Tesla has faced communicating updates to drivers, the deviation from past design principles may have further served to undermine the pilots' control.39 Indeed, a review by the National Transportation Safety Board determined that “in all three flights, the pilot responses differed and did not match the assumptions of pilot responses to unintended MCAS operation on which Boeing based its hazard classifications.”40
Airbus Incidents

Perhaps an even more powerful case study concerning the consequences of technology design choices is the case of Air France Flight 447, which crashed in the Atlantic on June 1, 2009. Nearly three years later, the French Civil Aviation Safety Investigation Authority released its final report detailing how technical issues caused by ice on parts of the plane led to inconsistent speed measures and the shutting off of autopilot. This shutoff caused the crew to make choices that stalled the plane (an uncommon occurrence thanks to onboard automated systems) and eventually led to the crash.41

Post-accident reporting and subsequent analysis raised the question that, even if one conceded the design flaw that led to the initial autopilot shutoff, “How could the pilots have a computer yelling ‘stall’ at them and not realize they were in a stall?”42 Ultimately, it was a confluence of human error and poor system design. The system design issue was with the flight management system, which presented a flurry of alerts and warnings to the pilot that “made it overwhelmingly difficult to recognize what was happening.”43

In addition to the alerts, it was clear that automation itself played some role in the crash. In particular, there was a concern that approaches like Airbus's, which emphasized protecting the pilot, actually went too far and were eroding pilot capabilities and skills by making them too dependent on the automated systems.44 Ironically, as automation has made air travel much safer, it has also reduced the instances where a human pilot must take control of the plane in more complicated situations. This may in turn degrade the pilot's ability to properly control the plane when it is most needed.

Both Boeing's and Airbus's past incidents underscore the complexity and risks associated with human-machine interaction. The interface design, physical layout, and functionality of controls directly influence user behavior and decision-making processes. In essence, design can induce user biases, including automation bias. Both design approaches, whether prioritizing human control or protection, can be successful when communicated effectively, consistently, and purposefully. Human factors design choices should not be an afterthought. The rationale behind technical design choices should be aligned with organizational goals, priorities, and preferences. In these cases, users can better anticipate system behavior, respond promptly to changing circumstances, and more rapidly identify and explain any deviations from the norm, hopefully before accidents occur. That said, no system is 100% error-proof.

Case Study 2: Key Takeaways from Technical Design Level Case Study
- Even with highly trained users, system design flaws can induce more harm. Neglecting human factors in system design can undermine users' ability to operate technology effectively and safely.
- Maintaining a “human-in-the-loop” is insufficient to prevent accidents or errors. There must be clear communication between the human user and the system, as well as sufficient training for the user, such that the “handover” from the system to the human does not become a weak point.
- Different design philosophies have different risks; they are not necessarily inherently better or worse. Any approach requires clear, consistent communication and application.
Case Study 3: How Organizations Can Institutionalize Automation Bias

While the Airbus incident with Air France 447 is a case study in human factors design choices, the after-action report also explained that “the behavior observed at the time of an event is often consistent with, or an extension of, a specific culture and work organization.”45 Organizational factors influencing automation bias include formal guidance documents, institutional processes, procurement guidelines, audits or inspections, incentive programs, and stated priorities, as well as informal norms or training expectations. These factors should be appreciated as both a source of risk and a hedge against errors by humans or technologies.46

Organizational policies and processes for risk reduction are widely practiced in areas such as occupational safety and cybersecurity. The healthcare field has extensively studied the factors that make for “high reliability organizations,” a term that was first studied in the context of aircraft carrier operations.47 These organizational controls take as a premise that if “we cannot change the human condition, we can change the conditions under which humans work.”48
Divergent Organizational Approaches to Automation: Army vs. Navy

The U.S. military provides an insightful case study of how an organization can shape automation bias. The military is able to exercise significant control over its users through organizational policies and nearly a century's worth of experience deploying highly automated defensive systems to service members. Within the military, the Army and Navy deploy a very similar automated missile defense system with two very different approaches.

The Navy's AEGIS weapons system and the Army's Patriot system are tiered autonomy systems that scan for incoming air threats (missiles or aircraft), track them with highly capable radars, and guide missiles (for AEGIS, the “Standard Missile” or SM; for Patriot, the Patriot Advanced Capability or PAC) to strike an incoming threat.49 The systems are capable of supervised autonomous operations up to and including launching defensive missiles without human input, in comparable ways (see Table 3). They have been widely viewed as successful defensive systems since the late 1980s, though there have been notable disasters associated with both.50

Table 3: Comparison of AEGIS and Patriot Weapons Systems Autonomous Functions

AEGIS51:
- Manual Identification: The user must evaluate a detected radar track and assign an identity (e.g., friend, unknown, hostile) based on the track's location, speed, the Identification Friend or Foe System (IFF), and electronic emissions.
- IFF, Identification, Drop-Track Doctrine: These three separate doctrines can be individually or collectively activated to perform track identification tasks. IFF doctrine automatically performs an IFF query within a certain geographic area. Identification doctrine automatically identifies a detected track and assigns an identity (e.g., friend, unknown, or hostile) based on location, speed, IFF, and course. Drop-Track will automatically remove tracks from a user's display if they meet predefined criteria for being incorrect tracks (e.g., weather-related clutter).
- Auto SM Doctrine: The system automatically identifies threatening targets and notifies users to manually engage.
- Auto-Special Doctrine: The system will automatically engage and fire against threats that meet set parameters without human user action required. A human user can halt the engagement.

Patriot52:
- Manual Identification: The user must evaluate a detected radar track and assign an identity (e.g., friend, unknown, hostile) based on the track's location, speed, IFF, and electronic emissions.
- Automatic Identification Mode: The system will automatically identify a detected track and assign an identity (e.g., friend, unknown, hostile) based on the track's location, speed, IFF, and electronic emissions.
- Semiautomatic Engagement Mode: The system automatically identifies and prioritizes threatening targets for users to manually engage.
- Automatic Engagement Mode: The system will automatically engage and fire against threats that meet set parameters without human user action required. A human user can halt the engagement.

While the two systems function very similarly, the U.S. Army and Navy employ different approaches in how they are governed and provide a useful study of how organizations can shape user interactions.
Patriot: A Bias Towards the System

Both the Army and Navy employ detailed, specific instructions and processes to govern deployed weapons. Among these are rules of engagement (ROE), weapon control status orders, self-defense engagement criteria, and airspace control orders, which are among the controls “developed specifically for the theater, and put into operation quickly to reduce the possibility of fratricide.”53 Despite these controls and consistent success in Operation Desert Storm, in 2003 the Patriot system was involved in three separate friendly fire incidents during Operation Iraqi Freedom: one that mistook a Patriot battery for an enemy surface-to-air missile system, and two that misclassified coalition aircraft. The latter incidents resulted in three fatalities.54

In 2005, the Defense Science Board conducted a review of the overall performance of the Patriot system in Operation Iraqi Freedom and found that these incidents followed the “Swiss cheese” model of safety incidents, a result of a series of failures, “some human and some machine,” that all contributed to the unfortunate outcomes. Among their conclusions as to the source of the fratricides, they included fault with the Patriot system operating philosophy, protocols, displays, and software, which they found inappropriately tailored for the mission.55 On this point, the report elaborated that the Army preferred to use the system in the “automatic” mode so it could operate faster.56

Official Army guidance from 2002 does instruct users that the “default” mode for Patriot is to fight in the “automatic engagement mode” as opposed to manual or automatic identification mode (see Table 3). In the case of theater ballistic missiles (TBMs), for example, the instruction states: “When the system has classified a target as a TBM, engagement decisions and the time in which the user has to make those decisions are very limited.”57 In addition to this documented guidance, Air Defense Artillery training was criticized as a factor contributing to automation bias and cognitive off-loading by users because it emphasized “rote drills versus the exercise of high-level judgment.”58

Between doctrine (guidance to operate in an automatic mode), training (which was “rote”), and the success of Patriot 10 years earlier in Operation Desert Storm, Patriot operators became biased toward “reacting quickly, engaging early, and trusting the system without question.”59 The bias was such that in some of the incidents, Patriot was operating only in semi-automatic engagement mode and a human user confirmed an engagement on an incorrectly identified track. As one researcher later put it, “Patriot operators, while nominally in control, exhibited automation bias: an unwarranted and uncritical trust in automation. In essence, control responsibility is ceded to the machine.”60 It was put more bluntly by a later researcher: “A semi-automatic system in the hands of an inadequately trained staff is de facto a fully automated system.”61
AEGIS: A Bias Towards the Human

Balancing the dynamic roles between human and machine is complicated. Moreover, as the AEGIS system demonstrates, weighting decision-making towards the human will not eliminate all risks from autonomous systems.

The AEGIS weapons systems are central pillars of air defense for the U.S. Navy. Despite its capabilities and centrality to naval defense, Navy training and doctrine show a preference towards decisions by users rather than determinations by autonomous systems. These biases are visible in the staffing, doctrine, and training for AEGIS. An AEGIS air engagement, for example, will involve several qualified sailors (officers and enlisted) in the task of identifying a radar track, a task they can perform manually even when the system is in an autonomous mode. Furthermore, Navy training documentation makes clear that elements of AEGIS are prone to failure, saying, for example, in a 1991 training manual: “It is quite possible that the IFF equipment may be functioning improperly. The only reasonable recourse in the event of no IFF return is to get as many air units as possible on the contact. If time is short, and we cannot receive the correct IFF response, we must assume that the contact is an enemy.”62 Unclassified documents also make clear that the onus of responsibility is placed on the AEGIS watch-standers; for example, the same training document concludes an overview of the AEGIS system with the following paragraph: “Your training in combat systems is a never-ending process which you must approach with an aggressive and unremitting attitude until actions become almost second nature. Your duties are many and complex; to be effective requires your total commitment.”63

Despite the sophistication of the AEGIS system and the emphasis on human control, in 1988, during the Iran-Iraq war, the USS Vincennes, one of the first ships to employ AEGIS, inadvertently shot down civilian aircraft Iran Air Flight 655, having mistaken it for an Iranian fighter aircraft. The incident occurred within the context of extreme stress: The ship was concurrently engaging Iranian ships, and intelligence and warnings suggested an assault that particular weekend. Furthermore, the USS Stark had been struck by an Iraqi jet a year earlier.64 The shootdown resulted in the deaths of nearly 300 people and further tension between the United States and Iran that has continued to this day.65

Analysis of the USS Vincennes incident found that AEGIS worked correctly. It identified the aircraft in question as a civilian aircraft ascending from launch. However, the Vincennes crew did not seem to recognize the information, instead reporting that the aircraft was descending and was a military aircraft, thus justifying defensive weapons.66 The USS Vincennes incident shows that even when humans are taught and trained to be skeptical of a system, users can fail to correctly interpret the system's output or appropriately trust the technology, particularly under situations of extreme stress.67

The AEGIS and Patriot weapons systems show how organizational policies play a significant role in shaping automation bias. In the case of AEGIS, the Navy organized itself to preference human decisions. In the case of Patriot, the Army made decisions that preference automated system decisions. The 2003 Patriot fratricides and the 1988 USS Vincennes incident further highlight that regardless of approach, there are risks of mishap. The Army has successfully employed Patriot for decades and it is a coveted defensive weapon, despite tragic past errors. The Navy has also successfully employed AEGIS under different rules and assumptions, but it has also experienced at least one lethal failure when sailors were under extreme stress. Therefore, organizational decisions that shape automation bias are not fail-safe against risk, and they must be carefully considered in light of technology capabilities, user understanding, and context of deployment.

Case Study 3: Key Takeaways from Organizational Level Case Study
- Organizations can employ the same tools and technologies in very different ways, based on protocols, operations, doctrine, training, and certification. Choices in each of these areas of governance can embed automation biases.
- Organizational efforts to mitigate automation bias can be successful, but mishaps are still possible, especially when human users are under stress.
Conclusion

Unaddressed automation bias has already culminated in catastrophic accidents. From these case studies of past mishaps, we identify three important factors affecting a user's automation bias: those intrinsic to the human user, such as personal biases, experience, and confidence in using the system; those inherent to the AI system, such as how it can be operated or how it presents information; and those created by organizational factors such as standard processes and procedures. Addressing these factors affecting risk in the application of AI, particularly in safety-critical contexts, requires focused attention during the design and deployment of AI systems. With the lessons learned from the three case studies, we recommend as a starting point the following mitigations:

- Create and maintain qualification standards for user understanding. In each of our case studies, we learned that misunderstandings by users often contributed to the incident, either generally or due to a specific recent system change or upgrade. Since user understanding is critical to safe operation, system developers and vendors must invest in clear communications about their systems, and organizations and governments may need to create qualification or re-qualification regimes appropriate to the technology and its use.

- Value and enforce consistent design and design philosophies, especially for systems likely to be upgraded. When necessary, justify departures from a design philosophy and make those choices well known to legacy users. Where possible, develop common design criteria, standards, and expectations, and consistently communicate them (either through organizational policy or industry standards) to reduce the risk of confusion and automation bias.

- Where autonomous systems are used by organizations, design and regularly review organizational policies to be consistent with technical capabilities and organizational priorities. Update policies and processes as technologies change to best account for new capabilities and mitigate novel risks. Look for opportunities to implement principles of high-reliability organizations around the management of frontline AI deployment.

The risk of accidents or misuse of AI-enabled systems will evolve alongside technology, the design of human-machine interactions, and user understanding. The successful, safe, and ethical deployment of AI relies not only on its capacity to work seamlessly with human users but also on the competence and accountability of the humans overseeing, monitoring, managing, using, and ultimately relying on these systems. If humans “in-the-loop” are to be effective, they must learn when and how to cognitively offload tasks to AI systems.
Authors

Lauren Kahn completed her contributions to this research while she was a senior research analyst at CSET, and she is currently on assignment to the Office of the Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities under an Intergovernmental Personnel Act agreement with CSET. Emelia S. Probasco is a senior fellow at CSET. Ronnie Kinoshita is the deputy director of data science and research at CSET. The views expressed herein are the authors' and do not necessarily reflect those of the U.S. government.

Acknowledgments

The authors would like to thank Cat Aiken, John Bansemer, Quinn Baumgartner, Nathan Bos, Mitchell Brown, Daniella Castillo, James Dunham, Shelton Fitch, Bob Lennox, Matt Mahoney, Igor Mikolic-Torreira, Chris Murphy, Mina Narayanan, Michael Partida, Ioana Puscas, Helen Toner, and Vikram Venkatram for feedback and assistance.

© 2024 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/.

Document Identifier: doi: 10.51593/20230057
163、93/20230057 Center for Security and Emerging Technology|27 Endnotes 1 Kate Goddard,Abdul Roudsari,Jeremy C Wyatt,“Automation bias:a systematic review of frequency,effect mediators,and mitigators,”Journal of the American Medical Informatics Association,Volume19,Issue 1(2012):12127,https:/doi.org/10.1
164、136/amiajnl-2011-000089.2 Mary Cummings,“Automation Bias in Intelligence Time Critical Decision Support Systems,”AIAA 1st Intelligent Systems Technical Conference(AIAA),Chicago,IL(2012),https:/doi.org/10.2514/6.2004-6313.3 N.A.,Incidents,AI Incident Database,accessed April 2,2024,https:/incidentdata
165、base.ai/apps/incidents/.4 Grace Augustine,Jan Lodge,Mislav Radic,“Mr.Bates vs The Post Office depicts one of the UKs worst miscarriages of justice:heres why so many victims didnt speak out,”The Conversation,January 4,2024,https:/ N.A.,“Victims of UK Post Office IT scandal faced four main barriers to
166、 speaking out new research,”University of Bath,January 8,2024,https:/www.bath.ac.uk/announcements/victims-of-uk-post-office-it-scandal-faced-four-main-barriers-to-speaking-out-new-research/.6 S.W.A.Dekker,D.D.Woods,“MABA-MABA or Abracadabra?Progress on Human-Automation Co-ordination,”Cognition,Techn
167、ology&Work,Volume 4(2002):240244,https:/doi.org/10.1007/s101110200022.7 Sara E.McBride,Wendy A.Rogers,Arthur D.Fisk,“Understanding human management of automation errors,”Theoretical Issues in Ergonomics Science,Volume 15,Issue 6(2013):545577,https:/doi.org/10.1080/1463922X.2013.817625,and Kate Godda
168、rd,Abdul Roudsari,Jeremy C.Wyatt,“Automation bias:a systematic review of frequency,effect mediators,and mitigators,”Journal of the American Medical Informatics Association,Volume 19,Issue 1(2012):121127,https:/doi.org/10.1136/amiajnl-2011-000089.8 Kate Goddard,Abdul Roudsari,Jeremy C.Wyatt,“Automati
169、on bias:Empirical results assessing influencing factors,”International Journal of Medical Informatics,Volume 83,Issue 5(2014):368375,https:/doi.org/10.1016/j.ijmedinf.2014.01.001.9 David M.Sanbonmatsu,David L.Strayer,Zhenghui Yu,Francesco Biondi,Joel M.Cooper,“Cognitive underpinnings of beliefs and
170、confidence in beliefs about fully automated vehicles,”Transportation Research Part F:Traffic Psychology and Behaviour,Volume 55(2018):114122,https:/psycnet.apa.org/doi/10.1016/j.trf.2018.02.029;Michael C.Horowitz,Lauren Kahn,Julia Macdonald,Jacquelyn Schneider,“Adopting AI:how familiarity breeds bot
171、h trust and contempt,”AI&Society,Volume 39(2023):17211735,https:/doi.org/10.1007/s00146-023-01666-5;Robert E.Center for Security and Emerging Technology|28 Burnkrant,Alain Cousineau,“Informational and Normative Social Influence in Buyer Behavior,”Journal of Consumer Research,Volume 2,Issue 3(1975):2
172、06215,https:/psycnet.apa.org/doi/10.1086/208633.10 Michael C.Horowitz,Lauren Kahn,“Bending the Automation Bias Curve:A Study of Human and AI-Based Decision Making in National Security Contexts,”International Studies Quarterly,Volume 68,Issue 2(2024),https:/doi.org/10.1093/isq/sqae020;Justin Kruger,D
173、avid Dunning,“Unskilled and unaware of it:How difficulties in recognizing ones own incompetence lead to inflated self-assessments,”Journal of Personality and Social Psychology,Volume 77,no.6(1999):11211134,https:/psycnet.apa.org/doi/10.1037/0022-3514.77.6.1121;Carmen Sanchez,David Dunning,“Overconfi
174、dence among beginners:Is a little learning a dangerous thing?”Journal of Personality and Social Psychology,Volume 114,no.1(2018):1028,https:/doi.org/10.1037/pspa0000102.11 National Highway Traffic Safety Administration,“Automated Vehicles for Safety,”U.S.Department of Transportation,accessed April 2
175、,2024,https:/www.nhtsa.gov/vehicle-safety/automated-vehicles-safety#:text=In%20some%20circumstances%2C%20automated%20technologies,%2C%20injuries%2C%20and%20economic%20tolls.12 N.A.,“Despite warnings,many people treat partially automated vehicles as self-driving,”Insurance Institute for Highway Safet
176、y(IIHS)/Highway Loss Data Institute(HLDI),October 11,2022,https:/www.iihs.org/news/detail/despite-warnings-many-people-treat-partially-automated-vehicles-as-self-driving.13 Moritz Krber,Eva Baseler,Klaus Bengler,“Introduction matters:Manipulating trust in automation and reliance in automated driving
177、,”Applied Ergonomics,Volume 66(2018):1831.https:/doi.org/10.1016/j.apergo.2017.07.006.14 Chris Schwarz,John Gaspar,Timothy Brown,“The effect of reliability on drivers trust and behavior in conditional automation,”Cognition,Technology&Work,Volume 21(2019):4154.https:/ Apoorva P.Hungund,Ganesh Pai,Anu
178、j K.Pradhan,“Systematic Review of Research on Driver Distraction in the Context of Advanced Driver Assistance Systems,”Transportation Research Record,Volume 2675,Issue 9(2021):756765,https:/doi.org/10.1177/03611981211004129;Moritz Krber,Eva Baseler,Klaus Bengler,“Introduction matters:Manipulating tr
179、ust in automation and reliance in automated driving,”Applied Ergonomics,Volume 66(2018):1831.https:/doi.org/10.1016/j.apergo.2017.07.006.16 Danny Yadron,Dan Tynan,“Tesla driver dies in first fatal crash while using autopilot mode,”The Guardian,June 30,2016,https:/ Levine and Hyunjoo Jin,“Next Autopi
180、lot trial to test Teslas blame-the-driver defense,”Reuters,March 11,2024,https:/ Russ Mitchell,“DMV probing whether Tesla violates state regulations with self-driving claims,”Los Angeles Times,May 17,2021,https:/ for Security and Emerging Technology|29 california-fsd-autopilot-safety;“Teslas Autopil
181、ot misleading because humans still in control:Pete Buttigieg,”New York Post,May 11,2023,https:/ Mihalascu,“Tesla FSD Might Reach Level 4 Or Level 5 Autonomy This Year:Musk,”Inside EVs,July 7,2023,https:/ N.A.,“Incident 52:Tesla on Autopilot Killed Driver in Crash in Florida While Watching Movie,”AI
182、Incident Database,accessed April 2,2024,https:/incidentdatabase.ai/cite/52/.19 N.A.,“NHTSA ACTION NUMBER:PE21020 Autopilot&First Responder Scenes,”August 13,2021;N.A.,“NHTSA ACTION NUMBER:EA22002 Autopilot System Driver Controls,”June 8,2022.20 N.A.“NHTSA ACTION NUMBER:RQ24009 OPEN INVESTIGATION,Rec
20 N.A., "NHTSA ACTION NUMBER: RQ24009 OPEN INVESTIGATION, Recall 23V838 Remedy Effectiveness," April 25, 2024.
21 N.A., "Additional Information Regarding EA22002," National Highway Traffic Safety Administration, April 25, 2024, https://static.nhtsa.gov/odi/inv/2022/INCR-EA22002-14496.pdf.
22 David Shepardson, "US probes Tesla recall of 2 million vehicles over Autopilot," Reuters, April 26, 2024, https:/.
23 N.A., "Update Vehicle Firmware to Prevent Driver Misuse of Autosteer," Tesla, N.D., https:/.
24 N.A., "Update Vehicle Firmware to Prevent Driver Misuse of Autosteer," Tesla, N.D., https:/.
25 N.A., "Federal Regulators Investigating Tesla's Autopilot Recall Fix," Consumer Reports, April 26, 2024, https://www.consumerreports.org/cars/car-safety/tesla-autopilot-recall-fix-does-not-address-safety-problems-a5133751100/; N.A., "NHTSA ACTION NUMBER: RQ24009 OPEN INVESTIGATION, Recall 23V838 Remedy Effectiveness," April 25, 2024; Moritz Körber, Eva Baseler, Klaus Bengler, "Introduction matters: Manipulating trust in automation and reliance in automated driving," Applied Ergonomics, Volume 66 (2018): 18-31, https://doi.org/10.1016/j.apergo.2017.07.006.
26 Qiyuan Zhang, Christopher D. Wallbridge, Dylan M. Jones, Phillip L. Morgan, "Public perception of autonomous vehicle capability determines judgment of blame and trust in road traffic accidents," Transportation Research Part A: Policy and Practice, Volume 179 (2024), https://doi.org/10.1016/j.tra.2023.103887.
27 David Zipper, "The Deadly Myth That Human Error Causes Most Car Crashes," The Atlantic, November 26, 2021, https:/.
28 Kathleen L. Mosier, Linda J. Skitka, Susan Heers, Mark Burdick, "Automation Bias: Decision Making and Performance in High-Tech Cockpits," The International Journal of Aviation Psychology, Volume 8, Issue 1 (1998): 47-63, https://doi.org/10.1207/s15327108ijap0801_3.
29 John Dowell, John Long, "Towards a conception for an engineering discipline of human factors," Ergonomics, Volume 32, no. 11 (1989): 1513-1535, https://doi.org/10.1080/00140138908966921.
30 Amy B. Wang, "Hawaii missile alert: How one employee pushed the wrong button and caused a wave of panic," The Washington Post, January 14, 2018, https:/; Hern, "Hawaii missile false alarm due to badly designed user interface, reports say," The Guardian, January 15, 2018, https:/; Flaherty, "What the Erroneous Hawaiian Missile Alert Can Teach Us About Error Prevention," Nielsen Norman Group, January 16, 2018, https:/.
31 Tara Leggett, "The Evolution of Autopilot," Key.Aero, August 21, 2020, https://www.key.aero/article/evolution-autopilot.
32 Lawrence J. Prinzel III, Team-Centered Perspective for Adapted Automation Design (Hampton, VA: NASA Langley Research Center, 2003).
33 Charles E. Billings, Aviation Automation: The Search For A Human-Centered Approach (Mahwah, NJ: Lawrence Erlbaum Associates, Publishers, 1997); "Statistical Summary of Commercial Jet Airplane Accidents: Worldwide Operations 1959-2022," Boeing, September 2023, https://www.faa.gov/sites/faa.gov/files/2023-10/statsum_summary_2022.pdf.
34 N.A., Commercial Airplane Certification Process Study: An Evaluation of Selected Aircraft Certification, Operations, and Maintenance Processes (Washington, DC: U.S. Department of Transportation Federal Aviation Administration, 2002).
35 Title 14 Code of Federal Regulations, Part 25, https://www.ecfr.gov/current/title-14/chapter-I/subchapter-C/part-25?toc=1; "Regulations," European Union Aviation Safety Agency, accessed April 4, 2024, https://www.easa.europa.eu/en/regulations; Alexander Z. Ibsen, "The politics of airplane production: The emergence of two technological frames in the competition between Boeing and Airbus," Technology in Society, Volume 31, Issue 4 (2009): 342-349, https://doi.org/10.1016/j.techsoc.2009.10.006.
36 Joe Kunzler, "Alaska Airlines To Become All-Boeing Carrier By October," Simple Flying, May 7, 2023, https:/; Herstam, "How Do Pilots Retain Their Type Ratings?," Simple Flying, June 4, 2023, https:/; "becomes one of the first to enable pilots to fly both A350 and A380 aircraft," Etihad Airways, February 14, 2024, https:/.
37 Sylvia Pfeifer, Philip Georgiadis, Steff Chávez, "How Boeing's troubles are upsetting the balance of power in aviation," Financial Times, January 28, 2024, https:/.
38 Kathy Abbott, "Human Factors Engineering and Flight Deck Design," in The Avionics Handbook, edited by Cary Spitzer, CRC Press LLC, 2001.
39 N.A., Safety Recommendation Report: Assumptions Used in the Safety Assessment Process and the Effects of Multiple Alerts and Indications on Pilot Performance (Washington, DC: National Transportation Safety Board, 2019); Sinéad Baker, "Boeing shunned automation for decades. When the aviation giant finally embraced it, an automated system in the 737 Max kicked off the biggest crisis in its history," Business Insider, April 4, 2020, https:/; Before the Subcommittee on Aviation of the Committee on Transportation and Infrastructure, House of Representatives, 116th Congress, "Status of the Boeing 737 Max: Stakeholder Perspectives," June 19, 2019, https://www.congress.gov/event/116th-congress/house-event/LC64168/text.
40 N.A., Safety Recommendation Report: Assumptions Used in the Safety Assessment Process and the Effects of Multiple Alerts and Indications on Pilot Performance (Washington, DC: National Transportation Safety Board, 2019).
41 N.A., "Final Report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro-Paris," (Le Bourget Cedex, France: Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile, 2012).
42 Roman Mars, "Air France Flight 447 and the Safety Paradox of Automated Cockpits," Slate, June 25, 2015, https:/; Polek, "Court Acquits Airbus, Air France in AF447 Manslaughter Trial," Aviation International News, April 17, 2023, https:/; Langewiesche, "The Human Factor," Vanity Fair, September 17, 2014, https:/.
43 Paulus A.J.M. de Wit, Roberto Moraes Cruz, "Learning from AF447: Human-machine interaction," Safety Science, Volume 112 (2019): 48-56, https://doi.org/10.1016/j.ssci.2018.10.009.
44 Nick Oliver, Thomas Calvard, Kristina Potočnik, "The Tragic Crash of Flight AF447 Shows the Unlikely but Catastrophic Consequences of Automation," Harvard Business Review, September 15, 2017, https://hbr.org/2017/09/the-tragic-crash-of-flight-af447-shows-the-unlikely-but-catastrophic-consequences-of-automation.
45 N.A., "Final Report on the accident on 1st June 2009 to the Airbus A330-203 registered F-GZCP operated by Air France flight AF 447 Rio de Janeiro-Paris," (Le Bourget Cedex, France: Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile, 2012).
46 Paola Amaldi, Anthony Smoker, "An Organizational Study into the Concept of Automation in Safety Critical Socio-technical System," IFIP Advances in Information and Communication Technology (Berlin: Springer, 2013): 183-197, https://doi.org/10.1007/978-3-642-41145-8_16.
47 See, for example, Kathleen M. Sutcliffe, "High Reliability Organizations (HROs)," Best Practice & Research Clinical Anaesthesiology, Safety in Anaesthesia, Volume 25, no. 2 (June 1, 2011): 133-44; and K.H. Roberts, "New challenges in organizational research: high reliability organizations," Industrial Crisis Quarterly, 1989.
48 James Reason, "Human error: models and management," The BMJ (Clinical research ed.), Volume 320 (2000): 768-770, https://doi.org/10.1136/bmj.320.7237.768.
49 N.A., "AEGIS Weapon System," U.S. Navy, 20 September 2021, https://www.navy.mil/Resources/Fact-Files/Display-FactFiles/Article/2166739/aegis-weapon-system/; Center for Strategic and International Studies (CSIS) Missile Defense Project, "Patriot," August 23, 2023, https://missilethreat.csis.org/system/patriot/.
50 Missile Defense Project, "Aegis Ballistic Missile Defense," Center for Strategic and International Studies, June 14, 2018, last modified August 4, 2021, https://missilethreat.csis.org/system/aegis/; Missile Defense Project, "Patriot," Center for Strategic and International Studies, June 14, 2018, last modified August 23, 2023, https://missilethreat.csis.org/system/patriot/.
51 Sharif H. Calfee, "Autonomous Agent-Based Simulation of an AEGIS Cruiser Combat Information Center Performing Battle Group Air-Defense Commander Operations," Naval Postgraduate School, March 2003, https://faculty.nps.edu/ncrowe/oldstudents/calfee-thesis.htm; R. Stephen Howard, "Combat systems and weapons department management," Naval Education and Training Program Management Support Activity, Pensacola, FL, September 1991.
52 Headquarters, Department of the Army, "FM 44-15-1: Operations and Training Patriot," Washington, DC, February 1987, https://www.bits.de/NRANEU/others/amd-us-archive/FM44-15-1Pt1%2887%29.pdf.
53 N.A., "FM 44-85 Patriot Battalion and Battery Operations," Washington, DC: Headquarters Department of the Army, 21 February 1997, https:/.
54 N.A., Report of the Defense Science Board Task Force on Patriot System Performance: Report Summary (Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, 2005).
55 N.A., Report of the Defense Science Board Task Force on Patriot System Performance: Report Summary (Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, 2005).
56 N.A., Report of the Defense Science Board Task Force on Patriot System Performance: Report Summary (Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, 2005).
57 N.A., "FM 3-01.85, Patriot Battalion and Battery Operations," Washington, DC: Headquarters Department of the Army, 13 May 2002.
58 John K. Hawley, Anna L. Mares, Developing Effective Human Supervisory Control for Air and Missile Defense Systems (Adelphi, MD: Army Research Laboratory, 2006).
59 John K. Hawley, "Patriot Wars: Automation and the Patriot Air and Missile Defense System," Center for a New American Security, January 25, 2017, https:/as.org/publications/reports/patriot-wars; N.A., Military Aircraft Accident Summary: Aircraft Accident to Royal Air Force Tornado GR MK4A ZG710 (London, UK: Ministry of Defence Directorate of Air Staff, 2004).
60 Mary Cummings, "Automation Bias in Intelligent Time Critical Decision Support Systems," AIAA 1st Intelligent Systems Technical Conference (AIAA), Chicago, IL (2012), https://doi.org/10.2514/6.2004-6313.
61 John K. Hawley, "Patriot Wars: Automation and the Patriot Air and Missile Defense System," Center for a New American Security, January 25, 2017, https:/as.org/publications/reports/patriot-wars.
62 R. Stephen Howard, Combat systems and weapons department management (Pensacola, FL: Naval Education and Training Program Management Support Activity, September 1991).
63 R. Stephen Howard, Combat systems and weapons department management (Pensacola, FL: Naval Education and Training Program Management Support Activity, September 1991).
64 N.A., Formal Investigation into the Circumstances Surrounding the Downing of Iran Air Flight 655 on 3 July 1988 (Washington, DC: The Department of Defense, 1988).
65 Jon Gambrell, "30 years later, US downing of Iran flight haunts relations," The Associated Press, July 3, 2018, https:/.
66 Anthony Tingle, "Human-Machine Team Failed Vincennes," Proceedings, Volume 144, no. 7 (2018), https://www.usni.org/magazines/proceedings/2018/july/human-machine-team-failed-vincennes.
67 Studies of the USS Vincennes incident under the Navy's Tactical Decision Making Under Stress program point to other sources of error besides automation bias, such as the design of the AEGIS user interface and its relationship to other human cognitive biases like framing errors and confirmation bias. For more, see Jeffrey G. Morrison, Richard T. Kelly, Ronald A. Moore, Susan G. Hutchins, "Tactical decision making under stress (TADMUS) decision support system," Calhoun: The NPS Institutional Archive (1996), https:/